

Artificial Intelligence (AI), are we ready for the next phase?

Artificial intelligence is a hot topic at the moment. We should heed the warnings scholars give us about this type of technology, but don’t worry: it’s not going to destroy the world. Not yet, anyway.

Technology is always going to be open to abuse. This is why we create laws and best practices to protect humans, for example the ‘Three Laws of Robotics’.

Any rational person should know that an ‘AI’ isn’t really a threat to anybody. After all, no matter how smart technology is, whether it’s a device or software, it has no motive for destroying human life. The only threat comes from our own instinct for self-preservation and the assumption that we will lose our authority, which is really a matter of ego. We should embrace AI and what it can do for us.

We could assume that AI is only a threat if we program devices to become violent. Added to which, we still have the off button in our hands, or a kill switch, if anything goes wrong. Let’s not forget about the ‘Three Laws of Robotics’.

Bill Gates Warns About Artificial Intelligence (AI)

Bill Gates said he ‘cannot understand why people are not taking the threat more seriously’, and yet Microsoft is still working on AI technology to this day. Added to this, Mr. Gates implied that it ‘will take over many human tasks without any ethical understanding’, but that is a moot point on its own.

Most humans have morals and ethics, though there are people living among us who don’t. This is the case in every walk of life, including the animal kingdom.

When AI comes into existence, developers could include Isaac Asimov’s ‘Three Laws of Robotics’, which would form the AI’s moral and ethical principles. An AI would learn from the people and influences that surround it, just like humans do.
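As a rough illustration of what ‘including’ the Three Laws might mean, here is a minimal Python sketch (not any real robotics API) that encodes them as an ordered rule check, where earlier laws take precedence over later ones. The `Action` type and its fields are invented for this example:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical candidate action, described by the ways it
    could conflict with each of Asimov's Three Laws."""
    description: str
    harms_human: bool = False
    allows_harm_through_inaction: bool = False
    disobeys_human_order: bool = False
    endangers_self: bool = False

def permitted(action: Action) -> bool:
    """Return True only if the action violates none of the Three Laws.
    The checks run in priority order, so an earlier law always
    overrides a later one."""
    # First Law: a robot may not injure a human being or, through
    # inaction, allow a human being to come to harm.
    if action.harms_human or action.allows_harm_through_inaction:
        return False
    # Second Law: a robot must obey human orders, except where such
    # orders would conflict with the First Law.
    if action.disobeys_human_order:
        return False
    # Third Law: a robot must protect its own existence, as long as
    # that does not conflict with the First or Second Law.
    if action.endangers_self:
        return False
    return True

print(permitted(Action("fetch coffee")))                        # True
print(permitted(Action("push a bystander", harms_human=True)))  # False
```

Real ethical reasoning would of course be far harder than a boolean checklist, but the priority ordering is the essential idea: each law only applies when no higher law is violated.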

Consider this: what we have learned in a lifetime, an AI robot could learn in just a few days. What could such a unit create after gaining this knowledge? And where could this take us in just a few years, if they are this smart?

Movies and TV: Have They Caused Us to Panic?

TV programmes and movies such as The Terminator, The Matrix, Star Trek and Battlestar Galactica have made people worry that we won’t be able to turn an AI unit off. They have also made us fear that an AI system may view us as a threat and try to eliminate us; this notion alone is a very crude version of how we should be looking at AI.

AI would have no emotions, no instincts and no consciousness, so why would a will to survive, or any need for self-preservation, exist? Why would such a unit want to destroy its maker? What would be the benefit of not having humans around? Will an AI unit ever develop a conscience? That is a subject in itself, so let’s ask the question: where does consciousness come from?

Who Says We Are In Danger From Extermination?

Elon Musk said that AI may be a reality in as few as five years, and Professor Hawking told the BBC, ‘The development of full artificial intelligence could spell the end of the human race.’ Bill Gates also claims that AI is a big threat, despite the fact that his company is working on a personal assistant that resembles AI.

Elon Musk has invested ten million dollars ($10,000,000) into investigating the negative consequences of AI, and he said it would be more devastating than the invention of atomic weapons.

There is only one problem with AI: it might not be possible to create an AI robot in our lifetime. We are clearly going to get some very good imitations of AI, and there could be programs in the future that are able to learn in a way that makes them appear intelligent, but true artificial intelligence may not be possible in the same sense. Not yet, anyway.

Bill Gates said:

‘I am concerned about super intelligence. Machines will do a lot of jobs for us without being super intelligent, which will work if we manage it well. However, at some point artificial intelligence may be a concern. I agree with Elon Musk and some others about the existential threat caused by AI, and I don’t understand why people are not concerned.’

Will Any AI Ever Pass A Genuine Turing Test?

The Turing Test helps to identify a machine’s ability to exhibit intelligent behavior that is equivalent to, or that is indistinguishable from, the behavior of a human.
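To make the setup concrete, here is a minimal, hypothetical sketch of a blind Turing-test round in Python. The judge and respondent functions are invented placeholders, not any real benchmark: the point is only the structure of the test, in which the judge never knows in advance which respondent is the machine.

```python
import random

# Hypothetical respondents, both reduced to text-in/text-out functions.
# Here the machine gives the same answers as the human, so a judge has
# nothing to distinguish them by.
def human_respondent(prompt: str) -> str:
    return "I'd rather talk about the weather."

def machine_respondent(prompt: str) -> str:
    return "I'd rather talk about the weather."

def turing_round(judge, human, machine, questions):
    """One blind round: answers are presented as 'A' and 'B' in a
    random order, and the judge guesses which label is the human.
    Returns True if the judge guessed correctly."""
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:  # randomise which side the machine sits on
        labels = {"A": machine, "B": human}
    transcript = {label: [ask(q) for q in questions]
                  for label, ask in labels.items()}
    guess = judge(transcript)      # judge returns "A" or "B"
    return labels[guess] is human

# A judge who cannot tell the answers apart can only guess at random,
# so over many rounds the machine "passes" about half the time.
def naive_judge(transcript):
    return random.choice(["A", "B"])

results = [turing_round(naive_judge, human_respondent, machine_respondent,
                        ["What do you think about AI?"])
           for _ in range(1000)]
print(sum(results) / len(results))  # hovers around 0.5
```

A machine is usually said to ‘pass’ when judges do no better than chance at picking it out, which is exactly the behaviour the indistinguishable respondents above produce.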

Supercomputers might one day host a high level of intelligence within a piece of software or a device, and that software might retain information and do some things far better than the human mind. The fact remains, though, that it is very unlikely to function as a human brain does, which means we will always have the advantage of our capacity for original and unique thought.

Let me know your thoughts about artificial intelligence in the comments below.

3 Comments Published

By Rinalds Mar 8, 2016 Reply

The idea of AI is not bad, but it opens the door for many bad things to happen. Terrorists or hackers could change the AI, or develop a new AI from the one that’s there. After that, there is no telling what will happen.

The article mentions that there will be a shut-down button. What if someone adds weapons to an AI and closes off every option to shut it down? What then? Are we going to shoot them down?

It can, and most likely will, be used as a threat to our lives. I’m not a fan of that idea… however, I would like to see a fully human-like AI robot that I could speak to. That would be something amazing. However, I’m not sure if I will live long enough.

By Kelly Mar 7, 2016 Reply

Interesting post! You definitely raised some good points. Personally, I’m not worried about ‘robots taking over the world’ or trying to exterminate the human race; I am more worried that this technology will fall into the wrong hands. We have already seen weapons of mass destruction fall into the hands of dictators and extremists. AI is potentially another weapon these people can use, and that’s what scares me.

By Jen Mar 2, 2016 Reply

I think maybe the fear is that AI, as with any technology, has the potential to be programmed to destroy and hurt people…sure I’m not worried about AI taking over by themselves, but I might be concerned if somebody evil had access to lots of robots. You’re right though — I don’t think they will try to exterminate us, and movies come up with some pretty farfetched scenarios that can’t be taken seriously.
