Humanity needs rules for dealing with artificial intelligence (AI) in the same way it learned to manage fire and nuclear technology, one of the sector's up-and-coming voices has claimed.
Bruno Maisonnier, founder and CEO of AI firm AnotherBrain, admitted there was a danger with the new technology, but said it was no different from every major discovery since the dawn of man.
Speaking at the Future Investment Initiative Forum in Riyadh, Maisonnier said: "There's risk with AI, as there is risk with every new technology. That's part of human history.
"We brought fire and people died from fire; we brought nuclear and people died from that.
"Each time we have the same reaction: first we fear, and then we start to take the feedback, learn, and put rules in place to get the positive out of this technology.
“The same goes with AI. The question is when do we have to set these rules?
"Rules must be put in place, but first we must allow the evolution to happen."
Also speaking at the forum, Pascal Weinberger, CEO and co-founder of tech firm Bardeen AI, insisted that machines will never be able to fully replace humans in many environments.
"There are a lot of things that machines are better at than humans, and vice versa, especially at common sense," he said.