Why we should build human-level artificial intelligence

There is no bigger challenge than to understand the human brain, says Starmind co-founder Pascal Kaufmann.

Dan Patterson: You talk about cracking the brain code. With the big-data AI camp, at least, I can understand their goals and why they want to accomplish AGI or something similar. Why do you want to crack the brain code? Why does it matter? Why do we care?

Pascal Kaufmann: There are three reasons why I want to crack the brain code. Reason one is that it's the most exciting endeavor and challenge of our time. It's like the moon landing of the 21st century. There's no bigger challenge than understanding the human brain. The second reason is pure fear. If the intellectual property in artificial intelligence falls into the wrong hands, owned by a government or by some large tech company, I do not want to live in such a world.

So we definitely need to move first in order to understand AI, in order to own the IP and make it open source. The third reason is that there are so many challenges around the globe. I do not think that the human brain, that human intelligence, can actually cope with all these challenges. We need better answers than our human intelligence alone. I also think there is a lot of know-how in the world these days. If you connected it somehow, you could definitely solve many, many problems we have.

The UN goals, for example: I think we could solve about 10 of these instantly if you only connected the know-how that exists in the world by means of AI. These are the three reasons why I think we need to crack the brain code.

Dan Patterson: How will we know?

Pascal Kaufmann: How do we know when we have created human-level AI? There are many tests for artificial intelligence; depending on who you ask, you get different definitions. My gold-standard test in AI would be to build a machine you cannot distinguish from myself. So if I put here the biological Pascal or the artificial Pascal, you would not have a chance to find out who's who. If we have that, then we are where we want to be.

SEE: IT leader's guide to deep learning (Tech Pro Research)

Dan Patterson: What are the other indicators? If Google, Facebook, or another public company develops AGI, or China or the United States does, we won't have the opportunity to engage in a conversation and determine whether it is AGI or not. Those are real possibilities, that a Google or Facebook or a government develops an AI. So how will we know, and what are the dangers of one of those actors acquiring this technology?

Pascal Kaufmann: Some people ask me, "Hey Pascal, are you really sure that no one in the world has already cracked the brain code and is already in possession of the intellectual property in AI?" It's very hard, of course, to rule it out. However, given the state of what we know about the brain, and given what all these scientists are doing around the globe, there is a huge gap in our understanding of how the brain works. I do not think that we are already there.

However, and you are right here: if the lab of a big-tech company cracks the brain code and creates human-level AI, what's the motivation to share that with everybody? So definitely, if the wrong people invent AI, you will never know about it. That is why it's so important that a foundation, or the people, create AI for the benefit of the people, as open source.

Dan Patterson: Let's close there. You do have a foundation, and you clearly believe that it is in humanity's best interest for us to collaborate, if not on solving the rest of the world's problems, at least on this very big existential issue. How do we build that platform? How do we build something where we can apply the best minds in artificial intelligence, the best talent, and the best products?

SEE: Internet of Things Policy (Tech Pro Research)

Pascal Kaufmann: So in 2010 we founded a company called Starmind in Switzerland. Just a few weeks ago, we did one of the largest financing rounds in AI in Europe. And the reason we did that is that we do not believe in big data. We do not believe in deep learning. What this technology does is connect employees within large corporations and turn those corporations into super-organisms, really fast problem-solving engines built by connecting all those employees.

We can fire any kind of question at a so-called corporate brain, and in about 95% of all cases, the brain tells you: stop, don't ask this question; someone else has already asked it, and here is the solution directly. And if you really do ask a new question, algorithms in the background figure out who is currently online and then notify them with know-how alerts. So it's actually a very powerful corporate brain that you're building.

This technology we can apply not only to companies but also to large human networks, for example, to all the smart people in AI. With Starmind technology, we are leveraging a global community in AI, and we focus on one topic, namely cracking the brain code. That is the so-called Mindfire Foundation. But the technology behind it is Starmind, and it was developed in Switzerland, yes.
