Published on FORBES.COM
Elon Musk has stated his opinion that AI could lead to the extinction of humanity, and it’s one of the reasons he’s working hard to make us a multi-planetary species. Stephen Hawking was incredibly clear as well: true AI could be the “worst thing” for humanity.
And yet, every country and major company is racing to build AI systems.
Small wonder: Russian president Vladimir Putin has said that the nation that leads in AI will be the ruler of the world. And China is investing heavily in winning the race.
I was in Moscow recently to speak at Skolkovo Robotics Forum. One of the highlights: a visit to Russia’s top cybernetics institute, the National University of Science and Technology, or MISiS.
I asked two of its leaders about AI, its dangers, and — of course — one of the tasks we might use AI for: self-driving cars.
Koetsier: There’s a lot of noise about AI today. We have machine learning and neural networks … but what is true AI?
Olga Uskova (Head of the Department of Engineering Cybernetics): In my experience, people use "artificial intelligence" to mean the state in which an object becomes a subject: when it begins to think abstractly and independently.
Koetsier: You’ve been working on AI for decades, and the MISiS Cybernetics department just celebrated its 50th birthday. How far have we come in that time?
Konstantin Bakulev (Deputy Head of the Department of Engineering Cybernetics): Fifty years ago, when the MISiS Department of Engineering Cybernetics was created by a group of very young scientists from the Institute of Theoretical Physics under the leadership of Alexander Kronrod, AI was represented by methods of heuristic programming. Literally in the department's first year, those young scientists, together with students, created the first heuristic algorithms for playing cards.
The computer complex at that time occupied two rooms, an area of about 80 square meters. Program code was entered on punched cards, and each response of the machine took several minutes.
In 2018, students of the department created, as coursework, a system for semantic analysis of news feeds that generates analytical reports on changes in Russians' purchasing power during the holidays. Now gigabytes of information are processed in just a few seconds, and this work was done by two fourth-year students within a month.
Koetsier: For achieving AI, do we need more speed/processors/memory? Or do we need different thinking/algorithms?
Olga Uskova: We need all of these.
On one hand, some leading developers, including us, are following the path of building an anthropomorphic model. When we started studying the decision-making process of a person behind the wheel [note: Uskova also leads a self-driving startup, Cognitive Pilot], we discovered that logical intelligence is not the only intelligence participating in this process. A significant part is emotional intelligence, whose data doesn't go through sequential processing by standard methods. Final decisions are reached by connecting several types of neural networks.
By the same principle, we are now building neural networks for our automotive AI. There may not be many pictures, but they must be correctly labeled, and they must give new knowledge to the neural network.
So we came to the theme of programming intuition: in particular, analyzing the behavior of small objects on the road as material for predicting changes in the road scene over the next few seconds (for example, a change in the angle of the car driving next to you, or a ball rolling out onto the road). Thus in many cases it's necessary to have not "more" data, but "smarter" data. This is a bit like teaching people to speed-read.
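To make the idea concrete, here is a minimal, purely illustrative sketch of that kind of short-horizon prediction: tracking a small object (say, a ball) over a few frames and extrapolating whether it will enter the vehicle's path in the next few seconds. The function, its parameters, and the lane geometry are all hypothetical toy choices, not Cognitive Pilot's actual method, which relies on trained neural networks rather than linear extrapolation.

```python
def will_enter_lane(positions, lane_x_range, horizon_s, fps=10):
    """Predict whether a tracked object will enter the lane.

    positions: list of (x, y) centers of the object in recent frames,
               in meters, oldest first.
    lane_x_range: (min_x, max_x) lateral extent of the lane in meters.
    horizon_s: how many seconds ahead to extrapolate.
    """
    # Estimate lateral velocity from the last two observations.
    (x0, _), (x1, _) = positions[-2], positions[-1]
    vx = (x1 - x0) * fps  # meters per second

    # Step forward frame by frame over the prediction horizon.
    steps = int(horizon_s * fps)
    for i in range(1, steps + 1):
        x = x1 + vx * (i / fps)
        if lane_x_range[0] <= x <= lane_x_range[1]:
            return True
    return False

# A ball rolling toward a lane spanning x in [0, 3] meters:
track = [(8.0, 5.0), (7.2, 5.0), (6.4, 5.0)]
print(will_enter_lane(track, lane_x_range=(0.0, 3.0), horizon_s=3.0))  # True
```

The point of the sketch is the shape of the problem: a few noisy observations of a small object carry enough signal to anticipate the scene change before it happens, which is exactly where "smarter" data beats "more" data.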
Koetsier: The kind of AI everyone is waiting for is a kind of Star Trek intelligence that you can talk to, get answers from, and have human-like conversations with. We’re seeing the beginnings of this, maybe, with Siri, Alexa, Cortana, and the Google Assistant. How far away are we from near-human-like intelligence from these assistants?
Konstantin Bakulev: This is a multi-layered question.
Technologically, we have already come to the possibility of programming emotional intelligence, and this is extremely important for communication. But when a person is brought up, he or she inherits, genetically or historically, a number of restrictions: moral, religious, social. In different value systems, basic moral values can differ diametrically.
If you develop AI, and especially its emotional part, without such limitations, the result can acquire a set of aggressive characteristics, because aggression is one of the strongest emotions in social networks, and it is dangerous to feed such data to an AI.
Koetsier: Super-human intelligence, of course, is what some people worry about from AI. Do you see that as inevitable? And, will it come quickly when real breakthroughs in near-human AI are made?
Olga Uskova: I share Stephen Hawking's opinion; we had a short talk on this topic a few years ago, and now I'm totally convinced that he really foresaw many things.
Continuing the anthropomorphic analogy: as a person grows up and learns, beyond recognizing images and the meanings of surrounding objects, self-awareness as a person arises. In the same way, an AI at the stage of recognizing meanings will, at some point, definitely come to awareness of itself as a separate entity.
And if by that time we haven't brought any moral limitations into the training system, the consequences for humanity can be instant and terrible.
Even now, it's not obvious to a lot of people that humans are useful to the existing ecosystem. People destroy the environment, litter, and kill rare species of animals, so a logically thinking AI could quickly conclude that mankind is useless.
Koetsier: Should we worry about super-intelligent AIs? Will they be dangerous?
Konstantin Bakulev: I think it's necessary to solve two problems in parallel: we need to adjust our own behavior toward good and love, and we need to impose moral restraints, across the whole planet, on the programming of AI, along the lines of Isaac Asimov's principles.
The principles of AI development and management should be similar to the principles of working with weapons of mass destruction.
Koetsier: When will we get there?
Olga Uskova: Well, here I want to be extremely honest. When programming neural networks, we clearly understand the input and we understand the output, but we do not always understand what happens inside.
During some tests at the testing facility, there were cases when a multi-ton vehicle suddenly made its own independent decision to improve the situation, a decision which, we think, we hadn't programmed. After several months of analyzing what happened, we gained new knowledge about the behavior of deep neural networks.
So it’s not only that we develop and teach the artificial brain, but it’s also teaching us.
And neurophysiologists who consult our team use the results of Cognitive Pilot's work to rehabilitate some of their patients after serious accidents. We already live in a mixed society, where both biological and silicon organisms are present. And while we fight to make silicon organisms smoother and smarter, it's very important to make sure we don't totally "mute" the biological ones.
Koetsier: Talking about self-driving cars … how close are we, in your opinion?
Olga Uskova: Our approach is that, for industrial use, autonomous transport should rely only on systems with recognition accuracy very close to 100%. This accuracy is difficult to achieve, but the latest technologies allow it.
Cognitive Low Level Data Fusion is an approach that increases the accuracy of autonomous systems up to 99.99%. It combines raw data from all of the machine's sensors and processes it with a neural network. This technology allows the car to be driven better than a person does.
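The distinguishing idea of low-level (early) fusion is that raw sensor signals are merged *before* any recognition happens, so one network sees all modalities at once, rather than fusing per-sensor detections afterward. Here is a toy sketch of that idea, assuming camera and radar data projected into a common bird's-eye grid; the grid size and channel counts are arbitrary illustrative choices, not Cognitive Pilot's published architecture.

```python
import numpy as np

H = W = 64  # resolution of a shared bird's-eye-view grid (illustrative)

# Raw per-sensor data projected into the common grid.
camera_bev = np.random.rand(H, W, 3)  # e.g. projected camera features
radar_bev = np.random.rand(H, W, 1)   # e.g. radar return intensity

# Early (low-level) fusion: stack raw channels into one input tensor,
# so a single downstream network processes both signals jointly.
fused = np.concatenate([camera_bev, radar_bev], axis=-1)
print(fused.shape)  # (64, 64, 4)
```

In a late-fusion design, by contrast, each sensor would be run through its own recognizer and only the resulting object lists would be merged; early fusion lets the network exploit correlations in the raw data that are lost once each stream is reduced to detections.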
In a sense, a new era of autonomous vehicles began in August 2017, bringing the use of fully autonomous vehicles on the world's roads much closer. Of course, beyond the technological limitations there are serious legal, social, and moral ones that require special development and the attention of all humanity, without division by national and state borders. This is a very important issue, much like the peaceful use of nuclear energy.
Therefore, a complete worldwide transition to self-driving cars will require a minimum of 10-12 years to develop new traffic rules, moral restrictions, and legislative norms for mixed traffic flows. The United States has the most developed practice in this sphere, and the experience America gains by allowing driverless cars on public roads is undoubtedly very important for all AI developers around the world.
Koetsier: Thank you both for your time!