Auto-navigating Moscow?

Published on TAIPEITIMES.COM
Link to publication

Chaotic roads, bad weather and reckless habits make the Russian capital one of the worst places in the world to drive, and its quest to build an autonomous car uniquely challenging.

In certain sunny climes, self-driving cars are multiplying. Dressed in signature spinning sensors, the vehicles putter along roads in California, Arizona and Nevada, hoovering up data that will one day make them smart enough to run without humans.

Besides perennial sunshine, those places share other common traits: wide, well-manicured roads, functional traffic enforcement, and agreeable local governments. That’s how Chandler, Arizona — a Phoenix suburb on nobody’s radar as of a few weeks ago — became the first US town to host autonomous cars on public streets without human safety drivers. Courtesy of Waymo, they’re expected to start carrying passengers within the next few months.

If you ask many Silicon Valley companies, the future of driverless cars is just a couple of years away. But halfway across the world, the outlook is a lot more skeptical.

“We don’t have the luxury of California roads,” says Olga Uskova of Cognitive Technologies, a Russian software maker that specializes in autonomous vehicles. “The environment is ever-changing: the snow has covered traffic signs; it’s raining on your windshield, the sun is blocking you. Our people train using these kinds of data.”

BAD TRAFFIC

Uskova asserts that technology tested in sun-drenched utopias can’t possibly translate to a city like Moscow. Gnarly road planning, terrible weather and reckless habits make the Russian capital one of the worst cities in the world for drivers.

With roads that spread like a cobweb away from the Kremlin, disturbances like car wrecks, construction and government motorcades can wreak havoc for miles. Seat belts are scorned, and traffic laws widely ignored; speeding violations are enforced with US$4 fines, paid by phone. It’s no surprise that Russia’s rate of road fatalities is nearly double that of the US, with an average of 20 serious accidents a day just in Moscow.

Or, for that matter, that dashcam videos of Russian road fights and collisions make up such a popular subgenre on YouTube.

But most of the world’s roads look more like Russia than Mountain View, and according to Uskova, that gives Russian developers an edge in building the brains of autonomous cars.

That theory was tested at a recent event in Moscow, advertised as the world’s first hackathon for driverless cars. In an austere, Soviet-era dormitory, top engineering students from far-flung schools like MIT, Cambridge and Peking University sank into beanbag chairs for a three-day coding binge.

“We’re here because it’s a chance to change the world over the next 10 to 15 years,” said Mitch Mueller, a student who traveled from the University of Wisconsin to compete. They were also competing for a cash prize, bragging rights and — most importantly — the attention of participating companies, including Uber and Nvidia, eager to recruit the next generation of AI talent.

MIMICKING THE BRAIN

The event had another purpose: to advance a credo that when it comes to autonomous cars, tougher conditions produce smarter technology. Lidar — the expensive, light-pulsing sensors relied upon by current autonomous car models — is worthless in snow and thus “a fake,” says Uskova. Instead, cars should be trained to operate using high-definition cameras, low-cost radars and powerful AI that mimics the human brain.

As the 150 engineers pored over Moscow road data, it was obvious that this vision is a long way off. Most cars struggled to identify signs, for instance, which were hard to detect in snow or rain; and for non-Russian speakers, the task was practically impossible.

“The problem is that the signs are small, and in Russia they look very similar,” explained Sami Mian, a computer scientist at Arizona State University. “The main difference is numbers and arrows, and a city entry sign can look almost the same as a stop sign. The top team had 40 percent accuracy.”

That team, three local guys from Moscow, had tapped into a secret weapon: a trove of the popular dashcam footage, which had been harvested and stored at nearby Moscow State University. Derived from 100,000 dashcam videos, that data served as the building blocks of a basic neural network hammered out by the cigarette-puffing coders, who mentioned that they had slept a total of five hours over three days.
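The supervised recipe behind such a network (label sign crops taken from dashcam frames, then fit a model that maps new crops to sign classes) can be sketched in miniature. The toy below is not the team's actual code; the templates, noise level, and nearest-centroid classifier are stand-ins for a real dataset and a deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_crops(template, n, noise=0.3):
    # Simulate n noisy dashcam crops (snow, rain, blur) of one sign template.
    return template + noise * rng.standard_normal((n,) + template.shape)

# Two invented 8x8 "sign" templates standing in for real labeled crops.
stop = np.zeros((8, 8)); stop[2:6, 2:6] = 1.0    # filled square
speed = np.zeros((8, 8)); speed[:, 3:5] = 1.0    # vertical bar

train = {0: make_crops(stop, 50), 1: make_crops(speed, 50)}

# "Training" a nearest-centroid classifier: average the labeled crops.
centroids = {label: crops.mean(axis=0) for label, crops in train.items()}

def classify(crop):
    # Assign the label of the nearest centroid (Euclidean distance).
    return min(centroids, key=lambda lbl: np.linalg.norm(crop - centroids[lbl]))

print(classify(make_crops(stop, 1)[0]))   # → 0: recovers the stop label despite noise
```

The hackathon teams' 40 percent accuracy shows how much harder the real task is: dozens of near-identical classes, weather occlusion, and far messier crops than these synthetic squares.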

Russian-built autonomous systems are already in use by Kamaz, Russia’s largest truck maker, and an agricultural equipment company. Both are working with Cognitive Technologies to build autonomous machines. But adapting the technology for city use, and bringing it to the international stage, is a steep battle.

No government agency has developed regulations for autonomous cars, so road testing is confined to designated zones. The only car testing zone in Moscow is a 400-meter track fitted with pedestrian crossings, road signs, lane markings and a roundabout.

It’s a lousy facsimile of Moscow roads, or any road. But even worse is its location far outside the city center: a planned ride-along was scrapped because of bad traffic.

Nice Roads and Careful Pedestrians Are a Roadblock to Fully Autonomous Cars

Published on INVERSE.COM
Link to publication

To make the best autonomous cars, we’ll have to teach their A.I. how to navigate in the worst possible conditions. That’s why the most daring innovation in the field may wind up taking place far from the sun-soaked streets of California, and instead in less forgiving environments.
“No one will purchase a self-driving car to ride it in California only. This is a question of the next level industrial systems,” Olga Uskova, president of Russia’s Cognitive Technologies and founder of the C-Pilot autonomous driving system, tells Inverse. “For example in our system, we use such a tech called ‘virtual tunnel’. The vehicle moves not only by the road marking, but it defines the road scene the same way the human brain does, by analyzing the lateral situations — the location of trees, buildings, the horizon line etc.”
Uskova notes that 70 percent of the world’s roads are nothing like the ones found in California. But instead of working their way up from empty test tracks to more real-world situations, Uskova’s team decided to use these harsh conditions as a starting point. Driving in bad weather, they determined, was consuming an estimated 35 to 40 percent of testing time anyway.
“Climate in most parts of Russia is presented by a large number of days per year when drivers must travel in bad weather conditions — on the roads with snow, mud, lack of road marking and poor visibility,” Uskova says.
It’s this deep-end-first approach that characterizes a great deal of the autonomous car development on the international stage. In the United Kingdom, for example, there are no laws against jaywalking. Some startups have argued this is an ideal venue for teaching car-driving A.I. how to deal with pesky pedestrians. One, based at Imperial College London, has already developed a system capable of understanding over 150 behaviors to judge whether a pedestrian is about to step out into the road.
“We are very confident that we are able to predict if someone is going to cross or not,” Leslie Noteboom, co-founder of Humanising Autonomy, told the Evening Standard. “Cars need to understand the full breadth of human behavior before they’re ever going to be implemented into urban environments. The current technology is able to understand whether something is a pedestrian and not a lamp post, and where that pedestrian is moving, framing them as a box. We’re looking inside that box to see what the person is doing, where they’re looking, are they aware of the car, are they on the phone or running — does this mean they are distracted, or risky?”
London is expected to host its first autonomous taxis in 2021, courtesy of Oxford-based developer Oxbotica and taxi firm Addison Lee. Oxbotica has completed a series of limited grocery deliveries as part of its tests, while preparing for a London-to-Oxford autonomous drive in the second half of 2019. The 60-mile journey has patchy cellular service, which will make car communications difficult. The country as a whole has around 75 percent geographic 3G and 4G coverage. The team will have to work out how the car should react when it loses internet connectivity.
In the case of Cognitive Pilot, the team has had to develop new sensors capable of handling the road come what may, including a radar that creates a 3D projection of objects from 300 meters away. While Silicon Valley largely focuses on lidar solutions that struggle with harsh weather, radar is better equipped for all seasons: in bad conditions, the range of the team’s radar drops by just 50 to 100 meters, to between 200 and 250 meters. Lidar, which uses a spinning laser to bounce off objects and read their distance, can fail in snow when its lasers bounce off falling flakes instead.
Silicon Valley is not blind to these issues. Waymo tested its autonomous driving system trekking through snow in South Lake Tahoe back in March 2017. And Tesla, which considers lidar as having too many flaws, has already opted for a combination of cameras and radar for its “Hardware 2” suite designed to support autonomy at a later date. Even CEO Elon Musk, however, notes that it’s “extremely difficult” to develop an all-purpose autonomous driving solution.
Technology firms have recently had to scale back their expectations, as Waymo’s trials in Arizona struggle with complex intersections. Drive.AI has even suggested redesigning roads to support these new cars. While Musk is still confident that Tesla could achieve a point-to-point solution sometime next year, the challenges faced by international developers show it’s unclear how these systems will work elsewhere.

Top Russian Cybernetics Experts On AI, Robot Morals, Human Extinction ... And Self-Driving Cars

Published on FORBES.COM
Link to publication

Elon Musk has stated his opinion that AI could lead to the extinction of humanity, and it’s one of the reasons he’s working hard to make us a multi-planetary species. Stephen Hawking was incredibly clear as well: true AI could be the “worst thing” for humanity.

And yet, every country and major company is racing to build AI systems.

Small wonder: Russian president Vladimir Putin has said that the nation that leads in AI will be the ruler of the world. And China is investing heavily in winning the race.

I was in Moscow recently to speak at Skolkovo Robotics Forum. One of the highlights: a visit to Russia’s top cybernetics institute, the National University of Science and Technology, or MISiS.

I asked two of its leaders about AI, its dangers, and — of course — one of the tasks we might use AI for: self-driving cars.

Koetsier: There’s a lot of noise about AI today. We have machine learning and neural networks … but what is true AI?

Olga Uskova (Head of the Department of Engineering Cybernetics): From my experience I can say that by ‘artificial intelligence’ people understand the state of an object when it actually becomes a subject and begins to have independent abstract thinking.

Koetsier: You’ve been working on AI for decades, and the MISIS Cybernetics department just celebrated its 50th birthday. How far have we come in that time?

Konstantin Bakulev (Deputy Head of the Department of Engineering Cybernetics): The MISiS Department of Engineering Cybernetics was created 50 years ago by a group of very young scientists from the Institute of Theoretical Physics, under the leadership of Alexander Kronrod. At that time, the AI theme was represented by methods of heuristic programming, and literally in the department’s first year, young scientists together with students created the first heuristic algorithms for playing cards.

The computer complex at that time occupied two rooms, an area of about 80 square meters. Program code was entered on punched cards, and each response of the machine took several minutes.

In 2018, students of the department created, as coursework, a system for semantic analysis of news feeds that produces analytical reports on changes in the purchasing power of Russians during the holidays. Now gigabytes of information are processed in just a few seconds, and that work was done by two fourth-year students in only a month.

Koetsier: For achieving AI, do we need more speed/processors/memory? Or do we need different thinking/algorithms?

Olga Uskova: We need all of these.

On one hand, some leading developers, including us, are following the path of building an anthropomorphic model. When we started studying the decision-making process of the person behind the wheel [note: Uskova also leads a self-driving startup, Cognitive Pilot], we discovered that logical intelligence is not the only kind participating in this process. A significant part is occupied by emotional intelligence, whose data doesn’t go through sequential processing by standard methods. The final solutions are achieved by connecting several types of neural networks.

By the same principle, we are now building neural networks for our automotive AI. There may not be many pictures, but they must be correctly labeled and they must give new knowledge to the neural network.

So we came to the theme of programming intuition: in particular, analyzing the behavior of small objects on the road as material for predicting how the road scene will change in the next few seconds (for example, a change in the angle of the car driving next to you, or a ball that rolls out onto the road). Thus in many cases it’s necessary to have not ‘more’ data, but ‘smarter’ data. This is a bit like teaching people how to read fast.
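The ball-rolling-onto-the-road example can be illustrated with the simplest possible predictor: extrapolate the object's motion a few seconds ahead and check whether it enters the vehicle's path. Cognitive Pilot's actual predictor is a neural network; the constant-velocity model, lane bounds, and time horizon below are illustrative assumptions only:

```python
def enters_lane(x0, vx, lane_left=-1.5, lane_right=1.5, horizon=3.0, step=0.1):
    # Extrapolate a roadside object's lateral position at constant velocity
    # and report whether it crosses into the ego lane within `horizon` seconds.
    t = 0.0
    while t <= horizon:
        x = x0 + vx * t
        if lane_left <= x <= lane_right:
            return True
        t += step
    return False

# A ball 4 m to the right of the lane, rolling toward it at 2 m/s:
print(enters_lane(x0=4.0, vx=-2.0))   # True: it reaches the lane in ~1.3 s
# The same ball rolling away poses no risk within the horizon:
print(enters_lane(x0=4.0, vx=2.0))    # False
```

The point of the "smarter data" remark is that a small object's trajectory, not just its presence, is what lets the system brake before the child chasing the ball appears.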

Koetsier: The kind of AI everyone is waiting for is a kind of Star Trek intelligence that you can talk to, get answers from, and have human-like conversations with. We’re seeing the beginnings of this, maybe, with Siri, Alexa, Cortana, and the Google Assistant. How far away are we from near-human-like intelligence from these assistants?

Konstantin Bakulev: This is a multi-layered question.

Technologically we have already come to the possibility of programming emotional intelligence, and this is extremely important for communication. But when a person is brought up, he or she genetically or historically carries a number of restrictions: moral, religious, social. In different value systems, the basic moral values can differ diametrically.

If you are engaged in the development of AI, especially the formation of its emotional part, without limitations, then the result can emerge with a set of aggressive characteristics. Aggression is one of the strongest emotions in social networks, and it is dangerous to feed such data to an AI.

Koetsier: Super-human intelligence, of course, is what some people worry about from AI. Do you see that as inevitable? And, will it come quickly when real breakthroughs in near-human AI are made?

Olga Uskova: I share Stephen Hawking’s opinion, with whom I had a short talk on this topic a few years ago. And now I’m totally convinced that he really foresaw many things.

Continuing the anthropomorphic analogy: when a person grows up and learns, besides the recognition of images and the meanings of surrounding objects, self-awareness as a person arises. In the same way, an AI at the stage of recognizing meanings will at some point definitely come to self-awareness as a separate entity.

And if by that time we haven’t built any moral limitations into the training system, the consequences for humanity could be instant and terrible.

Even now, for a lot of people it’s not always obvious that they are useful to the existing ecosystem. People destroy the environment, litter, kill rare species of animals, so a logically thinking AI would quickly come to a conclusion about the uselessness of mankind.

Koetsier: Should we worry about super-intelligent AIs? Will they be dangerous?

Konstantin Bakulev: I think it’s necessary to solve two problems in parallel: we need to adjust our own behavior toward good and love, and we need to impose some moral restraints across the whole planet when programming AI, on the principles of Isaac Asimov.

The principles of AI development and management should be similar to the principles of working with weapons of mass destruction.

Koetsier: When will we get there?

Olga Uskova: Well, here I want to be extremely honest. When programming neural networks, we clearly understand the input (what happens at the entrance) and we understand the output (what the result is), but we do not always understand what happens inside.

During some tests at the testing facility, there were cases when a multi-ton vehicle suddenly made its own independent decision to improve the situation, a decision we think we hadn’t programmed. After several months of analyzing what had happened, we gained new knowledge about the behavior of deep neural networks.

So it’s not only that we develop and teach the artificial brain, but it’s also teaching us.

And the neurophysiologists who consult our team use the results of Cognitive Pilot’s work to help rehabilitate patients after serious accidents. We all already live in a mixed society, where both biological and silicon organisms are present. And while we are fighting to make silicon organisms smoother and smarter, it’s very important to make sure that we don’t totally ‘mute’ the biological ones.

Koetsier: Talking about self-driving cars … how close are we, in your opinion?

Olga Uskova: Our approach is that for industrial use on autonomous transport we should use only systems with an accuracy of recognition very close to 100%. It’s difficult to achieve this accuracy, but the latest technologies allow it.

Cognitive Low Level Data Fusion is an approach that allows you to increase the accuracy of autonomous systems up to 99.99%. It combines raw data from all sensors of the machine and processes it with a neural network.

This tech will allow you to drive the car better than a person does.
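The interview does not detail the fusion architecture, but the distinction between merging per-sensor verdicts (late fusion) and merging raw readings before any interpretation (low-level, or early, fusion) can be shown in a few lines. Shapes and values here are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
camera = rng.random((4, 4))   # normalized pixel intensities over a patch
radar = rng.random((4, 4))    # normalized radar returns over the same patch

# Late fusion: each sensor is reduced to its own verdict first, and only
# those verdicts are merged, discarding cross-sensor detail.
late = [camera.mean(), radar.mean()]

# Low-level (early) fusion: raw readings are stacked into one multi-channel
# tensor and handed to a single network, which can then learn cross-sensor
# cues (e.g. a strong radar return with no camera contrast suggests fog).
fused = np.stack([camera, radar])
print(fused.shape)   # (2, 4, 4): one input carrying both raw signals
```

The claimed accuracy gain comes from keeping the raw signals together, so the network, not a hand-written rule, decides how much to trust each sensor in each condition.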

In some sense, a new era of autonomous vehicles began in August 2017, bringing the use of fully autonomous vehicles on the world’s roads much closer. Of course, in addition to technological limitations, there are serious legal, social and moral ones that require special development and the attention of all of humanity, without division by national and state borders. This is a very important issue, much like the use of nuclear energy for peaceful purposes.

Therefore, a complete transition to self-driving cars worldwide will require a minimum of 10-12 years to develop new traffic rules, moral restrictions and legislative norms for mixed car flows. The United States has the most developed practice in this sphere, and the experience America is gaining by allowing driverless cars on public roads is undoubtedly very important for AI developers around the world.

Koetsier: Thank you both for your time!

Why the best self-driving cars may not come from the well-kept freeways of California

Published on BIGTHINK.COM
Link to publication

The weather in most parts of Russia forces drivers to face harsh conditions—snow, mud, and poor visibility. It’s in this environment that Cognitive Technologies saw an opportunity.

Terrible traffic, prayer-inducing merges, road signs that are all but impossible to read, dangerous road conditions, and drivers who hazard sudden, scream-worthy maneuvers, all add to Moscow’s commuting woes. Sadly, this is what 98% of the world’s roads are like, and why one Russian company, Cognitive Technologies Group, may come out ahead in the race to birth the self-driving car.

Olga Uskova, the group’s president and founder, is skeptical of Silicon Valley’s sunny projections of when autonomous vehicles will go mainstream. The reason, she told The Guardian, is that most places have too many variables to look out for. In Moscow, for instance, “The environment is ever-changing: the snow has covered traffic signs; it’s raining on your windshield, the sun is blocking you. Our people train using these kinds of data.” Note that the most well-known autonomous prototypes, the Financial Times recently reported, have trouble navigating through snow. Uskova insists her model doesn’t have that problem.

Cognitive Technologies was founded in 1993 by developers of Kaissa, the world’s first computer chess champion. Besides this, they’ve sold software to the likes of Intel and Yandex. In 2014, the company launched its autonomous vehicle program, Cognitive Pilot (C-Pilot), Russia’s first and largest player in the nascent autonomous vehicle market.

Their secret isn’t specialized software, like Tesla’s Autopilot, or hardware, like Mobileye’s patented microchip. Instead, Uskova and her team taught an A.I. program the intricacies of driving in Moscow by exposing it to 100,000 dashcam videos and other footage collected by Moscow State University.

Uskova and her team built a neural network from that footage, which they say allows their vehicle to better maneuver the mean streets of Moscow. By running on run-of-the-mill computer hardware, their version is less expensive than competitors’ and easier to upgrade.

Cognitive Technologies hopes to put out a level-four autonomous vehicle by the end of 2019. That’s not all: the company has partnered with Russian truck maker Kamaz to develop a self-driving tractor-trailer by 2020, and Uskova and colleagues plan to have an autonomous combine-harvester farm ready by 2024.

And their car prototype? So far, they’ve outfitted a Nissan X-Trail with a C-Pilot system. It can recognize three dozen road signs with almost 100% accuracy, as well as stop, accelerate, and heed traffic lights. Now, the company is setting up two US offices, reaching out to English-speaking media, and seeking additional funding. It also demoed C-Pilot at the latest Consumer Electronics Show (CES), held every January in Las Vegas. One snag: visa issues stemming from rising tensions between the US and Russia have made it difficult for Cognitive Technologies to gain a solid foothold in the US.

So how does their system work? Recently, I asked Uskova via email. First, high-resolution cameras, imaging radar, and a bevy of onboard sensors collect data, which is fed into one of four modules: the observer, which monitors the car’s surroundings; the geographer, which pinpoints the vehicle’s location; the navigator, which finds the quickest route; and the machinist, which handles the physical driving. All of this raw data is processed and blended together by a deep learning neural network running on an energy-efficient onboard processor.
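The four-module layout can be pictured as a pipeline in which each stage consumes the previous stage's output. The sketch below is a schematic with made-up thresholds and placeholder coordinates, not Cognitive Pilot's implementation:

```python
def observer(sensor_frame):
    # Detect nearby obstacles in the raw sensor data (50 m threshold is invented).
    return {"obstacles": [o for o in sensor_frame["objects"] if o["dist_m"] < 50]}

def geographer(scene):
    # Attach the vehicle's position to the detected scene (placeholder coordinates).
    return {**scene, "position": (55.75, 37.62)}

def navigator(scene, destination):
    # Plan the quickest route; here, trivially a straight line to the destination.
    return {**scene, "route": [scene["position"], destination]}

def machinist(plan):
    # Turn the plan into a driving command.
    return "brake" if plan["obstacles"] else "cruise"

frame = {"objects": [{"dist_m": 12}, {"dist_m": 80}]}
command = machinist(navigator(geographer(observer(frame)), (55.76, 37.64)))
print(command)   # "brake": one detected object is inside the 50 m threshold
```

In the real system the stages run on one neural network over fused sensor data rather than as hand-written rules, but the division of labor is the same.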

Similar to a biological brain, it absorbs and processes the information and then decides how to proceed. Most self-driving cars use lidar (Light Detection and Ranging), which works much like radar but uses beams of infrared light instead of radio waves. In other words, it relies on invisible lasers to sense the environment. I asked what type of system C-Pilot uses.

“Our main sensors are radar and cameras, not LIDAR,” Uskova said. “We believe that radar is the future of autonomous driving, as it is the most appropriate sensor for this technology. Radar is significantly more reliable in bad weather (snow, rain, fog). Our radar constructs a dynamic 3D projection at a distance of 150-200 meters (492-656 ft.). When the weather gets worse—the range falls to just 100 m (328 ft.).” Radar is also more cost-effective.

According to Uskova, the autonomous vehicle market is just beginning to firm up, with major players staking out niches. Cognitive Technologies believes its advantage lies in sensor technology. “The human eye has a much higher resolution in its central part. When we try to zoom in and look closer at something, we use foveal vision. The same method is used in C-Pilot’s Virtual Tunnel tech. Its algorithm tracks all movements and focuses attention on the main risk zones,” she wrote.
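The foveal idea (full resolution where the risk is, coarse resolution everywhere else) translates naturally into array slicing. This is an illustrative sketch, not the Virtual Tunnel algorithm; the window size and downsampling factor are arbitrary:

```python
import numpy as np

def foveate(frame, cx, cy, half=2):
    # Keep full resolution in a small window around the risk zone, plus a
    # 2x-downsampled view of the whole scene for the periphery.
    fovea = frame[cy - half:cy + half, cx - half:cx + half]
    periphery = frame[::2, ::2]
    return fovea, periphery

frame = np.arange(16 * 16).reshape(16, 16)   # stand-in for a camera frame
fovea, periphery = foveate(frame, cx=8, cy=8)
print(fovea.shape, periphery.shape)   # (4, 4) (8, 8)
```

The payoff is that expensive per-pixel processing is spent only on the small sharp window, while the downsampled periphery still lets the system notice motion worth refocusing on.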

Uskova also said:

We also believe that within the next 10 years, as processor capacities grow, the resolution of sensors will also increase significantly. Now the cameras for autonomous vehicles have a resolution of 2-5 megapixels, and the resolution of the human eye can be estimated at 100 megapixels. And for better detection of small objects and animals, the resolution of the onboard cameras should grow. Now, our system can recognize the average size animal at a distance of up to 30 meters (98 ft.).

I asked what makes her system different from those being developed by Uber, Waymo (Google), other Silicon Valley companies, and the big automakers, Ford in particular. To date, there are 27 companies working on autonomous vehicles. “At the moment, we are the best in the world in the field of road scene perception and detection,” she said. “We have 19 unique patents and inventions. 22 million dollars have been invested in the product and we have real industrial practice in the most severe weather conditions.”

Autonomous Car Testing in Moscow Is Hard & Necessary

Published on THECONNECTEDCAR.COM
Link to publication

What is the worst confluence of driving conditions you can possibly imagine?

It’s probably nighttime in a densely populated city. There would be a vision-obscuring blizzard making the roads slick and unreliable. Traffic would be thick and moving irregularly. Worse, the drivers around you would have seemingly little regard for your safety, or even their own. Even the road signs would be difficult to see.

Welcome to life for drivers in Moscow.

Poorly laid out roads, bad weather and not-so-cautious driving practices make for dangerous automobile treks in the Russian capital. Moscow streets are home to 20 serious car accidents per day, and the road fatality rate in Russia is double that of the United States.

Not surprisingly, that makes it a difficult landscape for autonomous vehicles to navigate. But the conditions in Russia are not all that different from other countries in the world, which is why autonomous developers believe that, with apologies to New York, “if they can make it there, they can make it anywhere.”

At a three-day hackathon in Moscow, engineering students from around the globe and corporate sponsors like Nvidia and Uber gathered to take a crack at developing autonomous systems for Moscow’s roads.

“The event had another purpose: to advance a credo that when it comes to autonomous cars, tougher conditions produce smarter technology,” writes Gaus. “Lidar — the expensive, light-pulsing sensors relied upon by current autonomous car models — is worthless in snow … Instead, cars should be trained to operate using high-definition cameras, low-cost radars and powerful AI that mimics the human brain.”

The idea that lidar is “worthless in snow,” advanced by Olga Uskova of Russian AV software developer Cognitive Technologies, may be an extreme position: nearly all driverless car manufacturers incorporate lidar sensors in some form.

Nevertheless, Lidar does not work as effectively in the snow, and developers have relied more heavily on other hardware to navigate in adverse conditions. An autonomous vehicle in Finland primarily used radar sensors to complete a journey through a wintry mix.

As for the hackathon engineers in Moscow, cobbling together an autonomous driving system over the course of three sleepless days proved difficult. The top team only managed to achieve 40% accuracy in identifying road signs. The expected culprits were to blame for the difficulties: Snow-covered road signs were difficult for systems to detect, and non-Russian speakers had an even more challenging time differentiating between similar looking road signs.

While success proved elusive at the hackathon, in some ways, that was beside the point.

That driverless car developers are moving beyond building cars for the idealized sunny climates of Arizona and California, and shifting to the more challenging task of creating vehicles that work in realistic scenarios, is an important sign of progress.

Many expect that self-driving cars will eventually operate 24/7, but they will only be able to do that if they can handle the weather and unique road conditions that present themselves at every moment of the day all over the world.

Published on INVERSE.COM
Link to publication

Cognitive Technologies has a big plan.

Russia’s first and largest autonomous car project is about to reach the United States. At the Consumer Electronics Show in Las Vegas next week, Cognitive Technologies will demonstrate a self-driving car system that’s primed for the world’s roughest roads and runs on standard computer hardware.

As the company’s president tells Inverse, it’s the culmination of a journey sparked by the success of a chess computer, fueled by a desire to reduce road deaths, and hampered by American visa issues.

The company has an impressive resumé. Founded in 1993 by the team that created the world’s first computer chess champion, Cognitive Technologies has worked on technologies including image and voice recognition, and in the past has worked with big names like Intel and Yandex. In 2014, the Cognitive Pilot program was set up to apply the company’s talents to autonomous driving.

“The whole project was built to design the autopilot for real roads, Russian roads,” Roman Tarasov, the company’s VP for global business, told Inverse in March 2017. “Most of the roads on the planet are like this. So lack of light, snow, fog, bad road marks.”

Watch the company test its autonomous driving system on the roads in November 2017.

As Waymo, Tesla and Uber battle it out in Silicon Valley, Cognitive has quietly moved from strength to strength. It’s hosted a hackathon with students from MIT, Cambridge and Peking University. It’s developed assistive driving technologies for Russian trucking giant Kamaz, with a view to developing a fully autonomous truck by 2020. The team has moved some of its operations to Amsterdam, while research continues in Moscow. CES is the next big step.

Inverse spoke with Olga Uskova, president and founder of Cognitive Technologies and developer of Cognitive Pilot, to find out more about the big moment.

How are you feeling about the big CES moment for your company? Nervous? Excited?

It’s a first run, a premiere for us. It will be Cognitive Technologies’ first time at CES. Of course we worry. We will present a number of completely new technologies that we have never demonstrated before at any other event. And of course we are naturally worried about the reaction of specialists who come to CES from all over the world.

How many people are going from the company, who is representing Cognitive?

Unfortunately, some negative visa issues that arose recently between our countries made it impossible for half of the announced team members to travel to the U.S., but key specialists will be at CES. For example, everyone will be able to chat at the booth with the CTO of the company.

We believe that CES is the quintessential show of all consumer wishes and desires for the upcoming year. New trends, latest releases, brand wars, newcomers and outsiders – all will be there. A lot becomes clear after you study the list of participants and their contribution to the world’s economy.

What message do you want to give to people?

We can and we must save millions of people who die on the roads around the world. The official statistics show more than 1.3 million deaths per year. Our technologies can already reduce this horrible figure by 44 percent, so several hundred thousand people will remain alive if these technologies are promptly implemented.

My academic supervisor died in 1993 in a terrible car accident. After a car accident in 1997, I personally underwent seven operations on my face. Almost every family in the world has something to remember on this terrible topic. It seems to me that there will be nothing more important at CES than our booth.

Do you expect a lot of skepticism from people more familiar with big names like Waymo?

No, I don’t. The new markets in the automotive sphere are still forming. Only fools wrinkle their noses at new names; in reality, everyone is looking for fresh solutions and breakthrough technologies. The markets for neural systems, vision systems and autonomous cars are still very young and exclusive, and they require serious scientific and financial investment. That is why serious experts pay close attention to companies like ours. For us it is a very big responsibility.

Have you had any interest from CES attendees already?

Of course. We have been preparing for this exhibition and have arranged in advance a number of meetings with partners we are interested in. We are watching for innovations in microprocessors and video cameras, and also in the Internet of Things and connected cars. We are very interested in joint work, because that is the only way we can offer an ideal product to the modern customer, the user of an autonomous vehicle.

Will you be demonstrating the tech at CES? How is the booth set up?

Our mission is to show real technologies that work in real time on real roads. We are bringing from Moscow technologies that are not afraid of snow, mud and impassable roads. Google is doing fine on the dry and sunny roads of California; Cognitive Technologies will present at CES the technology for the remaining 98 percent of the world’s roads. At our booth you will see a live demonstration of the technology working on snowy roads, in storms and thunderstorms, with interrupted or snow-covered markings. A special show will be organized for farmers. We will also demonstrate recognition of small details of cars: headlights, mirrors, license plates and so on.

We will also present our global trend for 2018 – the concept of Low Level Data Fusion. This is our technology that combines data from several operating systems and sensors: neural networks, data from high resolution image radars and cameras.

This technology allows the computer vision model to efficiently use all the combined data coming from the various sensors to the computing unit. Information from each sensor is synchronized and reduced to a single coordinate system; the raw data goes to the computer, where it is processed, and the materials from cameras and radars then mutually enrich each other.

This integration of data from different devices makes it possible to fill in missing information for a better understanding of the current road scene. Cameras, for example, correctly recognize objects in 80 percent of cases; additional data from radar raises the detection accuracy to 99 percent and higher.

Using all of the data together combines information about an object’s speed, type, distance, location and physical characteristics. Implementing this fusion technology alone can reduce the accident rate of autonomous vehicles by up to 25 percent.
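The low-level fusion idea described above can be sketched in code. The following is a minimal illustration only, not Cognitive Technologies’ implementation: the class names, the 2-meter association gate, and the assumed 95% radar reliability are all hypothetical. It associates each camera detection with the nearest radar return in a shared coordinate frame and combines the two sensors’ confidences under an independence assumption; note that an 80%-reliable camera fused this way with a 95%-reliable radar yields 99%.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CameraDetection:
    x: float                 # position in a shared road-plane frame, meters
    y: float
    label: str               # e.g. "car", "pedestrian", "sign"
    confidence: float        # camera-only detection confidence, 0..1


@dataclass
class RadarReturn:
    x: float                 # already transformed into the same frame
    y: float
    speed: float             # measured radial speed, m/s


@dataclass
class FusedObject:
    label: str
    x: float
    y: float
    speed: Optional[float]   # None if no radar return was associated
    confidence: float


def fuse(cameras, radars, gate=2.0, radar_reliability=0.95):
    """Associate each camera detection with the nearest radar return
    within `gate` meters and combine the evidence from both sensors."""
    fused = []
    for det in cameras:
        best, best_dist = None, gate
        for ret in radars:
            dist = ((det.x - ret.x) ** 2 + (det.y - ret.y) ** 2) ** 0.5
            if dist < best_dist:
                best, best_dist = ret, dist
        if best is not None:
            # Treat the sensors as independent detectors:
            # p_fused = 1 - (1 - p_camera) * (1 - p_radar)
            conf = 1 - (1 - det.confidence) * (1 - radar_reliability)
            fused.append(FusedObject(det.label, det.x, det.y, best.speed, conf))
        else:
            # Camera-only object: keep it, but without speed information.
            fused.append(FusedObject(det.label, det.x, det.y, None, det.confidence))
    return fused
```

A real system would synchronize timestamps and calibrate the sensor-to-vehicle transforms before association; this sketch assumes both have already been done, which is precisely the “reduced to one single coordinate system” step the interview describes.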

What do you hope to see at CES from others, are you looking for any opportunities?

At CES we are looking forward to seeing compact, high-performance information processing solutions that could be placed on vehicles. We also want to see new solutions for car cameras that will help us improve the quality of our system on the roads. Basically, we are interested in any systems that will ensure a comfortable experience for passenger and driver during autonomous driving in a traffic jam. We also hope to see something completely new, something we couldn’t even imagine has already been created. From CES you always expect a miracle!

Moscow Is Not the Ideal Place to Develop Self-Driving Cars, Russian Firms Say

Published on THEDRIVE.COM
Link to publication

Developers of self-driving cars in the United States are lucky that Silicon Valley is the home of the U.S. tech industry. California’s sunny weather and well-maintained roads make it a good testing ground. That can’t be said of Moscow.

The Russian capital is challenging even for human drivers, but local companies are testing self-driving cars there anyway, according to a recent report by The Guardian. Poor weather, including snow that obscures traffic signs, bad roads, and aggressive, unpredictable drivers make Moscow a self-driving car hell.

Cars developed in less-harsh environments can’t possibly function in Moscow, Olga Uskova of Cognitive Technologies, a Russian software company, told The Guardian. Indeed, at a recent hackathon for self-driving cars, international competitors found the data from Russian roads to be inadequate. Snow often obscured traffic signs, which non-Russians said tend to be hard to tell apart anyway. A Russian team won the competition by digging into the trove of local dash-cam footage that has also become a staple of YouTube.

Russian drivers feel the need to equip their cars with dash cams because crashes are common. Traffic laws are widely ignored in Moscow, where speeding violations only lead to a $4 fine, paid over the phone. There are nearly 20 serious crashes a day in the city.

But for now, the biggest hurdle for autonomous cars in Russia may be regulatory. Russia has no formal regulations sanctioning the testing of self-driving cars on public roads. The only designated autonomous-car testing area in Moscow is a 400-meter (1,312-foot) track sprinkled with traffic signs and pedestrian crossings to simulate a real street. Test cars will need to rack up some real-world mileage eventually.

If self-driving cars are allowed onto the streets of Moscow in large numbers, the city could become an important crucible for the technology. Like Indian cities or even New York City, the challenging environment will push the limits of self-driving cars. If these cars can handle Moscow, they can probably handle anything.

Testing Autonomous Cars In Russia Will Be Hell But It's The Reality Of The Situation

Published on JALOPNIK.COM
Link to publication

The majority of self-driving tech testing is happening on the pristine streets of sunny California, but across the Pacific, autonomous tech developers in Russia are also working to make robot cars a reality. But in the hellish driving environments found in Moscow, that’s not an easy feat, as a fascinating story from The Guardian recounts.

Silicon Valley, for one thing, offers clear weather and well-maintained roadways. In Russia, where testing is allowed in very limited circumstances right now, that won’t be the case. From The Guardian:

“We don’t have the luxury of California roads,” says Olga Uskova of Cognitive Technologies, a Russian software maker that specializes in autonomous vehicles. “The environment is ever-changing: the snow has covered traffic signs; it’s raining on your windshield, the sun is blocking you. Our people train using these kinds of data.”

Uskova asserts that technology tested in sun-drenched utopias can’t possibly translate to a city like Moscow. Gnarly road planning, terrible weather and reckless habits make the Russian capital one of the worst cities in the world for drivers.

Uskova isn’t wrong. Some automakers have lately taken to testing autonomous cars in cities that regularly face inclement weather, like Detroit, but for the most part, the development has been centered around Silicon Valley. That can’t last if autonomous cars are ever going to catch on.

So the insight from Russia is worth taking into consideration. Here’s more from The Guardian:

With roads that spread like a cobweb away from the Kremlin, disturbances like car wrecks, construction and government motorcades can wreak havoc for miles. Seat belts are scorned, and traffic laws widely ignored; speeding violations are enforced with $4 fines, paid by phone. It’s no surprise that Russia’s rate of road fatalities is nearly double that of the US, with an average of 20 serious accidents a day just in Moscow. Or, for that matter, that dashcam videos of Russian road fights and collisions make up such a popular subgenre on YouTube.

The Guardian visited a three-day “hackathon” event for driverless cars, and found that one common issue for driverless cars—signs—tripped up some of the engineers in attendance.

Most cars struggled to identify signs, for instance, which were hard to detect in snow or rain; and for non-Russian speakers, the task was practically impossible.

“The problem is that the signs are small, and in Russia they look very similar,” explained Sami Mian, a computer scientist at Arizona State University. “The main difference is numbers and arrows, and a city entry sign can look almost the same as a stop sign. The top team had 40% accuracy.”

The story notes that car testing is limited in Moscow to a quarter-mile track outfitted with pedestrian crossings, road signs and a roundabout, so there’s still a ways to go before testing expands. But it offers relevant context for the auto industry: it’s going to be a slow crawl to expand autonomous driving across the world.

Moscow is a terrifying city for drivers. So what if a car doesn't have one?

Published on THEGUARDIAN.COM
Link to publication

Chaotic roads, bad weather and reckless habits make the Russian capital one of the worst to drive, and its quest to build an autonomous car uniquely challenging.

In certain sunny climes, self-driving cars are multiplying. Dressed in signature spinning sensors, the vehicles putter along roads in California, Arizona and Nevada, hoovering up data that will one day make them smart enough to run without humans.

Besides perennial sunshine, those places share other common traits: wide, well-manicured roads, functional traffic enforcement, and agreeable local governments. That’s how Chandler, Arizona – a Phoenix suburb on nobody’s radar as of a few weeks ago – became the first US town to host autonomous cars on public streets without human safety drivers. Courtesy of Waymo, they’re expected to start carrying passengers within the next few months.

If you ask many Silicon Valley companies, the future of driverless cars is just a couple of years away. But halfway across the world, the outlook is a lot more skeptical.

“We don’t have the luxury of California roads,” says Olga Uskova of Cognitive Technologies, a Russian software maker that specializes in autonomous vehicles. “The environment is ever-changing: the snow has covered traffic signs; it’s raining on your windshield, the sun is blocking you. Our people train using these kinds of data.”

Uskova asserts that technology tested in sun-drenched utopias can’t possibly translate to a city like Moscow. Gnarly road planning, terrible weather and reckless habits make the Russian capital one of the worst cities in the world for drivers.

With roads that spread like a cobweb away from the Kremlin, disturbances like car wrecks, construction and government motorcades can wreak havoc for miles. Seat belts are scorned, and traffic laws widely ignored; speeding violations are enforced with $4 fines, paid by phone. It’s no surprise that Russia’s rate of road fatalities is nearly double that of the US, with an average of 20 serious accidents a day just in Moscow. Or, for that matter, that dashcam videos of Russian road fights and collisions make up such a popular subgenre on YouTube.

But most of the world’s roads look more like Russia than Mountain View, and according to Uskova, that gives Russian developers an edge in building the brains of autonomous cars.

That theory was tested at a recent event in Moscow, advertised as the world’s first hackathon for driverless cars. In an austere, Soviet-era dormitory bedecked with Steve Jobs and Elon Musk posters, top engineering students from far-flung schools like MIT, Cambridge and Peking University sank into beanbag chairs for a three-day coding binge.

“We’re here because it’s a chance to change the world over the next 10 to 15 years,” said Mitch Mueller, a student who traveled from the University of Wisconsin to compete. They were also competing for a cash prize, bragging rights and – most importantly – the attention of participating companies, including Uber and Nvidia, eager to recruit the next generation of AI talent.

The event had another purpose: to advance a credo that when it comes to autonomous cars, tougher conditions produce smarter technology. Lidar – the expensive, light-pulsing sensors relied upon by current autonomous car models – is worthless in snow and thus “a fake”, says Uskova. Instead, cars should be trained to operate using high-definition cameras, low-cost radars and powerful AI that mimics the human brain.

As the 150 engineers pored over Moscow road data, it was obvious that this vision is a long way off. Most cars struggled to identify signs, for instance, which were hard to detect in snow or rain; and for non-Russian speakers, the task was practically impossible.

“The problem is that the signs are small, and in Russia they look very similar,” explained Sami Mian, a computer scientist at Arizona State University. “The main difference is numbers and arrows, and a city entry sign can look almost the same as a stop sign. The top team had 40% accuracy.”

That team, three local guys from Moscow, had tapped into a secret weapon: a trove of the popular dashcam footage, which had been harvested and stored at nearby Moscow State University. Derived from 100,000 dashcam videos, that data served as the building blocks of a basic neural network hammered out by the cigarette-puffing coders, who mentioned that they had slept a total of five hours over three days.

Russian-built autonomous systems are already in use by Kamaz, Russia’s largest truck maker, and an agricultural equipment company. Both are working with Cognitive Technologies to build autonomous machines. But adapting the technology for city use, and bringing it to the international stage, is a steep battle.

No government agency has developed regulations for autonomous cars, so road testing is constrained to designated testing zones. The only car testing zone in Moscow is a 400m track embellished with pedestrian crossings, road signs, markings and a section with circular traffic. It’s a lousy facsimile of Moscow roads, or any road. But even worse is its location far outside the city center: a planned ride-along was scrapped because of bad traffic.

Man Vs. Machine: Russian Company Compares Human, AI Driving Capabilities

Published on SPUTNIKNEWS.COM
Link to publication

A Russian software developer called Cognitive Technologies has conducted a detailed study comparing AI reaction to that of human drivers. Speaking to Sputnik’s sister agency, the company said its findings demonstrate that AI capabilities to operate a motor vehicle are rapidly expanding, coming close to matching or even surpassing those of people.

The Moscow-based company, creator of its own automated driving assistance technology known as Cognitive Pilot, has completed a study to determine who, human beings or artificial intelligence, is better able to detect road signs, other vehicles and pedestrians in various traffic situations.

Speaking to Russia’s RIA Novosti, Cognitive Technologies’ autonomous software department head Yuri Minkin said that the tests’ main goal was to determine whether its AI had already outmatched human drivers’ reaction time and accuracy.

Pitted against 17 human volunteers, the company’s AI was tested in a variety of challenging conditions: dusk, nighttime driving, rain and blinding sunlight. The volunteers and the AI took turns identifying a series of objects on the road on a single monitor.

Taking place between September and November, the tests used a single camera and were conducted along 27 different urban routes at speeds between 50 and 60 km/h. For the human volunteers’ benefit, researchers limited the number of objects appearing onscreen simultaneously to three.

The first stage, which studied the quality of object detection in good weather and road conditions, showed humans and AI to be near-equals in both accuracy and speed, with detection rates exceeding 99%.

Meanwhile, the AI proved superior when it came to objects that were not completely visible, hidden behind trees, parked cars or other obstructions. The AI also proved faster in detecting road signs, picking up even obscured signs in just a fraction of a second.

“The testing showed that under more difficult conditions, the human volunteers often noticed the road signs a moment later than the AI. This time gives the [AI] control system an additional advantage for the processing and analysis of information about the situation on the road as a whole,” Minkin said.

The AI was slightly better in both speed and accuracy when detecting objects in rainy conditions (98.3% vs. 97% for humans). In twilight and blinding sunlight, human volunteers were significantly slower to recognize road objects, although the overall recognition rate remained 98% for both.

One area where the AI lagged behind its human counterparts was in spotting pedestrians in rainy and nighttime conditions, with an accuracy of 98.2% compared to 99.2% for the humans.

“A person is a rather complex object for recognition,” Minkin explained. “Pedestrians do not have a constant form, and can travel embracing, holding hands, carrying an object, etc. And while the AI can match the abilities of a [human driver] in clear conditions, in more difficult conditions, the human’s abilities are still slightly superior.”

According to Minkin, the main takeaway from the study is that artificial intelligence is rapidly approaching the capabilities of human beings in the recognition of objects on the road. Furthermore, the more complex the road conditions, the better the AI performs, relative to its human counterparts.

“It can be expected that with an increase in computing power, and in the quality of sensors and software, the advantage of the AI will become all the more obvious, similarly to how it has become in chess,” he concluded.

Cognitive Technologies president Olga Uskova says that the company plans to conduct further testing and will bring in industry experts to develop a more advanced testing methodology.

“But we must understand that this was the first attempt to compare the capabilities of artificial intelligence and people. With this first approximation, we’ve received real results, which can and should be considered, and on whose basis it will be possible to predict the direction of development of AI that can drive vehicles,” she noted.