Archives November 2018

Auto-navigating Moscow?

Link to publication

Chaotic roads, bad weather and reckless habits make the Russian capital one of the worst cities in the world to drive in, and its quest to build an autonomous car uniquely challenging.

In certain sunny climes, self-driving cars are multiplying. Dressed in signature spinning sensors, the vehicles putter along roads in California, Arizona and Nevada, hoovering up data that will one day make them smart enough to run without humans.

Besides perennial sunshine, those places share other common traits: wide, well-manicured roads, functional traffic enforcement, and agreeable local governments. That’s how Chandler, Arizona — a Phoenix suburb on nobody’s radar as of a few weeks ago — became the first US town to host autonomous cars on public streets without human safety drivers. Courtesy of Waymo, they’re expected to start carrying passengers within the next few months.

If you ask many Silicon Valley companies, the future of driverless cars is just a couple of years away. But halfway across the world, the outlook is a lot more skeptical.


“We don’t have the luxury of California roads,” says Olga Uskova of Cognitive Technologies, a Russian software maker that specializes in autonomous vehicles. “The environment is ever-changing: the snow has covered traffic signs; it’s raining on your windshield, the sun is blocking you. Our people train using these kinds of data.”


Uskova asserts that technology tested in sun-drenched utopias can’t possibly translate to a city like Moscow. Gnarly road planning, terrible weather and reckless habits make the Russian capital one of the worst cities in the world for drivers.

With roads that spread like a cobweb away from the Kremlin, disturbances like car wrecks, construction and government motorcades can wreak havoc for miles. Seat belts are scorned, and traffic laws widely ignored; speeding violations are enforced with US$4 fines, paid by phone. It’s no surprise that Russia’s rate of road fatalities is nearly double that of the US, with an average of 20 serious accidents a day just in Moscow.

Or, for that matter, that dashcam videos of Russian road fights and collisions make up such a popular subgenre on YouTube.

But most of the world’s roads look more like Russia than Mountain View, and according to Uskova, that gives Russian developers an edge in building the brains of autonomous cars.

That theory was tested at a recent event in Moscow, advertised as the world’s first hackathon for driverless cars. In an austere, Soviet-era dormitory, top engineering students from far-flung schools like MIT, Cambridge and Peking University sank into beanbag chairs for a three-day coding binge.

“We’re here because it’s a chance to change the world over the next 10 to 15 years,” said Mitch Mueller, a student who traveled from the University of Wisconsin to compete. They were also competing for a cash prize, bragging rights and — most importantly — the attention of participating companies, including Uber and Nvidia, eager to recruit the next generation of AI talent.


The event had another purpose: to advance a credo that when it comes to autonomous cars, tougher conditions produce smarter technology. Lidar — the expensive, light-pulsing sensors relied upon by current autonomous car models — is worthless in snow and thus “a fake,” says Uskova. Instead, cars should be trained to operate using high-definition cameras, low-cost radars and powerful AI that mimics the human brain.

As the 150 engineers pored over Moscow road data, it was obvious that this vision is a long way off. Most teams struggled to identify road signs, for instance, which were hard to detect in snow or rain; and for non-Russian speakers, the task was practically impossible.

“The problem is that the signs are small, and in Russia they look very similar,” explained Sami Mian, a computer scientist at Arizona State University. “The main difference is numbers and arrows, and a city entry sign can look almost the same as a stop sign. The top team had 40 percent accuracy.”

That team, three local guys from Moscow, had tapped into a secret weapon: a trove of the popular dashcam footage, which had been harvested and stored at nearby Moscow State University. Derived from 100,000 dashcam videos, that data served as the building blocks of a basic neural network hammered out by the cigarette-puffing coders, who mentioned that they had slept a total of five hours over three days.
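To give a sense of the shape of such a system, here is a minimal, purely illustrative sketch: a tiny numpy softmax classifier trained on synthetic "frames", standing in for the deep convolutional network and real dashcam footage the team actually used. All names and data below are invented for illustration.

```python
import numpy as np

# Hypothetical stand-in for a sign-recognition network trained on
# dashcam frames. Real systems use deep CNNs on actual images; here
# synthetic feature vectors play the role of cropped sign crops.

rng = np.random.default_rng(0)

def make_frames(n, n_classes=3, dim=16):
    """Synthetic stand-ins for flattened sign images, one prototype
    pattern per class plus 'weather' noise."""
    labels = rng.integers(0, n_classes, size=n)
    prototypes = rng.normal(0, 1, size=(n_classes, dim))
    frames = prototypes[labels] + rng.normal(0, 0.3, size=(n, dim))
    return frames, labels

def train_softmax(X, y, n_classes=3, lr=0.5, epochs=200):
    """Batch gradient descent on softmax cross-entropy."""
    W = np.zeros((X.shape[1], n_classes))
    for _ in range(epochs):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        onehot = np.eye(n_classes)[y]
        W -= lr * X.T @ (p - onehot) / len(y)
    return W

def accuracy(W, X, y):
    return float(np.mean((X @ W).argmax(axis=1) == y))

X, y = make_frames(300)
W = train_softmax(X, y)
print(round(accuracy(W, X, y), 2))
```

On this cleanly separable toy data the classifier does far better than the 40 percent the winning team managed on real Russian signs, which is precisely the point: the hard part is the data, not the training loop.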

Russian-built autonomous systems are already in use by Kamaz, Russia’s largest truck maker, and an agricultural equipment company. Both are working with Cognitive Technologies to build autonomous machines. But adapting the technology for city use, and bringing it to the international stage, is a steep battle.

No government agency has developed regulations for autonomous cars, so road testing is constrained to designated testing zones. The only car testing zone in Moscow is a 400-meter track embellished with pedestrian crossings, road signs, markings and a roundabout.

It’s a lousy facsimile of Moscow roads, or any road. But even worse is its location far outside the city center: a planned ride-along was scrapped because of bad traffic.

Nice Roads and Careful Pedestrians Are a Roadblock to Fully Autonomous Cars

Published on INVERSE.COM
Link to publication

To make the best autonomous cars, we’ll have to teach their A.I. how to navigate in the worst possible conditions. That’s why the most daring innovation in the field may wind up taking place far from the sun-soaked streets of California, and instead in less forgiving environments.
“No one will purchase a self-driving car to ride it in California only. This is a question of the next level industrial systems,” Olga Uskova, president of Russia’s Cognitive Technologies and founder of the C-Pilot autonomous driving system, tells Inverse. “For example in our system, we use such a tech called ‘virtual tunnel’. The vehicle moves not only by the road marking, but it defines the road scene the same way the human brain does, by analyzing the lateral situations — the location of trees, buildings, the horizon line etc.”
Uskova notes that 70 percent of the world’s roads are nothing like the ones found in California. But instead of working their way up from empty test tracks to more real-world situations, Uskova’s team decided to use these harsh conditions as a starting point. Bad-weather driving, they determined, consumed an estimated 35 to 40 percent of testing time anyway.
“Climate in most parts of Russia is presented by a large number of days per year when drivers must travel in bad weather conditions — on the roads with snow, mud, lack of road marking and poor visibility,” Uskova says.
It’s this deep-end-first approach that characterizes a great deal of the autonomous car development on the international stage. In the United Kingdom, for example, there are no laws against jaywalking. Some startups have argued this is an ideal venue for teaching car-driving A.I. how to deal with pesky pedestrians. One, based at Imperial College London, has already developed a system capable of understanding over 150 behaviors to judge whether a pedestrian is about to step out into the road.
“We are very confident that we are able to predict if someone is going to cross or not,” Leslie Noteboom, co-founder of Humanising Autonomy, told the Evening Standard. “Cars need to understand the full breadth of human behavior before they’re ever going to be implemented into urban environments. The current technology is able to understand whether something is a pedestrian and not a lamp post, and where that pedestrian is moving, framing them as a box. We’re looking inside that box to see what the person is doing, where they’re looking, are they aware of the car, are they on the phone or running — does this mean they are distracted, or risky?”
London is expected to host its first autonomous taxis in 2021, courtesy of Oxford-based developer Oxbotica and taxi firm Addison Lee. Oxbotica has completed a series of limited grocery deliveries as part of its tests, while preparing for a London-to-Oxford autonomous drive in the second half of 2019. The 60-mile journey has patchy cellular service, which will make car communications difficult. The country as a whole has around 75 percent geographic 3G and 4G coverage. The team will have to work out how the car should react when it loses internet connectivity.
In the case of Cognitive Pilot, the company has had to develop new sensors capable of handling the road come what may, including a radar that can create a 3D projection of objects from 300 meters away. While Silicon Valley largely focuses on lidar solutions that struggle in harsh weather, radar is better equipped for all seasons: in bad conditions, the range of the team’s radar falls by just 50 to 100 meters, to between 200 and 250 meters. Lidar, which uses a spinning laser to bounce off objects and read their distance, can fail in snow when its lasers bounce off falling flakes instead.
Silicon Valley is not blind to these issues. Waymo tested its autonomous driving system trekking through snow in South Lake Tahoe back in March 2017. And Tesla, which considers lidar too flawed, has already opted for a combination of cameras and radar for its “Hardware 2” suite, designed to support autonomy at a later date. Even CEO Elon Musk, however, notes that it’s “extremely difficult” to develop an all-purpose autonomous driving solution.
Technology firms have recently had to scale back their expectations, as Waymo’s trials in Arizona struggle with complex intersections. Drive.AI has even suggested redesigning roads to support these new cars. While Musk is still confident that Tesla could achieve a point-to-point solution sometime next year, the challenges faced by international developers show it’s unclear how these systems will work elsewhere.

Imaging Radar Detects Objects Up To 300 Meters Away

Published on SENSORS.COM
Link to publication

In the prototype stage, Cognitive 4D Imaging Radar can detect objects at a distance of 300 meters, across azimuth angles of 90 to 100 degrees and elevation angles of 15 to 20 degrees. The frequency band is 76 to 81 GHz. The radar is about the size of two iPhones. Notably, Cognitive 4D Imaging Radar performs vertical scanning without any mechanical elements.
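As a back-of-envelope check (this calculation is ours, not from the article), a 76 to 81 GHz band gives 5 GHz of sweep bandwidth, and for an FMCW-style automotive radar the best-case range resolution is c divided by twice the bandwidth:

```python
# Back-of-envelope range-resolution check for a 76-81 GHz radar.
# The FMCW formula delta_R = c / (2 * B) is standard; applying it to
# this particular product is our assumption, not the article's claim.

C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Best-case FMCW range resolution in meters for a given sweep."""
    return C / (2.0 * bandwidth_hz)

res = range_resolution(81e9 - 76e9)  # 5 GHz sweep
print(f"{res * 100:.1f} cm")  # about 3.0 cm
```

A resolution on the order of centimeters is what would let a radar recover object shape, as the next paragraph's comparison with conventional automotive radars suggests.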

Olga Uskova, president of Cognitive says, “Until now on the autonomous driving market there is no such radar that is ready for a serial mass production. Cognitive Imaging Radar detects not only the coordinates and speed of the road scene objects, but also their shape – just like a video camera does. This is truly a third eye of an autonomous vehicle. Radar works at any speed, in any weather conditions and has the best resolution and accuracy of objects detection – over 97.7%. In combination with video camera – this fusion guarantees safety on the road. This is the revolutionary development for the entire automotive industry. Another important thing is that the device has an affordable cost and compact dimensions, which makes it possible to start mass production right now.”

According to Cognitive Technologies, the radars currently available on the market work in only one horizontal dimension. These conventional radars can measure an object’s distance, trajectory and speed, but they cannot determine its shape. Such radars are practically unable, for example, to distinguish a car from a pedestrian, or a bridge from a long truck.

To get the necessary information about the road scene, many car manufacturers turn to LiDAR. However, LiDAR performance degrades significantly in rain, snow, fog and dust, and its cost is often comparable to the price of the whole vehicle. These factors currently rule out its industrial-scale use.

The radar also supports a synthetic-aperture radar (SAR) technique used to recreate the environment around the vehicle. SAR uses the radar together with the vehicle’s on-board computer to build a map of the surroundings, which any autonomous vehicle needs in order to understand where it is and what road scenarios are possible. The technique also lets the robocar pick out fine detail such as potholes, curbs and roadside verges.
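The article doesn’t describe Cognitive’s mapping pipeline, but the general idea of turning radar returns into an environment map can be sketched as a simple occupancy grid. Everything below, including the grid parameters, is an illustrative assumption, not the company’s implementation.

```python
import numpy as np

# Illustrative sketch: accumulating radar detections into a 2D
# occupancy-count grid centered on the car. A real SAR pipeline is
# far more involved; this only shows where such a map comes from.

def to_grid(returns, cell_m=0.5, extent_m=50.0):
    """returns: iterable of (range_m, azimuth_rad) detections.
    Produces a square grid counting detections per cell."""
    n = int(2 * extent_m / cell_m)
    grid = np.zeros((n, n), dtype=int)
    for r, az in returns:
        # polar detection -> Cartesian position relative to the car
        x = r * np.cos(az)
        y = r * np.sin(az)
        i = int((x + extent_m) / cell_m)
        j = int((y + extent_m) / cell_m)
        if 0 <= i < n and 0 <= j < n:
            grid[i, j] += 1
    return grid

# two detections of the same obstacle straight ahead land in one cell
grid = to_grid([(10.0, 0.0), (10.1, 0.0)])
print(grid.max())  # 2
```

A real system would coherently combine returns along the vehicle’s motion and fuse the result with camera data; the grid above is only the skeleton of that map.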

Paired with video cameras and Cognitive’s low-level data fusion technology, the radar will cost no more than a few hundred dollars. According to Cognitive Technologies’ Olga Uskova, the company has already received a pre-order from a car manufacturer for 200,000 units. For more info, visit Cognitive Technologies.