Archives 2018

Why is Russia so good at getting women into technology?


Published on ZDNET.COM

A century ago, Russia pushed for equal rights to education and work for men and women. The effects are still being felt in tech today.

A stubborn brother and 3,000 Russian rubles were all it took to convince Elena Tverdokhlebova to go into science and technology. “I was 10 years old when my brother, who was studying for the university admissions exams, gave me a math problem to solve,” she says.

He was jumping around the living room offering her 100 rubles, then 1,000, and finally 3,000 if she could do it. “To his surprise, I was able to solve it, and he gave me 3,000 rubles, about $100 at that time,” she says.

This small incentive and the support she received from her family convinced Tverdokhlebova to study math and later computer science. “I became addicted,” she says.

Tverdokhlebova, who is now a data scientist, was part of an all-women Russian team that won the International Quant Championship, a fintech competition organized by computer-powered hedge fund WorldQuant.

She was representing the Moscow Institute of Physics and Technology, together with Tatiana Shpakova and Karina Ashurbekova. “We’re all working in machine learning and data science. We have a good harmony inside our team,” Tverdokhlebova says.

According to the UNESCO Institute for Statistics, 40 percent of Russian researchers are women. Local technology companies such as Yandex, dubbed the Russian Google, say that women make up about a third of their employees.

Other countries in the region, such as Bulgaria and Romania, are also above average when it comes to diversity in science, technology, engineering, and mathematics.

“Eastern Europe consistently produces remarkable women in tech or cybersecurity,” says Jane Frankland, managing director at Cyber Security Capital in the UK and author of the book InSecurity.


She believes there are several reasons for that: girls are expected to take up computer science from an early age and perform well, and there’s no stigma associated with studying technology.

But there’s something more: “Culturally, women in Eastern Europe are characterized as having a forthright nature and this means they’re more inclined to speak up for themselves, and be hardy to rejection, which is typically needed in a male-dominated environment,” Frankland says.

A century of women in tech

Elena Tverdokhlebova says after the 3,000-ruble bet, her mother noticed her interest in math, and pushed her to study harder. Her teammate Tatiana Shpakova also has her mother to thank for the career she has today.

Parental encouragement and having women as role models prompted her to choose a career in tech. “I think it usually comes from the family,” Shpakova says.

As a child, Shpakova lent her parents her savings when they ran out of money, but she kept the books, calculating interest rates. “I had a special notebook where I wrote down how much more I needed to buy myself an apartment,” she says.

A study carried out by Microsoft in 2017 showed that more than 60 percent of Russian parents encourage their daughters to study science, technology, engineering, and mathematics.

“Over half (55 percent) of Russian girls feel there are encouraging role models out there for them, compared with, for example, just 35 percent of Dutch respondents,” the report reads. It also points out that Russian girls become interested in science, tech, engineering, and math at the age of 10, a year before the rest of Europe.

The history of Russian women in tech dates back almost a hundred years. In the 1920s, Russia was one of the first countries to issue legislation establishing equal rights to education and work for men and women. The percentage of women researchers soared after that, from 10 percent in 1917 to 42 percent in 1938, according to the Russian Ministry of Education.

Russia’s current generation of science and technology professionals capitalizes on the past, says Olga Uskova, president of Cognitive Technologies Group. She leads a team of engineers who develop autonomous-driving solutions for ground transportation, both agricultural and civil. The team has developed, for instance, a 4D Imaging Radar tailored for harsh weather.

Uskova was born in the mid-1960s, into a family that nurtured her love of science and math. Her parents both majored in math and computer science, and her father was one of the creators of the Kaissa chess program, which became the first world computer chess champion.

When she was a teenager in the 1970s, scientists in Russia were rock stars. Uskova had plenty of science journals for children to dive into, and several math clubs to join. “Back in the 1970s, there was a real cult of the technical specialist. Plus, it was almost the only area not related to politics,” she says.

The Soviet regime, which needed the workforce, encouraged women to have a job and defined them as both mothers and workers. Many had to work in factories and take positions such as welder or bricklayer. To avoid that, some studied hard in school to be able to choose, based on grades, occupations such as engineer or researcher.

Uskova decided to go into tech because it was the field she was best at. Her driving force was poverty. In the 1980s, when the economy stagnated, people lived on food rations, and mundane things — such as chocolate, soda, or fruit — seemed opulent.

“I was selling OCR [optical character recognition] software and in the stores there was no bread,” Uskova says. “The strongest motivation is the threat to existence. If you want to survive, you will be able to sell snow to the Eskimo in winter.”


Looking back on those times, Uskova remembers how much she and other women have worked and believes they’ve raised their daughters in the same spirit. “All the babysitting and raising of artificial brains with deep learning suits female hands. Men lack patience,” she says.

Uskova advises Russian women interested in tech today to pay attention to the field of deep neural networks and to be interested in more than one area of science: “A good specialist must be good in fusing math, biology, psychology, physics, and more.”

Although Russia is above average when it comes to women in science and tech, in recent years the number of female scientists has decreased from 151,500 in 2014 to 148,300 in 2016, according to data provided by the Russian Ministry of Education.

While women based in Moscow or Saint Petersburg dream of having a career, those from small towns and villages are still encouraged to start a family in their early to mid-twenties, often missing out on professional opportunities. They are also expected to do most of the household chores.

Olga Uskova says more should be done to encourage women to assume not only technical positions, but also business-related ones, both in Russia and abroad.

During negotiations for projects such as self-driving tractors, she often finds herself the only woman at the table. “The international auto world is a ‘sausage’ market,” she says.


Auto-navigating Moscow?



Chaotic roads, bad weather and reckless habits make the Russian capital one of the worst places in the world to drive, and its quest to build an autonomous car uniquely challenging.

In certain sunny climes, self-driving cars are multiplying. Dressed in signature spinning sensors, the vehicles putter along roads in California, Arizona and Nevada, hoovering up data that will one day make them smart enough to run without humans.

Besides perennial sunshine, those places share other common traits: wide, well-manicured roads, functional traffic enforcement, and agreeable local governments. That’s how Chandler, Arizona — a Phoenix suburb on nobody’s radar as of a few weeks ago — became the first US town to host autonomous cars on public streets without human safety drivers. Courtesy of Waymo, they’re expected to start carrying passengers within the next few months.

If you ask many Silicon Valley companies, the future of driverless cars is just a couple of years away. But halfway across the world, the outlook is a lot more skeptical.


“We don’t have the luxury of California roads,” says Olga Uskova of Cognitive Technologies, a Russian software maker that specializes in autonomous vehicles. “The environment is ever-changing: the snow has covered traffic signs; it’s raining on your windshield, the sun is blocking you. Our people train using these kinds of data.”


Uskova asserts that technology tested in sun-drenched utopias can’t possibly translate to a city like Moscow. Gnarly road planning, terrible weather and reckless habits make the Russian capital one of the worst cities in the world for drivers.

With roads that spread like a cobweb away from the Kremlin, disturbances like car wrecks, construction and government motorcades can wreak havoc for miles. Seat belts are scorned, and traffic laws widely ignored; speeding violations are enforced with US$4 fines, paid by phone. It’s no surprise that Russia’s rate of road fatalities is nearly double that of the US, with an average of 20 serious accidents a day just in Moscow.

Or, for that matter, that dashcam videos of Russian road fights and collisions make up such a popular subgenre on YouTube.

But most of the world’s roads look more like Russia than Mountain View, and according to Uskova, that gives Russian developers an edge in building the brains of autonomous cars.

That theory was tested at a recent event in Moscow, advertised as the world’s first hackathon for driverless cars. In an austere, Soviet-era dormitory, top engineering students from far-flung schools like MIT, Cambridge and Peking University sank into beanbag chairs for a three-day coding binge.

“We’re here because it’s a chance to change the world over the next 10 to 15 years,” said Mitch Mueller, a student who traveled from the University of Wisconsin to compete. They were also competing for a cash prize, bragging rights and — most importantly — the attention of participating companies, including Uber and Nvidia, eager to recruit the next generation of AI talent.


The event had another purpose: to advance a credo that when it comes to autonomous cars, tougher conditions produce smarter technology. Lidar — the expensive, light-pulsing sensors relied upon by current autonomous car models — is worthless in snow and thus “a fake,” says Uskova. Instead, cars should be trained to operate using high-definition cameras, low-cost radars and powerful AI that mimics the human brain.

As the 150 engineers pored over Moscow road data, it was obvious that this vision is a long way off. Most cars struggled to identify signs, for instance, which were hard to detect in snow or rain; for non-Russian speakers, the task was practically impossible.

“The problem is that the signs are small, and in Russia they look very similar,” explained Sami Mian, a computer scientist at Arizona State University. “The main difference is numbers and arrows, and a city entry sign can look almost the same as a stop sign. The top team had 40 percent accuracy.”

That team, three local guys from Moscow, had tapped into a secret weapon: a trove of the popular dashcam footage, which had been harvested and stored at nearby Moscow State University. Derived from 100,000 dashcam videos, that data served as the building blocks of a basic neural network hammered out by the cigarette-puffing coders, who mentioned that they had slept a total of five hours over three days.

Russian-built autonomous systems are already in use by Kamaz, Russia’s largest truck maker, and an agricultural equipment company. Both are working with Cognitive Technologies to build autonomous machines. But adapting the technology for city use, and bringing it to the international stage, is a steep battle.

No government agency has developed regulations for autonomous cars, so road testing is confined to designated testing zones. The only car testing zone in Moscow is a 400-meter track embellished with pedestrian crossings, road signs, markings and a roundabout.

It’s a lousy facsimile of Moscow roads, or any road. But even worse is its location far outside the city center: a planned ride-along was scrapped because of bad traffic.

Nice Roads and Careful Pedestrians Are a Roadblock to Fully Autonomous Cars


Published on INVERSE.COM

To make the best autonomous cars, we’ll have to teach their A.I. how to navigate in the worst possible conditions. That’s why the most daring innovation in the field may wind up taking place far from the sun-soaked streets of California, and instead in less forgiving environments.

“No one will purchase a self-driving car to ride it in California only. This is a question of next-level industrial systems,” Olga Uskova, president of Russia’s Cognitive Technologies and founder of the C-Pilot autonomous driving system, tells Inverse. “For example, in our system we use a technique called a ‘virtual tunnel’. The vehicle moves not only by the road markings; it defines the road scene the same way the human brain does, by analyzing the lateral situation — the location of trees, buildings, the horizon line, etc.”

Uskova notes that 70 percent of the world’s roads are nothing like the ones found in California. But instead of working their way up from empty test tracks to more real-world situations, Uskova’s team decided to use these harsh conditions as a starting point. Driving in bad weather, they determined, already consumed an estimated 35 to 40 percent of testing time.

“The climate in most parts of Russia means a large number of days per year when drivers must travel in bad weather conditions — on roads with snow, mud, a lack of road markings and poor visibility,” Uskova says.

It’s this deep-end-first approach that characterizes a great deal of autonomous car development on the international stage. In the United Kingdom, for example, there are no laws against jaywalking. Some startups have argued this makes it an ideal venue for teaching car-driving A.I. how to deal with pesky pedestrians. One, based at Imperial College London, has already developed a system capable of understanding over 150 behaviors to judge whether a pedestrian is about to step out into the road.

“We are very confident that we are able to predict if someone is going to cross or not,” Leslie Noteboom, co-founder of Humanising Autonomy, told the Evening Standard. “Cars need to understand the full breadth of human behavior before they’re ever going to be implemented into urban environments. The current technology is able to understand whether something is a pedestrian and not a lamp post, and where that pedestrian is moving, framing them as a box. We’re looking inside that box to see what the person is doing, where they’re looking, are they aware of the car, are they on the phone or running — does this mean they are distracted, or risky?”

London is expected to host its first autonomous taxis in 2021, courtesy of Oxford-based developer Oxbotica and taxi firm Addison Lee. Oxbotica has completed a series of limited grocery deliveries as part of its tests, while preparing for a London-to-Oxford autonomous drive in the second half of 2019. The 60-mile journey has patchy cellular service, which will make car communications difficult; the country as a whole has only around 75 percent geographic 3G and 4G coverage. The team will have to work out how the car should react when it loses internet connectivity.

Cognitive Pilot, for its part, has had to develop new sensors capable of handling the road come what may. It has built a radar capable of creating a 3D projection of objects from 300 meters away. While Silicon Valley largely focuses on lidar solutions that struggle with harsh weather, radar is better equipped for all seasons. In bad weather, the range of the team’s radar falls by just 50 to 100 meters, to between 200 and 250 meters. Lidar, which uses a spinning laser to bounce off objects and read their distance, can fail in snow when its lasers bounce off falling flakes instead.

Silicon Valley is not blind to these issues. Waymo tested its autonomous driving system in the snow of South Lake Tahoe back in March 2017. And Tesla, which considers lidar to have too many flaws, has already opted for a combination of cameras and radar for its “Hardware 2” suite, designed to support autonomy at a later date. Even CEO Elon Musk, however, notes that it’s “extremely difficult” to develop an all-purpose autonomous driving solution.

Technology firms have recently had to scale back their expectations, as Waymo’s trials in Arizona struggle with complex intersections. Drive.AI has even suggested redesigning roads to support these new cars. While Musk remains confident that Tesla could achieve a point-to-point solution sometime next year, the challenges faced by international developers show it’s unclear how these systems will work elsewhere.

Imaging Radar Detects Objects Up To 300 Meters Away


Published on SENSORS.COM

In the prototype stage, the Cognitive 4D Imaging Radar can detect objects at a distance of 300 meters, across an azimuth field of view of 90 to 100 degrees and elevation angles of 15 to 20 degrees. The frequency band is 76 to 81 GHz. The radar is about the size of two iPhones. Notably, the Cognitive 4D Imaging Radar performs vertical scanning without the use of any mechanical elements.
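
As a rough back-of-the-envelope check (generic radar physics, not a figure published by Cognitive), the quoted 76 to 81 GHz band implies a sweep bandwidth of up to 5 GHz, which sets the theoretical range resolution via the standard formula dR = c / (2B):

```python
# Back-of-the-envelope illustration: theoretical range resolution for a
# frequency-sweep radar, dR = c / (2B). The 76-81 GHz band gives up to
# B = 5 GHz of bandwidth. Generic physics, not Cognitive's published spec.
C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz: float) -> float:
    """Smallest separable distance between two targets for sweep bandwidth B."""
    return C / (2.0 * bandwidth_hz)

print(range_resolution_m(5e9))  # ~0.03 m, i.e. about 3 cm
```

A wider sweep resolves finer detail, which is part of why the 76–81 GHz automotive band is attractive for imaging-style radar.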

Olga Uskova, president of Cognitive, says: “Until now, there has been no radar on the autonomous driving market that is ready for serial mass production. The Cognitive Imaging Radar detects not only the coordinates and speed of road-scene objects, but also their shape – just like a video camera does. This is truly a third eye for an autonomous vehicle. The radar works at any speed, in any weather conditions, and has the best resolution and accuracy of object detection – over 97.7 percent. In combination with a video camera, this fusion guarantees safety on the road. This is a revolutionary development for the entire automotive industry. Another important thing is that the device has an affordable cost and compact dimensions, which makes it possible to start mass production right now.”

According to Cognitive Technologies, the radars currently available on the market work in only one horizontal dimension. These conventional radars can only calculate the distance to objects, the trajectory of their movement, and their speed; they cannot determine the shape of objects. Such radars are practically unable, for example, to distinguish a car from a pedestrian or a bridge from a long truck.

To get the necessary information about the road scene, many car manufacturers must use LiDAR. However, the performance of LiDARs degrades significantly in rain, snow, fog and dust. In addition, their cost is usually comparable to the price of the whole vehicle. These factors rule out their industrial use for now.

In addition, the radar supports the synthetic-aperture radar (SAR) technique, which is used to recreate the environment around the vehicle. This technology uses the radar and the vehicle’s on-board computer to build a map of the surroundings. Such a map is necessary for any autonomous vehicle to understand where the car is located and what scenarios are possible on the road. The technology also allows the robocar to see objects such as potholes, curbs and roadside verges in high detail.
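
The mapping step can be illustrated with a toy occupancy grid: each simulated radar return (a bearing and a range) is folded into a 2D grid of cells around the vehicle. This is an invented minimal sketch of the general idea, not Cognitive's SAR pipeline:

```python
# Toy sketch: fold simulated radar returns (bearing, range) into a 2D
# occupancy grid around the vehicle. Illustrative only; a real SAR
# mapping pipeline involves coherent processing over vehicle motion.
import math

def update_grid(grid, pose, bearings_deg, ranges_m, cell=1.0):
    """Mark the grid cells where returns landed; grid maps (col, row) -> 1."""
    x0, y0 = pose
    for b, r in zip(bearings_deg, ranges_m):
        x = x0 + r * math.cos(math.radians(b))  # project return into x-y plane
        y = y0 + r * math.sin(math.radians(b))
        grid[(round(x / cell), round(y / cell))] = 1
    return grid

# Two returns: one 10 m straight ahead, one 5 m to the left.
grid = update_grid({}, (0.0, 0.0), [0, 90], [10.0, 5.0])
print(sorted(grid))  # [(0, 5), (10, 0)]
```

Accumulating such updates as the vehicle moves yields the environment map the article describes, which the planner can then query for obstacles.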

Combined with video cameras and Cognitive’s low-level data fusion technology, the radar will cost no more than a few hundred dollars. According to Cognitive Technologies’ Olga Uskova, the company has a pre-order from a car manufacturer for 200,000 units. For more info, visit Cognitive Technologies.

C2-A2 AGRODROID the world’s new Smart Farming product



European software developer Cognitive Technologies has developed the world’s first industrial agrodroid for the international agricultural market.

Cognitive Technologies – one of the top developers of AI-based systems for self-driving cars and autonomous transportation – announces the launch of the C2-A2 AGRODROID (Cognitive 2 – Agro 2 – Droid 1), the world’s first industrial model of a universal control system for autonomous agricultural machinery.

“C2-A2 is an artificial brain equipped with a cradle – a universal device for fast connection to different agricultural machinery: harvesters, tractors, sprayers and others,” says Olga Uskova, president of Cognitive Technologies. “Within our team we consider C2-A2 a brother-in-law of the world-famous R2-D2, the astromech droid from the Star Wars space opera.”

“Installing the C2-A2 AGRODROID makes any harvester or tractor autonomous, and any agricultural activity smart. Supplying the solution with a cradle makes it possible to move this artificial brain from one machine to another without purchasing a new system each time,” Uskova continues.

The C2-A2 AGRODROID is developed on the basis of the Cognitive Agro Pilot – an autonomous driving system for agricultural machinery that was presented earlier. The key innovation of the new product is the state-of-the-art Convolutional Neural Network (CNN) that was modified by the Cognitive Pilot team for agricultural purposes and tasks.

“On the international agricultural market, this is the first product of its type based on a standard Nvidia computing device (the Nvidia Jetson TX2) running deep neural networks,” claims Uskova.

An important feature of the new product is the complete safety of all fieldwork. In comparison with existing autonomous driving systems for agricultural machinery, which are GPS-based, the neural-network-based C2-A2 opens up a new class of systems able to protect equipment and people from all possible collisions.

Another competitive advantage of the solution is the absence of expensive sensors. Unlike analogues that use expensive laser scanners (LiDARs) and stereo cameras, the Cognitive Technologies team has developed a computer vision system able to achieve similar results with a single video camera.

Using just one sensor, rather than the three or four proposed by other manufacturers, reduces the cost of the whole solution by a factor of three to five. The final cost of the C2-A2 AGRODROID will be about $3,000, which is approximately 1.5 percent of the cost of a combine harvester and about 3 percent of the cost of a tractor.

“We estimate the agrodroid market volume at $94 billion and expect to take at least a 15 percent market share in the next five years. The world’s tractor fleet that is ready for our solution is about 27 million machines,” concludes Olga Uskova.

Programming As Art: How Blockchain Can Help Artists (And Save Art)


Published on INC Magazine

Can blockchain save art?

Last month I spent a week in Moscow, where I spoke at the Skolkovo Robotics Forum on Smart Matter: 4 Things That Are Making Every “Thing” Smart. While there, I happened to visit a unique gallery in the heart of Russia’s top cybernetics institute, the National University of Science and Technology, or MISiS.

There, I met Anna Karganova, the director of the Russian Abstract Art Foundation, and Olga Uskova, its president. (Olga is also a scientist, CEO, and self-driving car technologist.)

After viewing some of the art, our conversation surprisingly turned to blockchain.

To put it mildly, that’s not what I expected from an art historian.

But as the conversation developed, it became clear that artists and curators are looking to blockchain as a possible solution to three problems in art. Provenance, or where an artwork came from, is always a challenge. Fraud will be an issue as long as people are paying millions of dollars for famous paintings. And knowledge about the art is something that curators are always hoping to share.

Here’s a summary of our conversation:

Koetsier: How can blockchain help artists and the art world?

Karganova: In the future, within just 5-7 years, blockchain technology will significantly increase the safety level for all participants of the art process. There are issues that blockchain can already solve now and some issues for which the technology still needs to “grow up.”

For us collectors, the most attractive and important thing this technology can offer is transparency across all the processes. In an open, decentralized database, which we can already build with blockchain, we can store information and learn about the origin of an artwork … we can find out who owns it and who holds the copyrights. The technology also makes it possible to monitor all transactions involving a particular piece of art and maintain its provenance (exhibition participation, publications in catalogs, etc.).
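
The database Karganova describes can be sketched as a minimal hash-linked ledger: each provenance event points to the hash of the previous one, so any tampering breaks the chain. This is an illustrative toy (the function names and record fields are invented), not a real blockchain platform:

```python
# Toy hash-linked provenance ledger for an artwork. Each appended event
# stores the hash of the previous entry, so retroactive edits are
# detectable. Illustrative sketch only; not a production blockchain.
import hashlib
import json

def add_event(chain: list, event: dict) -> list:
    """Append an event, linking it to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain: list) -> bool:
    """Recompute every link; any tampered or reordered event fails."""
    prev = "0" * 64
    for entry in chain:
        body = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

chain = []
add_event(chain, {"type": "created", "artist": "V. Zubarev", "year": 1977})
add_event(chain, {"type": "exhibited", "venue": "MISiS gallery"})
print(verify(chain))  # True
```

Real art-provenance platforms add signatures and distributed consensus on top, but the tamper-evidence property collectors care about comes from exactly this kind of chained hashing.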

If we had such a database, painters and their heirs would be able to track all the movements and relocations of their artworks. This would protect them from illegal sales, and from situations where works are not returned to their owners for a long time after an exhibition. It’s worth mentioning that the technology would also be genuinely useful for collecting artists’ resale royalties. So over the long term, painters and collectors will be more willing to lend their works to various temporary exhibitions.

The most interesting feature that could be built with blockchain technology is the ability to purchase a share of an artwork. But for that, the necessary legislative base does not yet exist anywhere in the world.

Uskova: In this regard, in my opinion, we can implement such an advanced thing as a special cryptocurrency that will be used to evaluate artworks. Accumulation of the art’s capital/net worth can depend exclusively on demand: for example – on the total number of views or on the number of acquisitions.

Koetsier: Where would it be the most useful?

Karganova: First of all, blockchain technology can significantly help us increase and control the circulation of artworks. Linking all originals to a single open database would ensure the number of copies of paintings, photos and videos is fixed and guaranteed. For all new multimedia art, blockchain is a perfect breakthrough system. And it will be especially interesting for potential buyers who are attracted by innovation and high tech in the arts, but who are often stopped short of a real purchase by the insecurity peculiar to the art market.

Koetsier: Honestly, I was really shocked to hear you talk about blockchain. Maybe I had an internal prejudice … art is creative, and blockchain is technology. How did you get interested in technology?

Karganova: Art and Technology have been linked for a long time already and we just can’t ignore this fact. Some time ago there were doubts about online auctions, but now this method of bidding has successfully and organically merged into the art environment that is historically quite conservative.

The convergence of arts and technology is a process that comes from several directions.

Artists who work with audiovisual and VR technologies often build their works by rethinking classic art and the ideas embedded in it. More and more traditional museums include media artworks in their expositions. And of course, museums are trying to digitize their collections and store them on the web, an important reason being the need to attract a younger audience. There are steps toward art from the developers of artificial intelligence, too.

Uskova: Blockchain is a technology that is based on a new revolutionary ideology. For the artist it’s not only about the safety of the artworks’ storage and an easy access to virtual galleries, but it is an opportunity and a tool for creation of a new type of digital art. For example it may be an object that consists of many decentered, infinitely embedded worlds that are linked to and united by a single idea.

In the collection of our foundation we have works of a unique artist, Vladislav Zubarev. Back in 1977, when the world hadn’t yet suspected the existence of String Theory and before the discovery of the Higgs Boson particle, Zubarev introduced to the contemporary art world his Concept of Temporality.

He said that in the current time, with its dynamics and pace of change, it’s impossible to be a truly modern creator without putting time into a single coordinate system. He began to draw in four dimensions, and his paintings acquired a truly magical dynamic whose secrets experts around the globe have still not unraveled. Zubarev’s Theory of Temporal Art (1977) included so many correct guesses about the nature of space and time that in the 2000s delegations of physicists visited him, trying to understand how an artist in the 1970s could visualize what was only discovered decades later.

So this is what can happen with blockchain technology too. Decentralized blockchain is a system of the different connected worlds of ever-changing information … a great basis for art objects of a new type.

Koetsier: How big an issue is fraud in the art world? Any idea of the scope of the problem?

Uskova: The problem of buying fakes is not as big now as it was 10-15 years ago. There are several explanations for this.

Firstly, buyers’ interest has shifted toward post-WWII and contemporary art, where there are many ways to track the origin of an artwork and its provenance. Secondly, methods of technological analysis have really improved. As for those who prefer to buy antique and classic art, these people have been doing it for many years and are experts themselves. It’s more correct to call them not collectors but connoisseurs.

For Russian art, the most frequently falsified period is the Russian avant garde of the early 20th century. It may seem that the most famous fraud cases were left behind in the 2000s, but just a few months ago the Museum of Fine Arts in Ghent was involved in a huge scandal. Russian and international experts doubted the authenticity of some avant garde paintings from a private collection on show in the museum. This led to a large-scale investigation and the early closure of the exhibition. It turned out that the provenance of the paintings was unclear and consisted of fake legends, and even the cited publications in exhibition catalogs were forged.

So what should we prepare for? I think that in the short term, mid-20th-century art will be the focus of forgers. At the Russian Abstract Art Foundation we have already started creating a database of samples for our artists and completing catalogues raisonnés for internal use.

Koetsier: Defending against fraud is one thing blockchain can help the art world with. Anything else?

Karganova: Before buying an artwork, you should check as many details and facts as possible.

There are two main types of expertise: technological research and the kind provided by art historians. Technological expertise studies pigments and binders and determines whether a painting fits the period that is claimed. But this type of study doesn’t prove or identify the artwork’s authorship. To confirm or disprove authorship, experts take X-rays during the technological examination. This helps reveal the structure of the painting, which can be compared with museum samples. In some cases ultraviolet light may be used; it identifies signatures applied over old varnish and shows the preparatory drawings that are individual to each artist.

As for the expertise of art historians, it usually takes the form of scholarly research. If you are about to close a deal, it makes sense to ask for the opinions of several experts, preferably from different countries. For any given period or artist there is usually only a limited number of experts; if two or three of them say the work is genuine, then in case of suspicion there will be no one left to object. The certificates themselves should also be checked for authenticity. Nowadays, specialists from auction houses ask the organization that provided the expertise to confirm whether it actually issued the submitted papers.

All these processes are very time-consuming. And just imagine if all the data could be uploaded to one open database!

A clear provenance is a very strong reason to buy an artwork. The ideal and rare situation is when the whole history of the artwork can be traced from the artist’s studio to all the exhibitions and all owners. If any time periods are missing – then the provenance research is required.
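
The “one open database” Karganova imagines maps naturally onto a hash-chained ledger: each provenance event commits to the hash of the one before it, so forging any entry breaks verification of everything after it. A minimal Python sketch (the record fields and function names are my own illustration, not any real art registry’s schema):

```python
import hashlib
import json

def event_hash(event: dict, prev_hash: str) -> str:
    """Hash a provenance event together with the previous entry's hash."""
    payload = json.dumps(event, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(events):
    """Link events into a tamper-evident chain of (event, hash) pairs."""
    chain, prev = [], "0" * 64  # genesis hash
    for ev in events:
        h = event_hash(ev, prev)
        chain.append((ev, h))
        prev = h
    return chain

def verify_chain(chain) -> bool:
    """Recompute every hash; any altered event invalidates the chain."""
    prev = "0" * 64
    for ev, h in chain:
        if event_hash(ev, prev) != h:
            return False
        prev = h
    return True

chain = build_chain([
    {"owner": "artist's studio", "year": 1920},
    {"owner": "private collector", "year": 1965},
    {"owner": "auction house", "year": 2018},
])
assert verify_chain(chain)
chain[1][0]["year"] = 1970      # forge a date in the middle...
assert not verify_chain(chain)  # ...and the chain no longer verifies
```

This is exactly the property a forger of “fake legends” would run up against: inserting or editing an ownership record is detectable by anyone who can recompute the hashes.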

Koetsier: Anything else that I’m missing?

Uskova: We are witnessing only the beginning of blockchain’s formation; the technology is still at an early stage. But the first deals are beginning to appear. There are still very few of them, but they set a precedent and allow us to identify the possible downsides and limitations.

I think the attractiveness of blockchain will grow with the generation that develops it. A great role in the current art world is played by people accustomed to certain rules and a certain atmosphere. Pre-auction exhibitions, the electric atmosphere of the auction houses, unexpected discoveries, positive feedback from art experts: all of these provide the emotions that matter so much to collectors of the old school. This emotional experience, inseparable from the process of purchasing an artwork, is one of the most important parts of collecting. When people for whom speed and results matter more acquire the necessary resources, the adoption of blockchain will no longer be an issue.

But even now, the mathematicians and software developers who work on AI projects can no longer work without contemporary art. For example, the Cognitive Pilot project team, which is developing neural networks for self-driving cars, has recently moved to a new level: developers are now creating emotions for artificial intelligence.

This kind of work requires a fundamentally different approach: not mathematics but art, in order to understand and project emotions. So, to understand different emotions, neural-network specialists take part in art master classes conducted using the unique method of Ely Belyutin.

Modern programming is a form of modern art. It has ceased to be a purely logical apparatus. With the advent of heuristic programming methods and the creation of AI objects, software products have acquired the theme of emotion that is so inherent to contemporary art.

Top Russian Cybernetics Experts On AI, Robot Morals, Human Extinction ... And Self-Driving Cars

Published on FORBES.COM
Link to publication

Elon Musk has stated his opinion that AI could lead to the extinction of humanity, and it’s one of the reasons he’s working hard to make us a multi-planetary species. Stephen Hawking was incredibly clear as well: true AI could be the “worst thing” for humanity.

And yet, every country and major company is racing to build AI systems.

Small wonder: Russian president Vladimir Putin has said that the nation that leads in AI will be the ruler of the world. And China is investing heavily in winning the race.

I was in Moscow recently to speak at Skolkovo Robotics Forum. One of the highlights: a visit to Russia’s top cybernetics institute, the National University of Science and Technology, or MISiS.

I asked two of its leaders about AI, its dangers, and — of course — one of the tasks we might use AI for: self-driving cars.

Koetsier: There’s a lot of noise about AI today. We have machine learning and neural networks … but what is true AI?

Olga Uskova (Head of the Department of Engineering Cybernetics): In my experience, by ‘artificial intelligence’ people mean the state in which an object actually becomes a subject and begins to think abstractly and independently.

Koetsier: You’ve been working on AI for decades, and the MISIS Cybernetics department just celebrated its 50th birthday. How far have we come in that time?

Konstantin Bakulev (Deputy Head of the Department of Engineering Cybernetics): Fifty years ago, when the MISiS Department of Engineering Cybernetics was created by a group of very young scientists from the Institute of Theoretical Physics under the leadership of Alexander Kronrod, the field of AI was represented by heuristic programming. Literally in the department’s first year, the young scientists, together with students, created the first heuristic algorithms for playing card games.

The computer complex at that time occupied two rooms, an area of about 80 square meters. Program code was entered on punched cards, and each response of the machine took several minutes.

In 2018, as a course project, students of the department built a system for semantic analysis of news feeds that produces analytical reports on how Russians’ purchasing power changes over the holidays. Gigabytes of information are now processed in just a few seconds, and the work was done by two fourth-year students within a month.

Koetsier: For achieving AI, do we need more speed/processors/memory? Or do we need different thinking/algorithms?

Olga Uskova: We need all of these.

On one hand, some leading developers, including us, are following the path of building an anthropomorphic model. When we started studying the decision-making process of a person behind the wheel [note: Uskova also leads a self-driving startup, Cognitive Pilot], we discovered that logical intelligence is not the only participant in this process. A significant part is played by emotional intelligence, whose data does not go through sequential processing by standard methods. Final decisions are reached by connecting several types of neural networks.

We are building the neural networks for our automotive AI on the same principle. There may not be many images, but they must be correctly labeled and must give the network new knowledge.

This led us to the theme of programming intuition: in particular, analyzing the behavior of small objects on the road as material for predicting how the road scene will change in the next few seconds (for example, a change in the angle of the car driving next to you, or a ball rolling out onto the road). In many cases what is needed is not more data but smarter data. It is a bit like teaching people to speed-read.
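
The small-object prediction Uskova describes can be illustrated with a toy linear extrapolation: track an object’s last observed positions, project its motion a couple of seconds ahead, and flag it if the projection enters the vehicle’s lane. This is only an illustrative sketch of the idea, not Cognitive Pilot’s actual method:

```python
def predict_position(track, dt):
    """Extrapolate the last observed velocity forward by dt seconds.
    track: list of (t, x, y) observations, at least two."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)
    vy = (y1 - y0) / (t1 - t0)
    return x1 + vx * dt, y1 + vy * dt

def will_enter_lane(track, lane_x_range, dt):
    """Flag the object if its extrapolated position falls inside our lane."""
    x, _ = predict_position(track, dt)
    return lane_x_range[0] <= x <= lane_x_range[1]

# A ball rolling from the sidewalk (x = 5 m) toward our lane (x in [-2, 2]):
ball = [(0.0, 5.0, 20.0), (0.5, 4.0, 20.0)]  # moving at -2 m/s in x
print(will_enter_lane(ball, (-2.0, 2.0), 2.0))  # → True: brake early
```

A production system would of course use learned motion models rather than straight-line extrapolation, but the payoff is the same: a few well-chosen observations of a small object carry more predictive value than gigabytes of unlabeled footage.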

Koetsier: The kind of AI everyone is waiting for is a kind of Star Trek intelligence that you can talk to, get answers from, and have human-like conversations with. We’re seeing the beginnings of this, maybe, with Siri, Alexa, Cortana, and the Google Assistant. How far away are we from near-human-like intelligence from these assistants?

Konstantin Bakulev: This is a multi-layered question.

Technologically we have already reached the point of being able to program emotional intelligence, and this is extremely important for communication. But when a person is brought up, he or she inherits, genetically or historically, a number of restrictions: moral, religious, social. In different value systems the basic moral values can differ diametrically.

If you develop AI, and especially its emotional part, without such limitations, the result can acquire a set of aggressive characteristics. Aggression is one of the strongest emotions on social networks, and it is dangerous to feed such data to an AI.

Koetsier: Super-human intelligence, of course, is what some people worry about from AI. Do you see that as inevitable? And, will it come quickly when real breakthroughs in near-human AI are made?

Olga Uskova: I share Stephen Hawking’s opinion; we had a short talk on this topic a few years ago, and I am now convinced that he truly foresaw many things.

Continuing the anthropomorphic analogy: as a person grows up and learns, beyond recognizing images and the meanings of surrounding objects, self-awareness as an individual arises. In the same way, an AI at the stage of recognizing meanings will at some point inevitably arrive at awareness of itself as a separate entity.

And if by that time we have not built moral limitations into the training system, the consequences for humanity could be swift and terrible.

Even now it is not obvious to many people that they are useful to the existing ecosystem. People destroy the environment, litter, and kill rare species of animals, so a logically thinking AI would quickly conclude that mankind is useless.

Koetsier: Should we worry about super-intelligent AIs? Will they be dangerous?

Konstantin Bakulev: I think two problems need to be solved in parallel: we need to adjust our own behavior toward good and love, and, across the whole planet, impose moral restraints when programming AI, along the lines of Isaac Asimov’s principles.

The principles of AI development and management should be similar to the principles of working with weapons of mass destruction.

Koetsier: When will we get there?

Olga Uskova: Well, here I want to be extremely honest. When programming neural networks, we clearly understand the input and we understand the output, but we do not always understand what happens inside.

During some tests at our testing facility there were cases when a multi-ton vehicle suddenly made an independent decision that improved the situation, a decision which, as far as we can tell, we never programmed. After several months of analyzing what had happened, we gained new knowledge about the behavior of deep neural networks.

So we are not only developing and teaching the artificial brain; it is also teaching us.

And the neurophysiologists who consult for our team use the results of Cognitive Pilot’s work to help patients recover after serious accidents. We already live in a mixed society where both biological and silicon organisms are present. And while we fight to make the silicon organisms smoother and smarter, it is very important not to completely ‘mute’ the biological ones.

Koetsier: Talking about self-driving cars … how close are we, in your opinion?

Olga Uskova: Our position is that autonomous transport in industrial use should rely only on systems with recognition accuracy very close to 100%. That accuracy is difficult to achieve, but the latest technologies allow it.

Cognitive Low Level Data Fusion is an approach that raises the accuracy of autonomous systems to 99.99%. It combines raw data from all of the vehicle’s sensors and processes it with a neural network.

This technology will allow the car to drive better than a person does.

In a sense, a new era of autonomous vehicles began in August 2017, bringing the use of fully autonomous vehicles on the world’s roads much closer. Of course, beyond the technological there are serious legal, social, and moral limitations that require special development and the attention of all humanity, without division into national and state borders. This is as important an issue as the use of nuclear energy for peaceful purposes.

A complete transition to self-driving cars worldwide will therefore require at least 10-12 years to develop new traffic rules, moral restrictions, and legislative norms for mixed traffic flows. The United States has the most developed practice in this sphere, and the experience America gains by allowing driverless cars on public roads is undoubtedly important for AI developers around the world.

Koetsier: Thank you both for your time!

Why the best self-driving cars may not come from the well-kept freeways of California

Published on BIGTHINK.COM
Link to publication

The weather in most parts of Russia forces drivers to face harsh conditions—snow, mud, and poor visibility. It’s in this environment that Cognitive Technologies saw an opportunity.

Terrible traffic, prayer-inducing merges, road signs that are all but impossible to read, dangerous road conditions, and drivers who hazard sudden, scream-worthy maneuvers, all add to Moscow’s commuting woes. Sadly, this is what 98% of the world’s roads are like, and why one Russian company, Cognitive Technologies Group, may come out ahead in the race to birth the self-driving car.

Olga Uskova, president and founder of the group, is skeptical of Silicon Valley’s sunny projections about when autonomous vehicles will go mainstream. The reason, she told The Guardian, is that most places have too many variables to look out for. In Moscow, for instance, “The environment is ever-changing: the snow has covered traffic signs; it’s raining on your windshield, the sun is blocking you. Our people train using these kinds of data.” Note that the best-known autonomous prototypes, the Financial Times recently reported, have trouble navigating through snow. Uskova says her model doesn’t have that problem.

Cognitive Technologies began in 1993, founded by developers of the world’s first computer chess champion, Kaissa. The company has also sold software to the likes of Intel and Yandex. In 2014, it launched its autonomous vehicle program, Cognitive Pilot (C-Pilot), Russia’s first and largest player in the nascent autonomous vehicle market.

Their secret isn’t specialized software, like Tesla’s Autopilot, or hardware, like Mobileye’s patented microchip. They took a different approach: Uskova and her team taught an AI program the intricacies of driving in Moscow by exposing it to 100,000 dashcam videos and other footage collected by Moscow State University.

Using the footage, Uskova and her team built a neural network that, they say, allows their vehicle to better maneuver the mean streets of Moscow. By relying on run-of-the-mill computer hardware, their system is less expensive than competitors’ versions and easier to upgrade.

Cognitive Technologies hopes to put out a level-four autonomous vehicle by the end of 2019. That’s not all: the company has partnered with Russian truck maker Kamaz to develop a self-driving tractor-trailer by 2020, and Uskova and colleagues plan to have an autonomous combine-harvester farm ready by 2024.

And their car prototype? So far, they’ve rigged out a Nissan X-Trail with a C-Pilot system. It can recognize three dozen road signs with almost 100% accuracy, as well as stop, accelerate, and heed traffic lights. Now the company is setting up two US offices, reaching out to English-speaking media, and seeking additional funding. It also demoed C-Pilot at the latest Consumer Electronics Show (CES), held every January in Las Vegas. One snag: visa issues caused by heightened tensions between the US and Russia have made it difficult for Cognitive Technologies to gain a solid foothold in the US.

So how does their system work? I recently asked Uskova via email. First, high-resolution cameras, imaging radar, and a bevy of onboard sensors collect data, which is fed into one of four modules: the observer, which monitors the car’s surroundings; the geographer, which pinpoints the vehicle’s location; the navigator, which finds the quickest route; and the machinist, which handles the physical driving. All of this raw data is processed and blended together by a deep-learning neural network running on an energy-efficient onboard processor.
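
The four modules could be wired together roughly as below. All class and method names here are my own shorthand for the description above, not C-Pilot’s actual API, and each module body is a deliberately trivial placeholder:

```python
from dataclasses import dataclass

@dataclass
class RoadScene:
    obstacles: list   # from the observer module
    position: tuple   # from the geographer module

class Observer:
    def detect(self, sensor_frame):
        # a real observer fuses camera/radar data into obstacle tracks
        return sensor_frame.get("obstacles", [])

class Geographer:
    def locate(self, sensor_frame):
        return sensor_frame.get("gps", (0.0, 0.0))

class Navigator:
    def route(self, position, destination):
        # placeholder: a real planner would search a road graph
        return [position, destination]

class Machinist:
    def control(self, scene, route):
        # trivial policy: stop if anything is detected ahead
        return {"throttle": 0.0 if scene.obstacles else 0.5, "steer": 0.0}

def pipeline(sensor_frame, destination):
    scene = RoadScene(Observer().detect(sensor_frame),
                      Geographer().locate(sensor_frame))
    plan = Navigator().route(scene.position, destination)
    return Machinist().control(scene, plan)

cmd = pipeline({"obstacles": ["pedestrian"], "gps": (55.75, 37.62)},
               destination=(55.76, 37.63))
print(cmd)  # → {'throttle': 0.0, 'steer': 0.0}
```

The point of the decomposition is that perception, localization, planning, and control can be developed and tested independently, while the fused sensor data feeds all of them.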

Similar to a biological brain, it absorbs and processes the information, then decides how to proceed. Most self-driving cars use LIDAR (Light Detection and Ranging), which works much like radar but uses beams of infrared light instead of radio waves; in other words, it relies on invisible lasers to sense the environment. I asked what type of system C-Pilot uses.

“Our main sensors are radar and cameras, not LIDAR,” Uskova said. “We believe that radar is the future of autonomous driving, as it is the most appropriate sensor for this technology. Radar is significantly more reliable in bad weather (snow, rain, fog). Our radar constructs a dynamic 3D projection at a distance of 150-200 meters (492-656 ft.). When the weather gets worse, the range falls to about 100 m (328 ft.).” Radar is also more cost-effective.

According to Uskova, the autonomous vehicle market is just beginning to firm up, with major players staking out niches. Cognitive Technologies believes its advantage lies in sensor technology. “The human eye has a much higher resolution in its central part. When we try to zoom in and look closer at something, we use foveal vision. The same method is used in C-Pilot’s Virtual Tunnel tech. Its algorithm tracks all movements and focuses attention on the main risk zones,” she wrote.

Uskova also said:

We also believe that within the next 10 years, as processor capacities grow, the resolution of sensors will also increase significantly. Now the cameras for autonomous vehicles have a resolution of 2-5 megapixels, and the resolution of the human eye can be estimated at 100 megapixels. And for better detection of small objects and animals, the resolution of the onboard cameras should grow. Now, our system can recognize the average size animal at a distance of up to 30 meters (98 ft.).
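
Uskova’s point about resolution can be made concrete with back-of-the-envelope optics: the number of pixels an object spans falls off linearly with distance, so detecting small animals far away demands far more pixels. A sketch, assuming a 60-degree horizontal field of view (the lens figure is my assumption, not a C-Pilot specification):

```python
import math

def pixels_on_target(object_width_m, distance_m, hfov_deg, h_resolution_px):
    """Approximate horizontal pixel count covered by an object at distance."""
    # width of the scene visible at that distance for the given field of view
    scene_width = 2 * distance_m * math.tan(math.radians(hfov_deg / 2))
    return object_width_m / scene_width * h_resolution_px

# A 0.5 m-wide animal at 30 m, seen by sensors of increasing resolution:
for px in (1920, 4096, 12000):  # ~2 MP-class, ~4K, hypothetical high-res
    print(px, round(pixels_on_target(0.5, 30, 60, px), 1))
```

At 2-megapixel-class resolution the animal covers only a few dozen pixels, which is why growing sensor resolution translates directly into longer reliable detection ranges.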

I asked what makes her system different from those being developed by Uber, Waymo (Google), other Silicon Valley companies, and the big automakers, Ford in particular. To date, there are 27 companies working on autonomous vehicles. “At the moment, we are the best in the world in the field of road scene perception and detection,” she said. “We have 19 unique patents and inventions. 22 million dollars have been invested in the product and we have real industrial practice in the most severe weather conditions.”

Autonomous Car Testing in Moscow Is Hard & Necessary

Link to publication

What is the worst confluence of driving conditions you can possibly imagine?

It’s probably nighttime in a densely populated city. There would be a vision-obscuring blizzard making the roads slick and unreliable. Traffic would be thick and moving irregularly. Worse, the drivers around you would have seemingly little regard for your safety, or even their own. Even the road signs would be difficult to see.

Welcome to life for drivers in Moscow.

Poorly laid out roads, bad weather and not-so-cautious driving practices make for dangerous automobile treks in the Russian capital. Moscow streets are home to 20 serious car accidents per day, and the road fatality rate in Russia is double that of the United States.

Not surprisingly, that makes it a difficult landscape for autonomous vehicles to navigate. But the conditions in Russia are not all that different from other countries in the world, which is why autonomous developers believe that, with apologies to New York, “if they can make it there, they can make it anywhere.”

At a three-day hackathon in Moscow, engineering students from around the globe and corporate sponsors like Nvidia and Uber gathered to take a crack at developing autonomous systems for Moscow’s roads.

“The event had another purpose: to advance a credo that when it comes to autonomous cars, tougher conditions produce smarter technology,” writes Gaus. “Lidar — the expensive, light-pulsing sensors relied upon by current autonomous car models — is worthless in snow … Instead, cars should be trained to operate using high-definition cameras, low-cost radars and powerful AI that mimics the human brain.”

The idea that Lidar is “worthless in snow,” advanced by Olga Uskova of Russian AV software developer Cognitive Technologies, may be an extreme position; nearly all driverless car manufacturers incorporate Lidar sensors in some form.

Nevertheless, Lidar does not work as effectively in the snow, and developers have relied more heavily on other hardware to navigate in adverse conditions. An autonomous vehicle in Finland primarily used radar sensors to complete a journey through a wintry mix.

As for the hackathon engineers in Moscow, cobbling together an autonomous driving system over three sleepless days proved difficult. The top team managed only 40% accuracy in identifying road signs. The expected culprits were to blame: snow-covered road signs were hard for the systems to detect, and non-Russian speakers had an even harder time differentiating between similar-looking signs.

While success proved elusive at the hackathon, in some ways, that was beside the point.

The fact that driverless car developers are moving beyond cars that work in the idealized sunny climates of Arizona and California, and shifting to the more challenging task of creating vehicles that work in realistic scenarios, is an important sign of progress.

Many expect that self-driving cars will eventually operate 24/7, but they will only be able to do that if they can handle the weather and unique road conditions that present themselves at every moment of the day all over the world.

Published on INVERSE.COM
Link to publication

Cognitive Technologies has a big plan.

Russia’s first and largest autonomous car project is about to reach the United States. At the Consumer Electronics Show in Las Vegas next week, Cognitive Technologies will demonstrate a self-driving car system that’s primed for the world’s roughest roads and runs on standard computer hardware.

As the company’s president tells Inverse, it’s the culmination of a journey sparked by the success of a chess computer, fueled by a desire to reduce road deaths, and hampered by American visa issues.

The company has an impressive resumé. Founded in 1993 by the creators of the world’s first computer chess champion, Cognitive Technologies has worked on technologies including image and voice recognition, and has partnered in the past with big names like Intel and Yandex. In 2014, the Cognitive Pilot program was set up to apply the company’s talents to autonomous driving.

“The whole project was built to design the autopilot for real roads, Russian roads,” Roman Tarasov, the company’s VP for global business, told Inverse in March 2017. “Most of the roads on the planet are like this. So lack of light, snow, fog, bad road marks.”

Watch the company test its autonomous driving system on the roads in November 2017.

As Waymo, Tesla and Uber battle it out in Silicon Valley, Cognitive has quietly moved from strength to strength. It’s hosted a hackathon with students from MIT, Cambridge and Peking University. It’s developed assistive driving technologies for Russian trucking giant Kamaz, with a view to developing a fully autonomous truck by 2020. The team has moved some of its operations to Amsterdam, while research continues in Moscow. CES is the next big step.

Inverse spoke with Olga Uskova, president and founder of Cognitive Technologies and developer of Cognitive Pilot, to find out more about the big moment.

How are you feeling about the big CES moment for your company? Nervous? Excited?

It’s a first run, a premiere for us: Cognitive Technologies’ first time at CES. Of course we are nervous. We will present a number of completely new technologies that we have never demonstrated at any other event, and naturally we worry about the reaction of the specialists who come to CES from all over the world.

How many people are going from the company, who is representing Cognitive?

Unfortunately, the visa issues that arose recently between our countries made it impossible for half of the announced team members to come to the U.S., but key specialists will be at CES. For example, anyone will be able to chat with the company’s CTO at the booth.

We believe that CES is the quintessential showcase of consumer wishes and desires for the coming year. New trends, the latest releases, brand wars, newcomers and outsiders: all will be there. A lot becomes clear once you study the list of participants and their contribution to the world economy.

What message do you want to give to people?

We can and must save the millions of people who die on the roads around the world. Official statistics show more than 1.3 million deaths per year. Our technologies can already reduce this horrible figure by 44 percent, so several hundred thousand people will stay alive if these technologies are promptly implemented.

My academic supervisor died in a terrible car accident in 1993. After my own car accident in 1997, I underwent seven operations on my face. Almost every family in the world has a story on this terrible topic. It seems to me there will be nothing more important at CES than our booth.

Do you expect a lot of skepticism from people more familiar with big names like Waymo?

No, I don’t. New markets in the automotive sphere are still forming. Only fools wrinkle their noses at new names; in reality, everyone is looking for fresh solutions and breakthrough technologies. The markets for neural systems, vision systems, and autonomous cars are still very young and exclusive, and they require serious scientific and financial investment. That is why serious experts pay increased attention to companies like ours. For us it is a very big responsibility.

Have you had any interest from CES attendees already?

Of course we’ve been preparing for this exhibition, and we have arranged a number of meetings in advance with partners we are interested in. We are looking for innovations in microprocessors and video cameras, and also in the Internet of Things and connected cars. We are very interested in joint work, because that is the only way to offer an ideal product to the modern customer, the user of an autonomous vehicle.

Will you be demonstrating the tech at CES? How is the booth set up?

Our mission is to show real technologies that can work in real time on real roads. We are bringing from Moscow technologies that are not afraid of snow, mud, and impassable roads. Google is doing fine on the dry and sunny roads of California; Cognitive Technologies will present at CES the technology for the remaining 98 percent of the world’s roads. At our booth you will see a live demonstration of the technology working on snowy roads, in a storm and a thunderstorm, with interrupted or snow-hidden lane markings. A special show will be organized for farmers. We will also present separate recognition of small car details: headlights, mirrors, license plates, and so on.

We will also present our global trend for 2018, the concept of Low Level Data Fusion. This is our technology for combining data from several subsystems and sensors: neural networks, high-resolution imaging radars, and cameras.

This technology allows the computer vision model to make efficient use of all the combined data coming from the various sensors to the computing unit. Information from each sensor is synchronized and reduced to a single coordinate system. The raw data goes to the computer, where it is processed, and the material from cameras and radars then mutually enriches each other.

This integration of data from different devices makes it possible to fill in missing information for a better understanding of the current road scene. Cameras, for example, correctly recognize objects in 80 percent of cases; additional data from radar raises detection accuracy to 99 percent and higher.

Combining all the data makes it possible to merge information about an object’s speed, type, distance, location, and physical characteristics. Implementing this fusion technology alone will reduce the accident rate of autonomous vehicles by up to 25 percent.
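
The jump from 80 to 99 percent is consistent with treating camera and radar as independent detectors: an object is missed only when both sensors miss it. Real low-level fusion combines raw measurements before detection, so this decision-level arithmetic is only an intuition for why fusion helps (the radar rate below is my assumption, chosen to match the quoted figures):

```python
def fused_detection_rate(p_camera, p_radar):
    """Probability that at least one independent sensor detects the object."""
    return 1 - (1 - p_camera) * (1 - p_radar)

# Camera alone: 80%. Adding a radar that detects 95% of objects:
print(fused_detection_rate(0.80, 0.95))  # → 0.99
```

Even a modest second sensor removes most of the first sensor’s misses, which is why multi-sensor setups dominate in bad-weather driving.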

What do you hope to see at CES from others, are you looking for any opportunities?

At CES we look forward to seeing compact, high-performance information processing solutions that could be placed on vehicles. We also want to see new car camera solutions that will help us improve the quality of our system on the roads. Basically, we are interested in any systems that ensure a comfortable ride for passenger and driver during autonomous driving in a traffic jam. And we hope to see something completely new, something we couldn’t even have imagined had already been created. From CES you always expect a miracle!