Author Topic: Microsoft helps self-driving cars know their limitations
javajolt
« on: January 28, 2019, 11:18:21 AM »

Ignorance is bliss, and it is often the most ignorant who make the surest decisions, unencumbered by the knowledge that they could be wrong.

In many situations this is all well and good, but at the current stage of self-driving car development, having a Tesla confidently crash into a fire truck or a white van (both of which have happened) can be rather dangerous.

The issue is that self-driving cars are smart enough to drive, but not smart enough to recognize when they are entering a situation beyond their level of confidence and capability.

Microsoft Research has worked with MIT to help cars recognize exactly when a situation is ambiguous.

As MIT News notes, a single situation can receive many conflicting feedback signals, because the system perceives many distinct situations as identical. For example, an autonomous car may have cruised alongside a large vehicle many times without slowing down and pulling over. But in just one instance, an ambulance, which looks exactly the same to the system, cruises by. The autonomous car doesn't pull over and receives a feedback signal that it took an unacceptable action. Because such circumstances are rare, the car may learn to ignore them, even though they remain important despite being rare.

The new system, to which Microsoft contributed, recognizes these rare situations with conflicting training signals, and it can learn that even where the car performed acceptably, say, 90 percent of the time, the situation is still ambiguous enough to merit a "blind spot."
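
In code, that training-time idea might look roughly like the toy sketch below. To be clear, this is not the researchers' implementation: the BlindSpotTracker class, the discretized-state assumption, and the conflict_threshold knob are all made up for illustration. The point is that feedback is logged per perceived state, and a state whose feedback conflicts gets flagged as ambiguous even when the large majority of its signals were fine.

Code:
from collections import defaultdict

class BlindSpotTracker:
    """Toy illustration: flag perceived states with conflicting feedback."""

    def __init__(self, conflict_threshold=0.05):
        # Made-up knob: the minimum share of "unacceptable" feedback
        # needed before a state counts as a candidate blind spot.
        self.conflict_threshold = conflict_threshold
        self.feedback = defaultdict(lambda: {"acceptable": 0, "unacceptable": 0})

    def record(self, perceived_state, acceptable):
        # perceived_state must be hashable, e.g. a tuple of discretized
        # sensor features; "large vehicle alongside" looks identical to
        # the system whether it is a white van or an ambulance.
        key = "acceptable" if acceptable else "unacceptable"
        self.feedback[perceived_state][key] += 1

    def is_blind_spot(self, perceived_state):
        counts = self.feedback[perceived_state]
        total = counts["acceptable"] + counts["unacceptable"]
        if total == 0:
            return False
        # Even if ~90% of signals were fine, a non-trivial minority of
        # conflicting signals marks the state as ambiguous, not safe.
        return counts["unacceptable"] / total >= self.conflict_threshold

The design point is that the conflict itself is the signal: a naive learner would average the feedback away and keep ignoring the ambulance, while this flags the state for caution.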

“When the system is deployed into the real world, it can use this learned model to act more cautiously and intelligently. If the learned model predicts a state to be a blind spot with high probability, the system can query a human for the acceptable action, allowing for safer execution,” said Ramya Ramakrishnan, a graduate student in the Computer Science and Artificial Intelligence Laboratory.
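
At deployment time, the behavior Ramakrishnan describes could be sketched like this. Again, this is only illustrative: the predict_proba call, the query_human fallback, and the 0.8 threshold are assumptions, not details from the paper.

Code:
def choose_action(state, policy, blind_spot_model, query_human,
                  blind_spot_prob_threshold=0.8):
    """Return an action, deferring to a human in likely blind spots."""
    p_blind_spot = blind_spot_model.predict_proba(state)
    if p_blind_spot >= blind_spot_prob_threshold:
        # The learned model flags this state as a probable blind spot:
        # query the human for the acceptable action instead of acting alone.
        return query_human(state)
    # Otherwise act on the car's own learned policy as usual.
    return policy(state)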

Read much more detail at MIT News here.

source