Author Topic: Microsoft won’t let their AI be used for war bots

Offline javajolt

Microsoft won’t let their AI be used for war bots
« on: April 11, 2018, 02:27:41 PM »
Concerns over the potential abuse of artificial intelligence technology have led Microsoft to cut off some of its customers, says Eric Horvitz, a technical fellow and director at Microsoft Research Labs.

Horvitz laid out Microsoft’s commitment to AI ethics today at the Carnegie Mellon University – K&L Gates Conference on Ethics and AI, presented in Pittsburgh.

One of the key groups focusing on the issue at Microsoft is the Aether Committee, where “Aether” stands for AI and Ethics in Engineering and Research.

“It’s been an intensive effort … and I’m happy to say that this committee has teeth,” Horvitz said during his lecture.

He said the committee reviews how Microsoft’s AI technology could be used by its customers and makes recommendations that go all the way up to senior leadership.

“Significant sales have been cut off,” Horvitz said. “And in other sales, various specific limitations were written down in terms of usage, including ‘may not use data-driven pattern recognition for use in face recognition or predictions of this type.’ ”

Horvitz didn’t go into detail about which customers or specific applications have been ruled out as the result of the Aether Committee’s work, although he referred to Microsoft’s human rights commitments.

Over the past year or so, the company has been providing government and industry customers with a cloud-based suite of Microsoft Cognitive Services, including face recognition and emotion recognition.
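For a sense of what those services look like to a customer, here is a minimal sketch of a call to the Cognitive Services Face API, which returns detected faces along with emotion attributes. The endpoint region, subscription key and image URL are placeholders, not details from the article.

Code:
# Minimal sketch of calling the Cognitive Services Face API (v1.0) to detect
# faces and emotion attributes in an image. The region, subscription key and
# image URL below are placeholders.
import requests

SUBSCRIPTION_KEY = "YOUR_SUBSCRIPTION_KEY"               # placeholder
ENDPOINT = "https://westus.api.cognitive.microsoft.com"  # assumed region

def detect_faces(image_url):
    """Return detected faces with emotion scores for the given image URL."""
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        params={"returnFaceAttributes": "emotion"},
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for face in detect_faces("https://example.com/photo.jpg"):
        print(face["faceId"], face["faceAttributes"]["emotion"])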

In response to an inquiry, Microsoft emailed GeekWire a statement about the issue:

Quote
“Microsoft believes it is very important to develop and deploy AI in a responsible, trusted and ethical manner. Microsoft created the Aether Committee to identify, study and recommend policies, procedures, and best practices on questions, challenges, and opportunities coming to the fore on influences of AI on people and society.”

The company provided no further details to flesh out Horvitz’s comments.



Ethical issues surrounding AI and large-scale data analysis have gained much more attention in the wake of reported lapses in data privacy safeguards by Facebook. That company’s CEO, Mark Zuckerberg, is scheduled to address the controversy this week during high-profile congressional hearings.

One of the big concerns has to do with how Cambridge Analytica took advantage of Facebook data to target voters during the 2016 presidential campaign. Horvitz listed voter manipulation as one of the potential misuses for AI applications — along with facilitating human rights violations, raising the risk of death or serious injury, or denying resources and services.

Addressing such concerns might require new regulatory schemes. Horvitz said he could imagine a role for “an Underwriters Laboratories or an FDA … somebody looking at this as best practice.”

There might even be ways to program AI agents to police themselves. “You can imagine giving systems the ability to monitor their performance,” Horvitz said.

Such a computerized sentry might have helped Microsoft avoid the problems it had with Tay, a millennial-modeled AI agent that online pranksters taught to spew racist comments back in 2016. “It’s a great example of things going awry,” Horvitz acknowledged.
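As a rough sketch of that idea (offered purely as an illustration, not as anything Microsoft has described), a conversational agent could be wrapped in a monitor that scores each candidate reply and withholds anything that fails a safety check. The looks_safe function here is a hypothetical stand-in for a real content classifier.

Code:
def looks_safe(text, banned_terms=("slur_1", "slur_2")):
    """Toy safety check; a real monitor would use a trained classifier."""
    lowered = text.lower()
    return not any(term in lowered for term in banned_terms)

class MonitoredAgent:
    """Wraps a reply-generating function with a simple output monitor."""

    def __init__(self, generate_reply):
        self.generate_reply = generate_reply   # underlying chatbot function
        self.rejected = 0                      # how often the monitor fired

    def reply(self, prompt):
        candidate = self.generate_reply(prompt)
        if looks_safe(candidate):
            return candidate
        self.rejected += 1
        return "I'd rather not respond to that."

# Usage: wrap any reply function, e.g. a trivial echo bot.
agent = MonitoredAgent(lambda prompt: f"You said: {prompt}")
print(agent.reply("hello"))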

The most effective, and potentially the least problematic, settings for AI agents may well be as backstops for human decision-making.



Horvitz pointed to a Microsoft AI program that identifies patients with a higher risk of being readmitted within 30 days of their release from the hospital. That could help caregivers target vulnerable patients for post-discharge interventions. A scholarly assessment of the program determined that it could reduce rehospitalizations by 18 percent and cut a hospital’s costs by nearly 4 percent.
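The article doesn’t say how Microsoft’s model works, but the general shape of such a predictor is a classifier that turns a patient’s discharge record into a 30-day readmission risk score. The sketch below uses made-up features and scikit-learn’s logistic regression purely for illustration; it is not Microsoft’s program.

Code:
# Illustrative readmission-risk sketch with toy data, not Microsoft's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features per patient: age, length of stay (days), prior admissions,
# number of chronic conditions. Labels: 1 = readmitted within 30 days.
X = np.array([
    [72, 9, 3, 4],
    [45, 2, 0, 1],
    [81, 12, 5, 6],
    [33, 1, 0, 0],
    [67, 7, 2, 3],
    [58, 4, 1, 2],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Flag patients whose predicted risk exceeds a threshold so caregivers can
# target them for post-discharge follow-up.
new_patients = np.array([[76, 10, 4, 5], [29, 1, 0, 0]])
risk = model.predict_proba(new_patients)[:, 1]
for patient, score in zip(new_patients, risk):
    print(patient, round(float(score), 2), "follow up" if score > 0.5 else "routine")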

The ideal is for humans and machines to work together. “We’re talking about AI designs for complementarity,” Horvitz said.

For example, AI agents could anticipate the sorts of issues that flesh-and-blood experts typically miss. “It’s thinking about what the human doesn’t know,” Horvitz said.

But AI agents can have blind spots as well, as illustrated by recent high-profile fatalities involving self-driving vehicles with humans behind the wheel.

Horvitz said that in the years ahead, much more thought will have to be given to when it is appropriate to put an AI technology out into the field, as well as to procedures for rolling out phased trials, reporting the results of a trial, providing full disclosure and coming up with failsafe designs when necessary.

One of the lines of research being pursued at Microsoft has to do with anticipating the blind spots, or “unknown unknowns,” that an AI agent might face.

“How do you discover blind spots in a system that learned its whole life, how to live and do its thing in a simulator, [and is] now put in the real world?” Horvitz said. “Where should it be a little bit nervous or anxious?”
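One generic way to give a simulator-trained system that kind of nervousness (offered as an illustration, not as the approach Microsoft is researching) is a novelty monitor: measure how far a new observation sits from the states seen in training, and defer to a human or a cautious fallback policy when the distance is too large.

Code:
# Illustrative out-of-distribution check: flag states that look nothing like
# the simulator data the system was trained on. Thresholds and features are
# placeholders.
import numpy as np

class NoveltyMonitor:
    def __init__(self, training_states, threshold=3.0):
        self.mean = training_states.mean(axis=0)
        self.std = training_states.std(axis=0) + 1e-8
        self.threshold = threshold

    def novelty(self, state):
        """Largest per-feature z-score relative to the training data."""
        return float(np.max(np.abs((state - self.mean) / self.std)))

    def should_defer(self, state):
        """True when the state is far enough from training data to 'worry'."""
        return self.novelty(state) > self.threshold

# Usage: states observed in the simulator vs. a surprising real-world state.
sim_states = np.random.default_rng(0).normal(0.0, 1.0, size=(1000, 4))
monitor = NoveltyMonitor(sim_states)
print(monitor.should_defer(np.array([0.2, -0.5, 1.1, 0.3])))   # familiar -> False
print(monitor.should_defer(np.array([9.0, -8.5, 7.2, 11.0])))  # novel -> True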

Anxious AI? Machines with a code of ethics, or a sense of self-doubt? Sometimes it seems as if artificial intelligence is on a track toward becoming … all too human.

Update for 12:15 p.m. PT April 10: We’ve added Microsoft’s emailed statement about the Aether Committee. GeekWire’s Tom Krazit contributed to this report.

source
« Last Edit: April 11, 2018, 03:34:37 PM by javajolt »