Windows 10 News and info | Forum
Topic: Thought-detection: AI has infiltrated our last bastion of privacy
javajolt
Posted: February 14, 2021, 09:44:08 PM
Our thoughts are private – or at least they were. New breakthroughs in neuroscience and artificial intelligence are changing that assumption, while at the same time inviting new questions around ethics, privacy, and the horizons of brain/computer interaction.

Research published last week from Queen Mary University of London describes a deep neural network that can determine a person’s emotional state by analyzing wireless signals used like radar. Participants in the study watched a video while radio signals were transmitted toward them and measured as they reflected back. Analysis of the resulting body-movement data revealed “hidden” information about an individual’s heart and breathing rates. From these signals, the algorithm classifies one of four basic emotion types: anger, sadness, joy, and pleasure. The researchers propose this work could help with the management of health and wellbeing, for example by detecting depressive states.
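To make the classification step concrete, here is a toy sketch of the final stage of such a pipeline: mapping recovered heart-rate and breathing-rate features to one of the four emotion labels. This is not the Queen Mary team's model (they used a deep neural network on raw signal data); it is a stand-in nearest-centroid classifier, and every number below is invented for illustration.

```python
# Hypothetical (heart bpm, breaths/min) centroids per emotion label.
# These values are invented; a real system learns them from data.
CENTROIDS = {
    "anger":    (95.0, 20.0),
    "sadness":  (65.0, 12.0),
    "joy":      (85.0, 16.0),
    "pleasure": (72.0, 14.0),
}

def classify_emotion(heart_bpm: float, breaths_per_min: float) -> str:
    """Nearest-centroid stand-in for the paper's deep neural network:
    return the emotion whose centroid is closest to the input features."""
    def sq_dist(label: str) -> float:
        h, b = CENTROIDS[label]
        return (heart_bpm - h) ** 2 + (breaths_per_min - b) ** 2
    return min(CENTROIDS, key=sq_dist)

print(classify_emotion(96, 21))  # closest to the "anger" centroid
```

The real system's advantage over a hand-built rule like this is that the network learns which subtle signal features correlate with each state, rather than relying on two coarse vital-sign numbers.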

Ahsan Noor Khan, a Ph.D. student and first author of the study, said: “We’re now looking to investigate how we could use low-cost existing systems, such as Wi-Fi routers, to detect emotions of a large number of people gathered, for instance in an office or work environment.” Among other things, this could be useful for HR departments to assess how new policies introduced in a meeting are being received, regardless of what the recipients might say. Outside of an office, police could use this technology to look for emotional changes in a crowd that might lead to violence.

The research team plans to examine the public acceptance and ethical concerns around the use of this technology. Such concerns are unsurprising and conjure up the very Orwellian idea of the ‘thought police’ from 1984. In the novel, the thought police are experts at reading people’s faces to ferret out beliefs unsanctioned by the state, though they never master learning exactly what a person is thinking.


Above: Black Mirror, “Crocodile”
This is not the only thought-reading technology on the horizon with dystopian potential. “Crocodile,” an episode of Netflix’s series Black Mirror, portrays a memory-reading technique used to investigate accidents for insurance purposes. The “corroborator” device uses a square node placed on a victim’s temple, then displays their memories of an event on a screen. The investigator explains that the memories “may not be totally accurate, and they’re often emotional. But by collecting a range of recollections from yourself and any witnesses, we can help build a corroborative picture.”

If this seems farfetched, consider that researchers at Kyoto University in Japan developed a method to “see” inside people’s minds using an fMRI scanner, which detects changes in blood flow in the brain. Using a neural network, they correlated these changes with images shown to the individuals and projected the reconstructions onto a screen. Though far from polished, the results were essentially a picture of what the participants were thinking about. One prediction estimates this technology could be in use by the 2040s.
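The core of such decoding work is a learned mapping from brain-activity measurements to image features. As a minimal sketch of that idea only (not Kyoto's actual model, which used deep networks and far larger data), here is a linear decoder that maps a toy voxel-activity vector to pixel intensities; the weights and data are invented for illustration.

```python
def decode_image(voxels, weights):
    """Reconstruct a flat pixel vector as a learned linear map:
    each output pixel is a weighted sum of voxel activities."""
    return [sum(w * v for w, v in zip(row, voxels)) for row in weights]

# Toy example: 3 voxels -> 2 "pixels", with made-up learned weights.
weights = [[1.0, 0.0, 0.0],
           [0.0, 1.0, 0.0]]
voxels = [0.8, 0.2, 0.5]

print(decode_image(voxels, weights))  # [0.8, 0.2]
```

In practice the mapping is trained on thousands of (image, fMRI response) pairs, and a neural network replaces the linear map, but the input-to-output shape of the problem is the same.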

Brain-computer interfaces (BCIs) are making steady progress on several fronts. In 2016, research at Arizona State University showed a student wearing what looked like a swim cap containing nearly 130 sensors connected to a computer to detect the student’s brain waves.


An Arizona State University Ph.D. student demonstrates mind-controlled drone flight in 2016.
The student is controlling the flight of three drones with his mind. The device lets him move the drones simply by thinking directional commands: up, down, left, right.
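A common way such systems issue a discrete command is to run a classifier over the latest window of brain-wave data and pick whichever directional command scores highest. The sketch below shows only that final selection step, with invented scores; it is not the ASU system, whose sensor processing and classifier are far more involved.

```python
# Candidate directional commands, as described in the article.
COMMANDS = ["up", "down", "left", "right"]

def pick_command(scores):
    """Given per-command classifier scores (e.g. softmax outputs over
    the latest EEG window), return the highest-scoring command."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return COMMANDS[best]

# Invented classifier output: the "left" class dominates this window.
print(pick_command([0.10, 0.05, 0.70, 0.15]))  # left
```

A real pipeline would also debounce the output (e.g. require the same command to win several windows in a row) so that noisy readings don't jerk the drone around.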


Flying drones with your brain in 2019. Source: University of South Florida
Advance a few years to 2019 and the headgear is far more streamlined. Now there are brain-drone races.

Besides the flight examples, BCIs are being developed for medical applications. MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud.

A wearable device with electrodes picks up neuromuscular signals in the jaw and face that are triggered by internal verbalizations, also referred to as subvocalizations. The signals are fed to a neural network trained to correlate them with particular words. The idea behind this development is to meld humans and machines “such that computing, the internet, and AI would weave into human personality as a ‘second self.’” Those who cannot speak could use the technology to communicate, as the subvocalizations could be connected to a synthesizer that speaks the words.
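The correlation step described above can be pictured as matching an electrode feature vector against learned per-word signatures. As a hedged illustration only (MIT's system uses a trained neural network, not a lookup), here is a nearest-neighbour toy with invented templates and features:

```python
# Invented per-word signal templates; a real system learns an internal
# representation from many recorded subvocalization examples.
TEMPLATES = {
    "yes":  [0.9, 0.1, 0.4],
    "no":   [0.2, 0.8, 0.3],
    "stop": [0.5, 0.5, 0.9],
}

def decode_word(features):
    """Return the vocabulary word whose template is closest (squared
    Euclidean distance) to the measured electrode features."""
    def sq_dist(word):
        return sum((f - t) ** 2 for f, t in zip(features, TEMPLATES[word]))
    return min(TEMPLATES, key=sq_dist)

print(decode_word([0.85, 0.15, 0.35]))  # yes
```

The decoded word could then be passed to a speech synthesizer, which is the communication path the article describes for users who cannot speak aloud.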

Chip implants could be coming soon


Interfacing with devices through silent speech. Source: MIT Media Lab
The ultimate BCI could be the one proposed by Neuralink, the company founded by Elon Musk. Unlike the previous examples, Neuralink promises direct implants into the brain. The near-term goal of Neuralink and others is to build a BCI that can cure a wide variety of diseases. Longer-term, Musk has a grander vision: he believes such an interface will be necessary for humans to keep pace with increasingly powerful AI. Just last week, Musk announced that human trials of the implants could begin later this year. He claims the company already has a monkey with “a wireless implant in [his] skull with tiny wires who can play video games with his mind.”

The advancements being made in BCI are beginning to match what science fiction authors have dreamed up. In The Resisters, a new novel by Gish Jen, a “RegiChip” is implanted at birth into everyone deemed “Surplus,” meaning there will be no work for them in the aftermath of mass automation. Instead, they are issued a universal basic income and have no responsibility but to consume, keeping the automated economy operating at an efficient level. Among other things, the RegiChip tracks not only everyone’s physical location but also their activities, completing a surveillance society. Of course, the RegiChip, like all digital technologies, has the potential to be hacked.

Cognitive scientists have said that the mind is the software of the brain. Increasingly, actual software has the capacity to meld with and augment the human mind. If today’s AI-enabled BCI achievements already seem unbelievable, it stands to reason that BCI breakthroughs in the not-too-distant future could be truly momentous. Will the technology be harnessed for positive uses, such as curing disease, or for mind control? As with most technology, there will likely be both good and bad. Software is poised to eat the mind. For now, our unexpressed thoughts remain private, but that may not be true much longer.

source