Brain-computer interfaces are developing faster than the policy debate around them
A few days ago, Facebook disentangled itself from a nettlesome investigation by the Federal Trade Commission into how the company violated users’ privacy. And then, with that matter now squarely behind it, Facebook on Tuesday stepped forward to share some information about its effort to read our minds.
Two years after the company announced its mind-reading initiative, Facebook has an update to share. The company sponsored an experiment conducted by researchers at the University of California, San Francisco in which they built an interface for decoding spoken dialogue from brain signals. The results were published today in Nature Communications.
The work itself is fascinating, as you might expect from the subject matter. Brain-computer interfaces aren’t new, but the existing ones aren’t particularly efficient — especially the ones that don’t involve drilling into your skull. Facebook’s approach relies on high-density electrocorticography, aka ECoG, which places sensors directly on the brain and uses them to record brain activity.
And its most recent research apparently showed promise, Adi Robertson reports:
If participants heard someone ask “Which musical instrument do you like listening to,” for example, they’d respond with one of several options like “violin” or “drums” while their brain activity was recorded. The system would guess when they were asking a question and when they were answering it, then guess the content of both speech events. The predictions were shaped by prior context — so once the system determined which question subjects were hearing, it would narrow the set of likely answers. The system could produce results with 61 to 76 percent accuracy, compared with the 7 to 20 percent accuracy expected by chance.
“Here we show the value of decoding both sides of a conversation — both the questions someone hears and what they say in response,” said lead author and UCSF neurosurgery professor Edward Chang, in a statement. But Chang noted that this system only recognizes a very limited set of words so far; participants were only asked nine questions with 24 total answer options. The study’s subjects — who were being prepped for epilepsy surgery — used highly invasive implants. And they were speaking answers aloud, not simply thinking them.
If successful, the work will have important clinical applications — it could help patients who have lost the ability to speak communicate, for example. Facebook hopes the technology has a broader use — enabling what former Facebook crazy-project chief Regina Dugan once called a “brain click.” Allow people to click through dialog boxes with their minds, she told us in 2017, and you create lots of interesting new possibilities for augmented and virtual reality.
That goal remains very far away. But this seems like a good time to ask whether any of this work should, you know, be done in the first place. Antonio Regalado’s piece on the Facebook experiment gets at why: