Mind Reading Fever Flares Up Again
Every now and again, we have a little flare-up of mind-reading fever. And when that happens, I break out my picture of this tin-foil-hat-wearing cat.
The latest flare-up comes courtesy of Paul Root Wolpe over at Forbes.com, in a column titled "Is My Mind Mine?" Yes, Paul, it is.
Here’s the argument:
Neuroscience has, for the first time, demonstrated that there may be ways to directly access human thought — even, perhaps, without the thinker’s consent. While the research is still preliminary, the science is advancing at an astonishing rate. While many obstacles need to be overcome and the technology is not yet practicable, the implications for our current state of knowledge are profound.
While our abilities in these areas are still quite limited, and while there is always the possibility that the technology will never progress to the point where it can extract truly useful information from anyone, the time to think about the implications of this endeavor is now, before the technology is upon us. The appeal of the technology to the state is obvious. So we need to ask ourselves: What are the limits of the use of this technology? Should we ever allow the courts, or the state, to demand access to the recesses of our minds?
My answer is no. The skull should be designated as a domain of absolute privacy. No one should be able to probe an individual’s mind against their will. We should not permit it with a court order. We should not permit it for military or national security. We should forgo the use of the technology under coercive circumstances even though using it may serve the public good.
If by “for the first time” you mean “for years now,” then, yes, neuroscience has shown us we can look inside the brain “for the first time.” In fact, the stuff we’re starting to be able to do is rather amazing. There’s work on fMRI lie detection, which is — for now — scientifically interesting if practically useless. Scientists in Japan have recently used fMRI to crudely reconstruct an image a subject views in a brain scanner. It’s also possible to figure out which image a person is looking at, given a limited set of images. And researchers at Carnegie Mellon University are making rather amazing progress in telling what word (for instance, “knife” versus “barn”) a person is thinking of while in a brain scanner (link to Science here; PDF of study here; 13-minute CBS News segment embedded here).
Now, all of this is very rudimentary. But if anything humans will ever be able to do could be called “mind reading,” this is the beginning of it. So, should we be worried?
First of all, we should realize that any significant real-world application of these types of technologies is decades away. fMRI lie detection, for all the hype, is simply nowhere near ready to be held to the standards that would be necessary to admit it in court. What’s more, even if it did work, it would be virtually impossible to administer to an unwilling subject. (The one major case where someone tried to introduce fMRI lie detection evidence in California involved a defendant eager to use the technology to prove his innocence.) The reason is simple: fMRI lie detection involves sitting in an MRI machine for long periods of time — and lying still to allow for accurate readings.
Other mind-reading technologies would seem to be constrained in much the same way. The types of experiments we’re doing all require a tremendous amount of cooperation from the subject. And if a subject is lying about what he’s doing with his own mind while we’re measuring his brain activity (you tell him to picture the crime scene, or ask him if he recognizes a room you’re showing him in a photo — perhaps he’s concentrating with all his might on a mental image of a kitten sitting in a bowl of oatmeal), that could compromise the results pretty thoroughly.
And even assuming that something like fMRI lie detection worked and could be administered to an unwilling participant — what’s so much worse about that than, say, regular lie detection? We’ve already decided as a society that we’re okay with the idea of a lie detector (so comfortable, in fact, that we don’t care that the ones we already have don’t really work). Why would we be uncomfortable with a lie detector that simply utilized a different technology?
The bigger problem, it seems to me, is if these new technologies turn out to be as flawed as (or worse than) current technologies, yet we trust them anyway. People tend to trust anything that comes with a picture of a brain scan, regardless of its validity. Add that to the usual terrible job jurors do, and we’ve got a recipe for a new generation of faulty convictions.
However, to the extent these technologies work, I think the greatest question is constitutional. The Fifth Amendment protects against compelled self-incrimination — but what about a case where you put someone in a brain scanner, don’t require him to speak, but show him pictures or ask him questions while scanning his brain? He would not have been forced to speak. But his mind may have incriminated him.
And then there’s the question of the use of these technologies in intelligence gathering. Detainees would, of course, not be protected by the Fifth Amendment. Would mind-reading technology be better or worse than, say, the interrogation policies under the Bush regime? I can’t help but think it would be better both for intelligence and for detainees than the prospect of torture.