Why Do Tech Companies Want to Read Our Minds? (Brain-computer interfaces, surgical implants and the end of privacy)

We are free to think whatever we want.

From a young age, we are told that no one else can know what we are thinking. Someone can study our actions, our facial expressions and our body language, but they can never know for sure what we are thinking. Our thoughts are private and secure, and shared only with those we trust.

This gives us various abilities: the capacity to lie, to hide secrets and to present the ‘best’ version of ourselves rather than revealing everything we think, all at once. It allows us to ponder, to ruminate and to come to conclusions after brainstorming. It allows us to have bad impulses but not act on them, or to think bad things but still be good people. It is key to games like poker. It is essential to human society.

Well, that is set to change.

Facebook has decided that big data is not enough.

The company is currently working on mind-reading technology, in an attempt to destroy the concept of privacy altogether. What’s worse, the technology is already being tested and in use today.

The device being tested can already pick up certain words (yes, no, hot, cold) and display them on a screen, say Facebook-funded researchers at the University of California.

The brain-computer interfaces (currently surgical implants) detect key words that someone is thinking at a given moment and convert those words into text.
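To make the decoding idea concrete, here is a minimal, purely illustrative sketch of how such a pipeline might work in principle: neural recordings are reduced to feature vectors, and a simple classifier maps each vector onto one word from a small fixed vocabulary. Nothing here reflects Facebook’s actual system; the vocabulary, the feature extraction and the nearest-centroid classifier are all invented assumptions.

```python
import numpy as np

# Hypothetical fixed vocabulary, mirroring the small word set described above.
VOCABULARY = ["yes", "no", "hot", "cold"]

def extract_features(raw_signal: np.ndarray) -> np.ndarray:
    """Reduce a raw multi-channel recording to a fixed-length feature vector.

    Here we simply average signal power per channel; real systems use far
    richer spectral and temporal features. Shape: (channels, samples) -> (channels,).
    """
    return (raw_signal ** 2).mean(axis=1)

class NearestCentroidDecoder:
    """Toy decoder: assign each feature vector to the closest class centroid."""

    def fit(self, features: np.ndarray, labels: list[str]) -> None:
        self.centroids = {
            word: features[[i for i, w in enumerate(labels) if w == word]].mean(axis=0)
            for word in set(labels)
        }

    def predict(self, feature_vec: np.ndarray) -> str:
        return min(self.centroids,
                   key=lambda w: np.linalg.norm(feature_vec - self.centroids[w]))

# Simulated training data: 20 labelled trials of fake 8-channel recordings.
rng = np.random.default_rng(0)
trials = rng.normal(size=(20, 8, 100))
labels = [VOCABULARY[i % len(VOCABULARY)] for i in range(20)]

decoder = NearestCentroidDecoder()
decoder.fit(np.array([extract_features(t) for t in trials]), labels)

# Decode a new (simulated) recording into a word of text.
new_trial = rng.normal(size=(8, 100))
print(decoder.predict(extract_features(new_trial)))
```

Real decoders are vastly more sophisticated, but the basic shape, signal in, word out, is the same.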

The technology is being tested on patients with paralysis to allow them to “speak” through a machine. It is hoped that this will give them the ability to communicate and to operate certain basic utilities (doors, fridges and so on).

But the target audience is much, much larger.

If it has its way, Facebook will create a device that anyone can wear, without any surgical implant. The device could be as simple as a headset that users clip onto their heads. It will be able to read someone’s mind and pick up words and commands that the wearer thinks, allowing them to operate certain software or hardware accordingly.

It sounds like science fiction.

Although the current list of recognizable commands is small (mainly one-syllable words), it is set to grow over time. Eventually, the aim is to ‘mind read’ the entire English language. In other words, the machine will eventually be able to read the mind of anyone wearing a corresponding headset, whatever they may be thinking, and transmit that information directly to Facebook.

Rival companies, including Elon Musk’s Neuralink, Kernel and Paradromics, are also attempting to create mind-reading technology.

Whoever does so first will gain not only a new technology, billions of dollars and fame but also the capacity to reshape the very nature of human society.

What are the ethical implications?

There are numerous ethical implications of mind-reading technology.

Here are a few that come to mind.

1. The Loss of Privacy:

In the 1998 classic film The Truman Show, the protagonist, Truman Burbank, lives inside a reality TV show in which all of his actions are recorded and everyone around him is an actor. He does not realize this until the end of the film (spoilers, sorry).

At the end of the film, Truman talks to the creator of the reality show of which he is a star.

“I know you better than you know yourself,” Christof, the show’s creator, says.
“You never had a camera in my head,” Truman replies.

The implication is simple: that no matter how much control we might exert over another person, they always have the freedom to think whatever they want to think.

In that way, even someone in slavery is ultimately free. Even someone living in a fake reality filled with actors still has the freedom to think what they want to think.

Philosophers from Ayn Rand to Sartre have contemplated the idea that our free will persists even in the direst of circumstances, because we retain the ability and freedom to think about the situation at hand. Even in the worst prison, for example, we can imagine a paradise.

In the age of ‘thought capitalism’, as we might call it, this kind of privacy will cease to exist.

The worst totalitarian regimes will be able not only to control our physical state but also to gain insight into, and ultimately control over, our thoughts. With the capacity to read minds becoming common technology, the freedom to think what we want will be at risk.

It is the exact kind of future Mark Zuckerberg would love. In the end, we will have to “share” everything about ourselves, all of the time. Nothing will be secret. Nothing will be private.

2. The Loss of Freedom of Thought:

One of the biggest risks of this new technology, aside from privacy, is the risk that we will lose our capacity for independent thought.

The risk is threefold. Firstly, we may come to think like everyone else because of a new kind of ‘thought peer pressure’. Secondly, we may come to think the way Facebook or another company wants us to think, simply in order to use their technology. And thirdly, certain thoughts might become regulated or monitored in some way. In the worst case, certain thoughts might become crimes.

Let’s start with the first point:

When we know what others are thinking, that may change how we ourselves think, leading to a homogenization of human thought.

Consider it this way: when social media first arose, a trend emerged. The more we learned of each other’s behaviour, the more we tended to mirror each other’s behaviour. Facebook is a carnival of similar shared experiences: weddings, new jobs, parties and so on. The similarity comes from the fact that we know what other people are posting, and so we follow suit. A peer pressure exists on social media platforms that leads to homogenization.

Imagine the same dynamic, but with thought. If we know what other users are thinking, will that change what we think on a daily basis? Will platforms implore us to “share what we are thinking”, as Facebook currently does, but with a more direct imperative? Will trendy thoughts emerge? Will corporate thoughts emerge?

How do we hashtag thoughts?

Let’s take this one step further. If companies have the capacity to read our thoughts, will they demand that we think certain things to use their apps? Perhaps there will be certain login ‘phrases’ that users will have to think, to trigger positive associations with the apps that they are using. How much control will we have over our thinking when using these platforms? The questions in this area are endless, with few, if any, easy answers.

3. The Psychological Manipulation of Users, Elections and Democracy:

Tech giants are already practising or facilitating the psychological manipulation of users, whether directly or indirectly (see the Cambridge Analytica saga), alongside the rigging of elections, the spread of misinformation and the fostering of addiction to apps, which I have discussed elsewhere.

‘Brain data,’ as we might call it, will make these trends even worse.

With the capacity to track what users are thinking, a company could begin to psychologically manipulate those users.

This manipulation could occur through advertising (literally showing users what they want to see, in order to inspire a purchase). It could also occur through rigging an election: by responding to users’ thoughts, a platform could allay concerns about a candidate or trigger a certain thought process or vote.

By combining ‘brain data’ and big data, companies could chronicle what we think when we look at certain content, and change the content accordingly.

Algorithms could be used to create the most addictive content ever known to man. This would be a super-charged version of what is currently occurring in the video gaming industry.

The power of big data combined with thought data would allow companies to predict, determine and then change our responses to products, ads and software.

This would all be done without our knowledge, at the back end of programs through algorithms and code.

Currently, app companies use big data to determine when someone logs off or disengages. With thought data, app companies will be able to determine when users are not thinking about the product in front of them.
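As a purely speculative illustration of what such ‘attention tracking’ might look like, here is a toy sketch that scans a stream of decoded thought words for product-related terms and flags disengagement when attention drops below a threshold. Every name, term and number here is an invented assumption, not a description of any real system.

```python
from collections import deque

# Hypothetical terms a shopping app might associate with "on-task" thinking.
PRODUCT_TERMS = {"buy", "cart", "price", "shoes", "checkout"}

def attention_score(recent_thoughts: deque) -> float:
    """Fraction of recently decoded words that relate to the product."""
    if not recent_thoughts:
        return 0.0
    on_task = sum(1 for word in recent_thoughts if word in PRODUCT_TERMS)
    return on_task / len(recent_thoughts)

def monitor(thought_stream, window: int = 10, threshold: float = 0.3):
    """Yield (word, disengaged) for each decoded word in the stream."""
    recent = deque(maxlen=window)
    for word in thought_stream:
        recent.append(word)
        yield word, attention_score(recent) < threshold

# Simulated stream of decoded thoughts drifting from shopping to dinner.
stream = ["shoes", "price", "cart", "dinner", "pasta", "recipe", "hungry",
          "dinner", "pasta", "wine", "tired"]

for word, disengaged in monitor(stream):
    if disengaged:
        print(f"user drifted off-task around the thought: {word!r}")
        break
```

Crude as it is, the sketch shows how little machinery the business logic would need once the decoded words themselves are available.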

This gives new meaning to the “attention economy”. It will become a race to the bottom to get users to think about products as frequently as possible while using them, without letting their minds wander to dinner or romance or other, irrelevant topics.

The risk here is profound. What if we lose our entire ability to daydream? What if apps start automatically logging out when we stop thinking about them?

Companies could certainly begin to train us to think what they want us to think in these circumstances, raising huge questions about free will, autonomy and privacy rights.

What can we do?

It is time for governments the world over to create laws, regulations and protections governing brain-computer interface technologies, in order to safeguard us against risks to privacy, security and autonomy.

Laws should be aggressive in asserting the rights of users against company interference. Specific attention should be paid to privacy and to what we might call freedom of thought.

New laws should also safeguard users from unjust contractual provisions. Companies should not be able to get users to give away their ‘thought rights’ using complicated contractual agreements.

Instead, we should empower consumers with legal protections and rights and empower government agencies with oversight legislation.

The risks posed by this technology are so severe that a practical debate must be had as to how it can be controlled effectively and whether such control is feasible.