Telepathic AI, Neuronal Art and Squids, Everywhere!

Image: Pierre Huyghe, from The Guardian

Technology in the Gap

This is the first in a little series of posts about the real-life developments in technology that led to the bizarre extrapolated versions in my new book, Mind in the Gap. In this one, I share some of my research on the possibilities of creating art directly from the brain.

Squids, Everywhere

SQUID is a real device used to detect minute changes in electromagnetic fields, including those in the brain. It stands for Superconducting Quantum Interference Device, and it has appeared in science fiction for years, most memorably in William Gibson’s novels as a way to read out information stored in neural circuitry.

The current standard for brain imaging, Functional Magnetic Resonance Imaging (fMRI), has a significant limitation: it measures blood flow as a proxy for neural activity, so it is blind to detailed, direct neuronal activity. SQUID is one of the technologies being investigated as its eventual successor.

I used this idea in my story Frankie. I wanted to show such technology becoming so commonplace and safe that it was used in a socio-commercial setting rather than only in medicine. I created an alternate reality where people no longer carry around mobile devices to interact with the world, but instead wear headpieces that continually read and output brain signals (and actually look like squids!).

Telepathic AI

Scientists in a Kyoto laboratory have been working on a project that uses AI to analyse data collected during fMRI scans and interpret it into visual representations, built from a database of photographs, of what the individual was imagining at the time. They call it Deep Image Reconstruction. Artist Pierre Huyghe worked with this recently, asking volunteers to imagine things he described and then having the AI create a visual from their brain signals.

“If I tell you to think of an apple, the apple you think of will not be the same apple I think of,” he told The Guardian. It is one subjective impression (quale) informing another, which is then interpreted by an artificial intelligence. The resulting images are far from accurate according to those involved; they look nightmarish, fleshy and deformed (see the image above). They are uncanny: somehow recognisable to us but just strange enough that we know they can’t be real. You can read the whole article here.

I like to imagine these are the kind of images AI could think up independently in the future if we tried to simulate human perception. Would these grotesque, mashed-up images define us as a species in the mind of a robot? And I’m not even going to get into the possibilities of AI becoming capable of spontaneously reading our minds. I’ll save that for when I come to post about the horrors of my story One…
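For the technically curious, here’s a loose Python sketch of the idea behind Deep Image Reconstruction – not the Kyoto lab’s actual code, just my reading of the approach. A separately trained decoder predicts what a CNN’s features would be from the fMRI data, and an image is then optimised until its own features match the decoded ones. Everything below, including decoded_features, is a stand-in.

```python
# A loose sketch of the Deep Image Reconstruction idea, not the Kyoto
# lab's actual code. `decoded_features` stands in for CNN features that
# a separately trained decoder has predicted from fMRI data.
import torch
import torchvision.models as models

cnn = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in cnn.parameters():
    p.requires_grad_(False)

def features(img, layer=20):
    # Run the image through the CNN and grab one layer's activations.
    x = img
    for i, module in enumerate(cnn):
        x = module(x)
        if i == layer:
            return x

# Hypothetical target: features 'decoded' from a brain scan.
decoded_features = torch.randn(1, 512, 28, 28)

# Start from noise and nudge the pixels until the image's own CNN
# features match the decoded ones.
img = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(features(img), decoded_features)
    loss.backward()
    optimizer.step()
```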

Painting with Thoughts

For several years we have been able to use a brain-computer interface to command painting software: painting pictures with our thoughts, choosing colours and placement according to where we focus our attention. It has been used to help people who don’t have use of their motor functions, and it may even become an effective communication channel for people with locked-in syndrome.
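As a rough illustration of the mechanism, here’s a hypothetical Python sketch – none of these names come from a real BCI library. The core loop is surprisingly simple: the system repeatedly works out which flashing option the user is attending to, and turns each selection into a paint command.

```python
import random

# Hypothetical brain-painting loop. read_selection() stands in for a
# real BCI classifier that decodes which flashing option the user is
# attending to (e.g. from EEG signals); here it just picks at random.
PALETTE = ["red", "blue", "yellow", "black"]
GRID = [(x, y) for x in range(8) for y in range(8)]

def read_selection(options):
    return random.choice(options)

canvas = {}
for _ in range(10):
    colour = read_selection(PALETTE)  # user attends to a colour cell
    cell = read_selection(GRID)       # then to a location on the canvas
    canvas[cell] = colour             # the stroke is placed hands-free

print(canvas)
```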

Some say we could also use brain painting as a meditative therapy of sorts: if we ‘map’ our thought patterns, we could create a visual representation of what altering them would look like. It’s a bit like Cognitive Behavioural Therapy with a visual aid and a creative output, and I’d be very interested to see whether it proves effective in the future.


Do Androids Dream?

A type of AI called a Convolutional Neural Network (CNN) has filters capable of abstracting out aspects of images in layers. This has been used in various experiments. For example, we know that a CNN can produce new images that combine the ‘content’ of one existing image with the ‘style’ of another – think of the filters you have on your phone.
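Here’s a small sketch of that content/style split, in the spirit of Gatys et al.’s neural style transfer – the layer choices and weighting are illustrative assumptions. The ‘style’ of an image lives in the correlations between a CNN layer’s feature channels, captured by a Gram matrix, while the ‘content’ lives in the feature values themselves.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feats):
    # Style = correlations between feature channels; the spatial layout
    # is deliberately thrown away.
    b, c, h, w = feats.shape
    f = feats.view(c, h * w)
    return f @ f.t() / (c * h * w)

def transfer_loss(gen_feats, content_feats, style_feats, style_weight=1e4):
    # Content: match feature values directly. Style: match Gram matrices.
    # (In practice these would come from different CNN layers.)
    content_loss = F.mse_loss(gen_feats, content_feats)
    style_loss = F.mse_loss(gram_matrix(gen_feats), gram_matrix(style_feats))
    return content_loss + style_weight * style_loss

# Toy check with random tensors standing in for real CNN activations.
feats = [torch.randn(1, 64, 32, 32) for _ in range(3)]
print(transfer_loss(*feats))
```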

Google’s DeepDream uses a CNN to find and enhance patterns in images via algorithmic pareidolia, producing psychedelic, over-processed images. These experiments with neural nets are already evolving at pace. Artwork created by CNNs is selling for thousands of dollars, and it is informing the way virtual and augmented reality develop.
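And a bare-bones sketch of the DeepDream trick (the original used GoogLeNet plus extra techniques like multi-scale ‘octaves’; this shows only the core move): instead of changing the network to reduce a loss, you change the image to increase a layer’s activations, so whatever patterns the network faintly ‘sees’ get exaggerated.

```python
import torch
import torchvision.models as models

# Take the early layers of a pretrained CNN (VGG here, for simplicity).
cnn = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:21].eval()
for p in cnn.parameters():
    p.requires_grad_(False)

img = torch.rand(1, 3, 224, 224, requires_grad=True)
for step in range(50):
    loss = cnn(img).norm()  # how strongly does this layer fire?
    loss.backward()
    with torch.no_grad():
        # Gradient ASCENT: push the pixels toward stronger activations.
        img += 0.01 * img.grad / img.grad.abs().mean()
        img.grad.zero_()
```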

Frankie

In Frankie, I combined the general ‘output’ ideas of Deep Image Reconstruction, CNNs and brain painting with the ‘input’ of advanced brain scans at the neuronal level. I imagined a little piece of worn tech that takes minute signals from the brain and outputs them instantly onto the surfaces around us, creating a sort of communal psychedelic wonderland. I thought about what it could be like if certain skilled individuals were able to build the output images up in layers to create hologram-like objects for as long as their concentration would allow. My nameless protagonist lives in a world where this is what memes have become.

Being a lover of psychology, I then began to wonder: what might happen if the headpieces could take readings from the subconscious mind to show us things we didn’t realise we were thinking? It could tell us ‘you share an exciting chemistry with that person over there,’ or ‘you are harbouring deep-set doubts about this.’ And what if those thoughts in the subconscious weren’t intermittent, but ever-present in the background, and ever-growing? You’ll have to read Frankie to find out!

* * *

If you’ve already read Mind in the Gap, have you seen the Connections and Easter Eggs page? People are starting to add their theories and findings, and I’d love you to join in. If you haven’t read it yet, you can check out the blurb or pick up a signed copy here. It’s also available as an ebook on Kindle or as a paperback from anywhere that sells books. Thank you!
