THINKERS LECTURE 2011: POND SCUM, SCARED MICE, AND THE GLOBAL BRAIN

Yesterday I had the great joy of experiencing the Pre-Yule Lecture of Extropia.

Welcome to this year’s Christmas lecture!

The talk this year is called ‘Pond Scum, Scared Mice and the Global Brain’. In a way, this talk links back to the first one I ever gave, which was called ‘Technological Mount Improbable’. I gave that lecture back on 29 April 2007, I think, and I wrote an article for H+ magazine based on it in 2009. Both the lecture and the article presented a visual metaphor for understanding what I believe to be the key principles driving us towards a Singularity: cumulative knowledge and convergent knowledge.

Cumulative knowledge was summed up by Kurzweil in the following way: “Each stage or epoch uses the information-processing methods of the previous epoch to create the next”. In other words, the technology of the past equipped us with the ways and means to reach our current technological level, and this in turn will enable the development of future technologies. An obvious example of this is using today’s computers and software to design the next generation of computers.

In that example, the relationship between past, present and future technology is plain to see, but this is not always so. ‘Convergent knowledge’ refers to situations where a solution comes from a seemingly unrelated area of research. In an article he wrote for ‘Nature’ (‘The Creativity Machine’), Vernor Vinge talked about the need to ‘extend the capabilities of search engines and social networks to produce services that can bridge barriers created by technological jargon and forge links between unrelated specialities, bringing research groups with complementary problems and solutions together, even when they have not noticed the possibility of collaboration’.

Which is where that pond scum and those scared mice come in. The pond scum I am referring to goes by the Latin name of Chlamydomonas. It is an alga that looks a bit like a tiny football with a tail. A German biologist called Peter Hegemann studied this alga in order to figure out how its molecular motor worked. It was already known that Chlamydomonas was somehow powered by light. After all, exposing the alga to light caused the little tail to spin wildly, propelling it along. Hegemann and his colleagues eventually worked out that coiled-up protein molecules studded the surface of the cell’s membrane. A photon hitting one of those protein molecules causes it to uncurl, and that creates a tiny pore in the membrane. Charged ions then flow across the membrane, changing its electrical properties. The membrane discharges a tiny shock, and that powers the tail. Further studies determined which genes coded for these light-sensitive proteins. These genes were given the name ‘opsins’.

Biophysicists and microbiologists studied opsins for reasons that had nothing to do with neuroscience. And yet, those very genes ended up being used as a key component in one of neuroscience’s most capable new technologies. The beginning of this example of convergent knowledge can be traced back to Francis Crick, the co-discoverer of the structure of DNA. In a 1979 Scientific American article (‘Thinking About The Brain’), Crick wrote about how the tools used to understand the brain were too crude. Ever since the 1940s, we have applied tiny electric currents to the brain in order to stimulate areas of it. This practice was first carried out by a neurosurgeon called Wilder Penfield, who applied minuscule electric shocks to the brains of patients undergoing surgery for epileptic seizures. Experiments such as this helped build upon studies in microanatomy, which showed the brain can be mapped into distinct regions, each responsible for a distinct function. Through research like Penfield’s we now know that any experience you may have is associated with some specific pattern of neural activity.

I expect most of you are familiar with Parkinson’s sufferers whose symptoms are managed with electrodes buried in the brain, or with drugs used to treat depression. Both kinds of treatment come with side-effects because both affect many types of neurons indiscriminately, rather than targeting only the specific neural circuits that are the root of the problem. It is this lack of finesse that has made reverse-engineering the brain so difficult. EEG and fMRI record signals averaged over millions of nerve cells (electrical activity in the first case, oxygen consumption in the second). This lets us know where in the brain a particular mental task is being performed, but it cannot tell us how.

We have the capability to monitor single neurons, which obviously provides useful information, but on its own a neuron is not much use. As Blue Brain leader Henry Markram put it, “neurons are not islands. They need a group of neurons around them, and the minimum set of group of neurons around them turns out to be approximately the size of a column”. A ‘column’ is a kind of microcircuit, and it is the precise wiring and function of these microcircuits that neuroscience needs to reverse-engineer.

Francis Crick speculated that light might be used to control specific circuits of the brain, because it can be delivered in precisely controlled pulses. Achieving this required somehow making particular neurons light-sensitive. That is how Chlamydomonas came into the picture or, more specifically, those opsin genes coding for light sensitivity. A psychiatrist from Stanford called Karl Deisseroth took a particular opsin called ‘channelrhodopsin’ (which was discovered by Peter Hegemann) and used it to create a counter-clockwise mouse. It was also Deisseroth who would give this technology its name: optogenetics.

As the name suggests, ‘optogenetics’ is “the combination of genetic and optical methods to control specific events in targeted cells in living tissue”. We gain precise control thanks to techniques in genetic engineering. This involves using viruses to deliver channelrhodopsin genes into cells. You can think of a virus as being like a tiny syringe that injects instructions for making more syringes into cells. Now imagine that the virus’s own genes are removed, and the channelrhodopsin gene is put in the ‘syringe’ instead. Only this gene will be injected into the cell by the virus. By injecting tiny amounts of virus, it is possible to ensure that only specific areas of brain tissue receive the new gene. This area can be as small as a cubic millimetre. It is also possible to target specific cell types within that area of brain tissue. This is achieved using a ‘promoter’, which is a piece of DNA that controls whether or not a given cell type uses a gene. So, the viruses inject the channelrhodopsin gene into all nerve cells in a cubic millimetre of brain tissue, but the promoter ensures it only gets ‘switched on’ in specific neurons. In all the others it remains inactive.

In the case of the counter-clockwise mouse, the channelrhodopsin gene was cut-and-pasted into the right anterior motor cortex, which controls the left legs. A fibre-optic cable was fed through the skull of the animal in order to direct light at the modified tissue. As soon as the light was shone, the mouse began running in circles in a counter-clockwise direction. When the light was turned off, the mouse stopped running and went back to whatever it had been doing.

Since this demonstration in 2007, optogenetics has been refined to give ever finer control over brain tissue, which in turn is deepening our understanding of how neural circuitry functions. An opsin called halorhodopsin was found that can inhibit neurons from firing. In other words, if channelrhodopsin is an on switch for neurons, halorhodopsin is an off switch.

A useful aspect of opsins is that different ones react most strongly to different colours (or wavelengths) of light. For instance, channelrhodopsin reacts most strongly to light at 480 nanometres, which is blue light, while halorhodopsin reacts most strongly to yellow light. This makes it possible to turn on neurons in one specific area of the brain by shining blue light, while simultaneously shining yellow light to inhibit neurons in another area of the brain.

It is also possible to monitor neuronal activity. We do this by including what is known as ‘green fluorescent protein’ or GFP along with the opsin and the promoter. GFP causes the neurons that make up the targeted circuit to flash green, allowing us to simultaneously stimulate and record the activity of specific circuits. Moreover, by adding genes that cause a neuron to flash whenever it switches on the genes for manufacturing a particular neurotransmitter, we could (in the words of Andrew Hives at the Howard Hughes Medical Institute) “potentially have each neurotransmitter assigned to a different colour GFP variant. Orange for glutamate, red for GABA, yellow for serotonin”.

So, thanks to optogenetics, we can begin mapping neural circuits in great detail and inferring the computational and informational roles of those circuits from how they transform the signals passing through them. Because it is so precise, optogenetics may one day lead to implanted devices that target specific circuits in the brain, enabling us to understand exactly what causes certain neurological conditions and to eliminate them without side effects. And we shall gain a more thorough understanding of how healthy brains actually work.

At the end of the ‘Counterclockwise Mouse’ demonstration, Karl Deisseroth made it clear that convergent knowledge had played a vital role, saying, “these microorganisms were studied for decades by people who just thought they were cool. They didn’t have a thought for neurology, much less neuroscience… (but) without that, we would not be able to do what we did”.

Powerful as it is, optogenetics will only allow a partial understanding of how the brain works. If you think of the brain as being sort of like a computer, optogenetics helps us figure out the ‘hardware’, but we also need the ‘software’. This requires ‘cracking the neural code’: that is, working out the rules the brain follows to convert collections of electrical impulses into perception, memory, knowledge and behaviour. Scared mice are being used to help crack the neural code.

Among the teams working on cracking the neural code is a group led by Joe Tsien. As Tsien himself explained, “we study the questions that many people are curious about. How the brain works, how memory works. We then take it down to different levels. What is the molecular basis for the memory level process? That means what genes are involved in laying down memory at a very fundamental level?”.

Investigating memory at this level, Tsien determined which molecules are critical to the process, and used this knowledge to genetically engineer a genius mouse that the team nicknamed ‘Doogie’. But while this achievement lent more weight to the materialist philosophy of mind, ‘memory’ was still rather mysterious. As Tsien said, “nobody knew how, exactly, the activation of nerve cells in the brain represents memory”. So, he and his team set out to “find a way to describe, mathematically and physiologically, what memory is”.

Along with Longnian Lin, Tsien developed a recording device that would enable them to monitor the activities of hundreds of nerve cells. This probe was set up to record activity in a region of the hippocampus called CA1, which was already known to be key to memory formation. With their brains rigged to record any activity going on, mice were put through a series of experiments designed to be mildly alarming while not causing actual harm. Startling events were chosen because they tend to produce strong and lasting memories, and forming such memories engages a large number of cells in the hippocampus. This made it more likely that the team “would be able to find cells activated by the experience and gather enough data to unravel any patterns involved in the process”.

The mice were subjected to an ‘earthquake’ (put inside a container, which was shaken), an ‘owl attack’ (simulated by a puff of air to the mouse’s back) and an ‘elevator drop’ (put in a box that was allowed to free-fall a short distance). Each mouse was put through each event seven times, with each episode separated by several hours of rest. At all times, the brain was monitored for any activity.

The data gathered during the experiment was then analyzed using pattern recognition methods, especially one called ‘Multiple Discriminant Analysis’. MDA can be thought of as a kind of translation tool that converts the native language of neurons (which we do not understand) into a visual format we can make sense of. Tsien himself described MDA as “a mathematical subspace capable of discriminating distinct patterns generated by different effects”. When the team projected the data gathered from an individual mouse into this three-dimensional subspace, it showed four distinct “bubbles”: one for each of the three startling episodes and one for when the mouse was resting. What these bubbles represented was distinct patterns of activity in the CA1 neural ensembles.
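
To make the idea a little more concrete, here is a minimal sketch of this kind of discriminant projection in Python, using scikit-learn’s multi-class linear discriminant analysis (which is essentially what MDA is). Everything here is invented for illustration: the neuron count, the synthetic firing rates and the labels. It is not the team’s actual analysis pipeline.

```python
# Minimal sketch: projecting population activity into a discriminant subspace.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_neurons = 260                      # hypothetical size of the recorded CA1 ensemble
conditions = ["rest", "earthquake", "owl_attack", "elevator_drop"]

# Fake firing-rate vectors: 50 time windows per condition, each condition
# given its own mean pattern so that the classes are separable.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(50, n_neurons))
               for i, _ in enumerate(conditions)])
y = np.repeat(np.arange(len(conditions)), 50)

# Multiple discriminant analysis is, in essence, multi-class LDA: it finds the
# low-dimensional subspace that best separates the labelled classes.
mda = LinearDiscriminantAnalysis(n_components=3)
Z = mda.fit_transform(X, y)          # each row is now a point in a 3D subspace

print(Z.shape)                       # (200, 3): four separable "bubbles" in 3D
```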

The MDA analysis was repeated for successive time windows of the animal’s experience, enabling the team to see how the patterns evolved dynamically and thus how the animal was laying down memories of each event. They then combined another method, ‘Hierarchical Clustering Analysis’, with the sequential MDA in order to figure out how the network of neurons was encoding different events.
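
Again purely as an illustration, a clustering step of this general shape groups neurons by how similarly they respond across the different events. The data, distance metric and number of groups below are all made-up choices, not the team’s.

```python
# Sketch: hierarchical clustering of neurons by the similarity of their
# response profiles across event types. Synthetic data for illustration only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
# One row per neuron, one column per event type (mean response to each event).
responses = rng.normal(size=(260, 4))

# Agglomerative clustering on pairwise distances between response profiles.
tree = linkage(pdist(responses, metric="correlation"), method="average")

# Cutting the tree yields groups of similarly tuned neurons -
# candidates for the 'cliques' described below.
groups = fcluster(tree, t=8, criterion="maxclust")
print(len(set(groups)))              # number of candidate groups found
```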

Using these methods, the team discovered that a particular group of neurons fired for every event. It seemed reasonable to assume this group was responding to something all the events had in common: the fact that they were startling. Tsien calls these distinct subsets of neural populations ‘neural cliques’, explaining, “a clique is a group of neurons that respond similarly to a select event and thus operate collectively as a robust coding unit”. It was also determined that each event was represented by a set of neural cliques encoding different features of that event. Because some cliques were activated by all three episodes, others by fewer, and others by only one specific kind of event, the team theorised that “information about those episodic events is represented by neural clique assemblies that are invariantly organized hierarchically (from general to specific)”.

Say you put a mouse through the elevator drop. You get a neural clique that also appears during the ‘earthquake’ and the ‘owl attack/air puff’. Call this a ‘startle clique’. You also get a clique that is present during the ‘earthquake’ but not the ‘owl attack/air puff’. What does the earthquake have in common with the elevator drop that is not shared by the owl attack? Well, the former two involve some kind of motion whereas the latter does not. So, call this a ‘general motion clique’. Finally, you get a clique that is activated only by the elevator drop. It must therefore be encoding specific details of motion not shared by the ‘earthquake’ event: a ‘drop clique’.
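
A toy way to see this hierarchy: represent each event as the set of cliques it activates, and simple set intersections and differences pull out the general-to-specific structure. The clique names below are invented for the example.

```python
# Toy illustration of the general-to-specific clique hierarchy described above.
events = {
    "earthquake":    {"startle", "general_motion", "shake"},
    "owl_attack":    {"startle", "air_puff"},
    "elevator_drop": {"startle", "general_motion", "drop"},
}

# Cliques shared by every event encode the most general feature...
general = set.intersection(*events.values())
print(general)                                   # {'startle'}

# ...while cliques unique to one event encode its most specific features.
others = set.union(*(v for k, v in events.items() if k != "elevator_drop"))
specific = events["elevator_drop"] - others
print(specific)                                  # {'drop'}
```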

Drawing on this discovery, Joe Tsien explained, “the brain relies on memory-encoding cliques to record and extract different features of some event, and it essentially arranges the information relating to a given event into a pyramid whose levels are arranged hierarchically from the most general, abstract features to the most specific. We also believe that each such pyramid can be thought of as a component of a polyhedron that represents all events falling into a shared category”.

One useful aspect of this way of organizing memory is that information extracted from a novel experience can be integrated with past experiences that have something in common with it (whether specific details or more general, abstract ones). What the brain essentially does is substitute the specific cliques that sit at the apex of the memory pyramid. So, what this combinatorial, hierarchical approach to memory formation provides is a way for the brain to encode key features of specific episodes while simultaneously extracting general information that it can apply to future events, ones that may share some essential details but differ in other ways.

Having uncovered the basic mechanism of memory formation, the team then set about devising a method that would allow them to compare patterns from one brain to another, pass information from a brain to a computer, and even decipher what someone remembers and thinks. To do this, they used a mathematical treatment called ‘matrix inversion’. This enabled them to translate neural clique assemblies into a string of binary code, where a 1 shows a clique is active and a 0 shows it is inactive. Because each memory pyramid generates a unique string of binary code, simply by scanning the code the team could infer what experience the mouse had been through and where it had happened, with up to 99% accuracy.
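
Here is a minimal sketch of that encode-and-decode idea. It uses simple set membership rather than the matrix treatment the team actually employed, and the clique names carry over from the toy example above.

```python
# Sketch: given a fixed ordering of known cliques, each event becomes a binary
# string (1 = clique active, 0 = inactive), and an observed pattern can be
# classified by comparing strings. Illustrative only; not the team's code.
clique_order = ["startle", "general_motion", "air_puff", "shake", "drop"]

def encode(active_cliques):
    """Turn a set of active cliques into a binary string."""
    return "".join("1" if c in active_cliques else "0" for c in clique_order)

known_events = {
    encode({"startle", "general_motion", "shake"}): "earthquake",
    encode({"startle", "air_puff"}):                "owl attack",
    encode({"startle", "general_motion", "drop"}):  "elevator drop",
}

observed = encode({"startle", "general_motion", "drop"})
print(observed, "->", known_events.get(observed, "unknown event"))
# 11001 -> elevator drop
```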

Considering the future applications of this work, Tsien said, “realtime processing of memory might, one day, lead to memories downloaded directly to computers for permanent digital storage…Someday, intelligent computers and machines…with a logical architecture similar to the hierarchical organization of memory-encoding units in the hippocampus might…even exceed our human ability to handle complex cognitive tasks”.

What I want to consider now is how this kind of work on the ‘software’ of the mind, together with technologies like optogenetics that enable increasingly exact interfacing with neural circuitry, might lead to a new kind of Internet that overcomes the deficiencies of the Web we are familiar with today.

Recently, there have been several books from experts in fields such as neuroscience and psychology, pointing to what they see as the negative consequences of the Web for the individual, the family, and society. Sherry Turkle sees the rise of home PCs creating ‘post-familial families’, where members of a family prefer not to be together but live largely separate lives, each individual in his or her room, logged in to social networks and games. The same author argues that former communal spaces like parks and train stations have been transformed into places of social collection, where people gather but do not speak to one another, fixated instead on the social spaces they can access on their mobile devices.

It might be tempting to blame the sheer ubiquity of networked devices for this erosion of face-to-face communication, but maybe the real problem is that the Internet is not ubiquitous enough. Currently, it may seem as though one must choose between socialising in real space or focusing on the screen, but what if we eliminated the screen? What if it could be replaced with a method for communicating not just words but also touch, emotions and thoughts? In short, what if it no longer made sense to think of the Web as somehow separate from real space, and instead it was just always there, like one more sense the brain experienced? I would argue that optogenetics and Joe Tsien’s work in deciphering the neural code may one day lead to a new kind of Internet that enables just that.

Let us just recap what modern neuroscience has achieved:

We now have fibre-optic tools that can deliver light to any area of the brain, be it on the surface or deep inside. We can observe, control and map working neural circuits with great precision. Indeed, with fibre-optics, the beam of light can be made narrow enough so that only one neuron is affected. We can simultaneously control mixed populations of cells by using different wavelengths of light to send different commands. We are making progress in reverse-engineering the rules the brain follows to turn electrical impulses into subjective experiences. We are beginning to learn how to translate brain activity into binary code.

In short, we are beginning to network brains with technology, using interfaces more intimate than anything that has come before. The next stage may be to internetwork brains, which would involve directly connecting one brain to others using brain-computer interfaces that transmit and receive communication protocols over the Internet. Suitably connected, it would be possible to transmit thoughts from one mind to another. So, I think of something. This mental experience is correlated with a specific pattern of neural cliques and, by observing this pattern, it can be determined with 99% accuracy that I am thinking of a banana. The mental pattern is converted into a unique string of binary digits to be transmitted over the Internet. The computer interfacing with your brain then converts the binary code back into your own pattern of neural cliques corresponding to the concept of ‘banana’.
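
Spelled out as a purely hypothetical pipeline, the idea looks something like the sketch below. Nothing here corresponds to a real brain-computer interface; the patterns and mappings are invented, and the point is simply that the sender’s pattern is inferred and the receiver’s own equivalent pattern is triggered.

```python
# Hypothetical 'mind inference' pipeline: sender's clique pattern -> shared
# concept code -> receiver's own clique pattern. All values are invented.
SENDER_DECODER   = {"0110100": "banana"}          # sender's pattern -> concept
RECEIVER_ENCODER = {"banana": "1010011"}          # concept -> receiver's pattern

def transmit_thought(sender_pattern: str) -> str:
    concept = SENDER_DECODER[sender_pattern]      # infer what the sender is thinking of
    return RECEIVER_ENCODER[concept]              # trigger the receiver's equivalent

print(transmit_thought("0110100"))                # '1010011': same concept, different wiring
```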

This might sound like mind reading, but a better term for it might be ‘mind inference’. This is because the method described here can read my pattern of neural cliques and trigger your pattern, but it cannot write my pattern into your brain. In other words, the ‘hard problem’ of consciousness, knowing what subjective experiences feel like to another person, remains unresolved. What this hypothetical technology does is instruct one mind to interpret the content of another through neural pathways that are wired in a unique way, shaped by a life experience specific to that individual.

Also, this method can only activate cliques that exist and that have already been mapped. Think of it as being an inventory of cliques. The more cliques you share in common with someone, the more likely you would be to correctly infer what that person is thinking or experiencing. But, if that person is having an experience you have not had, there will invariably be missing cliques, perhaps too many for you to make sense of the incoming signals.

This might sound like a cumbersome way of communicating a thought. Why not just send a message like ‘I am thinking of a banana’? But, if the mechanics can be hidden from conscious awareness, this hypothetical method would feel as effortless as simply picturing a banana in your mind.

It is also worth remembering that connecting brains directly to the Internet would enable not just the transmission of thoughts from one brain to others, but also everything else the brain is responsible for generating. So far as we know, every experience we can possibly have- every thought, feeling, action- is associated with some specific pattern of neural activity.

Currently, Second Life can tell you that a friend has come online or logged off. Now, suppose implants could transmit data to your spine’s posterior column-medial lemniscus pathway. What this normally does is track the position of your limbs in space. If you close your eyes, extend your arm and sweep it about, you can sense precisely where your limb is. We call this sense ‘proprioception’. An implant might provide a proprioceptive sense of your friend’s position in 3D space, enabling you to track their position in the virtual environment with the intuitive ease of tracking your own limb. One can imagine future professional sports players or the military taking advantage of a sixth sense that enables teamwork coordinated to levels unmatched by the unaugmented.

So, you know that your friend is online and you have a sense of her whereabouts. What mood is she in? Online worlds as they exist today, with text messages and emoticons, are often poor substitutes for the richness and subtlety of communication in physical space. More expressive avatars would help, but there is no reason in principle why the neural activity correlated with an emotion cannot be read and the equivalent pattern triggered in your brain. Michael Chorost described this as ‘telempathy’: the ability to mentally tune in to another’s emotions (though, again, this is inferring her emotion through your own subjectivity, not feeling exactly how she feels).

OK, so you can track your friend’s position in the world and have some sense of her moods, perhaps as a background sensation that you can choose to focus on or let slip into unconsciousness. The avatars we use today are rather senseless things. A hug in SL does not have the tactile sense of an embrace in real life. But, again, with proper stimulation of the requisite brain regions (in this case, the somatosensory regions) there is no reason why you should not feel like you fully embody an avatar. No doubt, virtuoso lovers will make full use of sensual avatars, telempathic communication and the ability to trigger the neural correlates of sexual pleasure.

That is what could be achieved via the linking of one brain to another. But why stop at one? Any number of brains could be networked together. In his book ‘World Wide Mind’, Michael Chorost imagined how scientific discovery might advance in an age of linked brains. One scientist is on the verge of discovery. Not yet a fully articulated thought, just the very beginnings of an idea. The neural cliques associated with this developing idea trigger ‘aha’ sensations in the minds of other scientists. Now, the germ of an idea that formed in the first scientist’s mind is taking root in the minds of others. Because they all work in the same field, the group share enough cliques to make sensible inferences, while at the same time the unique way in which each brain is wired means the idea is seen from a slightly different perspective in each mind. As the group sketches out the idea, its reinforcement would strengthen the collective excitement, drawing in more scientists. Alternatively, if the idea cannot stand up, the collective would break apart, with each person going back to his or her individual concerns.

One can almost imagine this internetworked group of minds as a region of a meta-brain that is specialised for tackling a problem or idea in a particular scientific field. People have wondered whether the Internet is (or could be) self-aware. In and of itself, it is unlikely the Internet is conscious, for several reasons. Whereas there are many kinds of neurons in mammalian brains, each carrying out a specialised function, all of today’s computers are essentially the same. Also, neurons are far more densely connected. Each computer can only process one incoming bit at a time, which is paltry compared to the massively parallel nature of the brain’s information processing. Chorost pointed out that, as of 2009, a total of 2 billion computers were connected to the Internet, fiftyfold fewer than the 100 billion neurons that make up a human brain. All of which makes it very unlikely that the Internet has the complexity sufficient for self-awareness.

But Chorost argues that the Internet has strengths complementary to human capabilities. Both are intensely networked, communicative entities. The Internet can retain far more information and retrieve it far more quickly, but it cannot understand that information, whereas humans can. If you include collective human activity as a component of the Web, if you view the Internet as networks of computers plus their associated users, then the possibility of an emerging global brain becomes more plausible.

In ‘The New Executive Brain’, Elkhonon Goldberg suggested that a combination of human declarative knowledge, human choices about that knowledge, computer systems that collect votes about that knowledge, and high-speed communication networks integrating them all is enabling search engines to perform tasks equivalent to those of the brain’s frontal lobes and hippocampus. The job of the frontal lobes is to gather ‘votes’ from many parts of the brain and use this information to select what is important. This is comparable to Google’s PageRank algorithm, which treats every link to a webpage as a ‘vote’ from some human being who has deemed the page worth reading. It is not just the fact that people link to a page that Google analyses. It also measures how long users spend on a page, working under the assumption that the more time people spend on a page, the more useful it is.
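
For anyone who wants the ‘links as votes’ idea made concrete, here is a toy version of the calculation. It is the textbook power-iteration form of PageRank, not Google’s production system, it ignores the dwell-time signals mentioned above, and the little link graph is invented.

```python
# Toy PageRank: each page splits its 'vote' among the pages it links to,
# and repeated voting rounds converge on a ranking.
import numpy as np

pages = ["A", "B", "C", "D"]
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}   # who links to whom

n = len(pages)
idx = {p: i for i, p in enumerate(pages)}

# Column-stochastic transition matrix: column j spreads page j's vote.
M = np.zeros((n, n))
for src, targets in links.items():
    for t in targets:
        M[idx[t], idx[src]] = 1.0 / len(targets)

d = 0.85                                 # damping factor
rank = np.full(n, 1.0 / n)
for _ in range(100):                     # power iteration
    rank = (1 - d) / n + d * M @ rank

print(dict(zip(pages, rank.round(3))))   # page C gathers the most votes, so it ranks highest
```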

The end result of all this is that the most popular pages amass such a high PageRank that they effectively enjoy permanent storage in the Web’s long-term memory, while the least popular pages all but disappear. So we can say Google performs functions analogous to those of a hippocampus, which decides which short-term memories are worth turning into long-term ones (and, remember, when I say ‘Google’ I include collective human activity as part of the system).

Michael Chorost sees other specialised regions forming. “Blogs… could be seen as a collective amygdala, in that they respond emotionally to events and thus signal their importance to the rest of the system…Facebook can be seen as the beginning of an oxytocin/vasopressin/serotonin system, in that it acts as a moderator of social bonding”.

These comparisons may seem like a case of stretching analogies too far, but as our connections to the Internet grow more intimate, as we link together thoughts, feelings and actions, and as we learn to maximise the potential of crowdsourced talent and networked machine intelligence, perhaps we should expect the emergence of some new entity? After all, we have past examples. Chemical systems of increasing complexity became ‘life’. Many single cells working together eventually formed societies so intimate and interdependent that it is more appropriate to see them as a single organism, an ‘animal’ or a ‘plant’.

The idea that humanity and its technology might coalesce into a global brain is sometimes seen as a dystopian future. The very notion of an amalgamation of minds seems reminiscent of the Borg. Would we risk losing our individuality in becoming part of a hive mind? Arguably, what evidence we have points to an intensification of individuality as a result of forming a collective. The different types of cell in an animal (muscle, heart, blood and nerve cells, to name a few) show more specialisation than the daughter cells of an amoeba. Humans live in more complex societies than any other animal, and this has necessitated more and more specialisation in knowledge and skills. Perhaps, then, Star Trek’s Borg paints entirely the wrong picture?

Pierre Teilhard de Chardin, French philosopher, paleontologist and theologian, certainly would have disagreed that the Borg represents humanity’s future. In ‘The Phenomenon of Man’, Teilhard argued that the history of the universe consisted of matter and energy organising itself into increasingly complex forms. For instance, the ‘geosphere’, comprising systems like the weather, the oceans and geological activity, enabled the evolution of life and therefore the emergence of complex ecosystems, a ‘biosphere’. Lifeforms evolved ever more complex brains, nervous systems and means of communication. Teilhard believed the obvious next step would be for humans, the most sociable, co-dependent and communicative species on the planet, to form a ‘noosphere’: what we might call a global brain. And he did not stop there, instead pushing onwards to what he saw as the obvious conclusion: global brains coalescing into a single universal intelligence that Teilhard called the Omega Point and which he identified with God. As this development progressed, Teilhard imagined “each particular consciousness becoming still more itself and thus more clearly distinct from others the closer it gets to omega”.

Were he alive today, Teilhard would surely have seen our global communication systems, networked computational devices and online social networks as the embryonic stage of a global brain. He would no doubt have expected such networks to increase in complexity, for global communications to transmit not just words, sounds and pictures but also feelings, thoughts, tactile sensations and dreams. Imaginations converted into zeros and ones, shared with the group, and the group offering a wealth of fresh perspectives that give the individual new ways to grow, more ways to develop and therefore increased individualization. Us. We and our information technology, woven into a meta-mind that spans a planet and which owes its existence, if only in part, to pond scum and scared mice.

Well, that is cumulative and convergent knowledge for you.