More human than human
In a University of Calgary experiment, nerve cells grown on a microchip were found to be capable of processing signals from the silicon, forming memory, and transmitting signals back to the silicon. The theoretical implications of this are quite fantastic - mind-machine interaction, holographic memory, perhaps even a true 'cyberspace'. Practically, however, there's such a long way to go before any of those things happen that you and I don't need to be too concerned with this development. In the near term we might see better artificial-limb control and some advances in biological-computing research (for example, instead of modelling insect behavior in a computer, we could graft silicon right onto the bugs themselves... and be free of concerns about the accuracy of our system model). But the ability to coherently affect human perception would require a level of complexity that far exceeds anything we can even imagine right now.
Essentially, computer chips are 2-dimensional devices. The surface of the silicon is etched, doped, and coated to create a complex system, but these systems are still best represented by 2-dimensional maps. Looking at the logic that the chip represents, this architecture makes sense... silicon computation usually involves digital signals that travel from point A to point B through logic gates, and a specific subcircuit encodes a specific idea: on or off, true or false, the state of each gate has a meaning that is recognizably relevant to the overall computation. The topology of our brains, however, is much more complicated, both spatially and logically. Each neuron connects to thousands of others, with varying connection strengths and threshold responses. The signals themselves are analog. And most importantly, computation is so intricately distributed throughout the brain that the state of a specific neuron or group of neurons rarely has a specific meaning. (Obvious exceptions are neurons that control physical processes, which is why artificial-limb control will be one of the first areas to benefit from this technology.) This severely complicates the task of connecting silicon to actual living nerves in a meaningful way.
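The contrast can be made concrete with a toy sketch - this is my own illustration, not anything from the Calgary work, and the weights and threshold are arbitrary made-up values. A logic gate's output has a fixed, local meaning; a neuron's output depends on a whole vector of weighted analog inputs crossing a threshold, so no single input "means" anything on its own:

```python
import random

# A digital AND gate: two discrete inputs, one discrete output.
# The gate's state has a fixed, locally recognizable meaning.
def and_gate(a: bool, b: bool) -> bool:
    return a and b

# A toy threshold neuron (a crude integrate-and-fire sketch): it sums
# many weighted analog inputs and fires only if a threshold is crossed.
# Weights and threshold here are arbitrary illustrative values.
def neuron(inputs, weights, threshold=1.0):
    potential = sum(x * w for x, w in zip(inputs, weights))
    return potential >= threshold  # fires (spikes) or stays silent

random.seed(0)
# A real neuron integrates thousands of such inputs; even in this toy,
# the output depends on the entire weight vector, not any single input.
inputs = [random.random() for _ in range(1000)]
weights = [random.uniform(-0.01, 0.01) for _ in range(1000)]

print(and_gate(True, False))    # False - meaning is local and exact
print(neuron(inputs, weights))  # fires or not, depending on the whole vector
```

Nothing here captures the brain's real dynamics, of course - the point is only that the second function's output is already hard to interpret, and it has one neuron instead of billions.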
That won't stop anyone from trying, however. While the Matrix-like ability to directly download knowledge ("I know kung fu") is just a fantasy, the dream of such technology is very seductive, and is likely to engage the sciences for years to come. A later generation than my own will have to determine whether these technologies are used to make machines more like people, or people more like machines.