Human Brain is More Powerful than All Computers Ever Made


For you created my inmost being; you knit me together in my mother's womb. I praise you because I am fearfully and wonderfully made; your works are wonderful, I know that full well...Psalm 139
In a prior article about the human brain, "A Single Brain More Powerful Than All Computers Ever Made," a comparison was made between the memory capacity of modern computers and that of the human brain. This article compares the "speed of thought" of the two, and the human brain wins.

These authors mention design a number of times, apparently quite unselfconsciously. We would have to agree that design, rather than random, undirected mutation, explains why the brain is so fearfully and wonderfully made...s8int.com

How Does the Speed of Thought Compare for Brains and Digital Computers?
By Naveen Nagarajan and Charles F. Stevens
University of California at San Francisco, and The Salk Institute, Molecular Neurobiology Laboratory, PO Box 85800, San Diego, California 92138-9216, USA

In the early part of the 20th century, the Harvard University Observatory employed a small army of women — they were known at the time as girl computers — to identify images of stars on photographic plates and then to record the intensity and location of each identified star.

The job done by these girl computers has long since been taken over by the digital sort. We all know that digital computers are much better than we are at doing arithmetic, but over the past few decades computers have been taking over jobs, like playing chess or recognizing speech or carrying out symbolic mathematical manipulations, that we used to think of as the province of the human brain.

How close are computers, like HAL in the movie 2001, to matching those things that now only our brains can do? Our goal here is to compare the capabilities and speeds of the brain with those of modern-day computers.

Hardware
Our starting point will be to compare the brain's hardware with that of computers. Of course, because the architectures of the two sorts of computers are so very different (as described in more detail below), comparisons are difficult. The transistor is the basic active element upon which computer circuits are based.

Modern very large scale integrated (VLSI) microprocessor circuits have about a million transistors per square millimeter of chip area, and the approximately ten layers of wires and circuit components covering the chip surface give an overall volume density of about 0.3 × 10⁹ transistors per microliter.

In the brain, the smallest computer element that transmits and transforms information is the synapse, and the grey matter of most brain regions contains about 10⁹ synapses per microliter, a value not so different from the volume density of transistors.

Each synapse requires around 3 to 4 μm of wire (axon and dendrite) to support it, and each transistor on a microprocessor chip is supported by about 30 μm of wire. Again, the numbers are not so different and the wire diameters are also close to the same (on the order of 100 nm).

The difference between brains and computers arises not so much in the size of the elementary computer elements as in their numbers: where a modern microprocessor chip has 10⁹ transistors, the human brain contains about 10¹⁴ synapses (and a brain uses about as much power as a microprocessor). A state-of-the-art microprocessor could have close to 30 km of total wire connecting its transistors, where the brain has 3 to 4 × 10⁵ km of wire (most of which is axons). The brain's total wire, then, is about the same as the mean distance from the earth to the moon (a little less than 4 × 10⁵ km).
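To see how these figures compare, the rough arithmetic can be written out. The sketch below (with every number taken from the estimates quoted above, rounded to one significant figure) simply computes the ratios; it is an illustration of the back-of-the-envelope comparison, not new data.

```python
# Back-of-the-envelope comparison of the hardware figures quoted above.
# All numbers are the article's estimates, rounded to one significant figure.

chip_transistors = 1e9          # transistors on a modern microprocessor
brain_synapses = 1e14           # synapses in the human brain

transistor_density = 0.3e9      # transistors per microliter of chip volume
synapse_density = 1e9           # synapses per microliter of grey matter

chip_wire_km = 30               # total on-chip wire, km
brain_wire_km = 3.5e5           # total axon/dendrite wire, km (3 to 4 x 10^5)
earth_moon_km = 3.84e5          # mean earth-moon distance, km

print(f"element count ratio (brain/chip):   {brain_synapses / chip_transistors:.0e}")
print(f"volume density ratio (brain/chip):  {synapse_density / transistor_density:.1f}")
print(f"wire length ratio (brain/chip):     {brain_wire_km / chip_wire_km:.0e}")
print(f"brain wire vs earth-moon distance:  {brain_wire_km / earth_moon_km:.2f}")
```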

Clearly, although the sizes of the basic computer elements are not so different between brains and computers, what is vastly (a million fold) different is the number of elements.

To make a preliminary comparison of processing speeds, we can suppose that each neuron carries out an instruction each time it produces a nerve impulse. Since the neocortex contains about 10¹⁰ neurons, each of which fires nerve impulses at about 10 Hz, that would give 10¹¹ ‘instructions’ per second, about 100-fold more than the 10³ MIPS (million instructions per second) of a modern microprocessor with multiple cores (computing units). Despite the fact that brains have so many more synapses than computers have transistors, the computer is only 100 times slower than the brain, by this measure, because of the computer's multi-gigahertz processor speed.
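The 100-fold figure is just the ratio of the two estimates in the paragraph above. A minimal sketch of that arithmetic, using the article's round numbers:

```python
# The article's rough 'instructions per second' estimate for the neocortex
# versus a modern multi-core microprocessor.

neurons = 1e10            # neurons in the neocortex
firing_rate_hz = 10       # average impulses per neuron per second

brain_ips = neurons * firing_rate_hz   # 'instructions' per second = 1e11

cpu_mips = 1e3                         # ~10^3 MIPS for a multi-core chip
cpu_ips = cpu_mips * 1e6               # = 1e9 instructions per second

print(f"brain: {brain_ips:.0e} ips, cpu: {cpu_ips:.0e} ips, "
      f"ratio: {brain_ips / cpu_ips:.0f}x")   # -> 100x
```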

Architecture
What makes the comparison between brains and computers so hard is that they have such completely different designs. A few of the more important differences are considered here.

Modern computers are all based on what is known as the Von Neumann architecture, one central feature of which is the separation of the central processing unit (CPU) that does the actual computing, and the memory, where data and the instructions that govern the computations are stored.

In sharp contrast, memory in the brain is embedded in the very neural circuits that carry out the computations. Rapid communication between neurons occurs at synapses, and the effectiveness of these synapses at transmitting information (their synaptic strength) can be modified by the brain's own activity so that the same information supplied to a neural circuit can give, at various times, different results.

The brain has available many different mechanisms to alter synaptic strength and to modify the computations carried out by its neural circuits. These various mechanisms change synaptic strength on many time scales from milliseconds to years, so the computations carried out by the circuit can be changed just temporarily or long-term.

Because computers employ the Von Neumann architecture in which memory is separate from the CPU, computers must act in a sequential way, one step at a time: at each time step, a new instruction about what is to be computed is fetched from memory and carried out. A single master clock marks the time that determines these steps (although some complicated steps may require a number of clock ticks), and all parts of the CPU must be kept informed about the current time so that one small step of the computation is completed before the next one is started.
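The sequential, clock-driven cycle described here can be caricatured in a few lines of code. The toy interpreter below is only an illustration of the fetch-then-execute loop over a single shared memory; the three-instruction 'machine' and its program are invented for the example and do not correspond to any real processor.

```python
# Toy illustration of the Von Neumann cycle: instructions and data share one
# memory, and a single loop fetches and executes one instruction per 'tick'.

memory = {
    0: ("LOAD", 100),    # acc <- memory[100]
    1: ("ADD", 101),     # acc <- acc + memory[101]
    2: ("STORE", 102),   # memory[102] <- acc
    3: ("HALT", None),
    100: 2, 101: 3, 102: 0,   # data lives in the same memory as the program
}

pc, acc, tick = 0, 0, 0
while True:
    op, addr = memory[pc]        # fetch the next instruction from memory
    pc, tick = pc + 1, tick + 1  # the 'clock' advances one step at a time
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(f"result = {memory[102]} after {tick} ticks")   # result = 5 after 4 ticks
```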

Neural circuits have no need for a central clock to keep actions exactly synchronized because any neural circuit in the brain has its own instructions embedded in the circuit itself: whenever it is presented with information, the circuit knows just what to do with it.
Because the brain is not bound by the Von Neumann architecture, exactly what a particular neural circuit computes can be modified on the fly without reference to other circuits (as when we shift our focus of attention from one thing to another) and can also remember things for a lifetime (how to ride a bicycle).

The fact that computers are based on the Von Neumann architecture and brain circuits are not is the first major difference between them. A second major difference is that brains are massively parallel and computers are not. To see what this means, look at the problem (presented to the Harvard Observatory girl computers) of identifying the image of a star on a photographic plate.

For a usual computer to do this, it has to examine the pixels representing the image one by one, but the girl computers processed all parts of the image at the same time. The brain generally presents vast quantities of data at a time to a neural circuit, and the circuit carries out all of the processing steps required by the data in parallel.

What makes the brain ‘massively’ parallel is this enormous quantity of data that can be handled at the same time.
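The contrast between examining pixels one at a time and operating on the whole image at once can be sketched with an array library. The threshold-based 'star detector' below is a made-up stand-in for the real task (the threshold and image size are arbitrary); the point is only the difference between the sequential loop and the single whole-array operation.

```python
# Sequential versus data-parallel handling of the same image, as a loose
# analogy for processing an entire photographic plate at once.
import numpy as np

rng = np.random.default_rng(0)
plate = rng.random((1000, 1000))        # stand-in for a photographic plate
threshold = 0.999                       # arbitrary 'star' brightness cutoff

# Sequential: visit every pixel one at a time, as a simple computer loop would.
stars_seq = []
for y in range(plate.shape[0]):
    for x in range(plate.shape[1]):
        if plate[y, x] > threshold:
            stars_seq.append((y, x))

# Data-parallel: one operation applied to all million pixels 'at the same time'.
stars_par = np.argwhere(plate > threshold)

assert len(stars_seq) == len(stars_par)
print(f"found {len(stars_par)} candidate 'stars' either way")
```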

Digital computers have increased their processing speed (measured in MIPS) about a million fold since the first Von Neumann architecture computer was built 60 years ago (the Manchester Mark 1 in England), and most of that speed increase came first from new technology (VLSI) and then more recently from carrying out various sub-steps of a single computation step in parallel (so-called instruction-level parallelism). Many computers now on the market have taken this incorporation of parallel computing further by having two or four cores, which means two or four computers on a single chip working at the same time.

And special subsidiary computers designed for carrying out particular types of computations — like the display processor that controls the computer's screen display — have become very much faster by having many (256) processors working in parallel. Indeed, some of these special types of hardware can also be used as a general purpose computer with much more parallel computation than the usual computers.

The problem with emulating the brain's massive parallelism, however, is that we are not even close to being able to use the increased hardware power efficiently; how to program parallel computers is a very active subject now in computer science.

Computers have components with incredible reliability: trillions of operations are carried out without a single error, and many modern machines include circuits for checking and correcting the rare errors that do occur. The human brain, in contrast, operates probabilistically. For example, when a nerve impulse arrives at a typical synapse, it is common for that synapse to inform the postsynaptic neuron of the impulse arrival only one time in five. The four times out of five that an arriving impulse is not relayed to the target cell could be viewed as errors, but in fact synapses are designed this way.

Neural circuits are highly redundant, with the same information arriving simultaneously at many synapses on different neurons so that, on average, neural components are predictable, in the same sense that a fair coin is predictable: you never know on a given flip whether heads or tails will turn up, but you can be sure that there will be very close to 500 heads out of a thousand flips.
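The coin-flip analogy can be made concrete with a short simulation. The one-in-five transmission probability comes from the paragraph above, but the number of redundant synapses is an illustrative assumption; the point is that averaging over many unreliable components yields a dependable total.

```python
# Many unreliable synapses, each relaying an impulse with probability ~0.2,
# still give a highly predictable total when the signal arrives redundantly.
import random

random.seed(1)
p_release = 0.2        # a typical synapse relays an impulse ~1 time in 5
n_synapses = 1000      # illustrative number of redundant synapses (assumption)

def impulses_relayed():
    """Count how many of the redundant synapses relay one arriving impulse."""
    return sum(random.random() < p_release for _ in range(n_synapses))

trials = [impulses_relayed() for _ in range(20)]
print(trials)          # each trial lands close to 200 of 1000
print(f"mean = {sum(trials) / len(trials):.1f}, expected = {p_release * n_synapses}")
```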

Circuits that are redundant and that average over probabilistic components have an important advantage over the super-reliable computer circuits: the brain is very fault tolerant, so that failure of any individual component has effectively no impact on the overall computation whereas a single component failure can be catastrophic for computer circuits.
Thus, many neurons can die (as inevitably happens over time), and yet the brain is still able to function at a high level. Because of another brain design principle, the fact that neurons with the same function are located close to one another in the brain (this is called the doctrine of localization of function), the brain is much more tolerant of the random death of neurons than it is of focal injury (such as a bullet wound or a stroke).

The brain also employs the probabilistic nature of its synapses another way. To alter synaptic strength very rapidly, a synapse need not alter its structure but rather can just quickly change the probability with which information about nerve impulse arrivals is transmitted to the target neuron. Many mechanisms are used to store information for brief intervals by changing the probability of synaptic transmission, although most neurobiologists believe that actual structural changes are required for the long-term modification of synaptic strength.
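Read this way, the transmission probability acts as a rapidly adjustable gain. The sketch below multiplies out expected values to show how changing only that probability changes the average drive a target neuron receives; the firing rate and unit effect are assumptions chosen for illustration.

```python
# Expected drive onto a target neuron when a synapse changes only its
# release probability, not its structure. Numbers are illustrative.

impulse_rate_hz = 10      # presynaptic firing rate (assumed)
quantal_effect = 1.0      # effect of one successful transmission (arbitrary units)

for p_release in (0.1, 0.2, 0.5, 0.9):
    expected_drive = impulse_rate_hz * p_release * quantal_effect
    print(f"p = {p_release:.1f} -> expected drive = {expected_drive:.1f} units/s")
```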

There is a final big difference between the designs of computers and brains considered here. Every time the performance of a computer circuit is improved, major design changes are necessary. Even modest alterations, like modifying the thickness of the wires on the computer chip, mean the computing components on the chip must be rearranged (a very difficult process).

For evolution to work, however, neural circuits must have what is called a scalable architecture. This means that the computing performance can be improved by simply increasing the number of components and enlarging the circuit in accordance with the original design. Brain circuits generally have scalable architectures so that, for example, we are not even aware of the usual two to three fold differences in the size of brain areas from one brain to the next.

The speed of thought
From the differences in computer and brain designs discussed above, it is clear that determining their relative speeds of processing cannot be achieved by comparing hardware specifications. The standard method for comparing computing apples and oranges is to determine the relative times for solving benchmark problems, and this is at least partly possible for comparing brains and computers.

For various commercial and security reasons, computer systems that can recognize faces automatically are of considerable value, and research in this area has been well supported for the last three decades. Starting in 1993, the National Institute of Standards and Technology has run a series of competitions to evaluate computer face-recognition technology, with the results for the last competition reported in 2007.

The first year that fully automatic face recognition was achieved was 1997, at which time target faces were missed about half the time. In the decade that followed, this error rate (missing a face that the system is trying to detect) dropped to about 1% under the best conditions and to better than 10% over a wide range of conditions. Detecting a face (again, depending on conditions) takes about a second.

Humans can detect a familiar face when it is placed within a string of unfamiliar faces presented at 10 Hz. The accuracy of humans and computers is about the same when they are tested under the same conditions, so that would mean that human processing speed is about 10-fold greater than the computer's for face recognition. But this comparison is misleading, because the computer face-recognition system is optimized just for that job, whereas the human visual system is designed to detect a very large number of objects.
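The 10-fold estimate is simply the ratio of the two rates quoted above (a 10 Hz presentation rate for humans versus roughly one second per face for the automated systems), under the article's assumption that accuracy is comparable:

```python
# The article's rough speed comparison for face recognition.
human_rate_hz = 10          # familiar faces detected in a 10 Hz stream
computer_time_s = 1.0       # ~1 second per face for the automated systems

computer_rate_hz = 1.0 / computer_time_s
print(f"human/computer speed ratio: {human_rate_hz / computer_rate_hz:.0f}x")  # 10x
```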

One of the most difficult things for a computer to do is to extract objects from a visual scene, but we do this so rapidly and effortlessly that we are not even aware that it is hard. When we look around, we automatically see a world full of objects.

Perhaps a more informative way to compare the capabilities of humans and computers is to examine tasks that humans can do easily and that are too hard for computers. At the dawn of the computer age, Alan Turing proposed a test, known as the Turing test, to get at the question of whether or not a computer can think. His test was to have a human judge ask questions of a computer and of a human; if the judge cannot tell from the answers which is the human, the computer has passed the test.

Starting about a dozen years ago, being able to tell a human from a computer became commercially important, because internet companies needed to prevent computers from signing up for services intended for humans, like email accounts that could be used to send spam. The idea was to develop questions that are very easy for humans to answer but too hard for computers.

A variety of methods, known as ‘completely automated public Turing tests to tell computers and humans apart’ (CAPTCHAs), have been developed to do this. Most of these CAPTCHAs rely on the fact that humans can easily read letters that have been disguised by mixing fonts, distorting the letters or masking them with distracters, but this is a very difficult task for computers. The ease with which CAPTCHAs can be developed exposes obvious gaps between the capabilities of computers and the brain.

The power of computers has been growing exponentially over the last 60 years, and every year or so we find they can do something — like optical character recognition or speech recognition — that we never imagined they would be able to do. When will computers catch up with our brains? It has been predicted that this will happen in the next several decades, but we believe the problem is not computer power or the ability to program parallel machines, but rather our nearly total ignorance about what computations are actually carried out by the brain.

Our view is that computers will never equal our best abilities until we can understand the brain's design principles and the mathematical operations employed by neural circuits well enough to build machines that incorporate them.

Source: Current Biology
