Is the brain a computer? Are computers conscious?
Posted on 18 September 2012, 12:54
In scientific investigation it is desirable to thoroughly understand the views of our opponents: it may help us to get closer to the truth. Orthodox Materialists, for example, maintain that consciousness is “nothing but” the dance of electrons in the brain. This is probably the majority view in academic psychology today. For such Materialists the computational theory of mind has great appeal. The brain and its functions are being mapped in greater and greater detail: this area is concerned with vision, that one with hearing, balance, speech, and so on. Damage to one part of the brain can impair short-term memory, and separation of the two hemispheres has been observed to produce two independent seats of mind. We are all aware of mental impairment occasioned by alcohol and drugs, or by Alzheimer’s disease. These discoveries can easily persuade us that consciousness and mind are functions of the brain, and that the brain is the seat of remembering.
The question that has always been intractable is how the brain can produce conscious awareness. There is still no viable theory. This fact is widely termed “The Hard Problem.” The Materialist psychological theory of Behaviourism had huge popularity in English-speaking academia between 1915 and 1960. This theory held that consciousness was an epiphenomenon of the working of the brain, like the noise made by an engine, and had no directive function. Introspection was taboo, because consciousness had no function: one simply studied behaviour.
Psychotherapy for Behaviourists came down to behavioural modification. [An exception to all this of course was that we always heard the introspections of the inner workings of the mind of the Behaviourist therapist.] Behaviourists solved the problem of conscious awareness by denying its significance. Although the theory blatantly contradicts universal experience, it was the majority view in academia for all those years.
Currently a popular academic notion is that conscious awareness somehow emerges from the brain seen as a biological computer.
John Searle (below) (a Berkeley philosopher) wrote: “Oddly enough I have encountered more passion from adherents of the computational theory of the mind than from adherents of traditional religious doctrines of the soul. Some computationalists invest an almost religious intensity into their faith that our deepest problems about the mind will have a computational solution. Many people apparently believe that somehow or other, unless we are proven to be computers something terribly important will be lost.” (Searle, 1997, page 189)
The thinking appears to be that as conscious awareness seems to emerge from the biological computer which is the brain, so some kind of similar awareness must exist in our physical computers and more especially in the computers of humanoid robots. Thus we can find a Wiki article on Roboethics containing the following words:
A ROBOT BILL OF RIGHTS?
“Robot rights are the moral obligations of society towards its machines, similar to human rights or animal rights. These may include the right to life and liberty, freedom of thought and expression and equality before the law; the issue has been considered by the Institute for the Future and by the U.K. Department of Trade and Industry.”
“Experts disagree whether specific and detailed laws will be required soon or safely in the distant future. Glenn McGee reports that sufficiently humanoid robots may appear by 2020. Ray Kurzweil sets the date at 2029. However, most scientists suppose that at least 50 years may have to pass before any sufficiently advanced system exists.”
“Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness or NCC. Proponents of AC [artificial consciousness] believe it is possible to construct machines (e.g., computer systems) that can emulate this NCC interoperation.”
To the non-Materialist, this talk may well sound ridiculous; but if you are a Materialist who rejects the nonsense of Behaviourism, and if you think of the brain as nothing but a computational machine, it is only natural that you should believe that consciousness and even feeling can be generated in machines. Behaviourism results in pure nihilism and is unbearable, whereas seeing mind, consciousness, and feeling apparently emerging from computation gives us a chance to recognise a wider spectrum of what it means to be human. Higher and more spiritual aspects of humanity have a chance to be recognised and allowed for. Aldous Huxley’s Brave New World is avoided (“Natural reproduction has been done away with and children are created, ‘decanted’ and raised in Hatcheries and Conditioning Centres, where they are divided into five castes”), but this happens at the cost of believing that conscious awareness emerges in the computers.
“Computer science seemed to open up many possibilities for the understanding of memory, not least because the vocabulary adopted by pioneering computer engineers gave prominence to “memory”-related terms. So we still find many psychologists discussing memory (often rather unreflectingly) within a broad framework according to which the brain receives “input” “encoded” by successive stages of the sensory pathways; this input is passed into one or more forms of short-term or working memory (a buffer) and thence (perhaps in recoded form) to an “addressed” and more permanent memory store (“external memory”), from which in due course it can, on the receipt of appropriate cues, be “retrieved” and further processed.”
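The borrowed vocabulary in that framework — encode, buffer, address, retrieve — can be made concrete with a toy sketch. The following Python is purely illustrative: the class, the cueing scheme, and every name in it are inventions for this post, not a real cognitive model, and they show only how naturally memory talk maps onto computer operations.

```python
# A toy sketch of the "computational" vocabulary for memory quoted above:
# input is "encoded", held in a short-term buffer, consolidated into a
# permanent store under an "address", and later "retrieved" by cue.
# All names here are illustrative inventions, not a real cognitive model.

from collections import deque


class ToyMemory:
    def __init__(self, buffer_size=7):
        self.buffer = deque(maxlen=buffer_size)  # short-term "working memory"
        self.store = {}                          # permanent "addressed" store

    def encode(self, stimulus):
        # "Encoding": transform raw input into an internal representation.
        item = stimulus.lower().strip()
        self.buffer.append(item)
        return item

    def consolidate(self):
        # Move buffered items into the permanent store under an "address".
        while self.buffer:
            item = self.buffer.popleft()
            self.store[item[0]] = item  # cue = first letter, purely for illustration

    def retrieve(self, cue):
        # "Retrieval": look up the stored item by its cue; None if "forgotten".
        return self.store.get(cue)


memory = ToyMemory()
memory.encode("Apple")
memory.encode("Banana")
memory.consolidate()
print(memory.retrieve("a"))  # -> apple
```

The point of the sketch is only that the vocabulary fits so smoothly; whether anything in the brain actually works this way is precisely what is in dispute.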
Against the idea of consciousness residing in computers and robots, Searle cites his famous Chinese Room argument, which was directed at the central claim of strong artificial intelligence: the idea that running a computer program can of itself be sufficient for, or constitutive of, understanding (Searle, 1980, 1984, 1990). The essence of this thought experiment is that a person who knows no Chinese, but who appropriately answers questions in Chinese by manipulating symbols according to the rules of a (hypothetical) computer program, would not thereby understand any part of the resulting “conversation.” Computing is by definition purely syntactical, and consists only of the manipulation of uninterpreted formal symbols according to the explicit rules of a program. “The occupant of the Chinese room does all that, but it understands nothing” (pp. 21-22).
Simulation is not duplication, and to act as if one understands Chinese does not guarantee that one does.
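The purely syntactical manipulation Searle describes can be sketched in a few lines. The rule book below is an invented toy, not Searle's own example: the program matches input symbol strings against stored rules and emits the paired reply, and nothing in it attaches any meaning to the symbols it shuffles.

```python
# A toy "Chinese room": match the input symbol string against a rule
# book and emit the paired output string. The program manipulates
# uninterpreted symbols only; nothing here "understands" Chinese.
# The rule pairings below are illustrative examples for this sketch.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # rule: if input is this string, output that one
    "今天天气如何？": "今天天气很好。",
}


def chinese_room(symbols: str) -> str:
    # Purely syntactic lookup: the symbols are never interpreted.
    return RULE_BOOK.get(symbols, "对不起。")  # fixed fallback for unmatched input


print(chinese_room("你好吗？"))  # emits the rule's paired reply
```

However large the rule book grew, and however convincing the resulting "conversation", the lookup itself would remain exactly this: syntax without semantics.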
This is not the place to go into great detail, but those were some sketches of what we are up against in our studies of dimensions of consciousness.
Of course there is a multitude of verified phenomena that suggest that the brain interacts with a conscious awareness: OBEs, NDEs, remote viewing, mediumistic phenomena, precognition, psychokinesis: readers will be familiar with much of the evidence. The work of many leading quantum physicists strongly suggests that, contrary to the idea that brains produce consciousness and mind, it would be more correct to say that mind produces brains, just as it most certainly produces computers and robots, however humanlike their behaviour may be.
On the evolutionary psychology of the hard problem: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001109
Wiki article: The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, a concern with the moral behavior of artificial moral agents (AMAs).
Afterlife Teaching From Stephen the Martyr by Michael Cocks is published by White Crow Books and available from Amazon and other bookstores.
Next blog October 2