Susan Schneider, associate professor of philosophy at UConn, was invited last year to speak at a conference sponsored by NASA and the Library of Congress on astrobiology and preparing for the discovery of life beyond Earth. Her remarks, titled “Alien Minds,” were drawn from her research and writings on the computational nature of the brain, as well as new work in philosophy on superintelligence. Her basic premise is that the most advanced alien civilizations would likely be forms of artificial intelligence (A.I.).
How did a philosopher receive an invitation to speak to a group of astrobiologists at a NASA event?
NASA was interested in a piece I had written for the New York Times titled “The Philosophy of Her,” about uploading the brain. It was based on the story in the Spike Jonze film “Her,” which is about a romantic relationship between a human being and an ultra-intelligent computer program. NASA asked me to speak on the speculative topic of what alien minds could be like. This is not something I’d thought about before, frankly, but I took a NASA workshop invitation seriously and tried to bring new ideas to the table. There were a lot of distinctly philosophical problems to raise, because philosophers have for decades been debating whether A.I. can be conscious, whether it can think, and how it would think. I had the pleasure of learning some astrobiology and meeting with some of the finest astrobiologists in the country.
Would the first alien contact be with something more artificial than biological?
I actually think the first discovery of life on other planets will probably involve microbial life, though in my work on this topic I concentrate on intelligent life. I only claim that the most advanced civilizations will likely be postbiological. Several other well-known members of the astrobiology community have also argued for this, such as Paul Davies, Seth Shostak, and Steven Dick, although they didn’t discuss the new topic of superintelligence. If the astrobiologists are right that there are a lot of planets capable of giving rise to life, then given the fact that Earth is a lot younger than most planets—we’re galactic babies—it’s likely that these creatures have already reached beyond human intelligence by altering their biological natures. Consider the situation on Earth. Computer speed seems to be doubling every year or so. We’re now seeing people in computer science saying that A.I. is on the horizon. I think A.I. could eventually be smarter than us, and that humans might begin to alter their own thinking using silicon-based aids. Given the older, more advanced level of other civilizations relative to Earth, they might now be vastly smarter, insofar as they survived their technological maturity.
If artificial intelligence can be conscious, what form can it take?
It will likely be “superintelligent A.I.” There is currently a lot of discussion in the media and among A.I. specialists and philosophers about the possibility of A.I. becoming “superintelligent”—that is, advancing to a level of intelligence that outperforms humans in every domain, such as social intelligence and mathematical reasoning. These discussions have largely been about the genesis of superintelligence on Earth. Nick Bostrom, the head of the Future of Humanity Institute at Oxford, recently wrote an important book on this topic: Superintelligence: Paths, Dangers, Strategies. I argued that the most advanced alien civilizations would likely be superintelligent A.I.
But it is controversial to claim, as I do, that A.I. can be conscious. The topic of consciousness is a traditional subject in philosophy of mind, and philosophers frequently ask whether A.I., not being biological, can be conscious at all. Our best science suggests that a silicon-based system, like a robot, could be conscious if it was sophisticated enough. I’m not a chemist, but it looks like we can set up situations where artificial silicon-based neurons communicate with regular neurons and mimic their properties. Silicon is widely distributed throughout the universe, and for many reasons it may be a superior medium for information processing, so it seems like it could be used on other planets. Still, we’ll never be able to definitively rule out the possibility that even though a computer may look conscious—say, a very sophisticated android like the Data character in Star Trek—it may not be. It could convince us that it’s conscious and has experience, yet have no inner life. It may just be engaged in computation with no mental life, because there’s something special about our biology that is responsible for the fact that we feel awake and alive at this moment. That view is called “biological naturalism.” I have opposed that view in print, in the sense that I don’t see any scientific evidence for it. In a sense, though, we can never really definitively rule out this possibility. Consciousness is something experienced from the inside. Our vast universe may contain intelligence that is not conscious, but that is unlikely.
Your discussion about uploading the brain because it functions like a computer brings up all kinds of issues.
Philosophers love thought experiments. Would you be able to upload your brain, and transfer your consciousness to a computer, as Johnny Depp’s character did in the film “Transcendence”? There is a lot of good science fiction on this, such as Robert Sawyer’s Mindscan. The protagonist is a poor guy who has a brain tumor and decides he’s going to live forever by uploading his brain. He wakes up—but he’s still on the scanning table. His mind failed to transfer! All kinds of legal and ethical issues abound, because he’s already signed away all of his possessions to his clone, and his clone insists he is the original.
You also raise the serious question of the risks to humans from superintelligence as noted in the recent book Superintelligence by the Swedish philosopher Nick Bostrom. What are some of the concerns?
People like Bill Gates, Elon Musk of SpaceX and Tesla Motors, Max Tegmark at MIT, and Stephen Hawking have been making public comments about the dangers of superintelligent A.I. because they’ve been reading Nick Bostrom’s book. The book discusses the “Control Problem”: the problem of how you create A.I. that will work to ensure the survival of humanity and not inadvertently destroy it. Bostrom uses the example of a machine that decides all it wants to do is create paperclips, and in the process uses human cells and destroys humanity.
This type of scenario raises lots of questions about the Search for Extraterrestrial Intelligence (SETI). What could happen if there is contact with alien superintelligence?
Both Seth Shostak [director of the SETI Institute] and I suspect that a superintelligence is not going to be that interested in visiting us, because we will be of such lesser intelligence—would you really cross the universe to interact with an ant? What could we offer superintelligent aliens? So I don’t think an “Independence Day” scenario is likely to happen. But there was just a conference at the SETI Institute about Active SETI, the project to actively send signals out into space rather than merely listen for them. Shostak is a proponent of Active SETI, and wants to beam the entire Internet into outer space! I don’t think that’s a good idea. My view is that even if the chance is 1 percent that a superintelligence could be harmful for the reasons Bostrom outlined, we still shouldn’t do it. We don’t know what could be out there.
Does it surprise you that there has been a lot of media interest in your NASA presentation?
The general public is interested in life’s mysteries: Is there life on other planets? Is there anything after we die? Can science really explain everything? I find this wonderful.
By: Kenneth Best | Story courtesy of UConn Today