Some readers might have seen a version of this story that appeared in the Washington Post last weekend about a Google AI program called LaMDA, which one of its engineers claimed was sentient and “conscious.” Here is the CNN version of that story, “No, Google’s AI Is Not Sentient.” Even the description of what it can do makes it clear this machine is just operating a sophisticated version of John Searle’s “Chinese Room” thought experiment. See Robert Kuhn’s interview with Searle on his amazing program Closer to Truth, which, by the way, is now being uploaded to YouTube. And for dozens of other programs exploring these fundamental questions, search the Closer to Truth website for the “Mind-Body Problem.”
A Sentient Computer? I Think Not…
I am a historian of ancient Mediterranean and ancient Near Eastern religions, so my comment here is strictly “outside my field,” as the saying goes. I have to agree with Roger Penrose that the idea that intelligence and consciousness (not to mention self-consciousness) are based on this kind of computational model of complex data association is a basic category mistake; see his enlightening interview with Lex Fridman, “Consciousness is Not a Computation.”
I don’t think one needs to move to some “wholly other” force and call it “Mind,” as if naming a phenomenon conveys understanding of it. I am quite sure our self-conscious minds are rooted in “this” world; I remain a Monist in that Whiteheadian sense. But our reductionistic assumption that the “material” is just STUFF, in contrast to some “spiritual” other, is just wrongheaded. It is as if naming Gravity, Light, and the Strong and Weak Nuclear forces somehow means we have understood them and made them part of some so-called “material” aspect of the cosmos. The problem here, as I see it, lies in the dualistic categories built into our language anytime we discuss the so-called “spiritual.”