Thursday, December 29, 2022

ChatGPT and the animals

Like most people I know, I've spent the last month or so experimenting with ChatGPT. I've asked it to write poetry; I've asked it arcane questions about various books and articles; I've asked it to create imaginary dialogues, podcasts, arguments, and debates between thinkers I admire from different centuries. And like most people who have put the new platform through its paces, I've come away with a mixed appraisal of its various skills. I've been impressed by its ability to handle requests that generally fall under the banner of what people think of as "creative" tasks; its poetry, for instance, especially within certain constraints, is a particular highlight. And I've enjoyed the dialogues it can come up with, spinning genuinely interesting fictional conversations between historical figures who, in reality, never met. I've been less taken with its logical abilities, which are questionable at best, and at worst make it abundantly clear that the bot isn't actually thinking, but just parroting text back at us. David Deutsch's flying-horse question gives a neat demonstration of this: the bot flatly contradicts itself and misunderstands the question in a way that can only mean it doesn't actually know what it's saying.

Playing with the bot has afforded one kind of pleasure; but nearly as fun and interesting has been to observe various public thinkers' reactions to this new technology. In particular, I've been struck by the far-reaching claims voiced by Tyler Cowen, who for much of this year has been making cryptic references to a coming revolution in AI tech that, he argues, will entirely transform how we use the internet, as well as how we work with and produce ideas. He says that ChatGPT is just the first step on this path--"bread crumbs, not dessert"--and although he could well be right, some of the individual claims he makes seem questionable to me.

For instance, he claims in a recent blog post that ChatGPT's linguistic competence likely narrows the gap between human and animal intelligence. This seems wrong to me--indeed, it's striking that one could make exactly the opposite argument just as plausibly (perhaps more plausibly). Although Tyler doesn't flesh out his version of the argument, I assume his line of thought would run something like this: ChatGPT displays enormous competency without needing to think; the competency approaches that of humans w/r/t language-use; this suggests that we aren't quite as special as we thought, and that other program-running devices like animals (which are like versions of ChatGPT, except they're programmed by evolution rather than by OpenAI) can approach the type of skills we have; thus we differ from them in degree rather than in kind.

However, that argument says much more about Tyler's "priors" than it does about humans or ChatGPT. If anything, to my eyes an argument with the exact opposite conclusion seems even more compelling! Here goes: ChatGPT shows that a program can display enormous competencies without needing to think; animals display enormous competencies, which many people want to attribute to thought; however, we can now see a demonstration that behavior that strikes a casual human observer as thought-like might not depend on thought at all; thus, it follows that animals might not actually need to think in order to perform the tasks they can perform; and if that's the case, the gulf between humans and animals is very likely to be wider than people think!

Of course, the fact that ChatGPT demonstrates that an entity can perform thought-like actions without thinking says nothing about the question of whether some other entity does need to think. So, this argument is purely suggestive. ChatGPT's ability to write sonnets doesn't ultimately settle the question of whether rats are conscious. But the version of the argument that I've laid out seems, to me, to increase the plausibility that complex behavior by non-humans can be accomplished algorithmically. Of course, just as Tyler's version of the argument says more about his priors than it does about animals, the same is true for my version. I was already inclined, intuitively, to buy the arguments of philosophers who think that consciousness exists only in people.

Tyler's other claims about the philosophical implications of ChatGPT seem equally stretched. He asks, for instance, why the aliens haven't visited us: now that we see how easily the trappings of intelligence can be conjured by programmers, surely evolution or some other such force must have created far more intelligence across the universe than we can know? But, to build on Deutsch's point about ChatGPT and the flying-horse question, the better conclusion to draw would be that the outward appearance of intelligent behavior is misleading, and that genuine thought remains rare, on our planet and possibly elsewhere as well.
