There are two reasons I’m not concerned about ChatGPT and its byproducts.
First, it isn’t even close to the sort of artificial superintelligence that might conceivably pose a threat to humankind. The models underpinning it are slow learners that require immense volumes of data to construct anything akin to the versatile concepts humans can concoct from only a few examples. In this sense, it’s not “intelligent”.
Second, many of the more catastrophic artificial general intelligence scenarios depend on premises I find implausible. For instance, there seems to be a prevailing (but unspoken) assumption that sufficient intelligence amounts to limitless real-world power. If this were true, more scientists would be billionaires.
Cognition, as we understand it in humans, takes place as part of a physical environment (which includes our bodies) – and this environment imposes limitations. The concept of AI as a “software mind” unconstrained by hardware has more in common with 17th-century dualism (the idea that the mind and body are separable) than with contemporary theories of the mind as part of the physical world.
WHY THE SUDDEN CONCERN?
Still, doomsaying is old hat, and the events of the last few years probably haven’t helped. But there may be more to this story than meets the eye.
Among the prominent figures calling for AI regulation, many work for or have ties to incumbent AI companies. This technology is useful, and there is money and power at stake – so fearmongering presents an opportunity.
Almost everything involved in building ChatGPT has been published in research that anyone can access. OpenAI’s competitors can replicate (and have replicated) the process, and it won’t be long before free and open-source alternatives flood the market.