Doug Wilson
Nov 24, 2023


Ridiculous.

Unfounded, fear-based assumptions built on unquestioned presumptions, stacked precariously atop pooled ignorance and opinion, e.g. "how many months [do you think] will it be before the first superintelligent AI is created?"

ChatGPT (where we are today in 2023) isn't Artificial General Intelligence (AGI) -- where all this hypothetical cyber-mayhem is even a possibility -- at all. "It ain't the same f*ckin' ballpark. It ain't the same league. It ain't even the same f*ckin' sport," as Jules from Pulp Fiction would argue.

"ChatGPT utilizes Deep Learning techniques from the Neural Network world for the purpose of Natural Language Processing/Natural Language Understanding using Large Language Models." These are all part of "Artificial Narrow Intelligence (ANI) or 'weak' AI, narrowly-defined set of specific tasks".

That's the state of the art today.

We'd have to move from today's Artificial Narrow Intelligence to "Artificial General Intelligence (AGI), or 'strong' AI" -- systems that think and make decisions like us -- in order for a hypothetical, AI-driven acceleration to "Artificial Super Intelligence (ASI)," surpassing human intelligence, to even potentially take place.

The definitions above are from:

https://medium.com/@frickingruvin/artificial-intelligence-ai-taxonomies-caa2ddc6cc7e

ChatGPT certainly isn't going to make that leap, and realistically, the journey to anything even resembling AGI that "thinks" like us is decades or even centuries away.

Now, before you start typing defensively: John von Neumann, Ray Kurzweil (1990's "The Age of Intelligent Machines," 1999's "The Age of Spiritual Machines," 2005's "The Singularity Is Near"), Vernor Vinge, and others have been predicting the "singularity" since the late 1950s (more than half a century ago) ... and then periodically revising those predictions, a lot like still-unraptured believers.

It's also worth considering the other side. Many prominent neuroscientists, cognitive psychologists, linguists, engineers, and technologists -- e.g. Steven Pinker ("How the Mind Works" and "The Language Instinct") and Gordon Moore (yes, that Moore) -- disagree that such a progression is plausible at all.

And then there's the huge, implicit assumption in all of this that "runaway" AGI, possibly resulting in ASI, would be "bad" ... you know, 'cause we humans have done such a spectacular job of governing ourselves and managing the limited resources in our environment.

Another way to look at things is presented in Iain M. Banks' "Culture" novels, where AI "Minds" exist in hyperspace, think far more quickly and deeply than we do, and care for and protect what humans have become in a post-scarcity utopia.

So maybe let's take a deeeep, calming breath.

And in the meantime, it's inconsiderate to assume your audience already knows your jargon -- the meaning of "FOOM," for example.

Many read FOOM as "Fast Onset of Overwhelming Mastery" -- a sudden, runaway intelligence explosion -- but it's always considerate to define terms before using them.

https://medium.com/@frickingruvin/considerate-communication-46fea4d2a7f2

Written by Doug Wilson

Doug Wilson is an experienced software application architect, music lover, problem solver, former film/video editor, philologist, and father of four.
