The article includes and is predicated on the following statements:
"general-purpose AI models, with examples like ChatGPT"
"general-purpose superhuman model"
"weak-to-strong generalization"
It's widely accepted that artificial intelligence (AI) falls into three classes:
• Artificial Narrow Intelligence (ANI), or “weak” AI: systems built for a narrowly defined set of specific tasks
• Artificial General Intelligence (AGI), or “strong” AI: systems that can think and make decisions the way humans do
• Artificial Super Intelligence (ASI): systems that surpass human intelligence
The article's terminology appears to echo this well-established vocabulary while ignoring its important distinctions.
ChatGPT is in no way a "general-purpose AI model". It is a textbook example of ANI: it performs a narrowly defined set of specific tasks, namely understanding human language (Natural Language Processing, or "NLP") and responding to text- or image-based prompts with text or images derived from its training data.
It does not, for example, compose original music, edit videos, or conduct original drug research. Other narrow, task-specific ANI systems do those things and only those things, whereas humans, who actually possess general intelligence, can do all of them.
I've yet to see anyone (OpenAI included) make a credible start on actual AGI, which might (or might not) produce AI that thinks and makes decisions like us, let alone AI that surpasses us.
The topics this article deals with are important, and it's good that people are beginning to consider them, but it is way, way, WAY too early to treat them as essential to solve in the near term. We don't even properly understand what intelligence is yet.