Challenging Unrealistic AI Assertions

Doug Wilson
5 min read · Jan 17, 2025


Who knows? You do, actually.

There are some pretty wild assertions being made by many people these days — some rich and famous, some in academia, and some by artificial intelligence (AI) grifters hoping to keep alive the flood of venture capital investment money that funds their lifestyles.

Case in point: an Elon Musk post claiming that AI will “superset” the intelligence of any single human by the end of 2025, and perhaps that of all humans by 2027/2028.

Normally, I defer to those with verifiable credentials: I believe experts who have worked hard to gain their expertise, and anyone who basically makes sense. Lunatics and liars need not apply (and even experts can have bad days, harbor questionable motives, or just be wrong).

But when the rich and famous, academicians, and even those who helm AI companies make pronouncements that make us go, “Eeek!” or “WTF?!”, how should we respond? And how can non-experts dare to question the opinions of “the mighty”?

Simple. By paying attention to details and by employing our critical thinking skills.

For example, here are my main concerns with Musk’s statement, which I consider nonsensical for the following reasons:

1. Incorrect Grammar, Leading to Ambiguity — “superset” is a noun, not a verb, so the statement “AI will superset the intelligence of any single human” is ambiguous … at best.

Perhaps he meant “supersede” (a verb) or “will result in a superset of”, but these are very different claims, each problematic in its own way.

“Supersede” would seem to indicate a “magical” leap from where AI is today — Artificial Narrow Intelligence (ANI), or “weak” AI, which handles a narrowly-defined set of specific tasks — straight to Artificial Super Intelligence (ASI), which surpasses human intelligence, skipping the intervening stage of Artificial General Intelligence (AGI), or “strong” AI, which can think and make decisions like us. There are so many problems with this baseless, “and then a miracle occurs” presumption that it’s hard to know where to begin. This is science, after all … right?

“Will result in a superset of” is somewhat less problematic, primarily because it is less specific and far more easily achievable (one would suppose), but conversely far less important or significant. In the mathematical sense of the word, a superset of “the intelligence of any single human” could be created by adding a single new aspect or example of intelligence to a single human being. Big whoop. I learned a G chord on the guitar yesterday. Does that count?

Adding a single new aspect or example of intelligence to “all humans” would be harder to measure and to prove, not least because it is also extremely ambiguous. All humans alive today? All who have ever lived, Einstein included? What are the parameters here? And remember, this is not MY statement. I’m not required to defend it. Carry your own water.
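To make the mathematical reading concrete, here is a minimal Python sketch (the set contents are my own invented illustration, not anything from Musk’s post):

```python
# In the mathematical sense, B is a superset of A if B contains every
# element of A. The sets below are invented purely for illustration.
human_intelligence = {"reasoning", "language", "empathy"}

# Adding just one new element produces a strict superset -- the trivially
# low bar that the "will result in a superset of" reading would satisfy.
augmented = human_intelligence | {"G chord on guitar"}

print(augmented.issuperset(human_intelligence))  # True
print(augmented > human_intelligence)            # strict superset: True
```

Which is exactly the point: satisfying that reading of the claim is mathematically trivial, and therefore uninteresting.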

2. The Lack of an Objectively Verifiable Definition of “Intelligence” — Based on common definitions of “intelligence”, e.g. “the ability to acquire and apply knowledge and skills” (Oxford Languages, https://languages.oup.com/google-dictionary-en), one could easily argue that current AI fails as intelligence altogether since it cannot itself acquire knowledge. It must be trained, i.e. have knowledge provided to it.

Even Unsupervised Learning (a subset of the Machine Learning type of AI — one of six main types), which can discover patterns and anomalies in large data sets without extensive training, needs human help to identify these patterns and to determine whether they are meaningful.
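As a toy sketch of what that looks like in practice (the data and the two-standard-deviation threshold here are my own invented illustration, not from any real system):

```python
import statistics

# Toy "unsupervised" anomaly discovery: no labels or training examples are
# involved. But note that a human still chose the method, chose the
# threshold (2 standard deviations), and must judge whether the flagged
# point actually means anything. Data invented for illustration.
readings = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 42.0, 9.7, 10.1]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

anomalies = [x for x in readings if abs(x - mean) > 2 * stdev]
print(anomalies)  # [42.0]
```

The algorithm flags the outlier, but it has no idea whether 42.0 is a sensor glitch, a data-entry typo, or a genuine discovery. That judgment call remains ours.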

Note: To learn more about the six main kinds of AI, please see my Artificial Intelligence (AI) Taxonomies article via the embedded link above or at the end of this article.

3. Which AI? — Musk’s statement contains still more ambiguity. By my own count (and depending on how one decides to classify work in the field of AI), there are at least six different kinds of AI: Machine Learning (ML), Natural Language Processing (NLP)/Natural Language Understanding (NLU), Neural Networks, Robotics, Expert Systems, and Fuzzy Logic.

Alternatively, AI can be grouped into the ANI, AGI, and ASI “stages” we’ve already discussed, and/or by different extents, e.g. reactive, self-aware, etc. We are currently in the early ANI stage with not even the faintest clue about what constitutes general intelligence (AGI), let alone super intelligence (ASI).

4. Lack of Citation — Musk claims that “AI” (of some unspecified type) will “superset” human “intelligence” in a few years and that this is “increasingly likely”.

Sez who? People who want to be taken seriously usually provide citations that support their statements. People who just expect to be fawned over, admired (or worshipped), and most of all obeyed are comfortable just saying stuff, no matter how outrageous or insupportable. Ahem.

5. Lack of Assessment Methodology — How is the intelligence of “any single human” or “all humans” to be aggregated, let alone assessed, for purposes of meaningful comparison?

6. Still More Lack of Citation/Attribution — Musk states confidently that the “Probability that AI exceeds the intelligence of all humans combined by 2030 is ~100%”.

Again, sez who? Him? So what? He says a lot of things, many of which prove to be wildly inaccurate.

What is the rational basis then for this outrageous claim — outrageous because we are at the very earliest, “baby steps” stage of narrow, weak ANI and cannot even conceive of how to define, let alone achieve, the next general AGI or super ASI stages?

By paying attention to details and by employing my own critical thinking skills, I conclude that this claim is nonsense. And you can (and should) do this too. Reach your own conclusion. Just follow the process.

I’m not saying that you can disbelieve things just because you don’t like them, don’t understand them, or personally disagree with them.

Hold yourself to the same high standards that you hold someone else to when making or evaluating a claim:

  • Pay attention to detail,
  • Define terms,
  • Be clear/avoid ambiguity,
  • Provide credible citations,
  • Consider and provide context,
  • Consider but don’t claim (especially to yourself) to know motives,
  • Be tenacious (don’t be intimidated, and don’t give up),
  • Be curious, not judgmental, i.e. practice inquiry rather than advocacy,
  • Keep your sense of humor,
  • Bow to no man,
  • Carry your own water, and
  • Show your work.

If, like me, you weren’t taught this process formally in school, you’ve had enough time to derive or discover it on your own.

Note: For more communication DOs and DON’Ts, please see my Considerate Communication article below.

In conclusion, I don’t believe that AI is going to kill us all or take all our jobs (although it may encourage some of us to update our skills a bit).

Stay frosty, my friends. Don’t buy the B.S.


Written by Doug Wilson

Doug Wilson is an experienced software application architect, music lover, problem solver, former film/video editor, philologist, and father of four.
