Doug Wilson
2 min read · Feb 16, 2025


Thanks, Amy! I'm glad we reached the same conclusion. And thank you very much for reading and for commenting!

In my own experience, people who aren't careful with important details like language and definitions, and who don't take steps to ensure that they're being understood correctly, that their claims are supported by experts and data, or that there are realistic ways to measure those claims, usually aren't hiding anything. They simply don't know how things actually work, and they have no problem making "authoritative" statements ... without any real, relevant authority.

Mr. Musk's statement is a perfect example. And we've seen plenty of claims like his not only proven spectacularly incorrect but shown to be misleading and even dangerous.

People like this seem primarily motivated by their own egos (and/or greed) and not constrained by facts (or consequences).

Yes, I've read a lot about transhumanism, starting 50+ years ago in science fiction, before the term was widely known. I love the idea of AI enabling a utopian, post-scarcity future rather than a dystopian hellscape. Please see Iain M. Banks's "Culture" novels (https://www.amazon.com/Culture-9-book-series/dp/B07WLZZ9WV) for one of the better examples.

I minored in physics and mathematics, and I've worked with computer software professionally since the 1990s, designing and leading teams to deliver capabilities and services that you've probably used yourself or are at least aware of.

I've read widely on related and foundational topics: Drexler's Engines of Creation (nanotechnology), Pinker's How the Mind Works and The Language Instinct, Kurzweil's The Age of Intelligent Machines and The Age of Spiritual Machines, and many others.

One thing about Kurzweil's (and others') confident predictions is that they've all been wrong in terms of time frame. And not just a little wrong. We were all supposed to have uploaded ourselves a decade or more ago, when "the singularity" arrived. Still waiting, and Mr. Kurzweil continues to predict it, in The Singularity Is Near (2005) and again, almost 20 years later, in The Singularity Is Nearer (2024).

I wish that we were being kept in the dark and that all of this were available now. But I see no reason to believe that this is the case and every reason to believe that each of these disciplines requires long, hard, slow, human effort with many setbacks along the way.

My own work on web application architecture over the past 25+ years has been an example of this. I needed scalable, on-demand infrastructure (networking, servers, etc) and was blocked for years without it ... until someone finally created virtualization and the cloud, and I could once again see a way forward.

So I think my assessment of AI's current level is pretty accurate: first baby steps in narrow, "weak" ANI; no credible pathway to general, "strong" AGI; and even less of an idea about superintelligent ASI.

We don't even have a solid, widely accepted, measurable definition of intelligence itself -- just a lot of (mostly contradictory) opinions and a lot of wild, unsupported claims from people seemingly motivated by ego, greed, etc.

I wish it weren't the case, but I'm afraid that's where we are.

Thanks again for reading and for commenting!

Written by Doug Wilson

Doug Wilson is an experienced software application architect, music lover, problem solver, former film/video editor, philologist, and father of four.
