Doug Wilson
1 min readDec 4, 2023


Of course.

Assuming one believes AGI is even possible, it will be important to be able to constrain both its goals and the tactics it uses to accomplish them, as well as to understand how it reaches conclusions, etc.

Organizations like Anthropic are focused on parts of this, and that's a good thing.

But even companies that claim to be pursuing AGI aren't really. Chat, games, and cybersecurity, where most "AGI" companies are working today, aren't general at all; they're just a little less narrow.

As a human being, I can do a little security work while chatting with friends and colleagues and then take a break with a new game. That's general intelligence.

What I object to is the fear-based notion that we are somehow going to leap forward from slogging through ANI and magically find ourselves confronting ASI, having skipped AGI entirely, which is Mr. Pueyo's "OpenAI and the Biggest Threat in the History of Humanity" premise.
