Algorithms can certainly work differently than humans think (when humans bother to think at all) -- following different execution paths with different evaluation and weighting factors -- but I'm not sure that the article you reference, excellent and fascinating though it is, actually makes the case that algorithms think.
In fact, the quote "a formal representation of thinking is not enough to engender consciousness" seems to indicate that the author may believe the opposite to be the case.
So for the moment at least, "stochastic parrot" is still a perfectly accurate way to describe what generative AI algorithms do and how they do it. And without models of the physical world they describe, they are bound to spew nonsense that sounds plausible but is easily disproved.