ChatGPT and other Large Language Model (LLM)-based generative chatbots do not think. For each word they emit, they calculate a probability distribution over possible next words and pick a likely one, parroting patterns from their vast sets of training data -- data created, at least initially, by human beings.
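The mechanism described above can be sketched in a deliberately tiny form: a toy next-word sampler. All the words and probabilities below are invented for illustration; a real LLM learns a vastly larger version of such a table from its training data, but the generation loop is the same in spirit.

```python
import random

# Invented toy model: for each context word, a probability
# distribution over possible next words.
toy_model = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, steps, seed=0):
    """Repeatedly sample a likely next word; no reasoning involved."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(steps):
        dist = toy_model.get(words[-1])
        if dist is None:  # no known continuation
            break
        # Pick the next word in proportion to its probability.
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 3))
```

The point of the sketch is what is absent: there is no world model, no goal, and no checking of the output -- only repeated sampling from learned word statistics.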
They have therefore been called "stochastic parrots", which I think is apt.
They have very limited memory: they can attend only to the text within their context window. They have no model of the physical world against which to examine or test their output. They do not reason. They do not solve problems. All they can do is synthesize answers from work already done by humans.
Therefore, they do not think.