ChatGPT is not Bullshit?

Kan Yuenyong
2 min readJun 20, 2024

--

The article “ChatGPT is Bullshit” by Michael Townsen Hicks, James Humphries, and Joe Slater provides an engaging philosophical critique of large language models (LLMs) like ChatGPT, claiming that their outputs are best characterized as “bullshit” rather than mere errors or “hallucinations.” While the authors raise important points, their argument appears somewhat superficial due to a lack of technical depth.

Firstly, the authors argue that LLMs produce text with a “reckless disregard for the truth,” akin to Harry Frankfurt’s definition of bullshit. However, this perspective overlooks the fact that inaccuracies in LLM outputs are often technical issues related to data quality and training processes. In my experience with models like Phi-3, fine-tuning with specific datasets significantly reduces errors, indicating that these issues are manageable through targeted technical improvements.

Secondly, the authors’ philosophical background leads to a broad-brush critique that fails to recognize the nuanced advancements in LLM technology. For instance, when fine-tuned appropriately, models can provide accurate and coherent responses. This capacity for improvement challenges the article’s broad claim that LLMs are fundamentally indifferent to truth.

Moreover, the insistence on labeling AI errors as “bullshit” instead of “hallucinations” mischaracterizes the nature of these inaccuracies. “Hallucination” is a well-established term in AI research, describing plausible but incorrect outputs. By using this term, researchers emphasize the ongoing efforts to address and mitigate these errors through technical enhancements.

Additionally, the article seems to dismiss the potential for LLMs to be fine-tuned and improved through better training data. Practical experiences, such as enhancing model performance for specific tasks like legal text generation, show that targeted interventions can lead to significant improvements, contradicting the authors’ view that AI outputs are inherently unreliable.

In conclusion, while “ChatGPT is Bullshit” raises valid ethical questions, it lacks a comprehensive engagement with the technical realities and potential of LLMs. The critique would benefit from a more balanced perspective that considers both the philosophical implications and the practical advancements in AI technology. Readers should approach the article critically, recognizing the importance of technical solutions in addressing the issues highlighted.

PS: We’ve tested with some chatbot providers, and found that certain ethical guardrails may interpret the Thai prompt as asking the chatbot to recommend a loan with a high interest rate, which might violate ethical guidelines, leading to task denial. However, with the English prompt, there was no such issue. This might also reflect the nuanced intentions within different languages.

--

--

Kan Yuenyong

A geopolitical strategist who lives where a fine narrow line amongst a collision of civilizations exists.