Ethics and Information Technology
27(3)
DOI: https://doi.org/10.1007/s10676-025-09845-2
Abstract In this article, we examine a peculiar and frequently overlooked issue concerning large language models (LLMs) and generative AI more broadly: the phenomenon of output homogenization. The term describes the tendency of chatbots to structure their outputs in a highly recognizable manner, often amounting to an aggregation of verbal, visual, and narrative clichés, trivialities, truisms, predictable lines of argument, and the like. We argue that the most appropriate conceptual lens through which this phenomenon can be framed is that of Frankfurtian bullshit. In this respect, existing attempts to apply the BS framework to LLMs are insufficient, as they are chiefly formulated in opposition to so-called algorithmic hallucinations. Here, we contend that a further conceptual departure from Frankfurt's (1986) original metaphor is needed, distinguishing between the what-BS, which manifests in the falsehoods and factual inconsistencies of LLMs, and the how-BS, which is reified in the dynamics of output homogenization. We also discuss how the issues of algorithmic bias and model collapse can be framed as critical instances of the how-BS. The homogenization problem, then, is more significant than it initially appears, as it potentially exerts a powerful structuring effect on individuals, organizations, institutions, and society at large. We discuss this in the concluding section of the article.