Art Notions in the Age of (Mis)anthropic AI


In this paper, I take the cultural effects of generative artificial intelligence (generative AI) as a context for examining a broader perspective of AI’s impact on contemporary art notions. After an introductory overview of generative AI, I summarize the distinct but often confused aspects of art notions and review the principal lines along which AI influences them: the strategic normalization of AI through art; the representation of AI art in the artworld, academia, and AI research; and the mutual permeability of art and kitsch in digital culture. I connect these notional factors with the conceptual and ideological substrate of the computer science and AI industry, which blends machinic agency fetishism, the equalization of computers and humans, sociotechnical blindness, and cyberlibertarianism. The overtones of alienation, sociopathy, and misanthropy in the disparate but somehow coalescing philosophical premises, technical ideas, and political views in this substrate remain underexposed in AI studies, so in the closing discussion I outline their manifestations in generative AI and introduce several viewpoints for a further critique of AI’s cultural zeitgeist. They add a touch of skepticism to pondering how technological trends change our understanding of art and in which directions they steer its social, economic, and political roles.


💡 Research Summary

Dejan Grba’s 2024 article “Art Notions in the Age of (Mis)anthropic AI” offers a comprehensive cultural critique of generative artificial intelligence, focusing on how text‑to‑image (TTI) models such as DALL·E, Midjourney, Stable Diffusion, and others reshape contemporary understandings of art. The paper begins with a brief technical overview: since the release of OpenAI’s CLIP in early 2021, diffusion‑based TTI services have lowered the barrier to entry, allowing amateurs, hobbyists, and professional artists alike to generate high‑fidelity visual content by crafting textual prompts. These prompts serve both as task definitions and evaluative criteria, positioning the user as a co‑creator who must negotiate the opaque latent space of the model. Grba notes that while the technology democratizes access, it also inherits the biases of its training data—largely scraped from dominant online platforms—and is subject to corporate filtering policies that constrain thematic and aesthetic choices.

The core of the article is a tripartite analysis of “art notions”: anthropological (art as a human social practice), ontological (what counts as an artwork), and disciplinary/taxonomic (how artistic fields are categorized). Grba argues that the anthropological dimension remains relatively stable, but the ontological and taxonomic dimensions are destabilized by AI‑generated outputs. Issues of copyright, exhibition, and market valuation become ambiguous when a machine produces a work without a clear authorial intent, prompting the emergence of a new “AI art” category that blurs traditional boundaries between artist and tool.

Grba then situates these artistic shifts within what he calls the ideological substrate of the computer science and AI industry. He identifies four interlocking concepts: (1) machinic agency fetishism, the tendency to ascribe autonomous creative agency to algorithms; (2) the equalization of computers and humans, a myth that frames AI as a peer rather than a tool; (3) sociotechnical blindness, the assumption that technology is value‑neutral and can be deployed without considering broader social impacts; and (4) cyberlibertarianism, which champions minimal regulation and market‑driven innovation. These narratives, he contends, serve to mask underlying power asymmetries, data‑driven homogenization, and the reinforcement of existing cultural hierarchies.

Crucially, Grba links these industry narratives to a set of darker philosophical undercurrents—alienation, sociopathy, and misanthropy. He argues that generative AI’s reliance on regurgitative learning amplifies stereotypes, marginalizes minority cultures, and creates a feedback loop where AI‑generated content is scraped to train subsequent models, further entrenching a narrow cultural canon. Algorithmic content filters, justified as protective measures, also act as gatekeepers that define what is “acceptable” in artistic expression, thereby neutering political, historical, and critical dimensions of creativity. The paper highlights how the gamified nature of prompt engineering triggers dopamine‑driven reward cycles, encouraging users to chase novelty rather than engage in sustained critical practice.

In the concluding section, Grba calls for a more nuanced, interdisciplinary critique of AI’s cultural zeitgeist. He proposes several avenues for future work: (a) rigorous ontological frameworks to delineate AI‑generated artifacts; (b) transparency in training data provenance and filtering criteria; (c) integration of AI ethics and critical media studies into art‑education curricula; and (d) policy interventions that protect human‑centered creativity while ensuring accountability for AI developers. By mapping the technical, aesthetic, and ideological dimensions of generative AI, the article underscores that the transformation of art is not merely a matter of new tools but reflects deeper sociopolitical dynamics that must be interrogated to prevent the erosion of artistic agency and cultural diversity.

