Coder worldviews, a concept theorized by Jason Miklian and Kristian Hoelscher (2026), refers to the phenomenon whereby AI systems embed the ideological, cultural, and epistemic assumptions of their creators into their outputs. Because AI development is concentrated among a narrow demographic of programmers in a small number of countries, these embedded worldviews have downstream consequences for political discourse, democratic participation, and public knowledge, particularly in the Global South, where local perspectives are systematically underrepresented in training data.
AI development is not globally distributed. The majority of AI systems used worldwide are created by teams concentrated in a handful of countries, speaking a limited set of languages, and operating within specific economic and political contexts. The coders and researchers building these systems bring their own assumptions about what matters, what counts as knowledge, what is ethical, and what goals are worth pursuing.
These coder worldviews become embedded in model architectures, training datasets, loss functions, and deployment decisions. A system trained primarily on English-language internet content reflects English-language worldviews. A model optimized for Western legal norms encodes Western assumptions about justice. These embeddings are often invisible to creators and users alike.
When AI systems trained on Western data and values are deployed globally, they project those worldviews into contexts where they may be culturally inappropriate, epistemically invalid, or actively harmful. A farmer in South Asia receives crop advice optimized for North American conditions. A policymaker in East Africa gets governance recommendations built on Western institutional assumptions. A journalist in the Global South gets story rankings shaped by algorithms trained on Western news values.
The concept of coder worldviews highlights that AI is not a neutral technology. It is a vehicle for the worldviews of its creators. Understanding whose worldviews are embedded in AI systems is essential for understanding their political and cultural consequences.
Miklian, Jason and Kristian Hoelscher. "A New Digital Divide? Coder Worldviews, the 'Slop Economy,' and Democracy in the Age of AI." Information, Communication and Society, 2026.