By Besnik Pula, Virginia Tech, bpula@vt.edu

In its landmark 2021 report on AI ethics, UNESCO cautions that a “digital and civic literacy deficit” could deepen divides between those who understand and shape computing technologies and those who remain excluded. This concern goes beyond issues of device ownership or software familiarity. It reflects how citizens, policymakers, and companies grapple with systems whose technical complexity makes meaningful engagement daunting. When the public lacks paths to gain relevant knowledge, a small coterie of experts and large corporate players end up directing the design, use, and regulation of emerging technologies—sometimes with little public scrutiny.
Many assume that the highly specialized nature of coding, data analytics, and AI development places these matters squarely within the realm of professional expertise. Yet Alfred Schutz’s classic essay “The Well-Informed Citizen” underscores a crucial point: knowledge in any society is socially distributed, and that distribution can perpetuate inequities in who ultimately participates in significant decisions. If entire communities are deemed too uninformed to weigh in, then laws, ethical guidelines, and technological priorities may reflect only a narrow slice of society’s interests.
Computing’s history defies the notion that its development and governance must remain the sole domain of experts. In the mid-20th century, military and corporate labs funded much of the early research, keeping most of the technology under institutional control. By the 1970s, however, the spread of microcomputers and personal computers (PCs) signaled a turning point. Instead of remaining hidden in government bureaus or university labs, computers began arriving on the (literal) workbenches of ordinary people, as machines that did not come pre-assembled but had to be creatively built from component parts. Crucially, this shift did not happen on its own: hobbyist communities such as the People’s Computer Company and similar groups fueled a grassroots movement to bring computing into everyday life. Their efforts laid the groundwork for the modular designs that enabled open architectures and, eventually, the open-source movement of our time.
Dismissing hobbyists of the past as mere enthusiasts overlooks their role as co-creators and informed citizens. Community-run user groups asked how computers might reshape culture, enhance communication, or threaten personal privacy, and this in turn informed how they thought about circuitry and code. Such groups became sites of collaboration and debate, blending technical exploration with broader questions about technology’s purpose in society. Many were inspired by the work of social critic Ivan Illich, who argued that technologies must become tools of conviviality rather than of exploitation and control. With their hands-on knowledge, hobbyists bridged the gap between credentialed technologists working in labs and uninitiated laypersons for whom computers were entirely foreign; they actively questioned whether powerful institutions alone should determine technology’s fate.
Today, the AI revolution mirrors many of these historical dynamics. AI underpins everything from social media algorithms to automated hiring systems, but public understanding of its inner workings and broader implications remains uneven. As UNESCO warns, steep knowledge gaps risk sidelining the people most affected by these systems, while corporations and governments forge ahead with powerful digital tools. Much as hobbyists did decades ago, today’s critics note that AI can perpetuate biases, erode privacy, and transform economies in ways that benefit some groups over others. Governance thus demands more voices in the conversation: an expansion of Schutz’s ideal of the well-informed citizen.
The hobbyist legacy offers a template: grassroots knowledge-sharing can demystify computing and nurture a more publicly accessible critical discourse. In practical terms, this might include freely accessible AI tutorials, community-based workshops, and open-source collaborations linked to broader social movements that seek to limit the concentration of computational power. Although AI’s technical underpinnings appear daunting, the principle of collective engagement remains the same as in the early PC era: citizens who are encouraged to learn, experiment, and exchange ideas are often those best positioned to question and improve new technologies.

Emerging solutions hinge on viewing knowledge not solely as an expert prerogative but as a shared resource. Recalling the hobbyists’ successes shows that co-creation need not demand advanced credentials if communities can access transparent explanations of crucial design decisions and take part in shaping policy choices. Fostering this ecosystem of informed citizens may help counter the current concentration of technological power in elite hands, ensuring that discussions of ethics, regulation, access, and rights take place in public arenas, not just behind closed corporate or government doors.
In the end, computing’s trajectory was never preordained. Far from following an inevitable path set by high-level experts and corporate interests, it emerged through active engagement among diverse groups with competing visions. Today, our task is to recognize the public as a legitimate stakeholder and, recalling Alfred Schutz’s emphasis on the social distribution of knowledge, to encourage broader participation in shaping technologies that increasingly influence every sphere of life.
—
Besnik Pula is Associate Professor and Director of International Studies at the Department of Political Science at Virginia Tech. His most recent book is Alfred Schutz, Phenomenology, and the Renewal of Interpretive Social Science (Routledge, 2024). He is currently working on a book project on computing and technology governance during and after the Cold War that builds on Schutz’s sociology of knowledge.