Risk, Governance, and the Reconfiguration of Socio-Technical Power
I. Introduction: Why “Normal Technology” Is an Incomplete Frame
Recent scholarship has sought to demystify artificial intelligence (AI) by framing it as “normal technology,” a continuation of historical patterns of technological development rather than a civilizational rupture. This reframing performs an important corrective function. It counters exaggerated claims about imminent superintelligence, existential risk, and technological determinism, redirecting attention toward institutional capacity, governance, and social adaptation. Yet the normalizing move risks flattening AI’s most distinctive feature: not intelligence itself, but its role as a scalable system of decision mediation embedded across social, economic, and political domains.
This article argues that AI is best understood neither as exceptional nor as merely normal, but as a continuously adaptive socio-technical infrastructure that reorganizes how decisions are produced, distributed, and legitimized. AI does not need to exceed human intelligence to induce qualitatively distinct societal transformations. It is sufficient that AI systems increasingly structure labor markets, public administration, healthcare delivery, and civic participation through automated or semi-automated decision processes configured by human institutions yet operating at unprecedented scale and speed.
Framing AI as normal technology underestimates these dynamics by over-relying on historical analogies that treat diffusion as primarily technical and economic. This article proposes an alternative framing grounded in Risk Society Theory, emphasizing governance readiness, societal trust, and institutional adaptation as co-determinants of AI’s social impact. From this perspective, the core challenge of AI governance is not runaway intelligence, but the redistribution of risk, authority, and accountability in systems where decision-making is partially automated but never autonomous.

II. Theoretical Framework: Risk Society and AI Governance
Risk Society Theory, most prominently associated with Ulrich Beck, provides a useful lens for understanding AI’s societal role. In a risk society, the central political conflicts are no longer primarily about the distribution of wealth, but about the distribution of risks—many of which are technologically produced, institutionally mediated, and globally diffused.
AI fits squarely within this paradigm. Its risks are:
- Manufactured (arising from design choices, training data, and deployment contexts)
- Diffuse (distributed across populations rather than localized)
- Opaque (difficult for affected individuals to perceive or contest)
- Institutionally mediated (filtered through public and private governance structures)
Crucially, Risk Society Theory rejects technological determinism. Risks do not arise from technology alone, but from the interaction between technological systems and social institutions. Applied to AI, this means that harms such as discrimination, labor displacement, or civic exclusion are not inevitable consequences of automation, but contingent outcomes shaped by regulatory design, organizational incentives, and political choices.
This framework allows us to move beyond two seemingly competing claims:
AI is neither an uncontrollable force nor a neutral tool. It is a risk-producing infrastructure whose effects depend on governance capacity.
III. Human-Configured Decision Making: AI Without Autonomy
A persistent misconception in AI discourse is that meaningful societal transformation requires machine autonomy. In reality, AI systems remain deeply human-configured. Humans select objectives, define constraints, curate training data, set thresholds, and determine deployment contexts. What changes is not control, but the topology of control.
AI enables decision-making to be:
- Abstracted (decisions removed from local contexts)
- Standardized (uniform criteria applied across populations)
- Replicated (the same decision logic applied millions of times)
- Insulated (shielded from contestation by technical complexity)
These characteristics produce qualitative social effects even in the absence of machine autonomy. Labor markets are reshaped not because AI “decides” independently, but because human organizations embed algorithmic evaluations into hiring, scheduling, and performance management. Governance shifts not because AI governs, but because bureaucracies delegate discretion to systems designed to optimize efficiency or consistency.
The question is therefore not whether humans remain in control—they do—but how control is exercised, diffused, and legitimated when decisions are mediated by technical systems.
IV. A Multi-Layered Model of AI Diffusion
Traditional models of technological diffusion emphasize innovation, adoption, and economic incentives. While useful, they fail to capture AI’s institutional complexity. This article proposes a multi-layered diffusion model comprising four interacting dimensions:
1. Technical Capability
The availability and performance of AI models, infrastructure, and data.
2. Governance Readiness
The extent to which legal frameworks, regulatory institutions, and enforcement mechanisms are prepared to manage AI risks.
3. Societal Trust
Public confidence in institutions deploying AI, shaped by transparency, accountability, and historical experience.
4. Institutional Adaptation
Organizational capacity to integrate AI without undermining professional judgment, ethical norms, or democratic oversight.
Diffusion occurs unevenly across these layers. High technical capability paired with low governance readiness produces fragile systems prone to legitimacy crises. Conversely, slower technical adoption accompanied by strong institutional adaptation can yield more sustainable outcomes.
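To make the interaction between layers concrete, consider a minimal sketch in Python. Everything in it is illustrative: the 0-to-1 scores, the sector names, and the fragility heuristic are assumptions introduced here for exposition, not calibrated measures drawn from the diffusion literature.

```python
from dataclasses import dataclass

@dataclass
class DiffusionProfile:
    """Scores on the four diffusion dimensions, each on a 0-1 scale.

    The numeric scale and the fragility rule below are illustrative
    assumptions, not empirically calibrated measures.
    """
    technical_capability: float
    governance_readiness: float
    societal_trust: float
    institutional_adaptation: float

    def fragility_gap(self) -> float:
        """How far technical capability outruns governance readiness."""
        return max(0.0, self.technical_capability - self.governance_readiness)

    def is_fragile(self, threshold: float = 0.4) -> bool:
        """Flag the 'high capability, low readiness' configuration
        associated in the text with legitimacy crises."""
        return self.fragility_gap() > threshold


# Hypothetical sectors, scored for illustration only.
clinical_ai = DiffusionProfile(0.9, 0.3, 0.5, 0.4)
postal_ocr = DiffusionProfile(0.7, 0.6, 0.7, 0.8)

for name, profile in [("clinical_ai", clinical_ai), ("postal_ocr", postal_ocr)]:
    print(name, "fragile" if profile.is_fragile() else "stable")
```

The point of the sketch is the interaction term: no single dimension determines the outcome. It is the gap between capability and readiness that marks the fragile configuration described above.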
V. Empirical Evidence: Lessons from Healthcare Technologies
Healthcare offers a valuable empirical domain for examining AI diffusion. Historically, medical technologies such as diagnostic imaging, electronic health records (EHRs), and clinical decision support systems were initially framed as neutral efficiency tools. Over time, however, their deployment revealed systemic effects on professional autonomy, patient trust, and care equity.
Empirical studies of EHR adoption in the United States illustrate this pattern. While EHRs improved data availability and billing efficiency, they also increased clinician workload, standardized clinical decision pathways, and introduced new forms of surveillance. These outcomes were not driven by technological inevitability, but by institutional incentives embedded in reimbursement structures and compliance regimes.
AI-driven diagnostic tools follow a similar trajectory. Evidence suggests that while algorithmic systems can outperform clinicians in narrow diagnostic tasks, their real-world impact depends on integration into clinical workflows, interpretability, and liability frameworks. In settings with strong professional norms and oversight, AI functions as an augmentative tool. In under-resourced environments, it risks becoming a substitute for human judgment.
The lesson is clear: technology does not determine outcomes; institutions do.
VI. Inequality, Labor Segmentation, and Digital Citizenship
AI’s societal impact is most visible in its distributional effects. Empirical evidence increasingly demonstrates that AI systems reinforce existing inequalities when deployed without contextual safeguards.
Inequality
Predictive systems trained on historical data reproduce structural biases. Empirical audits of algorithmic credit scoring, hiring tools, and welfare eligibility systems consistently show disparate impacts along socioeconomic and racial lines. These outcomes persist even when models are technically accurate, highlighting the distinction between statistical performance and social justice.
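The distinction between statistical performance and social justice can be made concrete with a standard disparate impact audit. The sketch below uses synthetic, invented numbers; the four-fifths threshold follows the familiar U.S. EEOC rule of thumb, but the groups and decisions are fabricated purely for illustration.

```python
# Disparate impact audit sketch: synthetic, invented numbers.
# A model can score well on accuracy while selecting groups at
# sharply different rates; the two metrics answer different questions.

def selection_rate(decisions):
    return sum(decisions) / len(decisions)

def accuracy(pred, true):
    return sum(p == t for p, t in zip(pred, true)) / len(pred)

# 1 = selected, 0 = rejected; the "true" lists are the correct labels.
group_a_pred = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
group_a_true = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
group_b_pred = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
group_b_true = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0]

rate_a = selection_rate(group_a_pred)   # 0.8
rate_b = selection_rate(group_b_pred)   # 0.2
ratio = rate_b / rate_a                 # 0.25

print(f"accuracy A: {accuracy(group_a_pred, group_a_true):.2f}")  # 0.90
print(f"accuracy B: {accuracy(group_b_pred, group_b_true):.2f}")  # 0.90
print(f"selection-rate ratio: {ratio:.2f}")
# Four-fifths rule of thumb: ratios below 0.8 signal disparate impact.
print("disparate impact flag:", ratio < 0.8)
```

Both groups see identical accuracy, yet one is selected at a quarter of the other’s rate: precisely the gap that accuracy metrics alone cannot surface.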
Labor Segmentation
AI does not eliminate labor uniformly; it restructures it. Empirical labor market data show polarization, with high-skill workers leveraging AI to increase productivity while lower-skill workers face task fragmentation and increased surveillance. Algorithmic management systems exemplify this shift, converting discretionary labor into quantifiable metrics.
Digital Citizenship
AI mediates access to public services, information, and civic participation. When algorithmic systems determine eligibility, visibility, or priority, individuals’ status as digital citizens becomes contingent on opaque criteria. Empirical research on automated welfare systems demonstrates reduced procedural fairness and diminished opportunities for contestation.
These effects underscore why AI governance cannot be reduced to technical risk mitigation alone: at stake are fundamental questions of citizenship, dignity, and democratic accountability.
VII. Historical Analogies and Governance Lessons
Comparisons to electricity, aviation, or the internet are often invoked to normalize AI. While instructive, these analogies obscure key differences. AI is not merely an enabling infrastructure; it is a decision-structuring infrastructure.
A more instructive comparison is to pharmaceutical regulation. Drugs are not treated as general technologies subject only to market forces. Instead, sector-specific regimes govern development, approval, post-market surveillance, and liability. These regimes reflect an acknowledgment that risk varies by context and use.
The governance lesson is that uniform regulatory frameworks are ill-suited to heterogeneous risk profiles. AI training and deployment vary dramatically across sectors. A language model used for creative assistance poses different risks than one embedded in clinical decision-making or immigration adjudication.
VIII. Regulatory Trajectories: EU and U.S. State Frameworks
The emerging regulatory landscape reflects this tension. The EU AI Act adopts a risk-based classification system, differentiating obligations by use case rather than by technology alone. This represents a shift toward contextual governance, though its effectiveness will depend on enforcement capacity and interpretive clarity.
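The Act’s contextual logic can be sketched schematically. The tier names below track the Act’s widely cited four-level taxonomy (unacceptable, high, limited, minimal), but the specific use-case mappings are simplified assumptions for illustration, not a statement of the Act’s actual legal requirements.

```python
# Schematic of use-case-based risk tiers, loosely modeled on the EU AI
# Act's four-level taxonomy. The mapping below is a simplified
# illustration, not a statement of what the Act requires in any case.

RISK_TIERS = {
    "social_scoring_by_government": "unacceptable",  # prohibited practices
    "clinical_decision_support":    "high",          # conformity assessment, oversight
    "hiring_screening":             "high",
    "customer_service_chatbot":     "limited",       # transparency duties
    "creative_writing_assistant":   "minimal",       # no specific obligations
}

def obligations(use_case: str) -> str:
    tier = RISK_TIERS.get(use_case, "unclassified")
    return f"{use_case}: {tier} risk tier"

# The same underlying model lands in different tiers depending on use.
for case in ("creative_writing_assistant", "clinical_decision_support"):
    print(obligations(case))
```

The design point is that the lookup key is the use case, not the model: the same underlying system can fall into different tiers depending on where it is deployed.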
In the United States, state-level AI regulations increasingly focus on sector-specific harms, particularly in employment, healthcare, and consumer protection. While fragmented, these frameworks recognize that AI risks are not uniform and require tailored responses.
Both approaches implicitly reject the notion that AI can be governed effectively as a single category of “normal technology.”
IX. Multi-Stakeholder Governance as a Structural Necessity
Given AI’s embeddedness across public and private domains, governance cannot be monopolized by regulators alone. A multi-stakeholder model is not merely normatively attractive but structurally necessary.
Such a model includes:
- Regulators, setting baseline standards and enforcement mechanisms
- Industry, making design choices and maintaining internal governance
- Professional communities, preserving domain-specific norms
- Civil society, providing oversight and contestation
- Affected individuals, contributing experiential knowledge to risk assessment
Multi-stakeholder governance aligns with Risk Society Theory by recognizing that legitimacy emerges from participatory risk negotiation rather than top-down control.
X. Conclusion: Reframing AI’s Normality
AI is not an existential anomaly, nor is it simply another normal technology. Its distinctiveness lies in its capacity to mediate decisions at scale while remaining embedded in human institutions. Accepting this reframing allows us to move beyond false dichotomies between alarmism and complacency.
The central governance challenge is not to prevent AI from becoming uncontrollable, but to ensure that its integration into social systems remains contestable, accountable, and proportionate to risk. Historical experience, empirical evidence, and emerging regulatory frameworks all point toward the same conclusion: AI governance succeeds not by normalization, but by contextualization.
Reframing AI in this way does not diminish its significance. It grounds it—where it belongs—in the institutions, risks, and collective choices that ultimately shape technological futures.