The diffuse void: algorithmic safety and the disappearance of judgment
Abstract

This article examines a central ethical transformation in contemporary regimes of algorithmic safety: the disappearance of discrete sites where responsibility can appear. While current governance frameworks emphasize oversight, monitoring, and auditability, the architectures through which large language models are trained and deployed reorganize decision-making into continuous processes of probabilistic optimization. As a result, responsibility becomes increasingly difficult to locate, even though human actors remain present within training and evaluation pipelines. Drawing on Hannah Arendt’s account of judgment, action, and the public appearance of responsibility, the article develops a theoretical framework for understanding this transformation. Arendt’s distinction between discrete acts of judgment and the statistical administration of behavior provides a lens for analyzing contemporary safety infrastructures. The article introduces the concept of post-agential governance to describe systems in which power operates without requiring actors who publicly appear as the authors of specific decisions. Three symptomatic indicators of this condition are examined: the absence of named signatories in safety releases, the displacement of normative reasoning by engineering vocabularies of tuning and drift, and the reliance on aggregate metrics to address representational harms. Finally, the article proposes institutional mechanisms that could restore identifiable moments of responsibility in algorithmic governance, including authorization tokens attached to model deployment events.
Published in AI and Ethics, 2026
Read the article here