On 12 February 2026, NCC Slovakia will host the inaugural online lecture of the AI Accountability Dialogue Series, addressing the pressing issue of responsibility gaps in artificial intelligence systems.

There is an extensive debate about responsibility gaps in artificial intelligence. These gaps are situations of normative misalignment: someone ought to be responsible for what has occurred, yet no one actually is. They are traditionally traced to a lack of adequate knowledge of how an artificial intelligence system arrived at its output, as well as a lack of control over that output. Although many individuals involved in the development, production, deployment, and use of an AI system possess some degree of knowledge and control, none of them has the level of knowledge and control required to bear responsibility for the system’s good or bad outputs. To what extent do contemporary AI systems actually exhibit this lack of knowledge and control at the level of outputs?

From a technical perspective, relevant knowledge and control are often limited to the general properties of artificial intelligence systems rather than to specific outputs. Actors typically understand the system’s design, training processes, and overall patterns of behavior, and they can influence system behavior through design choices, training methods, and deployment constraints. However, they often lack insight into how a particular output is produced in a specific case and lack reliable means of intervention at that level.

The lecture will offer several insights into these questions. In addition, we will show that the picture is even more complex: there are different forms of responsibility, each tied to distinct conditions that must be met. Accordingly, some forms of responsibility remain unproblematic even for AI system outputs, while others prove more challenging.
