
Hybrid Autonomy: Towards a Unified Framework on Adaptability and Ethical Trust in Autonomous Systems

Melis, Beatrice (Member of the Collaboration Group);
de Sanctis, Martina (Member of the Collaboration Group);
Inverardi, Paola (Member of the Collaboration Group)
2025-01-01

Abstract

The future of Artificial Intelligence (AI) lies in systems that enhance human capabilities without replacing human agency; in particular, the evolution of autonomous systems poses significant technical, ethical, and social challenges. These systems offer increasingly valuable opportunities, but their decision-making authority introduces risks of misalignment with human values and societal expectations. As these systems acquire the ability to make independent decisions, integrating ethical principles into their design becomes essential. This work proposes a framework for hybrid autonomy-based systems that enhance human capabilities while preserving ethical adaptability and fostering trust. The vision introduces the concept of dynamic autonomy, in which systems adjust their level of control based on contextual and soft-ethical factors. The primary objective is to align autonomous systems with users’ preferences and boundaries, bridging the gap between technical performance and ethical acceptability. Rather than treating autonomy as a fixed feature, this work seeks a balance between machine autonomy and human control that is fluid, flexible, and responsive. Drawing on multidisciplinary perspectives, the research questions address the challenges of personalising systems to users’ ethical preferences and of evaluating the impact on ethical explicability and trust. By emphasising adaptability and collaboration, this approach supports the responsible integration of intelligent autonomous technologies across diverse sectors, from personal assistance to care robots, ensuring that innovation aligns with the values of those it serves.
2025
ISBN: 9781643686110
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12571/36885