Conceptual Disruptions by AI (CDAI)

Lead:  Dr. Donal Khosrowi
Team:  Ina Gawel (doctoral researcher), Iqra Aslam (student assistant), Thomas Leis (student assistant)
Year:  2025
Date:  01-04-25
Funding:  NMWK Europa-Programm
Duration:  01.04.2025-31.10.2025
Completed:  yes


The Conceptual Disruptions by AI (CDAI) project is an NMWK-funded research project that investigates the conceptual disruptions raised by artificial intelligence and machine learning systems across a range of domains.

AI systems play increasingly important roles in science as well as in professional and everyday life, for instance in automating scientific discovery, enabling the creation of new images and texts, serving as conversational agents, or acting as tools for mental health support. These advances create a range of conceptual disruptions: fundamental concepts that we use to understand the role of AI, as well as our own roles, come under pressure. For example, what does it mean to be a scientific ‘discoverer’, ‘researcher’, ‘author’ or ‘creator’? Is AI just a ‘tool’ that humans use, or can AI systems exhibit characteristics such as ‘creativity’, ‘autonomy’ or ‘understanding’ that we consider essential for such roles? Can AI systems provide ‘care’ or figure as ‘therapists’? Can AI systems meaningfully ‘forget’ unwanted information? And so on.

Such disruptions create a number of pressing practical, legal, ethical and epistemological challenges: which tasks in the sciences (e.g. formulating new research questions and hypotheses) can we usefully delegate to AI systems? How should professionals understand their own role in relation to AI systems and their outputs as work becomes increasingly automated? These and other questions are at the center of numerous current controversies.

The CDAI project 1) systematically analyzes select conceptual disruptions caused by AI, 2) develops frameworks to adapt and improve basic concepts so that they better reflect the emerging roles of AI, and 3) makes concrete proposals for addressing the conceptual and practical disruptions that AI causes.

Complementing earlier research by Dr. Khosrowi and colleagues as part of the Machine Discovery and Creation (MDAC) project, the CDAI team, consisting of Donal Khosrowi (PI), Ina Gawel (doctoral student), and Iqra Aslam and Thomas Leis (student assistants), is currently working on the following research themes:


What does it mean for machines to ‘forget’?

As part of her upcoming MA research project, Iqra Aslam investigates the challenges of Machine Unlearning (MU), i.e. efforts to remove unwanted information (such as sensitive, copyrighted, or biased/toxic content) from trained machine learning models without compromising model performance. Her research highlights uncertainty around the different senses of what it means for machines to ‘unlearn’ or ‘forget’ data, and draws out gaps between what MU claims to do and the rightful demands and expectations of the stakeholders whose data is used.

Outputs: Access Iqra’s extended abstract, published in the proceedings of the European Workshop on Algorithmic Fairness (EWAF), here. A long-form paper based on Iqra’s project (co-authored with Donal Khosrowi and Rahul Nagshi) has just been accepted for the proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES).


Can/should mental health chatbots be anthropomorphic (human-like)?

Thomas Leis’ upcoming MA thesis research focuses on the emerging roles of mental health chatbots (MHCBs), which are designed to help individuals manage their mental health. Thomas investigates to what extent MHCBs may be anthropomorphic (human-like) and play roles traditionally associated with human therapists, and what principles can be used to assess whether such anthropomorphization is morally acceptable. While a certain degree of anthropomorphism is necessary to facilitate human-AI interaction and improve user experience, Thomas argues that some forms of anthropomorphization of MHCBs are ethically problematic, such as marketing language that paints them as an “ally that’s with you through it all” or suggests they are better than human therapists. In particular, he argues that anthropomorphization raises concerns about vulnerable users overtrusting MHCBs, oversharing personal information, and foregoing professional care.

Outputs: You can read Thomas’ extended abstract, published in the proceedings of the European Workshop on Algorithmic Fairness (EWAF), here.


Donal Khosrowi and Ina Gawel are currently working on several projects related to CDAI's core themes, and are preparing a large grant proposal to be submitted later this year. Details and outputs to follow.