Celebrating student success: our MA students at the European Workshop for Algorithmic Fairness 2025

Two of our MA Philosophy of Science students, Iqra Aslam and Thomas Leis, and their MA supervisor Dr. Donal Khosrowi, presented their three research projects at the European Workshop for Algorithmic Fairness (EWAF 2025). EWAF25 brought together researchers from computer science, philosophy, social science, and law to explore themes surrounding algorithmic fairness, especially in Machine Learning (ML) and AI. Thomas and Iqra received valuable feedback from the interdisciplinary community that will inform the development of their upcoming thesis projects.

Iqra Aslam: Ethical Challenges of Machine Unlearning (MU)

Iqra presented a poster version of her upcoming Master's thesis project, which focuses on the challenges of Machine Unlearning (MU). MU encompasses current efforts to remove unwanted information (such as sensitive, copyrighted, or biased/toxic content) from trained machine learning models without compromising model performance. Her research highlights a gap between what MU claims to do and the rightful demands and expectations of stakeholders whose data is used. Access her extended abstract, which is published in the proceedings of the EWAF conference.

Thomas Leis: Rethinking Anthropomorphism in Mental Health Chatbots (MHCBs)

Thomas presented a poster on his upcoming Master's thesis research on mental health chatbots (MHCBs), emerging AI-based chat services designed to help individuals manage their mental health. While a certain degree of anthropomorphism (human-likeness) is necessary to facilitate human-AI interaction and improve user experience, Thomas argues that some forms of anthropomorphization of MHCBs are ethically problematic, such as marketing language that paints them as an “ally that’s with you through it all” or that suggests they are better than human therapists. Such anthropomorphization, he argues, raises concerns about vulnerable users overtrusting the systems, oversharing personal information, and foregoing professional care. You can read Thomas’ published abstract here.

Donal Khosrowi: Self-Fulfilling Predictions by ML Systems

Donal Khosrowi presented joint work with Markus Ahlers (Tübingen) and Philippe van Basshuysen (CELLS Hannover) titled “We Need to Talk about Self-fulfilling Predictions”. They argue that many Machine Learning (ML) systems may be performative: they don’t just predict outcomes in the world, but their predictions causally affect these outcomes, including in ethically problematic ways. Read their full paper published in the proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’25).