Talk Info

Company affiliation: IU International University of Applied Sciences / Accenture

Time & Room

Starting Time: 14.11.2025 09:55
Duration: 25 Minutes
Room: Data Centers / Infrastructure / Management (DIM)

Talk Details

Bias and opacity in AI systems threaten fairness, trust, and accountability across domains—particularly in healthcare, where data quality, diversity, and representativeness are critical, and flawed predictions can have detrimental real-world consequences. Explainable AI (XAI) goes beyond transparency: it acts as a catalyst for bias detection, fairness assessment, quality assurance, and continuous model improvement. This presentation explores how imbalanced or incomplete data can lead to inequitable outcomes and how XAI methods can uncover, quantify, and mitigate such effects to enable fairer, more inclusive model development.

Through concrete examples, the talk connects XAI to principles of fairness, robustness, and sustainability-aware model design. It provides a detailed overview of how XAI is implemented in practice, showcasing feature-attribution methods such as SHAP and LIME and discussing both their interpretive value and their computational costs.
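To give a flavor of the attribution idea underlying SHAP (this sketch is illustrative and not part of the talk materials), the following pure-Python snippet computes exact Shapley values for a hypothetical three-feature toy model, where features absent from a coalition are replaced by a baseline value, one common choice of value function:

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical toy model: a linear term plus an interaction of features 1 and 2
    return 2.0 * x[0] + x[1] * x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all 2^n coalitions.
    Features outside the coalition are set to their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                x_with = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                x_without = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(x_with) - f(x_without))
    return phi

x, base = [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]
phi = shapley_values(model, x, base)
print(phi)  # per-feature attribution; the interaction term is split between features 1 and 2
```

By the efficiency property, the attributions sum to `model(x) - model(base)`, which is what makes Shapley values attractive for auditing individual predictions.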

As XAI can be computationally demanding, the discussion addresses how explainability techniques can evolve to reduce energy use, leverage efficient approximation strategies, and contribute to the sustainability of the AI lifecycle.