TAI

Course: Trustworthy AI: Learning from Data with Safety, Fairness, Privacy, and Interpretability Requirements

Credits: 5

Hours: about 20

Teacher: Luca Oneto <luca.oneto@unige.it>

Tentative Schedule: TBD (the course was delivered in 2021 and will be delivered again in 2023)

Where: TBD

Exam: Short presentation (max 30 min) on how the concepts presented in the course can be used or extended during the student's PhD.

Course Description

Abstract:

It has been argued that Artificial Intelligence (AI) is experiencing a fast process of commodification. This characterization serves the interests of big IT companies, but it also correctly reflects the current industrialization of AI. As a consequence, AI systems and products are reaching society at large, and the societal issues raised by the use of AI and Machine Learning (ML) can no longer be ignored. Designing ML models from this human-centered perspective means incorporating human-relevant requirements such as safety, fairness, privacy, and interpretability, but also considering broad societal issues such as ethics and legislation. These aspects are essential to foster the acceptance of ML-based technologies and to comply with evolving legislation concerning the impact of digital technologies on ethically and privacy-sensitive matters.

Program:

  • Trustworthy AI;

  • Safety in AI: Sensitivity Analysis and Adversarial Learning;

  • Fairness in AI: from Pre-, In-, and Post-Processing Methods to Learning Fair Representations;

  • Privacy in AI: Federated Learning and Differential Privacy (a minimal illustrative sketch follows this list);

  • Interpretability/Explainability of AI: Making Models More Understandable.
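
To give a concrete taste of the techniques listed above, the following is a minimal sketch of the Laplace mechanism from differential privacy; the function name and parameter values are illustrative assumptions, not course material. The idea: a query's true answer is perturbed with Laplace noise whose scale is calibrated to the query's sensitivity and a privacy budget epsilon.

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon):
        # Release true_value with epsilon-differential privacy.
        # Noise scale = sensitivity / epsilon: a smaller epsilon
        # (stronger privacy) yields larger noise.
        scale = sensitivity / epsilon
        return true_value + np.random.laplace(loc=0.0, scale=scale)

    # Example: privately release the mean of 1000 values bounded in [0, 1].
    # The sensitivity of such a mean is 1/n.
    data = np.random.rand(1000)
    private_mean = laplace_mechanism(data.mean(), sensitivity=1.0 / len(data), epsilon=0.5)
    print(f"true mean: {data.mean():.4f}, private release: {private_mean:.4f}")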

References:

  • Winfield, A. F. et al. "Machine ethics: the design and governance of ethical AI and autonomous systems." Proceedings of the IEEE 107.3 (2019): 509-517.

  • Floridi, L. "Establishing the rules for building trustworthy AI." Nature Machine Intelligence 1.6 (2019): 261-262.

  • Biggio, B. and Roli, F. "Wild patterns: Ten years after the rise of adversarial machine learning." Pattern Recognition 84 (2018): 317-331.

  • Guidotti, R. et al. "A survey of methods for explaining black box models." ACM Computing Surveys (CSUR) 51.5 (2018): 1-42.

  • Liu, B. et al. "When machine learning meets privacy: A survey and outlook." ACM Computing Surveys (CSUR) 54.2 (2021): 1-36.