Type of Credit: Elective
Advanced machine learning techniques such as deep neural networks have rapidly automated decision processes in domains including government, business, finance, and healthcare. Unlike traditional rule-based AI, the decisions of a modern AI system are built up over numerous iterations on input data, which makes the system behave like a black box to humans. Trustworthy AI research and practice therefore aim to open the black box of highly complex AI systems and to justify the cause and effect within their decision processes.
This course will introduce recent developments in Trustworthy AI. We will cover topics in AI fairness, explainability, and safety, focusing on approaches that provide provable correctness and quality guarantees. The course will consist of lectures, tutorials, and paper presentations. The instructor will give lectures and tutorials on selected topics in a self-contained manner. Students will form groups; each group will be responsible for giving one lecture, one tutorial, and one paper presentation. After taking this course, students will have a general knowledge of Trustworthy AI, as well as a deep understanding of specific techniques for practicing and researching formal AI fairness, explainability, and safety.
能力項目說明 Competency Description
This course provides an overview of selected topics in AI fairness and explainability, focusing on formal approaches. Specific topics include machine learning fairness, formal explanations, logical query languages, property inference, robustness analysis, and constraint solving. The course is research-oriented and will equip students with an algorithmic toolkit for further study of related and more advanced topics.
週次 Week | 課程主題 Topic | 課程內容與閱讀資料 Content and Reading Materials | 課程活動 Teaching Activities | 學習投入時間:課堂講授 In-class Hours | 學習投入時間:課程前後 Out-of-class Hours
---|---|---|---|---|---
1 | Course Introduction | Algorithmic bias in data science | Kick-off Meeting | 3 | 5
2 | Holiday | | | |
3 | Fairness | Fairness metrics: individual, group, local, and global fairness. Fair model training: pre-processing, in-processing, and post-processing. Structural equation modeling and counterfactual fairness | Name Form and Lecture | 3 | 5
4 | Fairness (cont.) | | Lecture and Group Discussion | 3 | 5
5 | Fairness (cont.) | | Lecture and Group Discussion | 3 | 5
6 | Fairness (cont.) | | Lecture and Group Discussion | 3 | 5
7 | Fairness (cont.) | | Lecture and Group Discussion | 3 | 5
8 | Formal XAI | First-order logic. Logical approaches to XAI. Abductive explanations. Contrastive explanations. Transformer programs. Property inference | Lecture and Group Discussion | 3 | 5
9 | Formal XAI (cont.) | | Lecture and Group Discussion | 3 | 5
10 | Formal XAI (cont.) | | Lecture and Group Discussion | 3 | 5
11 | Formal XAI (cont.) | | Lecture and Group Discussion | 3 | 5
12 | Formal XAI (cont.) | | Lecture and Group Discussion | 3 | 5
13 | XAI | Interpretable models. Global model-agnostic methods. Local model-agnostic methods. Neural network interpretations | Lecture and Group Discussion | 3 | 5
14 | XAI (cont.) | | Lecture and Group Discussion | 3 | 5
15 | XAI (cont.) | | Lecture and Group Discussion | 3 | 5
16 | XAI (cont.) | | Lecture and Group Discussion | 3 | 5
17 | XAI (cont.) | | Lecture and Group Discussion | 3 | 5
18 | TBD | Reserved for unfinished lectures | TBD | 3 | 5
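The group-fairness metrics listed for week 3 can be made concrete with a small sketch. Assuming binary predictions and a binary protected attribute (the data, names, and scenario below are illustrative, not drawn from the course materials), the statistical parity difference compares positive-prediction rates across the two groups:

```python
def statistical_parity_difference(preds, groups):
    """P(yhat=1 | group=0) - P(yhat=1 | group=1) for binary preds/groups."""
    rate = {}
    for g in (0, 1):
        sel = [p for p, a in zip(preds, groups) if a == g]
        rate[g] = sum(sel) / len(sel)
    return rate[0] - rate[1]

preds  = [1, 0, 1, 1, 0, 1, 0, 0]   # model decisions (e.g., loan approved)
groups = [0, 0, 0, 0, 1, 1, 1, 1]   # protected attribute value per individual
print(statistical_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value near 0 means the model selects both groups at similar rates; individual fairness, by contrast, compares the treatment of similar individuals rather than of groups.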
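Among the week 8 topics, an abductive explanation of a prediction is a subset-minimal set of features that, fixed to their values in the given instance, entails the model's output for every completion of the remaining features. A brute-force sketch for a toy three-feature boolean classifier (the classifier and feature meanings are illustrative assumptions, not from the course):

```python
from itertools import combinations, product

def f(x):
    # Toy classifier: approve iff (high income) or (employed and low debt).
    return x[0] or (x[1] and not x[2])

def entails(fixed, x, n=3):
    # Does fixing the features in `fixed` to their values in x force f's output?
    free = [i for i in range(n) if i not in fixed]
    for vals in product([False, True], repeat=len(free)):
        y = list(x)
        for i, v in zip(free, vals):
            y[i] = v
        if f(tuple(y)) != f(x):
            return False
    return True

def abductive_explanation(x, n=3):
    # Smallest feature subset entailing f(x); smallest is also subset-minimal.
    for k in range(n + 1):
        for s in combinations(range(n), k):
            if entails(set(s), x, n):
                return set(s)

x = (True, False, True)            # high income, unemployed, high debt
print(abductive_explanation(x))    # {0}: income alone entails approval
```

Practical formal-XAI tools replace the exhaustive loop over completions with SAT/SMT or MILP entailment queries, which is what lets the approach scale beyond toy models.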
Literature presentation: 30%
Paper presentation: 30%
Assignments: 30%
Participation: 10%
Bonus exercises: up to ~10%
You are encouraged to attend each meeting with the assigned readings prepared in advance. When you attend a meeting, you must be on time and remain there for the entire meeting. A three-hour meeting will often consist of two hours of the lecturer presenting technical papers, followed by one hour of group discussion on a selected article. Your grades for "Lecture participation" will be determined by how actively you participate in the lecturer's presentation, for example, by answering questions from the lecturer, making comments, and initiating further discussions. The group discussion session will be led by the lecturer in the first few weeks and by the students afterward. Each group must take charge of at least one group discussion session. Your grades for "Group discussion participation" will be determined by how your group presents the selected article and leads the discussion (when your group is in charge), and how your group contributes to the discussion (when your group is an audience) during the group discussion session.
You will be evaluated after every meeting based on the following criteria. Please note that contributions are not equivalent to merely attending a meeting and talking. The quality of your comments and responses will also be an important component of the evaluation.
Excellent Participation (A): (1) regularly initiates and contributes to meeting discussions; (2) regularly demonstrates substantial knowledge and insight; (3) frequently helps other students clarify and develop their viewpoints; (4) frequently helps others reach a synergistic understanding of the issues being discussed.
Good Participation (B): (1) frequently initiates and contributes to meeting discussions; (2) occasionally demonstrates substantial knowledge and insight; (3) occasionally helps others clarify and develop their viewpoints.
Fair Participation (C): (1) occasionally initiates meeting discussions; (2) occasionally contributes to meeting discussions; (3) demonstrates some knowledge and insight; (4) rarely responds constructively to contributions from other students.
Poor Participation (D/E): (1) never or rarely initiates meeting discussions; (2) never or rarely contributes to meeting discussions; (3) actively inhibits or impedes the course of discussion; (4) exhibits defensive behavior, such as aggression or withdrawal, rather than being thoughtful and considerate of others' ideas.
Algorithmic bias - Sina Fazelpour and David Danks
https://compass.onlinelibrary.wiley.com/doi/10.1111/phc3.12760?af=R
Are Algorithms Value-Free? - Gabrielle M. Johnson
https://www.gmjohnson.com/uploads/5/2/5/1/52514005/are_algorithms_value_free_.pdf
Algorithmic injustice - Abeba Birhane
https://www.sciencedirect.com/science/article/pii/S2666389921000155
Data-Owning Democracy or Digital Socialism? - James Muldoon
https://www.tandfonline.com/doi/full/10.1080/13698230.2022.2120737
Stop Explaining Black-Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead - Cynthia Rudin
https://arxiv.org/pdf/1811.10154.pdf
The Bias Dilemma - Oisín Deery and Katherine Bailey
https://ojs.lib.uwo.ca/index.php/fpq/article/view/14292
On the Advantages of Distinguishing Between Predictive and Allocative Fairness in Algorithmic Decision‐Making - Fabian Beigang
https://link.springer.com/article/10.1007/s11023-022-09615-9
Transparency in Complex Computational Systems - Kathleen Creel
https://www.cambridge.org/core/journals/philosophy-of-science/article/transparency-in-complex-computational-systems/4DB040EB28172CADF5F2858B62D0952C
Algorithmic and Human Decision Making - Mario Günther and Atoosa Kasirzadeh
https://www.mario-guenther.com/_files/ugd/70b9dd_ff087ae509034fb9b126dcf783182457.pdf
A Modern Pascal's Wager for Mass Electronic Surveillance - David Danks
https://static1.squarespace.com/static/5f6d0320212a261d8716949f/t/621319146907794d4dba3724/1645418773886/Telos-PascalsWager-Pub.pdf
The Surveillance Society - Oscar H. Gandy Jr.
https://academic.oup.com/joc/article-abstract/39/3/61/4210548
Risk Imposition by Artificial Agents - Johanna Thoma
https://johannathoma.files.wordpress.com/2021/02/moral-proxy-problem-feb-2021.pdf