Human-Centered Approach to Fair & Responsible AI


News: In response to the COVID-19 pandemic, we will hold a virtual workshop on Sunday, April 26th. As is typical for CHI workshops, participation will be limited to authors of accepted papers. We are exploring ways to make the keynote publicly accessible.

News: Our keynote speaker will be Ece Kamar from Microsoft Research AI! More information will be posted soon.

Motivation 
As AI changes the way decisions are made in organizations and governments, it is ever more important to ensure that these systems work according to the values that diverse users and groups find important. Researchers have proposed numerous algorithmic techniques to formalize statistical fairness notions, but emerging work suggests that AI systems must account for the real-world contexts in which they will be embedded in order to actually work fairly. These findings call for an expanded research focus beyond statistical fairness that includes fundamental understandings of human uses and the social impacts of AI systems, a theme central to the HCI community. (For a full review of existing work, you can read the workshop proposal.) 

Goals and outcomes 
This one-day workshop aims to bring together a diverse group of researchers and practitioners to develop a cross-disciplinary agenda for creating fair and responsible AI systems. We invite academic and industry researchers and practitioners in the fields of HCI, machine learning (ML) and AI, and the social sciences to participate. By bringing together an interdisciplinary team, we aim to achieve the following outcomes: 
1) Synthesis of emerging research discoveries and methods. An emerging line of work seeks to systematically study human perceptions of algorithmic fairness, explain algorithmic decisions to promote trust and a sense of fairness, understand human use of algorithmic decisions, and develop methods to incorporate these insights into AI design. How can we map the current research landscape to identify gaps and opportunities for fruitful future research? 
2) Design guidelines for fair and responsible AI. Existing AI fairness toolkits aim to support algorithm developers, and existing human-AI interaction guidelines focus mainly on usability and experience. Can we create design guidelines that help HCI and user experience (UX) practitioners and educators design fair and responsible AI? 

What you will get out of the workshop
All participants will become familiar with state-of-the-art research findings and industry projects, and will get to know colleagues and potential collaborators interested in the topic. We will form working groups at the workshop so that interested participants can continue to work on the two topics above. We also hope to promote tighter collaboration among HCI, AI/ML, and the social sciences. 
  • If you are from HCI: Learn state-of-the-art algorithmic techniques and the philosophical and social-theoretical framings of AI fairness.
  • If you are from AI/ML: Learn human-centric methods in order to understand human perceptions of fairness and evaluate fair and responsible AI systems, and learn about real-world social factors that algorithms need to account for. 
  • If you are from social sciences: Learn current interventions to achieve fair and responsible AI. 

How to participate
To participate, submit a 2-4 page position paper (including references) in the CHI extended abstract format via EasyChair by February 11, 2020. 

We are open to diverse forms of submission, including reports on empirical research findings on fair and responsible AI, essays that offer critical stances and/or visions for future work, and show-and-tell case studies of industry projects.
Potential topics include:
  • Human biases in human-in-the-loop decisions
  • Human perceptions of algorithmic fairness
  • Development & evaluation of fair ML models
  • Explanations & transparency of algorithmic decisions
  • Methods for stakeholder participation in AI design
  • Decision-support system design
  • Algorithm auditing techniques
  • Ethics of AI
  • Sociocultural studies of AI in practice

Position papers will be reviewed by two organizers and evaluated on their quality, novelty, and fit with the workshop theme. Accepted papers will be posted on the website. At least one author of each accepted paper must attend the workshop, and all participants must register for both the workshop and at least one day of the conference. 


Important Dates
  • Position paper deadline: February 11, 2020 at 23:59:59 AoE (UTC-12).
  • Notification: February 28, 2020 
  • Workshop at CHI2020: April 26 (Sunday), 2020

Submission URL
Submit your paper via EasyChair.


FAQ

Should the submission be anonymized?
No, please include your names and affiliations in the submission.

What time zone is used for the submission deadline?
Anywhere on Earth (AoE).  That's UTC-12.

Does the page limit apply to references?
Yes, the page limit includes the references.

Should submissions be conceptual or experimental? Qualitative or quantitative?
We welcome all of these styles of work.

May we submit something that has already been published, accepted for publication, or is under submission elsewhere?
Your submission to us should be an original write-up, but as far as we are concerned it may be based on research reported in another submission and/or publication.

Do you have travel stipends or student grants?
Sadly, no.

Publications


On the Nature of Bias Percolation: Assessing Multiaxial Collaboration in Human-AI Systems
Andi Peng, Besmira Nushi, Kori Inkpen, Emre Kiciman, Ece Kamar

Designing for Digital Agency: Equity Perspectives and AI in K-12 Education
Ariam Mogos, Laura McBain, Carissa Carter, Megan Stariha, Lisa Kay Solomon

Measuring Social Biases of Crowd Workers using Counterfactual Queries
Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Klaus Mueller

Getting Fairness Right: Towards a Toolbox for Practitioners
Boris Ruf, Chaouki Boutharouite, Marcin Detyniecki
