Artificial Intelligence (AI) plays an increasingly integral role in determining our day-to-day experiences. The applications of AI are no longer limited to search and recommendation systems, such as web search and movie and product recommendations; AI is also being used in decisions and processes that are critical for individuals, businesses, and society. With web-based AI solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. Because many factors play a role in the development and deployment of AI systems, these systems can exhibit different, and sometimes harmful, behaviors. For example, the training data often comes from society and the real world, and thus may reflect society's biases and discrimination toward minorities and disadvantaged groups. For instance, minorities are known to face higher arrest rates than the majority population for similar behaviors, so building an AI system without compensating for this bias is likely to exacerbate the prejudice. These concerns highlight the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable, so as to avoid unintended consequences and compliance challenges that can be harmful to individuals, businesses, and society.
In this tutorial, we will present an overview of responsible AI, highlighting model explainability, fairness, and privacy in AI; key regulations and laws; and techniques and tools for providing understanding around web-based AI/ML systems. We will then focus on the application of explainability, fairness assessment and unfairness mitigation, and privacy techniques in industry, presenting practical challenges, guidelines for using such techniques effectively, and lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning application domains such as search and recommendation systems, hiring, sales, lending, and fraud detection. We will emphasize that topics related to responsible AI are socio-technical, that is, they sit at the intersection of society and technology. The underlying challenges cannot be addressed by technologists alone; we need to work together with all key stakeholders, such as customers of a technology, those impacted by a technology, and people with backgrounds in ethics and related disciplines, and take their input into account while designing these systems. Finally, based on our experiences in industry, we will identify open problems and research directions for the data mining and machine learning community.
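To make the fairness-assessment theme concrete, the following is a minimal sketch of one widely used group fairness metric, the demographic parity difference: the absolute gap in positive-prediction rates between two groups. The function name, predictions, and group labels here are illustrative assumptions, not drawn from any system discussed in the tutorial.

```python
# Hypothetical sketch: demographic parity difference for a binary classifier.
# Measures the absolute gap in positive-prediction rates across two groups.
from typing import Sequence


def demographic_parity_difference(preds: Sequence[int],
                                  groups: Sequence[str]) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        members = [preds[i] for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(members) / len(members)  # positive rate within group g
    a, b = rates.values()
    return abs(a - b)


# Toy data: group "a" receives positive predictions at rate 0.75,
# group "b" at rate 0.25, so the disparity is 0.5.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value of 0 would indicate both groups receive positive predictions at equal rates; in practice, assessment tools report this alongside other metrics (e.g., equalized odds), since no single metric captures all notions of fairness.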