The ethics of AI and working with data at scale: what are the experts saying
This post originally appeared on SAGE Ocean
If we were to do a text-mining exercise on all the incredible discussions at last week's 100+ Brilliant Women in AI & Ethics conference, education would beat every other topic by a mile. We talked about educating kids, heard teenagers share their thoughts on AI in poems and essays, and exchanged views on the nuances of teaching ethics in computing and in working with large volumes of social data, both for computer scientists and for experts from other disciplines.
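As a toy sketch of the kind of frequency count that rhetorical exercise implies (the topic labels and counts below are invented purely for illustration, not real conference data):

```python
# Toy illustration only: count how often each discussion topic comes up.
# The list of topics here is made up for the example.
from collections import Counter

discussion_topics = [
    "education", "education", "bias", "education",
    "regulation", "education", "language", "bias",
]

topic_counts = Counter(discussion_topics)
print(topic_counts.most_common(3))
# -> [('education', 4), ('bias', 2), ('regulation', 1)]
```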
There was resounding agreement in the packed room at Oxford University's Lady Margaret Hall that we are still in the testing phase. We know we have to improve education in AI and ethics for all ages, and we should encourage and explore new ways of teaching ourselves and the next generations about the benefits and dangers of developing machine learning-based products that rely on social data. Oxford University aims to do exactly that with the new £150 million-funded Institute for Ethics and AI, part of the Schwarzman Centre, a hub for the humanities. The institute will invite students to engage with new technologies in many ways, from art to critical thinking to programming. We've written about other research centers with similar aims on this blog.
We need ethics in AI and data science courses
There are at least 200 courses on the ethics of AI or data science taught at universities at the undergraduate level, and not just in computer science departments. Omidyar Network, Mozilla Foundation, Schmidt Futures and Craig Newmark Philanthropies ran a challenge last year and are jointly awarding "$3.5 million in prizes to promising approaches to embedding ethics into undergraduate computer science education, empowering graduating engineers to drive a culture shift in the tech industry and build a healthier internet." More than 700 instructors applied, and 17 will use the awards to develop novel approaches to this challenge.
There is no shortage of courses; ethics is indeed taught, and I am confident that institutions that don't yet offer it soon will. Whether it should be a requirement is still a matter of debate. I've heard from academics who believe the ethics of AI and data science should be a requirement for all undergraduates, and from those who don't understand why someone learning how to build an algorithm needs to know or consider all the possible scenarios of doom. Some say it stifles innovation. But sometimes innovation grows out of restrictions.
What should we teach?
Jeanette Winterson CBE, the renowned British writer and one of the speakers at 100+ Brilliant Women in AI & Ethics, believes that we first need to work on our language. We should use it to the full extent of its complexity to explain the nuances of AI and the multiple levels of ethics. Language creates thought, and if we cannot express that thought, whether we are the programmer or the policy maker, we diminish it and miss the chance to move forward faster. "Talking down to anybody isn't democratic, it's just rude," she says.
When I asked a brilliant panel of experts, which included Safiya Noble, associate professor and author of Algorithms of Oppression; Gina Neff, research fellow and author of Self-Tracking; Lilian Edwards, a leading academic in the field of internet law; Bulbul Gupta, founding advisor for an AI think tank; and Maria Axente, AI for Good lead at PwC, about the concepts they would teach in an ethics for AI and data science course, almost all mentioned critical thinking. They also suggested building upon feminist theories of technology, race, power, and rules and values in product design; teaching about justice and human and civil rights; discussing what human flourishing is and what it takes to live together; and, most importantly, instilling the courage to know the difference and walk away.
At the most basic level, we should teach about consent, says Alex Robertson, a PhD candidate at the University of Edinburgh. We should know about giving and requesting permission. We should know about consent when we work with other people's or companies' data. And we should know to "fairly compensate participants and crowdsourced workers for their time and effort."
Most of all, we think such courses should broaden computer and data scientists' perspectives and teach them to appreciate the value of ethics and social science in developing the next generation of AI. Ultimately, as Rachel Coldicutt, the CEO of doteveryone, argues, it is having a social scientist with the right understanding of technology at the right level of the decision-making process that will result in more responsible technologies. The Pentagon is taking this advice seriously. Are you?
Ethical guidelines for better AI technologies and data science research
- Association of Internet Researchers ethics guidelines/questions
- Algorithmic discrimination from Fairness Measures
- Ethics Guidelines for Internet-Mediated Research (2017) from the British Psychological Society
- American Psychological Society's guidelines for working with internet data
- University of Chicago's data ethics checklist
- Accenture's data ethics insights
- PwC's responsible AI toolkit
- Data Science Ethics online course from the University of Michigan
- O'Reilly ethics and data science book