WHAT IS THE PRACTICAL AI ETHICS ALLIANCE?
Every day machines make more and more decisions in our lives. How do we know we can trust those decisions?
We created the AI Ethics Alliance to help bring trust and transparency to artificial intelligence. Most importantly, we’re working toward a practical, implementable system of AI ethics: if your values aren’t implementable, your algorithms will never reflect the essence of your organization.
Practical AI ethics is hard, and too often organizations’ efforts fall woefully short. They form a committee, issue a report, and then nothing changes. Their ideas are just words that don’t translate into anything data scientists and AI ops engineers can implement in code. With algorithms in charge of critical decisions in people’s lives, from self-driving cars, to deciding who goes to jail, to helping hire and fire people, this old approach cannot stand.
We need a real way to shine light on the decisions machines make. That’s why we created the Alliance: to bring together ongoing research, tools and people for true auditing, transparency and accountability in machine learning. We ask each Alliance Partner to:
1) Craft practical AI ethics standards that are actionable for their data science and engineering teams
2) Create a framework for ongoing auditing and management of AI decision making and AI anomalies/errors
3) Outline clear methods for crafting both triage and long term solutions for AI anomalies/errors
4) Train PR and customer service teams to respond to AI errors with customers and the general public
5) Adopt privacy and data sharing policies that protect people’s personally identifiable information
6) Move towards openness with algorithms, datasets and models
7) Develop ethical ways to release or withhold AI technology with dual use potential
8) Outline ways to responsibly report security vulnerabilities in AI systems and models
Alliance Partners are encouraged to join working groups within the Alliance, to further the Alliance’s mission, and to adopt the Alliance’s principles of openness and transparency in their own organizations, and with their customers, at their own discretion.
In many ways, AIs are alien intelligences. They make decisions in a "black box," and we can't really understand why they made the choices they made. Explainability frameworks usually look like the answer. They turn AIs into "glass box" models that reveal some of their...
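To make the black-box/glass-box contrast concrete, here is a minimal illustrative sketch (not an Alliance tool) of a "glass box" model: a shallow decision tree whose complete decision logic can be printed and audited line by line, using scikit-learn's standard export utilities.

```python
# A "glass box" model: a shallow decision tree whose full set of
# learned rules can be rendered as text and audited by a human,
# unlike a black-box model whose internals are opaque.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# A shallow tree keeps the rule set small enough to review.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints every decision rule the model learned, so an
# auditor can trace exactly why any given prediction was made.
rules = export_text(model, feature_names=list(data.feature_names))
print(rules)
```

Shallow trees trade accuracy for this kind of total transparency; for black-box models, post-hoc explanation methods attempt to recover similar insight after the fact.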
In this talk (slides here) at Kubeflow Con, data scientist Suneeta Mall explains why reproducibility needs to get much stronger in the coming years of AI development. Without understanding, auditing and reproducibility, we have zero chance of fixing the wild...
AI is the future, and there’s lots of money to be made from it. But organizations keep making the news over AI governance failings, such as Microsoft’s chatbot that turned racist and Google’s image labelling tagging African-Americans as gorillas. We’re seeing a growth of ethics...