

How to Identify AI Risks

How can organizations determine which risks their AI poses to people and society? Here is a list of questions organizations can ask themselves as they think it through. Feedback welcome!

➤ The questions are in the attached document, and you can also find them on my website. Link in the comments.

➤ As you will see, the questions are grouped into four categories:

  • Fairness and Non-Harm

  • Transparency and Explainability

  • Data Protection

  • Human Autonomy and Control

These categories are based on a research paper that maps the dominant themes in AI ethics (link in the comments).

➤ These categories, and the questions below, are far from exhaustive. However, they can help organizations get started.

Companies that develop AI can use the list to reflect on their products. Investors and companies that procure AI systems can use it as part of their due diligence.

➤ The final version of this list will be included in a handbook for investors that I am writing, which will be out soon.

➤ Suggestions for other questions, better ways to formulate the questions, and any other feedback are very welcome!

FOR UPDATES

Join my newsletter for tech ethics resources.

I will never use your email for anything else.
