Putting AI ethics to work: are the tools fit for purpose? (MAIEI review)
What tools are out there to implement AI ethics principles? And what are the gaps in the current tools? For the Montreal AI Ethics Institute, I wrote a review of Jacqui Ayling and Adriane Chapman's great paper answering these questions!
Highlights:
► Only 23% of AI ethics documents include tools for putting the principles into practice; the rest offer only statements of principles.
► The tools are sorted into three categories:
-- Impact assessment tools, which primarily include checklists and questionnaires.
-- Technical and design tools, which primarily include computational tools (e.g., computationally identifying and mitigating bias) and design processes (e.g., workshop-style events for raising awareness in design teams or participatory design processes); a minimal code sketch of such a bias check follows this list.
-- Auditing tools, which primarily include documentation for verification and assurance.
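To make the second category a bit more concrete, here is a minimal sketch of the kind of check a computational bias-identification tool performs. The metric (a demographic parity gap), the toy data, and the 0.1 warning threshold are my own illustrative assumptions; none of this comes from Ayling and Chapman's paper.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    # Largest difference in positive-prediction rates across groups.
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: 1 = positive decision (e.g., loan approved), grouped by a protected attribute.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print("Positive rate by group:", rates)
print("Demographic parity gap: %.2f" % gap)
if gap > 0.1:  # illustrative threshold, not a standard
    print("Flag for review: consider a mitigation step (e.g., reweighting or threshold adjustment).")

Real fairness toolkits cover many more metrics and mitigation strategies; the point here is simply that these checks are computational, in contrast to the checklist-style impact assessments in the first category.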
► Ayling and Chapman identify two gaps in the landscape of AI ethics tools:
-- A gap in stakeholder participation - Assessment and audit processes typically include little participation from traditionally marginalized groups, the users of the developed services, and vested-interest stakeholders such as citizens, shareholders, and investors.
-- A gap in auditing - Nearly all the AI ethics tools are for internal self-assessment only, without external oversight, which runs the risk of organizations falling into “ethics washing.”
► A final thought from me:
Which strategies are appropriate for external oversight in the case of AI ethics? Can sufficient participation be introduced into familiar auditing processes (e.g., from finance), and if so, how? Alternatively, would it be better to design different oversight procedures for AI ethics? If so, what should they look like?
► Read the full summary here