

Light-it: A case study in evaluating AI governance

A case study in evaluating AI responsibility at a startup - Light-it’s experience!


Read the full article here.


Highlights:



➤ The evaluation framework: The Responsible AI Governance Maturity Model


💪 I led the development of this framework over the past year!


💪 It helps companies evaluate the social responsibility of their AI governance and plan how to improve. It also helps external evaluators, such as investors and buyers, assess companies.


💪 It’s based on the NIST AI RMF, one of the most influential AI governance frameworks in the world.


💪 A link to the framework itself is in the comments.



➤ Light-it


Light-it is a digital product company focused on healthcare product development and healthcare innovation consulting. One of its platforms, Puppeteer, enables healthcare companies to build AI agents with human-like capabilities.



➤ Light-it used the maturity model to evaluate itself. In the linked article, we share insights from the process, including:


💡 What they did to evaluate themselves

💡 How they approached the evaluation process

💡 The growth opportunities in AI responsibility that the company identified



➤ Huge appreciation to my Light-it partners Javier Lempert, Adam Mallat, and Macarena Balparda, and to Benny Esparra, who did a lot of work behind the scenes.



➤ I’m looking to pilot the maturity model with additional companies. Reach out if you’re interested!


➤ How do other people approach evaluating AI governance? Join the conversation on the LinkedIn thread!

