Multinational consultancy firm Ernst & Young today introduced the Trusted AI platform, a service accessible through a web interface that assigns numerical values to the trustworthiness of an AI system.

The Trusted AI platform takes into account factors such as the objective of an AI model, whether a human is in the loop, and the underlying technologies used to create the model.

Analytical models then score each AI system on these factors.

“The technical score it provides is also subject to a complex multiplier, based on the impact on users, taking into account unintended consequences such as social and ethical implications,” according to a statement shared with VentureBeat announcing the news.

“An evaluation of governance and control maturity acts as a further mitigating factor to reduce residual risk.”
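EY has not published the formula behind these scores, but the statement implies a structure in which a technical risk score is amplified by an impact multiplier and then reduced according to governance and control maturity. The sketch below is a hypothetical illustration of that structure only; the function name, parameters, ranges, and weighting are assumptions, not EY's actual method.

```python
def residual_risk_score(technical_score: float,
                        impact_multiplier: float,
                        governance_maturity: float) -> float:
    """Hypothetical illustration: a technical risk score is scaled up by an
    impact multiplier, then reduced by governance/control maturity.

    technical_score     -- base risk score of the model (0.0 = none, 1.0 = maximal)
    impact_multiplier   -- amplification for user impact and unintended consequences (>= 1.0)
    governance_maturity -- maturity of governance and controls (0.0 = none, 1.0 = fully mature)
    """
    amplified_risk = technical_score * impact_multiplier
    # More mature governance and controls mitigate the remaining (residual) risk.
    return amplified_risk * (1.0 - governance_maturity)


# Example: moderate technical risk, high user impact, partial governance controls.
print(residual_risk_score(technical_score=0.4,
                          impact_multiplier=1.8,
                          governance_maturity=0.5))  # -> 0.36
```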

The Trusted AI platform will be made available later this year and builds on the Trusted AI conceptual framework that Ernst & Young released last year to spell out its views on bias, ethics, and social responsibility.
