The IBM Policy Lab released a new perspective, Precision Regulation for Artificial Intelligence, that lays out a regulatory framework for organizations involved in developing or using AI, grounded in accountability, transparency, fairness, and security. This builds upon IBM’s calls for a “precision regulation” approach to facial recognition and illegal online content: laws tailored to hold companies more accountable without becoming so broad that they hinder innovation or the larger digital economy. Specifically, IBM’s new policy paper outlines five policy imperatives for companies, whether they are providers or owners of AI systems, that can be reinforced by regulation.
They include:
#1 Designate a lead AI ethics official. To ensure compliance with these expectations, providers and owners should designate a person responsible for trustworthy AI, such as a lead AI ethics official.
#2 Different rules for different risks. All entities providing or owning an AI system should conduct an initial high-level assessment of the technology’s potential for harm, and regulation should then treat different use cases differently according to the risk inherent in each.
#3 Don’t hide your AI. Transparency breeds trust, and the best way to promote transparency is through disclosure: making the purpose of an AI system clear to consumers and businesses. No one should be tricked into interacting with AI.
#4 Explain your AI. Any AI system on the market that makes determinations or recommendations with potentially significant implications for individuals should be able to explain and contextualize how and why it arrived at a particular conclusion (a minimal sketch of one such explanation technique follows this list).
#5 Test your AI for bias. All organizations in the AI development lifecycle share some responsibility for ensuring that the AI systems they design and deploy are fair and secure. This requires testing for fairness, bias, robustness, and security, and taking remedial actions as needed, both before sale or deployment and after the system is operationalized (see the second sketch below). This should be reinforced through “co-regulation”, where companies implement testing and government conducts spot checks for compliance.
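To make imperative #4 concrete, here is a minimal sketch of one common explanation technique: for a linear model, each feature’s contribution to the log-odds of a decision is simply its coefficient multiplied by the feature value, so a per-decision explanation can be read directly off the model. The data, the feature names, and the loan-approval framing are hypothetical illustrations, not anything described in the IBM paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed"]  # hypothetical features

# Synthetic, standardized training data and binary outcomes.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain a single decision: each feature's contribution to the log-odds.
applicant = X[0]
contributions = model.coef_[0] * applicant
print(f"P(approve) = {model.predict_proba([applicant])[0, 1]:.2f}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:>15}: {c:+.3f} log-odds")

This only works so directly because the model is linear; for opaque models, post-hoc explanation methods play the equivalent role, but the regulatory point is the same: the system can state which inputs drove a particular conclusion.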
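And for imperative #5, a minimal sketch of a pre-deployment fairness check. It computes two widely used group-fairness metrics over a model’s decisions: the demographic parity difference and the disparate impact ratio (the “four-fifths rule” used in US employment law). The decisions and group labels below are hypothetical stand-ins for a real model’s outputs on a held-out audit set; the IBM paper does not prescribe specific metrics.

def selection_rate(preds):
    """Fraction of decisions that are favorable (1) for one group."""
    return sum(preds) / len(preds)

def fairness_report(preds_by_group, threshold=0.8):
    rates = {g: selection_rate(p) for g, p in preds_by_group.items()}
    lo, hi = min(rates.values()), max(rates.values())
    parity_diff = hi - lo                  # 0.0 means perfectly equal rates
    impact_ratio = lo / hi if hi else 1.0  # >= 0.8 passes the four-fifths rule
    return {
        "selection_rates": rates,
        "demographic_parity_difference": parity_diff,
        "disparate_impact_ratio": impact_ratio,
        "passes_four_fifths_rule": impact_ratio >= threshold,
    }

# Hypothetical binary decisions (1 = favorable outcome) per protected group.
audit = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}
for key, value in fairness_report(audit).items():
    print(f"{key}: {value}")

Under the co-regulation model the paper describes, a company would run checks like this before deployment and keep the results on file, and a regulator’s spot check would verify that the testing happened and that failures triggered remediation.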
These IBM recommendations come as the new European Commission has indicated that it will legislate on AI within the first 100 days of 2020 and the White House has released new guidelines for regulation of AI.
A new Morning Consult study commissioned by the IBM Policy Lab found that 62% of Americans and 7 in 10 Europeans prefer a precision regulation approach for technology, with less than 10% in either region supporting broad regulation of tech. 85% of Europeans and 81% of Americans support consumer data protection in some form, and 70% of Europeans and 60% of Americans support AI regulation. Moreover, 74% of American and 85% of EU respondents agree that artificial intelligence systems should be transparent and explainable, and strong pluralities in both regions believe that disclosure should be required for companies creating or distributing AI systems. Nearly 3 in 4 Europeans and two-thirds of Americans support regulations such as conducting risk assessments, pre-deployment testing for bias and fairness, and reporting to consumers and businesses that an AI system is being used in decision-making.