Nov. 25—U.S. Senator Shelley Moore Capito, R-W.Va., is the latest lawmaker to tackle the issue of artificial intelligence.
Capito has joined a bipartisan coalition of lawmakers in introducing a bill that seeks to increase transparency and accountability of high-risk AI applications.
Called the Artificial Intelligence (AI) Research, Innovation, and Accountability Act of 2023, the measure has several goals, including providing a clearer distinction between human and AI-generated content and requiring that citizens be properly notified when they are interacting with AI rather than an actual human.
Capito joined U.S. Sens. John Thune, R-S.D., Amy Klobuchar, D-Minn., Roger Wicker, R-Miss., John Hickenlooper, D-Colo., and Ben Ray Luján, D-N.M., all members of the Senate Committee on Commerce, Science, and Transportation, in introducing the legislation last week, just before senators left Washington for the long Thanksgiving holiday weekend.
“I am glad to partner with my colleagues to introduce a bipartisan first step towards addressing the development of artificial intelligence,” Capito said. “Our bill will allow for transparent and commonsense accountability without stifling the development of machine learning.”
AI has been in the news for much of the year. It is generally defined as machine systems capable of intelligence or decision-making similar to that of humans.
Capito’s bill would provide clearer distinctions between human and AI-generated content and would require the National Institute of Standards and Technology (NIST) to carry out research to facilitate the development of standards for verifying content authenticity and provenance.
It would also direct NIST to support standardization of methods for detecting and understanding emergent properties in AI systems in order to mitigate issues stemming from unanticipated behavior.
The proposed legislation would also provide new definitions for “generative,” “high-impact,” and “critical-impact” AI systems, and would require large internet platforms to notify users when the platform is using generative AI to create content the user sees. Capito said the U.S. Department of Commerce would have the authority to enforce that requirement.
The bill would also establish an advisory committee, composed of industry stakeholders, to provide input and recommendations on the issuance of proposed critical-impact AI certification standards. Before any standards for critical-impact AI could be prescribed, the Commerce Department would be required to submit to Congress and the advisory committee a five-year plan for testing and certifying critical-impact AI systems.
The bill would also require the Commerce Department to establish a working group to provide recommendations for the development of voluntary, industry-led consumer education efforts for AI systems.
— Contact Charles Owens at cowens@bdtonline.com. Follow him @BDTOwens