A state task force on artificial intelligence believes the use of AI in both government and the private sector merits exploration, provided regulatory measures are in place to protect consumers from harm.
The Connecticut Artificial Intelligence Working Group recently released its final report and recommendations to create a legislative framework for ethical and equitable use of AI within the state.
The group was created last year in response to the growing presence of the technology statewide.
The 21-member task force consists of computer science and public policy experts as well as state agency heads. The final 255-page report was the culmination of a process that included seven meetings and presentations from more than 20 speakers with expertise in the AI field.
The group used the federal framework proposed by U.S. Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., as a starting point for its own recommendations. While Congress has held several committee hearings and seen several bills proposed concerning AI, none has passed. That has placed the onus of establishing regulatory and compliance frameworks for AI systems on individual states.
On the first day of this year’s legislative session, Sen. James Maroney, co-chair of the group, along with 21 other senators, proposed a bill regarding AI.
The bill aims to protect the public from harmful unintended consequences of AI. It would criminalize the sharing of non-consensual sexually explicit images generated by AI and would make it illegal to share an AI-generated deep-fake of a candidate for public office. At a Feb. 20 press conference of Senate Democratic leaders, Maroney referenced similar deep-fake laws passed in Minnesota and Michigan.
The bill also focuses on ways to mitigate bias within AI systems.
“We know that there are biases in our world. We know that they exist and, unfortunately, we’ve seen that in algorithms,” Maroney said in the press conference.
The bill would require rigorous impact assessments of AI systems, as well as consistent testing of generative AI systems. It would also require public disclosures so citizens are aware when they are interacting with an AI system.
The bill also emphasizes training Connecticut’s workforce to use the technology and allowing schools to utilize the benefits of AI. It would create a “Connecticut Citizens AI Academy” in collaboration with nonprofits within the state. The academy would offer classes to Connecticut citizens on how to use AI systems.
The bill has been referred to the General Law Committee. Maroney said he believes it is just the beginning of the legislature’s work on AI.
“Legislatively, we are going to have to come back and tweak it every year. We are at the very beginning stages of a new revolution,” Maroney said in an interview with The Connecticut Mirror.
Government use
In its report, the AI working group’s first recommendation was to encourage expansion of the government’s use of the technology.
In Connecticut’s last legislative session, Public Act 23-16 was signed into law, requiring the state’s judicial branch to conduct an annual inventory of systems that employ AI. It also mandates that officials create policies and procedures for developing, procuring, using and assessing systems that use AI, and post their inventory, policies and procedures publicly online.
Connecticut is not alone: from 2019 to 2023, the use of AI by state governments across the country increased.
In Minnesota, the Department of Public Safety uses AI translation technology in its Driver and Vehicle Services division. A chatbot gives non-English-speaking residents access to information in multiple languages and can handle tasks such as updating insurance, checking the status of a title or checking on a driver’s license.
California’s main firefighting agency has implemented a program training AI to detect potential wildfires. For years, the state relied on a network of 1,000 mountaintop cameras, with operators staring at computer screens around the clock watching for smoke.
“States should look at what industries are their strongest and try and apply AI there if possible,” said Caleb Williamson, the state public policy counsel for the App Association, a global trade association for small and medium-sized technology companies. “Some states may use it more in health care, others more in agriculture. States should look for where it’s most applicable for their citizens.”
During its meetings, the group identified several areas within government where AI could be harnessed, such as criminal justice, health care, language processing and law enforcement. While the recommendations did not include specific policy proposals, the group encouraged continued exploration of AI uses within government agencies.
Workforce development
In his opening remarks at the working group’s first meeting, Blumenthal noted how many Americans feel when interacting with these systems.
“I’m saying this to you as Richard Blumenthal, your United States Senator: I don’t know how these AI algorithms work,” he said.
In a study conducted by the Pew Research Center, Americans were asked six questions to identify AI in areas such as fitness trackers, online shopping and customer service. Only 30% of respondents correctly answered all six questions. Public wariness is also widespread, with 52% of Americans reporting growing concern about the prevalence of AI.
In response to these concerns, the working group recommended developing AI skills within the state’s workforce and ensuring that employees have access to the systems.
The group’s recommendations included incentivizing and growing AI businesses in the state, starting with health care, defense and finance; assisting all businesses with starting their digital transformation; working with higher education institutions to produce certificate programs in AI skills for small businesses and employees; and creating online courses on how to use AI, including courses on responsible use.
In addition, the group suggested that Connecticut legislators explore providing researchers and businesses with more powerful computing resources to run AI, incorporate AI training into workforce programs and work with institutions of higher education to create AI professional development programs for teachers.
Regulation
AI is by no means a new concept; it has been around since the 1950s. What has changed over the past five years is the speed and power of the computing engines running AI programs. The increased power allows more complex programs to run, and it has exposed some of the technology’s potential societal dangers.
Recent examples illustrate the dangers these programs can present, from non-consensual sexually explicit AI-generated deep-fakes of minors to an AI-generated robocall imitating President Joe Biden intended to dissuade people from voting in the New Hampshire primary. That is why the group’s third and final recommendation concerned government regulation of AI.
The group recommended that any legislation passed in Connecticut align with relevant global standards for AI. Currently, the most relevant global standard is the European Union’s AI Act, which applies to those who provide, deploy and create AI systems within the EU.
The working group called for steps to prevent election-related deep-fakes and non-consensual intimate images. It also called for the creation of a permanent AI advisory committee composed of representatives from the business, education and government sectors, and said distributors and creators of AI models should be held accountable for content created with their models.
While the group recommended that certain AI systems be subject to regulation, it also identified uses that should be exempt. AI used for scientific research for the common good should be exempt from any regulation, the group said, and open-source models could also be exempt if they meet transparency requirements. The group also pushed to designate a single point of contact for AI businesses within the Department of Economic and Community Development.
The future of AI in Connecticut
There have been efforts both federally and within states to classify, grow, industrialize and regulate AI.
Last fall, President Biden issued an executive order focusing on the safe, secure and trustworthy development and use of AI.
“The Commerce Department has a huge role in the implementation of Biden’s executive order on AI. There are over 150 directives spread across 70 different agencies,” according to independent AI adviser Chloe Autio.
Connecticut is one of 12 states to have enacted legislation ensuring that those developing and deploying AI systems comply with certain rules and standards and are held accountable if they do not.
In Connecticut, there has been bipartisan support for government regulation of AI and for ensuring the data privacy of citizens, even as the technology continues to evolve.
“We are not going to change the world with our legislation, but the world is changing, and we need to change it for the good,” Maroney said during one of the task force meetings.