As the White House moved Monday to control the development of artificial intelligence systems and address any impact on employment, Connecticut companies are proceeding cautiously, with AI-oriented jobs making up only a small percentage of openings in the state today.
The White House directive spans privacy, civil rights and innovation, among other areas, as the Biden administration tries to balance leveraging AI for the betterment of society against containing illicit use and any unintended consequences of programs going rogue.
U.S. Sen. Richard Blumenthal, D-Conn., has been probing the potential perils of AI in his role as chair of the Senate Judiciary Subcommittee on Privacy, Technology and the Law, including what he has called the “massive unemployment” that could be in the offing if AI systems displace workers.
“The president should be credited with reaching high and providing a really powerful, potential blueprint — even if he can’t do everything by executive order,” Blumenthal told CT Insider on Monday afternoon. “It’s not just that AI is moving forward, but it is accelerating at a breathtaking pace, so there’s a need for Congressional action.”
Under the White House executive order, the National Institute of Standards and Technology will set “red-team” safety testing standards, and 15 large developers have pledged to share the results of those tests and other “critical” information with the federal government before releasing AI systems for public use. Standards will be stricter in select industries like life sciences and nuclear energy.
The Department of Commerce will develop methods for “watermarking” in an effort to clearly label AI-generated content from the federal government. The Biden executive order does not extend any such requirement to the private sector.
Under the order, the federal government will also help schools figure out how to use AI in targeted ways, such as tutoring programs for students. And to help promote innovation, the Biden administration will pilot a National AI Research Resource to help students, startups and others tap key AI resources and data.
As of Monday morning, about 125 Connecticut jobs posted on the Indeed jobs board specified artificial intelligence work, the majority at consulting firms that help other businesses envision or implement new systems. That includes a dozen jobs each at Deloitte and EXL, and 10 openings at Capgemini.
According to Deloitte’s LinkedIn page, the firm has received a dozen applications for an open position in Stamford leading AI business development, with the company paying anywhere from $175,000 to $225,000 to whomever it hires.
In New York, Meta listed more than 180 openings touching on AI, with IBM having about 60. IBM is building a platform it calls Watsonx.governance, which aims to provide tools businesses will need to comply with any emerging regulations on AI.
“Our book of business in the third quarter specifically related to generative AI was in the low hundreds of millions of dollars,” said IBM CEO Arvind Krishna, speaking last week on a conference call. “The interest is larger, with thousands of hands-on interactions with our clients — these are across our largest clients and smaller clients.”
Krishna estimated that AI could deliver what he called a “cost takeout” of between 20 percent and 30 percent of what a company might spend on a customer service contact center.
“If you can have more calls, more chats answered by AI, that means you can have much more volume with a smaller number of people,” Krishna said. “AI does not get tired. It doesn’t get angry. It doesn’t get upset.”
But Deloitte’s CEO offered a counterbalancing view of AI’s impact on employment during remarks last month to students at the University of Virginia Darden School of Business.
“People sometimes ask, ‘Well, is this “automating away” work? Is this destroying jobs?’” said Deloitte CEO Joe Ucuzoglu. “In the face of those predictions and prognostications, where do we find ourselves today? Three-and-a-half percent unemployment — lowest rate in half a century.”
The state of Connecticut moved ahead on its own last spring, with the Connecticut General Assembly passing a bill that requires all state agencies to report annually any systems they use that employ artificial intelligence, including whether those systems generate decisions independent of any human oversight. Those inventories will be made public on the CT Open Data platform.
The Biden executive order does not specifically address one measure Blumenthal’s committee has considered: licensing for companies in “high-risk” niches of artificial intelligence.
“There has to be a licensing regime — much like you have for cars or drugs or airlines, or even toys for children — so that standards are applied and safety and efficacy are assured,” Blumenthal said late last month, addressing members of a Connecticut AI working group that convened as a result of last spring’s legislation. “We want to create an office or agency to establish a licensing regime and then enforce the standards.”