Today at a Pennsylvania House Republican Policy Committee hearing, several AI experts — including ChatGPT itself — testified about the impact of artificial intelligence.

Many stakeholders are deliberating over how best to regulate AI to address both its risks and its opportunities. In fact, even ChatGPT has offered insight in this space.

At this hearing, titled “Navigating the Future of Artificial Intelligence,” lawmakers heard testimony from experts: Charles Palmer, associate professor and program lead of interactive media at Harrisburg University of Science and Technology; Madison Gooch, vice president of Watsonx at IBM; Margaret Durkin, executive director for Pennsylvania and the mid-Atlantic region at TechNet; and of course, ChatGPT.

“Ladies and gentlemen of the Pennsylvania House Republican Policy Committee, esteemed members and guests, my name is ChatGPT, and I am here today to provide insight into the future of artificial intelligence technologies in our state,” the system said in its opening testimony.

Some questions posed to ChatGPT by representatives produced non-responses. For example, when asked how to steal someone’s identity, ChatGPT stated that it could not assist with that request. When asked whether another AI source could help with the topic, it again declined.

The AI language model also could not respond definitively to a representative’s question about who would win the 2024 Super Bowl.

However, other questions produced specific and thorough responses. For example, when House Republican Policy Committee Chairman Joshua Kail asked how Pennsylvania could bring more manufacturing jobs to the state, ChatGPT offered specific strategies, including incentivizing with grants, investing in vocational training, and supporting research and development initiatives.

When asked about the best way to regulate AI technology to harness potential benefits while minimizing risks, ChatGPT underlined the importance of taking a proactive approach to establishing comprehensive frameworks, including clear guidelines for data privacy, transparency, accountability and fairness in algorithmic decision-making. ChatGPT also noted that fostering collaboration between policymakers, research institutions and other industry stakeholders can help ensure regulations are able to evolve as technology does.

A common concern among the lawmakers was the impact AI will have on the workforce, with multiple representatives voicing concerns about AI replacing jobs. When asked about the AI-associated dangers that regulators must address, ChatGPT noted workforce displacement as one of five key dangers.

Asked what percentage of U.S. jobs will involve or be integrated with AI within five years, ChatGPT cited estimates that 20 percent to 40 percent of jobs will be significantly impacted by these technologies.

When lawmakers turned to human testimony, they discussed whether certain professions, such as teaching, would be made obsolete by AI. Palmer likened the technology to an “assistant on the shoulder” that can act as a coach if a student is struggling with a certain topic.

“I look at it as a collaborator for creative thought in the process of teaching,” he said.

The human experts emphasized the need for comprehensive AI education and training, highlighting the value of collaborative coalition models.

And while the responses generated by ChatGPT during the hearing promoted conversation and thought, Kail also emphasized the importance of understanding how the generated responses are being determined.

He raised the concern that the data these models use is limited, and as a result, the responses they generate are limited as well. As an example, he asked the human experts who were testifying whether a ChatGPT model created in Galileo’s time would state that the Earth is flat.

Gooch explained that an AI language model will output responses based on the data it was trained on, so if it was trained on data suggesting the Earth is flat, there is a possibility of it offering such a response. As such, she stressed that the quality of the data used to train this type of model cannot be an afterthought.

“It’s remarkable the depth of these conversations that we’re having with, you know, something that doesn’t even exist,” said Rep. Torren Ecker during the hearing, stating that this exercise demonstrates the power of this technology and the need for policymakers to remain on the cutting edge of this issue.