New York City has put together what city leaders say is a first-of-its-kind plan to use artificial intelligence in municipal government.

The blueprint aims to develop a framework for city agencies to evaluate AI tools and weigh the associated risks while helping city employees build their AI knowledge and skills.

New York City Mayor Eric Adams unveiled the plan on Oct. 16 and stressed it would focus on the responsible use of AI in municipal government.

New York Mayor Eric Adams, foreground, announces the city’s plan to use and manage artificial intelligence on Oct. 16.

Ed Reed/Mayoral Photography Office.

City officials tout the “New York City Artificial Intelligence Action Plan” as a way to keep the city among the leaders in AI technology.

“While artificial intelligence presents a once-in-a-generation opportunity to more effectively deliver for New Yorkers, we must be clear-eyed about the potential pitfalls and associated risks these technologies present,” Adams said.

“What’s different from New York City and other municipalities is that we are not running away from AI, we are going to properly govern how we use AI in a responsible way,” Adams said at a press conference.

AI technologies use data to make predictions, recommendations, rankings and other decisions. The city’s plan outlines 37 actions, most of which are set to start within the next year.

Those actions include establishing a framework for AI governance; acknowledging risks, including unintended bias; creating an external advisory network; and publishing an annual report on the city’s AI progress.

AI’s risks include providing inaccurate or unintended results.

“I am proud to introduce a plan that will strike a critical balance in the global AI conversation — one that will empower city agencies to deploy technologies that can improve lives while protecting against those that can do harm,” Adams said.

“We want to think more innovatively about challenges and using the right technology the right way and be responsible when we do it. You could use or abuse anything,” Adams said. “And if we stay away from moving forward because we’re afraid of someone’s going to abuse it, you won’t get anything done.”

A starting point will be MyCity, New York City’s one‑stop shop for city services and benefits. The city will launch a business site on MyCity that includes a chatbot. The site aims to become a tool where residents can access government services in a user-friendly way.

With the AI chatbot, the city says business owners will easily be able to access accurate information from more than 2,000 city business web pages.

“For the first time, business owners and aspiring entrepreneurs will be able to direct their questions to an AI‑powered chatbot rather than scan through webpage after webpage, going into the black hole of uncertainty of how to open a business, how to run a business, how to answer some of the basic questions,” Adams said.

“By putting all of our services in one location and using the innovative new chatbot as a guide, we are taking another step towards making New York into the true ‘City of Yes,’” said Small Business Services Commissioner Kevin Kim.

The city and the state handle a lot of data, said John Hallacy, founder of John Hallacy Consulting LLC.

“If AI can ease the way for securing a building permit or paying a parking ticket it will be a win,” Hallacy told The Bond Buyer. “Electric and plumbing permits are handled separately. Could AI bring them together for one project? There have to be many other applications where it should work. The labor savings may turn out to be real and significant.”

In a September Leaders videocast, Matthew Fraser, the city’s chief technology officer, told the American Banker that a lot of AI’s stigma came from evil portrayals in Hollywood movies.

“I’m a kid of the 1980s, so being a kid of that generation, how could you not know a movie like the Terminator? So when people hear artificial intelligence, they think about Skynet, CyberDyne Systems and them creating autonomous robots to walk around and take the place of humanity,” he said.

“I think that we’re a long way from technology being at that level, but artificial intelligence as a whole gives us the capability to process work faster and in some cases remove human involvement in areas that don’t require a human decision,” he said.

“Artificial intelligence, at least the way that most people look at it today, is they look at these large language models like ChatGPT, and I think that’s the thing that people get really concerned about. It’s like these models provide information and it almost seems like the information that comes out comes from an authoritative source, but you don’t know where the information is sourced from, you don’t know how valid the information is,” Fraser said.

“What folks are very concerned about is that by leveraging models like that, you can spread disinformation very quickly and you can form an opinion about something that’s not true because people depend on the technology more than they depend on proving that the information is right,” he said. “So I think for us, artificial intelligence will prove to be a useful tool, just like all technology that’s come out to this point. And it’s on us to make sure that we have guardrails up. So when we deploy it, we deploy it in ethical and moral ways.”

“We want to set the standard in AI development,” Gov. Kathy Hochul said at SUNY’s inaugural AI Symposium at UAlbany.

New York governor’s office

In October, Gov. Kathy Hochul announced a $20 million investment to foster collaboration between the State University of New York at Albany and IBM on artificial intelligence and to create a State University of New York AI Research Group.

“AI is fundamentally changing the world we live in and New York doesn’t just want to get in at the ground floor. We want to set the standard in AI development,” Hochul said at SUNY’s inaugural AI Symposium at UAlbany on Oct. 16. 

The UAlbany and IBM collaboration forms the Center for Emerging Artificial Intelligence Systems, which will power new AI research projects with the help of advanced cloud computing and emerging hardware out of the IBM Research AI Hardware Center.

The AI Research Group focuses on areas that include data privacy and security; education; social impact; ethics and trustworthiness; research and infrastructure; and workforce and industry.

The center is part of UAlbany’s AI Plus initiative, which is integrating teaching and learning about AI across the university, from data science and semiconductor design to philosophy and the arts.

“Researchers across UAlbany’s nine schools and colleges are employing artificial intelligence to power new discoveries about our world,” University at Albany President Havidán Rodríguez said. “We are committed to giving them the best tools to develop critical new knowledge and advance the state of the art in their fields.”

The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency on Tuesday released a “Roadmap for Artificial Intelligence.”

Last month, President Joe Biden issued an executive order directing the DHS to promote adoption of AI safety standards globally, protect U.S. networks and critical infrastructure, reduce risks that AI can be used to create weapons of mass destruction, combat AI-related intellectual property theft, and help attract and retain skilled people in the field.

CISA’s roadmap outlines several strategic areas it will focus on.

The roadmap aims to “enhance cybersecurity capabilities; ensure AI systems are protected from cyber-based threats; and deter the malicious use of AI capabilities to threaten the critical infrastructure Americans rely on every day,” said CISA Director Jen Easterly.
