Bletchley Park in Buckinghamshire, near London, was once the top-secret base of the codebreakers whose cracking of the German ‘Enigma’ cipher hastened the end of World War II. This symbolism was evidently a reason why it was chosen to host the world’s first Artificial Intelligence (AI) Safety Summit.

The two-day summit on November 1-2, which drew global leaders, computer scientists, and tech executives, began with a bang: a pioneering agreement wrapped up on the first day resolved to establish “a shared understanding of the opportunities and risks posed by frontier AI”. Twenty-eight countries, including the United States, China, Japan, the United Kingdom, France, and India, along with the European Union, signed a declaration saying global action is needed to tackle the potential risks of AI.

The Bletchley Park Declaration

“Frontier AI” is defined as highly capable generative foundation models that could possess dangerous capabilities, posing severe risks to public safety.

The declaration, which was also endorsed by Brazil, Ireland, Kenya, Saudi Arabia, Nigeria, and the United Arab Emirates, incorporates an acknowledgment of the substantial risks from potential intentional misuse or unintended issues of control of frontier AI — especially cybersecurity, biotechnology, and disinformation risks, according to the UK government, the summit host.

The declaration noted the “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models”, as well as risks beyond frontier AI, including those of bias and privacy.


These risks are “best addressed through international cooperation”, the Bletchley Park Declaration said. As part of the agreement on international collaboration on frontier AI safety, South Korea will co-host a mini virtual AI summit within the next six months, and France will host the next in-person summit within a year.

President Biden’s Executive Order

The declaration came days after US President Joe Biden issued an executive order aimed at safeguarding against threats posed by AI and at exerting oversight over the safety benchmarks companies use to evaluate generative AI bots such as ChatGPT and Google Bard. The order was seen as a vital first step by the Biden Administration to regulate rapidly advancing AI technology. White House Deputy Chief of Staff Bruce Reed said the batch of reforms amounted to “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust”.

The order, issued on Monday, requires AI companies to share the results of tests of their newer products with the federal government before making the new capabilities available to consumers. The safety tests undertaken by developers, known as “red teaming”, are aimed at ensuring that new products do not pose a threat to users or the public at large. Following the order, the federal government is empowered to force a developer to tweak or abandon a product or initiative. “These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public,” the White House said.


Separately, a new rule seeks to codify the use of watermarks that alert consumers that a product is AI-enabled, which could potentially limit the threat posed by content such as deepfakes. Another standard asks biotechnology firms to take appropriate precautions when using AI to create or manipulate biological material.

While the industry guidance has been framed more as suggestions than as binding requirements, giving developers and firms elbow room to work around some of the recommendations, US government agencies such as the Departments of Energy and Homeland Security have been explicitly directed to implement changes in their use of AI. The aim is to create industry best practices that the administration expects the private sector to embrace as well.

Different Countries, Varied Approaches

Policymakers across jurisdictions have stepped up regulatory scrutiny of generative AI tools, prompted by ChatGPT’s explosive launch. The concerns fall under three broad heads: privacy, system bias, and violation of intellectual property rights. But the policy response has varied.

The EU has taken a tough line, proposing to bring in a new AI Act that classifies artificial intelligence according to use-case scenarios, based broadly on the degree of invasiveness and risk. The UK is at the other end of the spectrum, with a decidedly “light-touch” approach that aims to foster, and not stifle, innovation in this field.

The US approach is seen to be somewhere in between, with Monday’s executive order setting the stage for defining an AI regulation rulebook that will ostensibly build on the Blueprint for an AI Bill of Rights unveiled by the White House Office of Science and Technology Policy in October 2022.

China has released its own set of measures to regulate AI.

All of this comes in the wake of calls this April by tech leaders Elon Musk (of X, SpaceX, and Tesla), Apple co-founder Steve Wozniak, and more than 15,000 others for a six-month pause in the development of powerful AI systems. AI labs are in an “out-of-control race” to develop systems that no one can fully control, the signatories warned.

Musk was in attendance at Bletchley Park, where he warned that AI is “one of the biggest threats” and an “existential risk” to humans, who face being outsmarted by machines for the first time.

India’s Change in Stance

Union Minister of State for IT Rajeev Chandrasekhar, who is representing India at Bletchley Park, said at the opening plenary session that the kind of weaponisation witnessed with social media must be overcome, and that steps should be taken to ensure AI represents safety and trust.

“We are very clear that AI represents a big opportunity for us. We are extremely clear in our minds about what we need to do in terms of mitigating all of the other downsides that AI and, indeed, any emerging technology can or will represent. We look at AI and, indeed, technology in general, through the prism of openness, safety and trust, and accountability,” he said.

India has been progressively pushing the envelope on AI regulation. On August 29, less than two weeks before the G20 Leaders’ Summit in New Delhi, Prime Minister Narendra Modi had called for a global framework on the expansion of “ethical” AI tools. The statement put a stamp of approval at the highest level on the shift in New Delhi’s position: from not considering any legal intervention to regulate AI in the country, to actively formulating regulations based on a “risk-based, user-harm” approach.

Part of this shift was reflected in a consultation paper floated in July by the Telecom Regulatory Authority of India (TRAI), the apex telecommunications regulator, which said the Centre should set up a domestic statutory authority to regulate AI in India through the lens of a “risk-based framework”. It also called for collaboration with international agencies and the governments of other countries to form a global agency for the “responsible use” of AI.

This also came amid indications that the Centre was looking to draw a clear distinction between different types of online intermediaries, including AI-based platforms, and to frame issue-specific regulations for each of them under fresh legislation, the Digital India Bill, which is expected to replace the Information Technology Act, 2000.


In April, the Ministry of Electronics and IT had said that it was not considering any law to regulate the AI sector, with Union IT Minister Ashwini Vaishnaw acknowledging that though AI “had ethical concerns and associated risks”, it had proven to be an enabler of the digital and innovation ecosystem.

“The NITI Aayog has published a series of papers on Responsible AI for All. However, the government is not considering bringing a law or regulating the growth of artificial intelligence in the country,” Vaishnaw had said in a written response in Lok Sabha this Budget Session.

TRAI’s July recommendation on forming an international body for responsible AI was broadly in line with the approach enunciated by Sam Altman, co-founder and CEO of OpenAI, the company behind ChatGPT, who had called for an international regulatory body for AI akin to the one overseeing nuclear non-proliferation.





