Computational systems demonstrating logic, reasoning, and understanding of spoken, written, and visual inputs have existed for decades. But development has accelerated in recent years with work on so-called generative AI by companies such as OpenAI, Google, and Microsoft.
When OpenAI launched its generative AI chatbot ChatGPT in late 2022, the system gained an estimated 100 million monthly active users within two months, making it the fastest-growing consumer application in history at the time.
With the rise of AI, many are embracing the technology's potential to facilitate decision-making, speed up information gathering, reduce human error in repetitive tasks, and provide 24/7 availability. But ethical concerns are also growing. Private companies drive much of AI development, and for competitive reasons they are opaque about the algorithms behind these tools. The systems make decisions based on the data they're fed, but where that data comes from isn't necessarily shared with the public.
Users don't always know whether they're using AI-based products or whether their personal information is being used to train AI tools. Some worry that the data could be biased, leading to discrimination, disinformation, and, in the case of AI-based software in automobiles and other machinery, accidents and deaths.
The federal government is moving toward regulatory oversight of AI development in the U.S. to help address these concerns. The National AI Advisory Committee, established through a 2020 law, has recommended that companies and government agencies create chief responsible AI officer roles, whose occupants would be charged with enforcing a proposed AI Bill of Rights. The committee has also recommended embedding AI-focused leadership in every federal agency.
In the meantime, AIAAIC (AI, Algorithmic, and Automation Incidents and Controversies), an independent organization, has taken up the work of making AI-related issues more transparent. Magnifi, an AI investing platform, analyzed ethics complaints about artificial intelligence collected by AIAAIC dating back to 2012 to see how concerns about AI have grown over the past decade. The complaints are drawn from media reports and submissions reviewed by AIAAIC.