Artificial intelligence took the world by storm with the release of the ChatGPT chatbot and image generators such as Midjourney. Aside from scattered stories about fraud, deepfakes, and occasional silliness, it appeared the AI revolution was upon us.

Then Google launched its AI chatbot Gemini, and the result was far more disastrous than anyone could imagine.

Gemini’s results, so obviously tethered to Democratic politics and an embrace of intersectionality, were an embarrassment. In one particularly bad instance, someone prompted Gemini to create images of German soldiers in 1943, and it produced four images, two of which depicted an Asian woman and a black man dressed in German military uniforms. In another, someone asked for images of Greek philosophers, and it produced “Greek” figures who were Indian women and a Native American man. One person asked it to generate images of a “white family.” Gemini declined, saying it could “not generate images that specify ethnicity or race,” yet it proceeded to create images of a “black family” when asked.

Still, that is the private sector, and the mishap was of Google’s own making. It is in the public sector, through government regulation, that AI stands to lose the most, especially if Congress does not step up to the plate.

It might seem odd to call for Congress to stick its nose in and make up some rules, but without clear guidance from the federal government’s lawmaking body, AI stands to be regulated by a patchwork of state rules and by overreaching federal bureaucrats with a political agenda.

Christina Montgomery, chief privacy and trust officer at IBM; Gary Marcus, professor emeritus at New York University; and Sam Altman, CEO of OpenAI, are sworn in during a Senate Judiciary Subcommittee on Privacy, Technology, and the Law oversight hearing to examine artificial intelligence on May 16 in Washington, D.C. (Andrew Caballero-Reynolds / AFP via Getty Images)

It is why the Chevron case the Supreme Court took up this term is so important. The 1984 decision in Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc., the source of what is known simply as the “Chevron doctrine,” requires federal judges to defer to federal agencies’ interpretations of ambiguous federal statutes. In short, it let lower court judges off the hook and gave bureaucrats the authority to “interpret” federal law. The Supreme Court seems poised to overturn the 40-year-old decision, much to the chagrin of those who think the “experts” should make such decisions.

Overturning it would mark a welcome change. For one, bureaucrats at federal agencies would be less likely to issue certain regulations, knowing they’d have to get past a federal judge. Second, it would prevent the wild policy swings from one administration to the next. Congress, not the executive branch, would have to do its job. Members of Congress, of course, while often caterwauling about executive power, share much of the blame for giving the executive branch the power to make policy, a duty once reserved, with some exceptions, almost entirely for the legislative branch.

The people who would rather those decisions be left to them include Lina Khan, the head of the Federal Trade Commission and someone who hasn’t met a lawsuit she didn’t want to file or a merger she didn’t oppose. It’s not as though she never laid out her agenda: In a paper she wrote as a student at Yale Law School in 2017, she called for broadening FTC scrutiny of business and wrote that “antitrust laws must take political values into account.”

During a talk at Stanford University last November, Khan had already set her sights on generative AI, saying the FTC “will be cleareyed in ensuring that claims of innovation are not used as cover for lawbreaking.” Then in January, the agency announced it had “issued orders to five companies requiring them to provide information regarding recent investments and partnerships involving generative AI companies and major cloud service providers.”

Others on the Left are advocating for regulation as well. Despite data showing that AI chatbots’ responses lean leftward, left-liberal organizations are taking note of the “threat” AI poses to minority groups and the LGBT community.

In October 2022, the White House released a “blueprint for an AI bill of rights.” In language that would assuage people such as DEI-monger Ibram X. Kendi, one of the blueprint’s items targeted “algorithmic discrimination protections,” saying, “You should not face discrimination by algorithms, and systems should be designed in an equitable way.”

The American Civil Liberties Union, an organization once devoted to protecting constitutional rights that has since morphed into another left-wing advocacy group, published a commentary piece, “How artificial intelligence can deepen racial and economic inequities.”

Access Now, a group that defines itself as one that “defends and extends the digital rights of people and communities at risk,” complete with an image of a raised fist on its website, ran an article titled, “Computers are binary, people are not: how AI systems undermine LGBTQ identity.”

Salon published an article titled, “Expert: Misinformation targeting Black voters is rising — and AI could make it more ‘sophisticated.’” The “expert” in the story was an attorney with the left-wing advocacy group Southern Coalition for Social Justice.

With advocacy groups going after AI and Congress failing to act, states are jumping in and creating their own legislation. One bill introduced in California would prohibit companies from releasing AI tools without first testing them for “unsafe” behavior and without providing a mechanism to shut the technology down completely if necessary.

Another bill would require “developers and deployers of automated decision tools (ADTs) that use AI to complete an annual assessment for the Civil Rights Department to describe the purpose, uses, and context of the technology in making ‘consequential decisions impacting natural persons.’”

A patchwork of laws that differ from state to state would create nothing but chaos, stifling innovation and costing consumers more as companies deploying AI tools recoup the costs of complying with 50 different state regimes.

Considering the likelihood that the Supreme Court will overturn the Chevron ruling, it becomes that much more critical for Congress to do its job. Congress must write legislation that is specific, supersedes most state laws, and creates an environment in which AI can thrive, along with a modern legal framework judges can use when presiding over lawsuits related to the technology.

It would also give Congress the opportunity to lead in keeping the Left from interfering with AI innovation through its absurd habit of viewing everything through a lens of race and gender identity. How does one go about designing an AI tool in an “equitable way,” as the White House wants? It is nonsensical.


AI is not all about asking a chatbot to write a cover letter for a job application or to generate images of ducks playing football. AI has made strides in healthcare, law enforcement, tax filing, entertainment, and limiting bank fraud. In one case, researchers used AI to decipher ancient Roman texts carbonized in the deadly Mount Vesuvius eruption of A.D. 79.

Congress must stop playing games. Holding hearings to berate Google executives over whether the Biden administration had a role in creating Gemini is performative nonsense aimed at stoking anger and garnering social media engagement. Congress needs to get to work. If members of the House and Senate don’t, then the likes of Khan and Gov. Gavin Newsom (D-CA) will.

Andrea Ruth is a writer from the Pacific Northwest now residing in the DC metro area of West Virginia.


