Artificial intelligence, or AI, has been increasingly present in everyday life for decades, but the launch of the conversational robot ChatGPT marked a turning point in its perception — © AFP/File Camille LAFFONT

Regulating AI is inevitable. The problem is that the scope of artificial intelligence operations is so vast that nobody is quite sure where to start. There are some simple solutions, provided you understand the problems.

The whole basis of law is redress for injury or loss, or the enforcement of legal entitlements. These entitlements may include property rights, privacy, and so on. In effect, artificial intelligence laws should mirror existing laws, and any regulation of artificial intelligence has to be consistent with those laws by default.

Simple enough…maybe.

The current state of play is that high-level politicians and technology corporations haven’t got to square one yet. As you’d expect, everybody is spruiking a series of feel-good buzzwords compressed into whatever media time they may have.

Practical applications of artificial intelligence laws must relate to:

A clear legal definition of artificial intelligence

Privacy and security

Property and assets

Risk management in accordance with the type of AI service provided

Third-party interactions and involvement with user-generated artificial intelligence information

Artificial intelligence supplier service obligations

Confidentiality requirements for professional services such as doctors, lawyers, and accountants

Effective legal cover for major hacks and associated financial liabilities, which is long overdue

The major point to be made here is that all of these essential legal requirements are already well covered by existing law. There is no need to write an entire raft of legislation for AI. AI laws also cannot be in conflict with existing laws.

An additional problem arises with the sheer quantity of information generated by artificial intelligence. To use a very annoying expression, “You’re either compliant or complicit”.

There are environmental factors to be considered. It’s likely that this AI-generated information will be seen as marketable by somebody. The dark net will buy anything, however useless, but that information is still somebody else’s property. People could be at serious risk if information is leaked. Legal rights are inherent in personal information; it is effectively personal property.

There’s a much less obvious problem here. Artificial intelligence has its own natural legal defence. It is perfectly capable of creating situations where quite literally nobody is at fault.

Whatever the legal issue may be, it could possibly be a system error. That doesn’t alter the fact that somebody is on the wrong end of that system problem. The question is: who’s liable?

I can see major insurance issues working their way into these problems soon enough. I can also see incredible lengths of time being used up on various legal problems created by artificial intelligence. Even quick redress of legal complaints may prove impossible to arrange.

It’s quite reasonable to believe that we actually need a different class of law to manage these matters. It’s fair to say that artificial intelligence involves a level of specialization similar to medicine, forensics, and things like DNA evidence.

The word “evidence” is the major player in any laws related to artificial intelligence. Evidence standards will have to be established, and somebody is going to have to be able to access data for the purposes of analysis. This is almost exactly the same thing as “discovery” in a conventional case.

We also have an unavoidable critical issue for AI laws. In any legal case, the quality of evidence is critical. Evidence is either acceptable or unacceptable to the court. Some types of evidence may even be inadmissible. When you’re talking about billions of lines of code, that is a very high bar to set. You can also expect some level of dysfunction in an adversarial environment.

The courts themselves will have a massive administrative problem with the sheer scale and scope of artificial intelligence lawsuits. There could be quite literally millions of lawsuits all of which will require court administration processes.

That’s likely to be expensive, incredibly time-consuming, and in some cases unworkable. A possible solution is that the old legal theory of “uncontested evidence” may save a lot of time and space.

The choice is to get this right, or things get very messy, and fast.

___________________________________________________________

Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.
