The introduction of artificial intelligence has been hailed as a game-changing development, with the potential to shake up multiple industries. One application alone, ChatGPT, has been reported to be the fastest-growing app ever, leaving TikTok in the dust. Users run the gamut, from people who want to write a personal or business letter, to legal professionals who harness it for assistance with research, document generation, developing basic legal advice, and legal analysis.

Unfortunately, as with many other developments, bad actors have moved quickly to turn AI to their own purposes. But an experienced cybersecurity consultant can suggest solutions to reduce the risk.

Facebook owner Meta recently sounded the alarm about the threats, noting that hackers worldwide are trying to use ChatGPT to break into devices. Meta’s security team revealed that hackers are attempting to access people’s devices by deploying browser extensions that claim to offer ChatGPT-based tools, and by offering online tools in app stores that contain malware.

“These malware families – including Ducktail, NodeStealer, and newer malware posing as ChatGPT and other similar tools – targeted people through malicious browser extensions, ads, and various social media platforms with an aim to run unauthorized ads from compromised business accounts across the internet,” according to the Meta report.

Since March 2023, the company’s security teams “have found around 10 malware families using ChatGPT and other similar themes to compromise accounts across the internet,” the report noted. “In one case, we’ve seen threat actors create malicious browser extensions available in official web stores that claim to offer ChatGPT-based tools. They would then promote these malicious extensions on social media and through sponsored search results to trick people into downloading malware.

“In fact, some of these extensions did include working ChatGPT functionality alongside malware, likely to avoid suspicion from official web stores,” the report continued. “We’ve blocked over 1,000 unique ChatGPT-themed malicious URLs from being shared on our platforms and shared them with our industry peers so they, too, can act as appropriate.”

Meanwhile, security organizations and so-called “white hat” ethical hackers are uncovering ways to use ordinary speech, rather than computer code, to trick ChatGPT and other generative AI systems into producing detailed “how-to” instructions for criminal activities such as making methamphetamine or hotwiring a car – bypassing the guardrails that are supposed to prevent the models from describing illegal acts. Additionally, “prompt injection” attacks, which exploit shortcuts in how AI models process instructions, can slip malicious data or commands into the models.

The best offense …

Because ChatGPT can process and store large amounts of data, one of the first steps to reinforce a business’s cyber defenses is to ensure that the enterprise’s data is encrypted both in transit and at rest – this way, even if a malicious actor gains access, they will be unable to read or exploit the information.
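As an illustration, here is a minimal sketch of encrypting data at rest, assuming the widely used third-party Python `cryptography` package; the sample record is invented, and in practice the key would live in a secrets manager or hardware security module, never alongside the data it protects.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key (in production, store this in a secrets
# manager or HSM, never next to the encrypted data).
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt before writing to disk ("at rest") ...
record = b"client ledger: account balance confidential"
token = cipher.encrypt(record)

# ... so a stolen copy of the file is unreadable without the key.
assert token != record

# Only a holder of the key can recover the plaintext.
assert cipher.decrypt(token) == record
```

The same principle applies in transit: connections to the service should be made only over TLS, so the data is never readable on the wire.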


Another threat vector turns a useful asset – ChatGPT bots, software applications programmed to perform specific tasks – against the user. These bots are handy for automating routine work, but they are also vulnerable to takeover, in which a malicious actor seizes control of a bot and uses it for their own purposes, either by exploiting vulnerabilities in its code or by guessing the user’s password.

To safeguard against this, it is essential for organizations to secure their systems with strong authentication protocols, keep up with security updates, and patch any software vulnerabilities that are discovered. Additionally, companies should use multifactor authentication – an authentication method that requires the user to provide two or more verification factors to gain access to a resource – and regularly change passwords.
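To make the multifactor step concrete, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238 – the scheme behind most authenticator apps – using only the Python standard library. The Base32 secret below is the RFC’s published test key, shown for illustration only.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset derived
    # from the last nibble of the HMAC, then reduce to N decimal digits.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59.
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, for_time=59))  # -> 287082
```

A login that requires a fresh TOTP code alongside the password means a guessed or stolen password alone is no longer enough to take over an account or its bots.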

Data leakage is another danger, since data on ChatGPT systems may be exposed or stolen through improper configuration or malicious actors. To reduce the likelihood of such loss, robust access controls should be implemented, allowing only authorized personnel to access the system and its resources. Further, all activity on the system should be monitored on a regular basis so that any suspicious behavior or incident is detected in a timely manner. And scheduling frequent backups of all data stored in the system ensures that even if a breach does occur, lost information can be quickly restored.
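A sketch of what such access controls and monitoring can look like in code – here a simple role-based permission check that writes every attempt to an audit log. The roles, users, and actions are hypothetical, invented for illustration.

```python
import logging

logging.basicConfig(format="%(asctime)s AUDIT %(message)s")
audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)

# Hypothetical role-based access control table: each role maps to the
# actions it is permitted to perform on the system.
ROLE_PERMISSIONS = {
    "admin":   {"read", "write", "configure"},
    "analyst": {"read"},
}

def authorize(user, role, action):
    """Allow the action only if the role grants it, and log every
    attempt so suspicious behavior shows up in the audit trail."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit.info("user=%s role=%s action=%s allowed=%s",
               user, role, action, allowed)
    return allowed

assert authorize("pat", "admin", "configure") is True
assert authorize("sam", "analyst", "write") is False  # denied and logged
assert authorize("eve", "intern", "read") is False    # unknown role: deny by default
```

Denying by default for unknown roles, and logging denials as well as approvals, is what lets a monitoring team spot repeated unauthorized attempts before they become a breach.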


Malicious code, which may be introduced into a ChatGPT system through user input or downloads from third-party sources, is another constant threat. An enterprise should regularly scan systems for malware and viruses, while protective anti-virus and other software – capable of detecting and removing threats before they become an issue – should be installed.
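One common building block of such scanning is comparing file digests against a blocklist of known-bad samples. Here is a minimal Python sketch; the “malicious” payload and its hash are hypothetical stand-ins for what a real threat-intelligence feed would supply.

```python
import hashlib
import pathlib
import tempfile

# Hypothetical blocklist of SHA-256 digests of known malware samples;
# real scanners pull these from threat-intelligence feeds.
KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"EICAR-like test payload").hexdigest(),
}

def scan_file(path):
    """Return True if the file's SHA-256 digest matches a known-bad hash."""
    digest = hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()
    return digest in KNOWN_BAD_SHA256

# Demo: a flagged file versus a benign one.
with tempfile.TemporaryDirectory() as d:
    bad = pathlib.Path(d, "payload.bin")
    bad.write_bytes(b"EICAR-like test payload")
    good = pathlib.Path(d, "notes.txt")
    good.write_bytes(b"quarterly report draft")
    assert scan_file(bad) is True
    assert scan_file(good) is False
```

An exact-hash match only catches byte-identical samples, which is why commercial anti-virus tools layer heuristics and behavioral analysis on top of this basic technique.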

AI applications like ChatGPT can help to boost a business’s productivity. But these time-saving advances can also open more entry points for bad actors. Forward-looking organizations can protect themselves, however, by working with knowledgeable IT support services professionals.

Carl Mazzanti is president of eMazzanti Technologies in Hoboken.
