
It is no secret that artificial intelligence is transforming the world of work. Employees already use a plethora of AI assistants to streamline everyday tasks, such as writing emails, developing code, crafting marketing strategies and even managing company finances. The trend is set to accelerate as the technology develops, yielding huge productivity benefits for organisations.

Yet as thousands of new AI-enabled applications are launched each week, many of them free to use, there are growing concerns about the data protection risks. 

Many organisations have no idea what AI apps and services are being used by their staff, or for what purpose. They are also unaware of what data is being shared and with whom, or how it is being managed and protected. 

This heightens the risk of data breaches, which carry significant financial and reputational costs. There is also a real possibility that organisations are unwittingly feeding AI tools with sensitive corporate information, contributing to the training of potentially competitive AI models.

So how can firms reap the benefits of AI while mitigating the risks?

‘Data protection is non-negotiable’

Neil Thacker is chief information security officer EMEA at Netskope, a secure access service edge (SASE) provider that helps organisations around the world to prevent data loss, leakage and misuse. He says the arrival of AI is much like the advent of cloud computing or even the internet, with companies still scrambling to understand the technology and its risks. 

“This comes as data regulation is being tightened up around the world, making the safeguarding of sensitive data non-negotiable for every business,” Thacker says. The EU’s existing GDPR rules and new AI Act, which is set to come into force over the next few years, are cases in point. 

At the same time, corporate use of AI-enabled apps is accelerating rapidly. According to Netskope Threat Labs’ Cloud & Threat Report 2023, organisations of 10,000 staff or more accessed at least five generative AI apps daily last year, with ChatGPT, Microsoft Copilot and GitHub Copilot among the most commonly used.

The algorithms that power these platforms develop and improve based on the data fed into them, which raises myriad copyright and intellectual property issues. For example, source code was being posted to ChatGPT, the most popular generative AI app, at a concerning rate of 158 incidents per month in 2023, according to Netskope research.

“If firms are not careful they could leave sensitive data such as proprietary IP, source code or financial information accessible to competitors. Without realising it you are helping train even smarter AI platforms that can help your competitors,” Thacker says. “The risk is immediate too. It used to take years to train powerful new algorithms but these days it can be done in a matter of days and weeks.”

Private AI?

Thacker says firms must deploy continuous data protection policies and tools to protect themselves. Chief information security officers (CISOs) should make an inventory of all the AI services in use across their organisation, identifying those that are truly relevant to the company. 

They then need to vet each platform vendor and assess its data policies, including whether it relies on third- or fourth-party support. 

“There are significant costs associated with AI technology, so it’s obvious that free or inexpensive options make their money in other ways – by selling data or the AI intelligence that it has contributed towards,” says Thacker. “In such cases, a thorough examination of the terms and conditions becomes imperative for CISOs to ensure the protection and privacy of sensitive data.” 

What many organisations do not realise is that popular AI apps often offer private subscription plans, under which, for a fee, customer data is not used to update the public model. Yet given the large and growing number of platforms in use in the corporate world, subscribing in this way for every app would be costly and impractical, and would still fail to offset future risk.


Data loss prevention (DLP) tools must be deployed to help bridge these gaps. Take Netskope’s platform, which uses a proprietary system to ensure no sensitive information is used within input queries to AI applications without informed consent.

It plugs seamlessly into cloud services, flagging the risks associated with more than 85,000 cloud apps and services, including AI apps. Powered by AI itself, it learns to recognise sensitive data based on an organisation’s preferences and to identify it in real time.
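To make the idea concrete, here is a minimal sketch of how such an inline check on AI-bound prompts might work. The patterns, category names and gateway logic below are illustrative assumptions, not Netskope’s actual implementation, which draws on far richer signals such as machine-learning classifiers and file fingerprinting.

```python
import re

# Illustrative patterns a DLP policy might match before a prompt
# reaches an external AI service. Real products combine many more
# signals (ML classifiers, fingerprinting, exact-match lists).
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk-|AKIA)[A-Za-z0-9_-]{16,}"),
    "source_code": re.compile(r"\b(?:def|class|import)\s+\w+"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the sensitive-data categories detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def gateway_decision(prompt: str) -> str:
    """Decide whether a prompt bound for an AI app can pass unchallenged."""
    hits = scan_prompt(prompt)
    if not hits:
        return "allow"
    # This is where a real gateway would show a point-in-time
    # warning and ask the user for informed consent.
    return "warn: prompt contains " + ", ".join(hits)

print(gateway_decision("Summarise this meeting for me"))
print(gateway_decision("Review this: def calculate_margin(revenue):"))
```

The key design point is that the check sits inline between the user and the AI service, so any warning can be shown before data leaves the organisation.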

When a risk is detected, it issues a pop-up message telling the employee the risk level of the app they are using on a scale of 0 to 100.

“We base the score on 50 variables, including the security controls that platform has in place, its privacy policy, where any data is being processed and the regulatory challenges, and any other potential legal liability issues,” says Thacker. “If an app is high risk the employee can make a call on whether to use it depending on the sensitivity of the data involved. Netskope may also be able to offer them an alternative that is more secure for the organisation.”
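As a rough illustration of this kind of scoring, the sketch below combines a handful of weighted findings into a 0-to-100 score. The variable names and weights are hypothetical stand-ins for the roughly 50 criteria Thacker describes, not Netskope’s actual model.

```python
# Hypothetical weighted criteria; the real model scores apps against
# some 50 variables covering security controls, privacy policy,
# data-processing locations and legal liability.
WEIGHTS = {
    "trains_public_model_on_customer_data": 30,
    "no_encryption_at_rest": 25,
    "vague_privacy_policy": 20,
    "data_processed_in_unregulated_region": 15,
    "no_security_certifications": 10,
}

def risk_score(findings: set[str]) -> int:
    """Sum the weights of the findings that apply, capped at 100."""
    return min(sum(WEIGHTS.get(f, 0) for f in findings), 100)

# Example: a free AI app that trains on user prompts and publishes
# only a vague privacy policy scores 50 out of 100.
print(risk_score({"trains_public_model_on_customer_data",
                  "vague_privacy_policy"}))
```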

This behavioural approach to data security is well supported: research has found that as many as 95% of cybersecurity incidents stem from human error, and point-in-time warnings train people continuously, much as reinforcement is used to train AI models. “I use the analogy of radar speed cameras that tell you your speed,” says Thacker. “Once you are reminded how fast you are going and the consequences, you slow down. It’s about point-in-time awareness of the risks.”

Founded in 2012, Netskope has become a leader in the SASE space, offering unrivalled visibility and real-time data and threat protection for cloud services, websites and private apps. Building on that reputation, the US company is now leading the way in AI security globally.

“As the digital transformation of companies continues, AI will offer enormous benefits in terms of enhanced efficiency, competitiveness and end-user experiences,” says Thacker. “But it has also become the frontline in the fight to protect data, and organisations that do not adapt to the evolving threat landscape could pay a high price.”


