GenAI-related security

by Angus Jones

New research from Zscaler, Inc. (NASDAQ: ZS), the leader in cloud security, suggests that organisations are feeling the pressure to rush into generative AI (GenAI) tool usage, despite significant security concerns. According to its latest survey “All eyes on GenAI-related security”, although 85% of Australian and New Zealand (ANZ) organisations consider GenAI tools like ChatGPT to be a potential security risk, 97% are already using them in some guise within their businesses.  
 
Understanding the risk, 70% of ANZ organisations are monitoring the usage of GenAI tools, and the survey reveals ANZ is leading the way in GenAI security, with 85% of organisations implementing GenAI-related security measures. Globally, around two-thirds (66%) of organisations have implemented security measures, with a further 31% planning to add measures to protect critical data.

“GenAI tools, like ChatGPT, offer ANZ businesses the opportunity to improve efficiencies, innovation and the speed at which teams can work, but we can’t ignore the potential security risk of some tools, especially in light of the recently announced 2023–2030 Australian Cyber Security Strategy,” said Heng Mok, Chief Information Security Officer, Asia Pacific and Japan at Zscaler. “In accordance with cyber shield #2 on promoting the safe use of emerging technology such as GenAI, it is very encouraging to see that IT teams in ANZ are already cognisant of the risks and are monitoring usage and implementing security measures to ensure their data and their customers’ data are secure.”

The rollout pressure for AI tools isn’t coming from where people might think, however, with the results suggesting that IT has the ability to regain control of the situation. Despite mainstream awareness, it is not employees who appear to be the driving force behind current interest and usage – only 12% of ANZ respondents said it stemmed from employees. Instead, 51% said usage was being driven by the IT teams directly.  

“IT teams leading the charge when it comes to GenAI should be reassuring to ANZ business leaders,” Heng Mok added. “It demonstrates ANZ organisations are using AI tools with security considerations top of mind. Given the fast-paced nature of GenAI, it is essential that businesses continue to prioritise educating employees and implementing security measures in response to rapidly changing technologies while enabling the business.”

With 45% of ANZ respondents anticipating a significant increase in interest in GenAI tools before the end of the year, organisations that have not implemented security measures need to act quickly to close the gap between use and security.

Steps business leaders can take to ensure secure GenAI use in their organisation:

  • Develop an acceptable use policy for GenAI.
  • Implement a holistic zero trust architecture to authorise only approved AI applications and users. 
  • Conduct thorough security risk assessments for new AI applications to clearly understand and respond to vulnerabilities. 
  • Establish a comprehensive logging system for tracking all AI prompts and responses (a minimal sketch of this step follows the list).
  • Enable zero trust-powered Data Loss Prevention (DLP) measures for all AI activities to safeguard against data exfiltration. 
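As a rough illustration of the logging step above, and not part of Zscaler's guidance, the sketch below shows one way an IT team might wrap GenAI requests so that every prompt and response is written to an audit log before the result is returned to the user. The call_genai_api helper, the log file name and the user identifier are placeholders assumed for the example; in practice this kind of logging would sit alongside the zero trust and DLP controls listed above.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical sketch: a thin wrapper that records every GenAI prompt and
# response in an append-only audit log. call_genai_api() stands in for the
# organisation's approved GenAI client and is not a real vendor API.

logging.basicConfig(
    filename="genai_audit.log",  # assumed log destination
    level=logging.INFO,
    format="%(message)s",
)


def call_genai_api(prompt: str) -> str:
    """Placeholder for the organisation's approved GenAI client."""
    return f"[model response to: {prompt[:40]}]"


def audited_completion(user_id: str, prompt: str) -> str:
    """Send a prompt to the GenAI tool and record both sides of the exchange."""
    response = call_genai_api(prompt)
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt": prompt,
        "response": response,
    }))
    return response


if __name__ == "__main__":
    print(audited_completion("jdoe", "Summarise our Q3 security report"))
```

The design choice here is simply that prompts and responses are captured at a single choke point, which is what makes later review, DLP inspection and incident response tractable; any real deployment would need to consider retention, access controls and the sensitivity of the logged content itself.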
