LLM-powered generative AI for a new set of security capabilities
Many companies have been spending a lot of time on generative AI, and the same goes for Google Cloud, which has been exploring large language models in particular.
Google Cloud’s new security products are based on its latest LLM technology (SecPaLM), announced at Google I/O ahead of Sunil Potti’s online media briefing. Sunil said, “In the world of security obviously we have been working on ML-related AI capabilities, and all things related to detection, response, automation, operations and so forth.”
In the world of security, standard AI models will not do, because they first need to be trained to identify threats, TTPs, zero-day attacks, and all the other specialised concepts that exist in the world of cybersecurity.
With the AI model that Google Cloud proposes, an organisation could apply front-line threat intelligence gathered from around the world, integrated with Google’s own threat intelligence.
“Now with infusing generative AI, we have the ability, in my opinion, to prevent any downstream impact,” Sunil said, adding that the scalability this requires is made possible by Google’s security infrastructure.
The second element in Google Cloud’s security strategy is convergence. This implies an approach of building security into everything that they do.
Sunil hypothesised, “And so imagine a world where if generative AI can be used to generate code, why not also generate the identity and access controls associated with it? Why not also generate the software supply chain test vulnerabilities associated with it? Why not also generate the compliance checks associated with the code that you’re generating?”
Ultimately, applications would have prescriptive security controls integrated and systems could secure themselves.
The final element is to do with the current talent gap. “Our approach is to take an extended set of people inside a company and make them security experts, and elevate the capability level of the next level of security experts.”
Generative AI for security
It helps to know where Google Cloud is coming from with its latest AI capabilities for security, because the company claims it is not creating yet another chat interface to a security product.
Instead, it has built a platform powered by its new LLM technology, so that anyone else can build a security application on top of it.
“We have taken an intentional approach, leveraging Vertex AI (compliance, responsible AI capabilities, etc.) and delivered all these capabilities in an enterprise-grade fashion.
“It now comes with SecPaLM, built around a construct called the security AI workbench.”
A customer can essentially leverage these capabilities for their security use cases, while at the same time plugging in their own data, so they can create prompts and make them contextual to their environment.
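As a loose illustration of what “plugging in your data to create contextual prompts” could look like in practice, here is a minimal sketch. The function name, prompt template, and log excerpt are all hypothetical, invented for illustration; they are not part of any announced SecPaLM or Security AI Workbench API, which Google has not publicly documented at this level of detail.

```python
# Hypothetical sketch: enriching a security question with an
# organisation's own context before handing it to an LLM backend.
# All names here are illustrative, not real SecPaLM API calls.

PROMPT_TEMPLATE = """You are a security analyst assistant.

Organisation context:
{context}

Recent log excerpt:
{logs}

Analyst question:
{question}

Answer with the likely threat, affected assets, and suggested next steps."""


def build_contextual_prompt(context: str, logs: str, question: str) -> str:
    """Combine organisation-specific data with an analyst's question
    into a single prompt string for a security-tuned LLM."""
    return PROMPT_TEMPLATE.format(
        context=context.strip(),
        logs=logs.strip(),
        question=question.strip(),
    )


if __name__ == "__main__":
    prompt = build_contextual_prompt(
        context="E-commerce company; crown jewels: payments DB, customer PII.",
        logs="2024-05-01T12:03Z deny tcp 10.0.0.5 -> 203.0.113.7:4444 (x120)",
        question="Is the repeated outbound traffic on port 4444 suspicious?",
    )
    print(prompt)
```

The point of the sketch is the design, not the code: the platform supplies the model and the enterprise guardrails, while the customer supplies the context that makes the answer relevant to their own environment.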