
There’s an emerging new trend in the generative AI space — generative AI for cybersecurity — and Google is among those looking to get in on the ground floor.
At today’s RSA 2023 conference, Google announced Cloud Security AI Workbench, a cybersecurity suite powered by a specialized “security” AI language model called Sec-PaLM. An offshoot of Google’s PaLM model, Google says, it’s “finely tuned for security use cases” — incorporating security intelligence such as searching for software and malware vulnerabilities, threat indicators, and actor profiles of behavioral threats.
Cloud Security AI Workbench includes a suite of new AI-powered tools, such as Mandiant’s Threat AI, which will leverage Sec-PaLM to find, summarize, and act on security threats. (Remember, Google bought Mandiant in 2022 for $5.4 billion). VirusTotal, another Google property, will use Sec-PaLM to help subscribers analyze and explain the behavior of malicious scripts.
Elsewhere, Sec-PaLM will help customers of Chronicle, Google’s cloud-based cybersecurity service, search for security events and “discreetly” interact with the results. Meanwhile, AI users in Google’s Security Command Center will get “human-readable” explanations for being attacked by Sec-PaLM, including affected assets, recommended mitigation, and risk summaries for security, compliance, and privacy findings.
“While generative AI has captured the imagination lately, Sec-PaLM builds on years of foundational AI research by Google and DeepMind, and the deep expertise of our security teams,” Google wrote in a blog this morning. “We are just beginning to realize the power of applying generative AI to security, and we look forward to continuing to leverage this expertise for our customers and drive progress across the security community.”
These are pretty bold ambitions, especially considering that VirusTotal Code Insight, the number one tool in the Cloud Security AI Workbench, is only available in a limited preview at the moment. (Google says it plans to roll out the remaining offerings to “trusted testers” in the coming months.) And frankly, it’s not clear how well Sec-PaLM works — or doesn’t work — in practice. Sure, “Recommended Mitigation Actions and Risk Summaries” sounds helpful, but are the suggestions much better or more accurate because the AI model produced them?
After all, AI language models—no matter how sophisticated—make mistakes. And they’re vulnerable to attacks like instant injection, which can cause them to behave in ways the creators didn’t intend.
This doesn’t stop the tech giants, of course. In March, Microsoft launched Security Copilot, a new tool that aims to “summarize” and “understand” threat information using generative AI models from OpenAI including GPT-4. In press materials, Microsoft — similar to Google — claimed that generative AI would better equip security professionals to tackle new threats.
The jury is very much out on that one. In truth, generative AI for cybersecurity may be more hype than anything else — there is a paucity of studies on its effectiveness. We’ll see the results soon enough with any luck, but in the meantime, take Google and Microsoft’s claims with a healthy grain of salt.