In a move to strengthen its cybersecurity offerings, Google Cloud has introduced a platform that combines its security services with generative artificial intelligence (AI). Known as Google Cloud Security AI Workbench, the platform integrates Google’s Mandiant cyber intelligence unit and Chronicle security operations platform with Vertex AI infrastructure and a security-specific AI model, Sec-PaLM.
The platform aims to give cyber analysts an advanced tool: they can upload potentially malicious code for in-depth analysis by Sec-PaLM, receive breach alerts from Mandiant, and use an AI chat feature that taps Google’s extensive historical security data through Chronicle. That data draws on Google’s security experience across both its own systems and those of Google Cloud customers, enriched with Mandiant’s data and additional information from widely used products such as Google Chrome.
Incorporating generative AI developed by Google’s DeepMind, the platform supports intuitive interaction without requiring users to master specialist terminology. According to Sunil Potti, VP and General Manager of security at Google Cloud, the technology can examine malware samples, predict potential system vulnerabilities, and generate comprehensive, easily understandable explanations.
Google also plans to extend the platform’s capabilities by allowing firms to contribute their own data to help improve the AI model. Accenture has already joined as Google’s inaugural partner, with additional partners anticipated. Despite broader skepticism around generative AI, the platform is engineered specifically for cybersecurity and trained on Google’s security data, offering more than a simple chatbot interface.
However, the platform is still in what Potti called a “curation” stage, and it is expected to improve and refine as it continues learning from the data. For now it focuses on tasks Potti described as low-risk and high-reward, such as analyzing threat intelligence and writing server rules, whose output human professionals can audit to ensure the accuracy and reliability of the AI system.