Google’s AI Plans Now Include Cybersecurity

As people try to find uses for generative AI that are less about making fake photos and more about being genuinely useful, Google plans to point AI at cybersecurity and make threat reports easier to read.

In a blog post, Google writes that its new cybersecurity product, Google Threat Intelligence, will combine the work of its Mandiant cybersecurity unit and VirusTotal Threat Intelligence with the Gemini AI model.

The new product uses the large language model Gemini 1.5 Pro, which Google says reduces the time required to reverse engineer malware attacks. The company claims that Gemini 1.5 Pro, released in February, took just 34 seconds to analyze the code of the WannaCry virus – the 2017 ransomware attack that crippled hospitals, businesses and other organizations around the world – and to identify a kill switch. That is impressive, but not surprising given how well LLMs can read and write code.
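
A kill switch, in this context, is a built-in abort condition: WannaCry famously probed a hard-coded, unregistered domain and stood down once a researcher registered it. As a rough illustration of the pattern only – this is a hypothetical Python sketch, not WannaCry’s actual code, and the domain is a placeholder – the logic looks something like this:

```python
import urllib.request

# Hypothetical kill-switch check, loosely modeled on the pattern WannaCry
# used: probe a hard-coded domain and abort if it answers.
KILL_SWITCH_URL = "http://example-kill-switch-domain.invalid"  # placeholder

def kill_switch_active() -> bool:
    """Return True if the kill-switch domain is reachable."""
    try:
        urllib.request.urlopen(KILL_SWITCH_URL, timeout=5)
        return True   # domain answered: stand down
    except OSError:
        return False  # domain unreachable: the malware would proceed

if __name__ == "__main__":
    if kill_switch_active():
        print("Kill switch tripped; exiting.")
    else:
        print("No kill-switch response; the payload would run here.")
```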

Another possible use of Gemini in the threat space is summarizing threat reports in natural language within Threat Intelligence, so that organizations can assess how potential attacks might affect them – in other words, so that they neither over- nor under-react to threats.
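
Google hasn’t published how the product wires this up, but as a minimal sketch of what natural-language summarization looks like against the publicly available google-generativeai Python SDK (the model choice, prompt and sample report here are assumptions, not the product’s actual pipeline):

```python
import google.generativeai as genai

# Sketch against the public Gemini SDK; Threat Intelligence's real
# integration is not public, so treat this as illustrative only.
genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

model = genai.GenerativeModel("gemini-1.5-pro")

raw_report = """
CVE-2017-0144: SMBv1 remote code execution ("EternalBlue").
Observed exploitation by ransomware families against unpatched
Windows hosts exposed on port 445.
"""

prompt = (
    "Summarize this threat report in plain language for a non-specialist "
    "security team, and say which kinds of organizations are most exposed:\n"
    + raw_report
)

response = model.generate_content(prompt)
print(response.text)
```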

According to Google, Threat Intelligence also draws on a vast network of sources to monitor potential threats before an attack occurs. This gives users a broader view of the cybersecurity landscape and lets them prioritize what to focus on. Mandiant provides human experts who monitor potentially malicious groups, along with consultants who work with companies to block attacks. The VirusTotal community also regularly publishes threat indicators.

The company also plans to use Mandiant’s experts to assess security vulnerabilities around AI projects. Through Google’s Secure AI Framework, Mandiant will test the defenses of AI models and assist in red-teaming efforts. While AI models can help summarize threats and reverse engineer malware attacks, the models themselves can sometimes fall prey to malicious actors. These threats include “data poisoning,” in which attackers plant bad data in the material AI models train on, preventing the models from responding to certain prompts.
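
To make the idea concrete, here is a toy sketch – entirely illustrative and not tied to any Google system – using the simplest form of poisoning, label flipping, rather than the prompt-suppression attacks described above. Corrupting a fraction of training labels measurably degrades a simple scikit-learn classifier:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy data-poisoning demo: flip a fraction of training labels and
# compare a clean classifier against one trained on poisoned data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Poison" 30% of the training labels by flipping them.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
idx = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print(f"clean accuracy:    {clean.score(X_test, y_test):.2f}")
print(f"poisoned accuracy: {poisoned.score(X_test, y_test):.2f}")
```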

Of course, Google isn’t the only company combining AI with cybersecurity. Microsoft has launched Copilot for Security, based on GPT-4 and Microsoft’s cybersecurity-specific AI model, giving cybersecurity professionals the ability to ask questions about threats. Whether either is actually a good use case for generative AI remains to be seen, but it’s nice to see it being used for something other than images of a pretentious pope.
