Jakkal says that while machine learning security tools have been effective in specific areas, like monitoring email or activity on individual devices, known as endpoint security, Security Copilot pulls all of these separate streams together and extrapolates the big picture. “With Security Copilot, you can capture what others may have missed because it forms that connective tissue,” she says.
Security Copilot is largely powered by OpenAI’s GPT-4, but Microsoft notes that it also incorporates a proprietary Microsoft security-specific model. The system tracks everything that is done during an investigation; the resulting record can be audited, and the documents it produces for distribution can all be edited for accuracy and clarity. If something Copilot suggests during an investigation is wrong or irrelevant, users can click the “Off Target” button to further train the system.
The platform offers access controls so that colleagues can be granted access to some projects and not others, which is especially important when investigating possible insider threats. And Security Copilot provides a sort of backstop for 24/7 monitoring: even if someone with a specific skill set isn’t working on a given shift or on a given day, the system can offer basic analysis and suggestions to help fill the gaps. For example, if a team wants to quickly analyze a potentially malicious script or binary, Security Copilot can kickstart that work and contextualize the software’s behavior and goals.
Microsoft stresses that customer data is not shared with others and is “not used to train or augment core AI models.” However, Microsoft prides itself on using “65 trillion daily signals” from its vast worldwide customer base to inform its threat-detection and defense products. But Jakkal and her colleague Chang Kawaguchi, Microsoft vice president and AI security architect, point out that Security Copilot is subject to the same data-sharing restrictions and regulations as all of the security products it’s integrated with. So if you already use Microsoft Sentinel or Defender, Security Copilot must follow the privacy policies of those services.
Kawaguchi says Security Copilot was designed to be as flexible and open as possible, and customer feedback will inform future feature additions and enhancements. The usefulness of the system will ultimately come down to how insightful and accurate it can be about each customer’s network and the threats they face. But Kawaguchi says the most important thing is that defenders start benefiting from generative AI as soon as possible.
As he puts it: “We have to equip the defenders with AI as the attackers are going to use it no matter what we do.”