LAS VEGAS — Walmart, Amazon and Microsoft have all reportedly warned employees not to share trade secrets or proprietary code when querying generative artificial intelligence tools such as ChatGPT. Judging by a CISO panel at the CyberRisk Alliance's Identiverse conference here on May 30, many other companies are considering doing the same.
When moderator Parham Eftekhari, executive vice president of collaboration at CyberRisk Alliance, asked how many attendees' organizations had policies in place regarding the use of AI, roughly half the audience raised their hands.
At the same session, Ed Harris, chief information security officer (CISO) of Mauser Packaging Solutions, volunteered that his company had issued rules similar to those enacted by Walmart and others: do not enter confidential company information into external AI tools. Harris said an over-enthusiastic employee had enlisted an AI tool to help enhance the company's marketing strategy, entering company information that the AI could retain and later surface to other users — potentially including competitors.
"I'm afraid someone will [ask the AI engine], 'Hey, can you tell me what Mauser is thinking from a marketing perspective?'" Harris said. That's why "we actually have a policy that it's OK to ask AI questions to get the creative process started, but we don't share detailed plans."
Bezawit Sumner, CISO of CRISP Shared Services, a non-profit healthcare technical support organization, agreed that as AI becomes more pervasive in employees' daily work, companies will inevitably need to establish usage parameters.
"There will be people who will try to use it because, A, they're interested, or B… they think they have to because they have to outdo other people," Sumner said. Whatever the reason, the key is "to make sure people are doing it the right way" and providing [them with] "a guardrail of 'do's' and 'don'ts'" for what AI can be used for.
Keep AI guidelines clear and simple for staff
AI policies vary according to the needs and concerns of individual companies. They may include broad or precise definitions of what constitutes sensitive information that should never be shared with AI tools. Alternatively, Sumner suggested, they could include instructions on how to recognize when responses from AI tools appear malicious or anomalous.
But whatever rules and guidelines are used, “policies should be clear and easy to understand,” said Sean Zadig, Yahoo’s vice president and CISO. “Use plain language.”
Zadig said it's important for security leaders to act nimbly in developing these policies to keep pace with the rapid rise in AI adoption and experimentation, and to be seen as enablers rather than impediments to employees' progress.
"Everyone in this room — all of your companies are probably doing their best to integrate and drive AI capabilities," Zadig said. "And you don't want to get in the way by saying 'Stop,' because [users are] just going to go around you, and you're going to lose the visibility you need to help them make the right decisions."
To that end, Sumner also encouraged soliciting input and feedback from the employees affected by an AI usage policy, including engineers, developers, and analysts. By doing so, "we get people to buy in early on, rather than just telling people, 'This is what you have to do,'" Sumner explained.
In fact, CISOs would be wise to remember that their own teams will likely rely on AI to combat future digital threats. They certainly don't want their policies to curb such efforts against adversaries who abuse AI for their own illicit gain.
"We need to ensure that the battle is at least symmetrical and that good AI efforts are not hindered," said Andre Durand, CEO and founder of Ping Identity, who delivered a solo opening keynote ahead of the panel session. "The risk of doing this right and protecting intellectual property is real for legitimate companies, but the same reservations and considerations do not exist on the other side."
Ultimately, no matter what an AI usage policy says, there are limits to what it can accomplish. The end result will depend on companies enforcing the rules and, ideally, hiring trustworthy employees who adhere to them.
After all, Harris said: "If someone at my company had bad intentions and they dumped something into the AI, I don't know how I would find that out, no matter how much effort I put into it."