Join top executives in San Francisco on July 11-12 to hear how leaders are integrating and optimizing AI investments for success. Learn more
Earlier this week, a group of more than 1,800 artificial intelligence (AI) leaders and technologists, from Elon Musk to Steve Wozniak, published an open letter calling on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4, citing “serious risks to society and humanity.”
A pause could help society better understand and regulate the risks posed by generative AI, but some argue it could also be an attempt by lagging competitors to catch up with leaders in AI research such as OpenAI.
“The six-month pause is a plea to stop the training of models more powerful than GPT-4,” Gartner distinguished VP analyst Avivah Litan told VentureBeat. “GPT-5 is expected to follow soon and to move us toward AGI [artificial general intelligence], and once AGI arrives, it will likely be too late to institute safety controls that effectively guard human use of these systems.”
Despite concerns over the societal risks posed by generative AI, many cybersecurity experts doubt that a pause in AI development would help at all. They argue that such a pause would provide only a temporary reprieve for security teams to develop their defenses and prepare for an increase in social engineering, phishing and malicious code generation.
Why Pausing Generative AI Development Isn’t Realistic
From a cybersecurity perspective, one of the most compelling arguments against a moratorium on AI research is that it would affect only vendors, not malicious actors. Cybercriminals would continue to develop new attack vectors and refine their techniques.
“In any technological breakthrough, it is essential that organizations and companies with ethics and standards continue advancing the technology, to ensure it is used in the most responsible way possible,” McAfee CTO Steve Grobman told VentureBeat.
At the same time, banning the training of AI systems could be seen as regulatory overkill.
“AI is applied mathematics and we cannot legislate, regulate or prevent people from doing it. Rather, we must understand it and use it responsibly where appropriate. We need to educate our leaders and recognize that our adversaries will try to exploit it,” Grobman said.
So what should we do?
If pausing the development of generative AI entirely is impractical, regulators and private-sector organizations should instead work toward a consensus on the parameters of AI development, the level of built-in protections required for tools like GPT-4, and the countermeasures enterprises can use to mitigate the associated risks.
“AI regulation is an important and ongoing debate, and legislation on the moral and safe use of these technologies remains an urgent challenge for legislators. Since the range of use cases is almost endless, from medicine to aerospace, regulation should draw on sector-specific knowledge,” Justin Fier, SVP of red team operations at Darktrace, told VentureBeat.
“Achieving a national or international consensus on who should be held accountable for misuse of all kinds of AI and automation, not just generative AI, is a significant challenge, and a brief pause in the development of generative AI models is unlikely to solve it,” Fier said.
Rather than a pause, the cybersecurity community would be better served by accelerating the discussion on how to manage the risks of malicious use of generative AI, and by encouraging AI vendors to be more transparent about the guardrails they have implemented to prevent new threats.
How to regain trust in AI solutions
For Gartner’s Litan, current Large Language Model (LLM) development requires users to trust the vendor’s red teaming capabilities. However, organizations like OpenAI are opaque in how they manage risk internally and offer users little ability to monitor the performance of their built-in protections.
As a result, organizations need new tools and frameworks to manage the cyber risks introduced by generative AI.
“We need a new class of AI trust, risk and security management [TRiSM] tools that manage data and process flows between users and the enterprises hosting LLM foundation models. These are similar in technical configuration to cloud access security broker [CASB] technology, but unlike CASB capabilities, they would be trained to mitigate risk and increase trust in the use of cloud-based foundation AI models,” Litan said.
As part of an AI TRiSM architecture, users should expect the vendors hosting or providing these models to offer tools for detecting data and content anomalies, along with additional data protection and privacy assurance features, such as masking.
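To illustrate the kind of masking such a tool might perform, here is a minimal sketch that redacts obvious PII patterns from a prompt before it leaves the enterprise. The patterns, function name and placeholder tokens are illustrative assumptions, not features of any actual TRiSM product.

```python
import re

# Hypothetical masking layer: redact sensitive values before a prompt
# is forwarded to a cloud-hosted foundation model.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_prompt(text: str) -> str:
    """Replace sensitive values with placeholder tokens so the model
    never sees the raw data."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_prompt("Contact jane.doe@example.com or 555-867-5309 re: SSN 123-45-6789."))
```

A production-grade tool would go far beyond regular expressions (for example, using named-entity recognition and reversible tokenization so responses can be un-masked), but the control point is the same: a policy layer sitting between the user and the hosted model.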
Unlike existing tools such as ModelOps and adversarial attack resistance, which can only be run by model owners and operators, AI TRiSM would give users a much bigger role in defining the level of risk presented by tools like GPT-4.
Preparation is key
Ultimately, rather than trying to stifle the development of generative AI, organizations should look for ways to prepare for the risks it poses.
One way to do this is to fight AI with AI, following the lead of organizations like Microsoft, Orca Security, ARMO and Sophos, which are already developing defensive use cases for generative AI.
For example, Microsoft Security Copilot uses a combination of GPT-4 and proprietary data to process alerts created by security tools and translate them into natural-language descriptions of security incidents. This gives human users a narrative they can reference to respond to breaches more effectively.
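The general alert-to-narrative pattern can be sketched as follows. This is not Security Copilot's actual design; the alert fields, prompt wording and function name are assumptions made for illustration, and the model call itself is left as a comment.

```python
import json

def build_incident_prompt(alert: dict) -> str:
    """Turn a structured security-tool alert into an LLM prompt asking
    for a plain-language summary an analyst can act on."""
    return (
        "You are a security analyst assistant. Summarize the following "
        "alert as a short incident narrative, noting severity and a "
        "recommended first response step.\n\n"
        f"Alert:\n{json.dumps(alert, indent=2)}"
    )

alert = {
    "rule": "Impossible travel",
    "user": "j.smith",
    "src_ips": ["203.0.113.7", "198.51.100.24"],
    "severity": "high",
}

prompt = build_incident_prompt(alert)
# The prompt would then be sent to a model such as GPT-4 via the vendor's
# API; the model's response is the human-readable incident narrative.
print(prompt)
```

The value of the pattern is that the structured alert stays machine-parseable while the model supplies the narrative layer on top, so analysts triage in prose instead of raw JSON.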
This is just one example of how GPT-4 can be used defensively. With generative AI now readily available and open to the public, security teams must find ways to leverage these tools as force multipliers to protect their organizations.
“The only way to be ready for the impact of generative AI on cybersecurity is to start taking action now,” Forrester VP and principal analyst Jeff Pollard told VentureBeat. “Waiting only hurts. Teams need to start researching and learning now about how these technologies will change the way they work.”
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.