Jan Nicola Beyer is Research Coordinator for the Digital Democracy Unit at Democracy Reporting International.
The debate around the risks of generative artificial intelligence (AI) is now in full swing.
On the one hand, proponents of generative AI tools admire their potential to drive productivity gains not seen since the Industrial Revolution. On the other hand, a growing number of people have expressed concerns about the potential dangers these tools pose.
But while there is no shortage of calls to regulate or slow down the development of new AI technologies, one aspect seems to be missing from the debate entirely: detection.
Compared to regulation, investment in technologies that distinguish human-generated from machine-generated content (such as DetectGPT and GPTZero for text, or AI Image Detector for visuals) may look like a second-best solution. Yet regulation will face insurmountable challenges, and detection could prove a promising avenue for mitigating the potential risks of AI.
There is no denying that generative AI has the potential to enhance creativity and improve productivity. But losing the ability to distinguish between natural and synthetic content can also empower malicious actors. From simple plagiarism in schools and colleges, to breaching electronic security systems, to launching professional disinformation campaigns, the dangers of machines writing text, drawing pictures and making videos are manifold.
All these threats require technical as well as legal responses. But such technical solutions are not getting the support they deserve.
Currently, the funding allocated to new generative tools significantly exceeds investment in detection. Microsoft alone has invested a whopping $10 billion in OpenAI, the developer of ChatGPT. By comparison, total European spending on AI is estimated at around $21 billion, and given how under-represented detection is in public debate, only a small fraction of that total is likely to have been directed towards it.
But redressing this imbalance will take more than relying on industry alone.
Profits from detecting generative output are unlikely to be as lucrative as those from developing new creative tools, so private companies are unlikely to match their spending on generative AI with comparable funding for detection. And even where lucrative investment opportunities in detection do exist, the resulting specialized products are rarely made available to the general public.
Synthetic audio technology is a good example. Even though so-called voice cloning poses a serious threat to the public, especially when used to impersonate politicians and celebrities, private companies have prioritized other concerns, such as detection mechanisms aimed at preventing fraud against bank security systems. And the developers of such technology have little interest in sharing their source code, as doing so would encourage attempts to bypass those security systems.
Lawmakers, meanwhile, have traditionally focused more on regulating AI content than on funding research to detect it. The European Union, for example, is advancing its regulatory efforts with the AI Act, a framework aimed at ensuring the responsible and ethical development and use of AI. Nevertheless, striking the right balance between containing high-risk technologies and allowing innovation has proven difficult.
Moreover, it remains to be seen whether effective regulation can be achieved.
ChatGPT was developed by OpenAI, a legally accountable organization, so it can be subjected to legal oversight. The same cannot be said for smaller-scale projects. For example, researchers at Stanford University used Meta's LLaMA model to create their own LLM, with performance similar to ChatGPT, at a cost of just $600. This example shows that other LLMs can be built fairly easily and cheaply on top of existing models while sidestepping regulation, which makes them an attractive option for criminals and disinformation actors. And in such cases, it may not be possible to assign legal liability at all.
Robust detection mechanisms therefore provide a viable solution for gaining an edge in the ever-evolving arms race with generative AI tools.
Already at the forefront of the fight against disinformation, and having pledged huge investments in AI, the EU should take the lead in funding this research. The good news is that spending on tools that facilitate AI detection does not even have to match spending on developing generative AI: detection tools generally do not need vast amounts of scraped data, and they do not carry the high training costs associated with modern LLMs.
Nonetheless, as the models underlying generative AI advance, detection technology must keep pace. Detection may also require the cooperation of domain experts. With synthesized speech, for example, machine learning engineers need to collaborate with linguists and other researchers for such tools to be effective, and research funding should be structured to encourage that collaboration.
COVID-19 showed that countries around the world can drive innovation to overcome a crisis when the need arises. Governments have a role to play in ensuring that their citizens are protected from potentially harmful AI content, and investing in the detection of AI-generated output is one way to do that.