AI wrote better phishing emails than humans in a recent test


Natural language processing keeps finding its way into unexpected corners. This time, it’s phishing emails. In a small study, researchers found that they could use the GPT-3 deep learning language model, along with other AI-as-a-service platforms, to significantly lower the barrier to entry for creating large-scale spearphishing campaigns.

Researchers have long debated whether it would be worth the effort for scammers to train machine learning algorithms that could then generate compelling phishing messages. Mass phishing messages are simple and formulaic, after all, and are already highly effective. Highly targeted and personalized spearphishing messages, on the other hand, take more work to compose. That’s where NLP can prove surprisingly useful.

At the Black Hat and Defcon security conferences in Las Vegas this week, a team from Singapore’s Government Technology Agency presented a recent experiment in which they sent targeted phishing emails, some written by themselves and others generated by an AI-as-a-service platform, to 200 of their colleagues. Both sets of messages contained links that weren’t actually malicious but simply reported click-through rates back to the researchers. They were surprised to find that significantly more people clicked the links in the AI-generated messages than in those written by humans.

The researchers point out that this kind of AI does require a certain level of expertise. “It takes millions of dollars to train a really good model,” says Eugene Lim, a cybersecurity specialist at the Government Technology Agency. “But once you put it on AI-as-a-service, it costs a few cents and it’s really easy to use, just text in, text out. You don’t even have to run any code, you just give it a prompt and it will give you output. So that lowers the barrier of entry to a much bigger audience and increases the potential targets for spearphishing. Suddenly, every single email sent at scale can be personalized for each recipient.”

The researchers used OpenAI’s GPT-3 platform in conjunction with other AI-as-a-service products focused on personality analysis to generate phishing emails tailored to their colleagues’ backgrounds and characteristics. Machine learning driven by personality analysis aims to predict a person’s tendencies and mindset based on behavioral inputs. By running the outputs through multiple services, the researchers were able to develop a pipeline that groomed and refined the emails before sending them out. They say the results sounded “oddly human,” and that the platforms automatically supplied startling specifics, like mentioning a Singaporean law when instructed to generate content for people living in Singapore.

Although they were impressed by the quality of the synthetic messages and by how many clicks they garnered from colleagues compared with the human-composed ones, the researchers note that the experiment was only a first step. The sample size was relatively small, and the target pool was fairly homogeneous in terms of employment and geographic region. Additionally, both the human-generated messages and those generated by the AI-as-a-service pipeline were created by office insiders rather than by outside attackers trying to strike the right tone from a distance.

“There are a lot of variables to account for,” says Tan Kee Hock, a cybersecurity specialist at the Government Technology Agency.

Still, the findings spurred the researchers to think more deeply about how AI-as-a-service could play a role in phishing and spearphishing campaigns in the future. OpenAI itself, for example, has long feared the potential for misuse of its own service or of similar ones. The researchers note that it and other scrupulous AI-as-a-service providers have clear codes of conduct, attempt to audit their platforms for potentially malicious activity, and even try to verify user identities to some degree.

“The misuse of language models is an industry-wide issue that we take very seriously as part of our commitment to the safe and responsible deployment of AI,” OpenAI told WIRED in a statement. “We grant access to GPT-3 through our API, and we review every production use of GPT-3 before it goes live. We impose technical measures, such as rate limits, to reduce the likelihood and impact of malicious use by API users. Our active monitoring systems and audits are designed to surface potential evidence of misuse at the earliest possible stage, and we are continually working to improve the accuracy and effectiveness of our safety tools.”


