In another test, Xudong Shen, a doctoral student at the National University of Singapore, evaluated language models on how much they stereotype people by gender or by whether they identify as queer, transgender, or nonbinary. He found that larger AI programs tended to engage in more stereotyping. Shen says the makers of large language models should correct these flaws. OpenAI researchers also found that language models tend to become more toxic as they get larger; they say they don’t understand why.
The text generated by large language models comes ever closer to language that seems to come from a human, yet it still fails at things that require the kind of reasoning almost everyone understands. In other words, as some researchers put it, this AI is a fantastic bullshitter, able to convince both AI researchers and others that the machine understands the words it generates.
Alison Gopnik, a professor of psychology at UC Berkeley, studies how toddlers and young people learn and how that understanding might apply to computing. Children, she said, are the best learners, and the way children learn language stems in large part from their knowledge of, and interaction with, the world around them. By contrast, large language models have no connection to the world, which makes their output less grounded in reality.
“The definition of bullshit is you talk a lot and it sounds plausible, but there’s no common sense behind it,” Gopnik says.
Yejin Choi, an associate professor at the University of Washington and leader of a group studying common sense at the Allen Institute for AI, has put GPT-3 through dozens of tests and experiments to document how it can make mistakes. Sometimes it repeats itself. Other times it devolves into generating toxic language even when starting from inoffensive or harmless text.
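A minimal sketch of that kind of probing, assuming the open-source Hugging Face `transformers` library and the public `gpt2` checkpoint (the prompts and the repetition metric are invented for illustration; this is not Choi’s actual test suite):

```python
# Probe a small open model for one of the failure modes described above:
# degeneration into repetition. Generate continuations of innocuous prompts
# and measure how many n-grams are duplicates.
from collections import Counter
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def repetition_rate(text: str, n: int = 3) -> float:
    """Fraction of n-grams that are repeats; higher means more repetitive output."""
    tokens = text.split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c - 1 for c in counts.values())
    return repeated / len(ngrams)

prompts = [
    "The new neighbors moved in last week and",
    "She opened the bakery early because",
]

for prompt in prompts:
    out = generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.9)[0]["generated_text"]
    print(f"repetition={repetition_rate(out):.2f}  {out[:80]!r}")
```

A fuller probe would also pass each continuation through a toxicity classifier, to catch the second failure mode Choi describes.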
To teach AI more about the world, Choi and a team of researchers created PIGLeT, an AI trained in a simulated environment to understand things about physical experience that people learn as they grow up, such as that it’s a bad idea to touch a hot stove. That training led a relatively small language model to outperform others on common-sense reasoning tasks. Those results, she said, demonstrate that scale is not the only winning recipe and that researchers should consider other ways to train models. Her goal: “Can we actually build a machine-learning algorithm capable of learning abstract knowledge about how the world works?”
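As a toy sketch of what “trained in a simulated environment” can mean (this is not PIGLeT itself; the objects and rules are invented for illustration), one can imagine a tiny simulated world that produces grounded examples of physical cause and effect for a model to learn from:

```python
# A toy simulated "kitchen" that emits (situation, outcome) examples of
# physical common sense, such as "touching a hot stove gets you burned".
import random

OBJECTS = {"stove": {"hot": True}, "fridge": {"hot": False}, "mug": {"hot": False}}

def simulate_touch(obj: str) -> str:
    """Return the physical outcome of touching an object in the toy world."""
    return "burned" if OBJECTS[obj]["hot"] else "fine"

# Generate grounded training examples a model could learn from.
examples = []
for _ in range(5):
    obj = random.choice(list(OBJECTS))
    examples.append({"text": f"You touch the {obj}.", "outcome": simulate_touch(obj)})

for ex in examples:
    print(ex)
```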
Choi is also working on ways to reduce the toxicity of language models. Earlier this month, she and her colleagues presented an algorithm that learns from offensive text, similar to the approach taken by Facebook AI Research; they say it reduces toxicity better than many existing techniques. Large language models can be toxic because of humans, she says. “That’s the language that’s out there.”
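The algorithm itself is not spelled out in the article, but the general shape of “learn from offensive text, then steer away from it” can be sketched as follows, assuming the `transformers` library. The toxicity scorer here is a crude word-list placeholder; a real system would use a model trained on labeled offensive text.

```python
# Illustrative sketch only -- not the algorithm Choi's group presented.
# Sample several candidate continuations and keep the one a toxicity scorer
# rates lowest.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

OFFENSIVE_TERMS = {"idiot", "stupid", "hate"}  # placeholder list for illustration

def toxicity_score(text: str) -> float:
    """Fraction of words that appear on the (placeholder) offensive-term list."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in OFFENSIVE_TERMS for w in words) / max(len(words), 1)

def least_toxic_continuation(prompt: str, num_candidates: int = 5) -> str:
    candidates = generator(
        prompt, max_new_tokens=40, do_sample=True,
        top_p=0.9, num_return_sequences=num_candidates,
    )
    return min((c["generated_text"] for c in candidates), key=toxicity_score)

print(least_toxic_continuation("The argument online quickly turned"))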
Perversely, some researchers have found that attempts to fine-tune models and remove bias from them can end up harming marginalized people. In a paper published in April, researchers at UC Berkeley and the University of Washington found that Black people, Muslims, and people who identify as LGBT are particularly disadvantaged.
The authors say the problem stems, in part, from the humans who label the data misjudging whether language is toxic or not. That leads to bias against people who use language differently from white people. The paper’s co-authors say this can cause self-stigmatization and psychological harm, as well as force people to code-switch. The OpenAI researchers did not address this question in their recent paper.
Jesse Dodge, a researcher at the Allen Institute for AI, came to a similar conclusion. He looked at efforts to reduce negative stereotypes of gays and lesbians by removing from a large language model’s training data any text containing the words “gay” or “lesbian.” He found that such efforts to filter language can produce datasets that effectively erase people with those identities, making language models less able to handle text written by or about those groups of people.
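A minimal sketch of that filtering step and its side effect (the corpus and terms are made up for illustration; this is not Dodge’s actual pipeline):

```python
# Drop any document containing a blocked term -- the naive approach -- and
# observe that benign, first-person text by and about those communities
# disappears along with it.
BLOCKED_TERMS = {"gay", "lesbian"}

corpus = [
    "My wife and I opened a bakery after we got married.",
    "As a gay man, I write about housing policy in my city.",
    "The lesbian book club meets at the library on Tuesdays.",
    "Quarterly earnings beat analyst expectations.",
]

def keyword_filter(docs, blocked):
    kept = []
    for doc in docs:
        words = {w.strip(".,").lower() for w in doc.split()}
        if not words & blocked:
            kept.append(doc)
    return kept

filtered = keyword_filter(corpus, BLOCKED_TERMS)
print(f"kept {len(filtered)} of {len(corpus)} documents")
# Both first-person documents about LGBT life are gone, so a model trained on
# `filtered` sees almost no text written by or about these groups.
```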
Dodge says the best way to deal with bias and inequity is to improve the data used to train language models rather than trying to remove bias after the fact. He recommends better documenting the source of training data and recognizing the limitations of text scraped from the web, which can overrepresent people who can afford internet access and have the time to build a website or post a comment. He also urges documenting how content is filtered and avoiding blanket use of blocklists to filter content scraped from the web.
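One way to picture the documentation Dodge recommends: instead of silently applying a blocklist, record where each document came from and why it was kept or excluded, so the filtering itself can be audited later. The field names below are illustrative, not a published standard.

```python
# A datasheet-style record of provenance and filtering decisions that could
# ship alongside a training set.
import json
from dataclasses import dataclass, asdict

@dataclass
class DocumentRecord:
    url: str                # where the text was fetched from
    source: str             # e.g. "web crawl", "forum", "news"
    retrieved: str          # ISO date of collection
    included: bool          # did it make it into the training set?
    exclusion_reason: str   # empty if included; otherwise why it was dropped

records = [
    DocumentRecord("https://example.org/post/1", "web crawl", "2021-06-01", True, ""),
    DocumentRecord("https://example.org/post/2", "web crawl", "2021-06-01", False,
                   "matched blocklist term; flagged for manual review"),
]

summary = {
    "total_documents": len(records),
    "included": sum(r.included for r in records),
    "excluded": [asdict(r) for r in records if not r.included],
}
print(json.dumps(summary, indent=2))
```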