I don’t know why people are worried about artificial intelligence overtaking the collective intellect of humanity anytime soon, when we can’t even get the systems we have today to stop emulating some of our most despicable tendencies. Or rather, maybe we humans need to disentangle ourselves from these same biases before expecting them to be removed from our algorithms.
In A Citizen’s Guide to Artificial Intelligence, John Zerilli leads a host of leading AI and machine learning researchers and authors in presenting readers with an accessible, holistic review of the field’s history and current state of the art, the potential benefits of and challenges facing ever-improving AI technology, and how this rapidly evolving field could influence society for decades to come.
Excerpt from “A Citizen’s Guide to Artificial Intelligence,” copyright © 2021 by John Zerilli with John Danaher, James Maclaurin, Colin Gavaghan, Alistair Knott, Joy Liddicoat and Merel Noorman. Used with permission of the publisher, MIT Press.
Human biases are a mixture of hardwired and learned biases, some of which are sensible (such as “you should wash your hands before eating”), and others of which are plainly false (such as “atheists have no morals”). Artificial intelligence likewise suffers from both built-in and learned biases, but the mechanisms that produce AI’s built-in biases are different from the evolutionary ones that produce the psychological heuristics and biases of human reasoners.
One set of mechanisms arises from decisions about how practical problems are to be solved in AI. These decisions often incorporate programmers’ sometimes biased expectations about how the world works. Imagine you were tasked with designing a machine learning system for landlords who want to find good tenants. It’s a perfectly sensible question to ask, but where should you go to look for the data that will answer it? There are many variables you could choose to use in training your system: age, income, gender, current zip code, high school attended, creditworthiness, character, alcohol consumption? Leaving aside variables that are often misreported (such as alcohol consumption) or legally prohibited as discriminatory grounds for decision-making (such as gender or age), the choices you make are likely to depend at least to some extent on your own beliefs about what influences tenants’ behavior. Such beliefs will produce bias in the algorithm’s output, particularly if developers omit variables that are in fact predictive of being a good tenant, and thus harm individuals who would otherwise make good tenants but go unidentified as such.
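To make the point concrete, here is a minimal, hypothetical sketch of how a designer’s choice of variables can bias such a system. The data, column choices, and weights are all invented for illustration; the book does not describe any particular implementation.

```python
# Toy illustration of omitted-variable bias in a tenant-screening model.
# All data is synthetic; "payment_history" and "income" are invented features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "ground truth": being a good tenant depends mostly on payment
# history and only weakly on income.
payment_history = rng.normal(size=n)
income = rng.normal(size=n)
good_tenant = (2.0 * payment_history + 0.3 * income + rng.normal(size=n)) > 0

# Modeler A includes the genuinely predictive variable.
X_full = np.column_stack([payment_history, income])
# Modeler B believes income is what matters and omits payment history.
X_narrow = income.reshape(-1, 1)

acc_full = LogisticRegression().fit(X_full, good_tenant).score(X_full, good_tenant)
acc_narrow = LogisticRegression().fit(X_narrow, good_tenant).score(X_narrow, good_tenant)

print(f"accuracy with the predictive variable included: {acc_full:.2f}")
print(f"accuracy with it omitted:                       {acc_narrow:.2f}")
# The narrower model misclassifies many people who would in fact be good
# tenants -- exactly the harm described above.
```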
The same problem reappears when decisions have to be made about how data should be collected and labeled. These decisions will often not be visible to the people using the algorithms. Some information will be deemed commercially sensitive. Some will simply be forgotten. Failure to document potential sources of bias can be particularly problematic when an AI designed for one purpose is co-opted to serve another, such as when a credit score is used to assess someone’s suitability as an employee. The danger inherent in adapting AI from one context to another has recently been dubbed the “portability trap.” It is a trap because it has the potential to degrade both the accuracy and the fairness of the repurposed algorithms.
Consider, too, a system like TurnItIn, one of the many anti-plagiarism systems used by universities. Its makers claim that it crawls 9.5 billion web pages (including common research sources such as online course notes and reference works like Wikipedia). It also maintains a database of essays previously submitted through TurnItIn which, according to its marketing materials, grows by more than fifty thousand essays per day. Student submissions are then compared against this information to detect plagiarism. Of course, there will always be some similarities if a student’s work is compared to the essays of large numbers of other students writing on common academic topics. To get around this problem, its makers chose to compare relatively long strings of characters. Lucas Introna, professor of organization, technology and ethics at Lancaster University, argues that TurnItIn is biased.
TurnItIn is designed to detect copying, but all essays contain something like copying. Paraphrasing is the process of putting other people’s ideas into your own words, demonstrating to the marker that you understand the ideas in question. It turns out that there is a difference in how native and non-native speakers of a language paraphrase. People learning a new language write using familiar and sometimes lengthy fragments of text to make sure they are getting the vocabulary and sentence structure right. This means that the paraphrasing of non-native speakers will often contain longer fragments of the original. Both groups are paraphrasing, not cheating, but the non-native speakers receive consistently higher plagiarism scores. So a system designed in part to minimize biases from teachers unconsciously influenced by gender and ethnicity seems to inadvertently produce a new form of bias because of the way it processes data.
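The mechanism is easy to see in a toy form. The sketch below is not TurnItIn’s actual algorithm; it simply shows why matching only long shared word sequences penalizes writers who reuse longer fragments when paraphrasing. All of the example sentences and the threshold are invented.

```python
# Toy long-string matcher: an essay is flagged only if it shares a run of at
# least `min_run` consecutive words with the source text.
def longest_shared_run(source_words, essay_words):
    """Length of the longest run of consecutive words shared with the source."""
    best = 0
    for i in range(len(essay_words)):
        for j in range(len(source_words)):
            k = 0
            while (i + k < len(essay_words) and j + k < len(source_words)
                   and essay_words[i + k] == source_words[j + k]):
                k += 1
            best = max(best, k)
    return best

def flagged(source, essay, min_run=6):
    return longest_shared_run(source.split(), essay.split()) >= min_run

source = ("the industrial revolution transformed patterns of work "
          "and family life across western europe")

# A fluent paraphrase reuses only short fragments of the source.
native = "work and family life in western europe changed with industrialisation"
# A learner's paraphrase keeps a long fragment verbatim to stay on safe ground.
learner = "the industrial revolution transformed patterns of work and family life in britain"

print(flagged(source, native))    # False: no long shared run
print(flagged(source, learner))   # True: one long verbatim fragment trips the threshold
```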
There is also a long history of built-in bias deliberately designed in for commercial gain. One of the great successes in the history of AI has been the development of recommender systems that can quickly and efficiently find consumers the cheapest hotel, the most direct flight, or the books and music that best match their tastes. The design of these algorithms has become extremely important to merchants, and not just online merchants. If the design of such a system meant that your restaurant never turned up in searches, your business would certainly take a hit. The problem gets worse as recommender systems become more entrenched and effectively compulsory in certain industries. It can set up a dangerous conflict of interest when the company that operates the recommender system also owns some of the products or services it recommends.
This problem was first documented in the 1960s after the launch of the SABRE airline reservation and scheduling system jointly developed by IBM and American Airlines. It was a huge step up from call-center operators armed with seating charts and drawing pins, but it soon became clear that users wanted a system that could compare the services offered by a range of airlines. A descendant of the resulting recommendation engine is still in use, powering services such as Expedia and Travelocity. American Airlines did not lose sight of the fact that its new system was, in effect, advertising its competitors’ wares. So it set about exploring ways of presenting search results that would lead users to choose American Airlines more often. Thus, although the system drew on information from many airlines, it systematically skewed users’ purchasing habits in favor of American Airlines. Staff called this strategy screen science.
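A hypothetical sketch of what “screen science” amounts to in code: the same flight data ranked by a neutral scoring rule and by a rule that quietly boosts the operator’s own airline. The flights, prices, and weights are invented for illustration and do not reflect how SABRE actually worked.

```python
# Two ranking functions over the same flight data, one neutral and one that
# adds a hidden bonus for the system owner's flights. Lower scores rank first.
from dataclasses import dataclass

@dataclass
class Flight:
    airline: str
    price: float
    stops: int

FLIGHTS = [
    Flight("American", 420.0, 1),
    Flight("United",   380.0, 0),
    Flight("Delta",    395.0, 0),
]

def neutral_score(f: Flight) -> float:
    # Cheap, direct flights rank first.
    return f.price + 100.0 * f.stops

def biased_score(f: Flight, owner: str = "American", boost: float = 200.0) -> float:
    # Same formula, minus a hidden bonus for the operator's own flights.
    return neutral_score(f) - (boost if f.airline == owner else 0.0)

print([f.airline for f in sorted(FLIGHTS, key=neutral_score)])  # ['United', 'Delta', 'American']
print([f.airline for f in sorted(FLIGHTS, key=biased_score)])   # ['American', 'United', 'Delta']
```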
American Airlines’ screen science did not go unnoticed. Travel agents soon noticed that SABRE’s top recommendation was often worse than the ones listed below it. Eventually, American Airlines president Robert L. Crandall was called to testify before Congress. Astonishingly, Crandall was completely unrepentant, testifying that “the preferential display of our flights, and the corresponding increase in our market share, is the competitive raison d’être for having created the [SABRE] system in the first place.” Crandall’s rationale has been dubbed “Crandall’s complaint”: why would you build and operate an expensive algorithm if you can’t bias it in your favor?
Looking back, Crandall’s complaint seems rather odd. There are plenty of ways to monetize recommendation engines; they don’t need to produce biased results to be financially viable. That said, screen science has not gone away. There continue to be claims that recommendation engines are biased toward their makers’ own products. Ben Edelman has collated the studies in which Google was found to promote its own products through prominent placements in its search results. These include Google Blog Search, Google Book Search, Google Flight Search, Google Health, Google Hotel Finder, Google Images, Google Maps, Google News, Google Places, Google+, Google Scholar, Google Shopping, and Google Video.
Deliberate bias doesn’t just influence what recommendation engines offer you. It can also affect the prices you’re charged for the services recommended to you. Personalized search has made it easier for companies to engage in dynamic pricing. In 2012, a Wall Street Journal investigation found that the recommendation system used by the travel company Orbitz appeared to recommend more expensive accommodation to Mac users than to Windows users.
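For illustration only, here is a toy sketch of the kind of device-based dynamic pricing the Journal described: the same search returning pricier options first for one platform. The hotels, the user-agent check, and the ranking rule are all invented and say nothing about Orbitz’s actual code.

```python
# Hypothetical device-based result ranking: nudge pricier rooms up for users
# presumed (crudely, from the browser user-agent) to be less price-sensitive.
HOTELS = [
    {"name": "Budget Inn",   "price": 95.0},
    {"name": "Harbor Hotel", "price": 160.0},
    {"name": "Grand Suites", "price": 240.0},
]

def rank_hotels(hotels, user_agent: str):
    prefers_premium = "Macintosh" in user_agent  # crude, invented proxy
    return sorted(hotels, key=lambda h: -h["price"] if prefers_premium else h["price"])

mac_ua = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)"
win_ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

print([h["name"] for h in rank_hotels(HOTELS, mac_ua)])  # premium options first
print([h["name"] for h in rank_hotels(HOTELS, win_ua)])  # cheapest options first
```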