The pandemic that has raged around the world over the past year has cast a cold, harsh light on many things – the varying levels of preparedness to respond; collective attitudes towards health, technology and science; and vast financial and social inequalities. As the world continues to grapple with the covid-19 health crisis and some places even begin a gradual return to work, school, travel and leisure, it is critical to address competing priorities: protecting public health equitably while safeguarding privacy.
The protracted crisis has led to a rapid change in professional and social behavior, as well as an increased reliance on technology. It is now more essential than ever that businesses, governments and society exercise caution in the application of technology and the handling of personal information. The wide and rapid adoption of artificial intelligence (AI) shows how adaptive technologies tend to intersect with humans and social institutions in potentially risky or unfair ways.
“Our relationship with technology as a whole will have changed dramatically after the pandemic,” says Yoav Schlesinger, director of ethical AI practice at Salesforce. “There will be a process of negotiation between people, business, government and technology; how their data flows between all of these parties will be renegotiated in a new social data contract.”
AI in action
As the covid-19 crisis began to unfold in early 2020, scientists turned to AI to support a variety of medical uses, such as identifying potential drug candidates for vaccines or treatments, detecting potential covid-19 symptoms, and allocating scarce resources such as intensive-care beds and ventilators. Specifically, they leveraged the analytical power of AI-augmented systems to develop cutting-edge vaccines and treatments.
While advanced data analytics tools can help extract insight from massive amounts of data, the results have not always been more equitable. In fact, AI-based tools and the datasets they work with can perpetuate inherent bias or systemic inequality. Throughout the pandemic, organizations like the Centers for Disease Control and Prevention and the World Health Organization have collected huge amounts of data, but the data does not necessarily accurately represent the populations that have been disproportionately and negatively affected, including black, brown and indigenous people, nor do some of the diagnostic advances they have made, says Schlesinger.
For example, biometric wearables like the Fitbit or Apple Watch show promise in their ability to detect potential symptoms of covid-19, such as temperature changes or drops in oxygen saturation. Yet these analyses rely on often flawed or limited datasets and can introduce biases or inequities that disproportionately affect vulnerable people and communities.
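The kind of wearable-based screening described above typically flags deviations from a wearer’s own baseline. A minimal sketch of that idea follows; the thresholds, field names, and function are all illustrative assumptions, not clinical guidance or any vendor’s actual algorithm.

```python
def flag_anomalies(readings, baseline_temp_c, spo2_floor=0.94, temp_delta_c=0.5):
    """Flag wearable readings that deviate from the wearer's own baseline:
    a temperature rise or an oxygen-saturation drop.
    Thresholds are illustrative only, not clinical guidance."""
    flags = []
    for r in readings:
        if r["temp_c"] - baseline_temp_c >= temp_delta_c:
            flags.append((r["t"], "temperature rise"))
        if r["spo2"] < spo2_floor:
            flags.append((r["t"], "low oxygen saturation"))
    return flags

# Hypothetical day of readings for one wearer with a 36.6 °C baseline.
readings = [
    {"t": "06:00", "temp_c": 36.6, "spo2": 0.97},
    {"t": "12:00", "temp_c": 37.3, "spo2": 0.96},
    {"t": "18:00", "temp_c": 37.5, "spo2": 0.93},
]
print(flag_anomalies(readings, baseline_temp_c=36.6))
```

Note that the quality of the underlying sensor readings is exactly where the bias Schlesinger describes enters: if oxygen-saturation estimates are less accurate on darker skin, a fixed floor like the one above silently misses symptoms for some wearers.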
“Some research shows that green LED light has more difficulty reading pulse and oxygen saturation on darker skin tones,” says Schlesinger, referring to the solid-state light source. “So it might not do as good a job of catching the symptoms of covid for those with black and brown skin.”
AI has also shown its strength in helping analyze huge datasets. A team from the Viterbi School of Engineering at the University of Southern California has developed an AI framework to help analyze covid-19 vaccine candidates. After identifying 26 potential candidates, it narrowed the field to the 11 most likely to succeed. The data source for the analysis was the Immune Epitope Database, which includes more than 600,000 contagion determinants from more than 3,600 species.
Other researchers at Viterbi are applying AI to more accurately decipher cultural codes and better understand the social norms that guide the behavior of ethnic and racial groups. This can have a significant impact on how a certain population behaves during a crisis like the pandemic, due to religious ceremonies, traditions and other social mores that can facilitate the spread of the virus.
Principal scientists Kristina Lerman and Fred Morstatter based their research on moral foundations theory, which describes the “intuitive ethics” that form a culture’s moral constructs, such as caring, fairness, loyalty and authority, helping to inform individual and group behavior.
“Our goal is to develop a framework that allows us to understand the dynamics that drive the decision-making process of a culture at a deeper level,” says Morstatter in a report published by USC. “And in doing so, we generate more culturally informed forecasts.”
The research also examines how to deploy AI ethically and fairly. “Most people, but not all, want to make the world a better place,” says Schlesinger. “Now we need to take it to the next level: what goals do we want to achieve and what results would we like to see? How will we measure success and what will it look like?”
Allaying ethical concerns
It is essential to question assumptions about collected data and AI processes, says Schlesinger. “We’re talking about achieving equity through awareness. Every step of the way, you make value judgments or assumptions that will weight your results in a particular direction,” he says. “This is the fundamental challenge of building ethical AI, which is looking at all the places where humans are biased.”
Part of this challenge is to critically examine the datasets that inform AI systems. That means understanding the data sources and the composition of the data, and answering questions such as: How was the data formed? Does it include a wide range of stakeholders? What is the best way to deploy this data in a model to minimize bias and maximize fairness?
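One concrete way to start answering those questions is to compare a dataset’s demographic composition against a reference population and flag underrepresented groups. The sketch below is a simplified illustration under assumed field names and a hypothetical example dataset; real audits would use proper census baselines and intersectional categories.

```python
from collections import Counter

def representation_gaps(records, group_key, reference_shares, tolerance=0.05):
    """Compare each group's share of a dataset against its share of a
    reference population, and flag groups underrepresented by more than
    `tolerance` (absolute difference in proportion)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, ref_share in reference_shares.items():
        share = counts.get(group, 0) / total
        if ref_share - share > tolerance:
            gaps[group] = {"dataset_share": round(share, 3),
                           "reference_share": ref_share}
    return gaps

# Hypothetical symptom dataset skewed toward group A.
records = [{"group": "A"}] * 80 + [{"group": "B"}] * 15 + [{"group": "C"}] * 5
reference = {"A": 0.60, "B": 0.25, "C": 0.15}
print(representation_gaps(records, "group", reference))
# Flags B and C as underrepresented relative to the reference population.
```

A check like this only surfaces who is missing; deciding how to correct for the gap, through resampling, reweighting, or better collection, is the value judgment Schlesinger describes.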
As people return to work, employers may use sensing technologies with embedded AI, including thermal cameras to detect high temperatures; audio sensors to detect coughing or raised voices, which contribute to the spread of respiratory droplets; and video feeds to monitor compliance with hand-washing procedures, physical-distancing regulations, and mask requirements.
Such monitoring and analysis systems not only raise questions of technical accuracy but also pose fundamental risks to human rights, privacy, security and trust. The push for increased surveillance has been a troubling side effect of the pandemic. Government agencies have used images from surveillance cameras, location data from smartphones, records of credit card purchases, and even passive temperature scans in crowded public areas like airports to help trace the movements of people who may have contracted or been exposed to covid-19 and to establish virus transmission chains.
“The first question to answer is not simply whether we can do it – but should we?” says Schlesinger. “Scanning individuals for their biometric data without their consent raises ethical concerns, even if it is positioned as a benefit for the greater good. We should be having a robust conversation as a society about whether there is good reason to implement these technologies in the first place.”
What does the future look like?
As society returns to something close to normal, it’s time to fundamentally reassess the relationship with data and to set new standards for data collection, as well as for the appropriate use, and potential misuse, of data. When building and deploying AI, technologists will continue to make necessary assumptions about data and processes, but the foundations of that data must be questioned. Was the data sourced legitimately? Who assembled it? What assumptions is it based on? Is it represented accurately? How can the privacy of citizens and consumers be preserved?
As AI is deployed more widely, it is essential to think about how to instill trust as well. One approach is to use AI to augment human decision-making, not to replace human input entirely.
“There will be more questions about the role AI should play in society, its relationship to humans, and what tasks are appropriate for humans and what tasks are appropriate for an AI,” says Schlesinger. “In some areas, AI’s capabilities and its ability to augment human capabilities will accelerate our trust and reliance. Where AI does not replace humans but augments their efforts, that is the next horizon.”
There will always be situations in which a human needs to be involved in the decision making. “In regulated industries, for example, like healthcare, banking and finance, there has to be a human being in the loop in order to maintain compliance,” says Schlesinger. “You can’t just deploy AI to make care decisions without the intervention of a clinician. As much as we would like to believe that artificial intelligence is capable of doing this, it does not yet have empathy and probably never will.”
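The human-in-the-loop pattern Schlesinger describes can be sketched as a simple routing rule: every model recommendation requires sign-off, and low-confidence ones are escalated with a flag. Everything below, the `Recommendation` type, the reviewer callback, and the confidence floor, is a hypothetical illustration of the pattern, not any real clinical system.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    patient_id: str
    action: str        # the model's suggested action
    confidence: float  # the model's self-reported confidence, 0..1

def route(rec, clinician_review, confidence_floor=0.90):
    """Human-in-the-loop gate: no care decision executes on model output
    alone. Every recommendation requires clinician sign-off, and
    low-confidence ones are escalated with an explicit flag."""
    needs_escalation = rec.confidence < confidence_floor
    approved = clinician_review(rec, needs_escalation)
    return {"patient_id": rec.patient_id,
            "action": rec.action if approved else "defer",
            "escalated": needs_escalation,
            "approved": approved}

# Hypothetical reviewer policy: approve only non-escalated suggestions.
def demo_reviewer(rec, escalated):
    return not escalated

print(route(Recommendation("p-001", "order chest x-ray", 0.95), demo_reviewer))
print(route(Recommendation("p-002", "discharge", 0.70), demo_reviewer))
```

The design choice worth noting is that the clinician callback sits on every path, not just the low-confidence one; the confidence score changes how a case is presented, never whether a human sees it.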
It is essential that the data collected and created by AI minimizes inequities rather than exacerbating them. There must be a balance between finding ways for AI to help accelerate human and social progress and promote equitable actions and responses, and simply recognizing that some problems will require human solutions.
This content was produced by Insights, the personalized content arm of MIT Technology Review. It was not written by the editorial staff of MIT Technology Review.