Big Tech’s Guide to Talking About Ethics in AI


AI researchers often say that good machine learning is really more of an art than a science. The same could be said of effective public relations. Choosing the right words to strike a positive tone or reframe the conversation about AI is a tricky task: done well, it can strengthen a company’s brand image, but done poorly, it can trigger an even greater backlash.

The tech giants would know. In recent years, they have had to learn this art quickly as they have faced growing public distrust of their actions and heightened criticism of their AI research and technologies.

Now they’ve developed a new vocabulary to use when they want to assure the public that they care deeply about developing AI responsibly, but also want to make sure they don’t invite too much scrutiny. Here’s an insider’s guide to decoding their language and challenging the assumptions and values baked into it.

accountability (n) – The act of holding someone else responsible for the consequences when your AI system fails.

accuracy (n) – Technical correctness. The most important measure of success in evaluating an AI model’s performance. See validation.

adversary (n) – A lone engineer capable of disrupting your powerful revenue-generating AI system. See robustness, security.

alignment (n) – The challenge of designing AI systems that do what we tell them to and value what we value. Purposely abstract. Avoid using real examples of harmful unintended consequences. See safety.

artificial general intelligence (ph) – A hypothetical AI god that is probably far off in the future but also maybe imminent. Can be really good or really bad, whichever is more rhetorically useful. Obviously you’re building the good one. Which is expensive. Therefore, you need more money. See long-term risks.

audit (n) – A review that you pay someone else to do of your company or AI system so that you appear more transparent without having to change anything. See impact assessment.

augment (v) – To increase the productivity of white-collar workers. Side effect: automating away blue-collar jobs. Sad but inevitable.

beneficial (adj) – A blanket descriptor for what you’re trying to build. Conveniently ill-defined. See value.

by design (ph) – As in “fairness by design” or “accountability by design.” A phrase to signal that you have been thinking hard about the important things from the very beginning.

compliance (n) – The act of following the law. Anything that isn’t illegal goes.

data labelers (ph) – The people who allegedly exist behind Amazon’s Mechanical Turk interface to do data-cleaning work for cheap. Unsure who they are. Never met them.

democratize (v) – To scale a technology at all costs. A justification for concentrating resources. See scale.

diversity, equity, and inclusion (ph) – The act of hiring engineers and researchers from marginalized groups so you can parade them around to the public. If they challenge the status quo, fire them.

efficiency (n) – The use of less data, memory, staff, or energy to build an AI system.

ethics board (ph) – A group of advisors without real power, convened to create the appearance that your company is actively listening. Examples: Google’s AI ethics board (canceled), Facebook’s Oversight Board (still standing).

ethics principles (ph) – A set of truisms used to signal your good intentions. Keep the language high-level. The vaguer, the better. See responsible AI.

explainable (adj) – For describing an AI system that you, the developer, and the user can understand. Much harder to achieve for the people it’s used on. Probably not worth the effort. See interpretable.

fairness (n) – A complicated notion of impartiality used to describe unbiased algorithms. Can be defined in dozens of ways depending on your preference.

for good (ph) – As in “AI for good” or “data for good.” An initiative completely tangential to your core business that helps you generate good publicity.

foresight (n) – The ability to peer into the future. Basically impossible: thus, a perfectly reasonable explanation for why you can’t rid your AI system of unintended consequences.

framework (n) – A set of guidelines for making decisions. A good way to appear thoughtful and measured while delaying actual decision-making.

generalizable (adj) – The sign of a good AI model. One that continues to operate under changing conditions. See real world.

governance (n) – Bureaucracy.

human-centered design (ph) – A process that uses “personas” to imagine what an average user might want from your AI system. May involve soliciting feedback from actual users. Only if there’s time. See stakeholders.

human in the loop (ph) – Any person who is part of an AI system. Responsibilities range from faking the system’s capabilities to warding off accusations of automation.

impact assessment (ph) – A review that you do yourself of your company or AI system to show your willingness to consider its downsides without changing anything. See audit.

interpretable (adj) – Description of an AI system whose computation you, the developer, can follow step by step to understand how it arrived at its answer. Actually probably just linear regression. AI sounds better.
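
(If you want to see what “follow step by step” means in practice, here is a minimal sketch using plain linear regression; the toy numbers and the use of scikit-learn are illustrative assumptions, not anyone’s actual system.)

```python
# A minimal sketch of why linear regression earns the "interpretable"
# label: every prediction is a weighted sum you can recompute by hand.
# The toy data below is purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([5.0, 4.0, 11.0, 10.0])

model = LinearRegression().fit(X, y)

x_new = np.array([[2.0, 3.0]])
# Step by step: prediction = intercept + sum(coefficient_i * feature_i)
by_hand = model.intercept_ + float(np.dot(model.coef_, x_new[0]))
print(by_hand, model.predict(x_new)[0])  # the two numbers match
```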

integrity (n) – Issues that undermine the technical performance of your model or your company’s ability to scale. Not to be confused with issues that are bad for society. Not to be confused with honesty.

interdisciplinary (adj) – Term used for any team or project involving people who don’t code: user researchers, product managers, moral philosophers. Especially moral philosophers.

long-term risks (n) – Bad things that could have catastrophic effects in the far-off future. Will probably never happen, but are more important to study and avoid than the immediate harms of existing AI systems.

partners (n) – Other elite groups who share your worldview and can work with you to maintain the status quo. See stakeholders.

privacy trade-off (ph) – The noble sacrifice of individual control over personal information for group benefits like AI-driven health-care advancements, which also happen to be highly profitable.

progress (n) – Scientific and technological advancement. An inherent good.

real world (ph) – The opposite of the simulated world. A dynamic physical environment filled with unexpected surprises that AI models are trained to survive. Not to be confused with humans and society.

regulation (n) – What you call for in order to shift the responsibility for mitigating harmful AI onto policymakers. Not to be confused with policies that would hinder your own growth.

responsible AI (n) – A moniker for any work at your company that could be construed by the public as a sincere effort to mitigate the harms of your AI systems.

robustness (n) – The ability of an AI model to function consistently and accurately even under malicious attempts to feed it corrupted data.

safety (n) – The challenge of building AI systems that don’t deviate from their designer’s intentions. Not to be confused with building AI systems that don’t fail. See alignment.

scale (n) – The de facto end state that any good AI system should strive to achieve.

security (n) – The act of protecting valuable or sensitive data and AI models from being breached by bad actors. See adversary.

stakeholders (n) – Shareholders, regulators, users. The people in power you want to keep happy.

transparency (n) – Revealing your data and code. Bad for proprietary and sensitive information. Thus really hard; quite frankly, even impossible. Not to be confused with clear communication about how your system actually works.

trustworthy (adj) – An assessment of an AI system that can be manufactured with enough coordinated publicity.

universal basic income (ph) – The idea that paying everyone a fixed salary will solve the massive economic upheaval caused when automation leads to widespread job loss. Popularized by 2020 presidential candidate Andrew Yang. See wealth redistribution.

validation (n) – The process of testing an AI model on data other than the data it was trained on, to verify that it is still accurate.
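
(Validation, at least, is a concrete procedure. A minimal sketch, assuming scikit-learn and a synthetic dataset as stand-ins for a real pipeline:)

```python
# A minimal sketch of validation as defined above: score the model on
# held-out data it never saw during training. The dataset and model
# here are illustrative assumptions, not anyone's production system.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out 20% of the data; the model never trains on it.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy in the glossary's sense: the share of correct predictions
# on data the model has not seen before.
print(f"validation accuracy: {model.score(X_val, y_val):.3f}")
```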

value (n) – An intangible benefit rendered to your users that makes you a lot of money.

values (n) – You have them. Remind people.

wealth redistribution (ph) – A useful idea to dangle when people scrutinize you for using way too many resources and making way too much money. How would wealth redistribution work? Universal basic income, of course. Also not something you could figure out on your own. Would require regulation. See regulation.

withhold publication (ph) – The benevolent act of choosing not to open-source your code because it could fall into the hands of a bad actor. Better to limit access to partners who can afford it.


