KITE BLOG: Anticipation, Impact and Accountability in AI

Otto Sahlgren

The writer is a Doctoral Researcher in Philosophy at Tampere University. His research focuses on the ethics of AI, the philosophy of discrimination and algorithmic fairness.

AI systems – those driven by machine learning, in particular – are increasingly used by companies, organizations and governmental agencies to complement or autonomously execute decision-making processes and to surveil, monitor and organize daily activities. While most AI applications are, plausibly, designed to do good[1], their design, development and use will always have unintended consequences. These consequences may include subtle decreases in user autonomy or filter bubbles in online spaces, as well as significant adverse impacts, such as unlawful discrimination or physical harm. We all know what the road to hell is paved with, and the impact AI has on our communities and environment is no exception. This short post looks at two questions: what are unintended consequences in AI (really), and what can designers do to avoid bad ones?

Key take-aways:

  • The extent to which designers and developers can reasonably anticipate the impact of their AI systems bears on the morality of their actions, but it does not necessarily affect the extent to which the public can and should hold them accountable for that impact.
  • “Unintended consequences” in AI can reflect an explicit or implicit disregard for different communities and worldviews, as well as the normative core of AI systems (i.e., the politics of AI).
  • Algorithmic Impact Assessments (AIAs), published Algorithmic Impact Statements (AISs), and participatory AI design that gives a voice to stakeholders and marginalized communities can enhance risk sweeping, close epistemic gaps and balance power asymmetries in design.

Reasonable Foresight

Anticipating the effects of purposive action is central to the design and implementation of AI systems. Automation is meant to streamline and optimize processes, often with the aim of increasing productivity and efficiency. Machine learning systems are also used to detect at-risk individuals and populations so as to guide intervention and prevention. Some applications are used to “flag” individuals and events – i.e., spot abnormalities and anomalies – that warrant closer scrutiny by human agents. Automated data analysis offers what data scientists like to call actionable insight – that is, action-guiding information grounded in data. Too often, however, acting on insight, even with desirable aims, ends up disadvantaging certain individuals and groups. Research in AI ethics[2] and examples of biased algorithms brought forward by the media[3] bear witness to the harm that may be caused by (people designing and using) AI technologies. Even problem formulation – the first and focal design task, in which complex problems of a given domain are translated into ones an algorithm can crunch answers to – involves comparisons and choices that may have dire consequences. As Passi and Barocas find in their ethnographic study of data science and problem formulation, specific approaches that lend themselves to more elegant technical implementation are often preferred “even in the face of what might seem like normatively preferable alternatives”[4].

Now, what do we mean by “unintended consequences”? Perhaps one should distinguish (roughly) between foreseeable and unforeseeable unintended impacts. The former result from purposive action (e.g., conscious design choices): they are not intentionally produced, but they are, or could reasonably be, anticipated given accessible knowledge about the application domain and social context. (I’ll give an example below.) The latter, unforeseeable impacts, are indirect and cascading effects of actions which could not reasonably be anticipated (all things considered).

A paradigmatic example of things going wrong with AI is Microsoft’s conversational bot ‘Tay’[5]. Tay used online learning and was designed to learn from Twitter’s textual data, i.e., users’ tweets. Now, for anyone active on social media, it is no news that social media platforms – despite efforts to moderate and delete harmful content – provide a platform for hateful speech, misinformation and so on. In the case of Tay, many users started posting racist tweets with the explicit aim of teaching the bot to do the same thing – which is exactly what happened. It took less than a day for a humanity-embracing conversational bot to turn into a hatemongering racist. What Microsoft failed to consider was the possibility that malicious users would want to poison the training data used to teach Tay.

Tay’s racist tweets were surely an unintended consequence of design on Microsoft’s part. A harmful one, nevertheless. But were they foreseeable? Failing to account for the social context into which the system was deployed, one could argue, amounts to a clear deliberative failure. Designers, developers and engineers need to think about the bad people. Given the prevalence of racism on social media platforms, it is not unreasonable to expect designers to implement safeguards against data poisoning and malicious actors. To quote J. R. R. Tolkien, “[i]t does not do to leave a live dragon out of your calculations, if you live near him.”
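
What might such a safeguard look like? Below is a minimal sketch in Python of one possible precaution, under stated assumptions: gating an online-learning loop behind a toxicity check, so that flagged messages are quarantined instead of being learned from. The ToxicityClassifier and OnlineLearner interfaces and the filtered_online_update function are hypothetical illustrations, not a description of how Tay was actually built.

from typing import Iterable, Protocol


class ToxicityClassifier(Protocol):
    def score(self, text: str) -> float:
        """Return an estimated probability that the text is toxic."""
        ...


class OnlineLearner(Protocol):
    def update(self, text: str) -> None:
        """Incorporate one new training example into the model."""
        ...


def filtered_online_update(
    bot: OnlineLearner,
    classifier: ToxicityClassifier,
    incoming_messages: Iterable[str],
    threshold: float = 0.5,
) -> int:
    """Update the bot only on messages the classifier deems non-toxic.

    Returns the number of rejected messages, which could be routed to
    human moderators for review rather than silently discarded.
    """
    rejected = 0
    for message in incoming_messages:
        if classifier.score(message) >= threshold:
            rejected += 1  # quarantine instead of training on the message
            continue
        bot.update(message)
    return rejected

A real system would need far more than a single threshold – rate limits, human review and red-teaming, for instance – but even this simple gate encodes the assumption that some users will act maliciously.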

Accountability and the Politics of AI Systems

The extent to which one can anticipate the consequences of one’s actions plausibly bears on the morality of those actions. If we fail to anticipate and prevent ill consequences of our actions, we may be seen to have failed to consider the morally relevant aspects of those decisions (e.g., other people’s interests and rights). This, in turn, makes us blameworthy for those consequences even though we did not explicitly intend to produce them. Yet, if no deliberative failure can be found, we are at least less morally blameworthy. Nonetheless, this does not equate to a lack of responsibility or, more specifically, accountability – the expectation that actors be held answerable for their actions. Once designers and developers of AI deploy their products and services into public fora, they could be argued to have agreed (at least implicitly) to exercise their right-to-make-profit insofar as, and to the extent that, others’ rights and freedoms are not violated. People’s rights and designers’ duty to respect those rights, one could argue, are not constraints on freedoms to act in a given forum but, rather, define the very space of acceptable activity, economic and otherwise. For AI designers, this means that they need to have done all in their power to prevent or decrease negative impact in order to justify their participation in the market.

Now, critics of the “unintended consequences” terminology would perhaps argue that, in the end, there is no such thing. In this view, unintended adverse impacts of AI systems only indicate an ordering of priorities and values in design and implementation – prioritizing some types of impacts (positive and negative) over others. Alternatively, such unintended consequences may also point to a neglect of designers’ and developers’ duty to broaden their epistemic landscape, so to speak; a duty to engage with a plurality of stakeholders and worldviews, and to consider the interests and needs of those often pushed into the margins, so as to truly design for everyone. In the worst cases, there may be a conscious disregard for the stakeholders and communities who are likely to be negatively affected by system design and use. Unintended consequences could then be framed as either implicit or conscious prioritization of certain values and interests over others (politics of AI design) or a result of normative blind spots in design and development. As Frank de Zwart states, ‘unintended but anticipated consequences are neither designed nor generated spontaneously; they are better characterized as “permitted outcomes”.’[6] Accountability for such permitted outcomes requires, at the very least, transparency regarding the prioritization decisions that are made during design, development and use; regarding trade-offs between ethical values and outcomes, for example.

As noted, some negative impacts remain unanticipated or “unforeseeable” due to tunnel vision in the pursuit of profit or some other goal. Design choices and, more generally, the very purpose of AI systems reflect the worldviews (or politics, as per Langdon Winner[7]) of those designing them. An example of the politics of AI systems comes from a gender-prediction platform called Genderify[8], which was recently shut down. The system was designed to predict gender on the basis of a keyword, which can, arguably, be financially beneficial for companies that want to use gender demographic data in personalized advertising. However, the system was found to exhibit significant gender bias. For example, for the term ‘scientist’ the system output a 95.7 percent probability that the person is male and a 4.3 percent probability that they are female.
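
To make the notion of statistical bias concrete, here is a minimal Python sketch of the kind of audit that surfaces such skew: query a gender-prediction model with occupation terms and record the gap between its “male” and “female” probabilities. The predict_gender callable and the toy model below are hypothetical stand-ins, not Genderify’s actual API.

from typing import Callable, Dict, List


def audit_occupation_bias(
    predict_gender: Callable[[str], Dict[str, float]],
    occupations: List[str],
) -> Dict[str, float]:
    """Return the male-minus-female probability gap for each occupation term."""
    gaps = {}
    for term in occupations:
        probs = predict_gender(term)  # e.g., {"male": 0.957, "female": 0.043}
        gaps[term] = probs.get("male", 0.0) - probs.get("female", 0.0)
    return gaps


# A toy stand-in reproducing the reported output for 'scientist':
def toy_model(term: str) -> Dict[str, float]:
    return {"male": 0.957, "female": 0.043}


print(audit_occupation_bias(toy_model, ["scientist"]))  # roughly {'scientist': 0.914}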

The politics of this particular system go deeper than statistical bias, however. It may be understood as reflecting a binary gender politics, biased in the sense that it is built on the assumption that gender can be predicted without asking individuals how they self-identify. Countering the politics of Genderify, programmer Emil Hvitfeldt launched Genderify Pro[9]. When one enters a name or occupation as input, Genderify Pro produces the following output:

“if it is important to know someone’s gender, ask them

assigning genders is inherently inaccurate

consider the impact on individuals for whom your assumptions are wrong.”

Technologies such as Genderify Pro, which question the very purpose and politics of mainstream technologies, demonstrate that there are always alternative views and social imaginaries that may be pursued and enforced through digital means[10].

What can designers do?

To prevent exclusion and harm, researchers and citizens are calling for stricter regulation of AI technology alongside more inclusive and participatory design practices. One solution identified as a way of promoting accountability and ethical design is the algorithmic impact assessment (AIA)[11]. An AIA is an assessment and risk-sweeping process, often a continuous one, in which designers and decision-makers identify, assess and evaluate the risks related to a proposed system design, alongside the possible adverse effects the system may have when put into use. Proposed models for AIAs build on a long history of (research on) policy impact assessment, drawing in particular on existing approaches to environmental impact statements, human rights impact assessments and privacy impact assessments. Recently, AIAs have been adopted by the Canadian government to assess the risks involved in AI technologies procured, developed and used by government agencies[12]. The European High-Level Expert Group on AI has also published an ethical assessment list that organizations may consult during the design, development and use phases of AI systems[13].
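
To illustrate what the output of such risk sweeping might look like in a structured, reviewable form, here is a minimal Python sketch of an AIA-style risk record. The fields, scoring scale and threshold are illustrative assumptions on my part, not the format used by the Canadian government’s tool or the ALTAI list.

from dataclasses import dataclass, field
from typing import List


@dataclass
class IdentifiedRisk:
    description: str        # e.g., "model under-predicts eligibility for group X"
    affected_groups: List[str]
    likelihood: int         # 1 (rare) to 5 (almost certain), illustrative scale
    severity: int           # 1 (negligible) to 5 (critical), illustrative scale
    mitigations: List[str] = field(default_factory=list)

    @property
    def priority(self) -> int:
        """A simple likelihood-times-severity score for triaging risks."""
        return self.likelihood * self.severity


@dataclass
class AlgorithmicImpactAssessment:
    system_name: str
    purpose: str
    risks: List[IdentifiedRisk] = field(default_factory=list)

    def unmitigated_high_priority_risks(self, threshold: int = 12) -> List[IdentifiedRisk]:
        """Risks scoring at or above the threshold that lack documented mitigations."""
        return [r for r in self.risks if r.priority >= threshold and not r.mitigations]

The point of such a structure is not the particular scoring scheme but that risks, affected groups and mitigations are written down in a form that can be revisited as the system and its context change.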

Ideally, a report of the AIA process – or some part of it – would be disclosed to the public in the form of an Algorithmic Impact Statement (AIS). An AIS includes a detailed description of the system and identified risks, the methods used for evaluating and mitigating those risks, and possibly a consideration of alternative means for achieving similar results with respect to the given task[14]. While detailed, possibly proprietary or confidential information about the system (e.g., algorithms or code) need not be reported, disclosure of the estimated impact (e.g., in terms of efficiency, accuracy and fairness, environmental costs and sustainability) allows the public to hold designers and decision-makers accountable. Disclosure should include information about prioritization decisions. For example, how did the designers navigate trade-offs between fairness and accuracy in predictive models? Did they consult stakeholders and impacted individuals? How did they ensure that individuals can contest, or seek correction of, decisions generated by an algorithm? This kind of information can help communities assess whether their rights, interests and needs are genuinely accounted for or whether the negative impacts on these communities are, indeed, ‘permitted outcomes’ from the point of view of the decision-makers.
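
As a concrete example of the kind of figure an AIS might disclose, the following Python sketch computes overall accuracy alongside a demographic parity gap – the difference in positive-prediction rates between two groups – on toy data. Both the metric choice and the data are illustrative assumptions; a real disclosure would report metrics appropriate to the system and its domain.

from typing import Sequence


def accuracy(y_true: Sequence[int], y_pred: Sequence[int]) -> float:
    """Share of predictions that match the observed outcomes."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)


def demographic_parity_gap(
    y_pred: Sequence[int], group: Sequence[str], group_a: str, group_b: str
) -> float:
    """Difference in positive-prediction rates between group_a and group_b."""
    def positive_rate(g: str) -> float:
        members = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(members) / max(1, len(members))
    return positive_rate(group_a) - positive_rate(group_b)


# Toy disclosure figures:
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
group = ["a", "a", "a", "b", "b", "b"]
print(f"accuracy: {accuracy(y_true, y_pred):.2f}")                                    # 0.67
print(f"parity gap (a - b): {demographic_parity_gap(y_pred, group, 'a', 'b'):.2f}")   # -0.33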

Another approach to more thorough risk sweeping is participatory design and stakeholder engagement. AI researchers and practitioners are currently developing methods for giving stakeholders and disadvantaged communities a voice in AI design[15], thereby tackling problems that result from a lack of diverse viewpoints and underrepresentation at the tables where significant decisions are made. These efforts include designing systems with more comprehensive feedback components and user-interactive elements, developing auditing and documentation methods, auditing existing systems for mechanisms that amplify existing injustices and inequalities, and building tools for community and workforce organizing. At their best, participatory AI and community engagement can balance the power asymmetry between tech designers, decision-makers and impacted communities. They are no panacea for ethical issues in AI but, hopefully, they will lead to design practices and technologies that are better suited to accounting for the interests and needs of people who are far too often excluded, marginalized and left behind.

References

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press.

De Zwart, F. (2015). Unintended but not unanticipated consequences. Theory and Society, 44(3), pp. 283-297.

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P. & Vayena, E. (2018). AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), pp. 689-707.

Government of Canada Digital Playbook. Government of Canada. URL = https://canada-ca.github.io/digital-playbook-guide-numerique/views-vues/automated-decision-automatise/en/algorithmic-impact-assessment.html. [Accessed 12.8.2020]

Green, B. (2019). “Good” isn’t good enough. AI for Social Good workshop at NeurIPS 2019.

High-Level Expert Group on Artificial Intelligence. (2020). The Assessment List for Trustworthy Artificial Intelligence (ALTAI). URL = https://ec.europa.eu/digital-single-market/en/news/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment.

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 2053951716679679.

Moss, E., Watkins, E. A., Metcalf, J., & Elish, M. C. (2020). Governing with Algorithmic Impact Assessments: Six Observations. Available at SSRN.

Passi, S., & Barocas, S. (2019). Problem formulation and fairness. Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 39-48.

Reisman, D., Schultz, J., Crawford, K. & Whittaker, M. (2018). Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability. AI Now Institute. Retrieved from https://ainowinstitute.org/aiareport2018.pdf.

Selbst, A. D. (2017). Disparate impact in big data policing. Georgia Law Review, 52, pp. 109-195.

Synced. (2020, July 30). AI-Powered ‘Genderify’ Platform Shut Down After Bias-Based Backlash. URL = https://syncedreview.com/2020/07/30/ai-powered-genderify-platform-shut-down-after-bias-based-backlash/. [Accessed 14.8.2020]

The Verge. (2016, March 24). Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day. URL = https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist. [Accessed 14.8.2020]

Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), pp. 121-136.

[1] On this subject see Floridi et al. 2018. For a critical examination of efforts to design for the “common good” see Green 2019.

[2] See, e.g., Mittelstadt et al. 2016.

[3] E.g., Angwin et al. 2016.

[4] Passi & Barocas 2019, p. 1.

[5] See The Verge 2016, March 24th.

[6] de Zwart 2015, p. 295.

[7] Winner 1980.

[8] Synced 2020, July 30th.

[9] Hvitfeldt’s Genderify Pro can be found at https://emilhvitfeldt.github.io/genderify/.

[10] See also Benjamin 2019.

[11] See, e.g., Reisman et al. 2018; see also Moss et al. 2020 for a critical examination.

[12] Government of Canada Digital Playbook. See link: https://canada-ca.github.io/digital-playbook-guide-numerique/views-vues/automated-decision-automatise/en/algorithmic-impact-assessment.html.

[13] High-Level Expert Group on Artificial Intelligence 2020.

[14] See Selbst 2017.

[15] See, e.g., research on Participatory Approaches to Machine Learning, link: https://participatoryml.github.io/.