Algorithms are everywhere. Of particular interest, algorithms that are used “to select what is most relevant from a corpus of data composed of traces of our activities, preferences, and expressions” can be extraordinarily powerful tools.1 Such algorithms determine the advertisements we see online or receive in the mail, the posts that appear prominently on social media feeds, and even hiring and firing decisions. They touch innumerable aspects of many people’s daily lives. And, as one recent post from the University of Oxford’s “Practical Ethics” blog noted, the way algorithms “function and are used . . . whether in computers or as a formal praxis in an organization – matters morally because they have significant and nontrivial effects.”2
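The kind of selection described above can be made concrete with a minimal sketch. Everything here is hypothetical – the items, topics, and weights are invented for illustration, not drawn from any real system – but it shows how a simple scoring function built from a user’s behavioral traces decides what is surfaced and what is never seen:

```python
# A minimal, hypothetical sketch of a relevance-selection algorithm: it ranks
# items using weights inferred from a user's activity traces. All data and
# field names here are invented for illustration.

def rank_feed(items, trace_weights):
    """Order items so that some get attention while others are ignored.

    trace_weights maps a topic to a weight derived from the user's past
    behavior; items this function scores low may never be shown at all.
    """
    def score(item):
        return sum(trace_weights.get(topic, 0.0) for topic in item["topics"])
    return sorted(items, key=score, reverse=True)

items = [
    {"id": "post-a", "topics": ["sports"]},
    {"id": "post-b", "topics": ["politics", "local"]},
    {"id": "post-c", "topics": ["cooking"]},
]
# Weights a platform might infer from this user's clicks and dwell time.
weights = {"politics": 0.9, "local": 0.4, "sports": 0.2}

feed = rank_feed(items, weights)
# "post-b" is promoted to the top; "post-c" is effectively invisible.
```

Even in this toy version, the consumer never sees `weights` – which is precisely the opacity concern the rest of this post takes up.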
Many algorithms provide great benefit to our society, helping human beings organize and simplify a constantly expanding and complicated universe of data. In some situations, however, they can also have adverse and inhumane effects – for example, by invading individuals’ privacy or producing results based on incomplete or otherwise flawed data. Accordingly, all involved parties – the information technology innovators who create algorithms, the corporations that use algorithms for business gain, and the technology consumers whose embrace of algorithm-enhanced products has made such systems ubiquitous – have an obligation to think about and develop ethical approaches to the current landscape. How do we even begin to approach this enormous task?
In March 2015, the Centre for Internet and Human Rights and the Technical University of Berlin hosted a conference on “The Ethics of Algorithms,” at which academics and technology professionals from the United States and Europe grappled with these very issues. A background paper from that conference identified a subset of algorithms that are of the greatest ethical concern, and the specific attributes that require heightened scrutiny: “complexity and opacity, gatekeeping functions [determining ‘what gets attention, and what is ignored’], and subjective decision-making.”3
That same paper also proposed a handful of appropriate regulatory responses to problematic algorithms, weighing the pros and cons of each. This provides an excellent starting point for any discussion of the ethical challenges of algorithms.
The first proposed response is “algorithmic transparency and notification.” Transparency in algorithms is a challenging proposition – in part because most algorithms are so complex that lay people would not be able to understand them even if they were opened up to scrutiny. In addition, many programmers and corporations keep the secrets of their algorithms close to the vest and would not give them up without a colossal fight. While some openness is a fantastic goal and is necessary for a dialogue about ethical algorithms, on its own this is not a realistic or adequate solution. An alternative to full transparency, however, is “notification,” which envisions more consumer engagement with the manner and extent of data provided to algorithms: “Consumers can demand for control over their personal information that feeds into algorithms which might have a considerable effect on their lives. This includes the rights to correct information and demand their personal information to be excluded from the database of data vendors.”
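The “notification” idea – consumers correcting their information or demanding exclusion from a data vendor’s database – can be sketched in a few lines. The record layout, vendor store, and function names below are hypothetical, offered only to make the proposal concrete:

```python
# A minimal, hypothetical sketch of consumer "notification" rights: correcting
# one's own record, or demanding removal from a data vendor's database before
# it feeds any downstream algorithm. All names and records are invented.

vendor_db = [
    {"user": "alice", "zip": "11201", "interest": "loans"},
    {"user": "bob", "zip": "94110", "interest": "travel"},
]

def correct_record(db, user, field, value):
    """Apply a consumer-requested correction to that consumer's record."""
    for record in db:
        if record["user"] == user:
            record[field] = value

def exclude_user(db, user):
    """Honor a consumer's demand to be removed from the database entirely."""
    return [record for record in db if record["user"] != user]

correct_record(vendor_db, "alice", "zip", "11215")
vendor_db = exclude_user(vendor_db, "bob")
# Only alice's corrected record remains available to downstream algorithms.
```

The hard part, of course, is not the code but the governance: obligating vendors to expose such controls and to honor them throughout their data pipelines.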
A second response, “algorithmic accountability,” asks that we question how and why algorithms work as they do: “causal explanations that link our digital experiences with the data they are based upon [which] can empower individuals to better understand how the algorithms around them are influencing their life-worlds.” Indeed, the conference paper describes investigations as to how algorithms produce certain outcomes, even if such investigations do not create a definitive explanation, as an “essential precondition for the public scrutiny of algorithms.”
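One way such an investigation can proceed even without opening the black box is to probe it from the outside: nudge each input and observe how the outcome shifts. The following is a minimal sketch of that idea; the stand-in scoring model and feature names are hypothetical, invented purely for illustration:

```python
# A minimal sketch of "algorithmic accountability" by outside probing: even
# when a model is opaque, perturbing one input at a time links outcomes back
# to the data they are based upon. The model and features are hypothetical.

def opaque_score(features):
    # Stand-in for a proprietary model we cannot inspect directly.
    return 0.7 * features["income"] + 0.1 * features["age"] - 0.5 * features["debt"]

def attribute(score_fn, features, delta=1.0):
    """Estimate each input's influence by nudging it and re-scoring."""
    base = score_fn(features)
    influence = {}
    for name in features:
        perturbed = dict(features, **{name: features[name] + delta})
        influence[name] = score_fn(perturbed) - base
    return influence

applicant = {"income": 50.0, "age": 40.0, "debt": 20.0}
effects = attribute(opaque_score, applicant)
# effects shows income pushing the score up most and debt pulling it down.
```

Such a probe does not yield a definitive explanation – interactions between inputs, for instance, escape it – but it illustrates the paper’s point that partial causal accounts are still an essential precondition for public scrutiny.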
Finally, the paper approaches the possibility of “governments directly regulating an algorithm,” with regulation of algorithms in the financial sector as one appropriate example. This regulatory approach becomes more complicated, however, if applied to government regulation of search engines: “even deciding what would be in the ‘public interest’ is a complex and contested question, exactly because there is no right answer to how [a search engine] should rank its results.” Such regulation would be controversial and difficult (if not impossible) to manage. It could also serve to discourage innovation in the development of algorithms, at a time when we should foster creativity and flexibility among programmers.
None of these regulatory responses is perfect. What this discussion does make apparent, however, is that algorithms are valuable yet imperfect tools and, especially as they become increasingly central to our lives, they should be scrutinized through a lens of fairness and ethics.
Indeed, as the previously referenced University of Oxford blog post puts it: “We cannot and should not prevent people from thinking, proposing, and trying new algorithms: that would be like attempts to regulate science, art, and thought. But we can as societies create incentives to do constructive things and avoid known destructive things.”
Some awareness of the impact of algorithms on humanity, both positive and negative, can go a very long way, along with consideration of our ethical obligations as the drivers of the algorithm environment. Above all, we must not fall into the trap of thinking of algorithms – however autonomous they may appear when skillfully designed – as something independent of their human creators, for which humans do not bear full responsibility.
As Tarleton Gillespie recommends in “The relevance of algorithms,” we “must unpack the warm human and institutional choices that lie behind these cold mechanisms . . . to see how these tools are called into being by, enlisted as part of, and negotiated around collective efforts to know and be known.” Such inquiry can help us make ethical choices about our use of algorithms. Used thoughtfully, algorithms can be an extraordinary tool for the common good.
- Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. Boczkowski, & K. Foot (Eds.), Media Technologies: Essays on Communication, Materiality, and Society (pp. 167–194). Cambridge: MIT Press.
- Sandberg, A. (2015, Oct. 6). Don’t write evil algorithms. (Web log post). Retrieved from http://blog.practicalethics.ox.ac.uk/2015/10/dont-write-evil-algorithms/
- Centre for Internet and Human Rights. (2015, March). The ethics of algorithms: From radical content to self-driving cars. (Final draft background paper). Retrieved from https://www.gccs2015.com/sites/default/files/documents/Ethics_Algorithms-final%20doc.pdf