Impact Model for Ethics: Notes from a talk

Annette Markham

This blog post contains notes from a talk I gave on June 27, 2017, at Ryerson University as part of a pre-conference workshop on Data Stewardship prior to the main Social Media & Society annual conference. As one of four invited speakers, I was asked to focus on the ethics of academic use of research data. This talk offers an impact model as a framework for thinking about ethics, not only in academic use of data but also in technology design and development. A more complete article based on this talk was published in 2018.

Last year at the launch of the Alan Turing Institute at the Oxford Internet Institute in the UK, experts gathered to talk about data and ethics. One of the emergent themes highlighted the importance of building nuanced frameworks for discussing ethics issues. How do policy makers frame ‘big data’ or ‘smart city’ issues in conversation with the public? How do tech companies frame ethics when socializing employees? How do regulators frame definitions of ethics when they talk about governance of data science? These questions serve not just to identify current practice, but to prompt vital discussions about how we might change these frames in the future.

I want to talk about how an impact model for ethics might frame data science in a way that produces stronger ethics in everyday practice among scientists, developers, and policy makers.

For me, this is literally an ethnographic project. I’ve been studying these premises in the field of social research for more than a decade now.  It’s also a rhetorical challenge. I’ve been searching for many years to find the right vocabulary or visualization to guide ethical data science, technology design, and social research.

The development of this model rests on three premises. First, a key premise from rhetorical theory is that everyday discourse matters: our enactment of anything ‘ethical’ is built on how we understand ethics in the first place, which is based on how we learn what ethics means from other sources. Second, ethics is often experienced as a vague and philosophical term. Third, if we rethink our frames for the practice of ethical research and design, we can start to build new practices that fit contemporary complexities.

Over the past decade, we can see shifts in how we conceptualize and frame concepts around data, which influences how we think about ethics. Here, I mention four contexts in which I’ve been working, where I’ve either noticed this shift or worked actively to change thinking among specific groups:

1: AoIR: I was involved in the development of the “Ethical Decision-Making and Internet Research” guidelines document between 2006 and 2012.

2: I worked to develop different ways of thinking about ethics and technology development at Microsoft Research Labs, along with other members of the Social Media Collective.

3: In Aarhus, Denmark, I’m working with the Aarhus City Archives and Aarhus University to rethink what counts as ‘data’ collected around large-scale cultural events.

4: As founder of the Future Making Research Consortium, I am supporting various research teams focused on designing creative and playful interventions with citizens to build critical literacy around data.

[[As a caveat, it should be noted that my core research project is around methodology, which influences how I think about ethics. Elsewhere, I develop a ‘method as ethics, ethics as method’ perspective. Here, I only mention briefly that, from an epistemological perspective, one’s method or approach will be influenced by social and political considerations as well as by one’s everyday habits. In looking at what influences our research and design practice, then, we can look both above and below what we might describe as disciplinary tools and techniques. This shift of perspective highlights the political, social, and economic factors influencing our research design, and also the everyday habits or norms influencing our enactment of technique.

We can also apply this logic (of looking above and below) to how ethics get enacted, which allows us to consider how our ethics are being framed, and what conceptual frameworks are being reproduced and privileged. End caveat]]

Over a decade of work on the AoIR ethics committee provides a clear example of how frames around ethics shift over time. The most remarkable difference between the first set of guidelines in 2002 and the second in 2012? The number of questions in the document. This shift reflected the growing recognition across regulatory and scientific communities that contextual factors complicate ethical decision making. It is no longer adequate to adhere to typical top-down or principle-driven ethical codes; we must also pay attention to the specific contingencies of situations.

[Image: a page of the chart from the AoIR 2012 guidelines document]

This image is one page from the AoIR 2012 document, Ethical Decision-Making and Internet Research. It is perhaps the 20th iteration of a chart developed over six years. We started by organizing ethical questions by platform, which was unsatisfactory. We then experimented with organizing ethics issues by data type or by methodology. Neither option worked. Finally, we got rid of most of the grid lines that would demarcate a chart and simply listed types of data and types of contexts, followed by the questions ethical researchers commonly ask.

Ultimately, while this may look like a simple chart, it took many months to create and many meetings to approve. The actual handout is available online.

In 2015, I shifted from thinking about ethics in social research to how we talk about ethics in technology design environments. In response to what I perceived as a failure to convey the urgent need to consider ethics, I began to experiment with vocabulary that would translate better to the tech sector. When I used the word ‘ethics,’ many computer scientists and computational developers would simply tune out. They have ethics, don’t get me wrong. But technology developers often use a different narrative. When I highlighted the concept of “avoiding the creepy factor” in a talk, for example, it became a compelling idea for discussion.

[Image: early draft of the impact model]

As my own vocabulary shifted toward ‘avoiding the creepy factor,’ I recognized that my outlook was shifting to considerations of possible impacts, a future-oriented perspective. Inspired by a conversation with Microsoft researchers Janice Tsai and Sumit Basu, this impact model identifies four arenas, or ways we can talk about the impact of our actions. What happens when we use the word ‘impact’ rather than ethics? The term prompts us to think about the ways our decisions and actions, whether deliberate or not, might influence other things or people, in both the short and long term. These arenas are meant as provocations for further conversation among data scientists, IT developers, and technology designers, who face different ethical dilemmas than the typical social researcher. Because this is an early draft of the model, the number of levels is not yet settled. It is also not clear whether these are categories, scales, or levels. And while they focus on possible harms, the other important angle of gaze opened up by this model is the possible future benefits of taking particular actions.


In 2016, I started a 5-year project to engage with citizens to ask them (and train them to ask themselves) questions about the future impact of data collection. This shifted my own research lens from empirical observation or evaluation to intervention. How could we help citizens regain control of the ‘big data’ they regularly produce in their everyday lives? As contemporary societies become more saturated in digital and social media, it takes time and effort to keep track of our own piles of messages, photos, and posts, much less curate these in ways that might make sense for our grandchildren. On the other hand, Google and Facebook are avidly interested in sorting and curating our memories for us.

This project exemplifies how we might teach citizens about an impact model of ethics. The method we’re using in Aarhus is to talk around ethics rather than to address it directly.

To further describe this experimental method and the goals behind it: our starting point when working with citizens in this future-oriented approach is to ask them to imagine themselves as archeologists in the year 2080. As they dig up artifacts from the year 2017, what are they likely to find? What is the equivalent of a ‘pot shard’ 75 years in the future?

Taking this future-oriented perspective allows citizens to speculatively imagine the impacts of current technologies. It also helps technology developers consider the possible long-term impacts of various aspects of tech design.

The overall project is intended to conduct a range of intervention workshops that experiment with different techniques for building data literacy, focus on the ethics of technology design, and critically analyze contemporary practices and policies around data archiving in smart cities.

A specific project we’re doing is the Museum of Random Memory. You can learn more about this project here. We designed this exhibition to be participatory so ideas could unfold playfully over two days around questions that are very STS-inspired, including but not limited to:

What is the process of remembering and forgetting in the digital age?

How are we creating future memories?

How do cities now create their future heritage? How much is this deliberate versus automated through digital data collection?

What do the affordances of social media like Facebook encourage us to remember… and how?

Whose histories get to count? Who or what might be forgotten?

100 years from now, what will archeologists find to teach them about what happened back in 2017? What would we like them to find?

An impact model is, I believe, a natural next step in the development of frameworks that guide decision making in technology design and social research. Some highlights of such an impact model for ethics include:

Shifting from statements/rules to questions,

Making abstract concepts more concrete,

Giving agency (and responsibility) to individuals,

Raising consciousness across multiple sectors about the future impact of technology designs, uses, policies, and norms,

Embracing uncertainty as inherent to assessing situations and making decisions that have ethical consequences,

and recognizing the importance of building flexible and adaptive models.