Ethical decisions in sensitive research situations
Annette Markham
“Ethics.”* This is a complicated term in academic research contexts. Not only does it conjure up vague philosophical concepts about morality and values, but it also becomes a stand-in for the process of sifting through complicated regulations, taking mandatory training modules, and checking off various boxes to demonstrate one’s adherence to ethical principles. These background encounters with the concept of ‘ethics’ serve researchers poorly in situations where the ethical needs at hand are not satisfied by the training one has received.
- When studying radical and potentially violent communities and wanting to remain in the shadows as a lurker, rather than as a member of the organization or a recognized researcher, how does one satisfy the regulatory norm of gaining “informed consent”?
- When one seeks to learn more about successful suicide prevention by scraping confidential data from hotline calls without asking for express permission, how can researchers justify a potential violation of privacy?
- What if you want to talk with young adults online about their drug use, but they can’t prove they’re adults without giving away personally identifiable information?
- What should a researcher do if they need to study the spontaneous reactions of people to surveillance robots in public spaces, but the local ethics review committee requires, by default, informed consent in advance of exposure to the robot, thus negating the “spontaneous” part of the study’s design?
- If one wants to record and study children interacting with a controversial robot in a public square without seeking permission from them or their parents, how can one adhere to international ethics principles that name children as an automatically vulnerable population?
These are complicated, real research situations that my colleagues and I have faced. There is no single answer, just tough choices at critical junctures with many possible outcomes. This chapter focuses on a specific research project related to the last two questions above, in which I led a team of early career researchers in a study of public reactions to semi-autonomous robots that have been used for surveillance and control. In such a case, issues arise around key research ethics concepts, including informed consent, vulnerability, privacy protection, and data security.
Readers can find many documents elsewhere to guide basic thinking about ethical regulations and ethical decision making in internet research or digital contexts, such as the three distinctive guideline documents developed by the Association of Internet Researchers (AoIR). Here, I focus on some basic factors one should consider when situations are unclear, when regulatory guidelines are inadequate, or when the needs of the research situation or the local context do not match taken-for-granted norms or policies. In contexts where the company’s or institution’s ethical guidelines seem ‘off base,’ or even counterintuitive, how do ethical researchers make choices?
In this chapter, I suggest that when ethical guidelines are contradictory, ineffective, or inappropriate, returning to core principles and seeking guidance from experienced researchers can be an effective strategy for building bespoke guidelines that have strong contextual, rather than regulatory, integrity. This does not mean that one can ignore the laws that apply in various research situations. But it recognizes that ethical guidelines are negotiable and changing, not static, and that different situations and stages of inquiry require precise and mindful ethical decision making rather than rote adherence to policies.
Ethical guidelines are negotiable and changing, not static
Ethical guidelines for the scientific study of humans were developed in response to egregious violations of human rights in WWII. Basic principles of ethical treatment of humans were codified in notable historical documents such as the Nuremberg Code, the Declaration of Helsinki, and later, the Belmont Report. Concepts such as respect, justice, and beneficence were commonly specified through actions or practices that could be regulated, that is, actions that were demonstrable and measurable, such as “obtaining informed consent,” “protecting privacy through anonymization,” or “excluding vulnerable categories of people from research studies.” These operationalizations, strongly associated with the biomedical research contexts for which they were developed, have been critiqued for decades by social science and humanities research communities, as they are ill-suited and woefully outdated when it comes to qualitative social research, as well as digitally saturated or data-implicated research contexts.
Put differently, while the core principles are broad and generally applicable, their operationalization through regulated practices and norms is narrow, specific, and likely outdated. To this, we can add that the most commonly adopted ethical regulations come from the nations that were the first to define the concepts in writing, establish norms through specific policies, and build training models for novice researchers. While this history is simplified here, the point is that the language and policies around what counts as ethical practice have been dominated by particular stakeholders at certain points in time, and these norms spread, not least because they have immediate utility. An unfortunate consequence is that certain definitions are inappropriately universalized. For example, “utilitarian” perspectives have been commonly adopted in Western countries like the UK or U.S., whereby harm is measured as a ratio of risk to benefit, and the evaluation of ethical action relates to its consequences or effects. In other words, whatever benefits the greatest number of people will outweigh the risk of harm to a few. This utilitarian or consequentialist stance clashes directly with the more “deontological” ethical perspectives adopted in the Nordic regions, whereby, in alignment with the philosophy of Immanuel Kant, one should evaluate an action in relation to the nature of the action itself. This means that sometimes, research simply should not be done.
Digital media and data analytics have prompted fruitful transformations in how ethical research is defined and enacted. This is not discussed in depth here, but it is worth mentioning that, at a baseline level, there is strong recognition that ethical research design is not a one-size-fits-all proposition. Instead, as the AoIR has argued since 2003, ethical decision making should be grounded in the particularities of the social and technical contexts, which means paying attention to a range of considerations that continue to evolve as new technologies emerge.
When regulatory, situational, and disciplinary definitions don’t match
Doing the right thing. Avoiding the creepy factor. Making ethical choices is often a matter of weighing many competing factors in specific situations with many stakeholders and interested parties. Most regulatory guidelines are written for idealized rather than actual situations. Thus, even if you want to maintain an ethical stance that emphasizes being sensitive to the specific contexts under study, this sort of “contextual integrity” (Nissenbaum, 2010) may be challenging to achieve, since rules and norms can vary widely and contradict one another across international regulatory bodies, local or regional ethics review committees, the ethics delimited by the Terms of Service of digital platforms, and disciplinary or professional codes.
For example, in Australia and the U.S., two environments I have become familiar with, researchers working with people or “human subjects” are required to submit their research design to an ethics committee and gain approval prior to carrying out the research. One common expectation is that before talking with or collecting data from persons, the researcher will inform the participant of the study’s purpose, ask them to confirm they comprehend it, and obtain written informed consent to be a participant in the study. On the surface, this all seems quite sensible. In actual practice, this regulatory requirement may not only be counterproductive in the field but may also be dangerous to researchers or participants. Informed consent from a named individual might be impossible (if you are interacting with persons in anonymous online spaces, for example). Or it can increase risks to the researcher (if you are studying a group that victimizes people like you, asking for consent requires presenting an authentic identity, which increases the risk of becoming a victim yourself). There are many reasons why consent, or informed consent, may not be necessary or warranted. This is only one example among many, part of longstanding discussions among ethics scholars (cf. Markham, 2018; Zimmer, 2018) about the serious mismatches between different models for how ethical research can or should be accomplished.
The case of studying Spot the Robot
Complex digital situations often involve multiple ethical dilemmas. Because digitally saturated contexts will most likely either occur in, or intersect with, the use of digital platforms, attention to Terms of Service is crucial, especially around issues of data scraping or announcing oneself as a researcher. Local laws around gathering, storing, or analyzing data may also be relevant, such as the EU’s General Data Protection Regulation (GDPR), a broad set of requirements that must be attended to when collecting data from EU participants or in EU contexts. If one is conducting research at an institution like a university, there will be norms as well as policies around research integrity, data management, and research ethics review. These three are often conflated, or their committees combined, but each carries special considerations. The last, the research ethics review committee, is most relevant to this case, since it required considerable attention.
The case I present illustrates some of these complexities. Since the study’s design is too extensive to discuss in detail here, I describe it through the potential ethical dilemmas it immediately presented to me as the research leader:
- Controversial object: The research team sought to study the reactions of people to a controversial robot developed by Boston Dynamics, called Spot. These imposing, agile, and semi-autonomous robots were designed for multiple uses, “such as inspecting a bomb, rummaging through remnants of an explosion or fire, or even deescalating a potentially dangerous situation” (Bushwick, 2021). In 2021, “officers deployed the robot in just a few cases, including a hostage situation in the Bronx and an incident at a public housing building in Manhattan” (Bushwick, 2021). Within a short time of being deployed, the robots were pulled from the streets because of negative public outcry. Among other reactions, Spot, or “Digidog” as the NYPD called it, was denounced as frightening, creepy, and inappropriately deployed as a surveillance mechanism in poor communities (read more in Bushwick, 2021).
- Disrupting people unexpectedly in a public setting: We wanted to bring Spot to the largest public square in Melbourne, allow it to walk around in a seemingly autonomous fashion, and observe the immediate reactions of passersby.
- Participants not defined in advance: The robot would encounter significant numbers of passersby, and because we didn’t know who might pass by, there could be no restrictions on who would be included in the study.
- Including children as participants: This would include any persons who might regularly be described as vulnerable, such as children.
- Recording audio/video in public without prior permission: We also wanted to video/audio record the human/robot encounters as well as brief interviews with people just after they encountered the robot.
- Data management (transferring sensitive facial image data via non-secure cloud storage): Because we wanted to observe the encounter from a variety of angles, we would use a team of researchers, all of whom would be recording their observations through their own personal mobile phone cameras. This point is relevant since they would then be transferring video data of people to a central location for analysis (a mitigation sketch follows this list).
- Nonobvious surveillance video recorded by robot: Finally, we would also store and analyze the visual data generated by the robot’s many cameras.
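Technical mitigations can carry part of the ethical load for the data-management dilemma above. The chapter does not describe the team’s actual pipeline, so the following is only a minimal sketch, in Python, of one common approach: encrypting each clip on the researcher’s device before it touches shared cloud storage, so the provider never holds viewable footage. The filename and the key-distribution arrangement here are hypothetical.

```python
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_for_transfer(clip_path: str, key: bytes) -> Path:
    """Encrypt a recorded clip so only holders of `key` can view it."""
    token = Fernet(key).encrypt(Path(clip_path).read_bytes())
    out_path = Path(clip_path).with_suffix(".enc")
    out_path.write_bytes(token)  # this ciphertext is what gets uploaded
    return out_path

# The project lead generates the key once and shares it with the field
# team over a separate, secure channel -- never alongside the files.
key = Fernet.generate_key()

# Hypothetical filename for one researcher's phone recording.
encrypted = encrypt_for_transfer("spot_encounter_cam3.mp4", key)
print(f"Upload {encrypted}; the cloud provider only ever sees ciphertext.")
```

Fernet reads the whole file into memory, so long recordings would need chunked encryption; the point is simply that “non-secure cloud storage” becomes a transport layer rather than a custodian of identifiable footage.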
The desired setup raised ethical quandaries under almost every red-flag category of human subjects research, including, as a partial list: a) disrupting a person’s normal expectations for walking through a public space, b) protecting participants’ privacy after collecting images in an era of facial recognition, c) ensuring participants not only comprehend what’s happening but also consent to being studied, d) avoiding physical and psychological harm, e) respecting people’s rights to participate (or not), f) giving extra protection to vulnerable people, and g) properly handling the data collected. Once you start thinking about the potential complications, it’s nearly enough to stop the study before it begins.
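To make point (b) concrete: one hedge against downstream facial recognition is to de-identify footage before it is stored for analysis. This was not necessarily the team’s procedure; the sketch below, using OpenCV’s stock face detector, simply illustrates the kind of step a data-management plan might specify.

```python
import cv2  # pip install opencv-python

# OpenCV ships a pretrained Haar-cascade frontal-face detector.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame):
    """Blur (in place) every detected face region in a BGR video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(
            frame[y:y+h, x:x+w], (51, 51), 30)
    return frame
```

Haar cascades miss profile and partially occluded faces, so automated blurring would supplement, not replace, human review of footage before it is shared.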
When we pair this setup with the presence of an unexplained and unexpected robot (a somewhat controversial robot, confronting enough to have inspired the machines in the “Metalhead” episode of the series Black Mirror, which depicts a techno-dystopian near future), several more ethical ‘red flags’ emerge that must be considered. The robot is known to cause strong reactions: it walks on four ‘legs’ and can be experienced as not simply agile but autonomous, since the remote control is small and can be up to 50 meters away. The robot sounds mechanical, which our participants described as militant. It also has motions reminiscent of a dog, which can be experienced as quite cute. It is sometimes experienced as uncanny, the disturbing sensation that something is not quite right. Physically, the robot is heavy and, if it fell on someone, could cause injury. Oh, and it has many moving parts that could crush a finger caught in its joints.
When I explained this in detail to colleagues and the research team, they suggested modifying the study so that it would be approved by the university’s ethics committee: removing the need for video recordings, avoiding children and other vulnerable populations, and seeking written informed consent from anyone we talked to in the field.
When I presented it as an idea, not yet a proposal, to the ethics committee, they suggested it was not as risky as I was making it out to be and that I could easily frame it in less risky terms to help marshal the project through the ethics approval process. I decided to take this advice and shrink the sheer number of red flags the committee would need to consider.
I decided to keep all the elements of the study intact and to justify the need for the study using a utilitarian framework. This perspective would emphasize that there is a vital need for understanding the granular physical and psycho-social dynamics of human/robot interaction, especially since robots are swiftly being introduced into public spaces: the potential benefit of the study far outweighs the risks of the study itself.
Three decisions are highlighted here. First, I rephrased how I presented information to the committee, from “this is a heavy, dangerous, and controversial robot that really should not be in the public sphere” to “this controversial robot will soon roam public spaces around the world, since there are very few regulations in this emerging area; studying people’s reactions is a critical topic.”
Second, despite the committee’s pressure to remove children, I insisted that since children would be the most affected by increasing numbers of robots in their lifetimes, they should be allowed to participate, react to the robot, and have their opinions and reactions recorded and heard by scientists. To sell this design to the ethics committee, I provided numerous precedents for studying youth without parental consent and extensive documentation on the rights of youth to participate in research, including work by Billett (2020), work on ethical research with children specific to the Australian context by Graham, Powell, and Taylor (2015), and keen insights and advocacy work by Ruiz-Canela, Burgo, Carlos, et al. (2013).
Third, I held firm that consent, informed or not, should be waived for this study, since the chance of capturing a natural reaction would be undermined if we alerted people in advance to obtain consent for the encounter. Asking for consent afterward would also be disruptive. I was willing to compromise: if people wanted to give us their reaction, they would be briefly informed about the study and asked for verbal consent. In any confusing or tricky situations, the researchers would flag me or another senior researcher. There were plenty of information sheets to give people who wanted them, and we also encouraged people to take photos of our contact information in case they had questions or misgivings, or might want to have a longer conversation about the study. These information guides and plentiful options for later contact provided a measure of reassurance. They also met the standard procedures of the University’s ethics committee.
This project was approved and allowed many types of interventions, data collection, and inclusion of vulnerable populations, which is remarkable since, in this context, ethics committees are known to be quite strict. Notably, the three points highlighted above should not be taken as the only relevant issues or steps in this case, but simply as illustrations of the ethical process as one of making active decisions that may go against the general flow of assessments by other members of the research team or external parties.
This is an unusual but optimistic case for embracing tricky research ideas, using strategic framing, and maintaining a strong ethic that the ‘best’ thing to do might sometimes be the riskier option. Even if one doesn’t realize it, most research designs contain elements that can be interpreted quite differently depending on how one frames the situation or the ethical guidelines. An ethics committee might apply more or fewer restrictions than the researcher would. Ethical parameters are not pre-set or universal, even if laws and regulations are, which means there is room for interpretation. Often this means going above and beyond what is minimally required by law. It can also mean establishing and defending creative practices that may seem to defy one’s local regulatory norms but actually do a better job of achieving the goals of the principles on which those norms are based.
Balancing creativity and constraint
This case is meant to complicate the assumption that adequate ethical guidance exists for tricky situations, not to put off researchers, but to enable a more proactive stance that is mindful and, because it is sensitive to the needs of the situation, flexibly adaptive. This is a useful consideration in criminology, since most studies will be assessed as risky, yet their topics are of vital importance to study.
This discussion also offers a pathway for balancing the need to make independent decisions about the best move with dependence on external ethical regulations that have been developed for good reason. Importantly, while this chapter focuses on how regulations have become outdated because of changes wrought by digital technologies, we would be remiss to assume they are always too limiting: regulations can also be too loose, not providing enough guidance for novice researchers in particular, or in tricky situations that might benefit from significant input. It is important to remind oneself that ethics committees are there to help, not hinder, and they generally want to see the research proceed and succeed.
Sometimes the tricky issues of a specific case make it necessary to develop ethical decisions beyond and outside the boundaries of regulated norms. Taking a proactive rather than a more conservative stance can be risky, which itself illustrates how common regulatory guidelines rarely fit the contexts of actual research and can therefore seem broken. Working beyond regulations is easier when one gets buy-in from the local ethics committee or guidance from local governing bodies and international experts.
As a summary of the key takeaway points of this chapter:
- Digitally saturated field sites or media for interacting with participants present particular ethical challenges that are not always accounted for in traditional ethics regulations or disciplinary guidelines.
- Concepts like ‘human subject’, ‘privacy’, and ‘consent’ require definition on a case-by-case basis rather than universally, especially since they can be defined and experienced in many different ways in digital or data contexts.
- Digitalization creates more complications for data protection, requiring not only careful planning for data management but also creative approaches to protecting privacy, beyond what regulations may minimally demand.
- Ethical parameters are not set in stone but ever-changing and adaptable.
- While proactive planning to adapt to the needs of the specific context is advised, this creativity should be balanced with careful attention to how situations can change.
- Ethical needs will change over the course of a study; regularly attending to the ethical situation creates strong contextual integrity.
- When situations are tricky or extant ethical guidelines don’t seem to fit, researchers are advised to return to core principles and seek advice from expert and experienced researchers.
Recommended reading:
By far the most comprehensive and most widely adopted guidelines for ethical decision making in digital research have been developed by the Association of Internet Researchers (AoIR), and all three distinctive sets of guidelines, available at aoir.org/ethics, are recommended:
- Ess, C., & the AoIR Ethics Working Committee (2002). Ethical Decision-Making and Internet Research (IRE 1.0).
- Markham, A., & Buchanan, E. (2012). Ethical Decision-Making and Internet Research 2.0 (IRE 2.0).
- franzke, a. s., Bechmann, A., Zimmer, M., Ess, C., & the Association of Internet Researchers (2019). Internet Research: Ethical Guidelines 3.0 (IRE 3.0).
Since most readers will need to adhere to local regulations and norms, they should also pay close attention to how local or regional guidelines match the AoIR guidelines on the ethical treatment of people, norms for data management, and research integrity. In these guidelines, readers can also find precedent and guidance for designing research that adheres to international best practice but may conflict somewhat with local regulatory norms.
*Draft of a forthcoming short essay. Please cite as: Markham, A. (in press). Ethics. In H. Mork Lomell & M. Kaufmann (Eds.), Handbook on Digital Criminology (pp. forthcoming). De Gruyter.