The dialogue of evidence: A Summary

Annette Markham

Sep 4, 2014


In 1994, the Western Journal of Communication produced a special issue (volume 58, number 1) called “The dialogue of evidence: A topic revisited.” It revisited an earlier special issue of the same journal (volume 41, issue 1, 1977, published under its former title, the Western Journal of Speech Communication). The 1994 special issue asks the question:

What criteria should be used to judge the admissibility of evidence and how does the choice of methodology limit and enhance that which we consider to be evidence?

Both issues provide an excellent cross section of thinking about what counts as evidence in scientific/scholarly inquiry.  To give you a taste of this topic, I’ve summarized the contributions in the 1994 issue, with the addition of minor gratuitous editorial commentary, because I couldn’t help myself.

Walter Fisher, Genesis of the conversation.  pp. 3-4.

Fisher, the editor of the original 1977 issue on evidence, writes that investigating methodological differences among scholars “convinced me that they derive from philosophical commitments, not just procedures or statistical preferences” (p. 3).  A key difference, he adds, “is their explicit or implicit conceptions of what counts as data” (p. 4).

Leonard D. Hawes, Revisiting Reflexivity.  pp. 5-10.

Hawes’ discussion of evidence is less concerned with admissibility than with accessibility and accountability.  Hawes asks, what is valid as knowledge? What “counts” as knowledge?  “Admissibility,” he says, presupposes “a singular body of knowledge . . . that is representational and objective in structure and function” (p. 6).  To be admissible, then, evidence must fall within prescribed categories of right and wrong, adequate and inadequate.  Knowledge, however, is “mediated through and by others,” and being self-reflexive means “becoming other-wise, coming to know that selves, as subjectivities and agencies, are identities only in relation to other” (p. 7).  Hawes contends that evidence must be dialogic, or at least conversational (and he seems to mean this in the real sense of moving toward interviews-as-conversation).  Once knowledge, evidence, and the research project are understood as relational, researchers are required to implicate their own subjectivities in their work.

Hawes also argues for a move away from the goal of finding grand theories and toward politicizing everyday practice and experience (p. 9).  The task, he concludes, “is to multiply particular accounts and knowledges, not to unify an abstract and supposedly universal Knowledge” (p. 10).  This article is very insightful and provocative, and easier to read than his book.

Charles Berger, Evidence?  For What?  pp. 11-19.

Debates over method polarities (such as qualitative/quantitative, self-report/observation, etc.) have “done little to advance the study of human communication because they have taken place within a substantive theoretical vacuum” (p. 11).  Numerous “exploratory” studies collect observations and build categories without any reference to explicit theoretical frameworks.  This is problematic, and simply a “result of the lack of theoretical development in the field” (p. 12).  Observations are theory-laden; categories are theory-driven.  In the absence of theoretical grounding, the meanings of operations, findings, and conclusions are ambiguous and difficult for colleagues to scrutinize, critique, or replicate, all of which are assumed to be goals of communication science (pp. 12-13).  From these assumptions, Berger advances a simple argument: whatever evidence may be, there can be “No evidence without explicit theory” (p. 13, emphasis in original).

Berger’s assumptions, arguments, and conclusions are very distinct from Hawes’s.  Berger justifies explicit theory, theory testing, replication, and so on, on the grounds that other members of the research community must be able to understand and critique the research.  While Hawes would agree that good arguments need good links, he would argue for including people outside our community of scholars as judges of good research.

A. Cheree Carlson, How One Uses Evidence Determines Its Value.  pp. 20-24.

Carlson addresses the issue of evidence from an argumentation or persuasion perspective, focusing mostly on presentational concerns, which most methods of criticism share.  The author asks:  “How do we, as critics, use ‘evidence’ to persuade other critics that our conclusions have merit?  What evidence do we look for?” (p. 21).  Carlson suggests three criteria for judging good arguments.  1) Is the data pertinent to one’s critical perspective?  Researchers must understand their own perspective to use evidence competently.  2) Is the evidence selected relevant to the claim?  3) Has the critic done a credible job of persuading the audience that the best available data has been utilized?

All research is persuasion and politics.  Therefore, “it is up to the critic to direct the reader’s experience of the text” (p. 22) through ethos, as well as through logical reasoning and sufficient evidence to support claims.  Carlson reminds us that editors, journals, and readers often judge evidence by traditional or mainstream (white, middle-class, status quo) standards, a practice that needs to be scrutinized and critiqued.  This article is very straightforward and argues that good research is nothing more than a good argument.  I like it.

Barbara J. Wilson,  A Challenge to Communication Empiricists:  Let’s Be More Forthcoming About What We Do.  pp. 25-31.

Wilson points out that social scientists rarely acknowledge their positions or assumptions in writing.  Her challenge is for empiricists to be open and direct about their work (p. 30).  She offers three approaches to judging the admissibility of evidence.  1) Researchers can endorse their own perspective by critiquing competing models.  This debate is useful in some ways, but can often sidestep the weaknesses of the paradigm being used.  2) Researchers can use across-the-discipline standards.  While this may also be useful, it “thwarts the possibility of appraising one’s own epistemology” (p. 26).  3) The third approach, which Wilson advocates, is to critically appraise one’s own evidence, position, and procedures.

Wilson offers five additional criteria to guide critical self-appraisal for empiricists:  a) Evidence should be consistent with one’s chosen epistemology or perspective.  b) Evidence should be observable.  c) Evidence should be gathered through systematic procedures.  d) Evidence should be shared and public (and written for different audiences to encourage dialogue).  e) Evidence should be compelling (not simply well collected and statistically significant).

Although she is speaking empiricist language (observable, reliable, valid, etc.), Wilson acknowledges that evidence collection and presentation are not neutral but the result of paradigm-dependent choices made by researchers.  I like the reflexivity of this article.

Kristine L. Fitch, Criteria for Evidence in Qualitative Research.  pp. 32-38.

Qualitative research is “that which examines the qualities (attributes, characteristics, properties) of communication phenomena” (p. 32).  Given this definition, Fitch confines her discussion to “ethnographic” inquiry and presents criteria by which evidence might be judged “rightfully convincing, as opposed to simply compelling or indisputable” (p. 33).  The criteria are based on a balance between richness (her example is postmodern research) and precision (her example is conversation analysis).  First, for qualitative data to be admissible as evidence, five conditions must be met:  a) The researcher must be closely connected to that which is being studied.  b) The researcher should strike a balance between participant and observer.  c) Claims should be saturated in data (evidence cannot be reduced to a single incident, for example).  d) Data should, as much as possible, be publicly accessible (a transcript of conversation is better than field notes, for example).  e) Data and analysis should balance concrete phenomena with interpretation (which assumes that concrete phenomena are simply there to be observed, not interpreted by researchers).  Second, for qualitative studies to count as evidence:  a) Findings should be situated and emergent, not assumed a priori.  b) Findings should be translatable to other studies, ensuring that these studies are connected to the larger, progressive research community.  With awareness of and limits on subjectivity, plus a balance of richness and precision, qualitative studies can be treated as evidence for claims about social life.

Karen A. Foss & Sonja K. Foss, Personal Experience as Evidence in Feminist Scholarship.  pp. 39-43.

Two tenets of feminist scholarship have led to the use of personal accounts as data:  1) “Women’s perceptions, meanings and experiences are taken seriously and valued” (p. 39); and 2) “the information gathered about women’s perceptions, meanings, and experiences cannot be understood within constructs and theories that were developed without a consideration of women’s perspectives” (p. 39).  Personal experience (defined as “the consciousness that emerges from personal participation in events” (p. 39)) usually takes the form of personal narratives, feelings, and interpretation.  Because of feminists’ unwillingness to make judgments about the nature of someone’s experience, the issue of what criteria to use to judge the admissibility of evidence is irrelevant (p. 39).  However, feminist scholars must still face several issues.  If feminists proclaim not to judge personal accounts, how do they justify their selection, interpretation, and presentation of particular accounts over others?  Moreover, how do they justify their interpretations at all?  Foss and Foss argue that the researcher’s expertise is presentational, rather than experiential.  “As presentational experts, researchers play a role very much like a midwife, coaching and assisting others to give voice to their experience” (p. 40).  With continual monitoring and communication with participants, feminist scholars hope to produce a joint construction “of the participant’s experiences and the interpretations and researchers’ presentational expertise” (p. 41).  While I like the midwife metaphor, I think it is a cop-out.  Once we listen to and participate in personal accounts, we become part of them; researchers cannot simply be conduits through which women’s feelings pass.

Foss and Foss state that this form of research produces many benefits.  It provides for multiple truths and multiple spectator positions.  It also produces better understanding and caring through the recounting of personal experience.  This type of research also empowers women and “contributes to the improvement of participants’ lives by encouraging them to discover their own truths” (p. 42).

I like most of the points of this essay.  Yet throughout, I think Foss and Foss assume a tenuous position.  They want feminist scholars to be merely conduits through which women’s personal experiences can pass (p. 40).  This role requires objectivity, passive listening, and separation of researcher from researched.  How is this possible, when the whole enterprise is geared toward experiencing and interpreting others’ experiences?  It also contradicts the authors’ position that participants are the experts on their own experiences, and places the researcher in the privileged position of understanding and empowering the women whose experiences are being interpreted.  Another impossible position arises when the “presentational expert” becomes a “passionate knower” (p. 42) of participants’ experiences and also helps to improve participants’ lives.  This dual role, both experience-near and experience-distant (Geertz’s terms), would be difficult, if not impossible, to play.

Phillip K. Tompkins, Principles of Rigor for Assessing Evidence in “Qualitative” Communication Research.  pp. 44-50.

The answer to the question of what counts as evidence, according to Tompkins, is “words.  There is no other kind of evidence in ‘qualitative research’:  There is only Logos” (p. 44).  Tompkins divides the world of communication research into quantitative and textual and presents four “principles of rigor which should be used to judge the admissibility of textual evidence” (p. 50):  1) Editors ought to make writers assume the burden of proof in establishing the representativeness of the evidence presented;  2) Editors ought to examine textual evidence (when publicly available) for consistency and representativeness;  3) Editors ought to demand verification of textual evidence when it is not publicly accessible;  and 4) “Editors ought to require the researcher to report how the subjects studied responded to a complete ‘playback’ of the results and conclusions of the research” (p. 50).  These criteria are the result of his “deep reservations” about the quality of much contemporary “qualitative” research, which seems to neglect the rigorous traditions and standards of textual analyses in favor of “anything goes” (p. 46).

Tompkins is definitely arguing for generalizable standards for textual analyses.  I don’t like his attitude; he seems to doubt the voluntary integrity of researchers, implying that much current research is not conducted or evaluated in a rigorous manner.  I agree there is poor (and poorly supported) research out there, but not all research can be subjected to the narrow range of standards he demands.  With Tompkins’s criteria, our field would not expand its boundaries very much.

Wayne A. Beach, Relevance and Consequentiality.  pp. 51-57.

This article departs from the others in a couple of ways:  whereas the first essays were clearly and simply written, with active voice and first-person accounts, Beach’s essay is dense, requiring close and careful reading of passive sentences constructed, for the most part, without an apparent author.  Beach seems to reject the question as a waste of time:  “However fruitful for revealing contrasting positions and reflexive insights about the research process . . . it will ultimately yield severely limited knowledge about the social world and its constituent features” (p. 53).  Rather than spending our time theorizing about the topic of evidence, we should be engaging in the inspection and analysis of actual data.  To accomplish actual progress, we should turn to conversation analysis as an example of the type of research that directly observes interactants and the organization of their realities through talk.

This is a very confusing meta-essay, which returns to conversation analysis as the center of all research methods and goals.  Beach, in his charming and convoluted way, seems to take this opportunity to publicly pay homage to the gods of ethnomethodology, Sacks and Schegloff.

Jo Liska & Gary Cronkhite, On the Death, Dismemberment, or Disestablishment of the Dominant Paradigms.  pp. 58-65.

In their review of the 1994 special issue, these authors argue for “good fit.”  In fact, all of the contributors, from a range of perspectives (although the authors make an excellent point that only a few perspectives are represented in this issue) and in various ways, argue for “good fit.”  Liska and Cronkhite review some of these arguments and then briefly discuss four evolutionary movements in the communication research discipline:  1) Researchers are more open to a variety of perspectives; 2) Research is more focused on “messages”; 3) Research is more concerned with messages in context; and 4) Researchers seem more willing to state their epistemological assumptions explicitly.

Thomas M. Scheidel, On Evidence.  pp. 66-71.

In response to both the 1977 and 1994 issues, Scheidel makes the following claims:  1) While any evidence from any method is admissible, the evaluation of the merit of that evidence is a clear topic for discussion (p. 67).  2) Berger’s notion of explicit substantive theoretical frameworks needs serious consideration.  3) As readers of research, we rely heavily on ethos; we should therefore pay careful attention to historical perspectives and principles of persuasion in establishing, defending, and evaluating claims (p. 69).  4) We need more real face-to-face dialogue among researchers, who tend to talk at each other through their writings rather than with each other dialectically (pp. 69-70).  Finally, Scheidel notes that regardless of the debates researchers may have over what counts as evidence, the proof is in the product.  And the product is how communication scholars demonstrate their value to each other and, more importantly, to the rest of the world.