Volume 12, No. 1 – January 2011

Computer Technology and Qualitative Research: A Rendezvous Between Exactness and Ambiguity

Carmen Schuhmann

Attending a conference centered on one big experiment reminded me of times long past, when I was a student of physics, fascinated by physical research into the nature of reality. The high-tech measuring instruments my fellow students and I used in our physics experiments seemed to guarantee unambiguous, transparent results. Nothing could have been further from the truth. Measurement results never seemed to be quite what we had expected. They were not the glorious end-points of a research process, giving transparent descriptions of reality, but the starting-points of painstaking processes of interpretation, in which we tried to make sense of our results in terms of unexpected influences that had interfered with the experiment. These influences might be anything: external factors, the measuring instruments themselves, or we, the researchers, who turned out to be just fallible human beings. [1]

So, twenty years after abandoning physics and its experiments, here I sat, listening to the results of an innovative experiment, the first of its kind, concerning the interference of the exact world of computer technology with the domain of qualitative research—a domain I had only just begun to explore actively myself. I was genuinely curious about what the meeting of these apparently so different domains might produce. I wondered how software could be applied to data in order to improve a qualitative analysis of those data. To me, software was associated with exact and unambiguous operations rather than with freedom of interpretation. So how might software be helpful in the creative interpretation of data? Would it not force a researcher to look at data from one specific angle, ruling out other points of view beforehand? If one thing became clear to me during this conference, it was that the use of software does not replace the interpretative activity of researchers. On the contrary, it may even add a layer of interpretation to qualitative analysis, as one has to know how to "read" a software package. Different packages use different terminologies, and it was stressed several times during the conference that it takes time to develop a "literacy" for different packages. This is a comforting thought—and yes: in the course of the conference the "codes" and "nodes," the "notional families" and "clips" began to sound more familiar and meaningful to me than at the start, when I felt on the verge of getting lost in the specialized vocabulary. [2]

As for the qualitative analysis of data itself, all speakers participating in the experiment unanimously emphasized that the use of software has the potential to open up new perspectives on a given set of data rather than ruling perspectives out. This seemed related to another issue that was emphasized throughout the conference: these software packages are "just" supportive tools for doing qualitative research; they are not a methodology in themselves. But this does not mean that any tool would do; the choice of a software package clearly does influence the research process. Even at the level of choosing a sample from the enormous amount of data provided for the experiment, it turned out that the choices made by different developers were at least partly determined by the kind of data their software package was "good at." Of course, the aim of this conference was to investigate these differences: to find out to what extent the application of different software packages to the same dataset, with the same research question, would produce different results. To me, the similarities were eventually more striking than the differences. As a newcomer to this field, I gained insight into the advantages of using a software package in qualitative research—any of the software packages presented at the conference—rather than insight into which software package to choose for a specific set of data and a specific research question. Transparency and cooperation were the two key concepts here. They appeared throughout the conference, independent of the specific software package being discussed. Software packages seem to facilitate greater transparency of research processes by making it possible to keep track of the different stages of these processes in detail. And they allow researchers in different parts of the world to work on one set of data simultaneously, which makes cooperation much easier. [3]

So at the end of the conference, I had an answer to my question of how software might improve qualitative research. But meanwhile, many new questions had popped up. Again I was reminded of my experience as a student of physics, where answering one question generally meant raising a couple of new ones, and where one experiment would almost inevitably generate new experiments. So when I left this conference, I was not just convinced of the importance of using software in qualitative research, but also quite certain that this had been the first but definitely not the last experiment of its kind, concerning that fascinating area where computer technology and qualitative research meet. [4]


Schuhmann, Carmen (2011). Comment: Computer Technology and Qualitative Research: A Rendezvous Between Exactness and Ambiguity. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 12(1), http://nbn-resolving.de/urn:nbn:de:0114-fqs1101C27.

Copyright (c) 2011 Carmen Schuhmann

This work is licensed under a Creative Commons Attribution 4.0 International License.