How not to report qualitative research

Have you ever wondered how to report your qualitative research? Well, fear not, the COREQ checklist is only a click away. Follow COREQ and you will report all that is to be reported and more. Yes, this post is negative again; once more, it’s about how not to do things.

The COREQ questionnaire appeared in a recent Twitter exchange I took part in. It’s meant to be a checklist “of items that should be included in reports of qualitative research”. My initial reaction was negative. After some reflection, my thinking about it has not changed; in fact, it’s probably more negative. This post is an account of why. And let me say, please: if you use it, stop. Now.

Before I engage with the nitty-gritty, let me say that I reject the idea that qualitative research must necessarily involve interviews. The checklist declares itself to be for reports of qualitative research, but it also assumes that such a report must include reference to interviews. The researchers are assumed to be interviewers. Surprising as it might seem, there is plenty of qualitative research which actually doesn’t include interviewing. To assume that qualitative research necessarily involves interviewing is nonsense.

Yes, I do see that the checklist offers an opt-out, still I find it interesting that the default position is that qualitative research is about interviewing. In reality, it’s not.

Before I get to the more problematic questions, let me deal with some apparently easy ones. For example, the checklist assumes that the researcher should, in the manuscript, declare what their occupation was at the time of the study. Why?! How is that important or relevant? Incidentally, if I were to report it, what do I say? Academic, researcher, lecturer, linguist, philologist, psychologist, professor? Presumably, it makes a difference to the checklist authors, so I would put ‘professor’, so that it sounds really important.

The same applies to the question of experience/training of interviewers. I would imagine it is not unimportant for the data collection phase, but why put it in the manuscript? What’s the purpose of such a declaration? That the study is not really good enough, because the interviewers were not very experienced? Or that the study is a first-class miracle, because I am an experienced interviewer? Either is nonsense and, in my view, unnecessary to put in the article in the first place. If anything, rather than serving some strange notion of transparency, it only serves some (counter-)narcissistic purpose.

Finally, apparently I should also report how many people refused to take part and, bizarrely, the reasons for their refusal. Once again, I am not entirely certain why, but, crucially, how would I ever know the reasons for participants’ refusal? One of the basics of research ethics is that participants do not have to give me any reason for their refusal. They say no and that’s it! It may be surprising, but I actually don’t ask them why. I do care, but it’s not my place to ask such questions. But let’s imagine that they did give me their reasons. Even then, I believe it’s nobody’s business to learn about them, even if they’re grouped, anonymised etc. I’m not telling.

I could go on with reference to other things (e.g. I find the reference to ‘important characteristics’ of the sample problematic – important for whom?), but I really want to get to the issues in the checklist that I believe are much more serious.

I’ll proceed as the issues appear, starting with the section “Data collection”. The authors ask the following two questions:

  • “Was data saturation discussed?”
  • “Were transcripts returned to participants for comment and/or correction?”

And I wonder who I am supposed to discuss data saturation with. With the participants? With ‘my team’? In the former case, do I provide a quick crash course in qualitative research for my 70+-year-old informants on the Polish-Ukrainian border, with no or minimal formal education, not going beyond basic literacy? More importantly, what if I think that the whole thing about ‘data saturation’ is overrated and, to be really controversial, an attempt to make qualitative research appear like number crunching? I really do think that. Do I still discuss data saturation? And, again, I reject the idea that there is one kind of qualitative research in which we all discuss it.

The question about returning transcripts to participants is even more problematic. First, if, heaven forbid, I were to turn Conversation Analytic and use their very complex transcription notation, do I, again, provide training? But even more problematically, what exactly are the participants to correct? What they said? How would that work?!

So, let’s just imagine that I return the transcript, with minimal notation, and the participant says: “Oh, I didn’t say that, did I?”. “But I have you on tape”, I say. “Oh, then, can you change this, please? I can’t have the word ‘shit’ in an interview, can I?”, the participant says.

Do the authors of the checklist seriously entertain the idea that I will simply delete ‘shit’ from the transcript? I mean, really?? I fully and willingly accept the right of every single participant to withdraw their consent for using their interview in my study. And, indeed, over the years, I have deleted (too many) wonderful interviews because, after the interview, the informant simply said, usually very apologetically, that they felt too much had been said, or some such thing. It pained me, but without hesitation I deleted their interviews. I’m sorry, but the ‘shit’ stays in the interview. The right to withdraw? Gladly. The right to correct? No.

To be honest, I wish I could say I find it difficult to understand the rationale behind the correction imperative. I don’t. It constructs interviews as having only content, much like media interviews which are authorised by politicians. I’m actually not particularly sorry to say that it’s nonsense again.

Just to flag it up: there is a different issue here. What if our participants disagree with our interpretations? I think it’s a very serious issue and I’ve been thinking about writing about it. But it’s different from correcting transcripts. Interestingly, the checklist gestures towards it when it asks about participant feedback on findings. Bloody hell, I thought, I studied ‘formal’ linguistics for 5 years at uni, then my PhD, then… How do I explain all this to the participants? And, please, don’t tell me I’m patronising. I’m not. Just as non-linguists will not learn linguistics in an hour or two, I will not learn to fix cars.

Let’s turn to data analysis now. Once again, it seems that for the checklist authors qualitative research is about coding, and I would suggest that they go out more. In fact, even thematic analysis, although increasingly popular (the reason why is probably not so complex, but I will not discuss it here), is really not the default of qualitative research. To be honest, it really irritates me that decades of discourse analysis are simply blanked because, apparently, without identifying themes I cannot bloody publish! Please!

But what is particularly interesting, I think, are three questions:

  • Was there consistency between the data presented and the findings?
  • Were major themes clearly presented in the findings?
  • Is there a description of diverse cases or discussion of minor themes?

I’m afraid I really don’t understand the first question. Are the authors asking me whether I actually presented what I found in the data? To be honest, I started wondering how I would write it in the paper. Would it be something like:

I would also like to point out that the discussion in the paper is consistent with the data I found. Despite temptation, on this occasion I decided not to cheat and my article is a very good representation of what I found in the data. I do promise!

As it happens, the remaining two questions are my favourites. Basically, I refuse to make the distinction between major and minor themes and, to be honest, there are so many problems with such a distinction that I cannot do them all justice. Incidentally, I intensely dislike the use of ‘major’ and ‘minor’, as, I think, they carry judgement.

So, let me ask two questions. First, for whom is a theme major or minor? For me, the researcher, or for the participants? I’ll up the ante: for all the participants, or only some? What if one disagrees? Second, what counts as a major/minor theme? Is it, for example, the one that occurs most frequently? But an affirmative answer raises more questions than it answers. Does ‘more frequently’ mean that you code it more frequently, that it occurs more frequently, or simply that it contains more words? There is more. Is the claim only about the dataset you have, or are you claiming that the ‘major theme’ is always major? These questions just keep popping up, and I so wish there were easy answers. Oh, please, I so don’t want to do the checklist.

Time for a conclusion. Qualitative research, for the authors of the checklist, consists of collecting a set of interviews which will be subjected to thematic analysis. This is a very narrow view of qualitative research. While I don’t want to extol the wonders, complexity, and variety of qualitative research, I really reject the idea that it can all be locked in the COREQ cage. It can’t.

No, I will not now write about the rich tapestry (I’ve always wanted to use this phrase – it sent shivers down my spine) of what we do. I happen not to agree with quite a lot of ‘storytellers’, to put it politely. But, please, let’s keep things in perspective. There are more things in qualitative research than are dreamt of in your checklist, authors.


  1. Caroline Struthers

    Hi there! Sorry for the delay in replying…I didn’t get a notification that you had posted again. Interesting about COREQ. I am glad SRQR is good. There seem to be very silly squabbles between reporting guideline developers which EQUATOR has to keep out of. That’s what I’ve been told anyway. I think the reason why I thought it was explicitly for interviews and focus groups was because that’s what it says in the title of the EQUATOR record. I think perhaps we were diplomatically trying to make having two reporting guidelines for qualitative research make sense – when it obviously doesn’t! I am really looking forward to reading your article about checklists in general…(I think)

  2. Caroline Struthers

    Hi Dariusz. I am sure I would feel your pain if I were a qualitative researcher…and I am quite glad I’m not. I really enjoyed this blog which a colleague directed me to. Did you know about SRQR which is designed to help with reporting all qualitative research? As I understood it, although it came first, COREQ was only designed for research involving interviews and focus groups – and was not implying that all qualitative research should involve interviews and focus groups. I’m not an expert – although I do work for The EQUATOR Network and we promote the use of reporting guidelines in general, and try to guide researchers to the ones which will be most helpful to them. We identify them, collect them and curate them, but we unfortunately don’t have the funding to review and quality control them. If they’re published in a peer-reviewed journal, they go in our database. However, we have plans to use the authority people already assume we have, and start a project to sort the wheat from the chaff.

    1. Dariusz Galasinski

      Hi Caroline, thank you very much for your response. Of course, I accept what you say, yet the COREQ checklist doesn’t only refer to interview research; in fact, it very explicitly says it is a checklist for qualitative research. I’m afraid I don’t think that, as an addressee of such a checklist, I should wonder what the authors ‘really’ had in mind when they designed it. I firmly believe that whatever is written on the checklist should be taken at face value. Moreover, I am not entirely certain that COREQ is used as subtly as you suggest.

      Thank you very much for pointing me to the Equator network. Since I wrote the reporting post, I’ve been collecting thoughts for a more general one about checklists in general. I’ll send you a link, so that, if you wish, you can tell me where I’m going wrong.

      Incidentally, the article you offer a link to, which I know, is very good.

