On bad peer review

It always hurts, but when my paper is demolished by a competent and fair critique, I can only thank the reviewers for the favour. Alas, over the last few months, I have had the misfortune of reading very different reviews. So, I'm angry, frustrated and disheartened, and in order to vent some of this frustration, I decided to write a post about some of the recent reviews of my work.

I have written before on how I review, drawing, obviously, not only upon my own ideas but also upon others' experiences of a common academic practice. And while my points can be nuanced and rounded at the edges, I still think they offer a rough guide to what competent and fair reviewing might look like. Alas, the reviews I have received of late have sinned against all those points. So, below I want to offer a number of examples of points which, in my view, should never have been made.

1. My first point is about a practice I find very annoying. Reviewers keep telling me how they would have done the study or written about it. And I really don't care. If you think your idea is better, do your own research and write about it. In a review, you're supposed to write about what I wrote, not about your wishes.

But it's even worse when the reviewer has no idea… In a recent review of my paper, based on unsolicited, ecological data, the reviewer decided that it would be much better to interview clinicians. My jaw dropped. The assumption that the interview is somehow the default, superior data offering insight into clinical and other practices is perhaps not very common, but it is, I'm sorry to say, stupid.

There is much to be said about the problems with the semi-structured interview, but let me tell you the most important one (in my view). The interview first and foremost reflects my interests! The interview is not about letting people talk; it is about letting people talk about things that I am interested in. Moreover, what interviewees say is subject to normal social constraints, and interviewees are not open books from which to harvest data to our hearts' delight. And these points are not exactly earth-shattering, are they?

2. When I review, I make a point of understanding the assumptions made in the research and the article, and of offering critique within those assumptions. I'm always surprised when this simple and, in my opinion, obvious perspective is not shared by other reviewers.

And so, I describe my work as pertaining to Critical Discourse Studies, which is a new label for what has been known as Critical Discourse Analysis. Although this is a field with some (considerable) variation, it is quite distinct from, say, Foucauldian discourse analysis (FDA), interpretive phenomenological analysis (IPA), conversation analysis (CA), various incarnations of thematic analysis (TA) and a zillion versions of them.

In reviews, these methodological perspectives get mixed up, and one is criticised from the standpoint of another. My 'favourite' recent critique was to get pummelled for not acknowledging Foucault and his discourse analysis. The reviewer criticised me for not specifying how I was using Foucault, except… I wasn't using Foucault at all…

3. This mixing of perspectives is particularly important in the case of coding, which I described in some detail in a recent post. Unfortunately, because of the current coding fashion in qualitative research, coding is assumed to follow IPA/TA groove(s) rather than discourse-analytic way(s). So, when I am asked to provide the interrater reliability for my coding, I can only despair. The request doesn't apply to the kind of data processing that I do, but I keep getting asked… The editors seem none the wiser and pass on the wisdom of the 'experts', who seem not to understand that there isn't one and only one wonderful way to code data.

4. I think it's time to provide some quotes. Let me start with this one:

The passages that are examples of places where participants point to a certain part of the self communicating decisions about suicide to another part of the self are quite interesting. However, it would be helpful to discuss whether these passages represent a linguistic phenomenon or psychopathology.

Now, if you're wondering what exactly the reviewer means, you're not on your own. How can a passage represent psychopathology, for pity's sake? What can that possibly mean? Does the reviewer really think there is a simple, unidirectional relationship between psychopathology and language? I'm sorry, but such an assumption is beyond stupid. And yet, I hasten to add, the paper was rejected on the basis of this review.

Here is another quote from a review which nuked a paper of mine:

The method of analyzing is not critical discourse analysis, rather it is just discourse analysis.

Yes, reviewers who have no idea but still feel they need to make a point, losing a golden opportunity to shut up. To say that the paper is 'just discourse analysis' suggests that the reviewer really has no idea about discourse analysis. Discourse analysis, which includes the various forms I mentioned above and a number of others, is a huge field of inquiry. To say that a paper is anchored in 'discourse analysis' is like saying that it focuses on language. Better still, number-crunchers should take this advice and describe their method as just 'statistics'.

Another reviewer who decided that my paper was unworthy writes:

The analysis itself seems logical, but it is difficult to confirm the soundness of the scholarship because the author/s have not provided systematic study identifiers for the excerpts from the interviews.

What does that mean? What kind of 'systematic study identifiers' does the reviewer have in mind? Wait, I know: I should create a little table saying that phrase A, which was used 24 times in the interviews, means X; phrase B, which was used 15 times, means Y… I keep wondering how many people reviewing for health-related journals have heard of such an amazingly novel idea as… context? You know, like the same phrase can mean different things at different times.

There is a more important issue here, though. You see, I can review quantitative papers. I don't, because I respect the scholars who write them: my review would be superficial and very unlikely to offer any interesting methodological insights. What I cannot understand is why quantitative researchers cannot return the favour. Or at least understand that we have different ways of writing, so judging my texts by the way you write yours makes no sense and is unfair.

I'll end with a story. Some time ago, I submitted a paper to a journal (I actually can't remember which). The editor wrote to me that the reviews were all consistent in their negative assessment and that on that basis the paper was rejected. When I read the reviews, I thought I was dreaming. You could not find reviews that were more contradictory. If one reviewer liked point A in the paper, another thought the point contributed to the downfall of the social sciences. If one decided that by making point B I excluded myself from civilised academic inquiry, another proposed that B warranted putting up a monument in my honour. And so on, and so forth. Yes, all the reviews were negative, but to say they were consistent could only mean the editor didn't even look at them.

That was the first time I wrote to a journal editor. I think he was quite embarrassed, as he offered extended apologies and also offered to put the paper through another review process. I declined; the paper was already with another journal. It was also the first time I wondered about the role of the journal editor. I remain quite certain that it is a role that involves making sure you read and, yes, understand what the reviewers say. I also think that's not too much to ask.

A couple of days ago, I wrote another such letter. It was spurred by one of the most incompetent (or unfair, if you prefer) reviews I have received in my academic career. The response I got was not encouraging – apparently, I was making personal comments. I suspect I shall never publish in that journal. I also wonder whether I should write more such letters, and whether 'we' should.

 
