Review My Review

(If you have arrived here first and are searching for the data table, see instead here.)

Thanks to all the editors concerned for contributing data for their titles to the commons. 

Titles in political science and related fields which did not provide (or pledge) information following two or more contacts are: Armed Forces & Society; Aust. J. Pol. Sci.; Brit. Pol.; Bus. & Pol.; Critical Policy Stud.; Critical Soc. Pol.; Cultural Pol.; Democratization; East Europ. Pol. & Soc.; Ephemera; Europ. Urban & Reg. Stud.; Europe-Asia Studies; Evidence & Policy; German Pol.; Global Envt. Pol.; Intl. J. Pub. Admin.; Intl. Politik; Intl. Pol.; Intl. Theory; J. Civil Soc.; J. Common Market Stud.; J. Contemp. Europ. Res.; J. Europ. Soc. Policy; J. Strat. Stud.; Local Govt. Stud.; Mediterranean Stud.; New Polit. Sci.; Policy & Pol.; Polit. Anal.; Polit. Sci. Quart.; Pol. & Soc.; Politikon; Polity; Post-Soviet Affairs; Reg. Stud.; Rev. African Polit. Econ.; Scot. J. Polit. Econ.; Survival; Theory, Culture & Soc.; Urban Affairs Review; West Europ. Pol. Only 6 of these are US-based titles.

A third category of replies indicated that the data were not readily available, accompanied by varying degrees of intent to respond or to consider doing so at a later stage; these included European Political Science; History of Political Thought; Journal of Global Governance; Journal of Latin American Studies; International Affairs; Millennium; Parliamentary Affairs; and PS: Political Science and Politics. Two journals added that a new editorial team had just taken over (Journal of Public Policy; Regional & Federal Studies).

Readers will welcome data on any missing titles; editorial management and peer review software systems provide this information to editorial teams and publishers. Alternatively, readers who recall a recent editorial commentary or similar source containing such data for any title can help fill in the missing information.

More about this site

This is an information tool for authors of manuscripts written for academic peer-reviewed journals in political science and related fields, as well as a resource for editors, publishers, reviewers and learned societies. Beyond this page, the link below offers an opportunity to debate the review process, or to share a submission and review experience responsibly on this moderated site. Results from surveys will also be posted in due course.

The growth in the number of trained social science researchers, together with research performance assessments, has increased the significance of the peer review process as well as the varied politics which can lurk behind it. The essential components of the peer review system have remained largely unchanged for decades: reviewers are anonymous and volunteer journal editors have substantial executive powers. As social scientists we know the problems such circumstances can cause, yet there are few opportunities to debate the issues. The impacts on medicine and the natural sciences are well aired in popular coverage (see also the comments page), but there is little public discussion about the social sciences.

The publishing process has always involved substantial information asymmetries between editors and authors. Editors now have an opportunity to share more information with authors easily via the editorial management software systems (such as Scholar One/Manuscript Central) which most journals now use. These could include progress updates enabling authors to track a manuscript through each stage of the review process, but the systems are almost never used to inform authors in this way. These systems can also provide information about acceptance rates and turnaround times.

…and the data: publishers need to develop common standards

Political scientists are well versed in the politics which can lie behind any data collection exercise and its presentation. Respondents will be skewed towards those with reasonable figures to report, and data can be collected and presented in various ways to best serve perceived interests. Beyond this, an industry standard is required for full comparability. In its absence, data requests in this first round of collection have been standardised to common parameters, such as external review; for instance, some titles – mainly US-based – have extensive ‘screening’ practices of internal review which restrict the number of manuscripts going for external review.

Editors will be invited at a later stage to provide data covering different types of average, ranges of variation in turnaround time data, different categories of acceptance response, ‘screening’ times (before a manuscript is sent for external review) and rejection rates during screening. However, highly tailored requests for data calculations will reduce response rates and limit the extent of comparability. Ultimately, an industry-funded solution involving a third-party agency is required to capture data of this sophistication together with the means for independent verification. Until a comprehensive industry standard emerges among publishing houses, private initiatives such as this are the only way to provide authors with any information. In any event, an indicative ‘mean average’ for final acceptance rates as well as turnaround times for manuscripts sent for external review is the data which most editors currently have readily available through editorial management software systems, and using this information is the best place from which to establish the process.
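
To illustrate – as a minimal sketch only, using an invented log of manuscripts with hypothetical dates and outcomes rather than anything from the data table – the two indicative figures referred to here can be derived along the following lines in Python:

    from datetime import date

    # Hypothetical records for manuscripts sent for external review:
    # (submission date, first-decision date, final outcome).
    manuscripts = [
        (date(2012, 1, 10), date(2012, 3, 15), "accept"),
        (date(2012, 2, 1),  date(2012, 4, 20), "reject"),
        (date(2012, 2, 14), date(2012, 5, 30), "reject"),
        (date(2012, 3, 3),  date(2012, 4, 28), "accept"),
    ]

    # Mean turnaround: average days from submission to first decision.
    days = [(decided - submitted).days for submitted, decided, _ in manuscripts]
    mean_turnaround = sum(days) / len(days)

    # Acceptance rate among externally reviewed manuscripts only; a
    # whole-journal rate would also count manuscripts rejected at screening.
    accepted = sum(1 for _, _, outcome in manuscripts if outcome == "accept")
    acceptance_rate = accepted / len(manuscripts)

    print(f"Mean turnaround: {mean_turnaround:.0f} days")
    print(f"Acceptance rate: {acceptance_rate:.0%}")

The point is not the arithmetic but that both figures can be read straight from records which editorial management systems already hold, without any bespoke data collection.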

The territorial base of the title (US or otherwise – mostly derived from the Thomson ISI list linked in the Comments section) also seems to influence the data reported. Until now, the available public domain data has mainly been provided by US-based journals attached to professional societies, leading to greater familiarity with public reporting (commentators attribute this to claims that some US titles disproportionately favoured quantitative approaches, resulting in pressure upon titles linked to membership societies to release data). There has been much less of a tradition of public domain reporting from titles based in other territories. For editors based elsewhere – mostly Europe – this has been a first opportunity to share the data widely, and for a few even the very first moment to collect and scrutinise it. Some publishing houses, such as Taylor & Francis, were defensive, advising journal editors not to provide the information to this site. Typically, titles based in Europe have fewer administrative resources with which to collect data than do US-based titles attached to membership societies. Once known, sharing the information has been a challenge for some, and remains so for a few; titles based outside the US account for a disproportionate number of the non-responses, while US-based titles (around half of the total) account for only 10% of non-responding titles.

We’re all authors

There are frequent reports of long waits until a decision is notified; I recently waited almost 300 days for a single review from European Urban and Regional Studies, which arrived only after I escalated an enquiry to the editor following a non-response from the journal’s administrator. The mean average reported by editors for articles sent for review is a little over two months; editors say unacceptable times arise in a small number of inevitable outlier cases caused by reviewer delay. Yet survey data collected in the summer of 2012 highlight a discrepancy between the reports of editors and reviewers and the experiences of authors. Around one-fifth of authors (n=500) report an average turnaround time over the last two years in excess of 110 days. See the ‘Survey Results’ link at the top left of the page for more.

A common perspective among authors is that editors could do more to exercise judgement about individual reviews in reaching a decision on whether to publish, particularly ‘outlier’ reviews betraying a lack of professionalism. Journals with low acceptance rates often apply a crude rule of rejecting any article which attracts a single negative review, leading to unfair outcomes and the possibility of work passing its sell-by date or never reaching the public domain. There is scope for greater involvement of editorial board members in scrutinising outlier reviews, so as to share the editors’ workload.

A reasonable premise is that the confidence which stakeholders have in a journal is related to the amount of public information it is willing to provide. At present, Palgrave and Elsevier are the only publishers which identify on their journal web pages the names and contact details of the publishing managers responsible for different titles. In medicine, The Lancet provides a model of good publishing practice by having an Ombudsman empowered to investigate complaints of administrative malpractice. There are ongoing examples on these pages, such as turnaround times exceeding a year. Some editors have reported cases of their own articles on which a decision was never made. There are reports of decisions taking longer to communicate to authors than the time spent in external review. In such cases, independent third-party oversight can prevent excess. The lack of response from any publisher with a stable of social science titles when this suggestion was put to them directly illustrates the need for such a measure; given a reasonable grievance and a wall of silence, what redress is possible?

I have experience on all sides, as a journal editor, author and reviewer, and undertake this endeavour on my own initiative and from my own resources. On this site you will find reports of cases where arrangements work well and where they don’t, as well as suggestions for improvements.

Justin Greenwood, Professor of European Public Policy (info@reviewmyreview.eu)

Continue to Post or Read Reviews >>

41 thoughts on “Review My Review”

  1. I would love to see information about Eur Pol Sci Review, J. of Contemp Eur Studies, and European Political Science.

    • Me too! And doubtless many others. I invited JCES but had no response; EPS was also invited but the information was not forthcoming. I will include EPSR in the next round of invitations.

  2. You might want to include the following journals in the survey (or a future version of it):
    Civil War
    Contemporary Security Policy
    Cooperation & Conflict
    Global Policy
    Global Society
    Journal of International Relations and Development
    Journal of Intervention and Statebuilding

  3. My last manuscript submission, to European Urban and Regional Studies, took almost 300 days until first decision. When I asked the managing editor about progress I got no response, so I escalated it to the journal editor; shortly thereafter I received a single, short review which contained a link to the department where the managing editor is based. Apparently there had been difficulty in recruiting reviewers, so I sought information about the dates when invitations had been issued, but when I then asked about the unaccounted-for periods of many months I received no response.

    The title has never responded to requests for information about turnaround time and acceptance rates.

  4. Anything on Journal of Conflict Resolution, International Security, or Security Studies (referenced as International Relations journals by Thomson Reuters)?

  5. Can anyone offer a summary of principles established in case law where an author has brought an action against a publisher or their agents over issues arising in the review of a manuscript, or an action seeking the right to do so? Given the scale of career consequences now attached to getting published, the ease of access to litigation in the US, and the walls of silence which can arise when something goes wrong, coupled with the lack of any ombudsman system, it would be remarkable if no cases have arisen.

  6. I am the editor of the journal Politikon.

    The administrator of this site sent me an email containing the following sentence:

    “You will see also on the website that there is a much shorter list of journals which have not responded to information requests; without a response from you to my earlier requests it would now be a reasonable assumption that your preference is to appear in the latter list if I haven’t heard from you before the end of August.”

    I wish to publicly register my objection to any supposed “reasonable assumptions of [my] preference” as well as to the use of a blatant public shaming tactic to elicit a response regarding a research project which (1) provides no guarantee that proper ethics procedures have been applied, and (2) which cannot purport to represent valid or true data.

    Regards
    Pieter Fourie

    • It has been instructive to compare the responses coming in from US-based titles and from those based elsewhere. In the US it has been commonplace for some time for a number of titles to provide information on acceptance rates and turnaround times (on society and journal websites, in editorials, etc.). The wider constituency of US-based editors have generally had the two key items of data to hand, such that they usually supplied them by return. Typically, they have corresponded with courtesy, and not infrequently with encouragement. Only 4 of the non-responding titles are US-based.

      I do understand that it has generally been more of a challenge to share data on acceptance rates and turnaround times in territories where there is no historic tradition of publishing it, and for a few it even seems to have been a first opportunity to collect or calculate the data. The ratio of responses to non-responses from titles based outside the US is about the same (though there are now titles in the table which reversed a previous decision not to participate, as well as recent requests from established titles for inclusion). A further explanation for the US/elsewhere differences is that US titles attached to professional societies generally have memberships large enough to provide more journal secretariat resources with which to calculate data.

      Readers will want to be informed where there is no information at all on a title; the front page has more detail on data collection. Meantime, you would be very welcome to present the information. Similarly, any workable ideas for an independent data verification system are also welcome. Authors would simply like to be informed about their prospects of getting work published with different titles, and how long they are likely to wait for a decision. It is clear from feedback that many editors similarly find the availability of data for titles across the subject field a useful reference point.

      For the record, you were sent a first request in July and a reminder in mid-August, to which no replies were received. The other sections of the email message you refer to were:
      Pieter, you will see at http://www.reviewmyreview.eu that around 75 journals in political science have chosen to share information about acceptance rates and turnaround times (for manuscripts sent for external review);i’d be pleased to list data for your title also..Meantime, I remain available to you for any questions you might have. kind regards

      • In re. “…regarding a research project which (1) provides no guarantee that proper ethics procedures have been applied”: were this research being conducted at a US institution, under US IRB law no ethics (IRB) review would be necessary, as it does not entail the PI/researcher interacting with either animals or ‘human subjects’. I believe the same would hold for Canadian, UK, Australian, and New Zealand ethics review policies. In EU states, it would be a matter of protecting individuals’ privacy rights; but I’m not sure that would extend to the rights of journals and their practices, should such a corporate entity exist, leaving the question of whether data on publishing constitute individual editors’ “data”.

  7. Useful links:

    Social Science Citation Index for political science journals from Thomson Reuters

    Stephen Yoder & Brittany Bramlett in PS: Political Science & Politics, with data on acceptance rates and turnaround times from 2009 for 18 (mainly US-based) political science journals, as well as a transparency index for them.

    Bramlett & Yoder on Reviewing the Reviewers

    The (US) International Studies Association publishes annual reports for its 6 titles.

    The (UK) Political Studies Association has a guide to Publishing in Politics by the last editorial team of Politics

    The Scholar One manual for editors

    • In Reviewing the Reviewers, Brittany Bramlett & Stephen Yoder note that “every scholar will receive a rejection letter at some career point and wonder whether the decision came down solely on the merit of the submitted work”. They focus on reviewer characteristics in this article, noting that “female reviewers may also review manuscripts differently. They reject papers at a higher rate than male reviewers, which may be due to their more critical read or just a different set of standards by which they judge scholarly work (Borsuk et al. 2009; but see Caelleigh et al. n.d.).” They consider the issue of graduate students and assistant professors as reviewers, and note that “graduate students and assistant professors may be influenced by competition bias to pass out harsher critiques. Weller (2001) acknowledges that the competition for a scarce resource, page space in selective journals, may drive those most driven by the need to publish, scholars who must publish to gain employment or tenure, to pen more critical assessments in order to retain page space for their own work.”

      Any perspectives out there?

  8. “We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong”…attributed to Richard Horton, Editor of the Lancet, at Wikipedia

    Fair comment? What ideas are out there to improve it in the social sciences?

  9. I think this is a good initiative. I have had to wait over seven months for reviews to come back from British Politics, and I have heard some real nightmare stories about the length of time people can wait for publication once accepted. However, having acted regularly as a peer reviewer, it is quite clear to me that leading journals send out some papers for review when they shouldn’t because of their lack of quality. This means the editors have not taken the time to adequately assess what they are sending out for review, and this inevitably wastes reviewers’ time.

    We have very clear assessment criteria for undergraduate and postgraduate student work, but we often have no criteria for assessing journal articles. This is presumably because as academics we are all supposed to know innately what constitutes good quality, but that is rather curious given that we are assessing peers who also ought to know. I wonder if some of the stories about ‘bad’ peer reviews are the result of journals not being clear and rigorous enough in their expectations of what reviewers should focus on; this could lead to lapses.

  10. One of the worst turnarounds for papers is at the Journal of Common Market Studies – it once took 7 months to get back to me with its report. This journal also discriminates when it comes to word length and other issues – it makes exceptions for papers depending on the standing of the scholar who submits them. Another journal with long turnaround periods is the Cambridge Review of International Affairs – the students running the journal are well intentioned, but I think they often get ignored by reviewers when it comes to returning reports on time.

    • JCMS did not respond with information about acceptance rates and turnaround times, but in a recent commentary piece elsewhere the current editors express an aspiration that their turnaround times in 2012 will be no more than ten days worse than in 2011, which, if achieved, would place them just below average. When you posted the above message on the JISCMAIL discussion forum run by UACES (which part-owns JCMS), it was quickly removed from the record and a system of moderating messages introduced. The other UACES-owned Journal of Contemporary European Research has also not responded to the information survey.

      The Forum: A Journal of Applied Contemporary Politics tops the list for turnaround times, at 30 days. The European Journal of Political Research has reduced its times from 101 days to 42 in a little over two years since moving to its new editorial home. The editorial board of another responding journal used the data on this site at a recent meeting to initiate an action plan to reduce its own turnaround times.

  11. Cambridge Journal of Economics – 1 year and counting. My last online visit implied the manuscript was still awaiting allocation to referees.

    Disability and Society – about 3 months, relatively good comments.

    • Several requests were sent to the CJE seeking information about acceptance rates and turnaround times, without response. A note is on its way to publishers asking whether they would object to introducing a ‘release clause’ policy across their stable of titles: if no editorial decision is communicated within 100 days of submission, the author would be free to submit the article to another title while the review process continues. Watch this space…

  12. The stats are important, but so too is the testimonial side. In my case, the Policy Studies Journal was excellent on both counts. Submitted early January; revise-and-resubmit decision by late February. Three reviewers gave challenging but constructive comments. The editors provided clear guidance on how to proceed. Resubmitted by end of April; accepted after two reviews in mid-June. The editor took the time to praise the article. I know a ‘yes’ colours your experience, but mine was entirely positive and I would recommend it.

  13. What a good idea with huge potential! My longest wait for reviews was 7 months. Had I known this beforehand, I would have submitted elsewhere!

    I know all this requires resources, but it would really help if you organised the website so that one can easily find a particular journal. Best would be to list journals alphabetically and display reviews once a user clicks on a journal in the list. Furthermore, it would be great to standardise parts of the information (e.g. duration of the review process in weeks, number of reviews provided, helpfulness of the reviews graded from 1 to 10, etc.). If the standardised information could be aggregated for particular journals, it would be a great help in gaining a quick overview.

    • The Journal Citation Reports lists 2740 social science journals alone – and a decision has thus been taken to concentrate on the titles in and around political science, which is a more manageable constituency.

  14. I think this kind of initiative is a good idea, but one caveat I’d have is that calling for evaluation of journal response times might help reinforce the dominant status of the established journals published by commercial publishers, when it might be in the best interest of scholars, libraries and the public to support the Open Access model. Librarians have long warned about the unsustainability of a publishing industry whose business is to act as a middleman, distributing scholarly work provided and reviewed by scholars for ‘free’ at regularly increasing subscription prices (see for instance McGuigan & Russel, 2008: http://bit.ly/4Nfkka).

    Smaller and/or younger journals may need more time to find reviewers, may not have the means to set up integrated reviewing systems like Scholar One (although Open Journal Systems seems to offer some similar features), and may have to rely on volunteer desk work.

    This does not address the other issues raised here, but perhaps some flaws of anonymous peer review (such as it being subject to rivalry between authors and reviewers) would be partly resolved if Open Access became more widespread, allowing authors to consider broader possibilities for publishing than just the one or two top journals in their area. Well, I might be becoming utopian here…

  15. I’d like to share a very negative experience I had with Ephemera. After 3 months’ waiting I got back from the editor, by email, a “review” no longer than one small paragraph, by someone who (1) had clearly not read the paper beyond page 2, and (2) did not demonstrate knowledge of the subject; the editor did not await the second reviewer, who she admitted was very late and was not going to be chased up. Ephemera had published a few papers on my topic in previous years, hence I thought there would be knowledgeable reviewers available (the authors of those previously published papers).

    • A number of journals have editorial collective arrangements; one potential issue here is diffuse responsibility. It would be interesting to hear from any such title how this is addressed in practice. I heard a story last week of a fast turnaround an author had with Critical Social Policy, in which the individual received significant and useful feedback from a number of members of the editorial team, so it seems such arrangements can work well too.

  16. This is a really good idea – many thanks for setting up this site.

    The Journal of Poverty and Social Justice took 6 months to get back to me about my submitted article, and that included chasing them up repeatedly. Then I received a rejection with feedback that was not very helpful for revising the article (they did not reject it absolutely, unlike other journals perhaps). This was very disappointing, as it was the first peer-reviewed article I had submitted. It was also surprising, as the journal is quite topical and straddles policy and practice, so you would think that dealing with papers in a timely manner would benefit its readership. I didn’t resubmit the article.

    My experience with Work, Employment and Society (4*) was much better – although the first paper I submitted to them was rejected, they got back to me within a couple of weeks maximum with very constructive comments which helped me to revise the paper and submit it, with success, elsewhere.

    I have heard reports about journals being closed (publicly or otherwise) to new submissions because of the scramble for the REF. It would be useful to know whether this is actually the case – and which journals this applies to…

    • Thanks. The learned societies are not well placed to undertake the data collection task of this site because they invariably have publishing interests.

      On the question you raise, the ability to publish ‘early view’ articles online helps, but there are limits to what can be done where acceptance rates are down to around 5% without causing unacceptable publishing backlogs. Under the current extent of research assessment incentivisation, until outlet supply matches author demand some worthy material won’t make it into the public domain.

  17. I think it would be extremely useful to have comments on specific journals – I am sure I am not the only one who has heard stories about desk rejections after a 6 month waiting period, 3 reviewers in the first round and 3 different ones (with very different demands) in the second, or e-mails after four months which say sorry, I just can’t be bothered to chase the reviewers any more. Because all of these are about reputable and highly coveted journals (the above examples are Research Policy, Theory Culture and Society, and the British Journal of Sociology, respectively), this is information worth having when you decide who to submit to and you don’t want to lose months and months in the process. (After all, what editors will say about their lead time often seems to differ markedly from what reviewees tell you about actual practice at the same journal.) Of course I also recognise that it is most likely someone is motivated to write when they have a negative experience and want to vent their frustration, so it would also be necessary to point out particularly positive review experiences (e.g. quick turnaround, in-depth constructive comments). But with enough people contributing, there would at least be a good chance of avoiding journals which stand out as repeat offenders.

  18. Thank you for this initiative, very useful!

    A similar site for political science had been started in the mid-2000s, but has unfortunately died, it seems: http://www.politicalsciencejournals.blogspot.de/

    This site tried to invite editors to give information about screening rates, acceptance rates and return times. It would be very helpful to have access to this kind of information, preferably updated regularly.

    • ‘Screen-rate’ practice (i.e. screening manuscripts to check first that they are suitable to be sent out to reviewers) differs so markedly between journals that comparability is very difficult, and thus a data collection exercise separate from the one evident on the home page (covering acceptance rates and turnaround times) is in hand. The figures we have for the latter relate only to papers sent for review.

      It would be helpful if anyone who remembers the blogspot site (primarily run by US scholars) could comment on how its progress compared with this one.

  19. My last five journal submissions (not including special issues) over the period 2010-2012 took an average of around 6 months from submission to first-decision notification, with the worst case almost 300 days. Even in the one instance involving less than 3 months, the article spent about as long awaiting an editorial decision as it did in review. These are the issues which arose:

    In most of the problem cases the issue (or part of it) was the length of time taken to place the article with reviewers, caused by delays in screening and/or in recruiting reviewers. For the latter, the usual default setting on editorial management software systems gives reviewers one week to decide before alerts kick in. Here the answer is clearly to move on to another reviewer where there is a non-response, thus denying a reviewer who seeks to impose a delay purposefully the opportunity to do so.

    Other problems involved one slow reviewer (probably the same individual, as the submissions involved the same topic) who was reportedly chased 6 times by one journal, and broken deadline pledges. The Manuscript Central system profiles the review record history of individuals, allowing editors to avoid reviewers who habitually break deadlines badly. The difficulty comes when a reviewer purposely seeks to impose delays in a particular case. A solution is to take a decision on the first x reviews which come in, thus denying such cases the chance to influence the final publishing decision.

    The other way in which delays are imposed purposefully is when reviews recommend major revisions together with impossible tasks while stopping short of outright rejection; these are usually apparent in lengthy reviews containing line-by-line objections.
