(If you have arrived here first and are searching for the data table, see instead here.)
Thanks to all the editors concerned for contributing data for their titles to the commons.
Titles in political science and related fields which did not provide (or pledge) information following two or more contacts are: Armed Forces & Society; Aust. J. Pol. Sci.; Brit. Pol.; Bus. & Pol.; Critical Policy Stud.; Critical Soc. Pol.; Cultural Pol.; Democratization; East Europ. Pol. & Soc.; Ephemera; Europ. Urban & Reg. Stud.; Europe-Asia Studies; Evidence & Policy; German Pol.; Global Envt. Pol.; Intl. J. Pub. Admin.; Intl. Politik; Intl. Pol.; Intl. Theory; J. Civil Soc.; J. Common Market Stud.; J. Contemp. Europ. Res.; J. Europ. Pub. Policy; J. Europ. Soc. Policy; J. Strat. Stud.; Local Govt. Stud.; Mediterranean Stud.; New Polit. Sci.; Policy & Pol.; Polit. Anal.; Polit. Sci. Quart.; Pol. & Soc.; Politikon; Polity; Post-Soviet Affairs; Pub. Policy & Admin.; Reg. Stud.; Rev. African Polit. Econ.; Scot. J. Polit. Econ.; Survival; Theory, Culture & Soc.; Urban Affairs Review; West Europ. Pol. Only six of these are US-based titles.
A third category of replies indicated that the data were not readily available, accompanied by varying degrees of intent to respond to, or consider, the request at a later stage; these included European Political Science, History of Political Thought, Journal of Global Governance, Journal of Latin American Studies, International Affairs, Millennium, Parliamentary Affairs, and PS: Political Science and Politics. Two journals added that a new editorial team had recently taken over (Journal of Public Policy, Regional & Federal Studies).
Data on any missing titles will be welcome; editorial management and peer review software systems provide this information to editorial teams and publishers. Alternatively, readers of any title who recall a recent editorial commentary containing such data can help to identify the missing information.
More about this site
This is an information tool for authors of manuscripts written for academic peer-reviewed political science related journals, as well as a resource for editors, publishers, reviewers and learned societies. Beyond this page, the link below offers an opportunity to debate the review process, or to share a submission and review experience responsibly on this moderated site. Results from surveys will also be posted in due course.
The growth in the number of trained social science researchers, together with research performance assessments, has increased the significance of the peer review process, as well as the varied politics which can lurk behind it. The essential components of the peer review system have remained largely unchanged over decades, in that reviewers are anonymous and volunteer journal editors have substantial executive powers. As social scientists we know the problems such circumstances can cause, yet there are few opportunities to debate the issues. The impact on medicine and the natural sciences is well aired in popular coverage (see also the comments page), but there is little public discussion about the social sciences.
The publishing process has always involved substantial information asymmetries between editors and authors. Editors now have an opportunity to share more information with authors easily via the editorial management software systems (such as ScholarOne/Manuscript Central) which most journals now use. These could include progress updates enabling authors to track a manuscript through each stage of the review process, yet the systems are almost never used to inform authors in this way. The same systems can also provide information about acceptance rates and turnaround times.
…and the data; publishers need to develop common standards
Political scientists are well versed in the politics which can lie behind any data collection exercise and its presentation. Respondents will be skewed towards those with reasonable figures to report, and data can be collected and presented in various ways to best serve perceived interests. Beyond this, an industry standard is required for full comparability. In its absence, data requests in this first round of collection have been standardised to common parameters, such as external review; for instance, some titles, mainly US based, have extensive 'screening' practices of internal review which restrict the number of manuscripts going for external review.
Editors will be invited at a later stage to provide data covering different types of average, ranges of variation in turnaround time data, different categories of acceptance response, 'screening' times (before a manuscript is sent for external review), and reject rates during screening. However, highly tailored requests for data calculations will reduce response rates and limit the extent of comparability. Ultimately, an industry-funded third party agency is required to capture data of this sophistication, together with the means for independent verification. Until such time as a comprehensive industry standard emerges among publishing houses, private initiatives such as this one are the only way to provide authors with any information. In any event, an indicative 'mean average' for final acceptance rates, and for turnaround times of manuscripts sent for external review, is the data which most editors currently have readily available through editorial management software systems, and this information is the best starting point from which to get the process established.
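To see why the type of average matters, consider a minimal sketch (in Python, using entirely hypothetical turnaround figures rather than any journal's actual data) of how a single badly delayed review can pull the mean well above the median:

from statistics import mean, median

# Hypothetical turnaround times (in days) for ten manuscripts sent for
# external review; the final figure represents one badly delayed review.
turnaround_days = [45, 52, 58, 60, 63, 67, 70, 74, 80, 300]

print(f"Mean:   {mean(turnaround_days):.0f} days")    # 87 days, skewed upwards by the outlier
print(f"Median: {median(turnaround_days):.0f} days")  # 65 days, closer to a typical author's experience

The gap between the two figures is one reason why a single indicative 'mean average' is a starting point for comparison rather than a complete picture.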
The territorial base of the title (US or otherwise, mostly derived from the Thomson ISI list linked in the Comments section) also seems to influence the data reported. Until now, the available public domain data has mainly been provided by US based journals attached to professional societies, leading to greater familiarity with public reporting (commentators attribute this to claims that some US titles disproportionately favoured quantitative approaches, resulting in pressure upon titles linked to membership societies to release data). There has been much less of a tradition of public domain reporting from titles based in other territories. For editors based elsewhere, mostly Europe, this has been a first opportunity to share the data widely, and for a few even the very first occasion to collect and scrutinise it. Some publishing houses, such as Taylor & Francis, were defensive, advising journal editors not to provide the information to this site. Typically, titles based in Europe have fewer administrative resources with which to collect data than do US based titles attached to membership societies. Once known, sharing the information has been a challenge for some, and remains so for a few; titles based outside the US account for a disproportionate number of the non-responses, while US based titles (around half of the total) account for only 10% of non-responding titles.
We’re all Authors
There are frequent reports of long waiting times until a decision is notified; I recently waited almost 300 days for a single review from European Urban and Regional Studies, which only arrived after I escalated an enquiry to the editor following a non-response from the journal's administrator. The mean average for articles sent for review, as reported by editors, is a little over two months; editors say unacceptable times reflect a small number of inevitable outlier cases caused by reviewer delay. Yet survey data collected in the summer of 2012 highlight a discrepancy between the reports of editors and reviewers and the experiences of authors. Around one-fifth of authors (n=500) report an average turnaround time in the last two years in excess of 110 days. See the link 'Survey Results' at the top left of the page for more.
A common perspective among authors is that editors could do more to exercise judgement about individual reviews in reaching a decision whether to publish, particularly 'outlier' reviews betraying a lack of professionalism. Journals with low acceptance rates often apply a crude rule of rejecting any article which attracts a single negative review, leading to unfair outcomes and the possibility that work exceeds its sell-by date or never reaches the public domain. There is scope for greater involvement of editorial board members in scrutinising outlier reviews, so as to share the workload of editors.
A reasonable premise is that the confidence which stakeholders have in a journal is related to the amount of public information the journal is willing to provide. At present, Palgrave and Elsevier are the only publishers which identify on their journal web pages the names and contact details of the publishing managers responsible for different titles. In medicine, The Lancet provides a model of good publishing practice by having an Ombudsman empowered to investigate complaints of administrative malpractice. There are ongoing examples on these pages, such as turnaround times exceeding a year. Some editors have reported cases of their own articles in which a decision was never made. There are reports of decisions taking longer to communicate to authors than the time the manuscript spent in external review. In such cases, independent third party oversight can prevent excess. The lack of response from any publisher with a stable of social science titles when this suggestion was put directly to them illustrates the need for such a measure; given a reasonable grievance and a wall of silence, what redress is possible?
I have experience on all sides, as a journal editor, author and reviewer, and undertake this endeavour on my own initiative and from my own resources. On this site you will find reports of cases where arrangements work well and where they don't, together with suggestions for improvements.
Justin Greenwood, Professor of European Public Policy (email@example.com)
Continue to Post or Read Reviews >>