Correspondence to:
Thomas Gaston, John Wiley & Sons, 9600 Garsington Road, Oxford, OX4
2DQ. tgaston@wiley.com
Acknowledgements
We gratefully acknowledge the assistance of a number of Wiley
colleagues: James Cook assisted with data collation, Emily Mitic with
data validation, and Tom Broomfield with data extraction.
Some additional desk research was undertaken by Felicity Ounsworth
whilst she was on a work experience placement at Wiley.
Data Availability Statement
Data about Wiley journals, including submission numbers, turnaround
times, and acceptance rates, is proprietary.
Retraction data was obtained from Retraction Watch and is openly
available at http://retractiondatabase.org/
Author Contributions
Thomas Gaston – Conceptualization; Data curation; Investigation;
Methodology; Project administration; Supervision; Writing – original
draft; Writing – review & editing.
Francesca Ounsworth – Conceptualization; Data curation; Investigation;
Methodology; Writing – review & editing.
Emma Jones – Data curation; Investigation; Methodology; Writing –
review & editing.
Sarah Ritchie – Formal analysis; Methodology; Validation; Writing –
review & editing.
Tessa Senders – Formal analysis; Methodology; Writing – original
draft.
Conflicts of Interest
TG, EJ, and SR are all Wiley employees. FO was also a Wiley employee
when the research was conducted and has since left the company. TS was
an intern at Wiley when the data were analyzed.
Key Messages
- Increased Impact Factor correlates with increased submissions;
decreased Impact Factor correlates with decreased submissions
- A negative peer review reputation correlates with a decrease in
submissions
- Editors and publishers need to invest in peer review to maintain
submission numbers
Introduction
The number of submissions received by a journal, while not always
directly related to the quality of those manuscripts, often has an
effect on the overall number of articles published. The number of
manuscripts submitted is therefore an important metric for the
sustainability of a journal. Whether it is a subscription journal that
wishes to maintain (or increase) its frequency of publication, or an
open access journal that wishes to increase the number of articles it
can publish (without jeopardizing academic rigor), the success of a
journal, both in terms of publication output and in terms of revenue,
is ultimately dependent on the number of submissions it receives. For
this reason, knowing the factors affecting submission numbers is of
significance to both journal editors and publishers.
Previous research has attempted to identify the factors considered by
authors when choosing where to submit, primarily through surveys. A
number of surveys conducted by Swan and Brown \cite{Swan_1999, Swan_2004} found that
the two most important factors affecting the decision of where to submit
were readership and journal quality; the question of readership was
focused on reaching the right readers rather than the overall number.
Solomon and Bjork \cite{Solomon_2011} found the top three factors were fit to subject
area of the journal, quality of the journal (sometimes measured by
Impact Factor), and speed of review. Respondents to a survey by Søreide
& Winter \cite{S_reide_2010} ranked journal reputation, followed by Impact Factor,
as the two most important factors. However, after grouping several
factors, the authors found that journal “prestige”, followed by
turnaround time, were the two most important factors. Factors considered
least important were acceptance rate, option to suggest reviewers, and
open access. Özçakar et al. \cite{Özçakar2012} found that the three most important
factors were mission and contents of the journal, Impact Factor, and
match between perceived quality of the study and journal Impact Factor.
Ziobrowski and Gibler \cite{Gibler_2002} surveyed real-estate authors and found that
the author perception of journal quality was the highest ranked factor.
A survey of Canadian researchers found that journal prestige and Impact
Factor greatly outranked other criteria \cite{research2014}.
Unpublished research by Wiley, surveying authors of accepted
manuscripts, includes a question as to the reasons for submitting to a
journal. The top five ranked reasons, as of May 2019, are as follows:
- Scope of journal (70%)
- Reputation of journal (64%)
- Impact Factor (54%)
- Previous experience with journal (36%)
- Expected speed (29%)
Focusing specifically on open access journals, \citet*{Schroter_2006} used interview and survey techniques to study author perceptions
regarding where to submit. They found the most important factors were
Impact Factor, reputation, readership, speed of publication, and the
quality of the peer review system. Regarding willingness to pay article
publication charges (APCs), they found that journal quality was the most
important factor. A survey conducted by the SOAP project found that 30%
of respondents do not submit to OA journals due to the lack of
high-quality OA journals in their field \cite{Vogel_2011}.
Few of the surveys separated peer review from other factors, such as
journal reputation or journal quality. \citet{Nicholas_2015} found
that peer review was second only to relevance to the field when
choosing a journal. Being published by a traditional publisher and being
highly cited were third and fourth. These respondents linked peer review
with high quality. Focusing on OA journals, whether the journal was peer
reviewed was the key selection criterion – even more important than
whether the journal had a reputable publisher. Other research
demonstrates that being peer reviewed is the key criterion authors use
for distinguishing legitimate from predatory journals \cite{Edie_2019}.
An alternative approach to surveying researchers is to model optimal
submission strategies. \citet*{Salinas_2015} modelled strategies that
maximise the expected number of citations whilst minimising the number
of required revisions and the time in review (see also \cite{Heintzelman_2009}). \citet*{Coupe_2004} argued that if authors behaved rationally, they
would weight the risk of rejection more heavily, because of its
implications for the delay in getting their manuscript published.
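As a stylised illustration of that argument (not a model taken from the
cited papers), suppose each submission round has an expected time in
review of $t$ and a rejection probability of $p$, and that a rejected
manuscript is resubmitted to a comparable journal. The expected number
of rounds before acceptance is then $1/(1-p)$, so the expected time to
publication is approximately
\[
E[T] \approx \frac{t}{1-p},
\]
which grows rapidly as the rejection risk increases, even for journals
with fast nominal turnaround times.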
The absolute significance of Impact Factor, distinct from other factors,
is difficult to assess. A survey of ecologists by \citet{Aarssen_2008}
found that most respondents ranked Impact Factor as “very important”
or “important”. \citet{Calcagno_2012} found that high impact journals
publish more resubmissions from other journals; low impact journals
publish more “first-intent” submissions. They hypothesized plausibly
that this result was due to high impact journals competing for the same
submissions. It is well-known that Impact Factor is used by universities
and other institutions to evaluate performance \cite{Adam_2002, Smith_2006}, and therefore publishing in a high IF journal can have
implications for promotion and tenure. \citet*{Gibler_2002}
found that some author groups prioritise promotion and tenure
considerations, whereas others prioritise ease and fairness of the
process. \citet*{Pepermans_2015} found a difference between authors
of “high quality” papers, who are looking for a journal with a high
impact and/or high standing, and authors of “standard quality” papers,
for whom acceptance rate is of equal standing to Impact Factor. Impact
Factor is also a key criterion used when creating lists of journal
rankings, which are often used to determine where authors submit –
though the use of such lists has been criticised for its negative impact
on the literature \cite{Willmott_2011}. One exception is the survey of
librarians by Neville and Crampsie \cite{2019}. When respondents were
allowed to select multiple options, Impact Factor ranked 15th (18%),
behind journal scope (1st; 95%), whether the journal is peer-reviewed
(2nd; 87%), and intended audience (3rd; 75%). When respondents were
restricted to one, “most important”, option, Impact Factor ranked 5th
(4%), behind scope (1st; 49%), whether the journal is peer-reviewed
(2nd; 21%), and publisher reputation (3rd; 8%).
It is notable that reputation and/or quality is often ranked higher than
Impact Factor by respondents. Whilst Impact Factor can be one factor in
establishing journal reputation, if it were the primary or only factor
in a journal’s reputation, one would expect these two reasons to be
ranked on a par. The implication is that there are other aspects of a
journal, perhaps not so easily quantified, that contribute to author
perceptions of journal reputation, and thus to perceptions of
suitability for submission. These might include the prestige of the
editorial board, the reception of the journal on social media, and the
negative impact of bad press.
Whilst surveying author perceptions is useful, what respondents say they
will do in abstract situations does not necessarily indicate what they
will do in actual situations. For instance, an author might value
turnaround times but nevertheless submit to the highest IF journal in
their field. In this study we wanted to explore actual submission
numbers and correlate them with likely factors. Based on previous
research, we identified three categories to explore and from those
categories identified nine factors we wanted to test:
1. Impact Factor
i. Absolute Impact Factor
ii. ISI subject category ranking
2. Journal Reputation
iii. Net Promoter Score (NPS)
iv. Average reputation score
v. Retractions
vi. Altmetric Score
3. Editorial Process
vii. Time from submission to first decision
viii. Time from submission to acceptance
ix. Acceptance rate
Hypothesis vi (Altmetric score) was later dropped due to a lack of
available data.
Methods
We retrieved annual submission data for all Wiley journals using
ScholarOne Manuscripts between 2007 and 2018. The percentage increase
was calculated for each journal for each year between 2008 and 2018
(e.g. the value for 2008 was the percentage change from the 2007
submission count to the 2008 submission count). Some journals were not
publishing, or were not on ScholarOne Manuscripts, for all the years
between 2007 and 2018; the corresponding null values were marked “n/a”
and discounted. We also retrieved data on the median
days from submission to first decision, the median days from submission
to acceptance, and the acceptance ratio.
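As a minimal sketch of this step (the file name and column layout are
hypothetical, not the actual extraction script), the year-on-year
percentage change can be computed as follows:
\begin{verbatim}
import pandas as pd

# One row per journal, one column per year (2007-2018); years in which a
# journal was not live on ScholarOne Manuscripts are left blank (NaN).
subs = pd.read_csv("submissions_by_year.csv", index_col="journal")

# Percentage change relative to the previous year, so the 2008 value
# compares 2007 and 2008. Any pair involving a missing year remains NaN
# and is treated as "n/a" (discounted) in the analysis.
pct_increase = subs.pct_change(axis="columns") * 100
\end{verbatim}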
We retrieved all the retractions from Wiley journals listed on the
Retraction Watch database. We also retrieved retractions listed under
“Wiley-Blackwell” and “Blackwell Publishing”, and then de-duplicated
the results. In total we found 937 retractions, though these included
retractions for non-journal publications. After standardizing the
journal titles, we matched the retractions, by year of publication, to
the journals in our sample. There were 661 retractions between 2007
and 2018 for the journals in our sample, with some journals having
multiple retractions in the same year.
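A sketch of the de-duplication and matching step is shown below,
assuming hypothetical column names for the Retraction Watch export:
\begin{verbatim}
import pandas as pd

# Records exported from the Retraction Watch database for "Wiley",
# "Wiley-Blackwell", and "Blackwell Publishing" (column names assumed).
retractions = pd.read_csv("retraction_watch_export.csv")

# Remove duplicate records returned by the overlapping publisher searches.
retractions = retractions.drop_duplicates(subset=["journal", "title", "year"])

# Count retractions per journal and year of publication; a journal can
# have more than one retraction in the same year.
retraction_counts = (
    retractions.groupby(["journal", "year"]).size().rename("retractions")
)
\end{verbatim}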
We retrieved the Impact Factor and ISI primary subject category ranking
for 2008 to 2018 for each of the journals in our sample. Not all
journals have an Impact Factor or had an Impact Factor for all years in
our sample range.
We retrieved the average reputation score and the Net Promoter Score
from Wiley’s internal survey of accepted authors. This data is only
available for 2016, 2017, and 2018. We were given guidance that twenty
or more responses were required for the scores to be reliable, which
meant that there were only 601 journal-year cases with sufficient data
across the journals included in our sample.
An initial analysis was conducted by categorizing each annual change in
submissions into either an increase or a decrease, then comparing this
against each of the factors under investigation. Based upon this
investigation, we proceeded with a statistical analysis of retractions
and Impact Factor.
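The initial categorization might look like the following sketch
(building on the hypothetical pct_increase table above; the comparison
against each factor proceeded analogously):
\begin{verbatim}
import numpy as np

# Reshape the journal-by-year percentage changes to long form and label
# each journal-year as an increase or a decrease in submissions.
changes = pct_increase.stack().rename("pct_change").reset_index()
changes.columns = ["journal", "year", "pct_change"]
changes["direction"] = np.where(changes["pct_change"] >= 0,
                                "increase", "decrease")
\end{verbatim}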
For Impact Factor, we analyzed the number of submissions in the two
years after the Impact Factor was published (e.g. 2010 and 2011,
following IF 2009), resulting in 17 different pairs (e.g. IF 2009 and
submissions in 2010). For each of the pairs of years, simple linear
regression was performed regressing the number of submissions a journal
receives in a given year on the Impact Factor of the journal for a given
year. The Impact Factor is released midway through the subsequent year
(e.g. IF 2017 was released midway through 2018), which is why we
analyzed the two subsequent years of submissions. One journal was
removed from this part of the analysis as it is an outlier; its 2017 IF
was over 200.
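A minimal sketch of the per-pair regression is given below, assuming a
hypothetical pairs mapping from (Impact Factor year, submission year)
to a table with one row per journal:
\begin{verbatim}
from scipy import stats

# For each (IF year, submission year) pair, e.g. IF 2009 vs. submissions
# in 2010 and in 2011, regress submissions on Impact Factor. The single
# outlier journal (2017 IF over 200) is assumed to have been excluded.
results = {}
for (if_year, sub_year), df in pairs.items():
    fit = stats.linregress(df["impact_factor"], df["submissions"])
    results[(if_year, sub_year)] = {
        "slope": fit.slope,
        "r_squared": fit.rvalue ** 2,
        "p_value": fit.pvalue,
    }
\end{verbatim}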
For retractions, the percent increase in submissions (2008-2018) was
compared with the number of retractions published (2007-2017). The
impact of a journal having retractions on the percent increase in the
number of submissions received the year after the journal had
retractions, and two years after, was analyzed, resulting in 21
different pairs (e.g. 2007 retractions and 2008 submissions). There are
so few journals in a given year that have retractions that a simple
linear regression does not provide an accurate analysis of the impact of
retractions on the number of submissions a journal receives. For each
year, the continuous data regarding the number of retractions a journal
had was changed into binary categorical data so that each journal is
labeled as either having retractions in that given year (“Yes”) or
having no retractions in that given year (“No”). For each of the pairs
of years, a Welch two-sample t-test (a variation on the traditional
t-test that does not assume equal variances) was performed comparing
the average percent increase in the number of submissions received by
journals that had issued retractions with the average percent increase
in the number of submissions received by journals that had issued no
retractions. This test was used to identify any statistically
significant difference in the means of the two groups, journals with
retractions and journals with no retractions.
Based on manual inspection of the data, we assumed that retractions are
independent events, and that number of retractions in any given year
does not affect retractions in subsequent years.
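A sketch of the comparison for a single pair of years, using Welch's
test via scipy (the changes_pair table and its columns are hypothetical):
\begin{verbatim}
from scipy import stats

# Percent increase in submissions in the year following the retraction
# year, split by whether the journal issued any retractions that year.
with_retractions = changes_pair.loc[
    changes_pair["had_retraction"] == "Yes", "pct_change"]
without_retractions = changes_pair.loc[
    changes_pair["had_retraction"] == "No", "pct_change"]

# equal_var=False requests Welch's two-sample t-test, which does not
# assume the two groups have equal variances.
welch = stats.ttest_ind(with_retractions, without_retractions,
                        equal_var=False)
print(welch.statistic, welch.pvalue)
\end{verbatim}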
To further our investigation, we proceeded to conduct case studies. We
limited ourselves to cases where the number of submissions declined by
10% or more from a level of 1000 or more submissions. This resulted in
55 cases across 38 journals. We emailed the publishing manager for each
journal, including any data described above that we thought might be
relevant, and asked for their analysis as to why submissions declined in
the given year. We received 39 responses, which were evaluated and
categorized. We also undertook some desk research to identify any
widely-publicized reasons that might explain the decrease in the given
year.
Results