Let’s all agree on one thing: in an ideal world, there would be an accurate, proven resource that’d tell you where to publish your research for the utmost impact. The fruits of your hard work, the hours of research condensed into that journal article would then reach the maximum readership and everyone would be on that lauded road to heightened research impact.
Now let's return to reality.
Deciding where to publish your research in a crowded, competitive environment is difficult. You may have built trusted relationships with certain journals and editorial teams that have proven successful, but I'm sure you're all on the lookout for new options. Options that yield exposure.
Wherever you go and whoever you ask, there will be opinions given about where to publish your research and for what reason. Invariably these opinions and findings will run into conflict, because the factors we bring into play when attempting to evaluate impact outcomes need to be assessed differently. How are we measuring impact in this context? What are our indicators? Do traditional indices of impact cut it in the age of altmetrics and digital space? How and why do considerations alter within different disciplines? How do we define disciplines? It can be a problem of comparing apples and oranges in the search for a common language.
There is no gold index, but let’s consider a few of the options out there.
ERA journal rankings: the ERA 2010 collection produced a scheme of rankings that divided journals strictly along ERA-defined Field of Research (FoR) codes. These rankings have since been used as an authority, with greater or lesser credentials depending on your source of information.
Trying to unpack the methodology employed here becomes problematic. Are we talking metrics-driven rankings, is peer review privileged, or is it a combination of elements? In any event, the Australian Research Council abandoned future rankings lists, and the 2010 version becomes more dated by the day.
Other ranking indexes rely exclusively on metrics to arrive at that magic impact factor. The impact factor has been discredited in some quarters as a false measure of prestige. However you look at these metrics, they will at some point figure in calculations relating to your published research, either by you or by those evaluating your work.
Scimago's SJR journal rankings use metrics to neatly stack your publishing options. There's no doubt that SJR rankings are referred to as shorthand when evaluating research impact, both by researchers and the developers of research data software, but what it really boils down to is how applicable it is to you and your research.
So do metrics lie? A citation might be a citation, but there's more to it than meets the eye. A journal waaaay down the list might be just right for you because publishing in it means you reach a specialised group of peers you wouldn't otherwise. This reveals one of the problems with journal ranking systems: how do we define discipline? Are we talking applied research or basic research? Depending on your field, there's a world of difference, both in the research itself and in the publishing options to disseminate that research.
Journal rankings like Scimago also place HDR students and ECRs in a difficult position. Looking at that list of top ranking journals in your field could well be demoralising. You’ve got next to no chance of having your paper published. But you’re a PhD candidate currently working through some research and that journal sitting at 458 on the SJR list for your field is the perfect fit for you at this stage of your career. It’s all relative.
Google Scholar Metrics also provides a list of journal rankings that can be filtered by discipline. Scholar's rankings are based on the h-index, a measurement that has attracted a lot of interest in the world of research impact since it was first presented publicly in 2005. You might be familiar with your own personal h-index, as generated by Scopus, Web of Science or Google Scholar. If not, it's the largest number h such that h of your publications have each been cited at least h times, and it has a reputation of being a reliable quantitative measure.
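For the curious, the h-index calculation is simple enough to sketch in a few lines. This is an illustrative snippet only (the citation counts in the example are made up), showing how the definition plays out on a list of per-paper citation counts:

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers have h or more citations each."""
    h = 0
    # Sort citation counts from highest to lowest, then find the
    # last position where the count still meets or exceeds its rank.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A hypothetical author with six papers cited 25, 8, 5, 3, 3 and 1 times:
# three papers have at least 3 citations, but not four papers with 4,
# so the h-index is 3.
print(h_index([25, 8, 5, 3, 3, 1]))  # → 3
```

Note that the h-index rewards sustained citation across many papers: one blockbuster paper with 25 citations moves the score no more than a paper with 3.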
With all this in mind, I'd like to turn to some recent thoughts by Professor Patrick Dunleavy of the London School of Economics, who blogs excellently about all manner of research impact. Prof Dunleavy put together Thirty one things to consider when choosing which journal to submit your paper to in his Medium blog, Writing for Research. I encourage everyone here to check out his work in general, but I'm going to reproduce the '31 things' below, as they seem to me to be a clear and very insightful summary of the many different factors – as we've discussed – that present themselves when considering where to publish for exposure and impact.
This is Prof Dunleavy’s attempt – though he does credit Stefanie Hauser’s book, Multi-dimensional Journal Evaluation, as a significant inspiration – to move beyond a sole reliance on the impact factor in favour of a more versatile approach to evaluating the appeal of a journal. The factors are grouped into five categories, beginning with scope:
The second category concerns how your research is reviewed:
We then move on to the all-important question of Open Access:
The fourth area refers loosely to the submission process:
Lastly, we get to questions about impact:
We hope these questions and answers help point you in the right direction when planning where to submit your research. They will remain here as an available resource and guide to publishing for impact.