The Impact Debate

Hi everyone,

Here’s an interesting contribution to what we’re all talking and writing about – Research Impact – by Warwick Anderson, Professor & Chief Executive Officer, National Health and Medical Research Council, in the always excellent The Conversation.

Shared here under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Quality not quantity: measuring the impact of published research

Judging the achievements of researchers should be much broader than just looking at their publications.

Warwick Anderson, National Health and Medical Research Council

Few things are changing faster in the research world than publishing. The “open access” movement recognises that publicly-funded research should be freely available to everyone. Now more than a decade old, open access is changing where researchers publish and, more importantly, how the wider world accesses – and assesses – their work.

As the nation’s medical research funding body, we at the National Health and Medical Research Council (NHMRC) mandate that all publications from research we’ve funded be openly accessible. We and the government’s other key funding organisation, the Australian Research Council, are flexible on how it’s done, as long as the paper is made available.

Researchers may opt for “green” self-archiving, where they publish in a restricted journal and archive a freely available version, or “gold” access, which allows readers to freely obtain articles via publisher websites.

Most Australian medical research publications will be available through university repositories and by researchers submitting to journals with copyright agreements that support the NHMRC open access policy. The university librarians have been especially helpful in ensuring that the institutional repositories are ready for this revision to the policy.

Initiatives such as PubMed Central (PMC) and European PMC are also making it easier to access published research.

Consumer groups want direct access, as soon as possible, to the findings of research – after all, they pay for it through taxes and donations to charities. This information helps in a time when we’re bombarded with health messages of sometimes dubious origin and where vested interests are often not disclosed.

In 21st-century medical research, consumer and patient group members are often integrally involved in the research itself and are important messengers between researchers and the community.

Death of the journal impact factor

The open access movement is having a significant impact too on how we measure the impact of scientific research.

For too long, the reputation of a journal has dominated publication choice – and that reputation has been determined mainly by the journal impact factor. This metric reflects how frequently a journal’s recent papers, taken together, are cited in other journals.

For many years, the journal impact factor dominated how universities, research institutes and research funding bodies judged individual researchers. This has always been a mistake – the importance of any individual paper cannot be assessed on the citation performance of all the other papers in that journal. Even in the highest impact factor journals, some papers are never cited by other researchers.

The NHMRC moved away from using journal impact factors in 2008. So it was good to see the San Francisco Declaration on Research Assessment, which has now been signed by thousands of individual researchers and organisations, come out with such a strong statement earlier this year:

Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion or funding decisions.

Hear hear.

Measuring real impact

In health and medical research, choosing where best to publish a paper involves so much more than just the prestige (or impact factor) of the journal. In clinical and public health sciences, authors will want the right audience to read the article.

When published in a surgical journal, for instance, the findings of surgical research will influence the thinking of more people, and more of those who should read the research, than when published in a more general journal, even if the latter had a higher impact factor. Similarly, public health researchers want to publish where public health policymakers and officials will read the article.

A single paper in the wide-reaching Medical Journal of Australia – which could change health policy and practice affecting thousands of Australians – may be of greater impact than a paper in a high-impact journal that very few people read.

All this has implications for peer review and the judgement of “track record”. Judging the achievements of researchers must amount to much more than simply counting the number of publications and noting the journals’ impact factors.

I agree with Science editor-in-chief Bruce Alberts, who recently argued that formulaic evaluation systems in which the “mere number of a researcher’s publications increases his or her score” create a strong disincentive to pursue risky and potentially ground-breaking research, because it may be years before the first research papers begin to emerge.

The NHMRC has long asked applicants to identify their best five papers, and in 2014 we will ask applicants to identify them in their track record attachment. This will make it easier for reviewers to find and evaluate these papers.

In the words of Bruce Alberts, analysing the contributions of researchers requires

the actual reading of a small selected set of each researcher’s publications, a task that must not be passed by default to journal editors.

There is one other potential implication of focusing on quantity rather than quality. It is often alleged (though the evidence is scant) that the pressure to produce many papers, and to publish in journals with high impact factors, explains some of the apparent increase in publication fraud in medical research.

This may or may not be true, but focusing more on the quality of a few papers, rather than just counting the total number of publications and being overly influenced by the reputation of the journal, can help ameliorate the publish-more-and-more syndrome.

Nothing stays the same in science and research. Publishing is set to change further. The democratisation of publishing began with the internet and has a long way yet to run. The challenge for researchers, institutions and funders will be to identify, protect and encourage quality and integrity.

Read more Conversation articles on open access here.

Warwick Anderson, Professor & Chief Executive Officer, National Health and Medical Research Council

This article was originally published on The Conversation. Read the original article.


Predatory publishers and events

Hi everyone,
As of early this year, the influential and often controversial index of so-called ‘predatory publishers’, Beall’s List, went dark. We’ve mentioned this resource regularly here, so an update is in order. The credentials of possible publishing options will remain an ongoing concern for all players in the world of academia, so I’m sharing this very interesting, practical post from the good folk at The Research Whisperer (thank you!!), for your interest. Next time you’re solicited out of left field by a journal you have no working relationship with, think about this.

The Research Whisperer

Excerpt from academic spam I received on 2 Feb 2017.

It seemed like such a good idea at the time.

‘Let’s write something on predatory publishing!’ I said.

‘Let’s talk about all that academic spam we get!’ I said.

I even roped in my fab colleague from La Trobe’s Borchardt Library, Steven Chang (@stevenpchang), to write something, too. He was keen. We swapped links on email and Twitter.

Then the groundbreaking resource, Beall’s List, officially went dark. It can still be salvaged in Wayback form (that is, a cached version) but it won’t feature updated information anymore.

For me, not having Beall’s List active is a big blow against the tracking of, and education about, predatory processes in contemporary scholarship. I used it all the time and, though Beall is not without his critics, I found it to be of strong value and an excellent way to build awareness…


Research and writing in the shadow of reporting and evaluation frameworks

With ERA 2018 just around the corner, I’ve been spending a bit of time with various researchers within the Faculty, discussing strands of ERA strategy in an attempt to define ‘excellence’ and to focus on producing more of it going forward. Discussion inevitably touched upon a few core points of tension that arise when the pressures of institutional reporting expectations compete with individual goals and career stages.

How, for example, to reconcile the pragmatics of publishing and the publishing industry with the emphasis on publishing for excellence and impact?

How to navigate the quantity versus quality dichotomy, when both are required in different contexts and actively sought in different circumstances?

What might I need to sacrifice in order to achieve excellence?

When and how are excellence and impact competing against each other?

Difficult, relevant questions that have been raised by different members of the research community I work with on a daily basis.

Today’s post shares an excellent article by UK-based researcher Sharon McCulloch, which touches on a number of these questions but focuses on the all-important process on which much of impact and excellence depends – academic writing itself. Her piece, “The importance of being REF-able: academic writing under pressure from a culture of counting”, draws on a two-year study that examined academics’ writing practices in the contemporary workplace.

It details “tensions around the ways in which managerial practices interact with academics’ individual career goals, disciplinary values and sense of scholarly identity,” which I’m sure regular readers of this blog could well identify with.

The drive toward research impact can be evaluated from many angles and a thorough consideration of the concept must include the experience of researchers themselves as writers and communicators.

The following article was first published in the always excellent LSE Impact Blog on February 9th, 2017 and is shared here under a Creative Commons Attribution-ShareAlike 3.0 Unported License.


Academic life is diverse, including research, scholarship, teaching, and public engagement. But the principal role of an academic is to produce, shape and distribute knowledge. Writing is central to this endeavour, but academics’ writing practices have come under pressure from several directions, such as the increasing marketisation of higher education and changes in the digital landscape, both of which have brought about new forms of writing as well as changes to existing practices.

For the past two years I have been working on an ESRC-funded project investigating academics’ writing practices in the contemporary university workplace. The project looks at how knowledge is produced through academics’ writing practices, and how these are shaped by, among other things, managerial practices and evaluation frameworks. We have interviewed 51 academics in three disciplines (mathematics, history and marketing) across three higher education institutions in England, as well as administrative staff and heads of departments. These interviews have revealed tensions around the ways in which managerial practices interact with academics’ individual career goals, disciplinary values and sense of scholarly identity.

Universities in the UK are subject to a national Research Excellence Framework (REF) aimed at rating the quality of research and allocating funding accordingly, with higher rated institutions receiving more funding. As well as direct funding, a high score on the REF also links to rankings and league tables, which in turn affect an institution’s ability to raise income from tuition fees. Given the importance of high REF scores, most universities and departments have policies in place to encourage their academic staff to produce work likely to score highly in the REF.

Our study found that academics’ capacity for career advancement was closely coupled to their universities’ strategic interests in performing well on the REF. For example, during probationary periods, new lecturers were required to publish certain numbers of papers of a specified quality. For academics working in marketing, quality was determined using the Chartered Association of Business Schools’ (ABS) annually published Academic Journal Guide, which ranks business and management journals on a star-rating system similar to that employed by the REF. This target list of journals was used in all three of the marketing departments participating in the study, and the star-rating system employed by the ABS was deeply embedded in discourse of the marketing academics we spoke to about scholarly writing and academic success. Each talked about their own publications in these terms, as seen in the comment below:

“I don’t get any hours for writing. I don’t get any hours for research whatsoever. So basically, unless your work is at least three-star, four-star, then you don’t get any hours for it because although it’s two-star material and it is REF-able, they’re only interested in three and four-star.” (Lecturer in Marketing)

This comment illustrates both the extent to which the REF has become naturalised in the rhetoric of what academic success is understood to mean, and the dangers of any ranking system. Our participants repeatedly used the adjective “REF-able”, meaning that one has enough publications of sufficient quality within the REF period (five to six years) to be included in the department’s submission to the REF, as shorthand for talking about their career status. Although being REF-able was seen as a prized benchmark for academic success, it was far from being sufficient. As the comment above shows, having enough publications (up to four in the 2014 exercise) to be REF-able is unhelpful if they happen to be rated below three-star level, otherwise defined as “internationally excellent”.

It also became clear that, although heads of department described their departmental systems of evaluating academics’ publications as ways of “rewarding” good publications, most academics saw this performance management as something closer to a threat than a reward. They talked of struggling to achieve what they saw as a small and moving target. For example, some key target journals in marketing moved down the ABS rankings in 2015, yet the pressure on academics to “hit a four-star” remained. Even academics who were, by these measures, performing well expressed anxiety about being unable to sustain such a high level of ‘excellence’ year in, year out, and about what might happen if they failed to do so.

Evidence emerged from our interviews of what could be described as strategic behaviour in regard to meeting these performance targets around scholarly publication, but while strategic behaviour enabled academics to meet their targets, they paid a high price for this in terms of their sense of disciplinary identity. One professor in marketing described publishing her research in journals outside of her disciplinary area: “now I target management journals, which is one way of hitting a four star”. This enabled her to maximize the prospects of career advancement, but it gave her something of an identity crisis about who she was as a scholar.

Marketing was not the only discipline where fulfilling managerial demands pertaining to research evaluation conflicted with disciplinary values. Over and over again, academics described peer-reviewed journal articles when asked about the sort of writing they were expected to produce. However, what they were expected to produce and what they wanted to produce were not always the same. Historians talked about the scholarly monograph as their most valued form of disciplinary writing. One historian described the monograph as “the heavyweight, solely authored piece of research work, which is usually the result of years of research in archives”. One problem is that the time it takes to produce a monograph may extend beyond the REF period, so in order to be REF-able, historians also need to publish articles that can be written relatively quickly, allowing more outputs to be produced in the same timeframe.

Another historian, in her first lecturing post, described a tension relating to this for her first published book, based on her PhD research. She saw this first monograph as potentially career-defining, so had wanted to do it well, to augment and extend her doctoral research rather than merely making minimal changes and publishing it in book form. However, she also knew that without the book, she would struggle to secure permanent employment, so felt pressured to compromise the quality of the research in order to publish quickly and make progress in her career.

We also asked the academics in our study about their use of digital technologies, including whether they engaged in any forms of online writing by, for example, contributing to blogs, tweeting or other emerging means of digital scholarship. Some refrained from these new forms of writing on the grounds that they were perceived to be trivial or self-aggrandizing. Others expressed interest and enthusiasm but nevertheless did not devote much, if any, time to these digital platforms. Their comments about lack of time were often qualified by reference to the belief that such writing did not count or was not valued, as seen in the comment below:

“A lot of the work is grey literature where people have written blog pieces. I think that’s opened my eyes to what’s possible in that area but yes, if there’s time – I think it’s always a question of time. Again, that work is not valued by the university as far as I can see.” (Professor in Mathematics)

Non-traditional genres of academic writing were not perceived to meet the criteria departments have in mind when they stipulate that, for career progression, an academic needs a track record of ‘good publications’. Understanding of what counts as writing worth doing does not stretch to emerging online genres, despite the increased attention paid by universities to public engagement and dissemination of research findings to a wider, non-academic audience.

The picture that emerges is one in which academics are positioned as managed professionals whose personal goals are expected to be closely aligned with the university’s objectives to perform well in the REF, move up the league tables, attract students and secure income. In a neoliberal culture of measuring outputs, the range of forms of knowledge creation that are valued appears to be narrowing. High-prestige journal articles are seen as essential to career success, and must be ranked at three or four-star to secure rewards such as promotion or time for writing. The academics in this study strove to shape their writing around these targets, even though they saw them as unrealistic or out of sync with disciplinary values. Because scholarly writing and disciplinary identity are so closely intertwined, for many academics this pressure engendered something of an existential crisis about the true purpose of their writing.