How to participate in a conference via digital technology

Professor Victoria Haskins of the Centre for 21st Century Humanities and PURAI – Global Indigenous and Diaspora Research Studies Centre virtually attended and delivered a presentation on her research at the Seventeenth Berkshire Conference on the History of Women, Genders, and Sexualities. The conference is the biggest and one of the most influential women’s history conferences in the world, and was held at Hofstra University in Hempstead, New York.

Professor Haskins was invited to speak on ‘Confronting Domesticity: New Global Histories of Home and Family,’ as part of a roundtable of panellists before an audience of historians of women and gender. However, due to time and funding constraints, Professor Haskins was unable to attend in person. Instead, she pre-recorded her presentation, which was shown during the conference session, and then used Skype to join the roundtable discussion and take audience questions.

As this was the first time she was to participate in a conference from afar, Professor Haskins sought assistance from Luke Boulton from the UON Blended and Online Learning (BOLD) Lab.

“A couple of weeks before the actual conference, I went to the lab and pre-recorded a five-minute summary of my paper. Luke transformed the talk into a YouTube podcast, and I then forwarded it to one of my co-panelists as well as the IT support technician at Hofstra who organised for it to be shown at the appropriate time during the session,” Professor Haskins said.

Professor Haskins said the session itself went off without a hitch.

“I was able to see and hear everybody quite clearly via Skype (noises in the room were amplified, which was somewhat difficult) and everybody could hear me when I spoke. The pre-recorded YouTube clip was shown on screen and apparently presented well. In the question time afterwards, several questions were directed to me and I was able to respond. Two colleagues who were in the room told me the session was very successful and my presentation worked well.”

Professor Haskins also received some encouraging engagement via email after the session from a US postgraduate student who attended.

“I doubt I would have got such an engagement if I had simply sent my paper through to be read by someone else, again confirming that the virtual presentation style is a viable alternative to physical presence as a way of reaching audiences and disseminating research.”

In doing this exercise for the first time, Professor Haskins learnt:

  • A pre-recorded talk requires a particular style of delivery that is a little different from a face-to-face presentation. “For me, doing this for the first time, it took me almost an hour to get a 5-minute recording I was happy with,” Professor Haskins said;
  • A ‘rehearsal’ in advance to check the technology is ideal, and provides the opportunity to work out what will work best;
  • A pre-recorded talk plus interactive participation in real time is preferable to providing just one or the other. “The pre-recorded talk, suggested by the conference organisers, was certainly a good way of ensuring that I managed to deliver my paper reasonably effectively.”
  • Although such virtual participation works as a viable alternative to actual participation, it is considerably less satisfactory because:
  1. Your own engagement with the audience and co-presenters is hampered by imperfect audio and vision: it is difficult to know quite when to speak and how you are coming across, and it is sometimes difficult to hear people over other noises in the room.
  2. You miss the opportunities for networking and collaboration during the rest of the conference, and for hearing the new work of others.

Tips for pre-recording conference presentations

  • Start with the ‘big picture’;
  • Break the talk into the points that need to be covered;
  • Add ‘activities’ at the end for the audience (e.g. pose questions, invite critical thinking);
  • Be much more animated than you would normally be;
  • Change your voice tone frequently;
  • Move your hands, and have them visible occasionally in frame;
  • Move your eyes as if you were looking around an actual room, rather than staring fixedly at the camera;
  • Finally, ask yourself: does this video communicate what I want it to communicate?


You may notice posts to this blog slow down in the coming months. Rest assured, existing content will remain here and we have every intention of recommencing posts when the circumstances are right.

Thanks everyone,


Research methods vs approaches

Who isn’t trying to bomb-proof their grant applications? Jonathan Laskovsky from RMIT deconstructs research applications and looks closely at methodology and how to articulate it in this excellent post (plus a little Wu-Tang). Thanks be always to the wonderful folk at The Research Whisperer.

The Research Whisperer

Jonathan Laskovsky is the Senior Coordinator, Research Partnerships, in the College of Design and Social Context at RMIT University. He is primarily responsible for managing research partnerships support and administration within the College.

Alongside this role, Jonathan has research interests in modern and postmodern literature with a particular focus on fictional space and critical theory.

He tweets as @JLaskovsky and can be found on LinkedIn.

Method Man (aka Clifford Smith) performing at Shattuck Down. Photo by Alyssa Tomfohrde from Oakland, USA, CC BY 2.0.

I am a Method Man. No, this does not involve being part of the Wu-Tang Clan. I’m not even referencing the fact that most university researchers exist in a paradigm easily summarised by Wu-Tang’s most famous line: Cash Rules Everything Around Me (C.R.E.A.M.).

I mean that when I read your research application, I take a very close look at your research methods.

This is, in part, driven by systemic behaviour of…


Why should I bother? Funding Applications within the Humanities, Arts and Social Sciences

Are you a researcher within the humanities, arts and social sciences? If you are, and you’ve ever despaired and wondered why research in your field is not better represented within the outcomes of major funding bodies such as the ARC, read on …

Today I’m sharing a post, courtesy of the ever-fantastic The Research Whisperer. “Why should I bother?” is a recent post by Tim Pittman, a Research Development Adviser in the Faculty of Humanities at Curtin University. In it, Tim provides answers to three key questions HASS researchers might raise in relation to funding applications.

This post is reproduced here under the Creative Commons “Attribution-NonCommercial-ShareAlike” license.

“I’ve been working in research development for a decade now and almost all of that has been focused on the humanities, arts and social sciences (HASS).

Much of the ‘art’ of finding funding is universal across disciplines, but there’s also a need for research support that is more targeted towards HASS researchers.

I feel this especially keenly every year when researchers are submitting applications for Australian Research Council (ARC) funding (roughly equivalent to the National Science Foundation in the USA and the national research councils in other countries).

Often, institutional support processes designed to improve the quality of Australian Research Council applications appear to focus on the science, technology, engineering and mathematics (STEM) fields.

This can leave HASS researchers scratching their heads, wondering whether that key observation or sage bit of advice applies to them as well.

Based on my experience, these three issues below are commonly raised by HASS researchers, and I’ve provided the advice that I give in response.

1. Do the national funding agencies value this sort of research?

In any given year, around one-fifth of Australian Research Council project and fellowship funding goes to HASS research. In 2016, this was approximately A$100 million. Now, more than one-fifth of academics employed in our universities are in HASS disciplines, so there’s a proportional issue to be considered. Nonetheless, the funding is significant in its own right.

When I break this issue down further, usually the real question being asked is, “Will the national research council value my research?”. This angst is often driven by reading the most recent funding outcomes and despairing that one’s specific field of research was not represented.

However, the success rate across the main Australian Research Council schemes averages around 20%. So, in purely statistical terms, a grant application needs to be submitted five times to be successful. And if you look at the Australian Research Council data over a five-year period, you’ll find that pretty well every field of research receives funding (at the Australian and New Zealand Field of Research four-digit level, for those who like to be specific).
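The ‘submit five times’ rule of thumb can be made concrete: at a flat 20% success rate, five attempts give roughly a two-in-three chance of at least one success, not a guarantee. A minimal sketch of that arithmetic, assuming independent submissions at a constant rate (which real funding rounds are not):

```python
# Chance of at least one success in n independent grant submissions,
# assuming a constant per-round success rate (a deliberate simplification).
def p_at_least_one(rate: float, n: int) -> float:
    return 1 - (1 - rate) ** n

print(round(p_at_least_one(0.20, 5), 2))  # 0.67, i.e. about a two-in-three chance
```

In other words, "five submissions" is the average number of attempts per funded grant across the whole pool, not a threshold at which any individual application becomes certain.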

At the same time, there’s no point in denying that certain fields of research get small amounts of funding. Visual arts and crafts for one, and tourism for another. But the same is true of certain technical disciplines, such as automotive engineering, forestry sciences, or environmental biotechnology.

Worrying about your specific field also ignores the fact that the Australian Research Council only reports the primary field of research code against grants awarded, which hides the multidisciplinary nature of many projects. HASS researchers in Australia are on a lot of projects; they’re just not visible in the way that the national research council presents the information.

My advice, always, is to forget about being a ‘HASS researcher’ and concentrate on the issue or problem that you want to address. When this can be framed in a way that speaks to the goals of a national funding agency, then it has the potential to be funded.

2. Scientists work in teams. Researchers like us don’t, right?

As one literary scholar once told me, “I don’t need a team to read Shakespeare for me. I need to do it myself”. Point taken. And I, too, get occasionally frustrated when I’m at a university workshop and the person leading it instructs everyone to “collaborate, collaborate, collaborate” without qualification.

At a basic level, there is some truth to the argument that HASS researchers are less collaborative than others. For example, around four out of five (80%) of STEM projects funded under the Australian Research Council Discovery scheme in the last five years involved more than one chief (or principal) investigator. For HASS projects, it was three out of four (75%). When you break the fields of research down even further, the differences are even more marked. Almost all of commerce, management, tourism and services grants are collaborative, compared to only a quarter of philosophy and religious studies projects. Education researchers tend to work in teams, historians less so.

There is a kick to this, however, when you find out that two-thirds of the sole-investigator grants are awarded to senior researchers. That is, professors, associate professors, or emeriti.

So, when a HASS researcher asks me whether they should put in an application by themselves, I certainly take into account their discipline. And I point out that the single most important assessment criterion – track record – falls solely on them.

3. The elite universities control the bulk of the research funding. Why bother?

Anyone in research development has heard this statement repeated ad infinitum. And it’s easy to understand why, since two-thirds of Australian Research Council grants are awarded to the elite universities in Australia, the Group of Eight (Go8).

There are four things I could say here.

First, I could say that things are actually slightly better for HASS researchers than STEM researchers. It’s only a few percentage points, but it’s a difference. But that’s not the real answer.

My possible second response means going deeper into the data. I could observe that certain disciplines in HASS do quite a bit better than average. For example, more than half of Education research in Australia is headlined by non-elite universities, and in disciplines like Built Environment and Design, and Language, Communication and Culture, it’s approaching half. But that’s not the real answer either.

My third response might be that, as I said earlier, the Australian Research Council only announce the university administering the grant, not all those involved. So, involvement from universities outside of the elite is actually much higher than you might think. But that’s still not the real answer.

The real answer is:

Data analysis is extremely important to people like me, but close to meaningless for individual researchers or research teams. I look for patterns and trends across hundreds or thousands of grants. At this level, the patterns and trends are quite clear.

In contrast, researchers are focused on a single datum: their project. It’s no good to them if their grant, statistically speaking, has a high chance of success if it still isn’t funded. Conversely, who cares if it displays certain superficial characteristics of an uncompetitive grant (e.g. only one researcher, not in an elite university, coming out of a rarely funded discipline) as long as it gets up?

It’s very important that researchers don’t let data analytics drive behaviour. Don’t make your application only look like a good application; actually make it a good application. The basic strategy is the same across all disciplines (win the grant) but the tactics employed need to be specifically designed. That’s why there’s a need for research development staff who have a particular understanding of, and affinity for, HASS research.”

The Impact Debate

Hi everyone,

Here’s an interesting contribution to what we’re all talking and writing about – Research Impact – by Warwick Anderson, Professor & Chief Executive Officer, National Health and Medical Research Council, in the always excellent, The Conversation. 

Shared here under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Quality not quantity: measuring the impact of published research

Judging the achievements of researchers should be much broader than just looking at their publications.

Warwick Anderson, National Health and Medical Research Council

Few things are changing faster in the research world than publishing. The “open access” movement recognises that publicly-funded research should be freely available to everyone. Now more than a decade old, open access is changing where researchers publish and, more importantly, how the wider world accesses – and assesses – their work.

As the nation’s medical research funding body, we at the National Health and Medical Research Council (NHMRC) mandate that all publications from research we’ve funded be openly accessible. We and the government’s other key funding organisation, the Australian Research Council, are flexible on how it’s done, as long as the paper is made available.

Researchers may opt for “green” self-archiving, where they publish in a restricted journal and archive a freely available version, or “gold” access, which allows readers to freely obtain articles via publisher websites.

Most Australian medical research publications will be available through university repositories and by researchers submitting to journals with copyright agreements that support the NHMRC open access policy. The university librarians have been especially helpful in ensuring that the institutional repositories are ready for this revision to the policy.

Initiatives such as PubMed Central (PMC) and European PMC are also making it easier to access published research.

Consumer groups want direct access, as soon as possible, to the findings of research – after all, they pay for it through taxes and donations to charities. This information helps in a time when we’re bombarded with health messages of sometimes dubious origin and where vested interests are often not disclosed.

In 21st century medical research, consumer and patient group members are often integrally involved in the research itself and are important messengers between researchers and the community.

Death of the journal impact factor

The open access movement is having a significant impact too on how we measure the impact of scientific research.

For too long, the reputation of a journal has dominated publication choice – and that reputation has been mainly determined by the journal impact factor. This metric reflects how frequently the totality of a journal’s recent papers are cited in other journals.

The journal impact factor dominated how universities, research institutes and research funding bodies judged individual researchers for many years. This has always been a mistake – the importance of any individual paper cannot be assessed on the citation performance of all the other papers in that journal. Even in the highest impact factor journals, some papers are never cited by other researchers.

Consumer groups want direct access, as soon as possible, to the findings of research.

The NHMRC moved away from using journal impact factors in 2008. So it was good to see the San Francisco Declaration on Research Assessment, which has now been signed by thousands of individual researchers and organisations, come out with such a strong statement earlier this year:

Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion or funding decisions.

Hear hear.

Measuring real impact

In health and medical research, choosing where best to publish a paper involves so much more than just the prestige (or impact factor) of the journal. In clinical and public health sciences, authors will want the right audience to read the article.

When published in a surgical journal, for instance, the findings of surgical research will influence the thinking of more people, and more of those who should read the research, than when published in a more general journal, even if the latter had a higher impact factor. Similarly, public health researchers want to publish where public health policymakers and officials will read the article.

A single paper in the wide-reaching Medical Journal of Australia – which could change health policy and practice affecting thousands of Australians – may be of greater impact than a paper in a high-impact journal that very few people read.

All this has implications for peer review and the judgement of “track record”. Judging the achievements of researchers must amount to much more than simply counting the number of publications and noting the journals’ impact factors.

I agree with Science editor-in-chief Bruce Alberts, who recently argued that formulaic evaluation systems in which the “mere number of a researcher’s publications increases his or her score” create a strong disincentive to pursue risky and potentially ground-breaking research, because it may be years before the first research papers begin to emerge.

Researchers want to publish where policymakers and officials will read the article.

The NHMRC has long asked applicants to identify their best five papers and, in 2014, we will be asking applicants to identify them in their track record attachment. This will make it easier for reviewers to look at these papers and evaluate them.

In the words of Bruce Alberts, analysing the contributions of researchers requires

the actual reading of a small selected set of each researcher’s publications, a task that must not be passed by default to journal editors.

There is one other potential implication of focusing on quantity rather than quality. It is often alleged (though the evidence is scant) that the pressure to produce many papers and to publish in journals with high impact factors explains some of the apparent increase in publication fraud in medical research.

This may or may not be true, but focusing more on the quality of a few papers, rather than just counting the total number of publications and being overly influenced by the reputation of the journal, can help ameliorate the publish-more-and-more syndrome.

Nothing stays the same in science and research. Publishing is set to change further. The democratisation of publishing began with the internet and has a long way yet to run. The challenge for researchers, institutions and funders will be to identify, protect and encourage quality and integrity.

Read more Conversation articles on open access here.

Warwick Anderson, Professor & Chief Executive Officer, National Health and Medical Research Council

This article was originally published on The Conversation. Read the original article.

Predatory publishers and events

Hi everyone,
As of early this year, the influential and often controversial index of so-called ‘predatory publishers’, Beall’s List, went dark. We’ve mentioned this resource regularly here, so an update is in order. The credentials of possible publishing options will remain an ongoing concern for all players in the world of academia, so I’m sharing this very interesting, practical post from the good folk at The Research Whisperer (thank you!!), for your interest. Next time you’re solicited out of left field by a title you have no working relationship with, think about this.

The Research Whisperer

Excerpt from academic spam I received on 2 Feb 2017.

It seemed like such a good idea at the time.

‘Let’s write something on predatory publishing!’ I said.

‘Let’s talk about all that academic spam we get!’ I said.

I even roped in my fab colleague from La Trobe’s Borchardt Library, Steven Chang (@stevenpchang), to write something, too. He was keen. We swapped links on email and Twitter.

Then the groundbreaking resource, Beall’s List, officially went dark. It can still be salvaged in Wayback form (that is, a cached version) but it won’t feature updated information anymore.

For me, not having Beall’s List active is a big blow against the tracking of, and education about, predatory processes in contemporary scholarship. I used it all the time and, though Beall is not without his critics, I found it to be of strong value and an excellent way to build awareness…


Research and writing in the shadow of reporting and evaluation frameworks

With ERA 2018 just around the corner, I’ve been spending a bit of time with various researchers within the Faculty, discussing strands of ERA strategy in an attempt to define ‘excellence’ and to focus on achieving more of it going forward. Discussion inevitably touched upon a few core points of tension that arise when the pressures of institutional reporting expectations compete with individual goals and career stages.

How, for example, to reconcile the pragmatics of publishing and the publishing industry with the emphasis on publishing for excellence and impact?

How to navigate the quantity vs quality dichotomy, when both are required in different contexts and actively sought in different circumstances?

What might I need to sacrifice in order to achieve excellence?

When and how are excellence and impact competing against each other?

Difficult, relevant questions that have been raised by different members of the research community I work with on a daily basis.

Today’s post will share an excellent article by UK-based researcher Sharon McCulloch, which touches on a number of these questions but focuses on the all-important process on which a significant part of impact and excellence depends: academic writing itself. Her piece, “The importance of being REF-able: academic writing under pressure from a culture of counting”, draws on a two-year study that examined academics’ writing practices in the contemporary workplace.

It details “tensions around the ways in which managerial practices interact with academics’ individual career goals, disciplinary values and sense of scholarly identity,” which I’m sure regular readers of this blog could well identify with.

The drive toward research impact can be evaluated from many angles and a thorough consideration of the concept must include the experience of researchers themselves as writers and communicators.

The following article was first published in the always excellent LSE Impact Blog on February 9th, 2017 and is shared here under a Creative Commons Attribution-ShareAlike 3.0 Unported License.


Academic life is diverse, including research, scholarship, teaching, and public engagement. But the principal role of an academic is to produce, shape and distribute knowledge. Writing is central to this endeavour, but academics’ writing practices have come under pressure from several directions, such as the increasing marketisation of higher education and changes in the digital landscape, both of which have brought about new forms of writing as well as changes to existing practices.

For the past two years I have been working on an ESRC-funded project investigating academics’ writing practices in the contemporary university workplace. The project looks at how knowledge is produced through academics’ writing practices, and how these are shaped by, among other things, managerial practices and evaluation frameworks. We have interviewed 51 academics in three disciplines (mathematics, history and marketing) across three higher education institutions in England, as well as administrative staff and heads of departments. These interviews have revealed tensions around the ways in which managerial practices interact with academics’ individual career goals, disciplinary values and sense of scholarly identity.

Universities in the UK are subject to a national Research Excellence Framework (REF) aimed at rating the quality of research and allocating funding accordingly, with higher rated institutions receiving more funding. As well as direct funding, a high score on the REF also links to rankings and league tables, which in turn affect an institution’s ability to raise income from tuition fees. Given the importance of high REF scores, most universities and departments have policies in place to encourage their academic staff to produce work likely to score highly in the REF.

Our study found that academics’ capacity for career advancement was closely coupled to their universities’ strategic interests in performing well on the REF. For example, during probationary periods, new lecturers were required to publish certain numbers of papers of a specified quality. For academics working in marketing, quality was determined using the Chartered Association of Business Schools’ (ABS) annually published Academic Journal Guide, which ranks business and management journals on a star-rating system similar to that employed by the REF. This target list of journals was used in all three of the marketing departments participating in the study, and the star-rating system employed by the ABS was deeply embedded in discourse of the marketing academics we spoke to about scholarly writing and academic success. Each talked about their own publications in these terms, as seen in the comment below:

“I don’t get any hours for writing. I don’t get any hours for research whatsoever. So basically, unless your work is at least three-star, four-star, then you don’t get any hours for it because although it’s two-star material and it is REF-able, they’re only interested in three and four-star.” (Lecturer in Marketing)

This comment illustrates both the extent to which the REF has become naturalised in the rhetoric of what academic success is understood to mean, and the dangers of any ranking system. Our participants repeatedly used the adjective “REF-able”, meaning that one has enough publications of sufficient quality within the REF period (five to six years) to be included in the department’s submission to the REF, as shorthand for talking about their career status. Although being REF-able was seen as a prized benchmark for academic success, it was far from being sufficient. As the comment above shows, having enough publications (up to four in the 2014 exercise) to be REF-able is unhelpful if they happen to be rated below three-star level, otherwise defined as “internationally excellent”.

It also became clear that, although heads of department described their departmental systems of evaluating academics’ publications as ways of “rewarding” good publications, most academics saw this performance management as something closer to a threat than a reward. They talked of struggling to achieve what they saw as a small and moving target. For example, some key target journals in marketing moved down the ABS rankings in 2015, yet the pressure on academics to “hit a four-star” remained. Even academics who were, by these measures, performing well expressed anxiety about being unable to sustain such a high level of ‘excellence’ year in, year out, and about what might happen if they failed to do so.

Evidence emerged from our interviews of what could be described as strategic behaviour in regard to meeting these performance targets around scholarly publication, but while strategic behaviour enabled academics to meet their targets, they paid a high price for this in terms of their sense of disciplinary identity. One professor in marketing described publishing her research in journals outside of her disciplinary area: “now I target management journals, which is one way of hitting a four star”. This enabled her to maximize the prospects of career advancement, but it gave her something of an identity crisis about who she was as a scholar.

Marketing was not the only discipline where fulfilling managerial demands pertaining to research evaluation conflicted with disciplinary values. Over and over again, academics described peer-reviewed journal articles when asked about the sort of writing they were expected to produce. However, what they were expected to produce and what they wanted to produce was not always the same. Historians talked about the scholarly monograph as their most valued form of disciplinary writing. One historian described the monograph as “the heavyweight, solely authored piece of research work, which is usually the result of years of research in archives”. One problem is that the time it takes to produce a monograph may extend beyond the REF period, so in order to be REF-able, historians also need to be publishing articles that can be written relatively quickly, allowing more to be produced in the same timeframe.

Another historian, in her first lecturing post, described a tension relating to this for her first published book, based on her PhD research. She saw this first monograph as potentially career-defining, so had wanted to do it well, to augment and extend her doctoral research rather than merely making minimal changes and publishing it in book form. However, she also knew that without the book, she would struggle to secure permanent employment, so felt pressured to compromise the quality of the research in order to publish quickly and make progress in her career.

We also asked the academics in our study about their use of digital technologies, including whether they engaged in any forms of online writing by, for example, contributing to blogs, tweeting or other emerging means of digital scholarship. Some refrained from these new forms of writing on the grounds that they were perceived to be trivial or self-aggrandizing. Others expressed interest and enthusiasm but nevertheless did not devote much, if any, time to these digital platforms. Their comments about lack of time were often qualified by reference to the belief that such writing did not count or was not valued, as seen in the comment below:

“A lot of the work is grey literature where people have written blog pieces. I think that’s opened my eyes to what’s possible in that area but yes, if there’s time – I think it’s always a question of time. Again, that work is not valued by the university as far as I can see.” (Professor in Mathematics)

Non-traditional genres of academic writing were not perceived to meet the criteria departments have in mind when they stipulate that, for career progression, an academic needs a track record of ‘good publications’. Understanding of what counts as writing worth doing does not stretch to emerging online genres, despite the increased attention paid by universities to public engagement and dissemination of research findings to a wider, non-academic audience.

The picture that emerges is one in which academics are positioned as managed professionals whose personal goals are expected to be closely aligned with the university’s objectives to perform well in the REF, move up the league tables, attract students and secure income. In a neoliberal culture of measuring outputs, the range of forms of knowledge creation that are valued appears to be narrowing. High-prestige journal articles are seen as essential to career success, and must be ranked at three or four stars to secure rewards such as promotion or time for writing. The academics in this study strove to shape their writing around these targets, even though they saw them as unrealistic or out of sync with disciplinary values. Because scholarly writing and disciplinary identity are so closely intertwined, for many academics this pressure engendered something of an existential crisis about the true purpose of their writing.

Everything you wanted to know about Research Impact (but were afraid to ask): a Handbook for Social Science Researchers and more

Welcome back, everyone, to what promises to be another eventful year. Change is certainly upon us here at the Faculty of Education and Arts, with the newly formed School of Creative Industries taking shape, bringing with it a cohort of researchers from the disciplines of design, communication and natural history illustration. It’s great news and can only contribute to an increasingly dynamic and interdisciplinary research environment. So a special welcome to all concerned.

Things will also be heating up during the year as preparations for the next ERA (Excellence in Research for Australia) round, ERA 2018, get well and truly underway. Combine this with the inaugural Engagement and Impact Assessment Pilot (EIAP), which will involve multiple disciplines from FEDUA – including history; philosophy and religious studies; education; creative arts and writing; plus language, communication and culture – and what you get are some busy but hopefully instructive times ahead. Both of these exercises demand a great deal of reporting. However, they will also present us with a valuable opportunity to showcase and perhaps redefine the research threads within the Faculty. What’s new? How are we engaging with industry, and with the ever-important digital sphere? You’ll hear more about this soon, but I will say that the measurements involved relate directly to many of the concepts we’ve been talking about in this blog since it started nearly a year ago. Speaking of which, we’ll continue to do our best to bring you informative and relevant posts on impact and research communications throughout this year as well!

To start, we will ease you back into the fray after what was hopefully a relaxing summer break. Followers of this blog will be aware of the LSE Impact Blog. Today’s post will give you the chance to have a good look at one of their truly valuable and comprehensive resources (if you haven’t already). The Handbook, Maximizing the Impacts of your Research, was produced with social science researchers in mind, but I think the content is equally relevant and applicable across the breadth of our disciplines. It’s a veritable Bible of ideas, strategies, tips and discussion around what research impact is and how it might be achieved.

As always, we encourage you to follow this great blog. Here’s what they had to say about it when it was first made public back in late 2011 as part of a consultation process:

The following excerpt is included under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

“There are few academics who are interested in doing research that simply has no influence on anyone else in academia or outside. Some perhaps will be content to produce ‘shelf-bending’ work that goes into a library (included in a published journal or book), and then over the next decades ever-so-slightly bends the shelf it sits on. But we believe that they are in a small minority. The whole point of social science research is to achieve academic impact by advancing your discipline, and (where possible) by having some positive influence also on external audiences – in business, government, the media or civil society.

For the past year a team of academics based at the London School of Economics, the University of Leeds and Imperial College have been working on a ‘Research Impacts’ project aimed at developing precise methods for measuring and evaluating the impact of research in the public sphere. We believe our data will be of interest to all UK universities seeking to better capture and track the impacts of their social science research and applications work.

Part of our task is to develop guidance for colleagues interested in this field. In the past, there has been no one source of systematic advice on how to maximize the academic impacts of your research in terms of citations and other measures of influence. And almost no sources at all have helped researchers to achieve greater visibility and impacts with audiences outside the university. Instead researchers have had to rely on informal knowledge and picking up random hints and tips here and there from colleagues, and from their own personal experience.

This Handbook remedies this key gap and opens the door to researchers achieving a more professional and focused approach to their research from the outset. It provides a large menu of sound and evidence-based advice and guidance on how to ensure that your work achieves its maximum visibility and influence with both academic and external audiences. As with any menu, readers need to pick and choose the elements that are relevant for them. We provide detailed information on what constitutes good practice in expanding the impact of social science research. We also survey a wide range of new developments, new tools and new techniques that can help make sense of a rapidly changing field.

We hope that this Handbook will be of immediate practical value for academics, lead researchers, research staff, academic mentors, research lab leaders, chairs and research directors of academic departments, and administrative staff assisting researchers or faculty team leaders in their work.”

Download the Handbook in PDF format

Good reading!

Making an impression online: how to build your digital presence and increase your research impact

Research isn’t just global – it’s digital. To make an impact in research you need to have a consistent online presence.

The good news about building your online persona is that consistency counts, so once you’ve got all the basics in hand, you can work at sharing them across a range of platforms. Think about how you wish to be viewed, and start building your professional persona. Here are the basics:

Your highlights on the web

If you’re a researcher at UON, the Research and Innovation communications team can help enhance your web presence by crafting a professional highlight story. Simply set aside 30 minutes and the team will come to you for an interview and photo shoot.

A highlight story is an easy-to-read overview of what you’ve done in your career – and what you’re focussed on in your research. It should give people an idea about what inspires you, and what your aims are.

It may cover:

  • Your undergraduate degree and what inspired you to study.
  • Postgraduate degrees – what helped you forge this research path.
  • Mentors – who helped you make an impact along the way.
  • Areas of interest – rather than homing in on a specific research topic, we like to keep it as broad as possible to open up possibilities for collaboration.
  • Key career milestones and projects.
  • What’s next? Where will the next five years take you?


Above: a recently published highlight story on Professor Cathy Coleborne.

Once the article is written, you’ll receive a draft copy to edit and approve before anything goes live. Once live, it becomes a great resource for collaborators to learn more about you and your research.

If you’re a researcher at UON, contact us to set up your highlight story.

In the picture

A quality, high-resolution head-and-shoulders picture is essential for all researchers. Use it across all your online platforms so that you’re recognisable and professional. Ensure your image is a web-optimised 300dpi JPEG, 250px high by 150px wide.

It’s worth getting a professional to take a good headshot. They know what’s flattering, how to find good locations and how to get the lighting just right. To keep it simple, look to set up a ‘Profile Pic Photo Shoot’ with a few colleagues and get in touch with the Research Comms team (as above).

Be prepared. Work out what you do and do not like about photos of yourself. Get out your phone and snap a few selfies – you’ll soon work out which angles do and don’t work, and find your most flattering. Dress in smart work clothing, but keep prints (particularly stripes) to a minimum. Bold, bright colours work well if that’s what suits you; otherwise classic or neutral shades work just as well.

Brief bio

Can you describe who you are and what you do in 150 words? Get practising. A good biography is brief and succinct. Distil your work down to its essence: your title, location, and a brief description of your research using the main keywords.

Include a link to your university profile, and any professional social media accounts. Read bios in conference papers or online to see what works, and what doesn’t.

Get social

Are you on Twitter? Sign up now. No, it’s okay, I’ll wait…

Twitter is the single most effective tool for researchers to find collaborators, build networks and share their research. Once you’ve got your profile set up, it’s time to find your tribe. And don’t worry, they’ll be on there. Search for researchers in a field similar to your own, then scroll through their ‘following’ and ‘followers’ lists to find like-minded researchers.

Be a caring, sharing person on Twitter. Don’t just share your work, share the work of colleagues, collaborators and people you’ve never met who are doing something interesting. Cultivate a Twitter feed that’s vibrant and interesting and gives a great understanding of who you are and what you do.

Again, contact the Research Communications team and we can organise a Twitter seminar for 5–30 people in just 30 minutes.

Follow the leaders

The best way to succeed online is to assess what others are doing and work out why they’re doing it well. A well-optimised online presence makes you more attractive to collaborators, to those awarding grants, and for promotion.

Look to the leaders – those who get the grants, forge the collaborations and seem to be in the news. Search for them on Google and check out what comes up on the first and second pages, then look to replicate it.

And don’t forget, we’re here to help!

A big, big thanks to our guest blogger for this post. Linda Drummond is the Research Communications Coordinator for Research and Innovation Division at UON. She curates their wonderful Twitter account @UON_Research, so get following. This account is a prime example of successful and forward-thinking Tweeting for research impact.

Last post for the year! I will be back in 2017, along with my colleague, Jessie Reid. Enjoy the break, everyone!


Building an impact literate research culture: some thoughts for the KT Australia Research Summit

Very interesting post based on the author’s presentation at the current Research Impact Summit. If you’re a Slideshare user the accompanying slides can be downloaded here.

With the ERA Engagement and Impact Assessment on the horizon, Julie Bayley’s experience, drawn from the UK Research Excellence Framework 2014, is valuable. Definitely some key points made here, so take note:

Thanks, Julie!

Julie Bayley

I was delighted to be asked to speak at the KT Australia Research Impact Summit (November 2016). In my talk, I discussed many of the challenges of introducing an impact agenda into the academic community, and how impact literacy can help. An extended version of my slides is available here, but let me talk through the key points below.

Consider impact. A small word. A simple, standard part of our vocabulary meaning influence or effect. But go from (small i) impact to (big I) Impact, and you’ve suddenly entered the domain of formal assessment and causal expectations. Arguably the UK has been the first to really take the Impact bull formally by the horns through the Research Excellence Framework 2014, but of course efforts to drive research into usable practice are far from unique to this little island. Whilst every country is rich with learning about how knowledge best mobilises within its own context, the…


Collaboration: what do you think?

Acknowledgement: this post draws from Jenny Delasalle’s The Research Whisperer post, “How do you find Researchers who want to collaborate?” Used under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Research collaboration has never been more topical. Institutions are including it in KPIs; it figures in metrics data; and within an increasingly global and digital environment it is becoming ever more important. The potential positives of collaboration are hard to ignore: the wider and more dynamic your network, the stronger the output. Indeed, collaboration is being viewed as a significant driver of research quality.

Or as Jenny Delasalle, Librarian and editor of the Piirus blog, describes: “Contacts and collaboration are increasingly important to researchers. From the sparking of early ideas, to co-authorship which increases outputs and helps authors to reach new audiences, and on again to partnerships with organisations or industry which offer sources of funding and routes to impact: collaboration activities are increasingly seen as a part of research excellence.”

So what do researchers themselves think about collaboration? In late 2014 Piirus – a not-for-profit networking service provided by the University of Warwick, with a focus on promoting collaborative opportunities within the global research community – conducted a survey of over 300 researchers across the globe from a variety of disciplines. The aim was to get insight and opinions from the primary source – researchers themselves – about networks and collaborations, and about what researchers are interested in (or willing) to share on online platforms such as Piirus.

Here’s what they found:


We’re very interested to know what you think about these issues. Do any of these conclusions resonate with you? Are you surprised by any? Do you personally disagree? If so, where and why do you differ?

Let us know via the comments section here or via Twitter to @EdArtsUON or join the conversation by tagging #collaborateUON.