Information Systems Journal | 2019

Do we care about the Societal Impact of our research?

 
 

Abstract


The nature of “societal impact”, and the extent to which our research has any, is loaded with potential dissent. However, this is not a topic that we can avoid much longer. Research evaluation exercises, as conducted by funding bodies, are sharpening their expectations that our research should have demonstrable impact in the form of societal value. For instance, the Hong Kong Government's Research Grants Council, in its guidance notes for the 2020 Research Assessment Exercise, specifies that 15% of research assessment will be attributed to impact. Impact is defined as “the demonstrable contributions, beneficial effects, valuable changes, or advantages that research qualitatively brings to the economy, society, culture, public policy or services, health, the environment or quality of life; and that are beyond the academia.” Clearly, research funding agencies will no longer be satisfied with claims that our research has impact merely because we use it in the training of our students, because it is well-cited by other academics, or because it is published in reputable journals. One reason is that these journals are themselves accorded credibility by virtue of their impact factors or inclusion in indexes of journal quality such as, in our own field, the AIS Senior Scholar basket of eight journals, as well as the UT Dallas and FT50 lists of top business journals. Instead, funding bodies will seek to identify how our research contributes to measurable impacts for non-academic stakeholders in industry, government, and society at large. Such demands seem justified where the source of university funding is the public purse. In these situations, it seems reasonable that at least some discernible societal value should emerge from research and, to ensure our accountability, it is probably necessary to evaluate that value in some way.
However, such demands may well be unwelcome for academics accustomed to playing the H-index game, viz.: the writing of peer-reviewed journal papers (the more the merrier) that are cited exclusively by other academics, in pursuit of ever-higher publication counts and H-index citation scores. The visibility of this game is all too clear on Google Scholar, which is increasingly used as a source for ranking academics: see, for instance, the University of Arizona's H-index of IS scholars. Another, more select publication game (drawing on SCI/SSCI data) can also be undertaken: see, for instance, the AIS's indices of IS scholars. Given the new focus on evaluating impact directly, it is apparent that funding agencies are increasingly sceptical of academic claims of a strong relationship between the number of citations and the extent of societal impact. Research that fuels these claims risks being seen as self-serving in its pursuit of behaviour that is fundamentally unethical. The journals themselves, proud of their impact factors, will also come in for the criticism that they have helped to foster the same skewed climate in which societal impact is neglected. Accordingly, there is likely to be increased societal pressure for a new research evaluation regime. In order to address this issue, a number of large-scale initiatives have been mounted recently. Galletta et al. (2019) list some of the most prominent ones, viz.:

Information Systems Journal, Volume 29, Pages 989–993. DOI: 10.1111/isj.12259