[Sponsored by RISIS]

14 September, 16:30 – 18:00 · Room 1 · ground floor

2nd Plenary Session.
Roundtable: Infrastructures for Inclusive and Open Science and RISIS presentation
Panelists: Éric Archambault (Science-Metrix, Montréal, Canada), Valentin Bogorov (Thomson Reuters, Moscow, Russia), Abel Packer (SciELO, São Paulo, Brazil), Hebe Vessuri (IVIC, Venezuela).
Chair: Ismael Ràfols (INGENIO, CSIC-UPV, València, Spain)


The infrastructure for information on S&T has a strong influence on the patterns of communication and the visibility of science. Scientific journals and bibliographic databases shape the production, circulation and consumption of knowledge. Since the mid-20th century, science dynamics have been influenced by Garfield’s notion that a small set of “core journals” published most of the research of significance – those covered by the ISI (now Web of Science) database. These core journals of ‘international’ scope, which ‘controlled’ most scientific communication, were mainly published in a few Western countries. The databases were often used by managers to stratify science into high-quality cores (top-quartile journals), second-class science and ‘invisible science’.

Since the 1980s, researchers in the global south and in some disciplines, such as the social sciences and humanities, have increasingly voiced discontent about Garfield’s notion of a ‘core’, in particular about its consequences for the invisibility of ‘peripheral’ journals and the effects of journal stratification on knowledge production. For example, there have been worries about the suppression of research on topics relevant to developing countries or marginalised populations, which is often published in local journals in languages other than English.

The great changes in ICT over the last two decades have also facilitated the pluralisation of scientific information. New databases, such as SciELO or Redalyc, have appeared that explicitly aim to fill gaps in coverage. Moreover, open access technologies can make ‘local’ journals accessible across the globe. New forms of science dissemination, such as blogs or Twitter, and new forms of publishing (e.g. data sharing) are also making scientific information more diverse. However, this succession of transformations towards more ‘open science’ poses major challenges to the governance of the information infrastructure.

In this round table we aim to discuss, first, the diverse strategies for developing infrastructure with an open and comprehensive coverage and, second, the governance of the scientific information infrastructure in the face of new forms of communication.

First, current general databases have limited coverage, while more comprehensive databases are specific to certain regions or sectors. Thus, most S&T indicators and benchmarking exercises are based on conventional ‘core’ databases. Should more comprehensive databases be developed, mixing different types of science – e.g. more ‘local’ and more ‘universal’? How should indicators based on these databases be interpreted? How is open access best provided and maintained?

Second, the development of robust and publicly trusted indicators needs an open and transparent data infrastructure. What type of governance should be established to ensure public critical analysis? Which types of organisations should manage the data? Should these be distributed or centralised systems?

Previous studies of standards and infrastructure have shown the deep political implications of apparently technical choices. If we aim to make science more open, democratic and inclusive, we need to be highly reflexive about how we develop these infrastructures.


[Sponsored by IFRIS]

15 September, 12:30 – 13:30 · Room 1 · ground floor

3rd Plenary Session.
Global networks, internationalization and local research agendas: indicators for benchmarking or context specific?
Panelists: Jonathan Adams (Digital Science, London, UK), Rigas Arvanitis (Director of IFRIS, IRD, Paris, France), Sami Mahroum (INSEAD Innovation and Policy Initiative, Abu Dhabi, United Arab Emirates), Mónica Salazar (Inter-American Development Bank, Bogotá, Colombia).
Chair: Richard Woolley (INGENIO, CSIC-UPV, València, Spain).


It is widely accepted that ‘global science’ – the globalization of scientific work, collaboration and coordination – has developed rapidly in the era of mass long-haul travel and has intensified with the arrival of the ‘Internet age’. The ideal of a global science network, through which access and contribution to science is no longer structured by zones of inclusion and exclusion, is said to be within reach. In this so-called ‘flat-earth’ view of globalized science, physical location and local resources are secondary to international networks. Strategies for raising scientific quality are contingent on plugging into the global networks. Through these networks, countries with lower levels of resources (human capital, research infrastructure, finance) are expected to access advanced knowledge and techniques. This is assumed to lead to a faster-rising level of competence, underpinning the advancement of a science- and innovation-driven mode of socio-economic development.

Indicators of ‘internationalization’ thus become important for monitoring global connectedness as a proxy for a network model of development. Countries that map and understand their collaborations can leverage their strengths and use policy interventions to build global links in targeted areas. Indicators play an important role in highlighting opportunities and progress in connecting to key global channels. Research quality is assumed to rise in concert with internationalization indicators, lifting downstream activities and opportunities for commercial exploitation. Indicators that seek to benchmark or produce universalized measures (such as the global university rankings) are therefore regarded as relevant and seen as having positive impacts on the direction of policy development.

In contrast to this vision of global equalization, another interpretation of the globalized organization of science sees the global networks as a perpetuation of asymmetric relations of power and control over the scientific agenda. In this view, global networks mainly operate to export the research agenda of the rich and successful countries to distributed research groups in other locations. The development of a science that is not just of high quality but also of relevance to its context may be hampered by focusing on the research questions which are of interest to researchers and funding agencies in highly developed countries.

Indicator development faces other challenges according to this view, in which the scientific world is very far from ‘flat’. Different types of indicators might be needed in different contexts. ‘Universal’ measures such as global rankings may be useless, or even misleading, for shaping policy agendas in these contexts.

Taking these polar views, we can see that the same global network could be interpreted in two very different ways. Perhaps the challenge is to find the complementarities between these two visions. Perhaps a more reflexive politics of responsible indicator development is needed. What exactly should be the role of state administrations in this contested terrain, including those charged with capturing and presenting data for S&T information systems? This session will bring these issues of the global and the local/regional into focus and into question. It will provide an opportunity for robust debate and for challenging perspectives on the received vision of ‘global science’ and the indicators of internationalization that help to construct this vision.


[Sponsored by Thomson Reuters]

16 September, 16:30 – 18:00 · Room 1 · ground floor

5th Plenary Session.
Roundtable on “Use of indicators in policy and inclusive metrics”
Panelists: Richard Deiss (Directorate General for Research and Innovation, European Commission), Diana Hicks (Georgia Tech, Atlanta, USA), Slavo Radosevic (UCL, London, UK), Judith Sutz (President of Globelics & Univ. de la República, Montevideo, Uruguay).
Chair: Jordi Molas-Gallart (INGENIO, CSIC-UPV, València, Spain).


The STI conferences have long aimed to stimulate reflection on the use of indicators. Two years ago, in a plenary roundtable on “quality standards for evaluation indicators”, Diana Hicks launched the idea of a “manifesto” that would lay out some basic principles on the evaluative use of indicators. This led to the Leiden Manifesto for Research Metrics, a set of “ten principles to guide research evaluation”. The Leiden Manifesto has become an influential initiative to raise awareness of the challenges posed by the use of indicators in evaluation and, thereby, to inform policy decisions. The HEFCE report The Metric Tide also recommended general principles for the responsible use of research metrics, such as robustness, humility, transparency, diversity and reflexivity. Yet, although these principles have been well received, in many cases they do not provide solutions but state desirable goals. Agreement with the principles does not imply the capacity to implement them. How can we move from general principles to more specific advice?

This closing roundtable will discuss how to address the challenges posed by the use of indicators in policy, in particular in relation to geographical, cognitive or social areas that are not well described by current indicators.

First, we need to consider how indicators are used in the policy process. There is agreement among many evaluation practitioners that “quantitative evaluation should support qualitative, expert assessment”, as stated by the first principle of the Leiden Manifesto. Indicators, and the analyses based on them, should therefore inform but not substitute for judgement. How can this principle operate in practice? Is it applicable in all circumstances? Can the application of mixed methods to evaluation help address this problem?

A second challenge relates to the adequacy of currently available indicators for assessing institutions or research against their stated missions and their specific context. The indicators community has developed sensible methods for measuring performance against some missions in certain contexts. However, some fields, such as the Humanities, or missions, such as health care, and many regions, are currently poorly covered by indicators. How can we use indicators to inform policy when they are known to be biased, for example due to the uneven topic or country coverage of databases? How should we use indicators so that local research and innovation is made visible and valued? How can we, for instance, use indicators to capture the performance of an organisation against its research missions when these are peculiar to a local context? What are the opportunities for the development and use of alternative indicators that are inclusive of currently invisible or marginalised research and innovation?

We would like to invite the panellists and the audience to share ideas and collective initiatives so that our community can contribute to a wiser, more inclusive and responsible use of S&T indicators.