
2.4 Science, Technology and Innovation System governance

2.4.2 Evaluation

Evaluation is an essential element of the governance dimension of public policy. It is also the element that links most immediately and directly with the issue of performance-based public funding. Not so long ago, evaluation was a very small element of research governance: in almost all contexts, it was used only to award project or programme funding (i.e. the competitive, project-based modality of funding, with peer review and follow-up on results). Public authorities are now relying increasingly on governance through evaluation, as evidenced by the introduction of national-level evaluation systems (Whitley and Gläser, 2007)24.

24 Whitley, R. and Gläser, J. (eds.) (2007). The changing governance of the sciences. The advent of research evaluation systems. The Netherlands: Springer.

National evaluation systems are generally associated with the selective allocation of base-line research funding to research organisations, and they characterise performance-based research funding systems (PRFSs). An example of such a system is the Research Assessment Exercise currently operating in the UK (see Barker, 2007)25.

Our research provided evidence that research evaluation is still problematic in Georgia. These problems concern the peer-review-based systems operated mainly by the SRNSFG and GITA, as well as research evaluation more broadly.

Basic principles of peer review are being challenged in Georgia. An example we want to highlight is the issue of the confidentiality of reviewers: it is with good reason that international practice is not to disclose the names of the reviewers of individual projects26. This very important principle was challenged in court last year and is being challenged again in 2018. We are fully in line here with the letter from the SRNSFG International Supervisory Board of 3 April 2017 to the Minister of Education and Science, which requests that the confidentiality of reviewers be respected for the benefit of Georgia’s science system.

Research performance evaluation is carried out to a degree by the GNAS. According to local accounts, this resembles an administrative reporting procedure more than an evaluation system akin to those employed elsewhere. To begin with, reporting is conducted using descriptive narratives; no explicit indicators, qualitative or quantitative, are specified27. Our interviewees reported that there is no follow-up on the evaluation and that its outcome is not linked to any funding or other decisions28. This is most likely because the evaluation is carried out by the Academy of Sciences which, as a reputational rather than an executive body, is unable to translate the results of the evaluation into policy action.

25 Barker, K. (2007). The UK Research Assessment Exercise: the evolution of a national research evaluation system. Research Evaluation, 16(1).

26 Only if confidentiality is guaranteed can reviewers go about their work without fear of being harassed after funding decisions. If names are disclosed, experts will abstain from performing reviews, and the whole merit-based system of project selection will thereby be undermined. In line with good practice (and similar to the EC in its Horizon 2020 programme), the SRNSFG publicises the list of reviewers who have been consulted across its programmes, but it does not provide the names of the reviewers of individual projects.

27 We also could not find evidence that this information is being processed and assessed in a systematic and transparent way.

28 In fact, the director of one of Georgia’s research institutes mentioned that, although the institute stopped filling in the evaluation report years ago, there have been no discernible consequences.

Table 3: Overarching problems of the STIS in Georgia
(Key issues of the research and innovation system in Georgia at a glance, by research and innovation system dimension)

Research governance
• ‘Blind’ policymaking
• Insufficient level of stability in the research system hinders the organic development of research
• Centralisation at the highest political levels (Office of the Prime Minister)
• Hindered research and innovation prioritisation process
• Lack of flexibility of the STIPS
• Lack of coordination and collaboration at the operational, ministerial level
• The above contribute to the fragmentation of the research system (research survival in the hands of individuals and groups rather than organisations)

Research organisations
• Fragmented research system
• Sub-critical mass in terms of researchers, research facilities and equipment
• Research institutes and universities are not integrated
• Research institutes, research labs and sectoral research units do not collaborate
• ‘Political’ tensions between different research performers (e.g. institutes and universities)
• Limited strategic capacity of the universities
• Difficulties maintaining research infrastructure and facilities
• Experience of internationally successful groups is difficult to reproduce
• Reduced capacity for carrying out research
• Reduced capacity for internationalisation of the research system
• Reduced capacity for establishing links with industry (inadequate research infrastructure and facilities; mismatched competences)

Funding arrangements
• Low level of research funding (certainly low for the breadth of the research system)
• De facto no base-line funding for research; resource allocation from the MES to the former research institutes of the GNAS covers salaries at a very basic level
• No reliable data on funding streams in the Georgian STIS
• Low level of industry contribution (probably linked to low absorption capacity)
• Ad hoc funding from international sources
• Still relatively low level of variety in research funding organisations (no private foundations or charities)
• Different funding rules for different research actors
• No level playing field in competition for resources
• Lack of resources at the level of research organisations obstructs risk-taking in research
• Research system starved of the basic conditions for research (heat, energy, chemicals, tests, computing, etc.)
• No incentives for researchers to seek (and maintain) collaboration with industry
• Tapping into the resources of wealthy benefactors may be a way to increase the level of funding for research in Georgia

Evaluation regime
• Evaluation seen as a pointless administrative burden
• Disconnect between research evaluation and policy action (the GNAS carries out the evaluation but is not in a structural position to implement action)
• Research evaluation is more an administrative reporting procedure than a precursor to policy action (no transparent indicators; collects descriptive narratives rather than data; does not inform action)