Learning more from evaluations - the use of a thematic approach and impact modelling in evaluating public support measures

Jari Romanainen, Tekes

Introduction

Competitiveness and economic growth increasingly depend on the capability to produce innovations. More and more private companies, regions and nation states are choosing a strategy of innovation-driven growth. Since the knowledge and skills on which innovations are mostly based are cumulative, and there is no empirical evidence that investment in knowledge and skills has diminishing returns, continuously increasing investment in research, technology, development and innovation (RTDI) is required to maintain and improve competitiveness in global markets.

This means that private companies, regions and nation states are facing the need to increase their investments in RTDI. The European Union has responded to this need through the overall target set by the European Council of making Europe the most innovative region globally and raising R&D investment to 3% of GDP by the year 2010.

Although it is not only about money, increasing investment in RTDI is the key element in any innovation or competitiveness policy. Although most of the increase is expected to come from the private sector, public spending on RTDI also has to increase. As significant amounts of public money are directed to RTDI, it is quite natural that the need arises to understand the effects and impact of public RTDI investments on competitiveness, economic growth and social development. Do these investments contribute to competitiveness, economic growth, social welfare and the environment, and if so, how? Are the funds being used as effectively and efficiently as possible?

Evaluation of public RTDI policies and funding schemes is thus becoming an increasingly important feature of policy design and policy implementation. A large number of evaluations are commissioned and performed for the purpose of understanding the impact of public funding. Unfortunately, quantity is a poor fix for a lack of quality, i.e. of understanding of innovation processes, systems, mechanisms and appropriate evaluation methodologies. Although the quality of evaluations is improving, there are still strategic issues that need further attention. One of these is the ability of evaluation to support policy design and implementation processes at the strategic level. Another is the ability of evaluation to facilitate learning amongst actors within the innovation system. Both of these issues will be discussed in this paper in the context of practical experiences gained from applying impact modelling and thematic evaluations in the portfolio of Tekes National Technology Programmes.

Evaluation in the context of modern innovation policy

The challenges and how to approach them

From the policy maker’s point of view the question is: what is the impact of public RTDI funding on economic growth, social welfare or environment? The question might be simple enough, but the answer is extremely complex. Innovations are created in complex interactive processes affected by many different macro- and microeconomic, social, political and cultural factors. Furthermore, whereas some impacts can become visible within a few years, wider impacts are typically realised in much longer time frames.

This brings out one of the key challenges in evaluation: attribution. The shorter the time frame, the easier it is to show evidence of causality. However, most impacts become visible only after several years, and the longer the delay, the harder it is to attribute them to any specific public RTDI support scheme. It is therefore imperative to understand the mechanisms through which impacts take place. Impact mechanisms allow for the design of indicators, which can at least show that impact can be expected in the longer term.

The early understanding of the impact of public RTDI funding was based on the linear model of innovation: investments in science lead to economic growth because the new knowledge and skills created produce innovations. The problem with this approach is that while scientific research is important, it does not in itself ensure innovation. Current innovation policy is based on a systemic approach, which recognises the interactive and complex nature of innovation processes and the importance of understanding the systemic context. The problems with the systemic approach lie mainly in two aspects. First, it attempts to draw boundaries where boundaries are blurring, e.g. because of globalisation, networking and the changing division of labour between public and private actors. Secondly, it does not truly capture the dynamics of the changing context. The latter point in particular suggests that instead of focusing on structures and schemes, one should stress the importance of the various innovation governance processes and, through them, the capability to adapt to changing challenges. All of these key aspects of modern innovation policy challenges – systemic dynamics, blurring boundaries, innovation governance and eventually the adaptive capability – have a decided impact on policy design, implementation and eventually on evaluation.

Policies targeting complex interactive processes between various actors with different motivations require a mix of schemes. These schemes are often designed to target specific systemic failures in a specific context at a given time. The effectiveness and efficiency of the whole mix, the portfolio of schemes, is determined to a large degree by two main factors: coherence and the ability to adapt and learn. Policy coherence ensures that individual schemes are designed and implemented with a full understanding of the rest of the portfolio, so that all schemes are in line with and support each other. The ability to adapt and learn ensures that individual schemes are launched, renewed or terminated according to the systemic failures identified.

Understanding the complexity and dynamics of innovation systems and processes and targeting them with a portfolio of policy measures is like trying to hit a continuously changing set of moving targets with an arsenal of weapons. The challenge is to hit as many targets as possible at the right time and with as few weapons and as little ammunition as possible. Hitting targets with minimum ammunition requires that several weapons deliver their ammunition to the same target at the same time. It is important that the arsenal is up to date, because old weapons are in many cases ineffective against new targets. Building big and expensive weapons and ammunition might ensure that current targets can be hit, but can lead to lock-in problems, because big expensive weapons are arduous to update or dismantle. Policies should therefore be based on a mix of a relatively small number of well-designed basic schemes, which are flexible and can easily be re-targeted if the set or characteristics of the identified systemic failures change.

The other key challenge for evaluating policies and public support schemes is to do it in a continuously changing context. This means that even if the original rationale for a scheme and the subsequent activities during implementation were appropriate and valid when the scheme was launched, the situation might have changed dramatically by the time the scheme is evaluated. This raises an important question: how should evaluations be designed and implemented to help individual schemes and the portfolio of schemes maintain their relevance?

There are two main approaches to this question. One is that evaluation should not be considered a one-time activity directly linked to a specific scheme. Evaluation should be approached as a learning process consisting of a series of activities and tasks with the aim of continuously producing deeper insight into and understanding of the impact of public policy. Like any key process, it should be sufficiently planned, resourced and monitored as a process, not as a series of isolated one-time tasks.

The second approach is to move from evaluating individual schemes towards evaluating the mix of schemes in the context of systemic failures. This is considerably more challenging, but it also provides much better insight for policy design. Evaluating at the portfolio level can provide more understanding of the impact of the mix of policies, reveal potential mismatches in the portfolio and also provide more understanding of the real impact in the longer term, because a wider set of factors affecting the eventual outcomes and impacts of policies can be brought into the analysis.

Combining these two approaches does not necessarily mean a smaller number of larger evaluations. The key point is that evaluations are not designed as one-time exercises attempting to verify the impact of single schemes, but rather as a well-designed series of thematic, portfolio-level evaluations aiming to deepen the understanding of the impact of the whole mix of schemes.

The methodological challenges

One of the key challenges these new types of approaches pose to evaluations is that no single methodology is sufficient to produce the knowledge needed. Methodologies need to be combined and used in parallel to gain the necessary insight and understanding of the impacts and the relevant mechanisms.

The second challenge is the ability to move forward from static structural, project- and scheme-based methodologies towards more dynamic process-based methodologies. This means being able to analyse the quality and characteristics of innovation governance processes at and between various levels, from policy design and strategic intelligence to implementation and the practical management of schemes. One attempt to do this at the innovation system level is being made in the on-going MONIT project within the OECD.

A further aspect of selecting the appropriate methodological approaches is their fit to the overall innovation governance process and, within it, to the overall continuous evaluation process. Understanding the dynamics of open innovation systems requires a continuous evaluation process, which consists of a series of evaluations on various themes across the policy portfolio, supported by innovation-system-level analysis and evaluations. Setting up a continuous evaluation process or practice requires an overall evaluation strategy well integrated into the overall innovation governance process. Furthermore, the continuous evaluation function needs to be supported by sufficient and competent resources. This requirement is just as relevant for organisations commissioning evaluations as it is for those performing them.

Continuous evaluation through a series of evaluations is methodologically easier, since it allows focusing on specific themes. Traditional evaluations combine all relevant themes in a single evaluation. These themes or objectives typically involve very different mechanisms and time frames, which forces compromises in terms of methodologies. Thematic evaluations concentrate on one or a few themes, which allows a better selection of appropriate methodologies. However, this approach requires more sophisticated policy makers, since instead of giving simple answers such as "this scheme works well and that scheme does not", it gives answers such as "these mechanisms seem to work better than others in this context". Scheme-based answers must then be summarised from several evaluations made at different times, which poses a new challenge; a rough sketch of such a summarisation is given below.
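To illustrate the last point, the following sketch shows one simple way scheme-level conclusions could be compiled from several thematic evaluations carried out at different times. It is a minimal illustration in Python; the programme names, themes and findings are invented placeholders, not actual evaluation results.

```python
"""
A minimal sketch of summarising scheme-level conclusions from several
thematic evaluations made at different times. The findings below are
invented placeholders; the structure, not the content, is the point.
"""
from collections import defaultdict

# Each thematic evaluation contributes findings tagged with the schemes it touched.
thematic_findings = [
    {"theme": "networking", "year": 2001, "scheme": "Programme A",
     "finding": "broker services increased SME participation"},
    {"theme": "networking", "year": 2001, "scheme": "Programme B",
     "finding": "little evidence of new research-industry links"},
    {"theme": "internationalisation", "year": 2003, "scheme": "Programme A",
     "finding": "export growth concentrated in a few large firms"},
]

# Re-group the thematic results by scheme to rebuild a scheme-level picture.
by_scheme = defaultdict(list)
for f in thematic_findings:
    by_scheme[f["scheme"]].append(f"{f['year']} ({f['theme']}): {f['finding']}")

for scheme, notes in by_scheme.items():
    print(scheme)
    for note in notes:
        print("  -", note)
```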

Evaluation methodologies are typically divided into statistical methods, modelling methods, and qualitative and semi-quantitative methods. Statistical methods can be used relatively easily for thematic evaluations, since most of them are basically target-group based. The main problems with statistical methods are that it is difficult to capture and identify the impact of mechanisms with different time frames, and that it is difficult to capture all impacts (indirect impacts, externalities).

The main application area for statistical and econometric methods is evaluations looking at the impact of public funding in general, not one or more programmes specifically. These types of evaluations can typically also draw data from various public RTDI funding monitoring databases to complement the overall statistical information.
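As a rough illustration of the target-group logic behind such statistical methods, the following Python sketch compares the mean outcome of a funded group with that of an unfunded comparison group on synthetic data. All variable names and figures are assumptions made for illustration; a real econometric evaluation would also control for selection effects and time lags.

```python
"""
A minimal sketch of a target-group comparison of the kind used in statistical
evaluations of public RTDI funding. All data are synthetic and the variable
names (funded, employment_growth) are illustrative assumptions, not fields
from any actual monitoring database.
"""
import random
import statistics

random.seed(1)

# Synthetic firm-level observations: whether the firm received public RTDI
# funding and its employment growth over the follow-up period.
firms = [
    {"funded": random.random() < 0.4,
     "employment_growth": random.gauss(0.02, 0.05)}
    for _ in range(500)
]
# Add a small assumed effect for the funded group, purely for illustration.
for f in firms:
    if f["funded"]:
        f["employment_growth"] += 0.01

funded = [f["employment_growth"] for f in firms if f["funded"]]
unfunded = [f["employment_growth"] for f in firms if not f["funded"]]

# A naive difference in mean outcomes between the target group and the
# comparison group; real evaluations would correct for selection effects,
# e.g. by matching or regression.
diff = statistics.mean(funded) - statistics.mean(unfunded)
print(f"funded firms:   {len(funded)}, mean growth {statistics.mean(funded):.3f}")
print(f"unfunded firms: {len(unfunded)}, mean growth {statistics.mean(unfunded):.3f}")
print(f"naive estimate of funding effect: {diff:.3f}")
```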

More strategic evaluation of schemes calls for methodologies that allow a deeper insight into targeted companies, innovations and projects; methodologies that can illustrate the true impact of schemes as well as the key mechanisms through which the impacts are realised. Some qualitative and semi-quantitative methods, such as network analysis, case studies, foresight and assessment, and cost-benefit analysis, can be used for thematic evaluations and portfolio analysis. The problem with most of these is that they can be quite costly. The main differences in using these methods for thematic evaluations are that the target group is not projects but companies, innovations or similar, and that the analysis should be able to capture the impact of all relevant schemes and other factors, not only the impact of a single scheme.
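As an example of the semi-quantitative end of this toolbox, the following sketch shows a very simple form of network analysis: joint projects define links between participants, and partner counts give a crude indicator of connectedness. The participants and project compositions are invented for illustration only.

```python
"""
A minimal sketch of network analysis as a semi-quantitative evaluation tool:
joint projects define links between participants, and simple degree counts
indicate who the central connectors are. Names are invented for illustration.
"""
from collections import defaultdict
from itertools import combinations

# Each project lists its participants (companies, universities, institutes).
projects = [
    ["Company A", "University X", "Institute Z"],
    ["Company A", "Company B"],
    ["Company B", "University X"],
    ["Company C", "Institute Z", "University X"],
]

# Build an undirected collaboration network from joint project participation.
links = defaultdict(set)
for project in projects:
    for a, b in combinations(project, 2):
        links[a].add(b)
        links[b].add(a)

# Degree (number of distinct partners) as a crude indicator of connectedness.
for participant, partners in sorted(links.items(), key=lambda kv: -len(kv[1])):
    print(f"{participant}: {len(partners)} partners -> {sorted(partners)}")
```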

Modelling methods are methodologically closest to the thematic approach and impact modelling. The main problem with these is, again, that they can be quite costly, since they should cover both the mix of schemes and the wide range of impacts. The costs can be minimised by integrating the modelling approach into the overall scheme design. Scheme monitoring systems can then be designed to produce most of the necessary data, which would otherwise be quite costly to obtain, allowing modelling methods to be used at an affordable cost.
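The following sketch illustrates, under purely hypothetical assumptions, how an impact model could be written down as a chain from programme activities through mechanisms to expected longer-term impacts, with each mechanism carrying the indicators the monitoring system would need to collect. Listing those indicators at design time is what allows the monitoring system to produce the data needed for later modelling as a by-product of normal programme management.

```python
"""
A minimal sketch of an impact model expressed as a chain from scheme
activities through mechanisms to longer-term impacts, each mechanism carrying
the indicators the monitoring system should collect. The mechanisms and
indicators named here are illustrative assumptions only.
"""

impact_model = {
    "collaborative R&D projects": {
        "mechanism": "knowledge and skills transfer between participants",
        "indicators": ["joint publications", "staff mobility", "follow-up projects"],
        "expected_impact": "new products and processes (3-7 years)",
    },
    "programme networking services": {
        "mechanism": "new research-industry and firm-to-firm links",
        "indicators": ["new partnerships formed", "SME share of participants"],
        "expected_impact": "broader diffusion of technologies (5-10 years)",
    },
}

# Listing all indicators tells the scheme designers what the monitoring
# system has to capture from the start, so the data needed for later
# modelling is collected as a by-product of normal programme management.
monitoring_requirements = sorted(
    {ind for link in impact_model.values() for ind in link["indicators"]}
)
print("Indicators the monitoring system should produce:")
for ind in monitoring_requirements:
    print(" -", ind)
```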

The use of thematic approach and impact modelling in the context of Tekes technology programmes

Tekes technology programmes

RTDI programmes typically range from large umbrella-type programmes to more focused and integrated programmes. Tekes technology programmes mostly fall into the latter category. They are typically designed to run for 3-5 years. Each programme includes a portfolio of collaborative RTDI projects executed at universities, research institutes and companies. During 2003 Tekes had 34 on-going national technology programmes, ranging from 10 to 120 million euro in total volume. On average, Tekes covers half of the programme costs; the rest of the money comes mainly from industry.

The early national technology programmes in the 1980s were mainly technology or industry oriented. The core of these programmes concentrated on adapting and developing specific technologies or improving the competitiveness of a specific industry. The second generation of technology programmes in the 1990s widened the approach from single industries or technologies to industrial clusters and specific application areas. This brought together several industries and technologies in single programmes. The current generation of Tekes technology programmes is built around key national challenges and focuses even more clearly on developing value chains and value networks, fostering new business opportunities and providing innovations to the wider economy and society.

The value of technology programmes can be approached from two directions. First of all, technology programmes are a policy instrument. As a policy instrument they typically target nationally identified priorities, collect a critical mass of actors and provide a platform for networking. They improve interaction between actors, which results in improved transfer of knowledge and skills, which in turn produces a base for innovations and economic growth.

The other way to approach technology programmes is to look at them from the participant's point of view. What is the added value provided by a technology programme compared with receiving funding through other instruments, e.g. funding of individual projects outside a programme? In this sense, a technology programme is not so much a scheme as a concept. Two technology programmes are rarely similar in detail, nor should they be. Different technologies, sectors and sets of actors face different challenges and problems, which means that each technology programme is designed to tackle the specific challenges and problems facing its potential participants. In certain respects, Tekes technology programmes could be considered to consist of a mix of schemes.

The added value of a technology programme comes from the fact that it allows a tailored set of services to be designed for a select clientele. By contrast, separate instruments or programmes can be targeted to encourage entrepreneurship, internationalisation, networking, technology transfer, technology watch, foresight, assessment and evaluation, etc. Such separate instruments try to serve all companies, entrepreneurs and research organisations alike. Some of them allow services to be tailored for individual customers, and some even try to target networks, but they still provide only a single service (technology transfer, internationalisation, how to set up a company, innovation management, etc.).

Technology programmes are an instrument that allows the integration of services within a single concept. A technology programme can provide help in accessing international markets through international co-operation, market analysis and foresight activities. At the same time, it can provide services that help transfer technologies between the participants and international markets. To help companies adopt these new technologies, research organisations in the programme can act as mediators in the process. To further support adoption, the same programme can provide innovation management training for participating companies. All these services can be tailored specifically for the group of participants within the programme, thus helping them tackle their specific challenges and problems and at the same time learn from each other.