Third Innovation Academics’ Workshop – 23 May 2013
‘Evaluation and Innovation’
Program
8.15-8.45 / Registration, tea & coffee
8.45-9.00 / Opening
- Welcome (Mark Cully, Chief Economist, DIICCSRTE)
9.00-10.40 / Session #1 of Short presentations (15 mins each), followed by round table discussion
Evaluation and Innovation – complexity of issue
· Categories of Innovation and Impact Assessment (Prof Shirley Gregor, ANU, and Prof Al Hevner, University of South Florida)
· The Fundamentals of Evidence-Based Innovation Policy (Prof Paul Jensen, UniMelb)
· Outsourcing and Innovation: An empirical exploration of the dynamic relationship (Prof Robert Breunig, ANU, and Dr Sasan Bakhtiari, UNSW)
· Outcome-oriented science policy: Improving prioritisation and evaluation (Dr Paul Harris, ANU)
10.40-11.00 / Morning tea
11.00-12.40 / Session #2 of Short presentations (15 mins each), followed by round table discussion
Measurement and evaluation – impact of innovation
· Innovation: Impact Assessment Frameworks (Dr John Howard, UC)
· Sense-T – Evaluation of the economic impact (Dr Helmut Fryges, UTAS)
· Broadband Investment: Private Provision of Public Good and Information Asymmetry (Dr Philip Thomas, UNE, and Dr Erkan Yalcin, Flinders University, and Prof Theodore Alter and Dr Michael Fortunato, Pennsylvania State Uni, USA)
· VC/PE (Funds), Government Grants and Innovation in Newly Public Firms (Dr Jo-Ann Suchard and Dr Mark Humphery-Jenner, UNSW)
12.40-13.10 / Lunch
13.10-14.55 / Session #3 of Short presentations (15 mins each), followed by round table discussions
Measurement and evaluation – impact of innovation (ctd.)
· The use of alternative evaluation methods for public sector innovations (Prof Anthony Arundel, UTAS)
· A pilot study on the development of patent metrics (Vera Lipton, IP Australia)
· Using Leximancer for Innovation Evaluation Settings (Prof Kerry Brown, Prof Robyn Keast and Dr Nateque Mahmood, SCU)
· The impact of small business advisory services on small business innovation (Dr Sukanlaya Sawang and Prof Rachel Parker, QUT)
14.55-15.15 / Afternoon tea
15.15-16.45 / Group Discussion: Sharing / reporting on the outcome of group discussions
How to evaluate the impact of innovation? (Facilitated by Prof Brian Head, UQ)
16.45-17.00 / Close: The way forward
Overview
The Department of Industry, Innovation, Climate Change, Science, Research and Tertiary Education hosted the third Innovation Academics’ Workshop on 23 May 2013, held in partnership with the HC Coombs Policy Forum at the Crawford School (ANU, Canberra).
The Workshop, ‘Evaluation and Innovation’, focused on developing a better understanding of the evaluation of innovation – how its impact can be better measured, what indicators we could move forward with, and how this could be used to influence new policy. It was also hoped that the Workshop would facilitate dialogue between academics and public servants to further promote mutual understanding and build ground for future collaboration.
The Workshop included representation from universities, publicly funded research organisations, state government, the Institute of Public Administration of Australia, and the Department.
The Workshop featured presentations from academics and government sharing their experiences and knowledge of the issues surrounding evaluation of innovation programs and instruments for measuring the impact of innovation. Each session comprised four short presentations followed by discussion, and the last session focused on brainstorming around the Enterprise Solutions Program (ESP) which is currently being designed by the Department.
Presentations
Opening - Mark Cully (DIICCSRTE)
The Department’s Chief Economist Mark Cully opened the Workshop with a talk on the importance of the economic evidence-base for government support of innovation. The link between economic growth and innovation is especially important now that living standards in the United States and parts of Europe have fallen back to the levels of a decade ago.
In 2012-13 the Australian government invested an estimated $8.9 billion in science, research and innovation, including an estimated $1.8 billion in company tax revenue forgone through the R&D Tax Incentive. The government also contributes to other aspects of the innovation system, such as the underpinning regime of intellectual property rights administered by IP Australia.
Economists and policy analysts are working on evaluating the effectiveness of our investment in science, research and innovation. There is solid evidence that investment in this intangible capital yields considerable returns for individual businesses, and some evidence of positive spillovers to other businesses. However, policy makers need to be convinced that the level of additionality and/or spillovers is such that the social benefits exceed the costs. One way to proceed is to accumulate an evidence base built on carefully designed studies that allow treatment effects to be isolated and measured.
To promote collaboration between researchers and government, the Department can offer four things: encouragement, insight, data, and engagement. In practice, this means putting more money on the table, being more transparent, lessening restrictions on data access, and listening more.
Categories of Innovation and Impact Assessment - Shirley Gregor (ANU)
Professor Shirley Gregor presented a new typology (the knowledge-innovation typology) for categorising innovations and the types of value that can be achieved with each. Studies undertaken in the field of Information Technology found that two thirds of top innovations arise from industry–research collaboration. The work found that the definition of innovation is pivotal, and that innovation frequently occurs when an organisation adopts a new IT system.
The knowledge-innovation typology can be applied to collaborative ventures between industry and research organisations. The typology results from a classification of innovations and knowledge contributions on two dimensions:
(i) application domain maturity; and
(ii) solution (knowledge) maturity.
The resulting matrix has four quadrants:
(1) Invention: radical innovation/exploration (low solution maturity, low application maturity);
(2) Exaptation: exapted exploration (high solution maturity, low application maturity);
(3) Improvement: incremental exploration (low solution maturity, high application maturity); and
(4) Adoption: routine design/exploitation (high solution maturity, high application maturity).
For example, the original invention of the bicycle was followed by incremental improvements, the use of different types of bicycles in new settings (exaptation), and widespread adoption.
The typology offers a means for researchers and industry to categorize innovations and understand the range of outcomes and value to be expected with each type, noting that movement would occur between the quadrants.
The Fundamentals of Evidence-Based Innovation Policy - Paul Jensen (University of Melbourne)
Professor Paul Jensen is currently evaluating a series of programs from one state. He has identified that innovation policy research in Australia lags behind other areas of policy (including health, health economics and trade).
Professor Jensen addressed the issue of evaluating the impact of government support for innovation by offering the following four solutions:
(1) Creating systematic data infrastructure;
(2) Promoting access to the data infrastructure;
(3) Building capacity in universities and government; and
(4) Systematically evaluating innovation programs.
The 2010 ‘Strengthening Evidence-based Policy’ conference organised by the Productivity Commission provides some good background for evaluation.
For innovation program evaluation, the social sciences offer a range of methods, each with advantages as well as trade-offs:
(1) Randomised controlled trials, acknowledged as the “gold standard”, but also the most costly method;
(2) Difference-in-differences (e.g. comparing two states before and after one implements a new policy or program), thought to be simple to administer and powerful;
(3) Regression discontinuity, which can only be used in certain contexts;
(4) Quasi-natural experiments, a possibility, although these are rare in Australia; and
(5) Case studies, which are claimed to have only limited use.
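To make the difference-in-differences logic concrete, a minimal sketch follows, using made-up numbers rather than data from any study discussed at the Workshop. The estimated program effect is the change in the treated state minus the change in the comparison state, which nets out the trend common to both states.

```python
def did_estimate(treated_before, treated_after, control_before, control_after):
    """Difference-in-differences: change in the treated group
    minus change in the control group (netting out the common trend)."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical average R&D spend per firm ($'000), before and after
# State A introduces an innovation program; State B is the comparison.
effect = did_estimate(treated_before=20, treated_after=29,
                      control_before=21, control_after=24)
print(effect)  # 6: treated rose by 9, control by 3, so the estimated effect is 6
```

A simple before/after comparison in State A alone would attribute the whole 9-unit rise to the program; subtracting State B's 3-unit rise removes the trend both states share.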
Professor Jensen’s research uses the ABS Business Longitudinal Database and the Confidential Unit Record File, along with firms’ ABNs, to see the effects of innovation on individual firms’ performance data such as employment and exports.
Outsourcing and Innovation: An Empirical Exploration of the Dynamic Relationship - Robert (Bob) Breunig (ANU)
Professor Robert Breunig and Dr Sasan Bakhtiari have studied the effects of vertical integration on innovation performance using firm-level data on Australian manufacturing, distinguishing between low-cost-oriented and innovation-oriented outsourcing. They found that outsourcing without innovation lowers costs at the expense of damaging the future chances of innovation, while innovation-oriented outsourcing entails higher costs but increases the likelihood of future innovation. For firms that both innovate and outsource, the probability of future innovation is 49 per cent, compared to 8 per cent for those that outsource without innovating.
Among firms that innovate, simultaneous outsourcing increases the probability of future innovation by 5 per cent. Innovation-oriented outsourcing is also accompanied by firms shifting expenditure towards research and development. The results offer strong support for the view that outsourcing may be used not just as a cost-cutting strategy, but as part of a comprehensive firm strategy to innovate and improve.
Outcome-oriented science policy: Improving prioritisation and evaluation - Paul Harris (ANU)
Paul Harris proposed a focus on outcome-oriented, rather than evidence-based, science policy. Government investment in science not only increases knowledge but also contributes to economic productivity through innovation. A range of social benefits is also expected to result from investment in science, including enhanced health and wellbeing, and improved sustainability and security. However, the ways in which we measure the effectiveness and appropriateness of current investments rely on a small set of scientific and economic metrics – we have little evidence of the connection to broader societal outcomes. Paul Harris questioned the supremacy of R&D investment (and other simplistic input/output metrics) as a measure of the performance of science policy.
The Department of Finance and Deregulation’s 2012 report ‘Sharpening the Focus’ highlights that most indicators for public sector performance are financial. However, non-financial indicators are needed to assess the outcomes of public expenditure more accurately. Paul Harris noted that public sector agencies are currently required to report against more than 3,500 different performance metrics in total, and argued that instead of adding more metrics, we need to be asking the right questions.
The government has established the Australian Research Committee (or ARCom) and National Research Investment Plan to improve coordination – this provides a strong platform for further work on a consistent outcomes framework for public investment in science. The National Environmental Research Program was provided as a good practice example, as it elicits high quality research to enable policy makers to make better decisions, and also requires researchers and policy-makers to work together to articulate the expected outcomes prior to being granted funds.
Science policy should be re-oriented around outcomes, with prioritisation and evaluation following on from there. This will be challenging, as an outcome-oriented science policy framework needs to allow for a diversity of investments, institutions and outcomes, but this is precisely why one size, or one set of metrics, will not fit all.
Measurement and Evaluation of Impact of Innovation – Impact Assessment Frameworks - John Howard (Howard Partners)
Dr John Howard spoke about three broad frameworks for evaluation of innovation:
(1) Economic frameworks which measure the economic impact of research in terms of change in national economic output and productivity, as well as growth in industry output (production) or firm level output (sales);
(2) Knowledge transfer approaches combined with case study approaches to indicate change – as reflected, for example, in the work of the Go8 and the Australian Technology Network of Universities in the Excellence in Innovation Trial. These approaches focus more on the industry and enterprise level; and
(3) Program logic frameworks which are used widely in program design, evaluation and performance monitoring; they have a strong process orientation.
All three types of frameworks are useful, although they all have their limitations.
No matter which model is followed, it is important to collect evidence, and so a range of approaches to collecting evidence were presented, including economic modelling, surveys, consultations and focus groups, stories and narratives, as well as expert judgment / peer review. No single method is best; the choice would depend on the researchers’ resources, time available, and priorities. Different methods would be used in evaluation of systems as distinct from programs.
Sense-T – Evaluation of the Economic Impact - Helmut Fryges (University of Tasmania)
Dr Helmut Fryges reported on Sense-T, the world’s first economy-wide intelligent sensor network, initially trialled in Tasmanian farming. Sense-T is a partnership between the University of Tasmania, CSIRO, the Tasmanian Government and IBM. The first stage of the project has been funded by the Australian Government, Department of Regional Australia, Local Government, Arts and Sport.
Sense-T combines different sensors in a single, large-scale integrated information repository. It brings together historical data, existing sensory data (for example, weather data) and novel sensory data in a sensor cloud. The collected data are aggregated and interpreted, with hypotheses formulated and simulation models developed in order to extract relevant information from the sensor cloud. The information will then be provided to the community via web applications.
Thus far, four projects using Sense-T have been implemented. One is a dairy and beef project that can identify whether cows are sick; it is hoped that this project will result in a product innovation in designer milk. Web-based decision support tools for the four initial projects will be provided until the end of 2013.
The evaluation of the Sense-T projects will focus on three aspects:
(1) whether the sensors work;
(2) whether the decision support tools provide useful information; and
(3) whether the projects have an economic, social and environmental impact.
Various evaluation techniques are currently being considered to evaluate Sense-T’s economic impact, including quantitative methods (randomised controlled trials, difference-in-differences estimation) and qualitative methods (case studies, focus groups, outcome mapping, etc.). Research will also be conducted in order to identify sustainable business models.
Broadband Investment: Private Provision of Public Good and Information Asymmetry - Philip Thomas (UNE) and Erkan Yalcin (Flinders University), with Prof Theodore Alter and Dr Michael Fortunato (Pennsylvania State University, USA)
Dr Philip Thomas and Dr Erkan Yalcin compared broadband investment in Australia (which has a public-private investment structure in the implementation of the NBN) with the USA (which has a private investment structure for broadband) to illustrate information asymmetry in public-private contractual negotiation, and the implications of publicly funded investment in innovation.