A Scottish adventure in cluster policy evaluation

This week I have been in Inverness on a learning journey with the Clusters3 Interreg project. This project, led by the Basque Business Development Agency (SPRI) and involving 9 partners from across Europe, seeks to improve the practice of cluster policies and their interaction with regional smart specialisation strategies.

This was the fourth Clusters3 project workshop, and it was hosted superbly by Highlands and Islands Enterprise. The first-day study trip to Loch Ness provided plenty of food for thought around the particular challenges facing clusters and cluster policies in remote and rural areas (along with some wonderful scenery and great company!).

This theme was followed up in several discussions over the next couple of days, but the main focus of the learning workshop was on the monitoring and evaluation of clusters and cluster policies. This is a very challenging part of cluster policy practice because many of the impacts of working with clusters are intangible, only become visible in the long term, and/or spill over beyond the cluster itself. Moreover, cluster policies are extremely heterogeneous and typically have strong interactions with a wide range of other competitiveness policies. Hence, attributing specific socioeconomic impacts to a given cluster or cluster policy is an extremely difficult task.

These are issues that we have been working on in the TCI network for many years, bringing together experiences from around the world under the leadership of Madeline Smith. At the workshop, Emily Wise and I used some of the results emerging from the TCI cluster evaluation group as a springboard to facilitate a discussion around the monitoring and evaluation issues being faced by the Clusters3 partners. We started by asking them what results they would expect to see three years after a €100,000 investment in a cluster initiative. The responses can be grouped into three broad types:

  1. ‘Harder’ economic results that demonstrate return on the policy investment in terms of increased sales, exports, private investment, jobs, etc.
  2. ‘Softer’, qualitative results in terms of improved capacity of firms in innovation, knowledge generation, entrepreneurship, internationalization, etc.
  3. ‘Even softer’ qualitative results in terms of changes in the underlying behaviour of firms with regard to collaboration and collective action.

Each of these types of results has its own measurement challenges in terms of specifying the right indicators and then collecting accurate data on those indicators. Moreover, it is important to acknowledge that the choice of indicators is likely to skew behaviour within the cluster. This led to an interesting discussion around the possible trade-offs between evaluation to demonstrate impact and evaluation to facilitate learning. There was a consensus that both are needed, and that we should look to strike a balance between hard economic data and softer aspects, such as approaches that seek out the ‘voice of users’ (cluster members).

All of the partners are grappling with these issues in their day-to-day work managing cluster (or cluster-type) policies, and the workshop was a great forum for exchanging experiences and practices. While acknowledging that cluster evaluation is challenging and highly context specific, pooling together these different experiences over the two days led to the identification of several key success factors:

  • We should start by knowing what it is we want (from the policy, and therefore from the measurement/evaluation).
  • We should know who the audience of the monitoring/evaluation is.
  • We should design indicators and data collection to fit the evaluation requirements.
  • We should ensure continuity in data collection and monitoring, including real-time monitoring (for example, through action research approaches).
  • We should work towards a common understanding and dialogue with cluster practitioners (we rely on them for self-evaluation and data, and so need to create a win-win situation).
  • We should place only realistic and user-friendly demands for information on cluster practitioners (it is important not to exhaust them).
  • We should position the evaluation as a learning process to engage firms/stakeholders.
  • We should make sure that evaluation feeds into change/action so that results are visible (if you ask, you must follow up with actions).

It will be interesting to see how the learning from this workshop feeds back into the cluster evaluation approaches of the partners over the course of the project, and there will be plenty more occasions to discuss and work on these issues collectively, both within the project and through the TCI network. The next one comes as soon as September, when Innovation Norway will host the next TCI cluster evaluation group workshop in Oslo.
