Interview with Richard Stanton, MigrationWork CIC for « Trait d’Union. Ville et Communes de Bruxelles-Capitale », no. 121, November–December 2020, p42-47 (in French)
Is benchmarking currently useful for the development of migration and integration policies at European level?
To resolve a challenge as complex as migrant integration we need, of course, a range of instruments, not a single ‘silver bullet’.1
But experience shows benchmarking is a powerful tool for strengthening integration work in Europe. It achieves two things that are fundamental to this work: first, setting norms for best practice that also embody European values; and second, enabling a process of mutual learning among practitioners across Member States.
So what do we mean by a benchmark? The term has been around for decades, used in very different ways. Originally a management tool in the private sector, it was imported into public administration in the 1980s and 90s. Then during the 2000s Eurocities, the network of major cities, began to adapt a version of the public sector benchmark for the tasks of migrant integration.
It’s worth recalling the migration context. After the post-colonial phase of migration, bringing immigrants to the ‘motherland’ primarily as a labour resource with little effort to promote integration, the 1990s had opened an era of broader movement of migrants from the Balkans and the global South, many seeking asylum, most settling in Europe’s cities. A much more active approach to integration was clearly overdue.
City leaders realised the urgent need both to transform cities’ own practice, and to engage EU institutions with them as partners in the settlement of refugees and other migrants. Whilst other mayors’ initiatives also began addressing these tasks, Eurocities took the lead in designing benchmarks on different aspects of integration and then – backed by the European Commission – in applying them to city experience in a series of mutual learning projects from 2007 onwards.2
These projects used benchmarks initially in the method of peer review, which has since evolved into ‘mentoring’ and ‘community-of-practice’ approaches. As MigrationWork CIC, we began applying benchmarks in 2009 as moderators in a peer review project for DG Employment, looking at how effectively European Social Fund money was being used to promote migrant access to the labour market. Since then we’ve worked directly with Eurocities on a series of benchmarking projects, leading on to INCLUcities with the Council of European Municipalities and Regions.3
Through different iterations, our benchmarks have shared a similar basic design. Each takes a specific theme or area of integration as its focus: for example integration governance; migrant employment; civic participation; addressing discrimination in provision of services; public attitudes to immigration; and so on.
Within that thematic area, experts carry out – from the literature and then from talking to practitioners – a review of EU-wide experience of projects at local level to see what worked best in promoting good results, and in realising European policy goals relevant to that field. In other words we search out the European norm of best practice for each integration area.
How is it turned into a benchmark? First we break down this successful experience into a series of vital elements: the things you evidently have to do in order to deliver that best practice. In the benchmark, each such element is identified as a key factor (or in earlier versions a ‘critical factor’).
Here was a crucial change agreed with partners when MigrationWork first started work on benchmarks. Previous versions had loaded the benchmark with a large array of separate indicators, many quantitative. Instead we designed it around a small number of key factors, largely qualitative, that describe what people do to make their integration work a success. Typically it includes between eight and 12 of these factors.
Each is presented in a short paragraph, supported by another couple of lines explaining why it makes a critical difference. You can see how the factors fit together. So already in simple, transparent terms familiar to practitioners or activists, the benchmark enables these users to start analysing what makes integration practice a success.
Since its main purpose is to help them to learn from one another, face-to-face or in virtual ‘visits’, we then add to each key factor a number of guide questions which you can ask your counterpart – say in another city – to see how far that factor is incorporated in their local practice. (This enquiry was shown in our earliest benchmark, for the IMPART project, as ‘tests for critical factor’.)
We also give an indicative list of evidence that might illustrate replies to these guide questions from the colleague in the other city – showing whether the key factor operates there or not. Relevant evidence may include quantitative data, but often we find the most helpful evidence is documentation from that city about the actors involved in this field, the policies set for them, and the way they really work.
None of this learning happens in a city ‘bubble’. Applying the benchmark to enhance its practice, the city or regional authority must be able to point out conditions beyond its control that affect its chances of success. So finally we add to each benchmark an explicit list of these contextual factors, inviting users to refer to them: from labour market conditions to national legislation and budget constraints. The point of benchmarking is to work for change in the real world, not in a utopia!
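The structure described above — a theme, a small set of key factors each with its rationale, guide questions and indicative evidence, plus a list of contextual factors — can be sketched as a simple data schema. The field names and sample content below are invented for illustration; they are not MigrationWork's actual format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class KeyFactor:
    description: str                # what successful practice does (a short paragraph)
    rationale: str                  # a couple of lines on why it makes a critical difference
    guide_questions: List[str]      # questions to ask the counterpart city
    indicative_evidence: List[str]  # documents or data that might illustrate the replies

@dataclass
class Benchmark:
    theme: str                      # the area of integration in focus
    key_factors: List[KeyFactor]    # typically eight to twelve, largely qualitative
    contextual_factors: List[str]   # conditions beyond the city's control

# A hypothetical example for one theme, with a single key factor shown.
example = Benchmark(
    theme="Migrant access to the labour market",
    key_factors=[
        KeyFactor(
            description="The city works with employers to open recruitment to migrants.",
            rationale="Without employer buy-in, training schemes rarely lead to jobs.",
            guide_questions=["Which employers does the city partner with, and how?"],
            indicative_evidence=["Partnership agreements", "Placement statistics"],
        ),
    ],
    contextual_factors=[
        "Local labour market conditions",
        "National legislation",
        "Budget constraints",
    ],
)
print(len(example.key_factors), len(example.contextual_factors))
```

The point of the sketch is that the benchmark is qualitative and structured for dialogue: each key factor carries its own questions and evidence list, rather than reducing to a score.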
This is how we in MigrationWork have adapted the benchmark for transnational work. To highlight its significance in discussing integration, let’s compare it with a high-profile technique with which it is sometimes confused – indexing.
An index is a way of measuring variation in a set of quantitative parameters or ‘indicators’ – over time, or at a given time among actors like cities – and then synthesising it into a summary statistic or graphic. Useful work has been done in our field with this concept.4 But an index does a different job from the benchmark I’ve described.
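To make the contrast concrete, here is a minimal sketch of how a composite index of this kind is typically computed: raw indicators are normalised to a common scale and combined into one summary score per city. The city labels, indicator values and equal weights are invented for illustration, not drawn from any real index.

```python
def min_max_normalise(values):
    """Rescale raw indicator values to a 0-1 range across cities."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(indicators, weights):
    """Combine normalised indicators into one summary score per city.

    indicators: dict mapping indicator name -> list of raw values (one per city)
    weights:    dict mapping indicator name -> weight (weights sum to 1)
    """
    n_cities = len(next(iter(indicators.values())))
    scores = [0.0] * n_cities
    for name, raw in indicators.items():
        for i, norm in enumerate(min_max_normalise(raw)):
            scores[i] += weights[name] * norm
    return scores

# Hypothetical data for two cities, A and B.
indicators = {
    "migrant_employment_rate": [62.0, 55.0],  # percent
    "naturalisation_rate":     [4.1, 6.3],    # per 100 eligible residents
}
weights = {"migrant_employment_rate": 0.5, "naturalisation_rate": 0.5}

print(composite_index(indicators, weights))  # → [0.5, 0.5]
```

Note what the summary score hides: the two cities end up with identical scores despite very different underlying profiles, which is exactly the kind of organisational detail a benchmark is designed to surface.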
Measuring how far city A varies from city B along a numerical scale will not explain in real organisational terms what systems each has created to deal with migrant integration; nor their dynamic, how well those systems interact in real life or where risks of failure arise; nor the relative importance of (say) city values, budgets, staff training or community mobilisation in driving the variation shown by the indicators. Simply adding more indicators to your index will never answer such questions.
In short, the index may help to monitor progress towards integration, in theory at least. But if we want to know what lies behind those results, we need to reflect analytically on how things work and why they work. That’s what the benchmark helps us to discover.
In guiding users through this process of discovery, for a given area of integration, it encourages them to interrogate and share their own experience of how things work. So the benchmark as European standard becomes also a stimulus to mutual learning.
How can/should benchmarking processes use the experiential knowledge and practices of migrants and refugees, i.e. seeing migrants and refugees as actors of integration in the context of benchmarking?
Across all our benchmarks for integration work, no key factor recurs more often than the need to hear the voice of migrants themselves in improving such projects. The same must be true of the work of benchmarking and transnational learning between cities. Yes, we need their perspective and their agency in this work too.
Two caveats, however. First migrant participation will not happen overnight. Identifying migrants who are really representative of local communities – by gender, age and other equalities categories – and also have time for this discussion, may only be possible where links between official and community worlds already exist: formal structures like migrant forums, or informal links for example with women’s self-help groups or youth clubs. Then when representatives are found, they will typically need induction and training to feel confident in discussing the specialised issues of transnational learning.
Secondly, if we believe that integration is a two-way process – or more fundamentally, that it’s about the whole city becoming a more equal and convivial place – then we should try to listen also to non-migrant or ‘host’ communities. Some parts of that ‘host’ population will already be fully represented in city governance, but other parts may be marginalised, their voices rarely heard. Giving them also a place in discussion of integration practice could make the learning process more robust.
The priority though, as I say, is to ensure that the transnational learning process can benefit from the expertise of refugees and other migrants. In principle this may happen at the following three stages, though so far we’ve managed to realise only the second:
- Preparing the benchmark: This is the founding stage, researching what has worked best for projects Europe-wide. Obviously, we’d gain hugely from comment by migrant community observers at this point. The problem is that because each learning project is framed by its benchmarks, drafting them has to happen right at the start. So far it has simply proved impossible to identify migrant participants across partner cities and to enlist and train them in time for that drafting work, when the project has only just been launched.
- Joining the team to apply the benchmark: Each mutual learning project involves people from one partner city or region visiting (physically or virtually) its counterpart locality to explore the latter’s integration practice, comparing it with the benchmark. Migrants are included as far as possible among these visiting teams, and have been some of our most dedicated participants. This role, with preparatory training, gives them wide scope to comment on the benchmark they have been using and to suggest how it could be enhanced.
- Feedback after learning visits: The final phase of the project involves appraisal of the benchmark. Since by this stage the project activity with its interchange between partner cities should have engaged migrants and their associations in most of these localities, we might hope to get feedback from some of them on the benchmark. In practice however such feedback – whether from migrant communities or from partner administrations and NGOs – has been minimal, unless we hold a special workshop on ‘reviewing the benchmark’. And migrant colleagues may be too busy for an extra workshop …
The long term solution to this challenge seems pretty clear. As part of a wider shift to more participatory forms of local democracy for all residents, city authorities and other service providers need to start involving representatives of refugee and other migrant communities in regular, structured monitoring of the services and activities relevant to their integration.
It sounds ambitious. Those recruited for the monitoring role would of course need support (including a fee). But by establishing it as a normal part of municipal democratic practice, we might create a pool of migrant experts better prepared to engage with benchmarking and other tasks in transnational learning.
What are the really vital two or three preconditions for a city to make a success of benchmarking, in any field of integration work?
Generally, as I’ve said, the benchmark is for use in a collaborative enquiry by authorities learning from one another. Mostly we and other participants have found these exercises immensely rewarding. But there have been disappointments. Here are three things we must have if benchmarking is to work well in transnational learning projects:
- Political lead: Developing integration practice is about change, sustained over years. If the mayor or city authority doesn’t believe in it, or if they are enthusiastic but are about to lose an election to politicians who oppose it, then working to improve this practice may be futile. All our experience of integration confirms that the concrete change a city seeks from benchmarking will happen only if it’s led consistently over time by politicians elected to represent that city.
- Officer capacity: This sounds banal but it’s vital. The point of the benchmark is to explore practice in depth. Whichever mutual learning model they adopt, benchmark users will need to meet many agencies and groups (not least migrant communities), gathering evidence across the selected city or region. They will need meeting rooms, digital links, interpreters, maybe places to stay. To set all this up, a dedicated team in the ‘host’ city is a sine qua non – not one desperate officer doing the work of three!
- External engagement: Key factors in every benchmark include the city authority’s ability to engage with a range of actors outside its own structure: migrant communities, local voluntary associations and social partners of all kinds, other public service providers in the city, probably agencies at other levels of governance. If its relationship with external actors is weak, benchmarking its practice may be very difficult. If the relationship is strong, benchmarking is likely to yield rich results.
1 ‘Migrant’ is used here in the standard UN sense, to mean anyone who moves to live for 12 months or more in another country. The term therefore includes forced migrants, i.e. asylum seekers and refugees.
2 See www.integratingcities.eu for the timeline of these benchmark-based projects. In designing the benchmarks, Eurocities worked initially with Migration Policy Group and UK consultancy Ethics etc., and then after 2010 with MigrationWork CIC.
3 Further details on the ESF-focused project IMPART, where MigrationWork began work with benchmarks, can be found on our work page. Later projects making use of benchmarks, led by Eurocities, in which MigrationWork CIC was moderator or facilitator, have included: MIXITIES (2010-12), ImpleMentoring (2012-14), CitiesGrow (2017-19), VALUES (2019-21) and currently CONNEXIONS. More information about them is at www.migrationwork.org/work/ and at www.integratingcities.eu