Assessing comparability: the indicators dilemma
“Two similar organisations independently choose a different set of measures: one decides metrics A, B and C are important, the other chooses B, D and E. As a result, we’re left with two organisations attempting to solve the same social challenge, with little way to understand how their results compare, or what one can learn from the other.”
These are the words of Tom Adams and Madeline Copp, respectively Co-Founder & Chief Strategy Officer and Strategy Manager of 60 Decibels, in a recent Pioneers Post article.
The lack of comparability in the social sector is indeed an obstacle for decision-makers. Concretely, investors may see comparability as the key to quality decisions when allocating capital, as it helps answer such questions as “Which solution works best?” and “Which is the most successful social purpose organisation?”
However, if investors make straight comparisons, they risk failing to consider the uniqueness and complexity of each activity. Even within the same sector, social impact is substantially context-specific, so making comparisons without contextualising each intervention might lead to flawed decisions. Furthermore, it is highly controversial to compare the performance of organisations operating in different sectors, serving different types of beneficiaries and pursuing different impact objectives. Comparability should uphold, not jeopardise, well-informed decision-making.
In practice, what allows investors to compare data – either for benchmarking against other actors or for comparing performances within their portfolios – are standardised indicators. Relevant initiatives such as the IRIS+ indicator database have emerged to enhance data clarity and comparability.
Standardised indicators bring additional benefits. They help organisations save time, as practitioners have a reference to build their impact monitoring framework by relying on widely accepted metrics. They also enable organisations to aggregate impact at a portfolio level, which is often a requirement from shareholders.
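To make the aggregation benefit concrete, here is a minimal sketch, assuming every investee reports against the same shared indicator names (the names and figures below are hypothetical, not actual IRIS+ metric codes):

```python
from collections import defaultdict

# Each investee reports values keyed by shared indicator names
# (names and figures here are illustrative, not real IRIS+ metric codes).
portfolio = {
    "investee_a": {"jobs_supported": 120, "clients_served": 4_500},
    "investee_b": {"jobs_supported": 45, "clients_served": 1_200},
    "investee_c": {"jobs_supported": 310, "clients_served": 9_800},
}

def aggregate(portfolio: dict) -> dict:
    """Sum each standardised indicator across all investees."""
    totals = defaultdict(int)
    for metrics in portfolio.values():
        for indicator, value in metrics.items():
            totals[indicator] += value
    return dict(totals)

print(aggregate(portfolio))
# {'jobs_supported': 475, 'clients_served': 15500}
```

Without that shared vocabulary, the same sum would silently mix incompatible definitions – which is precisely the comparability problem standardised indicators are meant to solve.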
However, there is a trade-off between comparability and granularity. Standardised indicators alone might not be well suited to monitoring impact and ensuring quality decision-making; investors must therefore complement them with customised indicators that emerge from the context in which they operate and the uniqueness of the social solutions they support.
Because the degree of standardisation varies across sectors, some outcomes are easier to capture with standardised indicators than others. For instance, as environmental impact is arguably more objective than social impact, there is greater homogeneity across environmental indicators, which fit well with the needs of the investees and can be compared and aggregated at a sectoral level.
Any discussion about indicators, however, should not overshadow the importance of outcome selection. Stakeholder analysis and the selection of outcomes should precede the discussion about indicators: a thorough understanding of the social problem and of stakeholders’ needs is a prerequisite for deciding what type of indicators to use.
As the impact sector evolves, and investors share more and more indicators, the degree of standardisation is expected to increase over the coming years. Investors for impact are expected to lead this transition by preserving a stakeholder-centred approach to IMM, even when standardised indicators are at play.
*
Lissa Glasgo – Why Not to Use Rogue Metrics (and Three Reasons You Might Need to Anyway)
Lissa Glasgo serves as a Director on the GIIN’s IRIS/Impact Measurement & Management team, working on IRIS+ guidance and impact performance analytics to support stronger investor decision-making throughout the investment cycle.
With the GIIN’s recent estimate of the impact investing market’s size at $1.164 trillion and growing, investors face both an incredible opportunity to create impact…and an increasing risk that the industry won’t be able to separate the impactful wheat from the greenwashed chaff. More specifically, investors currently lack transparent, rigorous data about which investments are the best bets to create real-world solutions, and so cannot place their money into impactful investments as effectively as they want.
When investors are assessing companies for potential investment, they need rigorous financial data on areas like the company’s current operations, liabilities and market potential. The same is true of impact data – to make good investment decisions based on impact, investors need to understand what impact the company is currently achieving, how they can grow and improve that impact over time and what impact similar companies have had in the market.
But here’s the thing: to have those impact insights across a pipeline or portfolio, you need to be able to compare impact performance between investments. To do that, you need a “common language” for impact. Agreeing on how we, as an industry, are going to look at certain data points, including common definitions, helps to build a collective, evolving knowledge base about what is possible – and how we can better, more consistently drive positive outcomes. Investors that use common language to describe impact across their portfolios also benefit by using standardised metrics to check their progress. There are other pros to this approach as well: for example, easier data reporting for investee companies, better support systems for collecting and interpreting that data, and greater transparency in the market as a whole.
We know this because it’s what we at the GIIN work on – from developing that common language for the past 13+ years in the IRIS+ system, to testing actual comparability of impact performance data, to launching impact analytics that help investors understand how their investments are performing against the market. The IRIS+ system – which offers definitions that resonate with global goals and frameworks without requiring an off-the-shelf set of measures – continues to support companies and investors to articulate their goals and impact pathways, capture actionable data on progress, and enable a nuanced understanding of relative impact performance within a portfolio or strategy.
While standardised metrics are critical for a functioning, impact-oriented market, they’re not the answer in every case. Here are three things we’re *not* arguing:
- Investors should start with standardised metrics. In ESG investing, fund managers use a standard set of disclosure topics to assess risks to their financial performance, so there the best practice is to start with a pre-set list of measures and go from there. In impact investing, though, both the purpose and the approach of measurement are different: impact investments require that investors have a vision for the impact they’re aiming to create – ideally developed by asking the communities that will be affected by the investment. Only after articulating that vision do investors develop a strategy for how they’re going to get there. To help them understand if they are, in fact, supporting the outcomes they’re aiming for, they use impact metrics to track data. To invest for impact, start with the impact in mind, then decide which metrics can help you check your progress.
- Standardised metrics are always all you need. Done well, impact performance data helps an investee company and their investors understand whether they are generating the impact they intend to create. To get there, investors and companies often use tools like a theory of change to articulate what their goals are, what assumptions they’re including, and what they need to do to make it happen (the IRIS+ Core Metrics Sets can help with this process, but they don’t replace it). We often find that in doing so, investors and companies land on many very similar measures, often ones that investors would like to understand in the context of other, similar ventures. These are the metrics that are most helpful to standardise because they create a shared way of capturing data that then builds our collective knowledge base.

  But here’s the thing: impact is complex. It’s multifaceted, and incredibly context- and community-specific. There will always be stakeholders that experience something new, or projects that aim to create impact that doesn’t resonate in other places, and that’s a great thing. Consequently, investors and companies should take only the measures that are useful from standard sets, then tailor additional measures to get exactly the information needed to understand and act.
- Impact metrics should be the same in every geography, asset class, or business stage. Impact performance data without context is like trying to start a fire underwater: it just doesn’t make sense. Standardised metrics are helpful in enabling comparisons between investments and companies, but to compare responsibly, we need to first understand the situation, the place, and the time period the investment is happening in. Sometimes, certain metrics don’t make sense across all geographies, business models, stages or other dimensions, and that’s okay – by understanding what *is* important to create impact in a particular place and project, we can better understand what “good” looks like.
There’s no shortcut to thoughtful impact management – it takes thoughtfulness at a portfolio, fund, and investment level, and it takes good, hard and ongoing questions to get things right. Tools like IRIS+ can help with some of those questions and, equally important, can help to build a stronger and more transparent industry by creating a deeper dataset supporting better analysis for investor decision-making.
*
Tom Adams – The Key to Meaningful Impact Comparability: A Great ‘How’ of Measurement
Tom Adams is the Co-Founder and Chief Strategy Officer of 60 Decibels. He has held various leadership roles across the public, private and charitable sectors. Immediately before 60dB, Tom was the Chief Impact Officer at Acumen.
The purpose of impact measurement is to gain understanding of impact performance, and to use this newfound understanding to facilitate impact management: the act of decision-making to improve said performance. Data not used for decisions is of little value — it’s just nice numbers. That is why there has been so much excitement and attention paid to the growing practice of impact management.
That comparability is essential has been well recognised since the early days of impact investing. IRIS+ was established by the GIIN precisely with the ambition of providing a taxonomy of indicators for comparison. IRIS+ can be seen as providing guidance on ‘what’ to measure.
But meaningful impact measurement doesn’t stop at listing ‘what’ to measure. We also need solutions to the considerably harder question of ‘how’ to measure. An inability to tackle the ‘how’ means that metrics may be well defined but never actually measured!
‘How’ requires a repeatable approach to quality social research — the act of sampling and surveying people who experience impact — that balances cost with complexity, rigour with speed. This is essential if it is to be widely adopted, as well as folded into the budgetary processes and decision-making speeds of investors and enterprises alike.
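The cost-rigour balance described here can be illustrated with the textbook sample-size formula for estimating a proportion – a generic statistics example, not 60 Decibels’ actual methodology:

```python
import math

def sample_size(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
    """Respondents needed to estimate a proportion within +/- margin_of_error,
    at the confidence level implied by z (1.96 for ~95%)."""
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size(0.10))  # 97 interviews for +/-10 percentage points
print(sample_size(0.05))  # 385 interviews: halving the error roughly quadruples the cost
```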
60 Decibels has been pioneering quality approaches to the ‘how’, all built bottom-up by listening to the people we hope to positively impact through the work we do. When we listen, customers, suppliers, employees or users can tell us what things change in their lives and, of these, which are most material to their wellbeing.
But we don’t stop there. Based on this listening we also build survey instruments that can be repeatably scaled across whole sectors, year in, year out. With respect to the ‘how’ of impact measurement, this is where the rubber really hits the road. This approach was at the core of 60 Decibels’ MFI Index, which included multiple material indicators – on the depth of household impact, business impact, financial resilience, financial management and financial inclusion – and consolidated them into a single impact index.
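As a purely illustrative sketch of what consolidating several dimensions into one index can look like – the scores, the 0–100 normalisation and the equal weights below are assumptions, not the MFI Index’s actual methodology:

```python
# Hypothetical dimension scores for one company, each already normalised
# to a common 0-100 scale so they can be combined.
scores = {
    "household_impact": 72.0,
    "business_impact": 65.0,
    "financial_resilience": 58.0,
    "financial_management": 61.0,
    "financial_inclusion": 80.0,
}

# Equal weights are an assumption made here for simplicity; a real index
# would need to justify its weighting scheme.
weights = {dimension: 1 / len(scores) for dimension in scores}

impact_index = sum(scores[d] * weights[d] for d in scores)
print(f"Composite impact index: {impact_index:.1f}")  # 67.2
```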
Once our sector starts to get behind, and to expect, such measures of impact performance as the basis for impact comparison, the results will be huge. We will all get much better, much faster at identifying companies and investment strategies that deliver quality social impact performance. The result is simple: greater impact.
*
Samuel Monteiro – Harmonising While Taking into Account Specificities: the Challenges of a Generalist Impact Investor
Samuel Monteiro is a Senior Manager in charge of ESG and Impact at Investisseurs & Partenaires (I&P), where he oversees the impact measurement and ESG risk analysis of the different programmes implemented. He also conducts in-depth impact studies on portfolio companies to better target their impacts on different stakeholders. Samuel holds a PhD in Economics on employment in SMEs in sub-Saharan Africa, in particular the impact of formalisation.
The issue of indicator comparability is all the more important (and difficult) for a generalist investor, who has to communicate on the aggregate performance of their portfolio while investing in sectors as diverse as agribusiness, health, education, tech, etc. This is the case for us at Investisseurs & Partenaires (I&P), an impact investment fund supporting African SMEs.
As our primary mission is to support African entrepreneurship and develop the missing middle that SMEs represent, we first needed to ask ourselves what we wanted to measure. We then developed an impact thesis structured around stakeholders, because every company is made up of them – the entrepreneur, employees, customers, subcontractors – while also integrating gender and the environment, since women and the planet are stakeholders too.
We then defined a list of indicators collected for all our projects on each of these pillars: the percentage of African entrepreneurs, job growth but also salary levels and access to health insurance, the number of local suppliers, the share of green projects, the share of projects meeting the SDGs, etc. But it was also necessary to consider the specificities of each project. Take the case of a school: while it will be counted in the share of projects meeting the SDGs, we want to know more and make this concrete, so we also request the number of students, including the number of women, and the number of scholarship holders. Each sector, especially in its customer and subcontractor dimensions, has specificities that require keeping some specific indicators.
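One way to picture this mix is a harmonised core record shared by every project, extended with a few sector-specific fields; the field names below are invented for illustration, not I&P’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class CoreIndicators:
    """Harmonised base collected for every project in the portfolio."""
    african_entrepreneur: bool
    jobs: int
    local_suppliers: int
    meets_sdgs: bool

@dataclass
class SchoolIndicators(CoreIndicators):
    """Sector-specific additions kept only for education projects."""
    students: int = 0
    female_students: int = 0
    scholarship_holders: int = 0

school = SchoolIndicators(
    african_entrepreneur=True, jobs=35, local_suppliers=8, meets_sdgs=True,
    students=420, female_students=230, scholarship_holders=60,
)
print(school.meets_sdgs, school.scholarship_holders)  # True 60
```

The core fields can be aggregated across the whole portfolio, while the subclass fields stay meaningful only within their sector.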
This mix of a strong common base of harmonised indicators and a limited number of specific questions has allowed us to meet the challenge of communicating our impacts across our portfolio. But it does not meet the challenge of comparing performance between impact investors. The challenge persists even on issues as basic as employment. How do we define a job created? An informal job that is formalised thanks to investment can be considered either as a job created or as a job maintained, depending on the actors. In the context of Sub-Saharan Africa, where informality is prevalent, the impact of this formalisation is huge in terms of securing income, access to social benefits such as social protection, access to bank loans, etc. Comparing job creation without taking into account the specific context of each economy would be meaningless.
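A small worked example, with invented figures, shows how much this definitional choice matters – the same underlying activity yields very different “jobs created” totals:

```python
# The same event log yields different "jobs created" totals depending on
# whether a formalised informal job counts as created (figures invented).
events = [
    {"type": "new_formal_job", "count": 40},
    {"type": "informal_job_formalised", "count": 25},
]

def jobs_created(events: list, formalisation_counts: bool) -> int:
    total = 0
    for event in events:
        if event["type"] == "new_formal_job":
            total += event["count"]
        elif event["type"] == "informal_job_formalised" and formalisation_counts:
            total += event["count"]
    return total

print(jobs_created(events, formalisation_counts=True))   # 65
print(jobs_created(events, formalisation_counts=False))  # 40
```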
Trying to quantify impacts at all costs, sometimes even getting caught up in a madness of magnitude to show ever-stronger performance, can lead us away from the very essence of what impact investors are about: going where others do not go, targeting the countries that need it most and the companies that have the least access to finance. This is where an investor’s additionality lies. Seeking to increase impact does not necessarily mean having the highest absolute impact figures. This delusion of grandeur can distract us from the very meaning of what we want to do: reach the people who need it most and bring about changes that would not have happened without us. An indicator, whether harmonised or specific, is not always able to capture this essential context.
*
This post first appeared as a LinkedIn article.