This piece was originally published on WomenEffect.com, an independent knowledge hub about gender lens investing. Women Effect’s content is now part of the Wharton Social Impact Initiative. Read more about the transition here.
Jaclyn Berfond helped develop and launch the Women’s World Banking gender performance indicators in 2011, and is now responsible for developing the indicators to measure performance of new products and markets. In this interview, she shares some pertinent findings and challenges around data measurement with Women Effect.
How were the Gender Performance Indicators developed, and why?
The financial institutions we were working with [at the time of launch, all microfinance institutions] had enormous databases full of client information and transactional behavior and weren’t using them at all. Many of these institutions were partnering with us because they had a mission to serve low-income women, yet didn’t really have a handle on how to work with their clients beyond reporting on loans.
We came up with well over a hundred indicators that we thought could potentially be interesting to look at. We knew that while these would be valuable, we needed to actually go in and look at the actual data to see: one, if it would actually yield meaningful information; and two, if it was actually feasible to collect.
The evolution of the gender performance indicators follows the trajectory of Women’s World Banking, and over the past few years we’ve really expanded to wider financial inclusion. The entire space has really expanded how we think about who serves the low-income market. Ten years ago that was only microfinance institutions, but today it’s banks, telcos, and digital financial services.
A lot of our partners now are commercial banks serving the low-income or underbanked segment. And so, in parallel, we’ve expanded the indicators to other tracks under the gender performance initiative.
We tested the indicators with one of our partners in Latin America, in Colombia; a partner in Uganda, Finance Trust; and a partner in India called Ujjivan. All three are very strong financial institutions in their own markets, and of three very different sizes. Finance Trust is probably the smallest – about 200,000 clients – and Ujjivan has around 2 million. Ujjivan only serves women; that’s a very common model in India. But just because you serve women doesn’t mean you’re serving them well, and so we thought it was important to include that perspective.
The indicators vary by level of complexity in terms of implementing and tracking; is there a lot of training required for people in different institutions to understand how to measure performance accurately?
Definitely. Those complexity rankings actually came out of those pilots, because [some of the indicators] are an easy add-on and [some] take a lot more investment. We have our entire network report against a good chunk of the indicators. Some of them are at an institutional level – for example, the size of a deposit portfolio disaggregated by gender, or the growth rate of a client base. Those are quite easy for any institution to report. Some are much more internally focused, for example the uptake of individual products by men and women. Those are obviously harder for us to compare across institutions.
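The “easy to report” institutional-level indicators described here boil down to simple aggregations over client records. A minimal sketch of that kind of computation follows; the field names and sample data are illustrative assumptions, not Women’s World Banking’s actual schema or indicator definitions.

```python
# Hypothetical sketch: two of the simpler institutional-level indicators,
# computed from raw client records. Field names and sample data are
# illustrative only.

clients = [
    {"gender": "F", "deposit_balance": 120.0, "active": True},
    {"gender": "F", "deposit_balance": 80.0,  "active": True},
    {"gender": "M", "deposit_balance": 300.0, "active": True},
    {"gender": "M", "deposit_balance": 50.0,  "active": False},
]

def deposit_portfolio_by_gender(records):
    """Total deposit balance held, disaggregated by gender."""
    totals = {"F": 0.0, "M": 0.0}
    for r in records:
        totals[r["gender"]] += r["deposit_balance"]
    return totals

def pct_women_clients(records):
    """Share of active clients who are women."""
    active = [r for r in records if r["active"]]
    women = sum(1 for r in active if r["gender"] == "F")
    return women / len(active)

print(deposit_portfolio_by_gender(clients))  # {'F': 200.0, 'M': 350.0}
print(round(pct_women_clients(clients), 2))  # 0.67
```

The point of the sketch is that once client records carry a gender field, these indicators are a one-pass aggregation – which is why, as the interview notes, they are easy for any institution to report.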
When we get into the more complex ones, we have some social indicators that are quite advanced. It’s more about looking at poverty levels and the assets of clients over time, and those definitely need some training. Some really need a bit of soul searching, for institutions to decide that they’re willing to invest in gathering information about the social impact of their work, on women in particular. That, I think, involves a little bit more handholding, but overall any of these indicators or data points can be collected quite easily. It’s more about getting the institution to value actually investing in, analyzing, and using that data.
You mentioned the indicators have evolved since you first started; can you talk us through how that happened?
We had a 1.0 version of the manual which was very credit-focused. A lot of our partners at the time were microfinance institutions; some of them legally couldn’t accept savings, so they were mostly focused on loans.
The second iteration of the indicators focused on the savings side. We tested a new set of indicators around savings with our partner in Nigeria, Diamond Bank. We’ve completed the pilot but haven’t published results yet. We also went in with our partner in Jordan and did some analysis, because we had developed an insurance product and wanted to look at insurance indicators for gender performance. That hasn’t been published yet, but we’re adding it to our toolkit of indicators.
Are the indicators weighted more towards social impact or financial impact?
Both. Sometimes in the financial inclusion space we’ve been bucketed into social performance. We absolutely want to measure change in assets and income over time. We have some indicators looking at household dynamics that we’ve added in the most recent version; that is as important as operational and financial data in understanding how you’re serving the women’s market.
Some of our most impactful findings have come from financial indicators. A WWB colleague noticed that the loan portfolio going to women in one of the institutions she was invested in was declining, even though women actually had better repayment rates. When you take that kind of data to the management of an institution and say, “let’s just look at the numbers and see that women are actually your better clients – so why is your lending to women declining?” – that’s a huge story to tell. We really value those financial numbers.
What surprised you in the data?
We did some analysis showing that our partners that serve a majority of women clients have lower portfolio at risk, better repayment rates, and may have a higher return on assets. Really interesting trends like that continue to make the business case for serving the women’s market. That’s at the network level; my dream would be to have that at an industry level, so we could look across all financial institutions. Obviously, that’s where we get to the dissemination of the gender performance indicators, and why we advocate for others to use them.
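Portfolio at risk, mentioned here, is a standard microfinance risk measure: the share of the outstanding loan portfolio held in loans overdue past a threshold (commonly 30 days, “PAR30”). A minimal sketch under that standard definition follows; the sample loans are illustrative, not data from the analysis described.

```python
# Hypothetical sketch of portfolio at risk (PAR): the share of the
# outstanding loan portfolio in loans overdue beyond a threshold,
# commonly 30 days (PAR30). Sample loans are illustrative only.

loans = [
    # (outstanding_balance, days_overdue)
    (1000, 0),
    (500, 10),
    (800, 45),
    (200, 90),
]

def par(records, threshold_days=30):
    """Outstanding balance of loans overdue past the threshold,
    divided by the total outstanding portfolio."""
    total = sum(bal for bal, _ in records)
    at_risk = sum(bal for bal, days in records if days > threshold_days)
    return at_risk / total

print(round(par(loans), 2))  # 0.4
```

Comparing this ratio between institutions with majority-women and majority-men client bases is the kind of network-level analysis the interview describes.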
The other interesting findings that come out are really at the individual institution level. We’re often on site with partners developing new products, and what we can see when we analyze a specific institution or a specific product is fascinating. Our pilot in Nigeria, for example, was around our savings product. We wanted to understand how women versus men were using it.
We also saw that the average savings balances for men were actually higher. We’d seen research showing that women said they were saving more, so we wanted to dig into what was happening. We saw that women were depositing more frequently than men, in smaller amounts, but were also withdrawing a lot less frequently. Even if their balances overall were smaller, the amount they were keeping in the account as a percentage of everything they’d deposited was actually much higher. It was a longer-term build, if you will. Men were putting in bigger amounts but taking them out much more frequently, while women were building up their assets. That kind of analysis is really fascinating.
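The pattern described above – frequent small deposits, infrequent withdrawals, and a higher share of deposits retained – can be sketched as a simple per-gender transaction profile. The transaction records and field names below are illustrative assumptions, not Diamond Bank’s data.

```python
# Hypothetical sketch of the transaction analysis described: deposit
# frequency, deposit size, withdrawal frequency, and share of deposits
# retained, by gender. Sample transactions are illustrative only.

txns = [
    # (gender, type, amount)
    ("F", "deposit", 10), ("F", "deposit", 15), ("F", "deposit", 12),
    ("F", "deposit", 8),  ("F", "withdrawal", 5),
    ("M", "deposit", 50), ("M", "deposit", 40),
    ("M", "withdrawal", 45), ("M", "withdrawal", 30),
]

def savings_profile(records, gender):
    deposits = [amt for g, t, amt in records if g == gender and t == "deposit"]
    withdrawals = [amt for g, t, amt in records if g == gender and t == "withdrawal"]
    total_dep = sum(deposits)
    return {
        "deposit_count": len(deposits),
        "avg_deposit": total_dep / len(deposits),
        "withdrawal_count": len(withdrawals),
        # Share of everything deposited that stays in the account.
        "retention_rate": (total_dep - sum(withdrawals)) / total_dep,
    }

print(savings_profile(txns, "F"))
print(savings_profile(txns, "M"))
```

In this toy data, women make more, smaller deposits and fewer withdrawals, so their retention rate is far higher even though men’s average deposit is larger – mirroring the behavioral finding in the pilot.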