Learning from experience
Determining one's own impact is a tricky undertaking for charitable organisations – but it contributes significantly to their growth.
“In our funding, we place great emphasis on financing particularly impactful projects and organisations. We also want to keep increasing our own effectiveness. To do this, we want to know where our funding is already working well and in which areas we can do better.”
“From my business experience, I have always paid close attention to detail: what are my employees doing, is an investment worthwhile, how well are our suppliers and business partners performing? In my philanthropic engagement, too, I want to make sure that my funds are being used wisely and efficiently.”
Learning from evaluation and impact measurement
For many philanthropists, it is important to achieve the greatest possible impact with their funds and to know whether they are actually succeeding. However, genuine evaluation and impact measurement cost a lot of time and money – resources that are then no longer available for funding or project activities.
There is no simple rule of thumb for how much money is appropriate for evaluation – for example, as a percentage of the funding amount. Donors and foundations therefore have to weigh up: how much is it worth to them to know exactly whether and how their support is having an impact?
In many cases, the decisive question is: what does the donor or the foundation want to learn from the evaluation?
“It does not make sense to attempt an ambitious evaluation with too few resources. Assuming sustained behaviour change on the basis of a single self-assessment with 15 participants is misguided.”
Impact on various levels
Evaluation and impact measurement answer two main questions: does the chosen approach work at all – how effective are the measures? And how well does it work – how efficiently are the funds used in relation to the impacts achieved?
Especially in the case of funding for organisations, it makes sense to distinguish between impact at various levels. The example of a programme in which parents learn to better support their children in educational processes illustrates this well:
- Ultimately, the impact should be on the children – their chances in the education system should improve.
- The programme of the non-profit organisation is aimed at the parents – here they are expected to change their behaviour towards the children.
- The foundation supports the non-profit organisation – it should be enabled to implement the programmes.
Can impact be measured?
In the article on impact orientation in strategy development, the answer to this question is: “Yes. And no.” What does this mean?
The first question is: what do we want to know?
In our example, this might look like this:
- For the children: is the programme successful – are their educational opportunities increasing?
- With the parents: is the programme working – are the parents changing their parenting behaviour?
- With the organisation: is the support helpful – can the organisation reach more parents from the target group with the programme?
Depending on this interest, suitable data then has to be obtained. In practice, this is often very demanding and time-consuming.
- With the children, one could systematically collect data on how well they are doing at school, whether they are being promoted to the next grade according to their age and how their learning is developing.
- In the case of the parents, questionnaires could be used to collect self-assessments, or observations by third parties in certain situations could be used to assess whether their parenting behaviour is changing.
- At the organisational level, one can ask how many parents are reached, how the cost per course develops, what effect the success of the programme has on the organisation's public profile, or what funds can be raised from other sources for the programme (a simple sketch of such indicators follows below).
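At the organisational level in particular, such indicators can often be derived from figures the organisation already records. The following sketch is purely illustrative – all numbers are invented – and simply shows how reach and cost per course might be tracked:

```python
# Purely illustrative: simple organisational-level indicators for a
# parent-support programme. All figures are invented.

courses = [
    {"participants": 12, "cost": 1800.0},  # one course run: parents reached, total cost
    {"participants": 15, "cost": 1950.0},
    {"participants": 9,  "cost": 1700.0},
]

parents_reached = sum(c["participants"] for c in courses)
total_cost = sum(c["cost"] for c in courses)

cost_per_course = total_cost / len(courses)
cost_per_parent = total_cost / parents_reached

print(f"Parents reached: {parents_reached}")
print(f"Cost per course: {cost_per_course:.2f}")
print(f"Cost per parent: {cost_per_parent:.2f}")
```

Tracking a handful of such figures over time is often enough to see whether reach is growing and costs per participant are falling – without any elaborate evaluation apparatus.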
This data is often fraught with uncertainty. There is also always the question of whether the observed changes can actually be attributed to the impact of the organisation or the funder, or whether they have other causes.
Excursus: Trials with randomised control groups – the gold standard of impact measurement?
When dealing with large groups, “randomised controlled trials” (RCTs) are used in social research, but also for example in medicine. Here, a group with essentially comparable characteristics is randomly divided into an intervention group and a control group. The intervention group takes part in the project (or receives the drug to be tested), the control group does not (or receives an ineffective placebo).
If the intervention group then shows effects that do not occur in the other group, one can assume that these are due to the project (or the drug), because the groups do not otherwise show any differences that could explain the effect.
Such experiments with randomised control groups require large, reasonably homogeneous groups in order to fulfil the condition “essentially the same” and to allow statistically meaningful statements; in practice, this is extremely difficult. In some cases, it is also considered ethically questionable to withhold aid from a randomly selected part of a needy group.
For this reason, one sometimes works with “waitlist control groups”, which also receive the help, but only later. Under certain conditions, RCTs are therefore a reliable method of verifying effects, but they are only suitable in selected cases with a lot of data, and for smaller projects the expense is neither sensible nor justifiable.
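For readers who want to see the principle in miniature: the following sketch uses invented data and an assumed effect size, and it deliberately omits the statistical testing a real trial would require. It only illustrates the core idea described above – random assignment followed by a comparison of outcomes between the two groups.

```python
# Illustrative sketch of the core RCT logic: random assignment and a
# simple comparison of average outcomes. All data are simulated and the
# "programme effect" is an assumption built into the simulation.
import random
import statistics

random.seed(42)

participants = list(range(200))     # a reasonably large, comparable group
random.shuffle(participants)

intervention = participants[:100]   # takes part in the programme
control = participants[100:]        # does not (or only later, as a waitlist group)

def simulated_outcome(in_programme: bool) -> float:
    # Hypothetical outcome score: the same baseline for everyone,
    # plus an assumed effect for those in the programme.
    baseline = random.gauss(50, 10)
    return baseline + (5 if in_programme else 0)

intervention_scores = [simulated_outcome(True) for _ in intervention]
control_scores = [simulated_outcome(False) for _ in control]

# Because assignment was random, a difference in average outcomes points
# to the programme itself rather than to pre-existing group differences.
difference = statistics.mean(intervention_scores) - statistics.mean(control_scores)
print(f"Estimated programme effect: {difference:.1f} points")
```

In a real trial, the difference would of course have to be tested for statistical significance and the sample sized accordingly – which is precisely the expense that makes RCTs unrealistic for most smaller projects.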
“In my philanthropic work, measuring impact is not very important. I don’t want to spend too much money on it; it’s more important to me that the aid reaches the people. I can live with the uncertainty of whether this is really successful – I trust that the impact of the partners I support is as important to them as it is to me.”
Trust in impact
Every philanthropist and every foundation must decide for themselves what proportion of their funds they want to invest in impact measurement or evaluation. Some choose to fund only projects or programmes that have already been evaluated by a third party. Others are satisfied if funding partners can explain convincingly how a project is supposed to achieve its impact and why these assumptions are realistic.
“I am firmly convinced: there is no project or programme that cannot be made even more efficient, effective and sustainable. For this, I am willing to allocate considerable resources for constant review and improvement. Everyone benefits from these improvements – the organisations I work with, but also other organisations that learn from them, and in the end, of course, the respective target groups of the programmes and projects.”
Continuous improvement
From the Japanese business world we know the concept of kaizen – the continuous striving for improvement and the constant questioning of assumptions. Non-profit organisations can also continuously learn. An important prerequisite for this is robust data on the effectiveness and efficiency of their work.
Most organisations lack the resources for this. For them, it is a stroke of luck when funders are willing to finance impact research, evaluation and data collection – and also to share in bearing the consequences. These can range from minor adjustments to a project to the complete reorientation of programmes or strategies.
“At the moment, we are mainly interested in the impact of our funding on partner organisations. Are our requirements tying up resources that should be available for something more important? Do our requirements or expectations lead the organisation to carry out certain activities only because of us? Do we create dependencies with our funding that put the organisation in a difficult position?”
Effects on the funded organisations
No funding is without consequences for the funded organisation. The larger the funding and the higher the share of the budget, the more drastic these effects can be. In the best case, funding helps the organisation to learn, grow and fulfil its tasks better. In the worst case, it leads to high administrative costs, sometimes combined with concessions to the funder that tend to distance the organisation from its actual goal (“mission drift”).
If, for example, there is only project funding for work with a certain target group, then the temptation can be great to design suitable projects – even if the organisation would actually like to set other strategic priorities.
Keeping an eye on power imbalances
The relationship between funders and grantees is often characterised by a power imbalance: even though both depend on each other, the decision-making authority lies with the foundation or philanthropist. This makes it difficult for funders to get honest feedback on the quality of their grant-making; hardly any organisation dares to bite the hand that feeds it.
One tool that foundations increasingly appreciate is the anonymous survey of the organisations they fund, for example in the form of the Grantee Perception Report of the Center for Effective Philanthropy or similar surveys. Here, foundations learn how much effort their application processes and reporting expectations create, whether grantee organisations feel well understood, and how they could further improve the collaboration.
Some philanthropists and foundations directly involve the community of organisations working in the field in their strategy and planning processes – some even in their decision-making procedures.
Learning from experience
Philanthropic action is not without consequence – even if the desired effects do not always occur. Regular reflection helps to better understand impacts and to further develop one’s own commitment in a targeted manner.