Assessing Impact: some current and key issues
a discussion paper
Alana Albee is a founding member of the Caledonia Centre for Social
Development. She has been an international consultant on participation, micro-finance and
community development for many years and takes up the post of Social Development Advisor
for DFID in Tanzania in May 1999. Alana is married with one daughter and runs her
consultancy business from Inverness in the UK.
In this paper she suggests that the present focus on participatory impact assessment (PIA) is drawing attention away from the need for a whole-system view of monitoring and evaluation in which there is systematic reflection on the methods of intervention as well as on their impact.
Assessing Impact:
some current and key issues
A.Albee, April 1999
- Impact Assessments
- The Issue
- Social Audit: a potential role
- Concluding remarks
- Further information
- References
- Acknowledgements
Are we in danger of disproportionately
emphasising primary stakeholders' views to the exclusion of organisational and other
factors that influence impact?
Participatory Impact Assessments (PIAs) are being undertaken by many
organisations working in development, but is the current emphasis on primary stakeholders'
views overshadowing organisational factors which also influence impact?
This article highlights the need to place PIAs within the wider context of participatory monitoring and evaluation, as an integral component rather than a parallel stream of information. It also stresses the need to assess other factors such as efficiency, effectiveness, replicability and sustainability, and briefly introduces the potential role and limits of social auditing.
Impact assessments are most often viewed as a means of judging
performance by understanding changes (intended or otherwise) experienced by primary
stakeholders as a result of development interventions. They can help distinguish whether a
project intervention is in fact achieving its objectives, whether or not these objectives
remain relevant over time, and whether or not the best action strategies have been pursued
(Estrella and Gaventa, 1997).
Although it may be at least partially the case that the current emphasis on impact assessments is rooted in donor concerns about poor project performance and pressure to provide evidence of the quality of work with the poor (Goyder, 1996), for some organisations impact assessments are also considered an important step along the complex path of shifting ownership and control of interventions to primary stakeholders.
To be considered participatory, impact
assessments frequently use tools originating from other forms of participatory enquiry,
such as PRA and PLA. These are incorporated in an effort to involve local communities,
groups and individuals in data gathering, analysis and archiving (Noponen, 1997). In most PIAs, participatory tools are coupled with the use of indicators, and increasingly 'beneficiary-identified' indicators are being used (Goyder, 1996). Questions are emerging about the underlying assumption that participation in a PIA is in itself empowering (S. Siddiqi in ActionAid, 1998; Albee, 1999), and about the limits of participatory methods in terms of time and unrealisable expectations (D. Kuppuswami, S. Sanjiri and C. Owusu in ActionAid, 1998; RYT Omar, 1998). The importance of these deliberations should not be underestimated, yet the question remains whether the current focus on the 'how to' aspects of PIA has overshadowed the more strategic 'why' issues which relate to PIA within the broader context of participatory monitoring and evaluation (PM&E).
During the 1980s, impact assessment was most frequently undertaken as
part of broader participatory evaluation approaches. In general terms these assessed
change and its significance in relation to effectiveness, efficiency, relevance, impact
and sustainability. It was argued that any single evaluation may not be able to
examine each of these factors comprehensively, but that each should be taken into
consideration (Feuerstein, 1986; Guba and Lincoln, 1989). This integration of factors
(including, but not exclusively impact) has been eroded during the 1990s by the trend to
develop ever increasing numbers of participatory forms of enquiry. This has partially been
in response to organisations finding it too difficult and time consuming to undertake full
participatory monitoring and evaluation processes. Some organisations have, in essence,
sliced off PIA from broader PM&E. Is it adequate to focus exclusively on primary stakeholders' views when determining impact?
PIAs put a key focus on enabling the poor to voice their views
about impact. The importance of this is not in doubt. Local people can and often do assess
the impact of development interventions in an organic way and by using a variety of their
own (often informal) indicators. Many organisations need to improve their capacity to
facilitate, listen and learn from this. The reality of most development efforts in the
1990s is that there is limited local control and influence over the setting of the agenda
of interventions, the methodologies used or their subsequent management and
implementation. The intention of many organisations is to improve on this. Nonetheless,
assessment of donor experiences reveals that only a few cases actively involve primary
stakeholders in shaping development decisions (Rudqvist, 1996). Several overviews suggest
that participation varies during the project cycle, with less involvement of all
stakeholders at early stages (design and planning) and later stages (analysis and
dissemination).
If indeed primary stakeholders have not been involved throughout the
project or programme cycle, are we expecting too much from impact assessments? Can we
realistically expect primary stakeholders who have not been involved throughout the
project conceptualisation and management to address factors such as an organisation's core values and objectives, efficiency, mode of delivery and sustainability?
While the impact of an intervention may be partially determined through
an assessment involving primary stakeholders, organisational factors also need to be
assessed. PIAs can only partially do this, and therefore should be placed within the
broader context of PM&E. A study of key literature (Estrella and Gaventa, 1997) placed impact assessment amongst the five general functions of PM&E. These included:
- project management and planning
- organisational strengthening or institutional learning
- understanding and negotiating stakeholder perspectives
- public accountability
- impact assessment
Recognition that this now needs to be revisited is emerging in some organisations. In the recent ActionAid Impact Seminar (Dec 1998), for example, many of the issues raised by participants reflected this need.
Social Audit: a potential role
Hopes for improvements in approaches to monitoring and evaluation, and by extension PIAs, are increasingly pinned on social auditing. Some organisations are interested in social auditing as a means of underpinning and strengthening new institutional forms and processes (Kirk, DFID, 1997), while others see its usefulness in filling the gaps between assessing work carried out by an organisation and changes taking place within an organisation (Roche, OXFAM, 1998). Can social auditing fill the need for organisational assessment? How well has it worked for development organisations to date?
In brief, the social auditing process enables an organisation (or
business) to assess and demonstrate its social, community and/or environmental benefits
and limitations. The emphasis is generally on the organisation rather than the projects or programmes within it, and on establishing a systematic and accountable approach which recognises all stakeholders (NICDA, 1997). When well embedded into an organisation,
a social audit can provide an annual overview about how well an organisation has addressed
its objectives and core values, as well as its effectiveness, efficiency and equity. Thus
it is more than an assessment of the mechanics of an organisation. It sets the scene for
what has been expressed as the assumptions, norms, values and theory of business
which can be powerful filters affecting what each person sees (Drucker, 1994).
The greatest concentration of social
audits by development organisations began in 1996 in Northern Ireland. Seventeen
organisations have supported their staff to establish social auditing systems and to
complete accredited (NVQ level 4) training. The training has involved learning both the
concepts and practice of social auditing by establishing systems within their own, or
partner organisations. External verification has been established through the Open College
Network, and a panel of verifiers provides an independent assessment of each audit. This
reduces the common risk known as 'managerialism' (Guba and Lincoln, 1989): a rather too cosy relationship between the managers who commission the work and the auditor. The course and
support systems are orchestrated by NICDA: The Social Economy Organisation together with
technical consultants who have developed and annually refined the social auditing
methodology.
From this experience, it is clear that social auditing is no panacea.
It faces many of the typical dilemmas of other forms of monitoring and evaluation. A key
issue has been how to balance the need to be realistic about the amount of information
which can be manageably assessed, with the need to ensure adequate coverage of what an
organisation is doing. Within many organisations it may be impossible to find the
resources to undertake a full annual audit of all values and activities. To make it more
realistic some organisations have developed strategies in which a slice of the
organisation is audited each year. Yet, to ensure it is adequately comprehensive two
factors have been identified as important (Albee, NICDA, 1999):
- policy commitment and planning by the organisation towards covering all sections of the organisation through a sequence of audits (e.g. the Traidcraft approach), and
- annual assessment of the core values of the organisation, regardless of the section of the organisation which is focused on in any particular year.
Challenges currently being faced by the social auditors in Northern
Ireland may reveal lessons for others considering the process. The most fundamental
include (ibid):
- Time: the process often faces difficulties when it is added on to an existing busy schedule. It requires time to think through the particular system and its elements, as well as to actually do the audit.
- Embedding: shifting away from social audit being considered a particular person's responsibility within an organisation, towards a broader sense of ownership. Northern Ireland experience shows that in most instances older organisations face more internal difficulties adapting and embedding social audit than younger organisations.
- Documentation: determining how best to present the information, and the extent to which all information is made available to all stakeholders, are also common challenges.
Concluding Remarks:
The validity of conventional (and mainly non-participatory) methods of
monitoring and evaluation came under much scrutiny in the late 1980s and early 1990s.
Recognition of the inability of social cost-benefit analysis and social impact assessment
to accurately and adequately reflect the dynamics of change resulted in calls for
involvement of those most aware of, and better able to explain, qualitative developments:
i.e. the so-called project beneficiaries (Marsden et al 1994). The extent of this
'participation' and how it is utilised is at the heart of PIA. Yet we are in danger of excluding organisational factors which influence impact, such as the organisation's own objectives and core values, as well as its interventions' effectiveness, efficiency and sustainability.
Organisational analysis, through the use of tools such as social auditing, may go some way towards addressing this, but it needs to be done as part of a holistic approach to monitoring and evaluation, as an integral part rather than a parallel stream (Cole, 1997). This may be agreed in principle by many organisations, but one of the most frequently cited constraints on M&E is time. This has been a major reason cited for shifting away from the broad-based participatory evaluation of the 1980s and early 1990s, and, despite the narrower focus of PIA and social audit, it is also often cited as their key limitation. Addressing this requires a paradigm shift in thinking, from
one in which monitoring and evaluation are considered add-on tasks to one in which they
are given central importance (Fernandes and Tandon, 1981). There are illustrations of how
this has been effectively done in large-scale urban interventions in Bolivia and Sri Lanka
(UNCHS/DANIDA, Community Participation Training Programme, 1988-1995). These have
highlighted the key role of local national consultants as researchers and evaluators
throughout the life of interventions. There is much to be learned from such experiences!
Further information:
- One-page briefs on managerialism and other key aspects of evaluation in Guba and Lincoln's classic text on Fourth Generation Evaluation (Sage, London, 1989), see http://www.srds.ndirect.co.uk/4th.htm
- National consultants as researchers and evaluators, see The Final Report by Prof. Tilakaratna, UNCHS/DANIDA, Community Participation Training Programme, Colombo, Jan 1990.
References:
- ActionAid, Impact Assessment Seminar (report), Dec 1998.
- Albee, A., Participation Matters: an analysis of the use and impact of participatory methods in Egypt, CDS and DFID, 1999.
- Albee, A., External Verification Report: Social Auditing, NICDA, Jan 1999.
- Center for Development Services, A Study of the Views of Community Members about PRA Methodologies, Cairo, 1998 (unpublished).
- Cole, A., P. Evans and C. Heath, Impact Assessment, Process Projects and OPRs, DFID, 1997.
- Drucker, P.F., The New Realities, Mandarin, 1991.
- Estrella, M. and J. Gaventa, Who Counts Reality?, IDS, 1997.
- Fernandes, W. and R. Tandon, Participatory Research and Evaluation: Experiments in Research as a Process of Liberation, Indian Social Institute, Delhi, 1981.
- Feuerstein, M-T., Partners in Evaluation, Macmillan, 1986.
- Goyder, H. (et al), Participatory Impact Assessment, ActionAid, 1997.
- Guba, E. and Y. Lincoln, Fourth Generation Evaluation, Sage, 1989.
- Kirk, C., Notes on Participatory Monitoring and Evaluation in DFID: challenges and potential, DFID, 1997.
- Marsden, D. (et al), Measuring the Process: Guidelines for Evaluating Social Development, INTRAC, 1994.
- NICDA, Social Audit Training Manual, Derry, 1997. NICDA_Derry@compuserve.com
- Noponen, H., Participatory Monitoring and Evaluation: A Prototype Internal Learning System for Livelihood and Micro-Credit Programs, in Community Development Journal, vol. 32, no. 1, 1997.
- Rudqvist, A. and P. Woodford-Berger, Evaluation and Participation: some lessons, SIDA, 1996.
Acknowledgements:
The assistance of George Clark and Graham Boyd of the Caledonia Centre for Social Development was greatly appreciated in the preparation of this paper. It is available on http://www.caledonia.org.uk where hyperlinks to support materials are also available.
Email comments - albee@caledonia.org.uk