
June 8, 2016

Blueprint for the Future of Egypt

Filed under: Uncategorized — Mr. Craig @ 5:04 am

Churchman (1979) identified four common ways of intuitive problem solving and called
them the enemies of the systems approach. These are:

a) Politics. The power to influence and get things done, often aimed at
enhancing personal power. These are the bullies of life.
b) Religion, in the sense of the absolute, unconditional, unquestioned,
dogmatic, and sometimes fanatical belief in the truth of something. These
are the terrorists of life.
c) Morality. The choice between right and wrong made by excluding all
competing perspectives. These are the constipated people of life.
d) Aesthetics. The intuitive choice between beauty and ugliness. These are the
artists of life.

Logic is most often understood to be scientific logic. The scientific method usually follows these steps: observation, hypothesis, experimentation, and reflection upon the results. But there are also other kinds of logic, of which philosophical logic is the best known. The trouble with philosophical logic is that there are numerous competing schools, disagreeing not only on what philosophy is, but also on whose method of reasoning is likely to be better (Johnstone, 1965). Hence, at this point in the argument, we are left with two dilemmas: is intuitive inquiry a valid form of inquiry, and which, if any, of the competing methods of logic ought to be the preferred method of reasoning?

THE DESIGN OF INQUIRING SYSTEMS
It is towards the solution of these two problems that Churchman turned in The Design of
Inquiring Systems (1971) and The Systems Approach and Its Enemies (1979). Put in different words, the question is: what should inquiry as a system look like?
Although Churchman did not say so directly, it is my belief that the basis of his approach
is heavily indebted to Hegel’s philosophy. But this statement could itself be verified by
Hegelian logic, as I shall attempt to show. Three cornerstones of Hegel’s work
(Johnstone, 1965) are relevant to the rest of the argument:
a) The dialectic method. Contradictions may be overcome by including them in
a larger whole.
b) Philosophy, and therefore methods of inquiry, is an expression of the history
of a civilisation and as such unfolds over time.
c) To explain purpose, we need to find a whole of which our experiences reveal
an aspect.

According to McKeon (1965), philosophical methods may be classified under three main headings: dialectic, logistic, and inquiry. Dialectic methods attempt to unify experiences in a larger whole; logistic methods attempt to trace knowledge to its simplest elements, in other words, those about which there can be no dispute. Methods of inquiry attempt to
solve problem situations one at a time in order to satisfy a particular purpose. Both
logistic and inquiry systems require a choice between methods of reasoning to the
exclusion of all others. For dialectic methods, the contradicting viewpoints in themselves
are a valid reason for debate. In other words, in terms of the dialectic methodology, the
contradiction may be resolved by including the differing viewpoints, or in this instance,
methodologies, in a larger whole. This is precisely what Churchman’s inquiring system
does.

An inquiry system was already identified as a human activity system, and is therefore
inhabited by people. It is people who become aware of a problem situation and ask
questions, and inquiry is therefore an activity that alters the lives and environment of
people. The human structure of an inquiring system refers to the way people organise
themselves for inquiry to take place. Churchman (1971) suggested that inquiry influences, and is influenced by, people acting in the following roles:
a) Clients are people who have detected a problem situation or have a question. They are affected by, and hope to benefit from, the outcome of the inquiry.
b) Decision takers are people who control the resources that, if used, may alter the problem situation.
c) Planners are people who try to align the wishes of clients and decision takers.
But there is also the symbolic structure of an inquiring system: namely, the method people use to transform information into knowledge (Diagram 1). As mentioned earlier, this transformation may proceed by intuition or by logic, and the latter will be
the focus of our attention in the next section.
PROCESS
By definition, process describes those factors that are transformed during the period that
a system is under observation. Such factors are often found to be matter, energy, or
information (MEI). In inquiring systems, information is transformed into knowledge. That
raises the questions: what is knowledge, how do we verify our knowledge, and how is
knowledge transformed?
WHAT IS KNOWLEDGE?
One way of conceptualising knowledge is as a system. In terms of this notion, a
hierarchy exists; namely, knowledge consists of information that in turn consists of data
(De Vree, 1994). Data are bits of information that are not contextually linked and by themselves confer no meaning. Bits of data become information when joined in strings, and information, when connected to a specific context, becomes knowledge. Let us borrow an example from language. M, O, U, S, E are bits of data that confer no meaning. MOUSE becomes information when the data are linked, but it has to be connected to a specific context to become knowledge. MOUSE may be a small furry rodent, but it may also be a pointing device for moving a cursor on a computer screen. The point of this perspective on knowledge is that the information we hope to manipulate is contextually dependent. Ignorance of context translates into incorrect or absent knowledge. There is another very important aspect. Inquiry into the relationship between information and context may alter resident knowledge, and when this happens, learning takes place. Inquiry is therefore
an important method of learning. But in addition, inquiry questions current knowledge
and by doing so, adds to existing knowledge and corrects incorrect information.
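As a toy illustration of this hierarchy (my own sketch, going no further than the MOUSE example above), the snippet below shows the same string of data acquiring different meanings depending on the context it is linked to; the contexts and meanings are invented for illustration only.

```python
# Toy illustration of the data -> information -> knowledge hierarchy.
# The contexts and meanings below are invented for illustration only.

data = ["M", "O", "U", "S", "E"]     # unlinked bits of data: no meaning

information = "".join(data)         # data joined into a string: "MOUSE"

# Knowledge arises only when information is linked to a specific context.
contexts = {
    "zoology": "a small furry rodent",
    "computing": "a pointing device for moving a cursor on a screen",
}

def to_knowledge(info: str, context: str) -> str:
    """Return contextualised knowledge, or flag a missing context."""
    meaning = contexts.get(context)
    if meaning is None:
        # Ignorance of context translates into absent knowledge.
        return f"'{info}' carries no knowledge in an unknown context"
    return f"In {context}, '{info}' means {meaning}"

print(to_knowledge(information, "computing"))
print(to_knowledge(information, "astronomy"))
```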
HOW DO WE KNOW?
So, how do we know that what we know is accurate and may be depended upon to
make correct decisions or find accurate answers? It is to answer this question that we
have to turn to Churchman’s (and by implication Hegel’s) work again.
Firstly, we have to have an understanding of how the total body of knowledge came and
comes about, and this requires an understanding of the history of philosophical
knowledge. Churchman and Ackoff (1950) grouped the different schools of logic into four categories, and the contribution of each to an inquiring system will be discussed in turn.

Critical systems heuristics (Ulrich 1983) is a framework for reflective practice based on
practical philosophy and systems thinking. The name stands for three major concerns.
First, the aim is to enhance the ‘critical’ (reflective) competence not only of well-trained
professionals and decision-makers but also of ordinary people. Second, reflective practice
cannot be secured by theoretical means only but requires ‘heuristic’ support in the form
of questions and argumentation tools that make a difference in practice. And third,
‘systems’ thinking can provide us with a useful starting point for understanding the
methodological requirements of such an approach to reflective practice. Here is a short
explanation of each of these three pillars of CSH:

Heuristics means literally ‘the art (or practice) of discovery’; the Greek verb ‘heuriskein’ means to find or to discover. In professional practice, heuristic procedures serve to
identify and explore relevant problem aspects, assumptions, questions, or solution
strategies, in distinction to deductive (algorithmic) procedures, which serve to solve
problems that are logically and mathematically well defined. Professional practice cannot
do without heuristics, as it usually starts from ‘soft’ (ill-defined, qualitative) issues such
as what is the problem to be solved and what kind of change would represent an
improvement.

A critical approach is required since there is no single right way to decide such issues;
answers will depend on personal interests and views, value assumptions, and so on. Since Christopher Columbus at the latest, we have known that discovery goes hand-in-hand with deception! A critical approach does not yield any single right answer either, but it can
support processes of reflection and debate about alternative assumptions. Sound
professional practice is critical practice.
Systems thinking is relevant because all problem definitions, solution proposals,
evaluations of outcomes, and so on, depend on prior judgments about the relevant ‘whole
system’ to be looked at. Improvement, for instance, is an eminently systemic concept, for
unless it is defined with reference to the entire relevant system, sub-optimisation will
occur. CSH calls these underpinning judgments ‘boundary judgments’, as they define the
boundaries of the reference system that is constitutive of the meaning of a proposition
and for which it is valid.

Boundary judgments determine which empirical observations and value considerations
count as relevant and which others are left out or considered less important. Because they
condition both ‘facts’ and ‘values’, boundary judgments play an essential role when it
comes to assessing the meaning and merits of a claim.
Claims are all assertions or suggestions to which we attach some relevance
(meaningfulness) and validity (justifiability) in processes of opinion formation, problem
solving, decision-making, action, or conflict resolution. Typical claims are: a problem
definition or an account of a problem situation; a solution proposal; a suggested measure
of success or an assumed general notion of improvement; an assertion of moral rightness;
a claim to knowledge or to rationality; and so on. All these types of claims are inevitably
partial (selective) in the dual sense of representing a part rather than the whole of the
total universe of conceivable considerations, and of serving some parties better than
others – no proposal, no decision, no action can get it equally right for everyone!
Merit is a pragmatic criterion in the sense of philosophical pragmatism and semiotics. For
a claim to have pragmatic merit, it is not sufficient that its formulation is grammatically
and logically coherent and semantically clear; it also needs to be relevant and acceptable
to those concerned in the light of the real-world consequences that it may have if it is
accepted as a basis of action. In order to clarify a claim’s meaning and to judge its merits,
we need to examine the question: What difference does it make in practice? Accordingly,
issues such as ‘Who will benefit and who not?’, ‘How does this claim deal with the
concerns of those who are not likely to benefit?’, or ‘What is the underlying notion of
improvement?’ are to be considered.
In the terms of CSH, the sum total of these considerations of fact and of value makes up
the reference system that gives meaning to a particular claim and conditions its validity.
In everyday language, we may speak of the ‘relevant context’ or of the ‘situation of
concern’ (meaning the perceived context or situation), but we usually do so in an intuitive
rather than a systematically reflected way. Consequently, in many discussions we fail to
achieve mutual understanding, since, due to divergent reference systems, we actually
speak about different subjects.

Clear and valid thinking, as well as productive communication, demands that we make clear to ourselves, and to everyone else concerned, what reference system we assume in each discussion. If we are not able to qualify the reference system for which we claim a proposition to be meaningful and valid, we do not really know what we are talking about, and we certainly risk that others will understand something other than what we intend. If, however, we are well aware of our boundary judgments but do not disclose them to others, we risk claiming too much: we fail to qualify our claims by pointing out their limitations.

Evaluation Based on Critical Systems Heuristics
Dr Martin Reynolds, Systems Department, The Open University, Milton Keynes, MK7 6AA, UK (m.d.reynolds@open.ac.uk)
1 Introduction
Critical systems heuristics (CSH) draws on the substantive work and philosophy of C. West
Churchman, a systems engineer who, along with Russell Ackoff during the 1950s and 1960s, defined
operations research in the United States. Churchman later pioneered developments in the 1970s of what
is now known as ‘soft’ and ‘critical’ systemic thinking and practice in the domain of social or human
activity systems. Churchman died in 2004. His legacy lies in signalling the importance of being alert to
value-laden boundary judgements when making evaluations. Boundaries are what we socially construct
in designing and evaluating any human activity system of interest (e.g., any situation of concern from a
kinship group, an organisation, or a larger entity such as a national health system). The primary
boundary of any human activity system is defined by ‘purpose’. Churchman’s work is characterised
by a continual ethical commitment to the overarching purpose of improved human well-being. In order
to fulfil such purposeful activity, there is always a need to broaden inquiry from the particular system
of focus so as to appreciate what Churchman calls the total relevant system. The effectiveness and
efficiency of a system of interest depends on the actual boundary judgements associated with that
system of interest. Churchman first identified 9 conditions or categories (including the category
‘purpose’) associated with any purposeful system of interest in his book The Design of Inquiring
Systems [1, 2]. He later extended these to 12 categories in a book provocatively entitled The Systems
Approach and Its Enemies, significantly taking into account 3 extra factors (‘enemies’) that lie outside
the actual system of interest but which can be affected by, and therein have an effect on, the
performance of the system [1, 2].
In the early 1980s a doctoral student of Churchman’s from Switzerland, Werner Ulrich, translated
Churchman’s 12 categories into an operational set of 12 questions which he called critical systems
heuristics [3]. Ulrich returned to Switzerland and worked with CSH as a public health and social
welfare policy analyst and program evaluator [4].
Section 2 introduces the basic toolbox of CSH, along with suggestions on when to use it and the
benefits of its use. Section 3 will guide you through a suggested operational use of CSH questions in a
process of evaluation. Section 4 provides a summary of an extensive case study in which CSH was
used for evaluating the role of public participation in natural resource-use planning. Section 5 provides
some advice for the practitioner in developing skills on using CSH for evaluation.
2. The toolbox
2.1 CSH questions
The 12 boundary-setting questions are grouped under four sources of influence: motivation, control, expertise, and legitimacy. My own adaptation of these questions is summarised in Table 1.
Table 1 Critical Systems Heuristic Questions for Evaluation
Sources of motivation
1 Beneficiary (‘client’): who should be /is the client or beneficiary of the service or
system (S) to be evaluated?
2 Purpose: what should be /is the purpose of S?
3 Measure of success: what should be/is S’s measure of success (or improvement)?
Sources of control
4 Decision maker: who should be/is the decision maker (in command of resources
necessary to enable S)?
5 Resources: what components of S ought to be /are controlled by the decision
maker?
6 Decision environment: what conditions ought to be /are part of S’s environment,
i.e. not controlled by S’s decision maker and therefore acting as possible
constraint?
Sources of expertise
7 Expert (or designer): who ought to be/is involved as providing expert support for
S?
8 Expertise: what kind of expertise or relevant knowledge ought to be/is part of the
design of S?
9 Guarantor: what ought to be /is providing guarantor attributes of success for S
(e.g., technical support, consensus amongst professional experts, experience and
intuition of those involved, stakeholder participation, political support…) and
hence what might be/ are false guarantor attributes of success (e.g. technical fixes,
managerialism, populism, tokenism..)?
Sources of legitimation
10 Witnesses: who ought to be /is representing the interests of those affected by but
not involved with S, including those stakeholders who cannot speak for
themselves (e.g. the handicapped, future generations and non-human nature)?
11 Emancipation: to what degree and in what way ought/are the interests of the
affected free from the effects of S?
12 Worldview: what should be /is the worldview underlying the creation or
maintenance of S? i.e. what visions or underlying meanings of ‘improvement’
ought to be /are considered, and how ought they be /how are they reconciled?
adapted from [5]
Two features of Table 1 need immediate elaboration.
1. The 3 questions associated with each source of influence address parallel issues: the first questions (1, 4, 7, and 10) address issues of social role; the second questions (2, 5, 8, and 11) address issues of role-specific concerns; and the third questions (3, 6, 9, and 12) relate to key problems associated with roles and role-specific concerns.
2. Each of the 12 questions in Table 1 is asked in two modes, thereby generating 24 questions in total. In CSH all questions need to be asked in a normative, ideal mode (i.e., what ‘ought’ to be…) as well as in the descriptive mode (what ‘is’ the situation…). Contrasting the two modes provides the source of critique necessary to make an evaluation.
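To make these two features concrete, here is a minimal sketch (not part of the original chapter) of how the 12 categories and their two questioning modes might be represented; the category names follow Table 1, while the question wording is a loose paraphrase that would normally be adapted to the context of use.

```python
# A minimal sketch (not from the chapter) of the CSH framework: 12 boundary
# categories grouped under 4 sources of influence, each asked in an 'ought'
# (normative) and an 'is' (descriptive) mode, giving 24 questions in total.
# Question wording is paraphrased from Table 1 and should be adapted to context.

CSH_CATEGORIES = {
    "Sources of motivation": [
        (1, "Beneficiary", "who", "the client or beneficiary of the system (S)"),
        (2, "Purpose", "what", "the purpose of S"),
        (3, "Measure of success", "what", "S's measure of success (or improvement)"),
    ],
    "Sources of control": [
        (4, "Decision maker", "who", "the decision maker in command of resources for S"),
        (5, "Resources", "what", "the components of S controlled by the decision maker"),
        (6, "Decision environment", "what", "the conditions outside the decision maker's control"),
    ],
    "Sources of expertise": [
        (7, "Expert", "who", "the provider of expert support for S"),
        (8, "Expertise", "what", "the relevant knowledge in the design of S"),
        (9, "Guarantor", "what", "the guarantor of success for S"),
    ],
    "Sources of legitimation": [
        (10, "Witnesses", "who", "the representative of those affected but not involved"),
        (11, "Emancipation", "what", "the freedom of the affected from the effects of S"),
        (12, "Worldview", "what", "the worldview underlying the creation or maintenance of S"),
    ],
}

def generate_questions():
    """Yield the 24 is/ought boundary questions in category order."""
    for source, categories in CSH_CATEGORIES.items():
        for number, name, wh, topic in categories:
            yield number, source, name, f"{wh.capitalize()} ought to be {topic}?"  # normative mode
            yield number, source, name, f"{wh.capitalize()} is {topic}?"           # descriptive mode

for num, source, name, question in generate_questions():
    print(f"[{source}] Q{num} {name}: {question}")
```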
These two features are represented in Figure 1, which might be used as a template for any boundary critique enquiry.

Fig 1: Recording Table for CSH Evaluation (adapted from [6])

For each of the four sources of influence, the recording table provides three rows (‘is’, ‘ought’, and a critique of ‘is’ against ‘ought’) across the three columns of social roles, role-specific concerns, and key problems:

Sources of motivation: Beneficiary/client | Purpose | Measure of improvement
Sources of control: Decision-maker | Resources | Decision environment
Sources of knowledge: Expert | Expertise | Guarantee
Sources of legitimation: Witness | Emancipation | Worldview

Some questions may appear familiar to an evaluator’s existing repertoire or stock-in-trade, and others
may appear less familiar. A few initial health-warnings might be appropriate: firstly, these questions
will only gain meaning for an evaluator when they are actually used in practice; and secondly, the precise wording of the questions may need changing with respect to different contexts of use and the preferred vocabulary of the user. With these two caveats in mind, evaluators are likely to discover as
well as nurture familiarity. Meanwhile, I will attempt to flesh out a little more meaning behind the
categories.
The four sources of influence are generic interdependent categories associated with any human activity
driven by a sense of purpose. The 12 categories of questions can first be delineated between those associated with people involved in the operations of the system (sources of motivation, control and expertise) and those associated with people not involved in the system but otherwise affected by its operations (sources of legitimation).
Identifying first the ideal purpose of the system of interest being evaluated (category 2) in the ‘ought’
mode, a CSH evaluation leads to an unfolding of key normative (i.e., ought mode) features. Stipulating
the intended beneficiaries (category 1) and associated measures of success (3) – i.e., being transparent
about the value-basis of the system – leads to questions regarding the resources or components needed
for success (5); who has control over such resources (4)? What relevant factors ought to lie outside
such control (6) but may have an important impact on the system’s performance? One such set of
factors requiring independence is ‘knowledge’ or expertise. What are the necessary types and levels of
competent (ideally, independent) knowledge and experience (8) required to ensure appropriate
implementation? Who ought to provide such expertise (7)? How might such expert support prove to be
deceptive or false (9)? Given the inevitable bias regarding values (motivation), power (control) and
even knowledge (expertise) inherent to any purposeful system of interest, what is the legitimacy of
such a system within wider spheres of human interests? In other words, if the system is looked at from
a different viewpoint (12), in what ways might the activities be considered as coercive rather than
benign (11)? Who (or what) is negatively affected – i.e., the ‘victims’ of the system – and what type of
representation is made on their behalf (10)? This last set of three questions is crucial in exploring
possible longer-term feedback effects (that is, systemic effects) of the situation being evaluated, as well
as evaluating its moral underpinnings.
A full CSH evaluation then provides a powerful tool for evaluating the built-in values, power structure
and knowledge-base for a system of interest, whilst not ignoring the moral basis on which the system
operates (as considered from the perspective of others who may not be beneficiaries). For a more
concise overview of the actual questions and their historic derivation from practical philosophy, readers
are directed to the original writings of both Churchman (particularly, 1979) and Ulrich (1983, 1988 and
2000).
2.2 When to use CSH
The CSH questions can be applied to any purposeful system of interest; that is, any area or situation of
concern that might be associated with human purpose, whether individual or collective concern. CSH
is not used merely for goal-oriented (or purposive) evaluation, where the purpose may be predefined
and assumed unproblematic, with the emphasis on evaluating the means, but also for evaluating the
actual purpose(s) and implications of purposeful activity with relevant stakeholder groups. As Ulrich
explains: “purposiveness refers to the effectiveness and efficiency of means or tools, purposefulness to
the critical awareness of self-reflective humans with regards to ends or purposes and their normative
implications for the affected” (Ulrich, 1983, p.328). In other words, evaluating ‘means’ ought not to be
confused with evaluating purposes or ‘ends’ (e.g., counting the number of schools does not constitute
evaluating regional or national education objectives!), and any action, however well intended, will have
consequences outside the immediate sphere of intended effects, which may (i) later impact back on the system of interest, and (ii) be unethical in the wider scheme of human activity.
Most typically, CSH is used in the arena of evaluating plans or planning processes, either as a post-hoc, summative evaluation or as a more constituent, in-situ formative evaluation. Both Churchman and Ulrich stress the importance of locating the planning process at specified levels in order to appreciate the selectivity or partiality of any purpose associated with planning. These levels of planning are based on the principle of ‘vertical planning’ originally suggested by Erich Jantsch, which I paraphrase a little from a description by Ulrich (1988):
• Goal planning takes the purpose of the mandate as given. The job is to define the exact goals
that will secure “improvement” in terms of the given purpose…
• Objective planning determines the purpose so as to secure improvement toward some overall
vision of improvement, which is assumed to be given…
• Ideal planning can drop the feasible and the realistic and challenge the soundness of the visions
implied by “realistic” purposes.
The three levels are associated respectively with administrative (or operational) practice, management
practice, and practice associated with policy design.
2.3 Why use CSH?
There are three good reasons for considering CSH questions as a template for evaluation; these reflect broader principles associated with the wider domain of what is known as ‘critical systems thinking’ and systemic intervention [7].
1. Boundary judgements capture key dimensions of any purposeful system of interest. CSH
draws in a range of factors which other evaluation approaches may inadvertently not consider.
Mainstream evaluation issues regarding ‘measures of success’ are linked with important issues
of ‘power’ and ‘knowledge’, as well as ‘externalities’, including the influence of those affected
by, but not involved with, the built-in design of such measures. Concerning a specified
system of interest, CSH is used to ascertain who important stakeholders might be (‘social
roles’), and what their particular stakeholdings (‘role-specific concerns’) and stakes (‘key
problems’) relate to. Applying this framework of inquiry reveals important assumptions and
premises underlying entities being evaluated. These are often important potential sources of
underlying ‘failure’ in performance.
2. Value judgements are made transparent. CSH questions are asked in an ‘is/ought’ mode
thereby ensuring a continual ethical alertness to the process of evaluation. Responding to CSH questions prompts important reflection and triggers conversation around various aspects of situational change. CSH questions can be used on a monological basis, as a
reflective analytical tool, or as a dialogical tool, for generating discussion amongst
stakeholders around planning issues. Whilst the mystique of ‘evaluation’ often interferes with
stakeholders’ engagement with evaluation, CSH makes the role of the evaluator transparent in
the process of evaluation thereby helping with the demystification of the evaluation process.
3. Securing improvement provides the driving principle for evaluation. Purposeful systems
evaluation using CSH enshrines the notion of improved well-being as a trigger for unfolding
boundary and value judgements. Such improvement may take the form of freedom from material deprivation and/or ideological deception. CSH enables questions to be raised
regarding not only whether particular ‘goals’ are being achieved, but whether they are the
right goals to be sought after as viewed from the perspective of others, and what alternative
goals might be more appropriate. In short, CSH enables a learning approach to evaluation.
3 The Technique: doing a CSH evaluation
An evaluator needs to gain familiarity with the use of the 12 questions in a range of different ‘systems’ (entities being evaluated), each defined at the outset by some ideal-type ‘purpose’ (i.e., category
2). Skill in CSH-based evaluation arises from practical use and unfolding of CSH questions, both in ‘is’
and ‘ought’ modes, in different contexts. The technique of doing CSH varies between different
practitioners with different interests and prior experiences of using CSH or similar techniques, and
between different contexts of use. There is no prescribed methodology. The guidelines below come
from my own experience of using CSH in a range of contexts. As with any set of guidelines regarding
a technique, the suggestions are open to adaptation and critical appraisal.
1. Identify the system of interest (SoI) which you are evaluating (i.e., the plan, task, project, programme, strategy, policy etc.). Name your SoI by addressing CSH question 2, assigning a higher-order, ideal purpose to the entity being evaluated (i.e., ‘A System to…..’).
2. Reflect and make a note on your own role as evaluator in the system of interest being
evaluated. Evaluation is often part of the expert support provided by sources of expertise. Do
you consider yourself an ‘expert’ associated with the system (category 7), or more as a witness for the affected (category 10), or both, or neither? To what degree is your evaluation independent of the decision maker(s), or is there some possible compromise in the relationship which may inhibit independent appraisal? Is the evaluation a post-hoc summative one or a more process-oriented formative one?
3. For the SoI identified, attempt to locate where it fits within the three level hierarchy of
planning: (i) goal, (ii) objective or (iii) ideal planning (see 2.2 above). My own preferred
vocabulary is whether the SoI operates at (i) operational/ administrative, (ii) management, or
(iii) policy design level of planning. At the management and administrative levels of planning, purposes are expressed progressively more specifically.
4. Focusing on the SoI, and its underlying purpose, identify associated stakeholders representing
beneficiaries, decision makers, experts, and witnesses. Provide examples of representative
individuals or groups associated with each source of influence for possible interview. There
will inevitably be some crossing-over of interests associated with any one stakeholder
identified. A government agency for example may claim to act in all four roles regarding a
system for improving welfare development. The key point of this activity though is to get a
general sense of which stakeholders are primarily concerned with particular role-specific
concerns. A government agency in the context of a SoI for health care provision in the
United States might primarily represent the ‘witness’ category, whereas in the United
Kingdom a similar agency might primarily represent the ‘decision maker’ category.
Depending on your capacity and the resources available to you, the evaluation might be further undertaken either through your own reflection, monologically, using written resource material such as reports or, preferably, dialogically, using conversations with stakeholders
themselves. Often a mixture of both approaches is used. Indeed, identifying relevant
stakeholder groups represents in effect a first stage in monological appraisal before dialogical
appraisal might be undertaken.
5. Monological: Build up a picture of the SoI through addressing CSH questions in a systematic
manner, beginning with questions of purpose in the ‘ought’ mode. My own preferred sequence
of questions for unfolding the SoI is: 2, 1, 3; 5, 4, 6; 8, 7, 9; and 11, 10, 12 (see Fig. 2). For each question, critique the ‘is’ against the ‘ought’, making notes of your reflections (possibly
using the template shown in Fig. 1).

Fig 2: Unfolding sequence of CSH questions

Sources of motivation: 1 Beneficiary/client, 2 Purpose, 3 Measure of improvement
Sources of control: 4 Decision-maker, 5 Resources, 6 Decision environment
Sources of knowledge: 7 Expert, 8 Expertise, 9 Guarantee
Sources of legitimation: 10 Witness, 11 Emancipation, 12 Worldview
It is advised that this sequence of unfolding be first undertaken in the normative or ‘ideal/
ought’ mode, followed by the descriptive ‘actual/ is’ mode, before then critiquing the ‘ought’
with the ‘is’.
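To make the bookkeeping of this monological step concrete before moving on to the dialogical step, the following sketch (purely illustrative, not part of the chapter) encodes the preferred unfolding sequence and a Fig. 1-style recording structure; the placeholder notes stand in for the evaluator's qualitative reflections.

```python
# Purely illustrative (not from the chapter): the preferred unfolding sequence
# for the 12 CSH questions, and a blank Fig. 1-style recording structure with
# slots for 'ought', 'is' and a critique note per category.

UNFOLDING_SEQUENCE = [2, 1, 3, 5, 4, 6, 8, 7, 9, 11, 10, 12]

CATEGORY_NAMES = {
    1: "Beneficiary", 2: "Purpose", 3: "Measure of improvement",
    4: "Decision-maker", 5: "Resources", 6: "Decision environment",
    7: "Expert", 8: "Expertise", 9: "Guarantee",
    10: "Witness", 11: "Emancipation", 12: "Worldview",
}

def blank_recording_table():
    """One entry per CSH category, to be completed in 'ought' then 'is' mode."""
    return {n: {"category": CATEGORY_NAMES[n], "ought": None, "is": None, "critique": None}
            for n in CATEGORY_NAMES}

table = blank_recording_table()

# First pass: work through every question in the 'ought' (ideal) mode...
for n in UNFOLDING_SEQUENCE:
    table[n]["ought"] = f"notes on what ought to be the {CATEGORY_NAMES[n].lower()}"

# ...second pass: the same sequence in the 'is' (descriptive) mode, after
# which each 'is' entry can be critiqued against its 'ought' entry.
for n in UNFOLDING_SEQUENCE:
    table[n]["is"] = f"notes on what is the {CATEGORY_NAMES[n].lower()}"

print(table[2])
```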
6. Dialogical: Design an interview questionnaire for each of the key stakeholder groups
identified. Focus the inquiry on issues regarding the purpose of the SoI in focus. The
questionnaire can be designed around CSH questions in two ways. In either way, it is
important that the terminology used in asking the questions is adapted for the particular
context in which you are working. Firstly, the questionnaire might be structured to
systematically unfold a perspective of the SoI from each stakeholder group through adapting
all 12 CSH questions in the same unfolding sequence as suggested in Figure 2. Alternatively, you might like to start your conversation with the ‘role-related concerns’ associated with the particular stakeholder group that you are addressing. It may be that, given the context of the evaluation, a limited number of more specific role-related questions is all that is required, with these questions changing in relation to the different stakeholder roles being questioned. In this way, a composite evaluation might be gradually established.
7. The ‘final’ evaluation will then need to be written up in a clear narrative form. Simply presenting 12 sets of critiques will not make much sense to anyone other than the evaluator(s). In writing a
narrative, it is advised that, (i) your own role as evaluator is clearly registered (i.e., which
views are yours and which views are assumed?); a useful device for this, though often
uncomfortable amongst evaluators, is to write in the first person singular (using terms like ‘I’
and ‘in my view..’) and to avoid any pretence towards making scientific judgements; (ii)
reference to a normative ‘ought’ is clearly explained (and open to challenge); and (iii)
crucially, you present your evaluation as an invitation for further comment and deliberation.
Evaluation using CSH, whilst sometimes done in a summative post-hoc context, is an
essentially iterative learning process. A key task is to engage stakeholders in a continual
reflective learning cycle around the system of interest in order to develop a sense of mutual
development of purposeful collective activity rather than an ‘inspection’.
4 Case Study: natural resource management
The notes below are a brief summary of an extensive evaluation exercise made during fieldwork in
Botswana in the mid 1990s. The aim of these notes is to briefly illustrate the techniques employed
rather than to detail the substantive outcomes. Further detailed reference to the process and outcomes
of this evaluation can be sought from [8]. The notes are ordered in the same sequence of technique
stages outlined in the previous section. The category numbers referred to are CSH categories
illustrated in Table 1.
4.1 Identifying the system of interest
Botswana is often cited as an African economic success story. Economic planning has been based
principally on the trickle-down strategy of using revenue from a rich source of non-renewable
diamonds to finance public sector expansion and improvements in rural infrastructure including
provision of health, education, agriculture and communications. The impact of planning renewable
natural resource-use is less impressive, as evidenced by persistent high levels of rural poverty amidst a
diminishing and degrading stock of communal (as against privatised) natural resources.
Since the early 1990s, considerable attention has been given to promoting participatory planning in
less-developed countries as a means of poverty alleviation and protection of the natural environment.
In Botswana, participatory planning was being extensively piloted as a means of natural resource-use
appraisal in rural areas during the 1990s with the support of donor agencies and the national
government.
The situation of interest to me was the role of participatory planning in rural development. My system
of interest (SoI) for evaluation might be simply phrased as follows: A system to enhance natural
resource-use appraisal (NRUA) through participatory planning for assisting rural poverty alleviation
and protection of the natural environment in Botswana.
4.2 Role of evaluator
My own role as an evaluator was closely associated with both categories 7 (‘expert’) and 10 (‘witness’)
relating to the SoI described above. I was not commissioned or paid by any stakeholders associated with the system of interest, and so can claim a fair degree of independence. My own source of support derived from the UK Economic and Social Research Council, which financed my fieldwork as part of a wider package of support for doctoral studies. The reports produced were written and
presented to the stakeholder representatives without prior conditions.
In relation to the SoI, the evaluation was intended to be more ‘formative’ than ‘summative’, as my
input became part of a wider on-going appraisal of participatory planning in Botswana.
4.3 Level of planning
Three separate on-going projects were chosen for evaluation:
(i) Participatory Rural Appraisal (PRA) Pilot Project
(ii) Natural Resource Management Project (NRMP)
(iii) Botswana Range Inventory & Monitoring Project (BRIMP).
The projects successively represent the three progressively wider domains of planning: The PRA Pilot
Project was oriented towards administration (‘goal planning’); NRMP was oriented towards project
management (‘objective planning’); and BRIMP was oriented more specifically towards policy design
(‘ideal planning’).
Whilst occupying different levels of planning, each project shared important features: firstly, their
prime objectives are social and environmental rather than economic; secondly, significant direct or
indirect non-governmental sources of expertise (NGOs, private consultants and parastatals) – reinforced with donor support – were commissioned; and thirdly, each project promotes the use of ‘participatory
techniques’.
In effect there are three systems of interest being evaluated, each nested within a particular level of planning.
4.4 Stakeholder groups
Four institutional types were identified as representing the generic social roles of beneficiaries, decision-makers, experts, and witnesses associated with NRUA in Botswana. These are, respectively,
government departments, donor agencies, consultants, and non-government organisations (NGOs) (see
Table 2). Their generic roles are not mutually exclusive, but whilst there might be considerable
overlap in roles and role-related concerns, it was useful to have this first mapping of stakeholders as a
basis for starting a more detailed evaluation of NRUA associated with each project.
Table 2: Stakeholder map associated with natural resource-use appraisal in Botswana

Institutional type and primary role in NRUA projects:

Government department (Beneficiary): improved NRUA practice for better delivery on, and design of, government policy.
Donor agency (Decision-maker): providing resources efficiently for effective NRUA practice.
Consultancy, academic or private business (Expert/professional): ensuring impartial production of knowledge for sustainable and ethical natural resource use.
NGO (Witness): representing interests of impoverished natural resource users, future generations, and non-human nature.
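For evaluators who keep their notes in structured form, the stakeholder map of Table 2 could be encoded as simple data; the sketch below is a hypothetical illustration and not part of the original study.

```python
# Hypothetical encoding of the Table 2 stakeholder map (illustration only).

STAKEHOLDER_MAP = {
    "Government department": {
        "csh_role": "Beneficiary",
        "primary_role": "improved NRUA practice for better delivery on, "
                        "and design of, government policy",
    },
    "Donor agency": {
        "csh_role": "Decision-maker",
        "primary_role": "providing resources efficiently for effective NRUA practice",
    },
    "Consultancy": {
        "csh_role": "Expert",
        "primary_role": "ensuring impartial production of knowledge for "
                        "sustainable and ethical natural resource use",
    },
    "NGO": {
        "csh_role": "Witness",
        "primary_role": "representing interests of impoverished natural resource "
                        "users, future generations, and non-human nature",
    },
}

def stakeholders_for(csh_role: str):
    """Return the institutional types mapped to a given CSH social role."""
    return [name for name, info in STAKEHOLDER_MAP.items()
            if info["csh_role"] == csh_role]

print(stakeholders_for("Witness"))   # -> ['NGO']
```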
Whilst impoverished natural resource users would clearly represent the ultimate ‘ideal’ or intended
beneficiaries (see Table 3 below), for the purpose of identifying actual stakeholders, it was considered
more manageable within the constraints of the evaluation being undertaken to deal with immediate
beneficiaries of NRUA whilst keeping in check the assumptions that (a) government would make appropriate representation of such stakeholders and, if not, (b) NGOs would ensure such
representation.
4.5 Monological appraisal
As a first step in unfolding the normative use of participatory planning for NRUA practice in Botswana, an
initial ‘normative mapping’ (or ‘ideal mapping’) of the system of interest was undertaken based
principally on background reading of the situation. Table 3 illustrates an initial pass through normative
mapping using the 12 CSH categories in the sequence described in Figure 2.

Table 3: Normative (‘ideal’) mapping of natural resource-use appraisal in Botswana

Motivation:
  Beneficiary: rural poor, future generations and non-human nature.
  Purpose: to improve natural resource-use planning in addressing needs of the vulnerable.
  Measure of improvement: indices of rural poverty alleviation and enhanced condition of natural resources.

Control:
  Decision-maker: communal resource users.
  Resources: necessary components to enable NRUA, including (i) natural, (ii) project/finance, and (iii) human resources.
  Decision environment: (i) natural environment not required as resources, (ii) interest groups affected by the project, and (iii) expertise not beholden to the decision maker.

Expertise:
  Expert: communal resource users informed by natural and social scientists and other sources of relevant knowledge/experience.
  Expertise: (i) technical and experiential know-how and knowledge, including rural peoples’ knowledge; (ii) interdisciplinary and intersectoral facilitation skills; and (iii) social and environmental responsibility.
  Guarantee: avoidance of incompetent expertise and of the false guarantors of ‘scientism’ (sole reliance on objective and statistical ‘fact’), ‘managerialism’ (sole reliance on facilitating communication), and ‘populism’ (allowing the loudest collective voice as sole guarantor).

Legitimation:
  Witness: collective citizenry representing the interests of all affected by natural resource use, both local and global, and present and future generations.
  Emancipation: freedom from (i) material deprivation (poverty), (ii) deception/ideological coercion, and (iii) degradation of the natural environment.
  Worldview: NRUA depends on continual dialogue between the involved and the affected, with attention to improved human and natural well-being.
In the ideal world of purposeful human activity, the roles of beneficiary, decision maker, expert and witness are closely interrelated and at one with each other. For natural resource-use appraisal, a system of self-organisation and appraisal amongst conscientious natural resource users might therefore be considered the ideal situation.
This initial ideal mapping provided a benchmark for developing further iterations of normative
mapping at each level of planning, as well as providing the basis to critique ‘descriptive mapping’
when evaluating each of the three projects.
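The mechanics of that critique can be pictured as a category-by-category comparison of the two mappings; the sketch below is only an illustration of the bookkeeping involved, with invented and heavily abbreviated entries rather than the rich qualitative notes used in the actual fieldwork.

```python
# Illustration only: critiquing a descriptive ('is') mapping against a
# normative ('ought') mapping, category by category. Entries are invented
# and heavily abbreviated; the real mappings were rich qualitative notes.

normative = {   # 'ought' mode (cf. Table 3)
    "Beneficiary": "rural poor, future generations, non-human nature",
    "Purpose": "improve natural resource-use planning for the vulnerable",
}

descriptive = {  # 'is' mode, built up from interviews and grey material
    "Beneficiary": "local government extension officers",
    "Purpose": "alleviate perceived rural social inertia",
}

def critique(ought: dict, is_: dict) -> dict:
    """Record, per category, the tension between 'ought' and 'is'."""
    notes = {}
    for category in ought:
        ought_entry = ought[category]
        is_entry = is_.get(category, "no evidence gathered yet")
        notes[category] = f"ought: {ought_entry} | is: {is_entry}"
    return notes

for category, note in critique(normative, descriptive).items():
    print(f"{category} -> {note}")
```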
4.6 Dialogical appraisal
The stakeholder mapping and normative mapping were also useful as a basis for designing the format
of semi-structured interview schedules for different stakeholder interviewees. These interview
schedules were kept deliberately open in order to allow respondents to develop their thinking/
reflection during conversation. Rather than systematically going through each of the 12 CSH
questions (see Fig. 2) for each interview in both the ‘ought’ and ‘is’ modes (which in some
circumstances might be appropriate), each schedule for this evaluation was customised according to (i)
the perceived stakeholder role (beneficiary, decision-maker etc.), (ii) the particular level of planning/project being focused upon (often, interviewees would have a stakeholding in several projects at the same time, though it was important to record level-specific notes where appropriate), and (iii) information
arising from prior interviews with other stakeholders. After introducing the focus of evaluation in
terms of participatory planning for NRUA, each schedule began with questions relating to what the
stakeholder considered to be their main role, and their main concerns and key problems in fulfilling
their role. Time and interest permitting, more general questions were then asked about relationships
with other stakeholders, and an impression of what the roles, concerns and problems associated with
these stakeholders might be. The CSH ideal mapping provides possible prompts in developing the
conversation throughout the interview (e.g., see Table 4).
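By way of illustration only (this is not a tool used in the study), a role-customised schedule along these lines might be assembled as follows, with opening questions on role, concerns and key problems and reserve prompts paraphrased from Table 4 below.

```python
# Hypothetical sketch of assembling a role-specific interview schedule
# (not a tool used in the study). Opening questions probe the respondent's
# own role, concerns and key problems; role-specific prompts (paraphrased
# from Table 4) are kept in reserve to develop the conversation.

ROLE_PROMPTS = {
    "Beneficiary (government department)":
        "How can centralised extension roles be reconciled with "
        "decentralised imperatives for appraisal?",
    "Decision-maker (donor agency)":
        "How can ownership be transferred to national and local agencies "
        "whilst maintaining some control over natural resource intervention?",
    "Expert (consultancy)":
        "How can impartial production of knowledge be ensured whilst "
        "validity criteria for appraisal output are changing?",
    "Witness (NGO)":
        "How can conflicts of interest be avoided when NGOs answer "
        "primarily to government and donors?",
}

OPENING_QUESTIONS = [
    "What do you consider to be your main role in participatory NRUA?",
    "What are your main concerns in fulfilling that role?",
    "What key problems do you face in fulfilling that role?",
]

def build_schedule(role: str, project: str) -> list[str]:
    """Return an ordered list of questions for a semi-structured interview."""
    schedule = [f"(Focus: participatory planning for NRUA in {project})"]
    schedule += OPENING_QUESTIONS
    schedule.append(ROLE_PROMPTS.get(role, "How do you relate to the other stakeholders?"))
    return schedule

for q in build_schedule("Witness (NGO)", "the PRA Pilot Project"):
    print("-", q)
```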
In recording feedback from such conversations, it was useful to continually update the impression of
what ‘is’ the situation with respect to each level of planning. In other words, the descriptive mapping
is a continually evolving exercise during conversations and any associated reading of informal ‘grey’ material (e.g., internal reports, memos, discussion documents etc.) that is revealed and made
available from such conversations. At the same time, critiques were emerging from the descriptive
mapping. It was important to keep a record of the developing critique as this became the basis for
reporting back. There is not the space here to look at any actual descriptive mapping associated with
the projects, though Boxes 1-3 give some indication of the final critiques that emerged from the
mapping exercise.
Table 4: Sample stakeholder role-specific questions associated with NRUA in Botswana

Primary stakeholder role in NRUA projects, with initial generic prompts for further inquiry:

Beneficiary (government department): How to reconcile the tradition of centralised roles for government extension officers (supply) with decentralised imperatives for appraisal (demand from ultimate intended beneficiaries)? Is appraisal undertaken at cross purposes (i.e., supply not addressing demand)?

Decision-maker (donor agency): How to transfer ‘ownership’ and control to national and local agencies whilst maintaining some control over natural resource intervention (‘global commons’)? Is the decision ‘environment’ in which appraisal is undertaken properly understood and clear?

Expert/professional (consultancy): How to ensure impartial production of knowledge whilst changing validity criteria for appraisal output? Is ‘participation’ enough as a guarantee of good knowledge?

Witness (NGO): How to avoid conflict of interests given that NGOs are generally answerable primarily to government and donors? Is there a risk of losing representation of intended beneficiaries?
4.7 Reporting
The three projects were evaluated over a relatively long period of time (2 years), with a substantial
number of interviewees (78), many of whom (24) were interviewed on 2 separate fieldwork occasions.
Along with fieldwork observation of participatory rural appraisal techniques in operation, and analysis
of a substantial amount of grey material associated with each project and level of planning, inevitably
this exercise generated a large amount of data and information to assimilate. Keeping an up-to-date
record or journal of the critique became a particularly important feature of this particular evaluation,
along with the development of a series of three successive interim reports submitted back to the
stakeholders which provided important feedback for further iterations.
Each report began with an explicit statement on (i) what I perceived were the main issues of the
evaluation, couched in terms appreciated by the stakeholders (i.e., underlying values and purpose of the
project, issues of relevant power and decision making, relevant knowledge, and moral underpinnings),
and (ii) my own role and purpose with respect to the evaluation exercise. In hindsight, it would also
have been appropriate to add (iii) a disclaimer regarding any pretence to having made a ‘scientific’
evaluation. Reporting back on a CSH-based evaluation requires transparency as well as skill in translating findings and impressions into a mutually appreciated vocabulary and narrative. A key to successful evaluation is eliciting recognition, critical appreciation, and further engagement amongst the stakeholders involved. All stakeholders were invited to comment on the interim reports
either through written submission and/or verbal communication, whether further private
communication or special discussion sessions (one exclusive seminar and one public seminar were
specially convened in Botswana for such feedback).
Boxes 1 to 3 provide very brief summaries of the final critique presented for each respective project.
Each draws together some descriptive mapping and specific critique of ‘role’, ‘role-specific concerns’ and ‘key problems’ associated with each source of influence (i.e., as derived from the template in Fig. 1).
Box 1 Participatory Rural Appraisal (PRA) Pilot Project (‘goal’ or administrative level of
planning)
Motivation critique: Predominant purpose to alleviate perceived rural social inertia. Local government extension officers were the chief beneficiaries, rewarded with facilitation skills to enable greater involvement of local people in extension work. The key measure of success for the project was centred on high levels of participation and the generation of self-help projects. Alternatively, the rural poor possibly need better access to and control over resources rather than being subject to further (effectively) top-down extension practices.

Control critique: Under a trajectory of (i) increased privatisation and fencing of communal land, resulting in further alienation of natural resources, and (ii) reduced government assistance for local development projects, rural poor livelihoods are increasingly dependent on contracts with landowners and donor support for collective projects. There is also a risk that rural peoples’ knowledge loses its independence in becoming increasingly subject to government extension practice.

Expertise critique: Participation levels amongst the rural poor in PRA exercises provide a questionable guarantee for success, in that participation levels (i) are unlikely to be sustainable if benefits are not quickly realised, and (ii) distract from a large body of empirical data and experience indicating a significant correlation between rural poverty and land-fencing policy since the mid 1970s.

Legitimation critique: Dominant underpinning belief that benevolent government (through a tradition of generous handouts and transfer-of-technology projects) has been responsible for generating rural social inertia, hence the need for government to step back and allow ‘development from within’. Possible further marginalisation of the rural poor through not addressing the perceived root cause relating to control of and access to land.
Box 2 Natural Resource Management Project (NRMP) (‘objective’ or management level of
planning)
Motivation critique: Participatory techniques considered useful for triggering multisectoral planning to counter the problem of intersectoral conflict around natural resource planning. Key beneficiaries are the project managers responsible for eliciting support/resources from different line Ministries (e.g., Wildlife & Tourism, Agriculture, Water Affairs, Local Government). The key measure of success is the number of community-based natural resource management (CBNRM) projects being generated. The actual long-term impact of CBNRM on rural poverty reduction and the natural environment is questionable.

Control critique: CBNRM ‘projects’ become the currency for rural development, each controlled by a project manager. Short-term ‘projects’ elicit funding support from donor agencies, allowing government to divert resource support away from local rural development.

Expertise critique: Project management requires multidisciplinary expertise and skills in facilitation. Participatory techniques involving rural participants are appreciated as a useful trigger for intersectoral collaboration and communication between traditional disciplinary experts. Rural peoples’ knowledge (RPK) is also regarded as a useful check on professional judgements rather than as a prime driver for rural development initiatives.

Legitimation critique: Dominant underpinning belief that appropriate expertise (supported by evidence from RPK through participatory techniques) ought to drive rural development, rather than traditional dependence on civil service bureaucratic functions which inevitably create a closed ‘silo’ mentality. Possible conflict with local understandings of the need for greater autonomy and control over development amongst rural participants rather than project managers.
Box 3 Botswana Range Inventory & Monitoring Project (BRIMP) (‘ideal’ or policy domain
level of planning)
Motivation critique: Predominant purpose to instil longer-term co-ordinated planning in tune with national economic development planning, to address problems of piecemeal development. Immediate beneficiaries are the policy advisors who wish for greater responsiveness to market pressures whilst wishing to avoid piecemeal planning. Will this benefit the rural poor?

Control critique: Commoditised resources provide the most appropriate means for economic or econometric planning. Thus fencing of communal land, privatising water supply, project-oriented development, and having rural participants on-tap for consultations during monitoring and evaluation efforts are all important measures of control. This risks further disenfranchising the rural community.

Expertise critique: The central guarantee for ensuring properly co-ordinated efforts is purposive monitoring and evaluation using econometric indices based on criteria of efficiency and effectiveness in terms of generating economic wealth from natural resources. Participatory techniques using rural peoples’ knowledge are regarded as a means of ground-truthing or checking information arising from more technically oriented surveillance systems such as remote sensing.

Legitimation critique: Belief that free-market determinism using econometric devices applied to natural resource use provides the most effective means for reducing poverty and protecting the natural environment. This possibly sidelines the Tswana tradition of democratic debate as a means of determining policy.
5 Summary: reflections on skills development
Critical systems heuristics is not a prescribed methodology. There is a wide variety of practice in the
use of CSH questions. In some circumstances, not all the questions may need addressing. Descriptive
mapping might be appropriate before, or as a trigger to, normative mapping. Ulrich himself uses CSH
in slightly different ways in evaluating two substantial planning case studies – economic planning in
President Allende’s Chile, and health systems planning for Central Puget Sound in North America
(Ulrich, 1983).
The key to developing CSH skills rests with appreciating the systems principles embodied in the tool:
(i) the idea of boundary critique, in being systemically aware (and generating systemic awareness) of,
and making explicit, the boundary judgements implicit in any human activity; (ii) appreciating your
own role and values relating to a situation of evaluation and the need for nurturing critical conversation
amongst stakeholders to develop, rather than merely protect, stakeholdings; and (iii) using CSH
evaluation to serve wider ethical interests of well-being, both social and ecological.
More specifically, I offer some practical tips in the use of CSH arising from personal experience.
(i) Practice at deploying CSH questions is the only way of developing skills and appreciating the interrogative power of the questions being asked.
(ii) Adapt the terminology to your own needs/culture, whilst retaining the essential meaning of the 12 categories.
(iii) Practise using a system of interest relevant to you personally (e.g., a domestic or work situation, activity, or proposal to which some ‘purpose’ is attached).
(iv) Be prepared to encounter moments of discomfort in using CSH. Making values
transparent is not a painless exercise, either for the evaluator or stakeholders involved
with evaluation.
Further reading
1. Churchman, C.W., The Design of Inquiring Systems: basic concepts of systems and
organizations. 1971, New York: Basic Books.
2. Churchman, C.W., The Systems Approach and its Enemies. 1979, New York: Basic Books.
3. Ulrich, W., Critical Heuristics of Social Planning: a new approach to practical philosophy.
1983, Stuttgart (Chichester): Haupt (John Wiley – paperback version).
4. Ulrich, W., Churchman’s “Process of Unfolding” – Its Significance for Policy Analysis and
Evaluation. Systems Practice, 1988. 1(4): p. 415-428.
5. Ulrich, W., Reflective Practice in the Civil Society: the contribution of critically systemic
thinking. Reflective Practice, 2000. 1(2): p. 247-268.
6. Ulrich, W., A Primer to Critical Systems Heuristics for Action Researchers. 1996, Hull:
University of Hull.
7. Midgley, G., Systemic Intervention: Philosophy, Methodology and Practice. 2000, New York:
Kluwer/Plenum.
8. Reynolds, M., “Unfolding” Natural Resource Information Systems: fieldwork in Botswana.
Systemic Practice and Action Research, 1998. 11(2): p. 127-152.
