
AISHE: the Books

These books can be downloaded free of charge: see below.

Wikipedia (Dutch):   https://nl.wikipedia.org/wiki/AISHE 

This page shows the AISHE books that offer the complete AISHE instrument. For the details of this AISHE instrument, including its applications, its history and its versions, please go to this page, which is a section of the website that describes the Management Methods developed by Niko Roorda.

Editions

1st edition (AISHE 1.0):

  • English: 2001
  • Dutch: 2001
  • Swedish: 2008, ISBN 97886135171

2nd edition (AISHE 2.0):

  • English: 2009
  • Dutch: 2012

 

Download
You can download several versions of AISHE. Here are the links:

AISHE 2.0, English

AISHE 2.0 Reporter, English

AISHE 2.0 Reporter, English (example)

AISHE 2.0, Dutch

AISHE 2.0 Reporter, Dutch

AISHE 2.0 Reporter, Dutch (example)

AISHE 1.0, Swedish

Swedish brochure

AIFSHE, English

AIFSHE, French

 

Edition for developing countries (AIFSHE):

  • French: 2013
  • English: 2013


Below is a section of Roorda's PhD dissertation, 'Sailing on the Winds of Change'. This section explains some of the fundamental decisions that had to be made when AISHE was developed.

7.2. The quality approach: Development and application of AISHE (2000 – 2009)

7.2.1. Development, structure and validation

Fundamentals
The development of AISHE started with a definition phase. The members of the DHO working group studied many literature sources. Existing models for quality management and for environmental management were compared, such as the ISO 9000 and 14000 series, EFQM, BS 7750 and EMAS. Literature about sustainable development and ESD was used, e.g. the various declarations and charters described above.

Based on these sources and on a series of discussions within the working group and with experts on sustainable development, a number of decisions were made. The first decision was related to the four roles a university can fulfill towards sustainable development: see §3.4. It was decided to focus the assessment instrument on the educational role, because this role was estimated to offer the universities’ greatest potential contribution to sustainable development in society, thanks to the ‘snowball effect’ described in §3.4.

Secondly, it was decided that the instrument to be developed should be highly comparable with existing instruments for quality management or environmental management already in use in higher education. This was an important condition, since it would ease the further integration of ESD: it would make the new tool more familiar-looking and more acceptable to universities.

Three other fundamental decisions had to be made. The following is a (long) citation from the AISHE book that was published at the end of the development process (Roorda, 2001b). The cited sources are: British Standards Institute (1992), EMAS (1993), ISO (1994) and (1996), HE21 (1999) and Expertgroep HBO (1999):

 

Decision 1: Content oriented versus process oriented criteria

“Content oriented criteria are about the concrete selection of subjects that, from a sustainability perspective, should or should not be part of certain curricula, and about guidelines for the organization management.
Process oriented criteria give information about the way in which the curricula are to be designed, and about the way in which decisions are made concerning the organization management. These are criteria on a meta level. Examples:

Decision 1

                  | Content oriented                                                                                 | Process oriented
Curriculum        | Photovoltaic cells are a part of the curriculum.                                                 | Decisions about sustainable subjects in the curriculum are made explicit.
Vision            | The use of hen batteries is not compatible with sustainable development.                         | The organization has a vision on ethical questions relevant to its own professional fields. This vision is updated regularly.
Staff development | Engineering teachers receive supplementary schooling in environment oriented product development. | There is a policy and a budget for staff development in sustainable development.

Considerations
The advantage of content oriented criteria is that they offer clarity: clarity about the product that is to be delivered (the educational content) and about the process (e.g. curriculum development, staff development).
At the same time, this clarity is a disadvantage, for various reasons:

  • They are absolute: they leave no space for the individual responsibility of an education institute (or a part of it);
  • Fundamentally, they are not generally acceptable: they mirror the subjective opinion of the designer of the criterion, and so they carry the risk that others do not agree with them. If so, at best a never-ending yes-no discussion could arise;
  • They are time related and static: they risk becoming obsolete because of new developments. If, for instance, a new technical invention were made that rendered photovoltaic cells technically obsolete, the criterion would become obsolete as well.

Although process oriented criteria carry the risk of vagueness, this does not have to be a serious disadvantage. For instance, the above-mentioned criterion about a vision on ethics entails that educational organizations in which animal welfare is a relevant subject are not allowed to avoid taking a position on hen batteries.

Decision
Actually, the point of adopting process-oriented criteria is that, if the processes are formulated carefully and executed carefully as well, it may be expected that the resulting contents will be acceptable too.
On the basis of this point, the process-oriented principle has been chosen for the AISHE method.”

 

Decision 2: Quantitative versus qualitative criteria

“Criteria can be formulated as quantitative measuring data, or in a less precise, more descriptive, qualitative way.
In the British “Higher Education 21” programme (“HE21”) a large number of quantitative indicators was designed. Some examples are shown in the table below, in the column “Quantitative”.

Decision 2

                                  | Quantitative                                                                        | Qualitative
Curriculum                        | Percentage of students participating in modules that are related to sustainability  | The relation between sustainability aspects in the professional qualifications and the curriculum has been formulated explicitly.
External effect                   | Number of sustainability related conferences, organized in the current year         | The organization contributes actively to enlargement of knowledge and insight about sustainable development in society and to the public opinion.
Internal environmental management | CO2 emission per FTE per annum                                                      | Annually an environmental report is published.

Considerations
Using quantitative criteria is only meaningful if the indicated quantities can be defined and measured exactly, and if there is an objective method to agree upon limits for them.
This is problematic for all of the quantitative examples above.

  • The mentioned percentage of students, for example, can only be measured if it is possible to determine for each module whether it is related to sustainability. But how can this be determined? According to some, nuclear energy is essential for a sustainable energy system, while others dispute this; does a module on nuclear energy count towards the percentage?
  • How does one determine whether a certain conference is sustainability related? Is, say, a conference on waste processing sustainability related?
  • For which kinds of CO2 emission will the educational institute be held accountable, and which not? And how exactly will the measurements be done to establish the numbers?

On top of all this, for all the above examples the choice of a limit value is subjective and normative, and so each measured quantity will always be questionable. In other words, the disadvantage of quantitative criteria is that they suggest a fictitious level of exactness that cannot be achieved in reality.

The “right” percentage of credits
A characteristic example of this fictitious exactness is the discussion, still ongoing in some places, about the “right” percentage of the curriculum that should be dedicated to sustainable development (expressed as a percentage of the credit points). According to some this should be 5%; others claim the optimal value is higher or lower. In fact, every concrete percentage is fundamentally wrong. In the first place because of the fictitiousness of the exactness: does a module handling, say, environmental law fall within this percentage of sustainable curriculum parts? And what about the earlier mentioned module on nuclear energy?
In the second place, quite a lot of modules have little or nothing to do with sustainability when viewed on their own, but are very relevant for sustainability when viewed in a larger framework. A characteristic example is a module in a mechanical engineering course dealing with connection technologies (gluing, screwing, welding, clamping, etc.): on their own these techniques are not clearly more or less sustainable. But when a product consisting of several components is to be designed, subjects appear such as design for disassembly, reuse and recycling, which are very relevant for sustainability; and a thorough knowledge of connection technologies contributes to a good design process. Such a module does not belong in a direct sense to the percentage of sustainable curriculum parts, but it certainly does in an indirect way.

Decision
Many aspects of the level to which sustainability has been integrated in education and in the organization are fundamentally inexact in nature. This does not imply that they cannot be measured; but usually they have to be expressed on an ordinal scale, instead of a quantitative interval or ratio scale.
Therefore, a qualitative approach has been adopted for the AISHE method, and the results are expressed on ordinal scales.”
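The choice for ordinal scales has a practical consequence for how results can be processed: ordinal values can be ranked and summarized with a median, but not meaningfully averaged, because the "distance" between stages is undefined. A minimal sketch in Python, borrowing the five ascending stage labels of the EFQM/INK five-stage model mentioned in this chapter (the function and variable names are invented for illustration, not part of the AISHE instrument):

```python
# Five ascending development stages, as in the EFQM/INK five-stage model:
# an ordinal scale (ranked, but without fixed distances between values).
STAGES = ["activity oriented", "process oriented", "system oriented",
          "chain oriented", "society oriented"]

def median_stage(ratings):
    """Summarize ordinal ratings by their median rank.

    A median is valid for ordinal data; an arithmetic mean is not,
    because the 'distance' between stages is undefined.
    """
    ranks = sorted(STAGES.index(r) for r in ratings)
    return STAGES[ranks[len(ranks) // 2]]

# Hypothetical ratings of one criterion by five assessment participants:
ratings = ["process oriented", "system oriented", "process oriented",
           "chain oriented", "process oriented"]
print(median_stage(ratings))  # -> process oriented
```

In an actual assessment the participants discuss until they reach consensus, but the sketch shows why the results live on an ordinal scale rather than an interval or ratio scale.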

 

Decision 3: Prescriptive versus descriptive criteria 

“Criteria can be designed as obligatory prescriptions, as is usual with many of the customary instruments for quality and environmental management. In the left column of the table below, a number of examples are shown, derived from ISO 14001, EMAS and BS7750. The alternative is a descriptive character. This may take the form of an ascending progression of descriptions, together constituting an ordinal scale; an organization can compare itself with this scale and determine which stage of organization development it is in. A good example of this is the EFQM-INK method: for a series of criteria, five “stages” are distinguished. The table below shows some examples in the right column, taken from the HE version of the EFQM model (Expertgroep HBO, 1999).

Decision 3

                  | Prescriptive                                                                                                                                                           | Descriptive
Staff development | The organization shall (…) require that all personnel whose work may create a significant impact upon the environment, have received appropriate training. (ISO 14001: 4.4.2) | Stage 1: Staff counseling, training and development are dependent on individual initiatives. (EFQM-HE: 3.5)
Policy            | The company environmental policy shall be adopted and periodically reviewed. (EMAS: appendix 1, A.2)                                                                   | Stage 3: The policy is evaluated on the basis of a systematic analysis (...). (EFQM-HE: 2.4)
Communication     | The organization shall establish and maintain procedures for receiving (…) communications (internal and external) from relevant interested parties. (BS7750: 4.4.1)    | Stage 4: Interested parties are actively involved in discussions about policy development and implementation. (EFQM-HE: 2.3)

Considerations
The use of prescriptive criteria has several disadvantages.
A main problem is that prescribing criteria is normative. True enough, the actual design of sustainable education is fundamentally normative, because the goals and the contents are strongly related to the personal views of those responsible on the ideal future society and to their ethical norms. But exactly because of this, it is impossible to construct a measuring instrument based on normative prescriptions and still achieve general acceptance.
Besides, imposing external obligatory criteria would contradict one of the most important cornerstones of sustainable development: the individual responsibility of each person and institution involved in the process of sustainable development.
Another problem with forceful prescription is of a more practical nature. Prescriptive criteria offer exactly two possible states: either the organization satisfies the requirements, or it does not. Such an on-off criterion makes it impossible to describe a situation in any detail. So such a measuring instrument is not very discriminating: it will not offer much insight into the situation in an organization, and it will not offer many starting points for setting policy priorities.
A final argument is that it is not always evident that an educational organization has to strive for the highest quality demands in all respects: the maximum is not always the optimum. An organization may deliberately decide to aim at another stage for certain aspects, on the basis of internal or external reasons. If a measuring instrument were based on on-off prescriptions, an organization doing so would automatically disqualify itself.

Decision
Criteria for sustainable education should place the responsibility for choosing limits with those who design and implement education, i.e. with individual organizations (universities or parts of universities).
Besides, criteria should be practically applicable and contribute to the organization's policy.
For these reasons it was decided that AISHE should consist of descriptive criteria, enabling assessment results to be expressed in more than two possible values.”
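The descriptive, multi-valued character of the criteria can be sketched as a simple data structure: each criterion carries an ascending series of stage descriptions, and an assessment records which stage fits the organization best. This is only an illustrative sketch; the criterion name and stage texts below are invented, not taken from the actual AISHE instrument:

```python
# Each criterion maps to an ascending list of stage descriptions (an ordinal
# scale); an assessment result is a stage number plus its description, so a
# result can take more than two values. All texts below are invented examples.
criteria = {
    "Staff development": [
        "Staff development depends on individual initiatives.",        # stage 1
        "Incidental training on sustainability themes is offered.",    # stage 2
        "There is a policy and a budget for staff development.",       # stage 3
        "Staff development is evaluated and systematically improved.", # stage 4
        "Staff contribute to knowledge development outside the organization.",  # stage 5
    ],
}

def record_assessment(criterion, stage):
    """Record the stage (1-5) that best describes the organization."""
    descriptions = criteria[criterion]
    if not 1 <= stage <= len(descriptions):
        raise ValueError(f"stage must be between 1 and {len(descriptions)}")
    return {"criterion": criterion, "stage": stage,
            "description": descriptions[stage - 1]}

result = record_assessment("Staff development", 3)
print(result["description"])  # -> There is a policy and a budget for staff development.
```

Compared with an on-off prescription, such a structure lets an organization see where it stands on the scale and deliberately aim at a stage other than the maximum, as the cited text argues.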

Figure 32. From EFQM to AISHE

Regarding these three fundamental decisions, there was an excellent candidate to use as a starting point for the development of the assessment instrument: EFQM-HE, the higher education version of the EFQM model (Expertgroep HBO, 1999), so it was decided to base AISHE on this model. All in all, the background of the new instrument is shown in figure 32. As a name for the instrument, ‘Auditing Instrument for Sustainability In Higher Education’ was chosen, or in short: AISHE. After some years of applying AISHE it became clear that the term ‘audit’ met with resistance at some universities, so it was later decided to use the term ‘assessment’ instead, which has the same initial. This is why the term ‘audit’ is used in older reports, e.g. in chapter 6 of this dissertation, and ‘assessment’ elsewhere.

