Introduction
The quality of open and distance learning (ODL) varies, like that of any
other form of education. Its quality (however you define it) can be the
result of a variety of factors, both internal and external to an ODL
organization: for example, the levels of skills and expertise of staff,
the amount of resources available, weak or strong leadership, the
efficiency of its administrative systems, or the communications
infrastructure in a country. An aspect receiving growing attention is
how ODL institutions, whatever their structure, context or
circumstances, manage their own quality. All institutions providing ODL
will have some existing systems and procedures for ensuring the quality
of what they do. Most are concerned to achieve the highest possible
quality they can, at least to the threshold of equivalence with
conventional provision and preferably surpassing it. But not all have
addressed the management of quality within their organizations as
systematically as they need to. Some continuing failures of quality are
avoidable. Procedures for ensuring quality can be ad hoc,
piecemeal, unsystematic, too reliant on individual discretion, and
standards of practice can be unnecessarily inconsistent and variable.
In some cases, an institution's claims to quality fail to match the
performance observed or experienced by those inside and outside of it
(learners, tutors, course developers, despatch clerks, sponsors,
professional bodies and policy-makers).
So how can an institution providing ODL manage its own quality
effectively? How can it improve the quality of the ODL it offers? These
are large and difficult questions. This paper does not provide easy
answers to them but seeks to examine some aspects of managing quality
in general and of quality assurance in particular.
How Can Quality Be Managed?
The adoption of 'quality' as an organising principle for ODL systems
and institutions seems to offer considerable potential for mobilising
people and resources. It enables the various policy and procedural
strands relating to the management of quality to be brought together at
an institution-wide level, within a structured framework and in a
systematic way. It presents a guiding value ('quality') which few
would dispute. How easily does this principle translate into practice?
How can it be achieved?
Defining quality in an institutional context
To begin with, notions of quality in ODL will differ. Quality means
different things to different stakeholders (course coordinators,
students, media producers, local tutors) and also stems from their
varying conceptions of quality:
This is not a different perspective on the same thing, but
different perspectives on different things with the same label (Harvey
and Green, 1993, p. 10).
Quality is not value-free. It is a social and political construct,
not a predetermined or static entity, and is therefore open to
continual re-examination and re-interpretation. Wide debate is needed
to develop a shared discourse and language about it as a precursor to
adopting specific approaches, though this will also highlight
conflicting ideologies. Definitions arrived at need to link the
acceptable generalities to more uncomfortable, concrete interpretations
of quality. So any institutional plan has to be responsive to the
diverse legitimate views across the system as well as forming a clear
strategy for action, which is no easy task. However, it is clear from other
fields of activity (Juran, 1989) that institution-wide action is needed
at a strategic level if lasting and significant improvements in quality
are to be achieved.
Aspects of quality
'Quality' in ODL is most often judged in terms of the learning
materials, whatever the medium. These are the pivot on which the whole
learning enterprise turns. However, a course is more than just the
materials; it is also the totality of experience of the learner. Since
the purpose of an ODL provider is to create the conditions for
learning, its success depends on how well the course production,
delivery and student support sub-systems function, and how well they
all integrate in operational terms. Excellent materials are useless if
not delivered to students; poor materials have limited value even if
delivered on time. Underpinning the creation of products and provision
of services are processes and operations which are not very visible
unless they fail. They receive less attention than they deserve and are a
key area of focus in improving quality in ODL.
A framework for managing quality in ODL has therefore to accommodate all aspects of it, for example:
- Products: the learning materials and courses, media materials, the
output (e.g. number of graduates, assessment outcomes such as
examination pass rates, performance of competencies or practical
skills);
- Services: registration and advisory services, tutoring, and
counselling, feedback and guidance on learning (assignments), support
for progress as a learner, career advice, provision and management of
study centres;
- Processes that support both of the above: delivery systems,
record keeping, scheduling, warehousing and stock control, quality
assurance procedures;
- General philosophy: policy and mission statements, ethos and
culture of the organisation, mottos (such as 'Nothing but the best' as
at IGNOU, or 'Students first'), attitudes of staff and levels of
commitment, self-images presented.
Approaches to quality
The approaches used for managing quality in ODL reflect those
developed for business and industry, for example, quality control
(Guri, 1987), quality assurance (Freeman, 1991; Lewis, 1989) and total
quality management (McIlroy and Walker, 1993). All are aimed at
managing an enterprise in order to achieve a defined standard of
performance for activities, whether the notions of quality revolve
around conformance to specifications, fitness for stated purposes, or
service to fellow-workers as internal clients or students as external
ones. Such applications are not always easy to use in educational
contexts and are often resisted by academic staff who come from a
different culture, but they can offer useful strategies and new ideas
for improving quality (as illustrated in recent collections of
conference papers on quality in ODL, e.g., Atkinson and others, 1991;
and Tait, 1993). In transferring these approaches, care needs to be
taken in two respects: firstly, not to adopt them uncritically, and
secondly, to find an acceptable balance between their utility and their
potential for constriction.
An increasingly used approach to managing quality in education is
quality assurance. This is the set of activities or procedures that an
organization undertakes to ensure that standards are specified and
reached consistently for a product or service. Its goal is to create
reliable systems by anticipating problems and designing procedures to
avoid as many errors and faults as possible. By contrast, quality
control operates retrospectively, 'inspecting out' or discarding
defective products which fail to conform to a given standard. Quality
control and quality assurance, together with the assessment of quality
systems (that is, their monitoring, evaluation and audit) overlap. They
all have a role in more holistic approaches to managing quality, such
as total quality management (Oakland, 1989). While quality assurance
focuses on procedures, other approaches emphasise the 'people' aspect
of managing quality. For example, the 'Investors in People' initiative,
adopted by the UK Open University, sees participation in policy and
decision-making, and staff development for all
individuals at all levels as a major way of maintaining and improving
institutional quality. Both approaches can contribute to the management
of quality within the same institution.
From principle to practice
The adoption of approaches for managing quality should not begin and
end with the procedural, the 'how to do it'. A dangerous temptation for
an institution is to jump too quickly to a procedural stage before
adequately addressing issues of context and value, especially in the
face of academic concerns. A necessary starting point is an open
examination of how quality comes to be on the agenda, that is, the
institutional, social and political factors at work. This can
significantly shape policies and plans for quality in any given
setting. Collaborative problem definition is as important as
collaborative solutions. It must be openly acknowledged too that some
persistent problems are not in fact caused by students (a convenient
magnet for attracting blame) but by the institution itself. Critical
debate is essential if more than mechanistic outcomes are to result and
it is itself a vehicle for change and improvement (Barnett, 1992).
Without it the ownership of change needed to sustain continuing action
is unlikely to develop. It also helps keep attention on the educational
goals. Key questions are:
- what goals and standards of quality are we seeking to achieve as an institution? what are our guiding values and principles?
- what do departments, sections and work groups need to do to align themselves with these goals?
- what procedures do we need to have in place?
- what criteria will we use to judge our achievements in quality?
- what evidence will we need to demonstrate our achievements?
- what mechanisms do we have for identifying and correcting poor quality?
- who will be responsible?
- what do we need to do in order to operate a cycle of continuous improvement?
At the very least, this kind of debate raises awareness about
quality issues across an organization and improves communication and
understanding about other people's work. It may also lead to the
development of a more systematic approach to the management of quality,
such as quality assurance. How can this help ensure quality in ODL?
Quality Assurance
Quality assurance is an approach to managing quality which focuses
on the management of processes. It aims to apply agreed procedures to
them to achieve defined standards, as a matter of routine. The purpose
of quality assurance is to ensure consistency of services and products,
and reliability in their delivery and quality (a reduction of
variability and unpredictability). It aims to make processes and
procedures transparent to the people using them (reducing uncertainty
in staff), and to avoid errors as a consequence. It can facilitate the
three things identified by Daniel (1992, p.75) as essential for
managing distance education: communication, coordination and careful
attention to detail. It does not guarantee the value or worth of a
product or service (different kinds of action and judgement are needed
to achieve that), only the consistency and reliability of the processes
which produce them (a point often misunderstood).
Some existing practices in ODL can be described as quality assurance
even if not called that, for example, the use of external assessors in
course development, the re-drafting and peer-review of course units in
production. So an initial task in developing a quality assurance system
is to map and review the quality assurance practices already in place.
However, if they are to be more than mechanistic, procedures need to be
linked to the aims and purposes for undertaking them and to their roles
in achieving an organization's educational goals, chief of which will
be enabling learners to learn. Quality assurance procedures need to be
developed in a way that leaves scope for individual initiative and
professional judgement while still achieving a baseline of consistency
in standards of practice.
Quality assurance focuses on operational processes and systems in the following way:
- you set standards for a product or service (e.g., turnaround times
for students' assignment work, or the provision of accurate, consistent
and timely course-choice information to all students);
- you organize the development of a product or provision of a service so that the stated standards are consistently met;
- you develop, as a consequence, reliable and consistent procedures for essential activities.
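To make the three steps above more concrete, the following sketch (in Python) illustrates the kind of routine conformance check a quality assurance procedure might institutionalise: records of marked assignments are compared against a stated turnaround standard. It is a minimal illustration only; the 14-day standard, the record format and the figures are invented assumptions, not features of any particular institution's system.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

# Hypothetical standard: marked assignments returned within 14 days of receipt.
TURNAROUND_STANDARD_DAYS = 14

@dataclass
class Assignment:
    student_id: str
    received: date   # date the assignment arrived for marking
    returned: date   # date the marked work was dispatched to the student

    @property
    def turnaround_days(self) -> int:
        return (self.returned - self.received).days

def conformance_report(assignments: List[Assignment]) -> dict:
    """Summarise how far practice meets the stated turnaround standard."""
    late = [a for a in assignments if a.turnaround_days > TURNAROUND_STANDARD_DAYS]
    total = len(assignments)
    return {
        "total_marked": total,
        "within_standard": total - len(late),
        "late": len(late),
        "percent_within_standard": round(100 * (total - len(late)) / total, 1) if total else None,
        "worst_case_days": max((a.turnaround_days for a in assignments), default=None),
    }

# Invented example data for illustration only.
sample = [
    Assignment("S001", date(1994, 3, 1), date(1994, 3, 10)),
    Assignment("S002", date(1994, 3, 1), date(1994, 3, 20)),
    Assignment("S003", date(1994, 3, 2), date(1994, 3, 15)),
]
print(conformance_report(sample))
```

A report of this kind has value only if it is reviewed routinely and linked to corrective action, which is the point of the third step above.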
These steps appear deceptively straightforward, yet they can prove surprisingly
difficult to implement, not least because an apparently simple problem
in ODL can be complex to untangle (as Mills and Paul, 1993, illustrate
in describing the reasons for late return of marked work at Athabasca
University). It can also involve a large shift in organizational
culture. However, this is not an argument for abandoning the use of
quality assurance procedures. Their value is best demonstrated by
illustrations of their absence.
Avoidable failure
The following (real) example illustrates one kind of failure in ODL:
A student wrote 12 letters to a distance teaching
institution in a continuing attempt to get the course materials for
which she had registered and paid. This achieved no result. She finally
obtained them after the course had begun and after travelling over 100
miles to the headquarters of the institution, with her parents, to make
a personal appeal (Robinson, 1994, p. 185).
When I've asked participants in workshops on ODL to suggest the
reasons for failure here, several begin by blaming the student and
listing her various inadequacies, before turning their attention to
institutional factors. If you stand in the student's shoes and view the
problem you get a different perspective—a starting point which should
be used more often for the analysis of problems and design of
procedures to correct deficiencies. A key concept in managing quality
is client-centred service: this value requires that the institution,
its sub-systems and individual staff put students first, not last, in
designing procedures and services.
In this case, although the learning materials were well designed,
failure in the delivery system to deliver them on time and, later, to
remedy the error of non-delivery quickly enough prevented the student
from obtaining them when needed and as promised. Unpicking the causes
of failure revealed a poor records system, lack of specified time
(standards) for dispatching materials after receipt of registration
information, lack of monitoring procedures and written guidelines,
unclear designation of responsibility for ensuring specific actions and
above all, lack of materials to send. Tracking further back, the root
of the problem lay in late production of the materials, caused by late
handover by writers, a failure to meet schedules by the
course-coordinator, and a haphazard contracting relationship with
external printers. Each department or section blamed another. What this
example illustrates is a failure of the institution to manage and
regulate its internal processes well enough to achieve its purposes,
that is, to create conditions for learning for students. Failure in
one operational process had a 'knock-on' effect which damaged the
quality of the whole enterprise. The problem is compounded when the
enterprise is large-scale and hundreds of students are affected, as was
the case here. Better quality assurance procedures could have avoided
some of the problems above. Of course, they do not address the other
vital ingredient in managing quality, the 'people factor': that is, the
role of staff in the process, their commitment, accountability, skills
levels, training and staff development provision and the resources for
it (the latter can be an acid test of institutional commitment to
improving quality and is an important feature of the 'Investors in
People' initiative). Procedures are essential for managing quality but
by themselves are not enough. They need to be partnered by an approach
which focuses on the staff involved.
Managing processes
If better control of processes can assist the management of quality,
how can this be done? Two ways in particular seem to be productive.
Firstly, processes need to be mapped in order to be understood,
monitored, measured and managed. Mapping makes them explicit, clear
to all, and can identify problems about job boundaries, sequencing, and
bottle-necks. It can also be a vehicle for communication between
departments. Procedures can relate to one function, usually managed by
a single manager, or several (cross-functional), running across the
work of several departments or managers (such as course-coordinator,
editor, radio producer, and others).
Secondly, since cross-functional procedures are the Achilles' heel
of ODL (a source of error, delay and conflict) they are an important
focus for improving quality. One strategy is to set up task-focused,
time-limited teams (drawn from several departments) to work on them.
Project teams of this kind can be more effective than committees or
working groups (some confirmation of this is provided by Mills and
Paul, 1993, in describing experience of initiatives for improving
quality at Athabasca University in Canada, and by McIlroy, 1991, who
describes the use of 'goal groups' for solving problems at Massey
University in New Zealand). One reason for the success of this
approach, also demonstrated in other organizational contexts (Juran,
1989), is the location of ownership for the cross-functional process
with the team and team-leader, instead of responsibility being
dispersed across several departments, each of which is likely to blame
another for things going wrong. This project team approach to improving
quality needs to be repeated throughout an institution to achieve
system-wide improvement. It could be used much more often than it is.
Starting points are the identification of persistent problems, either
by the learners or by groups within the institution. Criteria for
project selection should be defined in advance and, to begin with,
those projects chosen which are most likely to be feasible,
significant, have measurable results, and solve a chronic problem
(Nouwens and Robinson, 1991), in order to engender confidence and
provide visible results. Experience from other contexts (Juran, 1989)
suggests that a project team of about seven people works well,
providing scope for a mixture of expertise, perspectives and
departmental representation. Looking at the timescale for achieving
improvement in quality at an institutional level, evidence from the
field of organizational studies suggests that the time needed is often
underestimated, leading to unfulfilled expectations. It takes three to
five years at a minimum to achieve significant improvements in
quality, though with careful choice of projects, some visible progress
can be made in the first year.
A framework for quality assurance
Although quality is improved incrementally, project by project, an
institution needs an institution-wide framework for managing quality if
it is to have impact. What might this look like? The following
checklist attempts to map the areas that a quality assurance system
would need to cover. It reflects practice and experience from higher
education and training contexts in the UK, from fields other than
education, and from my own work as editor of the SATURN European Guide
to Quality in ODL (SATURN, 1991, described in Robinson, 1992). It is
intended to be illustrative rather than exhaustive, and would need
adapting to fit differing contexts of use.
- Quality policy and plan
Has your ODL organization developed a quality policy? Have all
levels of staff had the opportunity to shape its development? Have
goals been agreed? Has the policy been translated into a practical
plan? Are all the staff familiar with both? How do you know (what
feedback loop is in place)?
- Identifying critical functions
Have the critical functions for achieving these goals been identified? Have some of
these critical functions taken the learner as the starting point? Have
the procedures to implement critical functions been analysed and
mapped? Do they match reality? Do the procedures embody best possible
practice?
- Specification of standards
Are there specified and clearly defined standards for all critical
functions? Have they been constructed by those who will work to
them? Are they clearly communicated and available in written form for
easy reference? Are they reasonable, achievable, and measurable? Are
there regular opportunities for reviewing their appropriateness and for
amending them?
- Involvement of users
Have students, tutors, course developers, operational service units and
all other stakeholders been involved in setting appropriate standards
and developing procedures? Does the framework being constructed by the
institution accommodate all 'voices'? Will there be opportunities to
provide feedback on their effectiveness in use?
- Staff involvement
Have all staff been involved in the development of the procedures,
particularly the aspects that affect their work directly? Have their
suggestions been built in? Has enough time been given to this process
(to ensure real not token participation)?
- Documentation
Are the procedures for achieving standards clearly documented? Are they
explicit? Do they represent fact (practice as it happens) or fiction
(an idealised version)? Are the practices described consistently in
different documents? Do they concentrate on essential procedures?
Are they in readable and user-friendly form? Do all those who need them
have access to copies? Are they up-to-date? Is there provision for
revising them when necessary? Is their use burdensome and
time-consuming? How do you know (what feedback loop is in place)?
- Training and staff development
Is there adequate provision of training and staff development? Is this
closely linked to the achievement of standards? Are there effective
mechanisms for assessing training needs? Are these reviewed regularly?
Are there resources allocated to meet them?
- Monitoring
Are there systematic and routine monitoring mechanisms for critical
functions? Do these check whether standards are being met and
procedures followed? How do you know? Is the monitoring information
harnessed to appropriate action, or simply filed for unspecified future
use? How does the monitoring information feed back into the system and
corrective action?
- Costs
Is there a strategy for monitoring the costs of implementing and
maintaining quality assurance activities? Does this take account of
human and financial costs? Are the costs greater than the benefits? Is
there a review process to find out?
(based on Robinson, 1994, pp. 187–188)
As can be seen, standards play an important role in putting quality
assurance into practice. To be acceptable and realistic they need to be
constructed with 'the cooperation and consensus, or general approval of
interested parties' (Dale and Oakland, 1992, p. 20). Standards can also
help define different levels of quality as a result of deliberate
strategy, not accident and mishap. Perfection is not always an
appropriate goal: for example, minimum standards can be set for
different grades of study centres; specification can be made of
broadcast quality or lesser standards for video production, or a standard
of materials 'good enough' for a particular purpose but 'best possible
quality' for another; in other words, a choice of appropriate quality for
a given context and purpose. One difficulty in introducing 'standards'
is that they can be perceived (and sometimes used) as a threat to
professional judgement and as a way to control rather than enable
staff. Participation in their construction and critical debate are ways
in which staff influence and shape the system.
Instituting quality assurance: some guidelines
It is one thing to devise an institutional framework for quality
assurance; it is another to have staff accept it, even though they may
have contributed to its construction. How can this process be assisted?
Improving quality is essentially a strategy for promoting change and
addressing the causes of resistance to change (an aspect which tends to
be neglected in introducing initiatives for improving quality). So
strategies offered by the literature and research on the management of
change are relevant. Implementation also has to take account of
specific contexts and cultures, as well as decisions about who the
change agents should be. Detailed schemes for quality assurance seem to
have limited transferability, though they may share the same goals. The
following guidelines suggested by Barnett (and amended to fit ODL more
closely) were designed to be relevant to higher education in the UK,
and depict one way of instituting quality assurance.
- Give a senior respected manager responsibility for taking
leadership in developing quality assurance and a realistic amount of
time to do it.
- Provide opportunities for staff at all levels and in all
locations, including regional and part-time staff, to participate in a
real (not token) way to help shape the quality assurance system as it
develops.
- Establish a cycle of review, with published timetable,
covering all aspects and departments of an ODL organization, including
regional and part-time staff activities. Ensure communication about it
throughout the organization.
- Use trusted intermediaries to act as a channel of
communication between central departments, regional units and senior
managers in shaping the system.
- Involve students and staff at regional or local level in
designing the system, and provide a means for their representation and
advocacy at a senior level, including key committees and working groups.
- Set the whole initiative going with the help of a 'dynamic'
group of interested and motivated staff, central and regional, who will
help develop and disseminate the ideas, informally and formally, and
contribute a variety of perspectives within an ODL institution.
- Give the key ideas exposure across the institution, ensuring
that those in the field (students, tutors, local advisors and
administrators) are included. Increase the flow of information and
ideas between the field and the centre; ensure that it is a two-way
flow.
- Disseminate good practice in improving quality and give publicity to progress and developments in quality assurance activities.
- Provide staff development which is specifically linked to the
goals of improving quality. Provide adequate resources for it.
Encourage critical debate, facilitate the sharing of good practice and
allow room for practitioner led staff development.
- Develop appropriate reward structures (including acknowledgement
of achievements) at all levels for staff who make a significant
contribution to the development of quality assurance.
(based on Barnett, 1992, pp. 131–132)
The role of information in managing quality
Whatever approach to managing quality is adopted, all organizations
need information to manage themselves effectively. It is sometimes
surprising to find out what organizations do not know about themselves,
their own operations, basic activities, practices, performance and
problems. This is a major obstacle to an organization becoming one
which continually learns about itself, its weaknesses and strengths,
appropriate corrective measures to take and priorities for action
(Argyris and Schön, 1978).
Four kinds of information and evidence can assist the management of
quality: information from the main functional areas (for example,
planning office, finance, student records, despatch), usually referred
to as management information; and data from monitoring, evaluation and
research. These four categories can overlap: for example, Calder (1994)
describes how student records can be used for evaluation and research
purposes, and even be set up to facilitate these activities. There is
wide variation in what ODL institutions do for evaluation and research
(as illustrated by Schuëmer, 1991) and how they use their findings in
managing quality. Difficulty lies in ensuring that the data and
findings link into the decision-making process and systems development.
It is a problem most institutions share, as Reddy identified in
relation to distance teaching universities in Asia and the Pacific in
1986:
Monitoring and evaluation systems are generally inadequate both
structurally and operationally. Because of this, proper evaluation of
the system and consequent adjustments become very difficult. More
important, control and supervision becomes very difficult, which in turn
affects the progress of the system (Reddy, 1986, p.264).
Yet without research and evaluation, when students are at a distance,
it is easy to delude yourself into thinking that things are
different from the way they really are ... Research can put facts in
the place of these delusions ... Research cannot guarantee that
people will adopt the best policies, but it can bring a bit of realism
to their thinking (Mitton, 1982, p. 239).
Quality assurance requires that an institution is able to
demonstrate knowledge of, and documentation about, its own practices,
but it also needs such information for its own functioning.
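As a small illustration of the point above about harnessing monitoring information to action rather than simply filing it, the hedged sketch below (again in Python) groups hypothetical monitoring records by functional area and flags those falling short of an assumed conformance target, yielding a short list of priorities for corrective action. The record format, the functional areas and the 90 per cent target are all assumptions made for the example.

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

# Each monitoring record: (functional_area, met_standard) -- an assumed format.
MonitoringRecord = Tuple[str, bool]

TARGET_PERCENT = 90.0  # assumed institutional target for conformance

def priorities_for_action(records: Iterable[MonitoringRecord]) -> List[str]:
    """List functional areas whose conformance falls below the target, worst
    first, so that monitoring feeds decision-making instead of being filed."""
    counts: Dict[str, List[int]] = defaultdict(lambda: [0, 0])  # area -> [met, total]
    for area, met in records:
        counts[area][1] += 1
        if met:
            counts[area][0] += 1
    shortfalls = []
    for area, (met, total) in counts.items():
        percent = 100.0 * met / total
        if percent < TARGET_PERCENT:
            shortfalls.append((percent, area))
    return [f"{area}: {percent:.0f}% of checks met the standard"
            for percent, area in sorted(shortfalls)]

# Invented monitoring data for illustration only.
sample_records = [
    ("despatch", True), ("despatch", False), ("despatch", False),
    ("tutoring", True), ("tutoring", True),
    ("registration", True), ("registration", False),
]
for line in priorities_for_action(sample_records):
    print(line)
```

Whether such a summary leads to anything depends, of course, on the institutional mechanisms discussed above for turning findings into decisions.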
Conclusions
This paper has proposed 'quality' as an organizing principle for ODL
systems and has suggested ways in which this might translate into
institutional practice. It concludes with a question and a reminder.
A question
Management structures and practices accepted as normal in one
context may be alien elsewhere. Approaches to the management of quality
need to be context-specific, though different institutions also have to
achieve common goals (for example, the delivery of good quality
materials on time, the provision of feedback to students on their
assignment work, access to support and advice for students, the
facilitation of deep engagement by students in learning). However, the
means of achieving the goals may differ. The model of quality assurance
described in this paper starts from a particular context and culture,
reflecting one set of values and characteristics, such as democratic
participation, relatively low levels of power distance (Hofstede, 1980)
between staff within an institution and between staff and students, a
familiarity with teamwork, and a willingness to share knowledge and
information (not withholding them as a form of power and control). A
question to ask is how far quality assurance approaches are transferable
from one context to another. In what ways would a
quality assurance approach operating in another country and culture be
different?
A reminder
Quality assurance can help improve the quality of functioning of an
ODL provider in what is a very complex endeavour. However, its
limitations also need to be remembered and understood. You can use
quality assurance to improve the process of producing materials. What
you produce may still be of low quality, for example, poorly designed
learning activities, or assessment strategies which require only the
lowest levels of thinking or surface learning. While attention to
managing processes and procedures (the focus of this paper) is
essential for assuring quality in ODL, staff also need a clear
institutional vision of what constitutes good quality learning, what
conditions foster it, and how to assess it.
Messages that convey to students, whatever our intentions, that the
assessments we carry out are just machinery for deriving grades invite
cynicism; they will jump the hoops, and in return they will get their
qualification (Biggs, 1989, p. 28).
If this is the end product of quality assurance, then of itself it has little purpose or value.
References
Argyris, C. and Schön, D., 1978, Organizational Learning: A Theory of Action Perspective, Reading, Mass., Addison Wesley.
Atkinson, R., McBeath, C. and Meacham, D. (eds), 1991, Quality in Distance Education, ASPESA Forum Papers, ASPESA, Australia.
Barnett, R., 1992, Improving Higher Education: Total Quality Care, Buckingham, Society for Research into Higher Education and Open University Press.
Biggs, J. B., 1989, 'Does learning about learning help teachers with
teaching? Psychology and the tertiary teacher', inaugural lecture, 8
December 1988, Supplement to the Gazette, University of Hong Kong, XXXVI(1), 20 March 1989.
Calder, J., 1994, Programme Evaluation and Quality, London, Kogan Page.
Dale, B. G. and Oakland, J. S., 1991, Quality Improvement Through Standards, Cheltenham, Stanley Thornes Publishers.
Daniel, J. S., 1992, 'The management of distance education', Report of the 1992 EDEN Conference, Krakow, Poland, EDEN Secretariat, Milton Keynes, pp. 72–75.
Freeman, R., 1991, 'Quality assurance in learning materials production', Open Learning, 6 (3), pp. 24–31.
Guri, S., 1987, 'Quality control in distance learning', Open Learning, 2 (2), pp. 16–21.
Harvey, L. and Green, D., 1993, 'Defining quality', Assessment and Evaluation in Higher Education, 18 (1), pp. 9–34.
Hofstede, G., 1980, Culture's Consequences: International Differences in Work-related Values, London, Sage.
Juran, J. M., 1989, Juran on Leadership for Quality, New York, The Free Press.
Lewis, R., 1989, 'What is "quality" in corporate open learning and how do we measure it?', Open Learning, November, 4 (3), pp. 9–13.
McIlroy, A., 1991, 'Enhancing the quality of management learning and
teaching in the distance mode', in Atkinson, R., McBeath, C. and
Meacham, D. (eds), 1991, Quality in Distance Education, ASPESA Forum Papers, ASPESA, Australia.
McIlroy, A. and Walker, R., 1993, 'Total quality management: Some implications for the management of distance education', Distance Education, 14 (1), pp. 40–54.
Mills, R. and Paul, R., 1993, 'Putting the student first: management
of quality in distance education', in T. Evans and D. Nation (eds), Reforming Open and Distance Education, London: Kogan Page, pp. 113–129.
Mitton, R., 1982, Practical Research in Distance Education: A handbook for developing countries, Cambridge, International Extension College.
Nouwens, F. and Robinson, P., 1991, 'Evaluation and the development of quality materials', Australian Journal of Educational Technology, 7 (2), pp. 99–116.
Reddy, G. R., 1986, 'Planning, management and monitoring of distance
education', Distance Education in Asia and the Pacific, Vol. 1, Proceedings of the Regional Seminar on Distance Education, Asian Development Bank, Manila, 1986.
Robinson, B., 1992, 'Applying quality standards in open and distance learning', Paper presented at Quality, Standards and Research in European Distance Education Conference,
University of Umeå, Sweden, March 5–6. Organised by EADTU (European
Association of Distance Teaching Universities) and SADE (Swedish
Association of Distance Education).
Robinson, B., 1994, 'Assuring quality in open and distance learning', in F. Lockwood (ed), Materials Production in Open and Distance Learning. London: Paul Chapman Publishing, Ltd.
Rumble, G., 1981, 'Organization and decision-making', in Kaye, A. and Rumble, G. (eds), Distance Teaching for Higher and Adult Education, London: Croom Helm.
SATURN, 1991, Quality Guide for Open and Distance Learning, Pilot Version, Amsterdam, SATURN.
Schuëmer, R. (ed), 1991, Evaluation Concepts and Practice in Selected Distance Education Institutions, Hagen, ZIFF.
Tait, A. (ed), 1993, Quality Assurance in Open and Distance Learning: European and international perspectives, Conference Papers, Cambridge: Open University.