Media Pembelajaran (Learning Media) Assignment, Group 4

A systematic approach to instructional design

2.1 The emergence of the systems approach
What happens when you put the theoretical ideas just discussed into practice? Very soon, two questions assume great importance:
1. How do we know we have the right objectives specified?
2. How do we measure the success of our course?
1.    Valid objectives
Taking the first question and re-phrasing it, we might ask: where do course objectives come from? Several answers are possible:
(a) A need for certain knowledge or skills is dictated by the trainee's future job.
(b) Certain knowledge or skills are held to be desirable by the society in which the trainee lives.
(c) The trainee himself is interested in attaining certain knowledge or skills.
(d) The teacher has personal interests or preferences which he intends to transmit if at all possible.
The first two summarize the vocational and non-vocational objectives of a course; the third to a large extent governs whether students will partake in the course and what they will gain from it. The last is the inevitable 'noise' in the communication channel, generally held to be of supreme value in non-vocational instruction (education?) and often a problem in vocational instruction (training?).
Discrepancies between any of these categories of objectives and those set up (or discernible in retrospect) for your course are going to lead to inefficiency (high cost), ineffectiveness (high dropout), lack of relevance (student revolt) and so on. Similarly there may be conflicts of objectives within or between categories which must be resolved.
We are implying the need therefore for a very thorough analysis of the whole system in which the course, the trainees and the teachers operate — a systems analysis.
2.    Course evaluation
The success of a course is similarly judged by various criteria. As before:
(a) Can students do the job for which they have been trained? (are cognitive and psychomotor objectives met?)
(b) Do students fit into the society and can they operate in it? (cognitive, psychomotor and affective)
(c) Are students satisfied with the course? (affective)
(d) Are teachers satisfied with the course? (a reflection, one hopes, of the above three sets of objectives being met).
But there are many other criteria. Just a few of these are:
(e) Is the cost of the course acceptable?
(f) Is the course structure in line with our philosophical or political viewpoint?
(g) How does this course compare with other alternative courses?
(h) Does the course use all resources efficiently (teachers, time, media, buildings, environment, as well as money)?
(i) What are the organizational problems associated with this course structure?
and in general terms:
(j)  What can we learn which will enable us to improve this course?
(k) What can we learn which will enable us to improve our general course-design procedures?
These last two avenues of evaluation summarize the development and the research aspects of evaluation. They imply that there is feedback from course evaluation to course design, and maybe even to the objectives of our course. They imply that defects in the original analysis and course design will be identified and corrected.
This concept of self-regulation is one of the key concepts in the educational technologist's approach to course design — an approach now commonly named 'the systems approach'. Although its techniques may only bear a slight resemblance to those used in engineering system design, and the scientific basis of educational systems may be much less developed than in the case of natural systems such as animal nervous systems, the systems approach to education involves the following basic types of activity common to all systems approaches:

1. Analysis
• of system needs (job and task analysis, society's needs, students' aims)
• of system resources (manpower, space, time, materials, money, students' existing abilities)
• leading to a statement of the problem (usually in terms of overall objectives)

2. Design
• identification of whether the problem is entirely a training problem (other strategies may involve job redesign, redesign of society, change in selection procedures, etc)
• if a training problem, identification of precise course objectives
• deriving instructional strategies and tactics (use may be made here of models such as Gilbert's, Bloom's or Gagne's)

3. Development
• planning of available resources (this involves selection of presentation media)
• preparation of materials, organizational structure, etc

4. Implementation/evaluation
• small-scale try-out concentrating on the instructional effectiveness of materials, efficiency of organization, etc (this is variously called developmental testing, validation or, more recently, formative evaluation, as it helps to establish the form of the system's components)
• large-scale try-out which, in addition to following up the above factors in the wider context, concerns itself with the value of the course to the organization, the community and the individual. (As this is in a way the summing-up of an existing implemented system, it is sometimes referred to as summative evaluation. However, this distinction between formative and summative evaluation refers mainly to the use made of the information gathered - ie does it lead to changes in the system? - rather than to the stage at which the information is gathered.)
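To make the idea of self-regulation concrete for readers who like to think in code, the four stages might be sketched as a simple feedback loop. This is only an illustrative sketch, not part of any of the models discussed here: the function names (analyse, design, develop, formative_evaluation, run_systems_approach) are hypothetical stand-ins for whole areas of professional activity, and the evaluation result is simulated.

```python
# Illustrative sketch only: the systems approach as a self-correcting loop.
# All names are hypothetical; they do not come from NSMI, Dick and Carey, or Diamond.

def analyse(problem):
    # 1. Analysis: turn needs and resources into a statement of overall objectives.
    return {"objectives": f"overall objectives for: {problem}"}

def design(analysis):
    # 2. Design: precise course objectives, instructional strategies and tactics.
    return {"plan": analysis["objectives"] + " -> strategies and tactics"}

def develop(plan):
    # 3. Development: media selection and preparation of materials.
    return {"materials": plan["plan"] + " -> materials"}

def formative_evaluation(materials, cycle):
    # 4. Implementation/evaluation: small-scale try-out of the materials.
    # For illustration, we simply pretend the course works after one revision.
    return {"objectives_met": cycle >= 2, "notes": "revise the sequencing of objectives"}

def run_systems_approach(problem, max_cycles=3):
    """Run the four stages repeatedly, feeding evaluation results back into analysis."""
    materials = None
    for cycle in range(1, max_cycles + 1):
        materials = develop(design(analyse(problem)))
        evaluation = formative_evaluation(materials, cycle)
        if evaluation["objectives_met"]:
            break
        # Feedback (self-regulation): evaluation findings modify the problem statement.
        problem = f"{problem} (revised after cycle {cycle}: {evaluation['notes']})"
    return materials

if __name__ == "__main__":
    print(run_systems_approach("train technicians to service pumping equipment"))
```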


These overall stages in the application of a systems approach to course design may be variously broken down into subsidiary procedures. Different people specify different procedures, often illustrating the sequence of procedures by some sort of flow diagram. Several such flow charts are illustrated in Figures 2.2, 2.3 and 2.4.
Although extremely helpful in illustrating the authors' overall approaches, such flow charts are a trifle misleading in that they imply a fixed sequence of procedures. Although some overall sequence is generally followed, the intelligent use of a systems approach involves the user in analysis, synthesis and evaluation at all stages of course design. It is not the sequence of the procedures, or the exact methods by which each procedure is carried out, that makes the systems approach work, but the intelligence and experience of the course designer (or, more commonly, the course design team) in performing the procedures and drawing the correct conclusions.
The pictorial flow chart (Figure 2.2) was used as a general guideline by the National Special Media Institutes, which included the Universities of Michigan State, US International, Syracuse and Southern California. It illustrates the overall concept very well but does not spell out the procedures to be carried out. It does, however, hint at different levels of activity, such as the project management level (steps 3 and 9), the design level (steps 1, 2, 4 and 5), and the development level (steps 6, 7 and 8).
The flow chart in Figure 2.3 is the 'Dick and Carey' (1985) model for instructional materials development. The flow chart is much more detailed, but concentrates on one level of the instructional design/development process. This model is close to the reality of many teachers, who cannot step far out of the bounds of the content and objectives as specified by some already existing curriculum and will not face the tasks of large-scale dissemination and implementation of their materials on several sites.
The third example (Figure 2.4) is Robert Diamond's model for instructional development in a university setting. This model quite clearly identifies two levels, or phases, which we might call the macro and micro design phases. The first phase is concerned with curriculum and even organizational change. The second is not unlike the Dick and Carey model.
In order to clarify these different levels of operation, in the next section I present an expanded 4-level model of the total instructional design and development process. Within this model we will identify how and why media selection decisions are taken. We will see how the rationale and the procedures used for media selection are quite different at each of three levels.

2.2 The four levels of instructional design/development
2.2.1 The four levels of analysis
We can define four levels of analysis. These can be summarized as follows (Romiszowski 1981):
Level 1. Defines the overall instructional objectives for our system, as well as certain other non-instructional actions that should be taken to ensure success in resolving the initially defined problems.
Level 2. Defines:
(a) the detailed intermediate objectives that have to be achieved to enable us to achieve the overall objectives (hence the term 'enabling objectives');
(b) the interrelationship between these objectives (in terms of prerequisites); and
(c) the level of entry, or the knowledge and skills which will not be taught but which the learner must have mastered before entering the instructional system we are developing.
Level 3. Classifies the detailed objectives according to some system or taxonomy of types of learning and assigns specific instructional tactics to each objective or group of similar objectives. Thus, typically, one might find that...
