Arega Mamaru
National Educational Assessment
and Examinations Agency
January 2014
Addis Ababa
Classroom Assessment Manual for Primary and Secondary School Teachers
Prepared by
Arega Mamaru Yewore
Yilikal Wondimeneh
Effa Gurumu
Bekele Geleta
Tel: 011-1-22-65-21/ 011-1-23-28-84
Fax: 011-1-22-65-21/251-11-1-23-28-90
P.O.Box: 30747
Email: [email protected]
Addis Ababa
January, 2014
My heartfelt thanks go to all the people who participated in the realization of this
manual; mentioning all of them is impossible. I must, however, name a few
professionals and organizations for their special contributions. First, I am indebted to
my assistants, indeed my close friends, Yilikal Wondimeneh, Effa Gurumu, and
Bekele Geleta, for their technical advice, insightful feedback, and patience in
shaping the framework, development, and completion of the manual.
My sincere thanks go to Ato Araya G/Egziabher, Director General of NEAEA,
and Ato Zerihun Duressa, Deputy Director General, for their special attention,
unreserved support, and continued understanding throughout the assignment. I also
acknowledge my colleagues Ato Tamiru Zerihun (Head of NEAD), Ato Mengistu
Admassu, and Ato Abiy Kefyalew for their unreserved moral and material support.
I acknowledge H.E. Ato Fuad Ibrahim, State Minister of Education, for his
concern, follow-up, and the insightful interest he showed from the conception to the
completion of the manual. Special acknowledgement, with many thanks, is due
to the World Bank READ TF Ethiopia Country Office for its generous provision of the
financial assistance and material facilitation required for the validation and firming-up
workshops and for the participants of the two rounds of TOT workshops.
Last, but not least, my special thanks go to all participants in the validation,
firming-up, and TOT workshops for their enormous professional contributions.
Table of Contents
CHAPTER ONE
1.0. INTRODUCTION
1.1. Background
1.2. Rationale
1.3. Purpose of the Manual
1.4. Organization of the Manual
CHAPTER TWO
2.0. OVERVIEW OF CLASSROOM ASSESSMENT
2.1. Concepts of Terms Related to Classroom Assessment
2.1.1. Testing, Measurement, Assessment, and Evaluation
2.1.2. Classroom Assessment and Examinations
2.1.3. Assessment, Evaluation and Action
2.1.4. Formative and Summative Assessments
2.2. Purpose and Characteristics of Classroom Assessment
2.2.1. Purposes of Classroom Assessment
2.2.2. Characteristics of Effective CA
2.3. Assumptions and Principles of Classroom Assessment
2.3.1. Assumptions of Classroom Assessment
2.3.2. Principles of Classroom Assessment
2.4. Alignment of Competency-Based Curriculum, Learning, and Assessment
2.4.1. Competency-Based Curriculum
2.4.2. Learning
2.4.3. The Interaction of Curriculum, Learning and Assessment
2.5. Country Experiences on Classroom Assessment
CHAPTER THREE
3.0. COMPONENTS OF CLASSROOM ASSESSMENT
3.1. Assessment for Learning (AfL)
3.1.1. Purposes of Assessment for Learning
3.1.2. Strategies of Assessment for Learning
3.1.3. Assessment for Learning Practices
3.1.4. Commonly Used AfL Techniques and Tools
3.1.5. Commonly Used AfL Tools
3.2. Assessment as Learning (AaL)
3.2.1. Purposes of AaL
3.2.2. Planning AaL
3.2.3. Techniques of AaL
3.3. Assessment of Learning (AoL)
3.3.1. Purposes of Assessment of Learning
3.3.2. Techniques of Assessment of Learning
3.4. Assessment Tools for Integrated Assessment
3.4.1. Performance Assessment
3.4.2. Personal Communication Assessment
CHAPTER FOUR
4.0. Providing Feedback for Classroom Assessment
4.1. Concept of Feedback
4.1.1. What Is Feedback?
4.1.2. Purposes of Feedback
4.1.3. How to Give and Receive Feedback
4.1.4. Strategies for Effective Feedback
4.2. Ways of Giving Feedback
4.2.1. Giving Feedback by Comparing
4.2.2. Outcome and Process Ways of Giving Feedback
4.2.3. Descriptive and Evaluative Ways of Feedback
4.3. Feedback for Target Users and Stakeholders
4.3.1. Feedback to Students
4.3.2. Feedback to Teachers
4.3.3. Feedback to Parents
4.3.4. School Administrators and Authorities
4.4. Utilizing the Information for Improvement
CHAPTER FIVE
5.0. Planning and Constructing Paper-and-Pencil Assessment Tools
5.1. The Preconditions before Constructing Assessment Tools
5.2. Planning and Constructing the Instruments
5.2.1. Determining the Purpose of the Assessment
5.2.2. Identifying the Learning Outcomes to Be Measured
5.2.3. Defining the Learning Outcomes
5.2.4. Outlining the Subject Matter to Be Measured
5.2.5. Developing a Table of Specification
5.3. Construction of Paper-and-Pencil Tests
5.3.1. Writing Objective Test Items
5.3.2. Writing Supply Items
5.3.3. Writing Essay Test Items
CHAPTER SIX
6.0. Assembling, Administering, Scoring Tests and Reporting Results
6.1. Assembling, Administering and Scoring Test Items
6.1.1. Assembling Test Items
6.1.2. Administering Tests
6.1.3. Scoring Tests
6.2. Marking, Referencing, Recording and Reporting Test Results
6.2.1. Marking/Grading
6.2.2. Approaches for Referencing and Interpretation
6.2.3. Recording and Reporting Students' Progress and Achievement
CHAPTER SEVEN
7.0. Describing and Summarizing Test Scores
7.1. Ways of Describing Test Scores
7.1.1. Frequency Distribution
7.1.2. Graphs
7.2. Measures of Central Tendency
7.3. Measures of Variability
CHAPTER EIGHT
8.0. Evaluating the Test and Test Items
8.1. Attributes of a Good Test
8.1.1. Validity
8.1.2. Reliability
8.1.3. Fairness and Wash-back Effect
8.1.4. Practicability
8.2. Improving the Quality of Test Items through Item Analysis
8.2.1. Quantitative Item Analysis
8.2.2. Interpreting Item-Analysis Data
CHAPTER NINE
9.0. THE WAY FORWARD
9.1. National and Regional Level
9.2. Teacher Training Colleges and Universities
9.3. Schools/School Cluster Centers
REFERENCES
ANNEXES
Acronyms
AaL: Assessment as Learning
AfL: Assessment for Learning
AoL: Assessment of Learning
APA: American Psychological Association
AERA: American Educational Research Association
ANCME: American National Council on Measurement in Education
CA: Classroom/Continuous Assessment
CDICP: Curriculum Development and Implementation Core Process
FCA: Formative Continuous Assessment
ICDR: Institute for Curriculum Development and Research
IEQ: Improving Educational Quality
MLC: Minimum Learning Competency
MoE: Ministry of Education
NOE: National Organization for Examinations
NEAEA: National Educational Assessment and Examinations Agency
OECD: Organization for Economic Co-operation and Development
SCA: Summative Continuous Assessment
TGE: Transitional Government of Ethiopia
UNESCO: United Nations Educational, Scientific and Cultural Organization
USAID: United States Agency for International Development
CHAPTER ONE
1.0. INTRODUCTION
1.1. Background
There is considerable evidence that assessment in general, and classroom assessment (also
known as continuous assessment) in particular, is a powerful instrument for enhancing the
attainment of learning outcomes and for ensuring quality education and academic excellence in
education institutions.
Realizing this, the current Ethiopian Education and Training Policy (TGE, 1994) emphasized
ongoing classroom assessment (continuous assessment) in academic and practical subjects to ascertain
the formation of an all-round profile of students at all levels. To translate this policy into practice at
the classroom level, a comprehensive and system-wide classroom assessment manual is needed to
help teachers engage in assessment activities.
At present, there is no system-wide classroom assessment policy framework with
implementation guidelines, and only very few resources are available for teachers to conduct classroom
assessment activities. The newly reframed competency-based general education curriculum framework
provides limited guidance on what students are expected to learn and how they are to be assessed. As stated in
the framework, subject teachers are advised to carry out regular checks on the progress of all
students in each subject through continuous and formal assessment (MoE, 2010). Moreover, NOE
(2002) and NOE (2004) attempted to prepare continuous classroom assessment guidelines and
techniques for primary and secondary teachers, which mainly focused on the traditional summative aspect of classroom
assessment. After piloting in a few schools, USAID (2012) included some
formative and summative continuous assessment guidelines in the first-cycle primary school
teachers' continuous professional development (CPD) resource material. This resource material, however,
focused on lower primary school teachers and lacked comprehensiveness. Furthermore, ICDR (2004)
and Desalegn (2004) wrote on similar topics, each focusing on only some aspects of classroom
assessment.
Most of these classroom assessment resource materials focus on only some aspects of classroom
assessment and lack the comprehensiveness needed to accompany the current reframed competency-based
curriculum. In competency-based education, assessment plays an important role in improving
students' learning progress: it enables teachers to judge whether a competency has been
achieved or not.
To assess students' competencies in line with the reframed curriculum, and to determine whether the
learning process is leading to the predetermined learning outcomes, school
teachers are expected to conduct ongoing classroom assessment. To make this more effective,
this classroom assessment manual was prepared to enable school teachers to implement
the competency-based curriculum in the classroom and to improve their assessment techniques. Hence, the
manual is designed to support teachers in assessing their students effectively, efficiently, and fairly,
with the intention of enhancing student learning by empowering teachers to conduct classroom
assessment with active student involvement.
1.2. Rationale
The planned and intentional use of continuous assessment in the classroom enhances students'
achievement. When classroom assessment is frequent and varied, teachers can learn a great deal about
their students. They gain a better understanding of students' existing beliefs and knowledge and can
identify gaps in understanding, which enables them to probe students' thinking over time and to
link prior knowledge to new learning (Black, 1998; Hoy and Gregg, 1994). As learnt from Black
and Wiliam's research, improving student learning through assessment depends on five factors: (1)
providing feedback to students; (2) students' active involvement in their own learning; (3) adjusting
teaching to take account of assessment results; (4) recognizing the influence of assessment on
students' motivation and self-esteem; and (5) ensuring that students assess themselves and understand how
to improve. This implies that teachers can use classroom assessment to become well aware of the
knowledge, skills, and beliefs that their students bring to a learning task and to monitor students'
changing perceptions as instruction proceeds.
In this regard, it is worth noting the example given by Angelo and Cross (1993): to avoid
unhappy surprises, a school and its students need better ways to monitor learning throughout the
semester. If a teacher's goal is to help students learn points "A" through "Z" during the course, then the
teacher first needs to know whether all students are really starting at point "A" and, as the course
proceeds, whether they have reached intermediate points "B," "G," "L," "R," "W," and so on. To
ensure high-quality learning, it is not enough to test students when the syllabus has arrived at points
"M" and "Z." Classroom assessment is particularly useful for checking how well students are learning
at those initial and intermediate points, and for providing information for improvement when learning is
below satisfactory. This means that teachers need a continuous flow of accurate information on
student learning.
Similarly, official documents including the current Ethiopian Education and Training Policy
(TGE, 1994) and the GEQIP document (MoE, 2008) have given special attention to the necessity of
ongoing classroom assessment to enhance academic excellence and to answer important questions about
the student, the classroom, the school, and the education system as a whole. Despite the fact that much
has been said about the importance of ongoing classroom assessment in various assessment-related
documents and on occasions such as workshops, studies indicate that teachers have
critical gaps in their conceptions and practical application of it at the classroom level. For example, many
teachers over-use testing, do not consider assessment part and parcel of learning,
and provide inadequate feedback. According to Kibre (2010), the Ethiopian Academy of Sciences
(2012) and MoE (2012), teachers' existing practices and knowledge regarding the pedagogical
advantages of ongoing classroom assessment, and their attitudes towards implementing it in their
classrooms, seem to be minimal. There appears to be a general misunderstanding among teachers in the
use of ongoing classroom assessment techniques. Most teachers rely on mid-semester and final exams
to assess their students' level of understanding; this practice is professionally unsound because
it draws on only a limited variety of assessment tools. The following misunderstandings
are among the most common.
1. Ignoring the importance of ongoing assessment with appropriate feedback, many teachers
over-use testing at the end of a week, month, mid-semester, or unit/series of lessons;
2. Confusing the purposes of assessment, many teachers grade student dispositions
and behaviors such as attendance, effort, and attitudes instead of reporting them separately from
academic achievement;
3. Confusing the purpose of assessment (gathering information about
student learning) with that of grading (an end-point judgment about achievement), many teachers
consider the two as the same;
4. Many teachers give the same mark/grade to each participant in a group assessment.
This ignores the importance of validly assessing each student's work within a group;
5. Failing to align teaching objectives with assessment tasks, many teachers use assessment as
an auditing exercise of what students do and don't know, or can and can't do, by testing
student memory, asking trick questions, etc.;
6. Failing to address what is important for learning, many teachers focus only on what is
easiest to measure. They rely mainly on simple learning outcomes assessed with paper-and-pencil tests
rather than including higher-order thinking skills and performances.
Various reasons may be given for the aforementioned problems and misunderstandings. It is
worth pinpointing, from Desalegn's (2004) study, some major reasons why many teachers do not use
ongoing classroom assessment in their classrooms. The study mentions the following causes:
Lack of sufficient training in classroom assessment;
Lack of skills to develop classroom assessment tools;
Absence of manuals and other supporting materials that assist teachers in the development of
classroom assessment tools and the like.
In response to these problems, the National Educational Assessment and Examinations
Agency (NEAEA) has prepared this comprehensive and system-wide classroom assessment manual to
equip primary and secondary school teachers with practical classroom assessment techniques and tools
and to engage them in context-specific assessment activities. During the preparation of the manual,
assessment and curriculum experts, together with university and college instructors, participated
from the conception to the realization of the work. The manual was then presented to policy makers
twice, and their concerns and suggestions were incorporated as feedback.
1.3. Purpose of the Manual
Classroom assessment is now seen as one of the best vehicles for reaching the quality education that
most schools strive to achieve. Although many of the examples of classroom assessment in this manual are
at the primary and secondary levels, the concepts and processes can also be applied at the higher education level.
This handbook can guide teachers, who are the intended users of the manual and the potential
implementers of context-specific classroom assessment. Therefore, teachers may find this manual a
useful tool for adapting and implementing classroom assessment activities in their own educational
settings.
More specifically, the manual has the following objectives:
A. To acquaint teachers with the essence (theoretical and practical foundations) and advantages of
classroom assessment for students' achievement;
B. To ensure that the assessment process demonstrates to students, teachers, principals, and
outsiders how they can work jointly to improve students' achievement through classroom
assessment;
C. To enable teachers to prepare better tables of specifications (blueprints) for their classroom
assessments;
D. To improve teachers' knowledge and skills in preparing and administering assessment
instruments, and in analyzing, recording and utilizing classroom assessment data;
E. To guide teachers in utilizing assessment results and providing feedback to determine how
students' achievement can be improved;
F. To enable teachers to identify students' learning difficulties through classroom
assessment at an early stage and to apply context-specific teaching methods and assessment
techniques;
G. To help teachers plan their own context-specific remediation for alleviating the problems they
identify;
H. To serve as a base for future subject-specific assessment manual development.
1.4. Organization of the Manual
The manual is organized into chapters on different topics. The first chapter provides
background information about why classroom assessment has recently moved to the forefront and the
rationale for preparing this handbook. The second chapter gives a detailed description of the
theoretical foundations, practical implications, assumptions, and principles of classroom assessment. This chapter also
provides a brief description of the concepts and alignment of curriculum, learning and
assessment. Moreover, case examples from countries that have performed highly in international assessments
are presented. The components (paradigms) of classroom assessment are discussed in chapter three.
Although assessment for learning and assessment as learning are dealt with in detail, assessment of
learning is also highlighted. Besides, the chapter is enriched with AfL and AaL strategies,
techniques and suggested tools, including performance and personal communication assessment, with
practical examples. Chapter four suggests different ways of providing feedback for the different
classroom assessment components.
Chapter five focuses on constructing paper-and-pencil assessment instruments that school teachers
can adapt in their test development process. The chapter includes an extensive discussion of how
to assess learning outcomes and develop a table of specification (blueprint) for classroom assessment. It
also presents guidelines and criteria for selecting appropriate assessment methods by
considering each student's learning needs and measuring the learning outcomes.
Chapter six focuses on assembling, administering and scoring classroom assessment instruments
(tests), and on marks and marking systems. Chapter seven deals with describing and summarizing test
results using graphs and measures of central tendency.
Chapter eight focuses on evaluating tests and test items in terms of validity, reliability, fairness, and
the washback effect of tests. Some recommendations for implementing classroom assessment are
put forward in chapter nine. The annotated references/bibliographies used in preparing this manual
are listed after chapter nine. Pertinent templates, practical examples and assessment formats,
together with brief explanations, are annexed for teachers to model in their day-to-day classroom
assessment practices.
2.1. Concepts of Terms Related to Classroom Assessment
In this section, an attempt is made to clarify the concept of classroom assessment in relation to
other related concepts, since some people incorrectly use them interchangeably.
2.1.1. Testing, Measurement, Assessment, and Evaluation
Testing refers to specific instruments that measure the achievement and proficiency of students, while
measurement refers to the systematic description of a student's performance in terms of numbers.
Essentially, measurement is the process of scoring a test: assigning numbers, or quantifying, to
represent an individual's performance (Alausa, 2004). Tests and measurement are subsets of
assessment. Assessment (sometimes known as "student assessment" or "educational assessment")
refers to the more general concept of examining students' learning progress closely. As mentioned in
Clarke (2012), assessment includes classroom assessment, examinations, and large-scale (system-level)
assessments. Evaluation, in turn, is the process of systematically collecting information regarding the
nature and quality of education in order to make rational decisions. Testing, measurement, and
assessment are subsets of evaluation. The following figure depicts the relationship among them.
Figure 1: Relationship among testing, measurement, assessment, and evaluation
Source: Dochy (2001, p.4343)
2.1.2. Classroom Assessment and Examinations
Classroom assessment is also known as continuous assessment; for this reason, the two phrases are
used interchangeably in this guiding manual, as they are in other books. According to the NOE (2004)
document, classroom assessment is a process of collecting information on the progress of students'
learning using a variety of tools such as checklists, formal tests, observations, self-assessment, creative
writing, and portfolios. For Black and Wiliam (1998), classroom assessment refers to all those
activities undertaken by teachers, and by their students in assessing themselves, that provide
information to be used as feedback to modify teaching and learning activities. Such assessment
becomes formative assessment when the evidence is actually used to adapt the teaching to meet
student needs.
An examination is a series of questions designed to measure a learner's knowledge of a particular
subject at the middle or end of the academic year. Classroom assessment and end-of-year examinations
are meant to complement one another. When classroom assessments are done properly, they can
predict performance on the end-of-year examinations.
2.1.3. Assessment, Evaluation and Action
Assessment is a thorough and constant appraisal, judgment and analysis of students' performance
through meticulous collection of information, whereas evaluation is an overall but regular judgment
and analysis of teaching, learning and the curriculum through systematic collection of data. In
assessment, the focus is on specific points of the subject; in evaluation, the emphasis is placed on
overall aspects of the subject. Assessment is a formative process that occurs during learning, whereas
evaluation emphasizes the conclusion of a process and takes place at the end of the term.
Assessment looks at individual learners of a subject to improve the process of teaching and learning,
while evaluation judges the goodness, worth or quality of learning, students' achievement, and the
whole learning program.
Data in assessment are collected by concentrating on students' moment-by-moment performance in
the classroom through various techniques, while evaluation involves gathering data by focusing on
teaching performance and learning outcomes.
Action is what we do as a result of assessing learners and evaluating their assessment information.
For example, suppose we stand on a scale and it assesses our weight as 100 kilograms. We
evaluate this assessment as being unhealthy for us. We then decide to take action and go on a diet to
reduce our weight. This process can be summarized as:
1. Assessment: personal weight of 100 kilograms
2. Evaluation: unhealthy condition
3. Action: weight-reduction diet
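The assessment-evaluation-action cycle above can be sketched as a small decision routine. This is an illustrative sketch only: the 90-kilogram threshold and the wording of the action are invented for the example, not taken from this manual or from any medical guideline.

```python
# Illustrative sketch of the assessment -> evaluation -> action cycle
# from the weight example above. The 90 kg cut-off is a made-up
# example value used only to drive the decision logic.

def assess() -> float:
    """Assessment: collect a measurement (here, the example weight)."""
    return 100.0  # kilograms, as in the manual's example

def evaluate(weight_kg: float, healthy_limit_kg: float = 90.0) -> str:
    """Evaluation: judge the measurement against a criterion."""
    return "unhealthy" if weight_kg > healthy_limit_kg else "healthy"

def act(judgement: str) -> str:
    """Action: decide what to do based on the evaluation."""
    if judgement == "unhealthy":
        return "start a weight-reduction diet"
    return "no change needed"

weight = assess()
judgement = evaluate(weight)
print(weight, judgement, act(judgement))
```

The same three-step shape (collect evidence, judge it against a criterion, act on the judgment) is what the classroom examples below follow.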
Actions in Classroom Context
Remediation is a method of helping students to overcome their learning difficulties. After assessing
the learners, the teacher is expected to provide remediation to non-masters and borderline students.
As shown in the figure below, remediation by peers is very important for putting at ease students who
have learning difficulties.
Assessment: During a Grade 2 lesson, in which a teacher is teaching the learners to identify square
shapes, the teacher gathers information on whether learners can identify square shapes. Suppose the
learners cannot identify square shapes.
Evaluation: The teacher judges that it is not good that the learners cannot identify square shapes. The
learners must be able to identify square shapes in order to move on to the next learning outcomes
(competencies) which are to name the different shapes and to identify objects of the same shape.
Action: Based on students’ performance, the teacher may decide that remediation or enrichment
actions are necessary. Therefore, the teacher groups the learners in pairs and they practice drawing big
and little squares, circles and triangles. The teacher then helps them to learn how to identify squares
from among the other shapes.
Figure 2: Remediation by peers
Enrichment is work given to the group of students who have mastered the objectives. The students
work independently but are checked by the teacher. To be effective, enrichment should be meaningful,
motivating, rewarding, enjoyable and challenging. Remediation and enrichment are intended to run in
parallel in the classroom.
Assessment: During a Grade 2 lesson in which a teacher is teaching the learners to identify circles,
the teacher gathers information that establishes that the learners can identify circles.
Evaluation: The teacher judges that the ability of learners to identify circles is very good. The
learners must be able to identify circles in order to move on to further objectives, for instance; to
identify the different shapes and to identify objects of the same shape.
Action: Depending on the time available, the teacher may decide to go on to teach the next
competency or provide the learners with further instruction (enrichment) on circles. For example, the
teacher may ask the learners to identify circles in common objects such as cars and bicycles, or ask
them to draw pictures of objects in their school that show how circles are part of the real world
(Angelo and Cross, 1993).
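The remediation/enrichment decision described above can be expressed as a simple grouping routine. This is a hypothetical sketch: the 80% mastery cut-off, the student names and the sample scores are invented for illustration, and real grouping decisions involve teacher judgment beyond a single score.

```python
# Hypothetical sketch: group students for remediation or enrichment
# based on whether they have mastered a competency (e.g. identifying
# squares). The cut-off and sample scores are invented examples.

def group_students(scores, mastery_cutoff=0.8):
    """Split students into enrichment (masters) and remediation (non-masters)."""
    groups = {"enrichment": [], "remediation": []}
    for student, score in scores.items():
        key = "enrichment" if score >= mastery_cutoff else "remediation"
        groups[key].append(student)
    return groups

scores = {"Abebe": 0.9, "Chaltu": 0.55, "Hana": 0.85, "Kebede": 0.4}
print(group_students(scores))
# {'enrichment': ['Abebe', 'Hana'], 'remediation': ['Chaltu', 'Kebede']}
```

In practice, the two groups then run in parallel, as the manual notes: the enrichment group works independently while the teacher concentrates on the remediation group.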
2.1.4. Formative and Summative Assessments
Formative assessment is an ongoing assessment including reviews and observations in a classroom as
part of the instructional process with the intention of modifying and validating instruction. Summative
assessment, on the other hand, is typically used to evaluate the effectiveness of instructional programs
and services at the end of an academic year or other predetermined time for making a judgment of
student competency after an instructional phase is complete (READ, 2011). To sum up, it is worth
noting Earl's (2004) analogy delineating assessment from evaluation: when the cook tastes the soup,
that is formative assessment; when the guests taste the soup, that is evaluation. If the cook tastes the
soup, he/she has time to add some ingredients and can learn from this for next time. When the guest
tastes the soup, however, he/she judges how good the soup is.
2.2. Purpose and Characteristics of Classroom assessment
2.2.1. Purposes of Classroom assessment
If a teacher conducts continuous classroom assessment to find out what a student knows,
understands, and can do, he/she is in a better position to understand where his/her students are and
what their learning needs are. When teachers know how students are progressing and where they are
having trouble, they can use this information to make necessary instructional adjustments such as
re-teaching, trying alternative instructional approaches, or offering more opportunities for practice.
The main purposes of classroom assessment are to:
1. find out what students know and can do
2. help teachers adjust their teaching methods based on students' needs
3. have confidence in what students know, understand and can do
4. provide all students with opportunities to show what they know
5. help students to learn with understanding
6. improve teaching methods
7. help to determine remediation and enrichment methods
8. let students know their own progress
9. let parents know their child's progress
10. lead to an overall evaluation of the students
2.2.2. Characteristics of Effective CA
The following are the characteristics of effective classroom assessment:
- Mutually beneficial
- Guidance-oriented
- Rooted in good teaching practice
(Atkin, Black and Coffey, 2001).
The table below signifies how important classroom assessment is for students and teachers.
Table 1: The need for classroom assessment for students and teachers

For students, classroom assessment helps to:
- identify prior knowledge
- identify strengths and weaknesses
- help in setting success criteria and learning goals
- help in understanding themselves as learners
- help in understanding the learning process
- measure learning progress and attainment

For teachers, classroom assessment helps to:
- identify strengths and weaknesses in teaching
- help in setting learning outcomes and objectives
- tell how and what to assess
- show fairness and objectivity of assessment
- inform and guide teaching and learning
- assign grades
- monitor student progress and attainment
- be used to develop oneself as a teacher
2.3. Assumptions and Principles of Classroom Assessment
2.3.1. Assumptions of Classroom Assessment
Assumptions are sets of beliefs, theories or conceptual frameworks developed for practical
understanding. From an assessment perspective, teachers need certain assumptions relevant to
classroom assessment in order to make more effective decisions about their students' actual
learning abilities. According to UTC (2002), the assumptions of classroom assessment are the
following:
1. The quality of student learning is directly, although not exclusively, related to the quality of
teaching. Therefore, one of the most promising ways to improve learning is to improve
teaching.
2. To improve their teaching effectiveness, teachers need first to make learning outcomes explicit
and then to get specific, comprehensible feedback on the extent to which they are achieving
them. Teachers should articulate the specific skills and competencies that show where they are
going and where their students need to go.
3. To improve their learning, students need to receive appropriate and focused feedback early
and often; they also need to learn how to assess their own learning.
4. The type of assessment most likely to improve teaching and learning is that conducted by
teachers to answer questions they themselves have formulated in response to issues or
problems in their own teaching.
5. Systematic inquiry and intellectual challenge are powerful sources of motivation, growth, and
renewal for teachers, and classroom assessment can provide such challenge.
6. Classroom assessment does not require specialized training; it can be carried out by dedicated
teachers from all disciplines.
7. By collaborating with colleagues and actively involving students in classroom assessment
efforts, teachers and students enhance learning and personal satisfaction.
2.3.2. Principles of Classroom Assessment
The following principles are identified as the major, common ones guiding excellence in
improving quality education through classroom assessment; they provide guidelines and a framework
for stakeholders (Rudner and Schafer, 2002) to better perform their part.
1. Classroom assessment of student learning begins with educational values (cognitive,
psychomotor, and affective).
2. Classroom assessment works best when it is ongoing, not episodic.
3. Classroom assessment provides efficient feedback on instruction for deciding on
enrichment and/or remedial actions:
- satisfactory (proceed to the next topic)
- unsatisfactory (re-teach)
4. Classroom assessment uses a variety of assessment procedures to gather appropriate
information about students' knowledge, skills, and attitudes. Just as one size of learning
doesn't fit all, one size of assessment doesn't suit all either.
5. Classroom assessment ensures that assessments are valid, reliable, fair and usable:
- Valid: reflects the purpose of the test (fit for purpose)
- Reliable: yields consistent results
- Fair: free from bias
- Usable: practicability, coverage, convenience, and economy
6. Classroom assessment allows students to document and keep records of their performance
assessments, for example in a portfolio.
7. Classroom assessment fosters wider improvement when representatives from across the
educational community are involved:
- Interpreting/communicating the results of assessment meaningfully with stakeholders
- When assessment carries its correct meaning, students can make correct decisions: falling
scores can motivate, and passing can inspire
- Through assessment, educators meet their responsibilities to students and to the public
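Principle 5's notion that a reliable assessment "yields consistent results" is often quantified with an internal-consistency index such as Cronbach's alpha. The manual itself does not prescribe this statistic (reliability is treated in chapter eight), so the following is only an illustrative sketch with invented item scores.

```python
# Illustrative sketch: Cronbach's alpha as one common way to quantify
# the "consistent results" sense of reliability. The item scores below
# are invented; items[i][j] is the score of student j on item i.

def cronbach_alpha(items):
    """Compute Cronbach's alpha for a list of item-score lists."""
    k = len(items)  # number of items

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(scores) for scores in zip(*items)]  # per-student totals
    item_var_sum = sum(var(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Three dichotomously scored items answered by four students.
items = [
    [1, 0, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
]
print(round(cronbach_alpha(items), 3))
```

Higher values (closer to 1) indicate that the items rank students consistently; very low values suggest the test does not yield consistent results in the sense of principle 5.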
2.4. Alignments of Competency Based Curriculum, Learning, and Assessment
2.4.1. Competency Based Curriculum
Competency-based learning, or competency-based education and training, is an approach to teaching
and learning more often used for learning concrete skills than for abstract learning. It differs from
other approaches in that the unit of learning is extremely fine-grained: learners work on one
competency at a time, usually a small component of a larger learning outcome. After being evaluated
on an individual competency, the student moves on to the next one only after mastering it.
Competency-based learning is learner-focused and works naturally with independent study and with
the teacher in the role of facilitator. Competency-based learning methods allow students to learn the
individual skills they find challenging at their own pace, practicing and refining as much as they like,
and then to move rapidly through the skills at which they are more adept. Because it requires mastery
of every individual learning outcome, it is well suited to learning credentials. This implies that
teachers need to know the learner so that they can make sure the curriculum fits. They need to know
students' traits and how students access, process, and express information, and to know their learning
and thinking styles. Teachers should use varied instructional methods and varied assessment
techniques to meet diverse student needs.
There are four student traits that teachers must often address to ensure effective and efficient learning.
These are readiness, interest, learning profile, and affect of the students.
Readiness refers to a student's knowledge, understanding, and skill related to a particular sequence of
learning. Learning takes place only when a student works at a level of difficulty that is both
challenging and attainable for that student.
Interest refers to those topics or pursuits that evoke curiosity and passion in a learner. Thus, highly
effective teachers attend both to students' developed interests and to their as yet undiscovered
interests.
Learning profile refers to how students learn best, including learning style, intelligence preference,
socioeconomic background, culture and gender. If classrooms can offer and support different modes
of learning, it is likely that more students will learn effectively and efficiently.
Affect has to do with how students feel about themselves, their work, and the classroom as a whole.
Student affect is the gateway to helping each student become more fully engaged and successful in
school.
Fundamental shift in Curriculum
Currently, there is a fundamental shift worldwide in the reform of standards, curriculum and
assessment. The goal of the emerging curriculum is less dependence on rote learning, repetitive
tests and 'one size fits all' instruction, and more engaged learning, discovery through experiences,
differentiated teaching, the learning of life-long skills, and the building of character through
innovative and effective teaching approaches and strategies (Bartram, 2005). The following table
shows the difference between the emerging competency-based curriculum and the traditional
curriculum.
Table 2: Change characteristics for shifting the learning emphasis (competency-based vs. traditional)

- Teacher's role: facilitator of learning vs. controller of a fixed body of knowledge
- Learning outcomes: specific, measurable, outcomes- and standards-based vs. subject content-based
- Pacing: continued until the outcome is achieved, starting from learner readiness vs. fixed time units
following a pre-determined course of content
- Materials: variety of media and resources vs. textbook dominated
- Assessment: criterion-referenced and developmental, based on demonstrated competence vs. delayed
and summative, expressed as grades or scores
- Skills emphasized: communication, inquiry, reasoning and problem solving, responsibility and self-…
- Transfer: new and generalized contexts to aid inference, prediction and generalization vs.
content-based, inhibiting generalization and prediction
- Reporting: descriptive, developmental reports prepared for teachers and parents, with progress and
targets shared with students vs. grades, scores and norms provided to parents and teachers

Adapted from Bartram (2005)
2.4.2. Learning
Contemporary learning stems from constructivist theory, whose core concept is the view that learning
is something that happens inside the heads of learners. No matter how meticulously we plan or what
marvelous strategies we use during teaching, we can't reach inside learners' heads and
put the learning there. There is a gap between learning and teaching that learners have to negotiate in
order to construct new knowledge, skills and attitudes. This view strongly underpins what all teachers
and school leaders know lies at the heart of good classroom practice. In order to have knowledge of
individual learner’s needs and quality of teaching, these actors should create good relationships with
students. It is via these that the gap between learning and teaching may be more successfully bridged.
The Four Pillars of Learning
The four pillars of learning and their details are dealt as follows:
1. Learning to know (learning how to learn)
This type of learning views learning as both a means and an end of human existence. As a means,
students have to learn to understand the world around them, at least as much as is necessary for
them to lead their lives with some dignity, develop their occupational skills and communicate with
other people. As an end, it is the pursuit of knowledge for its own sake and the satisfaction that can
be derived from understanding, knowledge and discovery. This aspect of learning is typically
enjoyed by researchers, but good teaching can help everyone to enjoy it. The broader our knowledge,
the better we can understand the many different aspects of our environment; this fosters greater
intellectual curiosity, sharpens the critical faculties and enables people to develop their own
independent judgments on the world around them.
2. Learning to do
This type of learning is closely associated with the issue of occupational training: how do we adapt
education so that it can equip us to do the types of work needed in the future? In this regard, skill
training has to evolve and become more than just a means of imparting the knowledge needed to do a
more or less routine job.
3. Learning to live together
Students should be taught to understand other people's reactions by looking at things from their point
of view. Where this spirit of empathy is encouraged in schools, it has a positive effect on young
persons' social behavior for the rest of their lives. For example, teaching youngsters to look at the
world through the eyes of other ethnic or religious groups is a way of avoiding some of the
misunderstandings that give rise to hatred and violence among adults.
4. Learning to be
Education should contribute to every person's complete development: mind and body, intelligence,
sensitivity, aesthetic appreciation and spirituality. All people should receive in their childhood and
youth an education that equips them to develop their own independent, critical way of thinking and
judgment, so that they can make up their own minds on the best courses of action in the different
circumstances of their lives (retrieved on 01/01/2013).
2.4.3. The Interaction of Curriculum, Learning and Assessment
Why link assessment with instruction?
Learning is impossible without ongoing assessment. Learning is about attempting to reduce the gap
between what the learner knows and what he/she wants to know, and assessment is the process of
gaining information about that gap. To identify the gap, teachers need to pre-assess the learner to
find out what they already know, to assess during learning using ongoing assessments, and to assess
after the learning. The following figure illustrates how better assessment is a means to better
teaching and learning, and thus to better opportunities for a better life.
Figure 3: Assessment for better teaching and learning
The main components of any learning program are the curriculum, which provides a framework of
knowledge and capabilities (the content); the instructional methods used to deliver the curriculum;
and the assessment techniques with which success in attaining the learning outcomes is evaluated.
Effective curriculum, learning, and assessment are appropriate for students' development, responsive
to individual interests and needs, and sensitive to their cultural and linguistic contexts.
Aligning standards, curriculum, instruction, and assessment horizontally and vertically creates a
learning continuum within which all students can develop and learn at their own pace
(retrieved on 10/06/2013).
Linking assessment directly to curriculum and instruction generates meaningful data needed to inform
instructional practice. Using appropriate classroom assessment to refine instruction and provide
individualized support helps ensure that students make academic progress and develop in all domains.
The following figure depicts the interaction of the three.
Figure 4: The Interaction of Curriculum, Learning and Assessment
Assessment purposes and practices are closely linked to curriculum principles and to students’
learning practices. These three components are inextricably linked and are bound together as shown in
figure 4 above. The role of assessment then is to measure the effectiveness of the curriculum and the
learning methods with respect to the intended outcomes.
For practical reasons, let us borrow an example from the Teachers Handbook on Formative
Continuous Assessment, Grades 1-4 in Ethiopia, and look at how well assessment tasks align with
minimum learning competencies (MLC) and intermediate/higher learning outcomes (USAID, 2012).
The table below shows to what extent the curriculum goals, learning outcomes, and assessments are
aligned.
Table 3: Example of alignment of curriculum goals, student profiles, learning outcomes, and
assessment in Grade 1 English

Goals in the curriculum (profile expected of the learner):
1. They will be able to write in standard …, speak some English and …
2. Have some … and their …

Minimum Learning Competencies:
1. Pronounce the 26 letters of the English alphabet
2. Write the 26 capital and small letters of the English alphabet from a model
3. Identify English names of human body parts
4. Greet each other in English
5. Follow simple oral commands in English

Intermediate and Higher Learning Outcomes:
6. Read all English …
7. Write the 26 capital and small letters from …
8. Say short English phrases to describe people, animals and …
9. Listen to and respond to greetings in English
10. Understand and respond appropriately to short questions in English

Assessment tasks:
1. Giving an assignment to bring any 5 English letters on flash cards
2. Class work: copy A, B, C, D and E five times in your exercise book
3. Point to human body parts on a poster
4. Oral questions like 'What is this?'
5. Pair work on greetings
6. Oral commands like 'Show me'

Source: USAID (2012: 7)
As shown in table 3 above, not all the assessments are aligned with the learning outcomes. The
activity that the teacher asks students to do in an assessment must be the same as that required by the
corresponding learning outcome. If the teacher asks the students to do something different, then the
assessment is not aligned with the learning outcome. Poor alignment of assessment provides
misleading information about students' learning and is poor teaching.
A. MLC 1 requires students to pronounce the letters of the English alphabet. However, the
corresponding assessment task does not require students to pronounce letters of the alphabet,
only to bring 5 letters on a flash card. The assessment is not aligned.
B. MLC 2 requires students to copy both the upper and lower case of all 26 English letters from a
model. Assessment tasks 1 and 2 only require 5 letters to be copied and do not specify both
upper and lower case letters. Even taken together, assessments 1 and 2 do not align with
MLC 2.
C. MLC 3 requires students to name body parts in English. Assessment task 3 asks students to
name body parts on a poster of the human body and so it is aligned.
D. MLC 4 requires students to greet each other in English. Assessment task 5 asks students to
work in pairs to demonstrate greeting in English and so assessment task 5 is aligned with
MLC 4.
E. MLC 5 requires a student to follow simple commands in English. Assessment task 6 mentions
only one command, so we cannot be sure this is a complete assessment of MLC 5.
Moreover, the assessment tasks align very poorly with intermediate and higher competencies 6 to 10.
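An alignment audit like the one in points A to E above can be kept as a simple record so that uncovered competencies are easy to spot. The sketch below is our own illustration, not part of the USAID handbook; the task and MLC labels follow Table 3, and the alignment judgments mirror the analysis above.

```python
# Illustrative sketch: record which assessment tasks were judged aligned
# with each minimum learning competency (MLC), then list the MLCs with
# no aligned task. Judgments follow the A-E analysis in the text.

aligned_tasks = {
    "MLC 1": [],          # flash-card task does not require pronunciation
    "MLC 2": [],          # tasks 1-2 cover only 5 letters, cases unspecified
    "MLC 3": ["task 3"],  # point to body parts on a poster: aligned
    "MLC 4": ["task 5"],  # pair work on greetings: aligned
    "MLC 5": [],          # a single command is not a complete assessment
}

not_covered = [mlc for mlc, tasks in aligned_tasks.items() if not tasks]
print(not_covered)  # ['MLC 1', 'MLC 2', 'MLC 5']
```

Seen this way, three of the five minimum competencies lack an aligned assessment task, which is exactly the gap the teacher would need to close before the assessments can inform instruction.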
2.5. Country Experiences on Classroom Assessment
Several countries promote classroom assessment as a fundamental approach to education reform.
Darling-Hammond and Wentworth (2010), cited in Clarke (2012), reviewed the high-performing
education systems around the world and noted that their student assessment activities:
- illustrate the importance of integrating assessment of, for, and as student learning, rather than
treating it as a separate, disjointed element of the education system;
- provide feedback to students, teachers and schools about what has been learned, and feed
forward information that can shape future learning as well as guide college- and career-related
decision making;
- closely align curriculum expectations, subject and performance criteria and desired learning
outcomes;
- engage teachers in assessment development and scoring as a way to improve their professional
practice and their capacity to support student learning and achievement;
- engage students in authentic assessments to improve their motivation and learning;
- seek to advance student learning in higher-order thinking skills and problem solving by using
a wider range of instructional and assessment strategies;
- privilege quality over quantity of standardized testing (for example, Finland).
These activities imply that classroom assessment helps teachers identify and respond to students'
learning needs, enabling them to adjust their teaching to meet individual students' needs and to better help
all students reach high standards. Similarly, Looney (2011) found that many teachers in these
high-performing countries systematically incorporate aspects of continuous (formative) assessment
into their teaching. They still work toward standards while identifying the factors behind the variation
in students' achievements and adapting their teaching accordingly, ultimately because the goal of
formative assessment is for students to develop their own "learning to learn" skills.
In classrooms, teachers gather information on student understanding and adjust teaching to meet
identified learning needs. In schools, school leaders use information to identify areas of strength and
weakness and to develop strategies for improvement. At the policy level, officials use information
gathered through national or regional assessments, or through monitoring of school performance, to
guide investments in training and support for schools, or to set broad priorities for education. The
following countries were found to be high-performing in the Programme for International Student
Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS)
assessments, except South Africa.
Table 4: Country experience on classroom assessment
Teachers perform CA of their students at all levels of education. On a day-to-day basis, this
assessment is informal and based on student work in and out of the classroom.
Ireland places an emphasis on early diagnosis of serious literacy and numeracy problems at the
beginning of primary schooling. Teachers and learners assess progress and use the information to
shape next steps. In primary school, classroom assessment includes observation, teacher-designed
tasks and tests, conferencing and portfolio assessment. In secondary schools, oral assessments in
languages and hands-on assessments in subjects like geography and science are increasingly common.
Finland supports classroom assessment, which is based on each student’s progress. Diverse
assessments, including verbal feedback, assessment interviews, and portfolio assessments, are based
on objectives defined in the curriculum. Curriculum guidelines outline the principles of student
assessment, e.g., encouragement of student self-assessment skills.
In Norway, classroom assessment is seen as a tool for promoting the student's learning and
development. Students play an active role in the process and also develop skills for self-assessment.
Assessments (unmarked) are an integral part of the daily learning process, and the results of daily
assessments are included in regular conferences between teachers, students and parents. Students do
not receive marks at all during primary school; marks are introduced in lower secondary school as
part of student assessment. During primary school (Years 1 to 7) there are no formal assessments of
student learning.
The following assessment instruments have been used in the Scottish education system: observation,
product evaluation, questioning, alternative-response questions, assertion/reason questions,
assignments, aural/oral tests, case studies, cloze questions, completion questions, expressive
activities, extended-response questions, grid questions, matching questions, multiple-choice
questions, multiple-response questions, oral questions, practical exercises, projects, question papers,
restricted-response questions, role-plays, self-report techniques (log-books, personal interviews and
questionnaires), simulations, short-answer questions and structured questions.
Classroom assessment involves students in activities such as making oral presentations, developing a
portfolio of work, undertaking fieldwork, carrying out an investigation, doing practical laboratory
work or completing a design project. These activities help students acquire important skills, knowledge
and work habits that cannot readily be assessed or promoted through paper-and-pencil testing.
Adapted from Looney (2011)
Moreover, in South Africa, learners’ progress is monitored during learning activities. This informal
daily monitoring of progress can be done through question and answer sessions; short assessment tasks
completed during the lesson by individuals, pairs or groups, or homework exercises. Learners are
actively involved in self-assessment, peer assessment and group assessment. The results of the informal daily
assessment tasks are not formally recorded unless the teacher wishes to do so. In such instances, a
simple checklist is used to record this assessment. However, teachers are supposed to use the learners’
performance in these assessment tasks to provide verbal or written feedback to learners, the school
management team and parents. This is particularly important if barriers to learning or poor levels of
participation are encountered (Department of Education of Republic of South Africa, 2008).
As can be learned from the aforementioned country experiences on CA, schools utilize a variety of
both formal and informal assessment approaches chosen to address the nature of the learning being
assessed and the varied characteristics and experiences of the students.
There is considerable evidence that assessment is a powerful means for enhancing learning so as to
ensure quality in education. Black and Wiliam (1998) synthesized over 250 studies linking assessment
and learning, and found that the intentional use of assessment in the classroom promotes learning
and improves students' achievement. Teachers use classroom assessment to become aware of the
knowledge, skills, and beliefs that their students bring to a learning task. They use this knowledge as a
starting point for new instruction, monitor students’ changing perceptions as instruction proceeds and
evaluate the students’ level of achievement.
The components of classroom assessment are also known in some of the literature as paradigms of
assessment or purposes of assessment. The following figure depicts the categories of traditional and
contemporary classroom assessments as an introductory part of this chapter.
Figure 5: Components of Classroom Assessment
Source: Stiggins (2002)
In most cases, teachers use three intertwined but distinct assessment components: assessment For
learning, assessment Of learning and assessment As learning (Stiggins, 2002). Details of each follow.
3.1. Assessment For Learning (AfL)
The term 'Assessment for Learning' (AfL) was coined in 2002 based on research that had begun in
1998 by Black and Wiliam. AfL is the process of seeking and interpreting evidence for use by
teachers and learners for the purpose of deciding where learners are in their learning, where they need
to go and how best to get there. It acknowledges that individual students learn in idiosyncratic
(special) ways, but it also recognizes that there are predictable patterns and pathways that many
students follow. It requires careful design on the part of teachers so that they use the resulting
information to determine not only what students know, but also to gain insights into how, when, and
whether students apply what they know (Manitoba Education, Citizenship and Youth, 2006).
3.1.1. Purposes of assessment for learning
The major purposes of assessment for learning are to:
plan lessons considering the previous experience of the learners
diagnose learners’ needs
monitor learning progress
improve and guide learning
adjust learning-teaching techniques
intervene for remedial action
provide immediate feedback
report the status of learners' progress
3.1.2. Strategies of Assessment for Learning

The Pivotal Questions and the Seven Strategies
There are seven common strategies that build on one another, although they are not a recipe to be
followed step by step. These strategies are a collection of actions that will strengthen students'
sense of self-efficacy (a belief that their effort will lead to improvement), their motivation to try, and
ultimately their achievement (Stiggins, Arter, Chappuis and Chappuis, 2004). The seven strategies
help students take control of their own learning and are organized around three compelling essential
questions. Moving students forward in their learning depends on the teacher and the students
answering the three pivotal questions, but they cannot be answered if the teacher hasn’t first identified
and shared clear learning outcomes and success criteria for students. The three pivotal questions are:
1. Where is the learner going? (Student: Where am I going?)
2. Where is the learner right now? (Student: Where am I right now?)
3. How will the learner get there? (Student: How will I get there?)
The following are some details of the pivotal questions on the teacher’s side with the seven strategies.
Where is the learner going?
Strategy 1: Provide students with a clear and understandable vision of the learning target.
Strategy 2: Use examples and models of strong and weak work.
Where is the learner right now?
Strategy 3: Offer regular descriptive feedback.
Strategy 4: Teach students to self-assess and set goals.
How will the learner get there?
Strategy 5: Design lessons to focus on one learning target or aspect of quality at a time.
Strategy 6: Teach students focused revision.
Strategy 7: Engage students in self-reflection, and let them keep track of and share their learning.
Table 5: Overview of Seven Strategies of Assessment for Learning

Where is the learner going?
Strategy 1: Provide students with a clear and understandable vision of the learning target.
Rationale: Motivation and achievement both increase when instruction is guided by clearly defined targets.
Strategy 2: Use examples and models of strong and weak work.
Rationale: Carefully chosen examples of the range of quality can create and refine students' understanding of the learning outcomes by helping students answer the questions, "What defines quality work?" and "What are some problems to avoid?"

Where is the learner right now?
Strategy 3: Offer regular descriptive feedback.
Rationale: Effective feedback shows students where they are on their path to attaining the intended learning. It answers for students the questions, "What are my strengths?"; "What do I need to work on?"; and "Where did I go wrong and what can I do about it?"
Strategy 4: Teach students to self-assess and set learning goals.
Rationale: This teaches students to identify their strengths and weaknesses and to set learning goals for further learning. It helps them answer the questions, "What am I good at?"; "What do I need to work on?"; and "What should I do next?"

How will the learner get there?
Strategy 5: Design lessons to focus on one learning target or aspect of quality at a time.
Rationale: When assessment information identifies a need, the teacher can adjust instruction to target that need. In this strategy, the teacher scaffolds learning by narrowing the focus of a lesson to help students master a specific learning outcome or to address specific misconceptions or problems.
Strategy 6: Teach students focused revision.
Rationale: When a concept, skill, or competence proves difficult for students, the teacher can let them practice it in smaller segments, and give them feedback on just the aspects they are practicing. This strategy allows students to revise their initial work with a focus on a manageable number of learning targets or aspects of quality.
Strategy 7: Engage students in self-reflection, and let them keep track of and share their learning.
Rationale: Long-term retention and motivation increase when students track, reflect on, and communicate about their learning. In this strategy, students look back on their journey, reflecting on their learning and sharing their achievement with others.

Adapted from Chappuis (2007)

Effective Questioning Techniques as AfL Strategies
Assessment for learning (day-to-day assessment) strategies of questioning, observing, discussing,
checking on students' understanding and analyzing their responses are not mutually exclusive; neither
is the list necessarily exhaustive. Teachers ask questions to assess children's starting points, in order
to adapt learning and teaching activities appropriately to meet children's needs. The following
questioning techniques are also important in providing effective assessment opportunities.
Questioning Techniques
Challenging (How did you do it? Why did you do that?): How do you …?
Checking learners' understanding (How do you know …?): Is it ever/always true/false
that …? What is wrong with …? What is the same and what is different about …?
('Wrong' answers are also helpful.)
Uncovering thinking (Can you explain this?): How do you explain …?
Offering strategies (Have you thought about …?)
Re-assuring (Are you happy with that?): What does that tell us about …?
Sometimes a 'devil's advocate' question (What makes you sure? How can we be sure
that …?): Why is … true?
'Going beyond' questions (How does this idea connect with …?)
• Asking a range of questions, from literal to higher-order, to develop understanding:
 application, for example ‘What other examples are there?’
 analysis, for example ‘What is the evidence for parts or features of…?’
 synthesis, for example ‘How could we add to, improve, design, solve …?’
 evaluation, for example 'What do you think about …?', 'What are your criteria for …?'
• Using thinking time and talk partners to ensure all students are engaged in answering questions.
Some questions are better than others. For example, a teacher wants to find out if students know the
properties of prime numbers. The teacher asks, "Is 7 a prime number?" A student responds, "Yes, I
think so," or "No, it's not."
This question has not enabled the teacher to make an effective assessment of whether the student
knows the properties of prime numbers. Changing the question to “Why is 7 an example of a prime
number?” does several things.
It helps the students recall their knowledge of properties of prime numbers and the properties
of 7 and compare them.
The answer to the question is "Because prime numbers have exactly two factors and 7 has
exactly two factors." This response requires a higher degree of articulation than "Yes, I think so."
It provides an opportunity to make an assessment without necessarily asking supplementary
questions. The question "Is 7 a prime number?" requires further questions before the teacher can
assess the student's understanding.
The question "Why is 7 an example of a prime number?" is an instance of the general question
"Why is X an example of Y?"
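For teachers comfortable with a little programming, the reasoning behind the better question can be checked directly. The following is only an illustrative sketch (the function names are our own): it decides primality exactly the way the question frames it, by counting factors.

```python
def count_factors(n):
    """Count the positive divisors of n."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def is_prime(n):
    # A prime number has exactly two factors: 1 and itself.
    return n > 1 and count_factors(n) == 2

print(count_factors(7))  # 2, so 7 is prime
print(is_prime(7))       # True
print(is_prime(9))       # False: 9 has three factors (1, 3, 9)
```

The definition mirrors the model answer above: 7 is prime because it has exactly two factors.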
3.1.3. Assessment for Learning Practices
The figure below followed by the details shows the steps to make assessment for learning practices
effective at school and classroom levels.
Figure 6: Eliciting of Understanding
Sharing Learning Intentions (Outcomes)
Identifying Success Criteria
Providing Descriptive Feedback
Developing student self- and peer-assessment skills and setting goals
Adapted from Earl (2004)
Sharing the Learning Intention/Outcomes
Assessment for learning (and as learning) requires that students and teachers share a common
understanding of what is being learned and assessed. Learning outcomes clearly identify what
students are expected to know and be able to do, in a given subject. Teachers develop learning goals
based on the curriculum expectations and share them with students at or near the beginning of
learning. Teachers and students come to a common understanding of the learning outcomes through
discussion and clarification during instruction. The learning intentions/outcomes are what the teacher
hopes students will know, understand or be able to do by the end of a lesson. They are expressed in
terms of knowledge, understanding and skills, and link directly with the relevant curriculum
objectives/ competencies.
The questions to be answered by the teacher when sharing learning intentions are:
What do I want students to know?
What do I want students to understand?
What do I want students to be able to do?
If these questions are stated at the beginning as the target questions that the teacher wants answered,
then they can become the learning intention. Students should have a clear notion of the learning
intention of each lesson (it should be put on the board at the start of class).
By the end of this lesson students should be able to separate sand, salt and water...
By the end of this lesson students should be able to understand the character of ….
By the end of this lesson students should be able to draw a diagram of …

Identifying Success Criteria
The term 'success criteria' is synonymous with 'assessment criteria' but, instead of reminding students
of their (perhaps negative) experiences of being assessed, this term focuses (much more positively) on
students' ability to succeed. Sometimes the success criteria might be just a series of dot points. The
success criteria are the benchmarks of successful attainment of the learning outcomes. Success criteria
describe in specific terms what successful attainment of the learning outcomes looks like. When
planning assessment and instruction, teachers, guided by the achievement chart for the particular
subject or discipline, identify the criteria they will use to assess students’ learning, as well as what
evidence of learning students will provide to demonstrate their knowledge and skills. The success
criteria are used to develop an assessment tool, such as a checklist, a rubric, or an exit card (i.e., a
student’s self-assessment of learning).
Teachers can ensure that students understand the success criteria by using clear language that is
meaningful to the students and by directly involving them in identifying, clarifying, and applying
those criteria in their learning. Examining samples of student work with their teachers helps students
understand what constitutes success and provides a basis for informed co-construction of the success
criteria. The learning intention of a lesson or series of lessons tells students what they should know,
understand and be able to do, and the success criteria help teachers to decide whether their students
have in fact achieved the learning intention. Importantly, the success criteria also answer the same
question from the point of view of the student: "How will I know whether I've achieved the learning intention?"
Success criteria are developed for student use and should be written in language students understand.
Students use them to monitor their own learning in relation to the learning outcomes. Behavioral
objectives are generally written for the teacher or parent. The success criteria are used to make sure the
students know what is in the teacher's mind as the criteria for judging their work. Without them,
students may know what they have to do but not how the teacher is going to judge their
performance. Success criteria can take the form of lists of ingredients or a series of steps. They are
written in student-friendly language for student use. Learning outcomes are generally written in teacher language for
teacher use. Both are important and have very different uses.
Table 6: The Difference between Learning Outcomes and Success Criteria

Learning outcome (G9 Maths): The student will apply the SSS, SAS and AA similarity theorems to prove similarity of triangles.
Success criterion: I can use the SSS, SAS and AA similarity theorems to prove similarity of triangles.

Learning outcome: Students will discover the relationship between the perimeters of similar plane figures and use this relationship to solve related problems.
Success criterion: I can discover the relationship between the perimeters of similar plane figures and use this relationship to solve related problems.

Learning outcome: Students will solve quadratic equations by using any one of the three methods (a unique solution, no solution, infinitely many solutions).
Success criterion: I can solve quadratic equations by using any one of the three methods (a unique solution, no solution, infinitely many solutions).
Success criteria may simply be one statement or a list; the "how will we know" needs to state exactly
what the students and teacher will want to see.
For example, for the language learning intention of "using effective adjectives", three alternative
success criteria might be:
What you're looking for is that you have used at least five effective adjectives in your …;
What you're looking for is that you have used at least four adjectives just before a noun;
What you're looking for is that you have used at least four adjectives which describe the jungle.

Providing Descriptive Feedback
Feedback provides students with a description of their learning. The purpose of providing feedback
is to reduce the gap between a student’s current level of knowledge and skills and the learning
outcomes. Students learn better by receiving precise information about what they are doing well, what
needs improvement, and what specific steps they can take to improve. Ongoing descriptive feedback
linked specifically to the learning outcomes and success criteria is a powerful tool for improving
student learning and fundamental to building a culture of learning within the classroom. The details of
how to give feedback are dealt with in Chapter 4.
Developing Student Self- and Peer-Assessment Skills and Setting Goals
The emphasis on student self-assessment represents a fundamental shift in the teacher-student
relationship, placing the primary responsibility for learning with the student. Once students, with the
ongoing support of the teacher, have learned to recognize, describe, and apply success criteria related
to particular learning outcomes, they can use this information to assess their own and others’ learning.
Teachers help students develop their self-assessment skills by modeling the application of success
criteria and the provision of descriptive feedback, by planning multiple opportunities for peer
assessment and self-assessment, and by providing descriptive feedback to students about the quality
of their feedback to peers.
Group work provides students with opportunities to develop and practise skills in peer and self-assessment and gives teachers opportunities to model and provide instruction related to applying
success criteria, providing descriptive feedback, and developing collaborative learning skills.
Teachers and students can use assessment information obtained in group situations to monitor
progress towards learning goals and to adjust the focus of instruction and learning. As a result of
developing self-assessment skills, students learn to identify specific actions they need to take to
improve, and to plan next steps.
Teachers begin by modeling the setting of individual learning goals for students. They also provide
follow-up support, give specific feedback on learning goals, and help students identify and record
focused actions they can take to achieve their goals and procedures they can use to monitor their own
progress. In order to improve student learning and help students become independent learners,
teachers need to make a committed effort to teach these skills and provide all students in the class
with opportunities to practise them. Teachers need to scaffold this learning for students, using a
model of gradual release of responsibility for learning, as follows:
demonstrate the skills during instruction;
move to guided instruction and support;
have students share in the responsibility for assessing their own work;
gradually provide opportunities for students to assess their own learning independently.
The ultimate goal of the process is to move each student from guided practice to independent
practice, based on the student’s readiness.
3.1.4. Commonly Used AfL Techniques and Tools

Techniques for Assessing Prior Knowledge/Preconception Check
1. Background Knowledge Probe
This technique surfaces misconceptions to discover the class's preconceptions and is useful for starting
new chapters. The teacher can consider the most important misconceptions or areas of troublesome
knowledge in the topic. Generating a questionnaire for students (2-3 open-ended questions or a
series of short-answer questions) is important to determine the most effective starting point for a new
lesson and to elicit levels of prior knowledge.
These are short and simple questionnaires prepared by teachers for use at the beginning of a course,
at the start of a new unit or lesson, or prior to introducing an important new topic.
For fast analysis responses can be sorted into "prepared" and "not prepared" piles.
For a detailed analysis, answers can be classified into the following categories: [-1] =
erroneous background knowledge; [0] = no relevant background knowledge; [+1] = some
relevant background knowledge; [+2] = significant background knowledge.
With this feedback the teacher can determine the most effective starting point for a given lesson and
the most appropriate level at which to begin instruction.
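For teachers who keep their probe results electronically, the detailed analysis described above amounts to a simple tally over the four categories. The sketch below is illustrative only; the scores are invented for the example.

```python
from collections import Counter

# Hypothetical scores for one class, on the scale described above:
# -1 = erroneous, 0 = no relevant, +1 = some relevant, +2 = significant background knowledge
scores = [-1, 0, 0, 1, 1, 1, 2, -1, 0, 1]

tally = Counter(scores)
print(tally[1])              # 4 students with some relevant background knowledge
print(tally[-1] + tally[0])  # 5 students for whom the lesson must start from the basics
```

A class where most scores fall at -1 or 0 suggests starting the new topic from first principles, while a cluster at +1 or +2 lets the teacher begin at a more advanced level.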
2. Focused Listing
This technique enables the teacher to check how students can define or describe the central tenets of a
topic or recall important terms. Write a word/brief phrase about the topic and ask students to write a
list of related words (3 minutes – 10 words). This allows the teacher to re-focus on his/her teaching.
Focused Listing focuses on a single important term, name, or concept from a particular lesson or class session
and directs students to list several ideas that are closely related to that “focus point.”
Student responses can be compared to the content of teacher’s own lists.
Focused listing can be used before, during, or after the relevant lesson. As a result, teachers
can use this technique to gauge the best starting point, make midpoint corrections, and measure
the class’s progress in learning one specific element of the course content.
3. Empty Outlines
This technique creates an outline of the teacher's presentation and asks students to fill it in. Students
fill in an empty or partially completed outline of an in-class presentation or homework assignment
within a limited amount of time. Student responses can be compared to those the teacher expected,
counting the number of students who agreed or disagreed with the expected responses for each item.
The range of responses among students can be reviewed with a focus more on the patterns that emerge
than on how well they match instructor expectations. With this feedback teachers can find out how
well students have "caught" the important points of a lecture, reading, etc.
Example: Provide 2 examples of each category.
1. Subject-Related Knowledge and Skills
2. Learner Attitudes, Values, and Self-Awareness
3. Learner Reactions to Instruction
This allows the teacher to check what he/she taught with what was caught.

Techniques for Assessing Understanding
1. One-Minute Paper
The One-Minute Paper provides a quick and extremely simple way to collect written feedback on student
learning. The teacher stops class two or three minutes early and asks students to respond briefly to
some variation on the following two questions:
"What was the most important thing you learned during this class?" and
"What important question remains unanswered?"
Students write their responses on index cards or half-sheets of paper and hand them in.
The teacher collates the answers and provides feedback at the start of the next class. He/she reviews
responses and notes any useful comments. With this feedback teachers can decide whether any
corrections or changes are needed and, if so, what kinds of instructional adjustments to make.
2. The Muddiest Point
It is a remarkably efficient instrument since it provides a high information return for a very low
investment of time and energy. Students write down one or two points on which they are least clear.
This could be from the previous lesson, the rest of the unit, the preceding activity etc. The teacher and
class can then seek to remedy the muddiness.
The technique consists of asking students to jot down a quick response to one question: "What was the
muddiest point in ... (class meeting, readings, homework assignment, lecture, etc.)?" The focus of the
Muddiest Point assessment might be a lecture, a discussion, a homework assignment, a play, or a film.
The teacher quickly reads through at least half of the responses, looking for common types of muddy
points, then goes back through all the responses and sorts them into piles: several piles containing
groups of related muddy points, and one "catch-all" pile made up of one-of-a-kind responses. The
teacher collates the answers and provides feedback during the next class.
With this feedback the teacher can discover which points are most difficult for students to learn and
this can guide their teaching decisions about which topics to emphasize and how much time to spend
on each.
Techniques for Assessing Skills in Analysis and Critical Thinking
1. Categorizing Grid
Students are given a grid containing two or three important categories along with a scrambled list
of items, which students must then sort into the correct categories. With this feedback the teacher can
determine quickly whether, how, and how well students understand "what goes with what." Students
can also see if they need to revise their categorizing rules.
Example: Categorize the following list into the plant cell, animal cell, or both categories:
Mitochondria, cell wall, nucleus, cell membrane, lysosomes, centrosomes, chloroplast, …
Plant cells
Animal cell
Both plant and animal
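Marking such a grid is a matter of comparing each student's placements against an answer key. The sketch below is illustrative only: the answer key and the `score_grid` helper are our own invention, using the standard textbook classifications for the organelle example above.

```python
# Illustrative answer key for the organelle example above
key = {
    "cell wall": "plant only",
    "chloroplast": "plant only",
    "lysosomes": "animal only",   # typically prominent in animal cells
    "centrosomes": "animal only",
    "mitochondria": "both",
    "nucleus": "both",
    "cell membrane": "both",
}

def score_grid(student_answers):
    """Return the number of items the student placed in the correct category."""
    return sum(1 for item, category in student_answers.items()
               if key.get(item) == category)

# One student's (partly wrong) placements
answers = {"cell wall": "plant only", "nucleus": "both", "lysosomes": "plant only"}
print(score_grid(answers))  # 2 of 3 correct
```

The same comparison can of course be done by eye on paper; the point is that the grid reduces marking to a simple item-by-item match against the key.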
2. Pro and Con Grid
Students are given a grid and asked to list the pros and cons of an issue of mutual
concern. Teachers do a frequency count for the
pros and cons students have listed; which points are most often mentioned. The teacher can compare
the students’ grids to see if they have excluded points or included extraneous points. This feedback
provides the teacher a quick overview of a class’s analysis of the pros and cons, costs and benefits,
and advantages and disadvantages of an issue of mutual concern. The teacher can thus see the depth
and breadth of the students’ analyses and their capacity for objectivity and discuss the results with the
participants at the next class session.
Example for Pro and Con Grid
Make a list of the pros and cons for using classroom assessment techniques instead of formal tests to
get feedback on student learning. Try to provide at least 3 of each.
Techniques for Assessing Skills in Synthesis and Creative Thinking
1. One sentence summary
One sentence summary is a simple technique in which the learner tries to summarize a given topic by
answering the questions "Who does/ did what to whom, when, where, how and why?" in a simple
informative sentence. Its main purpose is to require students to select only the defining features of an
idea. This allows the teacher to evaluate the quality of each summary quickly and holistically and note
whether students have identified the essential concepts of the class topic and their interrelationships.
2. Directed Paraphrasing
Students are asked to write a layman’s “translation” of something they have just learned–geared to a
specified individual or audience– to assess their ability to comprehend and transfer concepts. The
teacher separates the responses into four piles, which might be labeled “confused,” “minimal,”
“adequate,” and “excellent.” Then he/she compares within and across categories. This feedback allows
the teachers to evaluate the accuracy of the paraphrase, its suitability for the intended audience, and its
effectiveness in fulfilling the assigned purpose.

Techniques for Assessing Skills in Application and Performance
1. Application Cards
After learning about an important theory, principle, or procedure, students are asked to write down at
least one real-world application for what they have just learned. The teacher quickly reads once
through the applications and categorizes them according to their quality. He /she picks out a broad
range of examples (including both excellent and marginal/unacceptable examples) and presents them to
the class. This feedback efficiently shows teachers how well students understand the possible
applications of what they have learned.
2. Student-Generated Test Questions
This technique allows students to write test questions and model answers for specified topics, in a
format consistent with exams. This will give students the opportunity to evaluate the course topics,
reflect on what they understand, and what good test items are.
The teacher tallies the types of questions students propose and looks at the range of topics the questions
span. This feedback allows the teacher to assess some aspects of student learning. In these questions,
teachers see what their students consider the most important or memorable content, what they
understand as fair and useful test questions, and how well they can answer the questions they have
posed. This also alerts the teacher when students have inaccurate expectations about upcoming tests
(Angelo & Patricia, 1993).
3.1.5. Commonly Used AfL Tools
The classroom teacher can match the assessment tools (instruments) with the learning targets
(knowledge, skills, and attitudes). The type of assessment instrument depends on the kind of learning
to be measured. The classroom teacher can select the right tool that is appropriate for the pertinent
learning target.
1. Homework: refers to tasks assigned to students by their teachers to be completed outside of class.
Common homework assignments may include a quantity or period of reading to be
performed, writing, problems to be solved, a school project to be built (display), or other skills
to be practiced.
2. Classwork: tasks that are given during the learning-teaching process.
3. Assignments: are tasks or activities that are undertaken at home or outside the classroom.
4. Group work: a form of cooperative learning. It aims to cater for individual differences, develop
students' knowledge, generic skills (e.g. communication skills, collaborative skills, critical
thinking skills) and attitudes.
5. Quiz: short and informal questions usually administered in class hours.
6. Oral presentation: a performance which requires a learner to use his or her oral skills to verbalize
their knowledge.
7. Debate: a performance which puts one learner, or team of learners, against another learner, or
team of learners, to logically argue issues.
8. Oral questioning: a process-focused technique which requires a learner to respond to questions.
9. Observation: a process-focused and usually informal technique where the teacher gathers
information by watching learners interacting, conversing, working, playing, etc. A teacher can
use observations to collect data on behaviors that are difficult to assess by other methods
(attitude toward problem solving, ability to work effectively in a group, persistence,
concentration, and completion of tasks). The details of collecting information with observation
are discussed in the next part. The following figure can serve as an example in this context.
10. Dance/movement: a performance which requires a learner to move rhythmically to music,
using prescribed or improvised steps and gestures.
11. Gymnastic/Athletic competition: a performance which requires a learner to take part in
competitive sports.
Checklists, rating scales, and scoring rubrics can be used.
12. Dramatic reading: a performance which requires a learner to combine verbalizations, oral and
elocution (voice production) skills in reading a theatrical passage.
13. Role Play (Enactment): a performance which requires a learner to act out. For example,
learners may dramatize their understanding of fictional characters or historical persons by
acting a role showing ideological positions and personal characteristics of these persons.
14. Interview: a process where a learner is expected to respond to questions concerning his or her work or learning.
15. Musical recital: a performance which requires a learner to perform music in front of audience.
16. Flow chart: a constructed response technique that requires the learner to provide a visual
schematic representation of a sequence of operations.
17. Graph/table: a technique that requires the learner to provide a visual representation of numerical data.
18. Illustration: a technique where a learner uses a visual representation to clarify or explain things
(objects, people, events, or relations).
19. Story/play: a technique which requires a learner to write a theatrical work.
20. Poem: a product which requires a learner to write a composition in verse rather than in prose.
21. Portfolio: Student portfolios are a collection of evidence, prepared by the student and evaluated by
the staff member, to demonstrate mastery, comprehension, application, and synthesis of a given set
of concepts. The details are dealt with in the next section of this manual.
22. Model: a product which requires a learner to prepare a small object, usually built to scale, that
represents another, often larger object.
23. Art exhibit: a product which requires displaying the learner’s art for public view.
24. Science project: a product which requires a learner to plan, carry out, and present a science
research undertaking.
25. Matrix: a constructed response technique that is a type of matching exercise. Elements from
several lists of responses (e.g. scientists, famous people, and important events) are matched
with elements from a common list of premises. The learner is to select one or more elements
from each response list and match the elements with one of the numbered premises.
26. Performance assessment tasks: Performance assessment tasks are composed of three distinct parts:
a performance task, a format in which the student responds, and a predetermined scoring system.
Tasks are assignments designed to assess a student’s ability to manipulate equipment for a given
purpose. Students can either complete the task in front of a panel of judges or use a written
response sheet. The student is then scored by comparing the performance against a set of written
criteria (scoring guides), which are the heart of performance assessment.
27. Video/audiotape (performance in context): a product which requires a learner to produce a
video and/or audio filmed, taped, or televised presentation focusing on real-life applications of
mastered skills.
28. Conference: a process which requires a learner to meet with the teacher and/or other learners for
the purpose of exchanging views.
29. Process description: a technique which requires a learner to talk through a course of action. For
example, a learner may describe the process he or she uses in subtracting two numbers. It is
usually used in the lower primary level as an informal less structured assessment to provide
feedback on achievement to the teacher and the learner.
30. Think Aloud: An approach to investigate mental processes involved in a task or other activity in
which a subject is asked to describe audibly the thought processes while the student is
performing a task or engaging in the activity of interest. It is a process which requires a learner
to discuss what he or she is thinking about in trying to solve a problem. It is usually used as an
informal less structured assessment to provide feedback on achievement to the teacher and the learner.
31. Learning log: a process that is based on a learner’s written record of what he/she has learned in a
lesson, topic or theme. It is usually used as an informal less structured assessment to provide
feedback on achievement to the teacher and the learner.
32. Cloze test (cloze deletion test) is an exercise, test, or assessment consisting of a portion of text
with certain words removed (cloze text), where the participant is asked to replace the missing
words. Cloze tests require the ability to understand context and vocabulary in order to identify
the correct words or type of words that belong in the deleted passages of a text. This exercise is
commonly administered for the assessment of native and second language instruction.
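As a sketch of how a cloze text can be generated mechanically (the every-n-th-word deletion rule and the sample sentence are illustrative assumptions; in practice teachers usually choose which words to blank out by judgment):

```python
def make_cloze(text, every_n=5):
    """Blank out every n-th word of a passage and return the gapped
    text together with the removed words (the answer key)."""
    words = text.split()
    gapped, answers = [], []
    for i, word in enumerate(words):
        # Keep the opening words intact so some context survives.
        if i >= every_n and i % every_n == 0:
            answers.append(word)
            gapped.append("____")
        else:
            gapped.append(word)
    return " ".join(gapped), answers

gapped, key = make_cloze("The quick brown fox jumps over the lazy dog", every_n=3)
# gapped -> "The quick brown ____ jumps over ____ lazy dog"
# key    -> ["fox", "the"]
```

The learner fills the blanks; the returned answer key lets the teacher (or a peer) score the responses.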
33. Jigsaw group projects: In jigsaw projects, each member of a group is asked to complete some
part of an assignment; when every member has completed his or her part, the pieces can
be joined together to form a finished project.
34. Book review: students will be given different books appropriate for their level and then required to
tell what they read from the book, discuss what they learn from the book, etc.
35. Definitions and Applications – In groups, students provide definitions, associations, and
applications of concepts discussed in lecture.
36. Crossword Puzzle – Create a crossword puzzle as a handout for students to review terms,
definitions, or concepts before a test.
37. Jury Trial – Divide the class into various roles (including witnesses, jury, judge, lawyers,
defendant, prosecution, and audience) to deliberate on a controversial subject.
38. Concept map: It is a graphical representation of the relationship among terms (phrases or
sentences). In other words, it is a diagram of nodes containing concept labels that are linked
together with labeled directional lines. The concept nodes are arranged in hierarchical levels that
move from general to specific concepts. It requires students to explore links between two or more
related concepts. The following diagram illustrates the concept map.
Figure 7: Example of a concept map
If a science teacher wants to investigate how well students understand the correct connections among
concepts in a subject, to document the nature and frequency of students’ misconceptions, or to capture
the development of students’ ideas over time, he/she can use a concept map as depicted above. A
classroom teacher can prepare a summary of assessment tools when he/she plans and reports the
lessons as shown in the following table.
Table 7: Assessment for Learning tools with Bloom’s taxonomy (see the details in Chapter 5)
[Table body: assessment tools cross-referenced with Bloom's taxonomy levels, including Knowledge (memorization and recall), Comprehension (understanding), and Analysis (taking apart).]
Note: All the aforementioned varieties of assessment techniques and tools can be used as a menu for
the classroom teacher to choose from and implement to gather information about his/her students'
learning progress. There is no single recipe or blueprint that all teachers can successfully adopt and
follow. What matters most is addressing students' diverse learning needs using various techniques and
tools of assessment. The purpose of assessment for learning, as mentioned above, is to improve the
teaching-learning process, not to grade students (Stiggins, Arter, Chappuis, and Chappuis, 2004).
3.2. Assessment As Learning (AaL)
Assessment as learning (AaL) is based on the assumption that students are capable of becoming
adaptable, flexible, and independent in their learning and decision-making. Students should take an
increased responsibility to generate quality information about their learning and that of others. One
way of increasing the efficiency of assessment is to allow students to play a role in assessing themselves
or each other. Assessment as learning generates opportunities for self assessment and for peer
assessment. Students have to construct their own learning from what teachers give them; if they have to
construct their own learning, it makes sense to help them do it better. When students engage as active
and critical assessors, they make sense of information, relate it to prior knowledge, and use it for new
learning. When teachers focus on assessment as learning, they use classroom assessment as the vehicle
for helping students develop, practice, and become comfortable with reflection and with critical
analysis of their own learning. Over time, students move forward in their learning when they can use
personal knowledge to construct meaning, have skills of self-monitoring to realize that they don’t
understand something, and have ways of deciding what to do next (Earl, 2004).
Assessment for learning and assessment as learning activities should be deeply embedded in teaching
and learning and be the source of iterative feedback, allowing students to adjust, rethink and re-learn.
3.2.1. Purposes of AaL
The main purposes of Assessment as Learning are to:
promote active learning by placing the student at the center of the assessment process;
determine what to do next in the learning process (e.g. strategy, focus);
provide descriptive feedback through peer and self assessment; and
help students become reflective, self-monitoring learners.
AaL as a Metacognitive Process
Assessment as learning focuses on students and emphasizes assessment as a process of metacognition
(knowledge of one’s own thought processes) for students. Assessment as learning emerges from the
idea that learning is not just a matter of transferring ideas from someone who is knowledgeable to
someone who is not, but is an active process of cognitive restructuring that occurs when individuals
interact with new ideas. Within this view of learning, students are the critical connectors between
assessment and learning. This is the regulatory process in metacognition; that is, students become
adept at personally monitoring what they are learning, and use what they discover from the monitoring
to make adjustments, adaptations, and even major changes in their thinking.
Dimensions of Metacognition
Knowledge of Cognition
• knowledge about ourselves as learners and what influences our performance
• knowledge about learning strategies
• knowledge about when and why to use a strategy
Regulation of Cognition
planning: setting goals and activating relevant background knowledge
regulation: monitoring and self-testing
evaluation: appraising the products and regulatory processes of learning
Monitoring Metacognition
What is the purpose of learning these concepts and skills?
What do I know about this topic?
What strategies do I know that will help me learn this?
Am I understanding these concepts?
What are the criteria for improving my work?
Have I accomplished the goals I set for myself?
3.2.2. Planning AaL
A classroom teacher may ask him/herself the following questions during planning of AaL:
Why am I assessing?
What am I assessing?
What assessment method should I use?
How can I ensure quality in this assessment process?
Why am I assessing?
The teacher wants to help his/her students develop increased awareness of their approaches to
problem-solving and their level of persistence so that they can advance their learning in a variety of contexts.
What am I assessing?
He/she assesses his/her students’ abilities to monitor their own thinking processes and their strategies
for persisting when solving complex problems.
What assessment method should I use?
Students need frequent opportunities to think about and monitor their level of persistence when faced
with difficult problems. They also need the tools to articulate their efforts. The method will need to
elicit evidence of their learning and metacognitive processes. The teacher will need to observe the
students working and sharing their reflections, and converse with them about their learning.
How can I ensure quality in this assessment process?
The teacher needs to be sure that the students recognize what persistence looks like when solving a
complex problem and that they are making reasonable and consistent judgments about their own
persistence. He/she needs to ensure that his/her students keep a relevant record of the self-assessment
of their persistence when solving complex problems. This record needs to be kept over time to show their progress.
Student’s guiding criteria for Persistence in Problem-Solving
• I reread the problem carefully and several times in order to fully understand it.
• I break the problem into parts to find out what I know, and what information I need to find.
• I check notes, books, and other resources to find ideas that might be useful in solving the problem.
• I ask other people focused questions to try to find helpful ideas (but I do not ask for the solution).
• I draw diagrams or use objects as models to think about the problem in many ways.
The students used these criteria as a guide during the problem-solving and reflecting processes. To
follow up with each student, the teacher should have a brief conversation based on the following questions:
How did you know you were persisting?
What was your thinking as you worked through the problem?
What decisions did you make along the way?
Can you tell me more about the decisions?
How does your thinking and decision-making fit with your goal for persistence?
How can I (and the students) use the information from this assessment?
By understanding and valuing the students’ thinking, the teacher can scaffold their growth and provide
direction for further developing the habits of mind that will promote persistence in any learning
situation. Based on what they learned from their self-assessments, students reviewed what persistence
in solving problems looks like. Together they revised and refined their criteria.
3.2.3. Techniques of Assessment as Learning
The strategies, techniques and tools that are discussed under Assessment for Learning can be applied
for Assessment as Learning in one way or another. However, the unique and most commonly used
techniques are dealt with in this part. These techniques include self assessment, peer assessment, and
group assessment.

Self Assessment
Self assessment has been defined as the involvement of students in identifying standards and/or
criteria to apply to their work and making judgments about the extent to which they have met these
criteria and standards. Assessment decisions can be made by students on their own essays, reports,
presentations, projects, and so on, but it is believed to be more valuable when students assess the work
that is personal in nature, like a learning log, portfolio, etc.
With self assessment, students check their work, revisit assignment drafts and texts, and research and
reflect upon their past practices. Care is needed to teach the student to make judgments on what was
actually achieved rather than what was ‘meant’. But once mastered, in addition to judging one’s own
work, the concept of self-assessment develops skills in self awareness and critical reflection.

Peer Assessment
In the context of learning as assessment, peer assessment is used to estimate worth of other students’
work, and to give and receive feedback. This approach to assessment requires careful planning,
agreement of criteria and use of common tools for analyzing marks. Furthermore, the teacher may
need to encourage the students to take this practice seriously and to develop the necessary skills.
The benefits of peer assessment are many and some of them are:
Peer assessment is becoming widely used as a means of giving feedback to students, arguably,
more feedback than a teacher can normally provide.
Peer assessment benefits both those giving the feedback as well as those receiving it.
Students also learn diplomacy, how to receive and act on constructive criticism, as well as the
more obvious skills of making explicit and criterion-referencing judgments. In studies carried
out, students have reported real benefits in retention of knowledge, enhanced creativity, greater
resourcefulness and increased motivation.
Peer assessment can deepen the student learning experience as students can learn a great deal
about their own work from assessing other students’ attempts at a similar task. They will also
learn about the assessment culture of the school, become autonomous learners, and develop
skills of lifelong learning.

Group Assessment
Group assessment occurs when individuals work collaboratively to produce a piece of work (Price,
Carroll, O'Donovan and Rust, 2011). Group assessments become more than a way of giving grades;
they also act as learning tools before, during, and after test administration, tapping an important aspect
of students' learning profiles, namely, the ability to present factual knowledge within a given time
limit (Smith, 2009).
1. Group assessment as a learning tool before test administration
Having students help write an assessment instrument is useful for reviewing and for easing test
anxiety. After completing a unit or a lesson a teacher can apply different techniques including making
students create the test itself. The following steps are suggested:
Step 1 : Students look through their notes and other material from the unit and write down the main
points they have learned. This is the first review, done by the individual student at home.
Step 2 : The teacher invites the whole class to brainstorm a list of learning points, all of which are
written on the board. The list serves as a review for all students, and it reveals what students
perceive as the main points of the unit. The student-generated list may not always match the
teacher's intended teaching points, so this step can help the teacher refine his or her teaching to
better emphasize the points he or she wants students to learn.
Step 3: The teacher should discuss with the class how the learning points are covered, whether
through discussions, group work, readings, digital learning, field trips, and so on. This step
reminds the students and the teacher that learning and teaching activities come in different forms.
Step 4: The teacher briefly informs the students about the basics of test design. When s/he explains
how open and closed test items tap into different types of knowledge and stresses the
importance of clarity of instructions and of the questions themselves, students are usually keen
on peeping into the "black box" of testing, and learners suffering from test anxiety may feel
more at ease when they better understand test design.
Step 5: The class might be divided into groups of five to eight students. Each group designs
approximately six learning tasks or test items, keeping in mind the tips they are given in the
previous step. Ideally, students will create a mix of multiple choice and discussion questions;
however, most students' questions reflect the type of questions they have encountered on other
tests in the class.
To write a good test item, the designer has to understand the topic well, so the student groups spend
time in intense discussion and review. If any student does not understand the topic, other group
members usually explain it. The teacher monitors or facilitates the groups, pushing them to go beyond
the factual level to design tasks that require the higher levels of understanding that students need to demonstrate.
Step 6: Each group presents its final product to the class. Group members answer classmates'
questions about the test items. After the presentations, all test items are familiar to all students,
and most misunderstandings have been clarified by the students, not by the teacher. If the
students are unable to clarify some misunderstandings, the teacher steps in and re-teaches the
material in a different way. These misunderstandings are usually evidence of problems in the
learning/teaching process; noticing them gives the teacher useful feedback and ensures that students
will not be assessed on material that they haven't had adequate opportunity to learn.
Step 7: The teacher collects the student-designed tests and creates a final version, using only items
written by students and making sure questions from each group appear on the test. The goal is
to create a test with a good mix of fact-based and analytical questions, so that students can
demonstrate their strengths and push themselves to grow in their areas of weakness. If it seems
a bit frightening to take the full step the first time the teacher involves his/her students in
designing tests, he/she can always have them design 50 percent of the test, while the teacher
can design the other half.
Step 8: The students are assessed by the instruments that they developed.
Student-created assessment instruments give students repeated reviews of the material, both
individually and in groups, and allow for fruitful peer teaching and learning. Students take the activity
seriously because they know that they will be assessed by this instrument. The main challenge to this
approach is that it is time-consuming.
2. Group Assessment as a learning tool during test administration
Many testing situations may create tension among students. The setting is formal, and they are anxious
to get a good grade. When students are taking a test, they are not open to learning but are focused on
presenting what they have already learned. However, Smith (2009) found that less formal test
situations can be learning opportunities. One informal test-administration strategy is group assessment
during the test administration as follows:
Step 1: The teacher announces the test as usual, often a week or so ahead of time. Students are not told
at this point that they will be taking the test in groups.
Step 2 : On test day, the students can randomly be placed in groups of three.
Step 3 : Each student must turn in an individual copy of the test but can freely discuss any test tasks
with others in their group. Students who know the material well are generally eager to explain
why they think the answer is correct, and this peer teaching enhances the learning of both the
explainer and the person receiving the explanation.
Step 4: The teacher marks the papers. The individual test papers from group members are usually
similar, but that's acceptable, as long as each student has completed his or her own paper. The
group scores are presented to the whole class, and groups aspire to have the highest score. For
a group to "win," all group members need to do their best.
3. Group Assessment as a learning tool after test administration
Teachers spend much time scoring and writing comments on test papers, yet when they return the
tests, students often look only at the score and pay no attention to the careful comments. When
students have just taken a test, some of the tension is behind them, yet the material is still fresh in their
minds, which offers opportunities for learning. But when students focus only on the grade, they miss
those opportunities. When learners score their own tests, the problem largely disappears. The
following steps show how group assessment can be used after test administration:
Step 1: The teacher scores the test and returns it to the students as usual, but with nothing written on
the actual test paper. The learner's strengths and weaknesses on the test, the score for each task,
and the overall score are recorded and saved elsewhere. Students are informed of this.
Step 2 : The teacher divides the class into groups and asks them to retake the test together. Students
who did well on the test chair the group discussions. This motivates students who scored well
to share their knowledge with peers. The other students, who might not have done so well, ask
questions about what went wrong on their test items.
Step 3: Each group develops an answer key and presents it to the class. For open questions, the group
writes a "best sample" response. If necessary, the teacher provides further explanations and
makes corrections. Together, the whole class comes up with an answer key, which is put on the board.
Step 4: Each student corrects his or her own test, using a pen of a colour different from the one used to
take the test. The point value for each test item is written on the test paper, the student-developed
key is displayed on the board, and examples of good open answers are provided.
The student has all the information needed to act as an informed assessor.
Step 5: Each student writes the score he or she believes is appropriate on the test paper and returns it
to the teacher. Because students know that the teacher has already scored the tests, they are
unlikely to award themselves a score they do not deserve.
Step 6: After seeing the marks the students give themselves, the teacher calculates the final score for
each learner by averaging the student and teacher scores. If there is a discrepancy of more than
10 percent, the teacher should meet with the student to clarify any disagreement. In practice,
the teacher’s score is usually the higher of the two scores, and discrepancies of more than 10
percent are rare.
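The arithmetic in Steps 5 and 6 can be sketched as follows. This is a minimal illustration only; it assumes the manual's "10 percent" means a gap of more than 10 percentage points between the two scores, and the function name is hypothetical:

```python
def final_score(teacher_score, student_score, threshold=10):
    """Average the teacher's and the student's scores (both in percent).

    Returns the final score and a flag telling the teacher to meet the
    student when the two scores differ by more than `threshold` points.
    """
    needs_meeting = abs(teacher_score - student_score) > threshold
    return (teacher_score + student_score) / 2, needs_meeting

print(final_score(82, 78))  # (80.0, False) -- close agreement, no meeting needed
print(final_score(90, 70))  # (80.0, True)  -- large discrepancy, meet the student
```

The averaging rewards honest self-scoring, while the threshold check operationalizes the rule that large disagreements trigger a clarifying conversation rather than a silent override.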
In the beginning, students may not feel comfortable correcting peers' work, and they may not like peers
seeing their personal strengths and weaknesses. They may be happy to ask one another questions and
to benefit from discussing the test with peers, yet still resist being scored by a friend.
3.3. Assessment Of Learning (AoL)
Assessment of learning (AoL), also known as summative assessment, is used to confirm what students
know and can do, to demonstrate whether they have achieved the learning outcomes, and occasionally,
to show how they are placed in relation to others (Stiggins, 2002; Black and Wiliam, 1999).
Assessment of learning occurs periodically to ascertain at a particular point in time (at the end of a
unit, course, term, semester, or year) what students know and can do. It also provides evidence for the
purpose of making judgments about student competence or program effectiveness.
Summative assessment at the classroom level is mainly used as part of the grading process and as a
means of evaluating the effectiveness of the learning process. Some common examples of summative assessment
include school-based (end of unit tests, mid semester and final examinations) and external
examinations (primary school leaving and general secondary education certificate examinations and
university entrance examination in the Ethiopian context).
Teachers are expected to use summative assessment to provide accurate and sound evidence about
their students’ proficiency, so that the recipients of the information can use the information to make
reasonable and defensible decisions.
3.3.1. Purpose of assessment of learning
The main purposes of assessment of learning are:
assigning grades: providing marks/scores in the form of numbers or letters so as to
communicate the result to the concerned bodies (students, parents, organizations, etc.);
certification of students' achievement: giving written witness of students'
successful completion of a certain level of an educational program;
selection of students: serving selection decisions, where it is used for acceptance or
rejection of individuals into a given program based on their competence;
placement of students: assigning students into different fields of study;
guidance and counseling: guidance and counseling decisions involve using assessment
data to help recommend programs of study that are likely to be appropriate for the student;
administrative policy decisions: making decisions such as holding school
administrators accountable, sharing and scaling up best practices to other schools,
understanding why some schools perform well or poorly, rewarding best-performing
schools, and determining career-ladder or merit-pay decisions;
program assessment decisions: administrative decisions are also involved in areas
such as curriculum planning and teacher training.
3.3.2. Techniques of Assessment Of Learning
There are many instruments that could be used for assessment of learning, but the most frequently
used ones are the following:
Take-home exams
Assignments ( project work)
Book review (more appropriate from grades 7 and above)
Performance ( musical, dramatic, sport activities)
Report writing
Portfolio (it may be more applicable for formative or assessment for learning)
Lab report
Senior essay, thesis, and exit examinations (law exit exam for university students)
Paper and pencil tests
Oral examination
End of term/semester or year written examination
The above can be categorized into three main techniques of assessment of learning: paper and
pencil tests, performance assessment tasks, and personal communications. The latter two are
discussed under the next sub-topic on integrated assessment, and the former, paper and pencil tests,
is dealt with in detail in Chapter 5.
While selecting techniques of assessment of learning, the intended learning outcomes to be measured,
the context of the subject and its relationships with other subjects, the level of study, the characteristics
of the students, the availability of resources, and the teaching methods of the subject should be taken
into consideration.
3.4. Assessment Tools for Integrated Assessment
Integrated design of classroom assessment tools is highly commendable for assessing each outcome or
performance criterion. A more integrated approach can take less time, avoid over-assessment, improve
motivation, make the assessment process more meaningful, and facilitate verification. Moreover, it
gives assurance of overall competence, improves validity, and benefits learning. Assessment is more
comprehensive and effective if it is applied in an integrated way as depicted in the diagram below.
Figure 8: Tools and Strategies of Integrated Assessment
To best support students' learning, teachers are expected to engage continuously in ongoing
assessment of the learning and teaching process in their classrooms by balancing assessment
for/of/as learning.
3.4.1. Performance Assessment
Performance assessment refers to the application and demonstration of knowledge and skill to a
particular context. Performance assessments can take on different forms, which include written and
oral demonstrations and activities that can be completed by either a group or an individual (Brualdi,
1998; Wiggins, 1993). Through observation, the teacher can determine what the student knows, what
the student does not know and what misconceptions the student holds with respect to the purpose of
the assessment.
Categories of Performance Assessment
1. Oral Communication: Students must demonstrate the ability to generate effective
communication skill using an oral modality. Examples include debates, speeches, dialogues,
reading aloud, reciting, and pronunciation.
2. Written Communication: Students must demonstrate the ability to generate effective
communication skill using a written modality. Examples include papers, drawings,
poems, research reports, directions, note taking, and check writing.
3. Psychomotor: Students must demonstrate the ability to successfully complete a physical
activity. Examples include bouncing a ball, operating a lathe, completing push-ups,
drawing geometric shapes, sewing, dissecting, painting, and using a computer.
4. Affective: Students must demonstrate the ability to value and/or express appropriate
attitudes and feelings. Affective assessment is not usually a part of the grading process,
but is used more in a formative, diagnostic manner. Examples include feelings toward a
specific subject, prejudice, appropriate emotional responses, anger management, and
attitudes (Gronlund, 1998).
The following are some practical examples of performance assessment tools.
3.4.1.1. Observation and its Tools
Performance-oriented behaviors can be assessed using observational techniques, which are a highly
practical way to gather information without interrupting classroom activities. Observation can
be direct/indirect, natural/artificial, scheduled/unscheduled, and participant/non-participant. The major
observation techniques include anecdotal records, rating scales, and checklists (Gronlund, 1998). The
details are presented as follows:
Figure 9: Tools for Performance Assessment (anecdotal records, numerical and graphic rating scales, checklists)
Anecdotal Records
Anecdotal records are factual descriptions of meaningful incidents and events that the teacher has
observed in the lives of his/her students. They are written records of occurrences or incidents in which
students are involved spontaneously. These written descriptions of observations (factual records of
individual behavior) enable the teacher to record significant evidence of students' learning
competencies at unusual times and places during the course of the school day. Anecdotal records
are a suitable method of collecting and recording such evidence, not only for daily activities but also
for performances over a long period of time, such as a semester or a year. Anecdotes for each student
(or a sample of students) should be recorded on a separate card or sheet so that the records are easy to
file, sort, handle, and arrange.
Table 8: Sample of Anecdotal Records
Anecdotal Record
Name of student _____________ Grade_____ Subject __________ Date ________
Place ___________ Time/period ______________
Observer Comment:__________________________________________________
Teacher/Observer’s Name __________________ Signature ________________
Adapted from ICDR (2004)
Rating Scale
A rating scale consists of a set of characteristics or qualities to be judged, accompanied by some type
of scale for indicating the degree to which each attribute is present (e.g., excellent, very good, good,
fair, poor; or deficient, adequate, superior). The observer uses the scale to indicate which of the
several descriptions best characterizes the student's performance. According to ICDR (2004), rating
scales may take different specific forms, but most of them are numerical, graphic, or descriptive
graphic rating scales. Rubrics are also a kind of rating scale (as opposed to checklists) used with
performance assessments. They are formally defined as scoring guides, consisting of specific,
pre-established performance criteria, used in evaluating student work on performance assessments.
1. Numerical and graphical rating scale
A. Numerical Rating Scale
Direction: In the following numerical rating scale, indicate the degree to which the student contributes
to class discussion by encircling the appropriate number. For example, to indicate the degree of a
student's participation in a discussion and to answer the question, "To what extent does the student
participate in discussion?", the teacher can use numbers that represent the following values:
5 = very good, 4 = good, 3 = average, 2 = poor, and 1 = very poor.
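Where marks are kept electronically, the arithmetic of such a numerical scale can be sketched in a few lines of Python. This is an illustration only; the sample ratings and the rounding rule are assumptions, not part of the manual's procedure.

```python
# Numerical rating scale from the example above: 5 = very good ... 1 = very poor.
SCALE = {5: "very good", 4: "good", 3: "average", 2: "poor", 1: "very poor"}

def summarize(ratings):
    """Return the average of several ratings and the verbal label of the nearest scale point."""
    average = sum(ratings) / len(ratings)
    return average, SCALE[round(average)]

# Hypothetical ratings of one student's participation over four discussions.
avg, label = summarize([4, 5, 3, 4])
print(avg, label)  # 4.0 good
```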
B. Graphic Rating Scale
The same question “To what extent does the student participate in discussion?” can be indicated as
shown in the following graphic rating scale.
To what extent does the student participate in discussion?
C. Descriptive Graphic Rating Scale
The same question “To what extent does the student participate in discussion?” can be indicated as
shown in the following by descriptive graphic rating scale.
To what extent does the student participate in discussion?
Never participates; quiet, passive | Participates as much as other group members | Participates more than any other group members
2. Rubrics
Rubrics are typically the specific form of scoring instrument used when evaluating student
performances or products resulting from a performance task. They are ways of describing evaluation
criteria based on the expected outcomes and performances of students. Rubrics are scoring guides that
evaluate a student's performance against a full range of criteria, rather than assigning a single
numerical score, for written assignments, tests, or oral presentations. They are a working guide for
students and teachers, usually handed out before the assignment begins in order to get students to
think about the criteria on which their work will be judged. Each rubric consists of a set of scoring
criteria and the point values associated with those criteria. In most rubrics, the criteria are grouped
into categories so that the teacher and the student can discriminate among the categories by level of
performance. In classroom use, rubrics provide an objective external standard against which student
performance may be compared.
Rubrics can be used as scoring criteria for the following:
Process and product performance tasks
Free-response questions
Oral presentations
Article reviews or reactions
Portfolios and many others
Rubrics provide a readily accessible way of communicating and developing the learning outcomes
with students, along with the criteria the teacher uses to discern how well students have reached them.
The following table gives examples of holistic and analytical rubrics, the latter with a practical sample
answer for an open-ended question.
Table 9: Example of Holistic and Analytical Rubrics

Example of Holistic Rubrics
Score 5: Demonstrates complete understanding of the problem. All requirements of the task are included in the response.
Score 4: Demonstrates considerable understanding of the problem. All requirements of the task are included.
Score 3: Demonstrates partial understanding of the problem. Most requirements of the task are included.
Score 2: Demonstrates little understanding of the problem. Many requirements of the task are missing.
Score 1: Demonstrates no understanding of the problem.
Score 0: No response/task not attempted.
Example of Analytical Rubrics with a practical sample answer for an open-ended question

Context: During a storm, a girl noticed that she always heard thunder shortly after she saw a flash of
lightning. Explain why there is a difference in time between seeing lightning and hearing thunder.

Levels of performance, with sample answers:
Highest level: The response includes the fact that light travels faster than sound and makes the connection with the scenario. Sample answer: "Light is faster than sound. You can see the lightning bolt before the sound reaches you."
Middle level: The response only mentions the fact that light is faster than sound; it does not relate the concepts of hearing and seeing. Sample answer: "Sound travels slower than light."
Lowest level: The response is scientifically incorrect, or the question (or parts of the question) is merely restated. Sample answers: "Sound is faster than light."; "Thunder follows lightning."
Zero: No answer, or the answer is erased.
Checklist
A checklist is a device for recording whether a characteristic is present or absent. It is often used
to search out very specific, selected aspects of a student's behavior. A list of performance criteria is
associated with a particular performance or product, and the presence or absence of each specific
characteristic is indicated. A checklist uses a yes/no judgment or response for the behavior to be
checked. It is a list of characteristics, both desired and undesired, for which the evaluator observes.
Those characteristics that are present, either desired or undesired, are checked; those that are not
present are left unchecked (Kubiszyn and Borich, 1987). The following table, translated from the
Amharic, summarizes the three performance assessment methods.
Table 10: Summary of performance assessment methods

Anecdotal record (daily behavior record)
Nature: focuses only on special behaviors of an individual or a group, in personal and social situations; the behavior that occurs is described in narrative form; the behaviors are not based on the lesson plan.
Points to consider: state by whom, where, and when the behavior occurred, together with the observer's comment.
Strengths: useful for knowing a student's behavior, feelings, and interests in depth; requires little effort to prepare; convenient for giving feedback.
Limitations: covers only a limited number of behaviors, not many; does not cover the behavior of every student; difficult to compare records and to relate the information to instructional objectives; recording requires time and care.

Checklist (record of specific behaviors)
Nature: focuses on the specific behaviors performed by each student; a procedure based on the lesson plan that can include many specific behaviors of every kind (practical, personal, and social).
Points to consider: identify the type of behavior (process or product) in advance and list the specific behaviors; check that they are aligned with the lesson objectives; decide what mark is to be made for each behavior.
Strengths: the technique can be used repeatedly; can include many behaviors; useful for correcting or adjusting weaknesses; easy to prepare; helps to summarize results; allows comparison on each behavior.
Limitations: the choice is limited to yes or no.

Rating scale (comparison of behaviors)
Nature: the specific behaviors are like those listed in a checklist, but each is rated for comparison; linked to the lesson plan; incorporates different measurement scales; applicable to all behaviors, process and product, personal and social.
Points to consider: list the specific behaviors, as is done for a checklist; make sure the scale allows comparison on each behavior; relate the behaviors to the lesson plan; use the appropriate measurement scales.
Strengths: convenient to use repeatedly; convenient for including many behaviors; allows each behavior to be compared; convenient for presenting results in different ways.
Limitations: requires great care to prepare.
Adapted from NOE (2002)
3.4.1.2. Debate with Reports
Debate with reports is usually most appropriate for learning outcomes involving controversial
topics that invite alternative views. For example, a Geography teacher may raise an issue: "Population
growth: is it an advantage or a drawback to the economic growth of a nation?" Students are allowed to
discuss the issue, develop their own reports, and present them to the class. The classroom teacher may
set a marking scale for the presentation as well as for the report. A total of 10 marks may be allotted,
of which 5 marks are for the presentation and 5 for the report.
Table 11: A marking scheme for the presentation
Marking scale
Originality of the idea
Coherent explanation
Presentation of the right argument
Open-mindedness to accept the views of others
Adapted from: NOE (2004)
Note: The teacher needs to work out in detail how he/she gives marks to the stated behaviors. For
example, he/she may take confidence as a criterion and indicate the marking procedure as follows:
If a student is afraid of looking at his/her classmates, sweats, and stands still throughout the
report, 1 mark can be given.
If a student is afraid of looking at his/her classmates, sweats, and stands still for three-fourths of
the time while reporting, 2 marks can be given.
If a student is afraid of looking at his/her classmates, sweats, and stands still for half of the time,
and delivers a good speech with less fear in the remaining half, 3 marks can be given.
If a student reports three-fourths of the time looking at his/her classmates and does not show any
sign of fear, 4 marks can be given.
If a student reports with full confidence throughout, looking at the classmates without any sign of
fear, 5 marks can be given.
Similarly, a marking scheme can be prepared for the written report. The following are just some of
the marking criteria the teacher needs to consider:
Table 12: Marking Scheme for Written Report
Marking Scale
Organization of the report
Language clarity
Grammatical correctness
Selection of powerful words
Flow of ideas
Handwriting
Adapted from NOE (2004)
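The two five-mark schemes above can be combined into the ten-mark total suggested for the debate. The sketch below assumes hypothetical per-criterion marks; the actual weights are for the teacher to fix in advance.

```python
# Combine the presentation (max 5) and report (max 5) marks into a total out of 10.
def total_mark(presentation, report, max_each=5):
    """Sum the per-criterion marks of each component, capping each component at max_each."""
    p = min(sum(presentation.values()), max_each)
    r = min(sum(report.values()), max_each)
    return p + r

# Hypothetical per-criterion marks following Tables 11 and 12.
presentation = {"originality": 1.5, "coherence": 1.0, "argument": 1.5, "open-mindedness": 1.0}
report = {"organization": 1.0, "clarity": 1.0, "grammar": 1.0,
          "word choice": 0.5, "flow of ideas": 1.0, "handwriting": 0.5}
print(total_mark(presentation, report))  # 10.0
```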
3.4.1.3. Assignments
Assignments, like tests, are more structured and less open-ended than projects. Unlike tests, however,
they do not follow a strictly prescribed procedure, and they are not concerned exclusively with one
topic. They are more challenging and less textbook-based than tests, although the challenge is not so
serious as to require collective or team work; the learner can manage to provide the answers
individually. What is demanding about them is that they require reinterpreting textbook issues in the
students' own language.
Assignments should not be a mere replica of the questions or ideas raised in the students' textbooks.
Instead, they have to be written in a way that enhances higher levels of thinking. At the same time,
assignments help students to see the applicability or practicality of a given theoretical concept in
real-life situations. Therefore, before writing the assignment question, a classroom teacher should
think twice about the role the assignment needs to play.
For example, a history teacher may teach a topic about the Battle of Adwa: Ethiopia defeated the
Italian troops on the battlefield, and the battle became a symbol of freedom for African countries. In
this case, the teacher may write an assignment question such as: "What are the pros and cons of the
Battle of Adwa in the liberation of African countries?" This item will stretch the imaginative thinking
of students. To mark the responses of the students, the teacher can prepare a marking guide as shown
in the debate with reports. The teacher may need to correct the assignments and return them with
comments before getting into the next lesson.
3.4.1.4. Group Projects
A project is an exercise on a single learning outcome or topic that allows a more relaxed timeframe
than assignments. Moreover, projects require much more information than assignments and demand
the involvement of a group of learners working together. Projects are, in general, comprehensive,
open-ended, and tackled without close supervision, but with the guidance and support of the teacher.
For example, a Biology teacher may assign students in a class a piece of land in the school
compound on which to plant vegetables. This might be a semester-long or a three- to four-month
project. He/she will supervise the activities periodically and guide students to abide by the principles
discussed in the classroom. First, students dig the ground and prepare the land; then they make rows,
sow the seeds in the rows, water the land, and after some time apply the right fertilizer in the right
amount. While the vegetables are growing, the students gather data, and finally they prepare a project
report to be submitted to their teacher. The teacher can mark the students' project work based on
his/her observation as well as the final report. If the teacher's emphasis is on the process, he/she can
give marks to each and every step done by students while planting the vegetables: how they prepare
the land, the depth of the rows, the space between the rows, etc., will each be evaluated and marked
while the project is in progress. On the other hand, if the teacher plans to give marks to the output,
i.e., the final product, he/she is again expected to set certain marking criteria. In most cases, it is
advisable to give more emphasis to the process than to the product. This is because in the
teaching-learning process the aim of giving project work is not to benefit from the
products/outcomes, but to make students familiar with real-life situations by involving them directly
in the task. In general, projects are used to measure a comprehensive range of skills and the
integration of activities.
3.4.1.5. Portfolio Assessment
A portfolio is a form of performance assessment that provides a way of collecting and presenting
samples of a student's work reflecting effort and progress over time. It is a systematic, purposeful, and
meaningful collection of a student's work assembled to demonstrate academic growth. A portfolio may
contain written work, journals, maps, charts, surveys, group reports, peer reviews, and other such
items (Moya and O'Malley, 1994). For Airasian (1997), portfolio assessment is a trend toward
authentic assessment of students' performances or products in classrooms. Portfolios include products
that reflect a student's development over time.
Importance of Portfolios:
1. For Students
Shows growth over time
Displays the student's accomplishments
Helps students make choices
Encourages them to take responsibility for their work
Demonstrates how students think
2. For Teachers
Highlights performance-based activities over the year
Provides a framework for organizing student’s work
Encourages collaboration with students, parents, and teachers
Showcases an ongoing curriculum
Facilitates student information for decision making
3. For Parents
Offers insight into what their children do in school
Facilitates communication between home and school
Gives the parents an opportunity to react to what their child is doing in school and to their child's work
Shows parents how to make a portfolio so they may do one at home at the same time
4. For Administrators
Provides evidence that teacher/school goals are being met
Shows growth of students and teachers
Provides data from various sources
As a form of performance assessment, portfolios have the advantage of providing tangible evidence of
learning and they provide a continuous record of student accomplishments.
Characteristics of a Good Portfolio Assessment
The major features of a good portfolio assessment are:
Comprehensiveness: to assess the depth and breadth of a student's capabilities, comprehensive data
are required. This may be achieved by using both formal and informal assessment techniques; focusing
on both the processes and the products of learning; seeking to understand the overall student
development in the behavioral domains; and sharing the information with learners and parents.
Predetermined and systematic: a sound portfolio procedure is planned prior to implementation. The
purpose and contents of the portfolio, the data collection schedules, and the student performance
criteria should be clearly stated as part of portfolio planning.
Informative: the information in the portfolio must be meaningful to teachers, students, staff, and
parents. It must be usable for instruction and for adapting the curriculum to students' needs. It involves
a mechanism for giving timely feedback to the teacher and students.
Tailored/ customized: an exemplary portfolio procedure should be tailored to the purpose for which it
will be used, to classroom learning outcomes and competencies and to individual students’ assessment
needs. It should be adapted to match information needs and reflect students’ characteristics.
Authentic: a good portfolio procedure provides student information based on assessment tasks that
reflect real world activities used in classroom instruction.
Disadvantages of Portfolios
Require more time for teachers to evaluate than tests or single-sample assessments.
Require students to compile their own work, usually outside of class.
Do not easily demonstrate lower-level thinking, such as recall of knowledge.
May threaten students who limit their learning to cramming or doing work at the last minute.
Table 13: Scoring Rubrics for Portfolio Writing

General approach
5 points: addresses the question; states a relevant, justifiable answer; presents arguments in a logical order; uses acceptable style and grammar (no errors).
4 points: does not address the question explicitly, although does so tangentially; states a relevant and justifiable answer; presents arguments in a logical order; uses acceptable style and grammar (one error).
3 points: does not address the question; states no relevant answers; indicates misconceptions; is not clearly or logically organized; fails to use acceptable style and grammar (two or more errors).

Comprehension
5 points: demonstrates an accurate and complete understanding of the question; backs conclusions with data and warrants; uses 2 or more ideas, examples, and/or arguments that support the answer.
4 points: demonstrates an accurate but only adequate understanding of the question, because it does not back conclusions with warrants and data; uses only one idea to support the answer; less thorough than the above.
3 points: does not demonstrate an accurate understanding of the question; does not provide evidence to support the answer to the question.

No answer: 0 points.
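A teacher recording this rubric electronically might total it as follows. This is only a sketch: the criterion names come from the table, while the validation rule and function name are assumptions.

```python
# Each criterion in the rubric above is scored 5, 4, 3, or 0 points.
ALLOWED_POINTS = {5, 4, 3, 0}

def portfolio_score(criterion_points):
    """Validate each criterion's points against the rubric levels, then sum them."""
    assert all(p in ALLOWED_POINTS for p in criterion_points.values())
    return sum(criterion_points.values())

print(portfolio_score({"general approach": 4, "comprehension": 5}))  # 9
```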
3.4.2. Personal Communication Assessment
Gathering information about students through personal communication means finding out what
students have learned by interacting with them.
Types of personal communication assessment include:
Looking at and responding to students’ comments in journals and logs
Asking questions during instruction
Interviewing students in conferences
Listening to students as they participate in class
Giving examinations orally
The following are some practical examples of personal communication assessments:
3.4.2.1. Case Studies
A case study is a description of an event concerning a real-life or simulated situation, usually in the
form of a text, a paragraph, a video, a picture, or a role-play exercise. This is followed by a series of
instructions and questions to elicit responses from the learner. Some of the possible uses of case
studies are to measure the learner's ability to analyze a situation, draw conclusions, and report on
possible courses of action.
Case studies can be undertaken individually or in a group, and students may be allowed extra time
to work on the topic. For example, an Amharic teacher may give a group of students a case study
about "the contributions of Haddis Alemayehu to the development of modern literature in Ethiopia".
While designing the item for a case study, he/she has to think of how the scoring will be done, and
may therefore outline the possible answers and the marks to be allotted to them. The total marks for
the proposed case study item may be 5, while the possible answers are 10. In this case, every two
possible answers will carry 1 mark. Taking this into consideration, the marking scale can be designed
as follows:
2 possible answers = 1 mark
4 possible answers = 2 marks
6 possible answers = 3 marks
8 possible answers = 4 marks
10 possible answers = 5 marks
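The scale above is simply proportional: 10 possible answers share 5 marks, so each properly stated answer earns 0.5 marks. A minimal sketch of this arithmetic (the function name is illustrative):

```python
def case_study_mark(answers_given, possible_answers=10, total_marks=5):
    """Marks earned when every possible answer carries an equal share of the total."""
    per_answer = total_marks / possible_answers  # 0.5 marks per answer here
    return min(answers_given, possible_answers) * per_answer

print(case_study_mark(3))   # 1.5
print(case_study_mark(10))  # 5.0
```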
Note: If the group properly states 3 possible answers for the proposed case study topic, 1.5 marks will
be given. Similarly, a teacher may give special attention to the language of presentation, the flow of
ideas, the selection of appropriate words to convey the message, etc. In this regard, he/she has to
decide objectively the amount of marks and the procedure to follow in implementing his/her
intentions.
3.4.2.2. Oral Questions and Personal Interviews
Oral questions and personal interviews are mainly used to generate evidence of a learner's ability to
listen, interpret, communicate ideas, and sustain a conversation in the language of assessment. They
also imply assessing the interpretation and expression of ideas through completion/short-answer
questions.
In the teaching-learning process, a classroom teacher may raise several questions from the topic
he/she is discussing, and students attempt to give the correct answers. The teacher may set a marking
criterion for this assessment tool: students who succeed in giving the correct answer may be given
1 mark each time in a period, those who give partial answers receive 0.5 marks, and those who miss
the answer earn zero. At the end of the semester, the total marks earned are converted into the portion
of the total evaluation assigned for oral questions, i.e., 10%, 15%, etc.
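The end-of-semester conversion described above can be sketched as follows. The earned and maximum figures are hypothetical; the weight is whatever share of the total evaluation the teacher has assigned to oral questions.

```python
def convert_to_weight(earned, maximum_earnable, weight=10):
    """Scale the marks a student accumulated during the semester to a fixed weight."""
    return earned * weight / maximum_earnable

# A student who earned 18 of the 24 oral-question marks available, with oral
# questions weighted at 10% of the total evaluation:
print(convert_to_weight(18, 24, weight=10))  # 7.5
```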
A personal interview is probably the oldest and best known means of eliciting information directly
from learners. It combines two assessment methods, namely, observation and questioning. An
interview is a dialogue between the assessor and the learner, creating opportunities for learner
questions. Personal interviews are conducted using different forms of questions, particularly
open-ended questions.
3.4.2.3. Role Play
Learners are presented with a situation, often a problem or an incident, to which they have to respond
by assuming a particular role. The enactment may be unrehearsed, or the learner may be briefed on the
particular role to be played. Such assessment is open-ended and person-centered.
Role plays are used for the assessment of a wide range of behavioral and interpersonal skills. Based
on the content of the curriculum, a teacher may instruct learners to perform a role play. For example,
the issue selected for a role play might be "Life in a family". In this case, some of the students may
carry the responsibility of being a father, a mother, and children; thereafter, each starts to play his/her
role in the family. As an assessment tool, role play requires the teacher to have criteria for judging
whether or not a student fully behaves in the way he/she is expected to behave. The teacher will
therefore convert his/her own observation into a numerical value. One thing the teacher should bear
in mind at this point is that role play is both a technique of learning and a tool for assessment. The
rubrics dealt with under performance assessment can also be used for personal communication.
Table 14: Targets to be assessed with performance and personal communication assessments

Knowledge mastery
Performance assessment: not a good match; too time-consuming to cover everything.
Personal communication: the teacher can ask questions, evaluate answers, and infer mastery, but it is a time-consuming option.

Reasoning proficiency
Performance assessment: enables the teacher to watch students solve some problems and infer reasoning proficiency.
Personal communication: enables the teacher to ask students to "think aloud" or to ask follow-up questions to probe reasoning.

Skills
Performance assessment: a good match; skills can be observed and evaluated as they are being performed.
Personal communication: a strong match when the skill is oral communication proficiency, but not a good match otherwise.

Ability to create products
Performance assessment: a good match; the teacher can assess the attributes of the product itself.
Personal communication: not a good match.
Adapted from Stiggins (2005)
Summary of the components of assessment
Let us summarize the components of assessment with an analogy. Think of students as plants: the
process of simply measuring the plants is assessment of learning (summative). It might be interesting
to compare and analyze measurements but, in themselves, these do not affect the growth of the plants.
Assessment for learning, on the other hand, is the equivalent of feeding and watering (nurturing) the
plants appropriately to their needs, directly affecting their growth. Assessment as learning is the plants
themselves carrying out photosynthesis in their leaves and absorbing sufficient water and minerals
through their roots for proper growth and maturation. Sound assessment practice is centered on the
appropriateness of the purpose. The purpose (the why) and the process (the how), to some degree,
determine whether the assessment is formative or summative (New Brunswick Assessment Program,
2013). The next table summarizes the three classroom assessment components.
Table 15: Planning of Assessment for/as/of Learning

Why assess?
Assessment for learning: to enable teachers to determine next steps in advancing student learning.
Assessment as learning: to guide and provide opportunities for each student to monitor and critically reflect on his or her learning and identify next steps.
Assessment of learning: to certify or inform parents or others of the student's proficiency in relation to curriculum learning outcomes.

Assess what?
Assessment for learning: each student's progress and learning needs in relation to the curricular outcomes.
Assessment as learning: each student's thinking about his or her learning, the strategies he or she uses to support or challenge that learning, and the mechanisms he or she uses to adjust and advance his or her learning.
Assessment of learning: the extent to which students can apply the key concepts, knowledge, skills, and attitudes related to the curricular outcomes.

What methods?
Assessment for learning: a range of methods in different modes that make students' skills and understanding visible.
Assessment as learning: a range of methods in different modes that elicit students' learning and metacognitive processes.
Assessment of learning: a range of methods in different modes that assess both product and process.

Ensuring quality
Assessment for learning: accuracy and consistency of observations and interpretations of student learning, based on clear, detailed learning expectations.
Assessment as learning: accuracy and consistency of the student's self-reflection, self-monitoring, and self-adjustment; engagement of the student in considering and challenging his or her thinking; students record their own learning.
Assessment of learning: accuracy, consistency, and fairness of judgments based on clear, detailed learning expectations; fair and accurate summative reporting.

Using the information
Assessment for learning: provide each student with accurate, descriptive feedback to further his or her learning; differentiate instruction by continually checking where each student is in relation to the curricular outcomes; provide parents or guardians with descriptive feedback about student learning and ideas for support.
Assessment as learning: provide each student with accurate, descriptive feedback that will help him or her develop independent learning habits; have each student focus on the task and his or her learning (not on getting the right answer); provide each student with ideas for adjusting, rethinking, and articulating his or her learning; provide the conditions for the teacher and student to discuss alternatives; students report about their learning.
Assessment of learning: indicate each student's level of learning; provide the foundation for discussions on placement or promotion; report fair, accurate, and detailed information that can be used to decide the next steps in a student's learning.

Role of students
Assessment for learning and assessment as learning: students assess themselves and/or others in order to become self-aware of the learning process.
Assessment of learning: usually compiles data into a single number, score, or mark as part of a formal report.
January 2014
Classroom Assessment Manual for Primary and Secondary School Teachers
4. 0. Providing Feedback for Classroom Assessment
4.1. Concept of Feedback
4.1.1. What is feedback?
According to UNESCO (2009), feedback is a mechanism for providing information to learners on their current state of performance, achievement, and progress (learning). Teachers communicate to their students how well or poorly they are performing. After assessment, teachers ought to give feedback and follow-up activities.
As reiterated in Literacy and Numeracy Secretariat (2010), teachers select and sequence the learning
experiences (instruction) integrated with opportunities to gather information about the learning
(assessment). The feedback process is iterative and relies on assessment providing information to the
teacher and the learner, through feedback, so each can take appropriate actions to support learning. A
critical requirement of the feedback process is that assessment is ongoing, woven seamlessly with
instruction into the learning experience.
It is at these points that students receive feedback from the teacher, from peers, and from themselves (through self-assessment), and use the feedback to take further action to learn and improve. The figure below shows the role of feedback in the learning process, and how it is used by the teacher and the student.
Figure 10: The feedback loop based on teachers’ and students’ responses
Students’ understanding has to be monitored through all the activities in which they develop and display it: overseeing their discussions, observing their activities, and marking their written work, alongside careful listening to their talk (Black & Wiliam, 1998; Hattie).
Providing effective feedback to students continuously throughout the learning process will help them improve their skills and give them information to evaluate themselves. Based on an extensive study of the research on effective feedback, Miller (2002) concluded that high-quality feedback is timely, direct, accurate, substantive, constructive, prescriptive, specific, outcome-focused, and encouraging.
Constructive feedback that addresses learners’ individual differences can be non-verbal, verbal, or written. The type of feedback to administer is largely determined by the individual’s level of growth and development.
4.1.2. Purposes of Feedback
The main purposes of feedback are to:
help to clarify what good performance is (goals, criteria, expected standards);
facilitate the development of self-assessment (reflection) in learning;
deliver high quality information to students about their learning;
encourage teacher and peer dialogue around learning;
encourage positive motivational beliefs and self-esteem;
provide opportunities to close the gap between current and desired performance; and
provide information to teachers that can be used to help shape teaching (Nicol and Macfarlane-Dick, 2006).
The components underlying the feedback process
To judge the effectiveness of the feedback process, it is important to address five WH-questions: Who (the players in the feedback process), What (the information that is fed back), When (the occasion on which the information is fed back), Where (whether the setting in which the information is fed back is psychologically safe), and How (the manner in which the information is given and received) (Brinko, 1993).
4.1.3. How to give and receive feedback
Giving constructive feedback in the form of verbal or written is a vital aspect of ongoing classroom
assessment especially for AfL and AaL. Feedback can be provided in a range of situations: from an
instant, informal reply to a more formally planned review. While giving oral feedback, the teacher
should do:
emphasize the positive. Always give specific feedback on what a learner has done well.
appreciate what’s been achieved and be clear about exactly what needs to be improved next
and how.
seek learners’ views and value their contribution. This will help them to get better at
assessing their own work, which is vital to them to become independent learners.
invite the learner to comment on what the teacher does as well. Feedback is not a one-way process.
frame questions carefully. Use open questions and resist asking more than one question at a time.
use prompts/cues such as ‘Would you like to say more about that?’
pause for a few seconds after posing a question or after a response has been given, to encourage learners to carefully consider and expand on what they have said.
avoid generalizations such as ‘There are a lot of inaccuracies’. Instead focus on specific areas
for development which you can discuss with the learner.
focus on things that each learner can change, and avoid overloading them with too much
feedback at once.
be sensitive if the teacher has to give feedback to one person in a group. The student might
feel undermined if others hear.
look for ways forward together. Share ideas and explore solutions rather than always putting
forward teacher’s own suggestions.
create a situation on how students agree on the given feedback. This could include agreeing
new targets or planning learning opportunities.
adapt your approach to suit one-to-one or group situations.
While giving written feedback:
Don’t jump straight to the errors. Praise strengths first.
Respond to the content and the message rather than focusing only on minor errors.
If writing is poor, select one or two particular areas to draw attention to. Don’t cover the work in red ink.
Be specific. Indicate what action the learner should take in relation to the weaknesses that have been marked.
Encourage the learner to make corrections. Don’t simply write correct answers, spellings, and so on.
Link the comments to the learning competencies.
4.1.4. Strategies for Effective Feedback
Feedback is more effective when:
1. Using positive comments
negative information is “sandwiched” between positive information;
constructive criticism comes with an explanation of how to improve.
2. Using contextual statements
I liked … because …
Now/Next time …
an interactive statement, e.g. a question based on the work.
3. Giving it as soon as possible after performance.
4. Reducing uncertainty for students by eliminating alternative or competing explanations for their performance.
5. Allowing students to act on feedback
use lesson time to redraft work.
allow students time to focus on the feedback for improvement.
reinforce the value of the feedback and of working in a supportive environment.
6. Following up
make time in the lesson to talk individually.
have a written dialogue in the students’ books.
use a comment tracker or target sheet to formalise the dialogue in a workbook.
4.2. Ways of Giving Feedback
The ways of giving feedback will depend on the kind of teaching situation. Brookhart and Nitko, cited in USAID (2008), described three ways of giving feedback: feedback that compares a student to something, feedback on the outcome a student produces or the thinking process a student uses, and feedback that describes or evaluates the student’s work.
4.2.1. Giving feedback by Comparing
Sometimes the feedback teachers give compares the student to someone else or compares the student’s work to something. Comparison feedback differs according to the kind of comparison it makes. There are three basic ways a teacher may give students comparison feedback.
1. Norm-referenced feedback – the teacher’s feedback compares the student’s work to other students’ work.
Example: The teacher may say, “Your drawing of the water cycle was the best in the class.” In this way, he/she is comparing this student’s work to the norm of the other students. This type of feedback does not tell the student how well he or she has met the learning outcome stated in the curriculum, nor does it tell how he or she has improved over time.
2. Criterion-referenced feedback – the teacher’s feedback compares the student’s work to a learning outcome and tells the student what he or she can or cannot do.
Example: The teacher may say, “You are particularly good at multiplying four-digit numbers.” In this way, the teacher’s feedback compares the student’s work to the learning outcomes in the curriculum; it does not tell the student how well the work compares to other students’ work, nor how the current work compares to what he or she has done before.
3. Self-referenced feedback – the teacher’s feedback compares the student’s work to his or her own past work, or sometimes to the level of performance expected of that particular student.
Example: The teacher may say, “This list of living and non-living things you made is better than the last one you made.” In this way, the teacher’s feedback compares the student’s work to his or her previous work. It provides the student with no information about how the work compares to other students’, nor about how well the student has met the learning outcome.
Recommendation about feedback by comparing
The best comparison feedback to give when students are learning is criterion-referenced or self-referenced feedback. Self-referenced feedback is especially important for those
students in the class who are not sure of their ability to learn, because teachers want to show them how they are improving.
4.2.2. Outcome and Process Ways of Giving Feedback
Another way teachers can give feedback is to tell the student about the teacher’s evaluation of his/her results (outcomes), or to tell the student about the cognitive (thinking) processes underlying what the student has done, rather than comparing the student’s work to something or someone.
1. Outcome-based feedback – the teacher’s feedback tells the student only the results of the work.
Example: The teacher may say, “You got 80% on that homework.” This statement tells the student about the teacher’s evaluation of the work, but only as a summary of the results. This way of giving feedback does not tell the student how well she or he has learnt any particular learning outcome, how much she or he has improved, what she or he needs to do to improve her or his learning, or how to improve the work.
2. Cognitive-based feedback – the teacher’s feedback tells a student the connection between how he or she went about doing the work and his or her achievement, and does so in a way that leads to improving the work.
Example: The teacher may say, “It doesn’t seem like you used the chart that shows the different kingdoms of living things when you made your list of what you saw in the school area. Maybe that is why your list is not complete.” This statement tells the student what the teacher observed about the way he or she completed the assignment and how the student can improve the work (i.e., by using the chart that contains information about the different kingdoms of living things).
Recommendation about Outcome-Based and Cognitive-Based Feedback:
Cognitive-based feedback is better than outcome-based feedback because it helps students know what to do to improve. Outcome-based feedback is more limited in this respect.
Outcome feedback helps a student to improve only if the student can create the cognitive feedback for him/herself. Younger students, however, cannot create the cognitive feedback on their own. They need the teacher to help them realize what to do to improve when they see the marks on their paper.
Example: A student got back an English paragraph where the only thing the teacher did was mark three comma faults [i.e., the teacher gave outcome feedback]. To use that feedback, the student would have to conclude on his or her own, “I should study how to use commas.” To make the feedback for this example more useful, the teacher might say, “It looks like you forgot to use commas to separate the three words in your sentence that you listed in a series. Be sure to use
commas whenever you have three or more words in a series in a sentence. For example, commas are used correctly in this sentence: ‘Apples, mangos, and avocados are all delicious fruits.’ Can you see how the three words in the series of fruits are separated by commas?” This type of cognitive feedback tells the student what he or she needs to do to improve the work. Just marking or circling a student’s errors does not tell the student what to do to improve and is therefore less useful as feedback to improve learning.
4.2.3. Descriptive and Evaluative Ways of Feedback
1. Descriptive feedback – the teacher’s feedback gives the student information about the work, especially how the work meets quality criteria. It describes what the student did and provides guidance for improvement. It is critical in ‘closing the gap’ for students.
Based on the needs of the student and focused on the learning intention of the task, the teacher can write down, or ask the student for, an improvement suggestion to help the student know how to make the specific improvement. There are three types of improvement prompt, each linked to an area of improvement:
A. Reminder – reminding the learner of the learning objective by recalling what the lesson was about. Ex: Remember the rule about…
B. Scaffold – giving possibilities/examples of what they need to do. Ex: Why don’t you try…
C. Example – giving clear sentences, words, or processes to copy as an example. Ex: Why don’t you use a simile such as…
Example: The teacher may say, “I like the way you used action words in your story about the happiest day in your life. Using these action words makes the people seem real.” This type of feedback describes the good parts of the work by pointing out to the student that s/he meets the criterion of making the people in the story seem real by using action words.
2. Evaluative feedback – the teacher’s feedback tells the student only his/her overall judgment on the work, without describing particular qualities of the work. Evaluative feedback involves judgment and can affect how students feel about themselves.
Example: The teacher may say only, “Good job!” This feedback does not communicate to the student what he or she did on the work that merits the teacher’s “good job” judgment. The student is left guessing about why the work is good and cannot use the teacher’s “good job” statement to help improve other work.
Recommendations about descriptive and evaluative feedback
Students will find it more useful when teachers give them descriptive feedback that specifies why parts of the work are good or not so good than when teachers give simple evaluative feedback. This is because a description of why something is good or not so good gives students information on how to improve their work. The descriptive feedback you give should describe to the student how the work relates to quality criteria you have shared with students.
Examples of good and poor Feedback
The table below shows how two different fourth-grade teachers, Teacher Abebe and Teacher Fatuma, might possibly mark the same mathematics problems from a student’s work. Compare the two teachers’ feedback.
Table 16: Example of poor and good feedback on a student’s responses to a grade four mathematics assignment

Teacher Abebe: Poor feedback
Assignment question – Teacher Abebe’s feedback
a) 4 × (2 + 4) – 6 = : “Correct. Good!”
b) 6 + (6 – 2) × 2 = : “Incorrect. You did not do the calculations correctly. Redo it!”
c) 2 × 4 ÷ 2 + 3 + 3 = : “Incorrect. Your calculations are wrong. Pay attention to your work!”

Teacher Fatuma: Good feedback
Assignment question – Teacher Fatuma’s feedback
a) 4 × (2 + 4) – 6 = : “Correct. I like the way you did the calculations inside the parentheses first, and then multiplied.”
b) 6 + (6 – 2) × 2 = : “Can you see what you did incorrectly? I think you forgot to do the calculations inside the parentheses first. What do you think? Check the problem again and remember, after calculating inside the parentheses, to do multiplication before you do any adding or subtracting. Try the problem again; I think you will do it better now.”
c) 2 × 4 ÷ 2 + 3 + 3 = : “Hmmm. This is an interesting answer but it is not correct. Can you show me how you got this answer?” “Yes, I can see what you did incorrectly. Can you think a little about it?” “Remember, first finish all of the division and multiplication in the problem, then do addition and subtraction.” “Try it again; I think you will do it better now.”
Source: USAID (2008)
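The feedback in Table 16 turns on the order-of-operations rule that Teacher Fatuma’s comments teach: parentheses first, then multiplication and division, then addition and subtraction. As a quick check, the three assignment problems can be evaluated in Python (the code is only an illustration, not part of the original exercise):

```python
# The three assignment problems from Table 16, evaluated with the
# order-of-operations rule: parentheses first, then multiplication
# and division (left to right), then addition and subtraction.
a = 4 * (2 + 4) - 6    # 4 * 6 - 6  = 24 - 6 = 18
b = 6 + (6 - 2) * 2    # 6 + 4 * 2  = 6 + 8  = 14
c = 2 * 4 / 2 + 3 + 3  # 8 / 2 + 3 + 3 = 4 + 6 = 10
print(a, b, c)  # 18 14 10.0
```

Working the problems through in this order is exactly what Fatuma’s cognitive feedback asks the student to do.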
Analysis of the Feedback and Comments
Teacher Abebe’s feedback is mostly general and non-specific, so it will be difficult for the student to
see how to improve. His feedback is outcome feedback (“Correct.” “Incorrect.” “You did not do your
calculation correctly”) and evaluative feedback (“Good!” “Redo it!”). It may be recalled that neither
outcome feedback nor evaluative feedback is recommended. Abebe’s comment, “Pay attention to your work!”, is not very helpful to a student who obviously paid enough attention to do the work but made some errors.
Fatuma’s feedback, on the other hand, is much more specific and focuses on the calculation process
required by the learning outcome. She uses mostly cognitive feedback (e.g., “Can you see what you
did incorrectly? I think you forgot to do the calculation inside the parentheses first. What do you
think?”) and descriptive feedback (e.g., “I like the way you did the calculation inside the parentheses
first, and then multiply.”) to help the student see what he/she needs to do next to improve his/her
learning. Fatuma also tried to get the student to do some self-evaluation (e.g., “Can you see what you
did incorrectly?” “Think about it.” “What do you think?”). Instead of ordering the student to redo the
incorrect solution, Fatuma uses a more encouraging tone (e.g., “Try it again; I think you will do better
now.”) (USAID, 2008:39-43).
4.3. Feedback for Target Users and Stakeholders
As discussed so far, assessment results should be conveyed to students and used to strengthen
successful performance and assist in the remediation of weak performance. This feedback should be (1) immediate, (2) detailed, (3) focused on both strengths and weaknesses of performance, (4) indicative of remediation, and (5) positive in nature.
There are five audiences in this regard, each of which needs its own share of the information. Students, teachers, parents, school administrators, and administrative authorities are the people who receive the assessment information or results.
4.3.1. Feedback to Students
Students rely on their marks and on teachers’ comments as indicators of their level of success, and to make decisions about their future learning endeavors. Feedback to students helps to aid their future learning, indicate areas of success in their work, and show areas for future improvement.
Because the assessment is conducted with the students, they can make full use of the assessment data or results. Students can reflect on what they have learned. They can take more active
roles in making decisions about what they need for the next classroom learning. Reporting to students can also facilitate the next instruction process because they will realize what the instructional objectives are, since these objectives are set with them. In this case, students’ learning awareness will increase considerably with their growing assessment awareness.
Therefore, feedback to students should:
support their future learning
indicate areas of success in their work
indicate areas for future improvement
enable them to improve and plan their next steps, and scaffold (gradually shift responsibility to the students until they become independent) students’ efforts towards such improvement.
An example of scaffolding in the classroom could be a teacher first instructing his/her students on how to write a sentence using commas and conjunctions. As time goes on, the teacher has the students practice writing these sentences with peers and gives them feedback, and eventually the students apply this skill without the teacher’s guidance.
4.3.2. Feedback to Teachers
The main users of assessment information are certainly teachers themselves. They use it to check the effectiveness of instruction and course materials. They also make decisions about students’ needs for the upcoming term. What matters most to teachers is to know how well their students have reached the stated goals. The process of writing comments can also be helpful to teachers. Writing comments gives teachers opportunities to reflect on the academic and social progress of their students. This type of reflection may result in teachers gaining a deeper understanding of each student’s strengths and needs. They therefore evaluate student progress or achievement and use the information for careful planning of the next instruction.
Feedback to teachers should:
Help them to check the effectiveness of instruction
Make decisions about students’ needs to carefully plan for the next lesson
Help them to know how well their students could reach the stated competencies
Provide them opportunities to be reflective about the academic and social progress of their students
Support them to gain a deeper understanding of each student’s strengths and needs.
4.3.3. Feedback to Parents
The process of assessment does not exclude parents from the educational program, though they are an out-of-class audience. Information reported to parents can provide them with clear feedback and concrete evidence of their children’s progress and achievement. Well-written comments can give parents guidance on how to help their children improve in specific academic or social areas. For example, the teacher who wrote the previous report card comment on spelling may also wish to suggest that practicing how to write different plural nouns at home, or playing spelling games, may help the child enhance her/his spelling skills. Parents can use the information to monitor and supervise their children’s work and assignments at home, based on the suggestions given by the teacher. They can also supply the teacher with information about their children’s learning that is appropriate for internal decision-making. Therefore, reporting assessment data to parents can create a line of communication between them and teachers so that both can monitor the student’s learning more effectively through exchanging views. This exchange may be made in letters, phone calls, or through electronic devices.
Feedback to parents should:
Provide them with clear and concrete evidence of their children’s progress.
Provide adequate information for them to monitor, supervise, and support their children’s work and assignments.
Increase parents’ involvement in school activities.
4.3.4. School Administrators and Authorities
School administrators are also outside-classroom audiences who need reports to make a variety of decisions at higher levels. Using these reports, they can arrange more convenient and careful scheduling and planning. They also make sound decisions about the different needs of different proficiency levels, as well as about the inclusion, exclusion, or modification/revision of the given materials in line with the program policy. Placement of students into the correct levels/classes is another decision administrators make accordingly. Their major concern is administrative accountability, i.e., maintaining and enhancing the quality of institutional programs. Because teaching and assessment occur within the framework of educational systems, the information obtained should be reported to the different administrative authorities for making their own decisions.
4.4. Utilizing the Information for Improvement
Assessment should result in improvement. Though the student, the teacher, the parents and others are
all stakeholders in this paradigm, it is the teacher who has to take the initiative to use the analysis of
information on each learner to enhance learning.
Some questions that the teacher could ask himself/herself are:
1. Are all the learners involved in the activities of the class?
2. Are there learners who face problems in coping with the pace and flow of the teaching/learning process?
3. What are their problems and how should I help them?
4. Is there something in my teaching strategy that has to be modified to make the class learn better? How should I go about it?
5. Are there some learners who are not challenged by the materials and methods and hence lose motivation quickly? How should I respond to their special needs?
6. Are there some lessons/chapters/units that pose difficulties to many learners?
7. How should I add value to these portions of the syllabus?
8. Have I identified certain common errors, mistakes and instances of lack of conceptual clarity from the information collected and analyzed? How should I go about an effective program of remediation?
9. Is my classroom time management effective? What changes could I introduce to make it more learner and learning oriented?
10. Am I getting adequate support from the school management, colleagues, the parents and the community? How can I involve all the stakeholders more actively in what I am doing for the benefit of my learners?
11. What are my own professional development needs? How can I fulfill them in a continuous manner?
Such reflective questions can help the teacher modify and refine the teaching strategies to achieve the
learning outcomes as well as to enhance his/ her professional competence continuously.
5. 0. Planning and Constructing Paper and Pencil Assessment Tools
5.1. The Preconditions before Constructing Assessment Tools
1. Learning – What is important for students to learn in the curriculum in the limited school and
classroom time available? What kind of learning is promoted?
2. Instruction – How does one plan and deliver instruction that will result in high levels of learning
for large numbers of students?
3. Assessment – How does one select or design assessment instruments and procedures that provide
accurate information about how well students are learning? What outcomes (level of
understanding/performance) are assessed?
4. Alignment – How does one ensure that objectives, instruction, and assessment are consistent with
one another?
The above four guiding questions apply not only to paper and pencil tests, but also to performance assessments and personal communications. Planning is an important step in the assessment process. It determines why a teacher should assess, what he/she should measure, and whom he/she should evaluate. It also specifies how the assessment process should be carried out. Moreover, there should be a focus on the timing of assessment administration and the degree to which a teacher can assess the students.
Writing an assessment plan is a critical step in developing assessment tools. If the plan is written properly, it helps ensure the success of the other steps in the process; if an assessment plan is not prepared, or is developed carelessly, it is likely the assessment will not provide valid or reliable information (Capper, 1996).
5.2. Planning and constructing the instruments
Item developers and classroom teachers usually follow a series of steps in planning paper and pencil assessment instruments/tools. According to Gronlund (1998), there are five major steps: determining the purpose of the tool, identifying the learning outcomes, defining the learning outcomes in terms of
specific, observable behavior, outlining the subject matter to be measured, and developing a table of specification (for paper and pencil tests).
5.2.1. Determining the Purpose of the Assessment
Assessment tools are used in an instructional program to assess entry behavior (placement), monitor learning progress (formative), diagnose learning difficulties (diagnostic), and measure performance at the end of instruction (summative). Each type of assessment tool therefore typically requires some modification in its design in line with its purpose.
5.2.2. Identifying the Learning Outcomes to be Measured
Learning outcomes are statements of what students are expected to achieve at the end of a set of
lessons that emanate from broad educational objectives. A useful guide for developing a
comprehensive list of learning outcomes is the taxonomy of educational objectives. It attempts to
identify and classify all possible educational outcomes. These objectives can be classified into the
following three major domains:
I. Cognitive Domain: Includes knowledge, intellectual abilities and skills (please see annex 1 for details)
1. Knowledge
 Terminology
 Specific facts
 Concepts and principles
 Methods and procedures
2. Comprehension
 Concepts and principles
 Methods and procedures
 Written materials, graphs, maps, and numerical data
 Problem situations
3. Application
 Factual information
 Concepts and principles
 Methods and procedures
 Problem-solving skills
4. Higher order thinking skills (Analysis, Synthesis, and Evaluation)
 Critical Thinking
 Problem Solving
 Scientific Thinking
 Inventing
II. Affective Domain: Includes attitudes, interests, appreciation, and modes of adjustment (please see
annex 3 for details)
1. Attitudes
 Social Attitudes
 Scientific Attitudes
2. Interests
 Personal Interests
 Educational Interests
 Vocational Interests
3. Appreciations
 Literature, Art, and Music
 Social and Scientific achievements
4. Adjustments
 Social Adjustments
 Emotional adjustments
III. Psychomotor Domain: includes perceptual and motor skills (please see Annex 4 for details)
1. Laboratory Skills
2. Performance Skills
3. Communication Skills
4. Computational Skills
5. Social Skills
Table 17: Summary of instructional domains

Cognitive (intellectual ability):
 Knowledge (remember)
 Application (use)
 Synthesis (create/build)
 Evaluation (assess, judge in relational terms)

Affective:
 Receive (awareness)
 Respond (react)
 Value (understand and act)
 Organize personal value system
 Internalize value system (adopt behavior)

Psychomotor:
 Imitation (copy)
 Develop precision
 Articulation (integrate related skills)
 Naturalization (become expert)
5.2.3. Defining the Learning Outcomes
The main reason for using learning outcomes is to identify the learning competencies that the student must demonstrate so that the teacher may infer whether the student has or hasn’t learned a particular skill or knowledge set. The learning outcome should be defined in terms of specific, observable behavior. A learning outcome has three parts. These are:
Condition - the setting in which the learning will take place
Behavior - something which can be observed and measured
Performance Level - measurement criteria
Example: After reading a specific case study, the student will be able to correctly identify and
delineate the proper course of remediation.
Condition: After reading a specific case study
Behavior: identify and delineate the course of remediation
Performance Level: Correct and Proper (sometimes specific, sometimes general)
These three essential elements of learning outcomes stem from Cognitive (mental skills, Knowledge
or thinking), Affective (growth in feelings or emotional areas, Attitude) and, Psychomotor (manual or
physical skills/actions).
These learning outcomes should be “SMART”:
Specific, Measurable, Attainable /achievable, Relevant and results-oriented, Time bounded
5.2.4. Outlining the subject matter to be measured
The learning outcomes specify how students are expected to react to the subject matter of a course.
Although it is possible to include both the students' behavior and the specific subject matter in the
same statement, it is usually desirable to list them separately. The reason for this is that the student can
react in the same way with many different areas of subject matter, and he/she can react in many
different ways to the same area of subject matter. For example, when we state that a student can
"define a term in his/her own words," "recall a specific fact," or "give an example of a principle," these
behaviors can be applied to almost any area of subject matter.
5.2.5. Developing Table of Specification
The table of specification, also known as a test blueprint, is a two-way chart or grid that
includes learning outcomes along the horizontal axis, which is concerned with the types of
performance students are expected to demonstrate (i.e., knowledge or recall, comprehension,
application, etc.), and the content along the vertical axis, which is concerned with the topics learned.
The important contents are listed in the first column and the cognitive levels are presented in the next
three or more columns (ICDR, 1999). In general, the purposes of the table of specification are to
coordinate the assessment questions with:
The time spent on any particular content area,
The learning competencies of the subject being taught,
The level of thinking required by the competencies or stated standards.
Knowing what is contained in the assessment and that the content matches the
standards and benchmarks.
Increasing the quality of assessment items.
To construct a table of specification, first list the subject contents that are reflected in the syllabus and
lesson plans. These will be listed in the far-left column. Next, determine the cognitive levels of
understanding that students should achieve for each of the content areas. For instance, how
thoroughly should students understand the material? Should they be able to recognize an appropriate
step in a process (knowledge), explain a concept in their own words (comprehension), apply a
principle or process to a new set of circumstances (application), compare and contrast components of
a schema (analysis), or create a plan to solve a problem (synthesis)?
Calculate the number of items that correspond to these percentages based upon the total number of test
items. When deciding upon the total number of test items, keep in mind that all students should have
adequate time to finish the exam (allow at least one minute for each item written above the knowledge
level); keep in mind, too, that reliability is usually strengthened by a larger number of well-written items.
If only facts were presented in class and in the assignments, do not include analysis-type
questions. Similarly, if concepts were analyzed, write an appropriate number of items requiring
analysis, which could be short essay-type items. If the test blueprint reflects the content and
cognitive levels taught, and is followed carefully when writing the items, a reliable assessment of
students' achievement is likely.
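The blueprint-building steps above (content areas in the far-left column, cognitive levels across the top, item counts in the cells) can be sketched as a small script. This is a minimal illustration only; the content areas, cognitive levels, and item counts are hypothetical examples, not taken from any syllabus:

```python
# Sketch of a two-way table of specification (test blueprint):
# content areas down the side, cognitive levels across the top,
# item counts in the cells.  All names and counts are hypothetical.

blueprint = {
    "Rational numbers": {"knowledge": 3, "comprehension": 3, "application": 2},
    "Linear equations": {"knowledge": 2, "comprehension": 4, "application": 4},
}
levels = ["knowledge", "comprehension", "application"]

print(f"{'Content area':<20}" + "".join(f"{lv:>15}" for lv in levels) + f"{'Total':>8}")
for content, cells in blueprint.items():
    row = "".join(f"{cells[lv]:>15}" for lv in levels)
    print(f"{content:<20}{row}{sum(cells.values()):>8}")

total_items = sum(sum(cells.values()) for cells in blueprint.values())
print(f"{'Total items':<20}{total_items:>53}")
```

Printing the grid with row and column totals makes it easy to check that the counts add up to the intended test length before item writing begins.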
Determining the total number of items for the test depends on the following factors:
 The type of items used in the test
 The ability level of the examinees
 Availability of time for testing
 Complexity of test items
 The type of learning outcome being tested
 Period allotment
The development of a table of specification is illustrated step by step with model examples in the
following consecutive tables.
Example of a table of specification
School : ____________ Primary School
Subject: Mathematics
Semester: one
Purpose of the Test :
a) To assess students’ change of behavior in learning
b) To monitor their learning progress
c) To identify if they have any learning difficulties
d) To get feedback on methods and techniques of instruction
Table 18: Table of specification for determining the proportion of items

Major and Specific Content Areas             Number of questions in relation to period allotment
1. Rational numbers
1.1 The concept of Rational numbers
1.2 The order of Rational numbers
1.3 Operation of rational numbers
1.3.1 Addition of Rational numbers
1.3.2 Subtraction of Rational numbers
1.3.3 Multiplication of Rational numbers
1.3.4 Division of Rational numbers

Formula for deciding the No. of questions:
(Number of periods X total items) / Total number of periods
Eg. 9 X 30 / 32 = 8.44 ≈ 8
Eg. 7 X 30 / 32 = 6.56 ≈ 7
Eg. 4 X 30 / 32 = 3.75 ≈ 4
Eg. 3 X 30 / 32 = 2.81 ≈ 3
Eg. 6 X 30 / 32 = 5.63 ≈ 5 (rounded down so that the total equals 30)
Eg. 3 X 30 / 32 = 2.81 ≈ 3
Table 18 shows the following general information:
It covers three major contents of a chapter
It covers 32 periods
The total number of questions is 30 (this number can be decided based on the purpose
of the test)
Questions are designed based on the period allocation of each major content
To decide the number of questions for each sub-topic, we multiply the given periods
of each subtopic by the total number of questions and divide by the total period
allocation, as indicated in the table above. The next table (Table 19) is designed on
the basis of the learning outcomes with cognitive levels.
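The period-based allocation rule can be checked with a few lines of code. This is a minimal sketch; the topic names and period counts follow the Table 18 example:

```python
# The Table 18 rule: number of periods x total items / total number
# of periods, rounded to the nearest whole number.  Topic names and
# period counts follow the manual's grade 7 mathematics example.

def allocate_items(periods, total_items):
    total_periods = sum(periods.values())
    return {topic: round(p * total_items / total_periods)
            for topic, p in periods.items()}

periods = {
    "Concept of rational numbers": 9,
    "Order of rational numbers": 7,
    "Addition": 4,
    "Subtraction": 3,
    "Multiplication": 6,
    "Division": 3,
}
allocation = allocate_items(periods, total_items=30)
print(allocation)
```

Note that rounding each topic independently gives 8 + 7 + 4 + 3 + 6 + 3 = 31 items rather than the 30 targeted; as in Table 18, one item (here from the 6-period topic) has to be trimmed so the total comes out to exactly 30.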
Table 19: Table of specification by learning outcomes

Column headings: Major and Specific Content Areas | Major Sample Competencies (Students will be able to:) | Cognitive Levels | Number of items

1. Rational numbers
1.1 The concept of Rational numbers
 express Rational numbers as fractions
 represent rational numbers as a set of fractions on a number line
1.2 The order of Rational numbers
 solve simple equations containing absolute value
 compare rational numbers
1.3 Operation of rational numbers
1.3.1 Addition of Rational numbers
 add rational numbers
 apply the commutative and the associative properties of addition
1.3.2 Subtraction of Rational numbers
 subtract one rational number from another
1.3.3 Multiplication of Rational numbers
 find the product of two rational numbers
 apply the rules of multiplication of rational numbers
 use the commutative and associative properties of multiplication and the
distributive property of multiplication over addition of rational numbers
1.3.4 Division of Rational numbers
 divide one rational number by another non-zero rational number
 apply the rules for division of two rational numbers
What comes next is to decide on the type of item to be selected. This depends on the nature of
learning outcomes and the corresponding contents to be tested. The above table of specification
can also help us to determine the appropriate item types that correspond to the content.
Table 20: Table of Specification by Item Type Versus Content

Major and Specific Content Areas             Item types (Fill in the blank / Multiple choice)
1. Rational numbers
1.1 The concept of Rational numbers          3
1.2 The order of Rational numbers            2
1.3 Operation of rational numbers
1.3.1 Addition of Rational numbers
1.3.2 Subtraction of Rational numbers
1.3.3 Multiplication of Rational numbers
1.3.4 Division of Rational numbers
Academic Year _______
Teacher’s Name: _________________ Sign & date _______________________
Department’s Opinion_____________ Sign & date ______________________
Principal’s Opinion _____________ Sign & Date_______________________
Note: the above tables of specification were prepared based on one chapter of grade 7
mathematics so that users can construct a table of specification for the contents they have covered.
5.3. Construction of Paper and Pencil Tests
In this section, the construction of paper and pencil tests, performance assessments and personal
communication assessment tools will be discussed. Paper and pencil tests usually consist of a range
of questions covering almost all of the learning outcomes of a unit. Learners are required to
respond to the questions within a specified time. The tests cover a wide array of topics of a unit,
all with clear and specific answers that can be specified before marking begins. Paper
and pencil tests are generally classified into two broad categories: objective tests and essay tests.
5.3.1. Writing Objective Test Items
Characteristics of objective tests
 If different individuals score the same paper, they come up with the same result
 It is highly structured
 It limits the type of response the students can make
 It is not appropriate for measuring the ability to supply, organize and integrate ideas
Objective tests are classified into:
A. Supply type – short answer and completion
B. Selection type – true/false, matching and multiple choice
Assessment tools may take the form of objectively scored questions, such as multiple-choice,
true-false, yes/no, fill-in-the-blank and matching items. These types of questions typically
have only one correct answer, which has been predetermined prior to the assessment activity.
Student responses are scored against the predetermined answer key. Objective questions may
appear in the form of selection or supply.

Writing True/False Test Items
True/false items are commonly used to assess whether or not an examinee can identify a correct
or incorrect statement of fact. Tests employing true/false items should contain enough items so
that the relevant domain is adequately sampled and that the examiner can draw sound
conclusions about examinee knowledge.
Useful for assessing knowledge of categorical, factual information like correctness of
facts, definitions of terms, statements of principles etc.
Relatively easy to write
Allows wide sampling of course material
 It measures simple learning outcomes
 It is susceptible to guessing, i.e. a 50% chance of guessing correctly
 True-false items tend to elicit emotional responses depending on culture, etc.
 Truth or falsehood are relative terms and can vary from situation to situation
 They do not discriminate among students of varying ability
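The 50% guessing chance noted above compounds across a short test. A quick check (the 10-item test and the pass mark of 6 are hypothetical numbers chosen for illustration) shows how likely a pure guesser is to pass:

```python
from math import comb

# Chance that a student who guesses every item on a true/false test
# (probability 0.5 per item) reaches the pass mark.  The 10-item test
# and the pass mark of 6 are hypothetical illustration values.
n, pass_mark = 10, 6
p_pass = sum(comb(n, k) for k in range(pass_mark, n + 1)) / 2**n
print(f"Probability of passing by pure guessing: {p_pass:.3f}")  # 0.377
```

With only ten items, a guesser passes almost four times out of ten, which is why true/false tests need enough items to sample the domain adequately.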
Guidelines for Writing True/False Items
Incorporate only a single idea in each statement.
Write sharp, clear, and grammatically correct statements as briefly as possible.
Select precise wording for the item so that it is not ambiguous. The item should be
either absolutely true or false; avoid statements that are partly true and partly false.
Underline or stress a word if there is an adverb or adjective which is key to
marking a statement correctly.
Avoid double negatives. Negatively worded statements tend to confuse examinees
so use them rarely.
Attribute opinion statements to their source. However, if testing examinees’
ability to distinguish fact from opinion, don’t attribute.
Use true statements when measuring cause and effect relationships.
Avoid absolute or relative terms (e.g., always, never, sometimes, probably, etc.)
since they tend to give a clue to the correct response.
Make the length of true and false statements approximately equal.
Make the number of true and false statements nearly equal.
Avoid overlapping statements.
Randomize the occurrence of affirmative and negative statements.
Avoid directly copying from textbooks because it encourages rote memorization.
Avoid presenting answers in a manner that forms a pattern.
Do not use highly technical terms because they divert students' attention.
Some Examples of Poor and Better True/False Items
Poor: Water will boil at a higher temperature if the atmospheric pressure on its surface is
increased and more heat is applied to the container. (Does not express a single idea)
Better: Water will boil at a higher temperature if the atmospheric pressure on its surface is increased.
Poor: A body cannot produce sound unless it is vibrating. (Double negative)
Better: A body can produce sound when vibrating.
Poor: When sodium is put in water, it takes the water molecules apart and joins with part of the
broken water molecules making a new substance called sodium hydroxide. (Not clear,
sharp and simple)
Better: When sodium is combined with water the new substance sodium hydroxide is produced.
Poor: Equivalent sets are equal sets. (Not absolutely true/false)
Better: Equal sets are equivalent sets.
Poor: Any seed can germinate when it gets only air and warmth. (Use of modifiers)
Better: A seed can germinate when it gets air and warmth.

Writing Matching Items
The matching item is simply a modification of multiple-choice type. Instead of the possible
responses being listed underneath each individual stem like multiple-choice, a series of stems,
called premises, is listed in one column and the responses are listed in another column.
Examinees are directed to match (associate) items from two lists. Traditionally, the premises (the
portion of the item for which a match is sought) are listed on the left-hand side of the page. On the
right side of the page are listed the responses (the portion of the item which supplies the
associated elements). The key to using this item format is the logical relationship between the
elements to be associated.
It is useful for testing students' knowledge of terms, definitions, dates, events, names,
principles and so on.
It is efficient and useful in emphasizing relationships.
Difficulty of getting plausible distracters.
It is restricted to the measurement of factual information based on rote memory.
It is highly susceptible to the presence of irrelevant clues.
Difficulty of getting homogeneous distracters.
Guidelines for writing Matching Items
Give full directions which include the logical basis for matching and indicate how often a
response may be used.
Employ only homogeneous content for a set of premises and responses.
Keep each list reasonably short, but ensure that each is complete. Mostly, seven premises
and 10-12 options are recommended as the maximum for each set of matching items.
To reduce the impact of guessing, list more responses than premises. The teacher can let
a response be used more than once. This helps prevent examinees from gaining points
through the process of elimination.
Put responses in alphabetical or numerical order.
Don’t break matching items across pages.
Follow applicable grammatical rules.
Keep responses as short as possible by using key precise words.
Put longer statements in the premise list and shorter statements in the response list.
Avoid using incomplete sentences as premises.
Randomize the position of correct answers and avoid making patterns of any kind.
Examples of relationships considered important by teachers, in a variety of fields, include the following:
Persons------------------------------------------- Achievement
Dates --------------------------------------------- Historical Events
Terms -------------------------------------------- Definitions
Rules ------------------------------------------- Examples
Symbols ------------------------------------------ Concepts
Authors ------------------------------------------ Titles of Books
Foreign words ---------------------------------- English Equivalent
Machines ------------------------------------------- Uses
Plants and Animals ------------------------------ Classification
Principles ------------------------------------------ Illustrations
Objects --------------------------------------------- Names of Objects
Parts ------------------------------------------------ Functions etc
Source: ICDR (2004:53)
Additional Examples
Poor - Directions: Match the following. (Not homogeneous)

1. Water                                   A. NaCl
2. Discovered Radium                       B. Fermi
3. Salt                                    C. NH3
4. Ammonia                                 D. 1942
5. Year of the first Nuclear Fission       E. H2O
                                           F. Curie
                                           G. 1957
Better - Directions: On the line to the left of each compound in Column A, write the letter of the
compound's formula presented in Column B in the space provided.

Column A                    Column B
____1. Water                A. H2SO4
____2. Salt                 B. HCl
____3. Ammonia              C. NaCl
____4. Sulfuric Acid        D. H2O
                            E. H2HCl
Better - Directions: Match the terms presented in Column B to their definitions presented in
Column A. Write the letter which represents your choice of definition in the blank provided.

Column A                                             Column B
___1. A disease caused by protein deficiency         A. Anemia
___2. A disease caused by vitamin C deficiency       B. Scurvy
___3. A disease caused by iodine deficiency
___4. A disease caused by iron deficiency            D. Kwashiorkor
                                                     E. Pellagra

Writing Multiple-Choice Test Items
Multiple-choice test items consist of a problem written in the form of a question or incomplete
statement, known as the stem, and a list of suggested solutions, known as alternatives. The correct
alternative in each item is called the answer, while the remaining alternatives are called distracters.
The list of suggested solutions may include words, numbers, symbols or phrases.
Types of Multiple Choice Items
1. Closed Stem: The stem poses a question
Ends with a question mark
Spells out clearly what is being asked
Directs readers to the options
In items in which the options are figures, diagrams, tables, etc., the stem must be
closed. Example: Which of the following is the capital city of Ethiopia?
2. Open stem: An incomplete sentence to be completed by one of the options
Shorter and more direct
Could be too unfocused
Options should be grammatically compatible with stem
Example: The capital city of Ethiopia is ___________.
 It provides a wide sampling of course content.
 Scoring is easy and objective.
 It can be used in item analysis.
 It provides valuable diagnostic information.
 It is used at all levels of teaching.
 It is limited to learning outcomes at the verbal level.
 It does not measure the ability to organize and present ideas.
 It is difficult to find incorrect but plausible distracters.
Guidelines for Writing Multiple-Choice Items
According to Haladyna, Downing, and Rodriguez (2002), writing of Multiple Choice (MC) items
can be summarized as follows:
Every item should reflect specific content and a single specific learning outcome
Base each item on important content to learn; avoid trivial content.
Use novel material to test higher level learning. Paraphrase textbook language or
language used during instruction when using it in a test item, to avoid testing simply for recall.
Keep the content of each item independent from content of other items on the test.
Avoid overly specific and overly general content when writing multiple choice items.
Avoid opinion-based items.
Keep vocabulary simple for the group of students being tested.
Edit and proof items.
Use correct grammar, punctuation, capitalization, and spelling.
Avoid lengthy items so as to minimize the amount of reading in each item.
Note: Some of these guidelines can be applied to other types of objective test items.
Writing Stems
The goal of writing questions is to discover what students know and can do, not to create tricky
questions. Here are some suggested guidelines in writing stem for multiple choice items:
State the stem in question form or completion form (note: recent research findings favor
question form over completion).
When using the completion format, don't leave a blank for completion at the beginning
or in the middle of the stem.
Ensure that the directions in the stem are clear, and that wording lets the examinee know
exactly what is being asked.
Base each item on a specific learning outcome.
Avoid window dressing (excessive verbiage) in the stem.
Word the stem positively; avoid negative phrasing.
Include the central idea and most of the phrasing in the stem.
The stem should fully state the problem and all qualifications. To make sure that the stem
presents a problem, always include a verb in the statement.
Use multiple-choice to measure higher level thinking.
Multiple-choice items should be independent. That is, an answer to one question should
not depend on the answer to another question.
Test for important or significant material; avoid trivial material.
Concentrate on writing items that measure students’ ability to comprehend, apply,
analyze, and evaluate as well as recall.
Include words in the stem that would otherwise be repeated in each option. Following
this guideline not only saves time for the typist but also saves reading time for the
examinee.
a collection of unrelated options. Each test question should focus on some specific aspect
of the course. Therefore, it's OK to use items that begin, "Which of the following is true
[or false] concerning X?" followed by options all pertaining to X.
Eliminate excessive wording and irrelevant information in the stem.
Punctuate complete sentences correctly
Minimize the amount of reading by putting as much of the wording as possible in the stem.
Avoid negative stems, because a negative stem causes confusion. If a negative stem is
unavoidable, make the font weight of the word "not" bold.
Keep the vocabulary consistent with the examinees' level of understanding.
Avoid overly specific knowledge when developing the stem.
Avoid direct, verbatim copying from the textbook when developing the stem.
Use appropriate grammar, punctuation, and spelling consistently.
Avoid tricky items, those which mislead or deceive examinees into answering incorrectly.
Writing Options
Options should be of similar length and written in a similar style to the key. The key must
not stand out among the distractors because of its length, wording, or other superficial
features.
Randomly distribute the correct response among the alternative positions throughout the
test. That is, have approximately the same proportion of A's, B's, C's, and D's as the
correct response.
Make sure that there is only one correct/best answer.
Options should not give a clue to the answer of another item.
Develop as many effective choices as you can; research suggests three to five options.
Each item should have exactly one key and three plausible distractors (four options in
total).
Options should not mislead or confuse through lack of clarity or ambiguity.
Options should not overlap in meaning. The distractors must have different meanings from
one another. Distractors should not be synonyms. A particular meaning in one distractor
should not be included in the general meaning of another distractor.
Avoid the use of the all-of-the-above option. The problem with “all of the above” as an
option is that it makes the item too easy. If students can recognize at least one incorrect
option, they can eliminate “all of the above” as a viable option. On the other hand, if they
can recognize at least two correct options, then they know that “all of the above” is the
correct answer. Students are quick to pick up on this clue.
Do not use “None of the above” as an option. It can be used only when absolute standards
of correctness can be applied, such as in math, grammar, spelling, geography, historical
dates, and so on. Otherwise, students can often argue about the correctness of one of the
other options.
Avoid the complex multiple-choice format of pairs or triplets of options (e.g., A and D;
A and C; A, B and C; etc.). Options should not include partially correct distractors, such as
paired options where each distractor contains one incorrect and one correct element.
Avoid specific determiners/absolutes such as "all", "always" and "never", which are more
likely to appear in incorrect options; also avoid vague terms such as "usually",
"frequently" and "sometimes".
Format the options vertically, not horizontally if possible.
Place options in logical order, if there is a logical sequence in which the alternatives can
be arranged (alphabetical if a single word, in order of magnitude if numerals, in temporal
sequence, or by length of response), use that sequence.
Keep options independent; options should not be overlapping.
Keep all options in an item homogeneous in content.
Try to make all responses similar in length. A very short response or a very long response
is often a clue to the correct answer.
Avoid giving clues through the use of faulty grammatical construction.
Use plausible distractors (equally attractive as the key but wrong); avoid illogical
distracters; rather, incorporate common errors of students in distractors.
Use true statements that do not correctly answer the item as distractors.
Avoid a conspicuous/obvious correct choice.
Avoid verbal association between the stem and the correct answer.
An item should contain only one correct or clearly best answer.
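One of the guidelines above, randomly distributing the correct response among the option positions, can be sketched in a few lines. The helper name is hypothetical; the options come from the pancreas example shown below:

```python
import random

# Sketch of the guideline to randomly distribute the correct response
# among the option positions: shuffle each item's options, then
# re-derive the key letter from the new position of the answer.

def shuffle_options(options, answer, rng=random):
    shuffled = options[:]
    rng.shuffle(shuffled)
    return shuffled, "ABCD"[shuffled.index(answer)]

options = ["trypsin", "insulin", "tryptophan", "adrenaline"]
shuffled, key = shuffle_options(options, answer="insulin")
for letter, option in zip("ABCD", shuffled):
    print(f"{letter}. {option}")
print("Key:", key)
```

Shuffling per item, rather than hand-placing answers, keeps the proportions of A, B, C and D keys roughly equal across a whole test.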
Examples of Poor and Better Multiple Choice Items
Poor: The cell islets of the pancreas ____. (not a definite, explicit and singular question in the stem)
A. are located around the edge of the pancreas
C. disappear as one grows older
B. contain ducts
*D. produce insulin
Better: The cell islets of the pancreas secrete the substance called______ .
A. trypsin
C. tryptophan
* B. insulin
D. adrenaline
Poor: If the pressure of a certain amount of gas is held constant, what will happen if its
volume is increased? (Repeated words in alternatives are not included in the stem)
A. The temperature of the gas will decrease.
*B. The temperature of the gas will increase.
C. The temperature of the gas will remain the same
Better: If you increase the volume of a certain amount of gas while holding its pressure
constant, its temperature will ____.
A. Decrease
*B. Increase
C. Remain the same
Poor : A word used to describe a noun is called an____. (It has a clue, the answer will
begin with vowel letter)
*A. Adjective
C. Pronoun
B. Conjunction
D. Verb
Better: A word used to describe a noun is called_____ .
*A. an adjective
C. a pronoun
B. a conjunction
D. a verb
Poor: What age range represents the physical "peak" of life? (Alternatives are not
mutually exclusive)
A. 11 to 15 years
B. 13 to 19 years
C. 18 to 25 years
D. 24 to 32 years
Better: What age range represents the physical "peak" of life?
A. 11 to 15 years
C. 21 to 25 years
B. 16 to 20 years
D. 26 to 30 years
Poor: 8032- 5743 =________(alternatives are not plausible)
* A. 2289
C. 2378
B. 2288
D. 3378
Better: 8032- 5743 =________
* A. 2289
B. 2389 [failing to change 0 to 9]
C. 3399 [failing to decrease two digits borrowed from]
D. 3711 [subtracting the big number from the small one]
Poor: Penicillin is obtained from a: (choice B is grammatically inconsistent with the stem)
A. bacteria.
* C. mold.
B. coal-tars.
D. tropical tree.
Better: Penicillin is obtained from:
A. bacteria.
* C. mold.
B. coal-tars.
D. tropical tree.
Poor: Which of the following structures of the ear is not concerned with hearing? (Negative stem)
A. cochlea
C. oval window
B. eardrum
* D. semicircular canals
Better: Which one of the following structures of the ear helps to maintain balance?
A. cochlea
C. oval window
B. eardrum
* D. semicircular canals
Poor: Cells of one kind belong to a particular group performing a specialized duty. We call this
group of cells a tissue. All of us have different kinds of tissues in our bodies.
Which of the following would be classified as epithelial tissue? (Excessive verbiage or
irrelevant information in the stem)
A. adenoids and tonsils
B. cartilage
* C. mucous membranes
D. tendons
Better: Which of the following would be classified as epithelial tissue?
A. adenoids and tonsils
B. cartilage
* C. mucous membranes
D. tendons
Poor: Which of the following units is a fundamental unit? (Terms in the alternatives are not in
alphabetical order)
Better: Which of the following units is a fundamental unit?
Poor: What is the cost of an item that normally sells for birr 9.99 that is discounted 25%?
(The numbers in the alternatives are not in natural order)
A. 5.0
C. 2.50
*B. 7.50
D. 6.66
Better: What is the cost of an item that normally sells for birr 9.99 that is discounted 25%?
A. 2.50
C. 6.66
B. 5.00
D. 7.50
Table 21: Summary tips for writing the three objective test item types

True and False
 statements are stated positively
 absolutes are avoided
 the statement is completely true or completely false
 the statement is based on a single idea

Matching
 keep lists short (4-7 items)
 write clear instructions
 tell how many times a response may be used
 list responses in alphabetical order (or numerical order if they are numbers)
 the entire set of matches appears on the same page

Multiple Choice
 each question is related to the stated learning outcomes
 there is a question for each learning outcome
 important learning outcomes may have more than one question
 choices should be brief
 only one correct choice
 avoid "none of the above" and "all of the above"
 incorrect choices are plausible

Adapted from IEQ (2003)
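All three objective formats are scored against a predetermined answer key, as noted earlier, so marking reduces to a simple comparison. A minimal sketch (the key and the student's responses are hypothetical):

```python
# Scoring objective items against a predetermined answer key.
# The key and the student's responses below are hypothetical examples.

def score(key, responses):
    """Count the items where the student's response matches the key."""
    return sum(k == r for k, r in zip(key, responses))

answer_key = ["B", "A", "D", "C", "A"]
student    = ["B", "C", "D", "C", "A"]
print(score(answer_key, student), "out of", len(answer_key))  # 4 out of 5
```

Because the key is fixed in advance, any two markers applying it to the same paper arrive at the same result, which is exactly the objectivity property described above.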
5.3.2. Writing Supply Items
The two major supply item formats are completion and short answer. Short answer and completion
items are equivalent test formats: they differ only in the phrasing of the question.
Completion items are incomplete statements. They require the examinee to supply a word, a
number, a symbol or a phrase to complete the statement.
Short answer items are usually given as a direct question where the student is expected to write
names, places, dates, processes, terms, etc.
Advantages of Supply Items
 Supply items are the easiest to construct.
 Guessing is minimized as compared to other objective tests because students are required to
produce their own responses rather than select.
 Efficiently measure simple (factual) knowledge or lowest levels of cognitive domain;
knowledge of terminology, facts, principles etc…
 Does not measure complex learning outcomes.
 They are relatively time consuming to score than other objective tests.
 Although the answers are very short , they are not completely free from subjectivity
Guidelines for writing Completion and Short Answer Items
Focus the statement or question so that there is only one concise answer of one or two
words or phrases. Use precise statement or question wording to avoid ambiguous items.
A complete question is recommended over an incomplete statement. You should use
one, unbroken blank of sufficient length for the answer.
If an incomplete statement is used, avoid using broken lines which correlate to the
number of letters in each word of the desired answer.
Omit only significant words from the statement, but do not omit so many words that the
statement becomes ambiguous.
Use blanks of the same length throughout the test so that the length is not a clue
Place the blank at the end of the statement or question.
For incomplete statements, select a key word or words as the missing element(s) of the statement.
Ensure that extraneous clues are avoided due to grammatical structure, e.g., articles such
as “a” or “an”.
Examples of Poor and Better Items
Poor: Two lines perpendicular to the same line in the same plane are _____ to each other
Better: If two lines are drawn perpendicular to the same line on a sheet of paper, then they
are _______.
Poor: Ethiopia defeated Italy in _____. (has no one concise answer)
Better: Ethiopia defeated Italy at the battle of _____. Or: In what battle did Ethiopia defeat Italy?
Poor: ________ = y , if 4y+8=3y-10 (blank space is not at the end of the statement)
Better: If 4y+8=3y-10, Then y = __________ .
Poor: Every atom has a central _________ called a nucleus. (Omitted word is not significant)
Better: Every atom has a central core called ____________.
Poor: If a room measures 7 meters by 4 meters, the perimeter is _____. (unit of measurement is not indicated)
Better: If a room measures 7 meters by 4 meters, the perimeter is _____ meters (or m).
5.3.3. Writing Essay Test Items
Essay tests take two forms: extended response and restricted response.
The extended response type gives the student almost complete freedom to respond. It permits students to select, organize, and write as much as they consider necessary. Scoring is sometimes difficult because there is no common ground for comparing students.
The following are some examples of extended response essay-type items:
Compare and contrast the use of land and air transportation systems (extended).
Describe what you think the role of the federal government should be in maintaining a stable economy in Ethiopia (extended).
Discuss the importance of the African Union (extended).
Explain the difference between Assessment of learning and assessment for learning in a
paragraph (extended).
Restricted Response Type limits the nature, length, or method of organization of students’
response. It reduces the difficulty of scoring since it has absolute criteria to compare students based
on their achievement.
The following are some examples of restricted response essay-type items:
State two advantages and two disadvantages of maintaining high tariffs on goods from other countries (restricted)
Name two types of clouds and describe the characteristics that distinguish them (restricted)
Explain the difference between Assessment of learning and assessment for learning in a
paragraph (restricted)
Essay-type questions typically do not have one correct response.
Advantages of Essay Items
 Measure complex learning outcomes that cannot be measured by other means.
 Emphasize the integration and application of thinking and problem-solving skills.
 Have a desirable influence on students’ study habits: students tend to direct their attention toward the integration and application of larger units of subject matter when essay questions are included in classroom tests.
 Are easier and quicker to construct.
 Minimize guessing to a greater extent.
Limitations of Essay Items
 Scoring is unreliable.
 The amount of time required for scoring is long.
 They limit the number of questions that can be asked and thus limit the content coverage of the examination.
 Students’ writing ability invites bluffing.
Guidelines for Writing Essay Items
When writing essay questions, the content of the subject should be adequately sampled, and the expected responses should be specified as precisely as possible.
Ensure that items are relevant and appropriate for the subject matter.
Make the questions precise so that students clearly understand what is expected of them.
Have all students respond to the same essay questions; do not let them choose among the
questions. Course content cannot be adequately sampled if students select content on which
they wish to be tested. Also, students’ performance cannot be compared if they are tested on
different contents.
Write more essay questions that allow for restricted responses rather than one or two essay
questions that require long responses. This improves content sampling, which is especially
important if the essay is being used to measure achievement rather than writing ability.
Validity and reliability are improved if content is accurately and adequately sampled (for
details see chapter 8, attributes of good test).
Students should have sufficient time to plan, prepare, and review their responses. Consider
this when planning the number of essay items to include on the test.
Have a colleague review the questions for ambiguities.
Write items that measure higher-order thinking skills (analyzing, evaluating, and creating).
Indicate for each question the number of points to be earned for a correct response. If time is running short, students may have to choose which questions to answer; they will want to work on the questions that are worth the most points.
Scoring Essay Items
Review teaching notes and learning materials before scoring students’ essay responses.
Write out the ideal answer for each essay item.
Choose a scoring model (analytic or point method, Holistic or rating method, for details see
Chapter 6 of test scoring)
Read each individual’s response to a single item one time before scoring and before reading
responses to the next item.
Have students sign their names on the backs of the papers so the examinees are anonymous.
Specify the content to be covered. Also, determine the weight to be given to each element
expected. Allow for unexpected, but valid, responses. This is the reason for reading
students’ responses to an item one time before actually scoring them.
If achievement of the content is the sole emphasis, ensure that it is achievement, not writing ability, that is evaluated. Sentence structure, grammar, and other aspects of writing should not be considered in scoring a paper unless they are part of the content. The general measurement perspective is that students should be tested only on the material taught in the subject. However, if writing will factor into a student’s grade, the importance of writing skills should be clearly emphasized before students prepare for the test (Alabama Department of Education, 2002).
Time Allocation for Test Items
Not all question items require equal time to answer; the time depends on the type of behavior measured (simple or complex), the instrument, and the method of assessment. The number of questions and the type(s) of questions used both affect the amount of time needed to complete the test (Nitko, 2001).
These estimates provide the information needed to decide what type(s) of questions to use and how many. More true-false questions can be answered in a given period of time than multiple-choice or short answer questions. However, the choice of question types must be based on the level of learning being assessed. We can decide to use true-false and short-answer questions for the knowledge component, and multiple-choice items for the range of behaviors from simple to complex. We assign the lowest point value (1 per question) to the easiest questions. The approximate time needed to respond to each item type is summarized in the table below.
Table 22: Item Types and Time Allotment
True-False questions: 15-30 seconds per question
Multiple choice (recall questions that are brief): 30-60 seconds
More complex multiple choice questions: 60-90 seconds
Multiple choice problems with calculations: 2-5 minutes
Short answer (one word): 30-60 seconds
Short answer (longer than one word): 1-4 minutes
Matching (5 premises, 6 responses): 2-4 minutes
Short essays: 15-20 minutes
Data analyses/graphing: 15-25 minutes
Drawing models/labeling: 20-30 minutes
Extended essays: 35-50 minutes
Adapted from Alabama Department of Education (2002)
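As a planning aid, the time allotments above can be turned into a rough estimate of total testing time. This is a minimal sketch: the per-item times are midpoints (in minutes) of the table's ranges, and the item counts in the example plan are hypothetical.

```python
# Approximate minutes per item, taken as midpoints of the ranges in Table 22.
TIME_PER_ITEM = {
    "true_false": 0.375,      # 15-30 seconds
    "multiple_choice": 0.75,  # 30-60 seconds (brief recall questions)
    "short_answer": 0.75,     # 30-60 seconds (one word)
    "short_essay": 17.5,      # 15-20 minutes
}

def estimated_minutes(item_counts):
    """Return the approximate time (minutes) needed to complete a test plan."""
    return sum(TIME_PER_ITEM[kind] * n for kind, n in item_counts.items())

# Hypothetical test plan: 10 true-false, 20 multiple choice,
# 5 short answer, and 1 short essay.
plan = {"true_false": 10, "multiple_choice": 20, "short_answer": 5, "short_essay": 1}
print(estimated_minutes(plan))  # 40.0 minutes
```

Such an estimate helps check that the planned number of items fits the class period before the test is assembled.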
Summary on Paper-and-Pencil Tests
The paper-and-pencil tests prepared in schools take two general forms: objective (selection and supply) and essay. The table below summarizes the format, the behavior assessed, and the advantages and limitations of the major item formats.
Table 23: Summary of the characteristics of item formats

True-False Items
Format: a declarative sentence to be judged True or False (or Yes or No).
Behavior assessed: simple learning outcomes (rote knowledge).
Advantages: cover a large area of content; quick and easy to mark; easy to construct.
Limitations: susceptible to guessing (50%); do not measure complex learning outcomes.

Matching Items
Format: two columns, premises and responses, of words or phrases for which a match is sought.
Behavior assessed: simple related facts.
Advantages: cover a large area of content; quick and easy to mark.
Limitations: do not measure complex learning outcomes; difficulty of obtaining homogeneous learning tasks.

Multiple Choice Items
Format: a question or incomplete statement followed by several options, one of which is the correct alternative (key).
Behavior assessed: both simple and complex learning outcomes (a variety of outcomes).
Advantages: measure a variety of outcomes; cover a large area of content.
Limitations: difficult to construct; not free from guessing.

Short Answer/Completion (Supply) Items
Format: a direct question or incomplete statement to which the student supplies a word, number, or phrase.
Behavior assessed: simple learning outcomes.
Advantages: the easiest to construct; avoid guessing, since students have to write out the answer.
Limitations: do not measure complex learning outcomes; scoring is not completely objective.

Essay Items
Format: a response limited to about a paragraph (restricted essay) or with no limitation (extended essay).
Behavior assessed: complex outcomes (organizing and synthesizing ideas, use of language, etc.).
Advantages: easy to construct; measure outcomes which cannot be measured by other means; avoid guessing.
Limitations: scoring is subjective and unreliable.

Adapted from ICDR (1999)
6.0. Assembling, Administering, Scoring Tests and Reporting Results
6.1. Assembling, Administering and Scoring Test Items
Getting valid achievement test results is the end product of a systematically controlled series of steps,
beginning with identification of objectives and ending with scoring and interpretation of results.
Although validity is "built in" during the construction of test items, systematic procedures for assembling, administering, scoring, marking, recording, and appraising results are necessary to ensure that the test items function with maximum effectiveness. Lack of attention to these factors has an adverse effect on the validity of test results.
6.1.1. Assembling test items
Assembling the test for use includes recording, reviewing and editing the items, arranging the items in some logical order, preparing clear directions, and reproducing the test.
A. Recording: is placing each item on a separate card (or in a computer file, if available); this provides the flexibility needed in preparing the test for use.
B. Reviewing and editing: is rereading the items after they have been set aside for a few days and asking others to comment on them to check their appropriateness. It is helpful for the reviewer to read and answer each item as if he/she were taking the test. This provides a check on the correct answer and a means of spotting any obvious defects.
The following guiding questions help in carefully evaluating items during review.
Does each test item measure an important learning outcome included in the table of specifications?
Is each item type appropriate for the particular learning outcome to be measured?
Does each item present a clearly formulated task?
Is the item stated in simple, clear language?
Is the item free from extraneous clues?
Is the difficulty of the item appropriate?
Is each test item independent and free from overlap with other items?
Do the items to be included in the test provide adequate coverage of the table of specifications?
Does the item show fairness to all examinees?
Are the items free from sensitive issues (violence, drugs, alcoholism, etc.)?
C. Arranging the test items: is listing the item types, and the items within each type, from simple to complex. Test items may be arranged in sections by item type and in order of ascending difficulty as follows (Mehrens and Lehmann, 1984).
1st - True-False items or alternative response items
2nd - Matching items
3rd - Supply (short answer or completion) items
4th - Multiple-choice
5th - Essay items
Arranging items from simple to complex levels helps to motivate students, to maintain the same mental set, and to facilitate scoring. The following are useful guidelines for arranging items.
For instructional purposes, it is usually desirable to group together items that measure
the same learning outcome.
Where possible, the items should be arranged so that all items of the same type may be
grouped together.
The items should be arranged in order of increasing difficulty
D. Preparing directions for tests: is stating instructions in written or oral form, or both. The directions for a test should be simple and concise, yet contain information concerning each of the following:
The purpose of the test ( intended mission of a test)
Time allowed to complete the test
The basis for answering- type of work expected from students
The procedures for recording the answers- methods of recording the answers.
Example: Directions: This mid-semester test contains 30 multiple-choice items, and you have one
hour to complete the test. For each item, select the answer that best completes the statement, or
answers the question, and circle the letter of that answer.
E. Reproducing the test: is laying out the items for readability through good spacing, sequencing, numbering, and clear illustration of items and materials. When reproducing the test, the teacher should consider the following:
spacing the items not to be crowded,
preparing a column for true and false at the beginning or end of items,
putting the two lists of matching on the same page,
using key lists of multiple choice items, if possible, on the same page,
numbering all items consecutively,
providing numbers for blanks to record answers accordingly for short answer items,
making clear, legible, and accurate all illustrative materials.
6.1.2. Administering Tests
The guiding principle for administering tests is to give all students a fair chance to demonstrate their achievement of the learning outcomes. This means making both the physical and psychological conditions conducive and controlling other factors that interfere with valid testing.
Conducive physical conditions include an adequate work place, light, ventilation, and optimum temperature; freedom from noise; and a good testing time (for instance, tests should not be given just before or after pleasant events, long vacations or holidays, championship games, or big festivals).
Conducive psychological conditions involve creating a positive attitude, good motivation, and other favorable internal states in the examinees. Students cannot perform at their best under excessive tension or anxiety, so avoid practices that create it, such as threatening students with tests if they do not behave, telling students they must work fast in order to finish on time, or warning students to do their best “because this test is important” (Miller, Linn, and Gronlund).
In line with these conditions, consider the following guidelines during test administration.
Common guidelines for administering tests
In administering tests, one is expected to:
Consider the suitability of testing place for the students.
Motivate students or maintain the positive attitudes of the students
Make sure that the students understand the directions and use the answer sheets correctly.
Keep time accurately and inform students of the time left at regular time intervals.
Record significant events during the test to provide supplementary information.
Keep interruptions during the test to a minimum. Avoid distracting students by talking, eating, or standing behind them to read what they are writing.
Avoid giving hints to students who ask individual help for items.
Collect the test material properly.
Allow students to go out after completing the test but not at the last few minutes.
Discourage cheating by a special seating arrangement and proper proctoring/careful supervision.
Techniques to prevent cheating
Take special precautions to keep the test secure during administration
Have students clear the tops of their desks (for adequate work space and to prevent the use of unauthorized materials).
If scratch paper is used (e.g., for math problems), have it turned in with the test.
Proctor the testing session carefully (e.g., walk around the room periodically and observe how the students are doing).
Use a special seating arrangement, if possible (e.g., leave an empty row of seats between students).
Use two forms of the test and give a different form to each row of students (for this purpose, use the same test but simply re-arrange the order of the items for the second form).
Prepare tests that students will view as relevant, fair and useful.
Create and maintain a positive attitude concerning the value of tests for improving learning.
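The two-forms technique above (the same items, re-arranged for the second form) can be sketched in a few lines. This is a minimal illustration; the item texts and the `make_second_form` helper are hypothetical.

```python
import random

def make_second_form(items, seed=42):
    """Return the same items in a shuffled order (Form B)."""
    form_b = list(items)  # copy so Form A is left untouched
    random.Random(seed).shuffle(form_b)  # fixed seed keeps Form B reproducible
    return form_b

form_a = ["Item 1", "Item 2", "Item 3", "Item 4", "Item 5"]
form_b = make_second_form(form_a)
# Both forms contain exactly the same items, only the order differs.
print(sorted(form_b) == sorted(form_a))  # True
```

Because only the order changes, both forms remain equivalent in content and difficulty, satisfying the fairness principle above.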
6.1.3. Scoring Tests
Test scoring is the process of assigning evaluative codes (signs) or statements to performance on tests, tasks, interviews, or other behavior samples. It refers to judging the appropriateness of examinees’ responses to the test items in order to assign marks and show the level of students’ achievement.
The scoring of tests may fall into two categories: the scoring of the objective test items and the scoring
of the subjective/essay test items.
A. Scoring objective tests
Some general considerations for scoring objective test items:
1. Prepare the answer key in advance (prepare the rubric). This helps to save time when scoring the test and to identify questions that need rewording or elimination.
2. Check answer key. If possible have a colleague check your answer key to identify possible
alternative answers or potential problems.
3. Record scores. Before returning the scored papers be sure you have recorded their scores in
your record book.
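The answer-key procedure above can be sketched as a small scoring routine: each response that matches the key earns one point. This is a minimal sketch; the key and the student's responses are hypothetical.

```python
def score_objective(answer_key, responses):
    """Count one point for each response that matches the answer key."""
    return sum(1 for item, correct in answer_key.items()
               if responses.get(item) == correct)

# Hypothetical 5-item key and one student's responses.
answer_key = {1: "B", 2: "D", 3: "A", 4: "C", 5: "B"}
student    = {1: "B", 2: "D", 3: "C", 4: "C", 5: "A"}
print(score_objective(answer_key, student))  # 3 correct out of 5
```

Preparing the key in this machine-readable form also makes it easy for a colleague to check it for alternative answers before scoring begins.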
B. Scoring Subjective /Essay Questions
Essay tests are criticized for the difficulty of scoring them objectively, because of subjectivity (bias) in scoring. The following suggestions for scoring subjective/essay questions greatly increase the reliability of the scores.
Use the most appropriate scoring method.
The two methods commonly used are: Point Method (Analytic Method) and Rating (Global) Method.
1. Point Method (Analytic Method) is a scoring method in which each examinee’s response is compared to the ideal answer.
2. Rating (Global) Method is a method in which each examinee’s response is placed in one of a number of groupings after it is read. For example, if an item is scored out of three, four groupings ranging in value from 0 to 3 are used; after reading each student’s response, the responses are sorted into the 0, 1, 2, and 3 groupings. Evaluating all responses to one question before going on to the next minimizes biases.
Pay attention only to the significant and relevant aspects of the answer. If factors other than
the correctness of the answer (such as spelling, grammar, legibility of answer) are
considered, they should be given a separate score.
Be careful not to let personal idiosyncrasies (unique characteristics) affect scoring.
Apply uniform standards to all papers. Be consistent in scoring/grading (score all responses to a particular question without interruption).
Score blindly. That is, keep examinees’ names out of sight to prevent your knowledge about, or expectations of, the examinee from influencing the score. Shuffle the papers before scoring each question.
Whether students should receive credit for partial knowledge is another difficult problem that deserves some consideration.
Give comments and the correct answer for each question.
6.2. Marking, Referencing, Recording and Reporting Test Results
6.2.1. Marking/grading
Marking or grading is the process of assigning symbols (letter or number grades) to the academic progress or achievement of students. The marks given for students’ academic achievement are usually reported to the school administration in general and to parents in particular.
Designing a good marking scheme helps the teacher to be uniformly fair to all students. Marks should be based on well-constructed assessment tools that are pertinent to the learning outcomes. Furthermore, the weight of the marks or grades assigned to each test must be based on the learning outcomes and the time allotted to cover the subject content.
Types of Marking System
Marking or grading systems for students’ performance vary. In some cases a student’s mark is expressed by a letter grade, such as:
Three-letter scale: O, S, N (O = outstanding, S = satisfactory, N = needs improvement)
Five-letter scale: A, B, C, D, E/F (A = excellent, B = very good, C = good/average, D = satisfactory/low, E/F = unsatisfactory/failure)
Two-letter scale: P and F (P = pass, F = fail)
In other cases a number or percentage marking system (0-100%) is used, or a five-point number scale: 5, 4, 3, 2, 1 (5 = excellent, 4 = very good, 3 = good/average, 2 = satisfactory/low, 1 = unsatisfactory).
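A percentage mark can be mapped onto a five-letter scale with a small lookup function. This is a minimal sketch; the cut-off points below are hypothetical, since a teacher using absolute grading must set and publish the actual criteria in advance.

```python
# Hypothetical cut-off points for converting a 0-100% mark to a letter grade.
CUTOFFS = ((90, "A"), (80, "B"), (60, "C"), (50, "D"))

def letter_grade(percent, cutoffs=CUTOFFS):
    """Return the first letter whose cut-off the mark meets; otherwise F."""
    for cutoff, letter in cutoffs:
        if percent >= cutoff:
            return letter
    return "F"

for mark in (95, 83, 66, 52, 40):
    print(mark, letter_grade(mark))
# prints: 95 A, 83 B, 66 C, 52 D, 40 F
```

Keeping the cut-offs in one table makes the scheme easy to publish to students and parents and easy to adjust between grading periods.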
Teachers need to consider the following points related to marking:
Be sure to mark all assigned work; nothing frustrates students more than spending substantial time on a project and not having the teacher react to it.
Students should be provided with appropriate feedback on time.
Provide specific comments about students’ work based on their mark. Teachers’ comments
should indicate precisely where an error is and, if necessary, how to correct it.
A successful teacher gives mark-related comments carefully so as not to hurt or discourage sensitive, developing personalities.
Despite the points above, marks do have certain limitations, which can be summarized as follows:
1. No matter how objective the marking scheme may be, marks merely reflect a relative assessment, not an objective measure.
2. The need to achieve high mark often results in unhealthy competition and cheating among
students, and the marks awarded them may not reflect their abilities.
3. Marks often de-motivate low performing students.
6.2.2. Approaches for Referencing and Interpretation
The referencing of an assessment is the basis of judgment. There are three main ways of referencing in
1. Absolute Grading: Absolute grading, or criterion-referenced grading, consists of comparisons between a student’s performance and some previously defined criteria; a criterion-referenced test compares a student’s score to a fixed criterion established when the test was developed. When using absolute grading, one must be careful in designing the criteria that will be used to determine the students’ grades.
2. Relative Grading: Relative grading, or norm-referenced grading, consists of comparisons between a student’s score and the scores of others in the same class (the norm group). In norm-referenced tests, the results of a given student are compared with those of other students, and the meaning of the scores is expressed in terms of percentile ranks.
3. Growth Grading (self-referenced grading): This is a comparison of an individual to him/herself. Although generally unsuitable for selective purposes, self-referencing can be extremely useful for diagnostic or formative purposes. It consists of comparisons between a student’s performance and his/her perceived ability/capability (McAlpine, 2002). The table below summarizes the three grade referencing types.
Table 24: Example of referencing interpretations for letter grades

Absolute (criterion-referenced) interpretations:
A: Outstanding or Advanced; complete knowledge of all content and skills; mastery of all learning outcomes.
B: Very Good or Proficient; complete knowledge of most content and skills; mastery of most learning outcomes.
C: Acceptable or Basic; command of only the basic content or skills; mastery of some learning outcomes.
D: Lacking; little knowledge of most content; mastery of only a few learning outcomes.
E/F: Unsatisfactory; lacks knowledge of content; no mastery of learning outcomes.

Growth (self-referenced) interpretations:
B: improvement on most or all of the learning outcomes.
C: improvement on some of the learning outcomes.
D: Lacking; minimal progress on most learning outcomes.

Relative (norm-referenced) interpretations:
A: Outstanding; among the highest performers in the norm group.
B: Very Good; performs above the average of the class.
C: Average; performs at the class average.
D: Poor; below the class average.
E/F: Unsatisfactory; far below the class average; among the worst in the class.
Adapted from Nitko (1996) and McMillan (1997)
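For relative (norm-referenced) grading, the meaning of a score can be expressed as a percentile rank within the norm group. This is a minimal sketch; the class scores are hypothetical.

```python
def percentile_rank(score, class_scores):
    """Percentage of scores in the norm group that fall below the given score."""
    below = sum(1 for s in class_scores if s < score)
    return 100.0 * below / len(class_scores)

# Hypothetical norm group of ten class scores.
class_scores = [45, 50, 55, 60, 65, 70, 75, 80, 85, 90]
print(percentile_rank(70, class_scores))  # 50.0: half the class scored below 70
```

Note that the same raw score yields a different percentile rank in a different class; this is exactly the sense in which relative grades depend on the norm group.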
Guidelines for Effective and Fair Grading
1. Grading should not be a mystery. Teachers should discuss with students the grading framework to be employed (e.g., absolute, relative, growth) and the weighting of each assessment. The grading scheme should be public, in print, and shared with students and parents (as appropriate).
2. Grades should reflect, and be based only on, a student's level of achievement. Grades should be a composite of only those assessments that validly measure achievement. If a teacher wishes to convey information regarding a student’s aptitude, improvement, attendance, or effort, this information should be reported separately.
3. Grades should be based on a composite of several valid assessments. Since a grade is designed to
reflect a student's total achievement, all assessment devices that were employed during the
grading period (e.g., tests, quizzes, labs, projects) should be incorporated into the final grade.
Each assessment device conveys some information concerning a student's level of achievement
and should be included.
4. When combining several valid assessments, each assessment should be appropriately weighted.
While several sources of achievement data may be combined to attain a final grade, each source
may not convey the same amount of information relative to achievement. Thus, each assessment
device should be weighted to reflect its amount of contribution or importance to the final grade
(Gronlund, 1998).
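Guideline 4 above, weighting each assessment when combining them into a final grade, can be sketched as a weighted average. The assessment names, scores, and weights below are hypothetical; the weights must sum to 1.

```python
def final_grade(scores, weights):
    """Weighted average of assessment scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[name] * w for name, w in weights.items())

# Hypothetical scores (in %) and the weight each carries in the final grade.
scores  = {"tests": 80, "quizzes": 70, "project": 90, "final_exam": 60}
weights = {"tests": 0.30, "quizzes": 0.20, "project": 0.20, "final_exam": 0.30}
print(final_grade(scores, weights))  # 80*0.3 + 70*0.2 + 90*0.2 + 60*0.3
```

Making the weights explicit in one place also satisfies guideline 1: the scheme can be printed and shared with students before the grading period begins.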
6.2.3. Recording and Reporting Students’ Progress and Achievement
Recording students’ achievement is an important aspect of assessment. Reports on students’ progress and performance may be misleading and incomprehensible unless records are properly kept. According to Oggunigi (1984), the major records to be kept are the teacher’s record book, the student’s cumulative record card, and the transcript.
A. The teacher’s record book: This is a permanent record book which every teacher must keep for his/her class. It is expected to contain a detailed scheme of work, an accurate diary or daily record of work, and progress reports of the students.
B. The student’s cumulative record card: This contains the most important available information on the student’s development throughout the primary or secondary school course. The following main information should be included in the student’s cumulative record card:
 Personal information about the student,
 Periodic report of academic performance
 Report on his/her character and industriousness,
 Report on his /her social and physical development, and
 Report on the summary of progress in all areas of the curriculum.
C. The transcript: This includes the results of continuous and summative assessments, which add up to 100%. Transcripts come in different formats.
Points to be considered in providing comments during reporting
The use of specific comments encourages positive communication between teachers, parents,
and students.
Written in a positive and informative manner, comments can address a variety of issues while
maintaining the dignity of the child. This is especially important if a child has had difficulty
with a particular subject area or controlling his/her behavior over an extended period of time.
When teachers write comments on report cards, they need to be cognizant of the fact that each
child has a different rate of social and academic development. Therefore, comments should
not portray a child's ability as fixed and permanent (Shafer, 1997). Such comments do not
offer any reason to believe that the child will be successful if he/she attempts to improve.
Negative words/phrases; such as: unable, can’t, won’t, always, never should be avoided or
used with caution (Shafer, 1997).
7.0. Describing and Summarizing Test Scores
After scoring tests and other assessment tools and getting numerical data, the next step could be
describing and summarizing the scores for ease of communication with others. To this effect one can
employ different ways of describing (tables and graphs) and summarizing the scores (measures of
central tendency, measures of variability, and measures of relationship). This unit briefly discusses frequency tables, graphs, and measures of central tendency, since they are prerequisites for describing and summarizing scores.
7.1. Ways of Describing Test Scores
Describing test scores involves procedures for organizing and summarizing sample data so that we can
communicate and describe their important characteristics. We can apply different ways of describing
test scores. The common ways are expressing the data in the form of frequency distribution,
histogram, and frequency polygon.
7.1.1. Frequency distribution
The most common way to organize scores is to create a simple frequency distribution, which shows the number of times each score occurs in a set of data. To find the frequency (f) of a score, count how many times the score occurs; if three participants scored 6, then the frequency of 6 is 3. Example: suppose a teacher records 18 students’ raw scores corrected out of 20: 14, 13, 14, 15, 11, 15, 13, 11, 12, 13, 14, 13, 14, 15, 17, 14, 15, 14. In this disorganized arrangement, it is difficult to make sense of these scores. See what happens when we arrange them into the simple frequency table shown below.
Table 25: Score and frequency
Score:         11  12  13  14  15  17
Frequency (f):  2   1   4   6   4   1
Total (N) = 18
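The frequency table can be produced directly from the 18 raw scores, for example with Python's `collections.Counter`. A minimal sketch using the scores listed above:

```python
from collections import Counter

# The 18 raw scores recorded by the teacher (out of 20).
scores = [14, 13, 14, 15, 11, 15, 13, 11, 12, 13, 14, 13, 14, 15, 17, 14, 15, 14]
freq = Counter(scores)  # maps each score to its frequency

for score in sorted(freq):
    print(score, freq[score])
# 11 2 / 12 1 / 13 4 / 14 6 / 15 4 / 17 1
print("N =", len(scores))  # N = 18
```

Counting by machine avoids the tallying errors that are easy to make when building a frequency table by hand.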
From this table the teacher can see that the most frequent score is 14, that one student obtained the highest score (17), and that the lowest score (11) was obtained by two students. The table also shows that most of the students scored 14 or above.
7.1.2. Graphs
When we talk of a frequency distribution, we often imply a graph. A graph will almost always clarify
the information presented in a grouped frequency distribution. Essentially, it shows the relationship
between each score and the frequency with which it occurs. We ask, “For a given score, what is its
corresponding frequency?”, so we place the scores on the x-axis and frequency on the y- axis.
Bar Charts/Graphs
A simple bar chart is a good place to start. Along the horizontal axis you write out the different categories in any order that is convenient. The height of the bar above each category should be proportional to the number of cases that fall into that category. In a bar graph, a vertical bar is centered over each score on the axis, and adjacent bars do not touch. The figure below shows a bar graph of a simple frequency distribution.
Figure 11: Bar graph of score and frequency
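Without graphing software, the same information can be shown as a plain-text bar chart: one row per score, with the bar's length proportional to the frequency. A minimal sketch using the frequency data above:

```python
from collections import Counter

# The 18 raw scores from the frequency-distribution example.
scores = [14, 13, 14, 15, 11, 15, 13, 11, 12, 13, 14, 13, 14, 15, 17, 14, 15, 14]
freq = Counter(scores)

# One bar per score; each '#' represents one student.
for score in sorted(freq):
    print(f"{score:>2} | {'#' * freq[score]}")
```

A quick chart like this makes the shape of the distribution (here, a peak at 14) visible at a glance.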
A histogram is similar to a bar graph, except that it is used for a continuous distribution of scores; in a histogram, adjacent bars touch. For example, say that we measured the number of parking tickets some people received. Again, the height of each bar indicates the corresponding score’s frequency. Although you cannot have a fraction of a ticket, this ratio variable is theoretically continuous (e.g., you can talk about an average of 3.14 tickets per person). By having no gap between the bars in our graph, we communicate that there are no gaps when measuring this variable.
Figure 12: Histogram of Score and Frequency
7.2. Measures of Central Tendency
A measure of central tendency is a statistical measure that determines a single score defining the center of a distribution. The goal of central tendency is to find the single score that is most typical or most representative of the entire group. There are several ways to define central tendency; this section defines the three most common measures: the mean, the median, and the mode (Popham, 1999).
Arithmetic Mean: The arithmetic mean is the most common measure of central tendency. It is simply
the sum of scores divided by the total number of respondents. The formula for the mean is shown below:
M = ΣX / N
Where,
ΣX is the sum of all the scores in the sample and
N is the number of scores in the sample.
As an example, the mean of the numbers 1, 2, 3, 6, and 8 is (1 + 2 + 3 + 6 + 8)/5 = 20/5 = 4, regardless
of whether the numbers constitute the entire population or just a sample from the population. The
arithmetic mean is also often called the "average."
Median: The median is the midpoint or middle score in a distribution of scores: the same number of
scores is above the median as below it.
Computation of the Median: To find the median in a series of scores,
1. Arrange the scores from lowest to the highest.
2. With an odd number of scores, the score in the middle position is the approximate median.
For example, for the nine scores 1, 2, 3, 3, 4, 7, 9, 10, and 11, the score in the middle position
is the fifth score, so the median is the score of 4.
3. On the other hand, if N is an even number, the average of the two scores in the middle is the
approximate median. For example, for the ten scores 3, 8, 11, 11, 12, 13, 24, 35, 46, and 48,
the middle scores are at position 5 (the score of 12) and position 6 (the score of 13). The
average of 12 and 13 is 12.5, so the median is approximately 12.5. The median can also be
thought of as the 50th percentile score.
Mode: The most frequently occurring score(s) is called the mode. For example, say we’ve
collected some test scores and arranged them from lowest to highest: 2, 3, 3, 4, 4, 4, 4, 5, 5, and 6.
The score of 4 is the mode because it occurs more frequently than any other score. A distribution could
have one mode (unimodal), two modes (bimodal), and so on.
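The three measures can be checked with Python's statistics module, using the worked examples from this section:

```python
import statistics

# Mean of the worked example: (1 + 2 + 3 + 6 + 8) / 5
mean = statistics.mean([1, 2, 3, 6, 8])
print(mean)            # 4

# Median with an odd number of scores: the score in the middle position
med_odd = statistics.median([1, 2, 3, 3, 4, 7, 9, 10, 11])
print(med_odd)         # 4

# Median with an even number of scores: average of the two middle scores
med_even = statistics.median([3, 8, 11, 11, 12, 13, 24, 35, 46, 48])
print(med_even)        # 12.5

# Mode: the most frequently occurring score
mode = statistics.mode([2, 3, 3, 4, 4, 4, 4, 5, 5, 6])
print(mode)            # 4
```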
7.3. Measures of variability
Measures of variability /dispersion are measures that tell us about the dispersion of scores in a
distribution. Measures of variability describe the extent to which scores in a distribution differ from
each other. Computing a measure of variability is important because without it a measure of central
tendency provides an incomplete description of a distribution. Measures of variability communicate
the differences among the scores, how consistently close to the mean the scores are, and how spread
out the distribution is. The common measures of variability are the range, the variance, and the standard deviation.
The Range: One way to describe variability is to determine how far the lowest score is from the
highest score. The descriptive statistic that indicates the distance between the two most extreme scores
in a distribution is called the range. It is the difference between the highest and the lowest scores in a
distribution. The formula for computing the range is: Range = Highest score – Lowest score .
For example, the scores of 0, 2, 6, 10, and 12 have a range of 12 - 0 = 12. The less variable scores of 4,
5, 6, 7, and 8 have a range of 8 - 4 = 4. The perfectly consistent sample of 6, 6, 6, 6, 6 has a range of
6 - 6 = 0. Thus, the range does communicate the spread in the data. However, the range is a rather crude
measure: because it involves only the two most extreme scores, it is based on the least typical and often
least frequent scores. Therefore, we usually use the range as our sole measure of variability only with
nominal or ordinal data. It is also informative to report the range along with the statistics that are used
with interval and ratio scores.
The variance and the standard deviation: The other measures of variability are the variance and the
standard deviation. Both are used to describe how the scores differ from each other. We calculate them,
however, by measuring how much the scores differ from the mean, because the mean is the center of a
distribution: when scores are spread out from each other, they are also spread out from the mean, and
when scores are close to each other, they are also close to the mean.
Standard Deviation (S or SD): Standard deviation provides information on the average distance
that each score in a distribution is away from the mean. The measure of variability that more directly
communicates the “average of the deviations” is the standard deviation.
The formula to find S is:

S = √( Σ(Xi − M)² / N )

Where, S = standard deviation
Xi = the ith score
M = mean of the scores
N = total number of scores
Note: A larger S (Standard Deviation) implies more variability in scores and a smaller S implies that
the scores are close to each other or more similar.
Variance: Variance is simply the square of the standard deviation; that is, Variance = S² = SD².
When a frequency distribution is given, the formulas include the frequency f of each observation:

S² = Σf(X − M)² / N  and  S = √( Σf(X − M)² / N )

The sample variance is the average of the squared deviations of scores around the sample mean. We
compute the variance by finding the average squared deviation. The symbol for the sample variance is
S²X. Always include the squared sign because it is part of the symbol. The capital S indicates that we
are describing a sample, and the subscript X indicates that it is computed for a sample of X scores.
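These formulas can be verified in Python on the scores 0, 2, 6, 10, and 12 used in the range example (mean M = 6). Since the manual's formulas divide by N, the population functions pstdev and pvariance are the matching library calls:

```python
import statistics

scores = [0, 2, 6, 10, 12]

# Range = highest score - lowest score
rng = max(scores) - min(scores)
print("range:", rng)                      # 12

# SD by the manual's formula: S = sqrt(sum((Xi - M)^2) / N)
sd = statistics.pstdev(scores)
# Variance = S squared
variance = statistics.pvariance(scores)

print("SD:", round(sd, 2))                # 4.56
print("variance:", variance)              # 20.8
```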
8.0. Evaluating the Test and Test items
8. 1. Attributes of Good Test
As assessment is central to the recognition of achievement, the quality of the assessment is therefore
important to provide credible certification. Credibility in assessment is assured through assessment
procedures and practices being governed by certain principles. The four most commonly used quality
indicators of tests are validity, reliability, fairness and practicality. All four are essential to effective
testing and should be understood by anyone working on tests.
8.1.1. Validity
Validity in assessment refers to measuring what it is supposed to measure (fitness for purpose), be it
knowledge, understanding, subject content, skills, information, behaviors, etc. Validity in assessment
requires that the assessment procedures, instruments, and materials match what is being assessed.
The assessment must assess the learner’s ability to perform. Therefore, the assessment should stay
within the parameters of what is required – not less than the unit standard or qualification, or more
than the unit standard or qualification.
In order to achieve validity in the assessment, assessors should:
• state clearly what outcome(s) is/are being assessed,
• use an appropriate type or source of evidence,
• use an appropriate method of assessment,
• select an appropriate instrument of assessment.
When designing an assessment, the assessor must look at the specific outcome(s), the assessment
criteria and the range so as to determine the kind and amount of evidence required from the learner.
The kind and amount of evidence will also determine the assessment method and instruments to be
selected and used. The assessment criteria, the range, contexts and underpinning knowledge indicated
in the unit standard, will inform these decisions.
Factors affecting Validity
There are many factors that can influence or affect any of the validity measures. Some of them are:
1. Inadequate or absent test planning, such as writing test items without using a table of specification.
2. Factors in the test itself affect validity, such as unclear directions, vocabulary difficulty,
inappropriate level of test difficulty (too difficult or too easy), poorly constructed items (due to,
say clues), ambiguity, test items inappropriate for the objectives being measured, test too short,
improper arrangement of items, and identifiable patterns of answers.
3. Factors in test administration and scoring could be mentioned as influential factors of validity,
such as insufficient time to complete the test, unfair aid to individual pupils, cheating during
examination, and unreliable scoring, especially of essay question answers
4. Factors in the student's response at the time of assessment, such as emotional disturbance, a
response set, and working for speed rather than accuracy.
8.1.2. Reliability
Reliability is defined as the consistency of test scores across evaluators or over time. It refers to the
same judgments being made in the same or similar contexts each time a particular assessment for
specified stated intentions is administered. It shows the extent to which a measurement instrument
yields consistent, stable, and uniform results over repeated observations or measurements when the
test is administered under the same testing conditions (Miller, Linn and Gronlund, 2009).
Assessment results should not be perceived to have been influenced by variables such as:
• Assessor bias in terms of the learner’s gender, ethnic origin, sexual orientation, religion,
like/ dislike, appearance and such like
• Different assessors interpreting unit standards or qualifications inconsistently
• Different assessors applying different standards
• Assessor stress and fatigue
• Insufficient evidence gathered
• Assessor assumptions about the learner, based on previous (good or bad) performance
To avoid such variance in judgment (results), assessors should ensure that each time an assessment is
administered, the same or similar conditions prevail, and that the procedures, methods, instruments
and practices are the same or similar.
Assessors should use checklists, or other objective forms of assessment, in addition to other
assessment instruments. Internal and external moderation, with clear and systematic recording
procedures for assessment, should be in place.
Factors affecting reliability
Factors that can affect the reliability of a test are:
Length of test: refers to the number of test items in a test. The longer the test, the higher the
reliability, because a longer test provides a more adequate coverage of the behavior being measured
and the scores are apt to be less distorted by chance factors such as guessing.
Difficulty of test: refers to the easiness or difficulty of a test to the group. Tests that are too
easy or too difficult for the group members will tend to provide scores of low reliability. This
is due to the fact that both easy and difficult tests result in a restricted spread of scores.
Objectivity of a test: refers to the degree to which equally competent scorers obtain the same
results. If the test items are of the objective type (e.g., multiple choice), the judgment or
opinion of the scorers does not influence the resulting scores. When such highly objective
procedures are used, the reliability of the test results is not affected by the scoring procedures.
Validity and Reliability in a Test
As discussed so far, validity is about the accuracy and appropriateness of the interpretations and
inferences (evaluation) drawn from the results of a test (measurement). Whereas, reliability is about
the consistency of results (measurement) obtained from an assessment, based on the control,
reduction, and/or elimination of measurement error.
The Relationship of Validity and Reliability with Analogy
The relationship between reliability and validity is sometimes confusing to persons who encounter
these terms for the first time. Reliability (consistency) of measurement is needed to obtain valid
results, but we can have reliability without validity. That is, we can have consistent measures that
provide the wrong information or are interpreted inappropriately. The target-shooting illustration in the
figure below depicts that reliability is a necessary but not sufficient condition for validity.
Figure 13: Reliability (consistency/repeatability) and Validity (accuracy)

"Right-pull shooting": reliable but not valid
"Scatter shoot": not reliable and not valid
"Sharp shoot": both valid and reliable
8.1.3. Fairness and Wash-back Effect
Fairness
The issue of fairness can be seen as the task of treating all learners equally and giving them an equal
opportunity to contribute to the assessment process to demonstrate their ability. Therefore, all students
taking tests should have reasonable opportunities to manifest their knowledge and ability without any
difficulty. Teachers should keep their personal feelings from interfering with the fair assessment of
the students or from biasing the assignment of scores. The principles of fairness or
equity require that students should be given plenty of chances to show what they can do, and that their
knowledge and skills are being assessed through multiple methods (Miller, Linn and Gronlund, 2009).
Potential Sources of Bias and Distortion
1. Potential barriers to accurate assessment common to all methods
A. Barriers that can occur within the student
• Language barriers
• Physical handicap
• Emotional upset
• Peer pressure to mislead assessor
• Poor health
• Lack of motivation at time of assessment
• Lack of test wiseness (understanding how to take tests)
• Lack of personal confidence leading to evaluation anxiety
B. Barriers that can occur within the assessment context
• Noise distractions
• Poor lighting
• Discomfort
• Lack of proper equipment
• Lack of rapport with assessor
C. Barriers that arise from the assessment itself (regardless of method)
• Directions lacking or vague
• Poor reproduction of test questions
• Poorly worded questions
• Missing information
2. Potential barriers to accurate assessment unique to each method
A. Barriers with multiple-choice tests
• Lack of reading skills
• Incorrect bubbling on answer sheet
• More than one correct response choice
• Clues to the answer in the item or in other items
• Incorrect scoring key
B. Barriers with extended written response assessments
• Lack of reading or writing skills
• Biased scoring due to stereotyping of students
• No scoring criteria
• Inappropriate scoring criteria
• Insufficient time or patience to read and score carefully
• Evaluator untrained in applying scoring criteria
• Students don’t know the criteria by which they’ll be judged
C. Barriers with performance assessment
• Lack of reading skills
• Insufficient time or patience to observe and score carefully
• Inappropriate or nonexistent scoring criteria
• Evaluator untrained in applying scoring criteria
• Student doesn’t feel safe
• Unfocused or unclear tasks
• Bias due to stereotypic thinking
• Tasks that don’t elicit the correct performance
• Biased tasks
• Insufficient sampling
• Students don’t know the criteria by which they’ll be judged
D. Barriers when using personal communication
• Not sampling enough performance
• Problems with accurate record keeping (Arter & Busick, 2001).
Wash-back Effect
Wash-back is an aspect of impact, or a facet of consequential validity; it is sometimes referred to as
backwash. It occurs when the testing instrument, rather than the statement of desired learner outcomes,
determines the nature of the curriculum and the course of instruction. Achievement tests, national
entrance examinations, and the Oral Proficiency Interview all exert wash-back effects.
A positive wash-back effect occurs when the assessment procedures correspond to the learning
outcomes. For instance, if a program sets a series of communicative performance objectives and tests
the students using performance assessments (e.g., role plays, interviews) and personal-response
assessments (e.g., self-assessments, conferences), a powerful and positive wash-back effect can be
created in favor of the communicative performance objectives. Positive wash-back occurs when the
tests measure the same types of materials and skills that are described in the objectives and taught in
the courses.
8.1.4. Practicability
Practicability refers to ensuring that assessments take into account the available financial resources,
facilities, equipment and time. Assessments that require elaborate arrangements for equipment and
facilities, as well as being costly, will make the assessment system fail. Where the ideal assessment
requires specialized equipment and facilities, such assessment could be done by means of a simulation
or by means of collecting evidence in the school. To conclude, the qualities of a good test are validity, reliability, fairness, and practicability.
8.2. Improving the Quality of Test Items through Item Analysis
After a test has been administered and scored, it is usually desirable to evaluate the effectiveness of
the items. This is done by studying the students' responses to each item. When formalized, the
procedure is called item analysis, and it provides information concerning how well each item in the
test functioned.
Item analysis is made primarily to improve the quality or effectiveness of items and to determine
which items will be kept for final or future use of a test. As exemplified by Kehoe (1995), there are a
variety of techniques for performing an item analysis, which can be both quantitative and qualitative.
The former focuses on issues related to the measurement of item difficulty and item discrimination.
The latter primarily addresses the content of the test, such as content validity, to enhance the quality
of criterion-referenced tests.
8.2.1. Quantitative item analysis
Quantitative item analysis is a numerical method or technique for analyzing test items employing
student-response alternatives or options that enable us to enhance the quality or utility of an item. It is
ideally suited for examining the usefulness of multiple-choice formats, since it can be done by
identifying distracters or response options that are not doing what they are supposed to do (Miller,
Linn, Gronlund, 2009). Similarly, constructed-response items like essays can also be analyzed for
their difficulty and discrimination power.
The multiple choice item-analysis result provides the following information:
1. The difficulty level (DL) of the item.
2. The discriminating power (DP) of the item.
3. The effectiveness of each alternative.
Thus, item-analysis information can tell us if an item was too easy or too hard, how well it
discriminated between high and low scorers on the test, and whether all of the alternatives functioned
as intended. Item-analysis data also helps us detect specific technical flaws, and thus provides further
information for improving test items.
Even if we have no intention of reusing the items, item analysis has severa1 benefits.
1. It provides useful information for class discussion of the test
2. It provides data that helps students improve their learning by revealing errors and
misconceptions, and to provide a remedial work.
3. It provides insights and skills that lead to the preparation of better tests in the future.
Item-analysis procedure for Multiple Choice
There are a number of different item-analysis procedures that might be applied to tests. The following
steps outline a simple but effective procedure. Let us use, as an example, 30 test papers to illustrate
the steps:
1. Arrange all 30 test papers in order from the highest score to the lowest score.
2. Select approximately 27% of the papers with the highest scores and call this the upper group (8
papers). Select the same number of papers with the lowest scores and call this the lower group (8
papers).
3. Set the middle group of papers aside (14 papers). Although these could be included in the analysis,
using only the upper and lower groups simplifies the procedure.
4. For each item, count the number of students in the upper group who selected each alternative.
Make the same count for the lower group.
5. Record the count from step 4 on a copy of the test, in columns to the left of the alternatives to
which each count refers. The count may also be recorded on the item card or on a separate
sheet, as follows:
Item 1. Alternatives (count of students choosing each; * marks the correct answer):

           A     B*    C     D     E
Upper 8          4
Lower 8          2
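Steps 1 to 3 can be sketched in Python; the 30 scores below are hypothetical, since the manual does not list the actual papers:

```python
# Sort the 30 papers from highest to lowest score, then take the
# top and bottom 27% as the upper and lower groups
scores = sorted(range(41, 71), reverse=True)   # 30 hypothetical scores
n_group = round(0.27 * len(scores))            # 27% of 30 -> 8 papers

upper_group = scores[:n_group]                 # 8 highest-scoring papers
lower_group = scores[-n_group:]                # 8 lowest-scoring papers
middle = scores[n_group:-n_group]              # 14 papers set aside

print(len(upper_group), len(lower_group), len(middle))   # 8 8 14
```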
6. Estimate item difficulty, by determining the percentage of students who answered the item
correctly. The simplest procedure is to base this estimate only on those students included
in the item-analysis groups.
Thus, sum the number of students in the upper and lower groups (8 + 8 = 16); sum the number of
students who selected the correct answer (for item 1, above, 4 + 2 = 6); and divide the second sum by
the first and multiply by 100, as follows: Index of Item Difficulty = 6/16 x 100 = 37.5%
Although our computation is based on the upper and lower groups only, it provides a close
approximation of the estimate that would be obtained with the total group. Thus, it is proper to say that
the index of difficulty for this item is 37.5 percent (for this particular group).
Note that since "difficulty" refers to the percentage of students answering the item correctly, the
smaller the percentage figure, the more difficult the item.
The formula for computing the item difficulty level is as follows: DL = R/T x 100
Where DL = the percentage of students who answered the item correctly (the difficulty level);
R = the number of students who answered the item correctly; and
T = the total number of students who tried the item.
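The DL formula can be sketched as a small Python helper and checked against the worked example (6 correct answers out of the 16 papers analyzed):

```python
def difficulty_level(num_correct, num_tried):
    """DL = R / T x 100, as defined in the manual."""
    return num_correct / num_tried * 100

# Worked example: 4 + 2 = 6 correct answers out of 8 + 8 = 16 papers
print(difficulty_level(6, 16))   # 37.5
```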
7. Estimate item discriminating power, by comparing the number of students in the upper and lower
groups who answered the item correctly. Note in our sample item above those 4 students in the upper
group and 2 students in the lower group selected the correct answer. This indicates positive
discrimination, since the item differentiates between students in the same way that the total test score
does. That is, students with high scores on the test (the upper group) answered the item correctly more
frequently than students with low scores on the test (the lower group).
Although analysis by inspection may be all that is necessary for most purposes, an index of
discrimination can easily be computed. Simply subtract the number in the lower group who answered
the item correctly from the number in the upper group who answered the item correctly, and divide by
the number in each group. For our sample item, the computation would be as follows: Index of Item
Discriminating Power = (4 - 2)/8 = 0.25. Thus, the formula for computing item discriminating power
(DP) is as follows:

DP = (Ru - RL) / (1/2 T)

Where DP = the index of discriminating power; Ru = the number in the upper group who
answered the item correctly; RL = the number in the lower group who answered the item correctly;
and T = the total number of students included in the item analysis.
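The DP formula can likewise be sketched in Python and checked against the sample item (4 upper-group and 2 lower-group students answered correctly, 16 papers analyzed):

```python
def discriminating_power(upper_correct, lower_correct, total_in_analysis):
    """DP = (Ru - RL) / (1/2 T), as defined in the manual."""
    return (upper_correct - lower_correct) / (total_in_analysis / 2)

# Worked example: 4 correct in the upper group, 2 in the lower, T = 16
print(discriminating_power(4, 2, 16))   # 0.25

# Maximum positive discrimination: all 8 upper correct, no lower correct
print(discriminating_power(8, 0, 16))   # 1.0
```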
The discriminating power of an item is reported as a decimal fraction; maximum positive
discriminating power is indicated by an index of 1.00. This is obtained only when all students in the
upper group answer correctly and no one in the lower group does. For our illustrative upper and
lower groups of 8, the computation for an item with maximum discriminating power would be as
follows: DP = (8 - 0)/8 = 1.00
Note that this item is at the 50 percent level of difficulty (the upper 8 answered it correctly; and the
lower 8 missed it). This explains why test makers are encouraged to prepare items at the 50 percent
level of difficulty for tests. It is only at this level that maximum discrimination is possible. Zero
discriminating power (.00) is obtained when an equal number of students in each group answer the
item correctly. Negative discriminating power is obtained when more students in the lower group
than in the upper group answer the item correctly. Both types of items should be removed from tests
and then discarded or improved.
8. Determine the effectiveness of the distractors, by comparing the number of students in the upper
and lower groups who selected each incorrect alternative. A good distracter will attract more
students from the lower group than the upper group.
Thus, in step 5 of our illustrative item analysis it can be seen that alternatives A and D are
functioning effectively; alternative C is poor since it attracted more students from the upper group,
and alternative E is completely ineffective since it attracted no one. An analysis such as this is
useful in evaluating a test item, and, when combined with an inspection of the item itself, it
provides helpful information for improving the item.
The above steps for analyzing items can be modified to fit particular situations. In some cases
inspecting the data, rather than computing the difficulty and discriminating power, may be all that is
necessary. Also, in selecting the upper and lower groups it may be desirable to use the top and bottom
25 percent if the group is large, or the upper and lower halves if the group is small. The important
thing is to use a large enough fraction of the group to provide useful information. Selecting the top and
bottom 27 percent of the group is recommended for more refined analysis and applying other
statistical refinements.
Most of the procedures that are employed for multiple-choice item analysis are also applicable to
essay-type items, except distracter analysis. Since essay items are subjective in nature, students may
approach an item from different angles based on their level of understanding. For this reason, they
may not score the same mark for an item as they do for objective test items. The following formulas
can help us analyze essay test items.
Difficulty Level (DL) = (Sum of the upper group scores + sum of the lower group scores) /
(Maximum score point for the item x total group size (U + L)) x 100

Discrimination Power (DP) = (Sum of the upper group scores - sum of the lower group scores) /
(Maximum score point for the item x 1/2 total group size (U + L))
For example: Let us assume 20 students took an Essay type item which has 3 points. After the teacher
scored all the students’ work on the specific item, he or she arranged students’ mark from the highest
to the lowest. Then, like multiple choice item analysis, he/she may take the upper group and the lower
groups proportionally. From the upper group, 6 students scored the highest point (3) and 4 of them
scored 2 points; from the lower group, 5 students scored 2 points and the remaining 5 scored 1 point.
Now let us calculate the difficulty level (DL) and discrimination power (DP) of the item as follows:

The sum of the upper group scores = (6 x 3) + (4 x 2) = 26 points
The sum of the lower group scores = (5 x 2) + (5 x 1) = 15 points

DL of the item = [(6 x 3) + (4 x 2) + (5 x 2) + (5 x 1)] / (3 x 20) x 100 = (26 + 15)/60 x 100 = 68%

DP of the item = [(6 x 3) + (4 x 2) - ((5 x 2) + (5 x 1))] / (3 x 1/2 x 20) = (26 - 15)/(3 x 10) = 11/30 = 0.37

Based on the above information, the item is reasonably good, since the difficulty level is 68% and the
discrimination index is 0.37, which fulfills the characteristics of a good item as described in Tables 26 and 27.
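The essay-item formulas can be sketched in Python and checked against this worked example:

```python
def essay_dl(upper_sum, lower_sum, max_point, total_students):
    """Essay difficulty level, in percent, per the manual's formula."""
    return (upper_sum + lower_sum) / (max_point * total_students) * 100

def essay_dp(upper_sum, lower_sum, max_point, total_students):
    """Essay discrimination power, per the manual's formula."""
    return (upper_sum - lower_sum) / (max_point * total_students / 2)

upper = 6 * 3 + 4 * 2   # 26 points scored by the upper group
lower = 5 * 2 + 5 * 1   # 15 points scored by the lower group

print(round(essay_dl(upper, lower, 3, 20)))      # 68
print(round(essay_dp(upper, lower, 3, 20), 2))   # 0.37
```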
8.2.2. Interpreting item-analysis data
Since a relatively small number of students are used when classroom tests are analyzed, item-analysis
information should be interpreted with great caution. Both the difficulty and the discriminating power
of an item can be expected to vary from one group to another.
Thus, it does not seem wise to set a minimum level of discriminating power for the selection of items,
or to distinguish between items on the basis of small differences in their indexes of discrimination.
Other things being equal, we should favor items at the 50 percent level of difficulty and items with the
highest discriminating power. However, the tentative nature of our data requires that we allow for a
wide margin of error. If an item provides a positive index of discrimination, if all of the alternatives
are functioning effectively, and if the item measures an educationally significant outcome, it should be
retained and placed in an item file for future use.
Table 26: The Difficulty Level (DL) of an item and its interpretation

DL in %       Interpretation of the item
> 75%         Very easy item
40% - 75%     Reasonably good/average item
26% - 39%     Difficult item
< 25%         Very difficult item
As the percentage gets lower than 40%, the item becomes more difficult; as the percentage gets higher
than 75%, the item becomes easier. That is, items with higher indices of difficulty are easy items,
whereas items with lower indices of difficulty are difficult items. The fifty percent (50%) level of
difficulty (DL) is assumed to be the average level of difficulty. A difficulty level of 25% - 75% is
assumed to be acceptable; levels below 25% and above 75% are not acceptable for classroom
achievement tests. Generally, the difficulty level of a good item should fall between 40% and 60%,
and its discriminating power should exceed 0.4.
Table 27: The Discriminating Power (DP) of an item and its interpretation

Index of DP       Item evaluation
0.40 and above    Very good (average and above-average) item
0.30 to 0.39      Reasonably good item, but possibly subject to improvement
0.20 to 0.29      Marginal item, needs improvement
0.19 and below    Poor item, to be rejected/discarded or profoundly revised
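The interpretation bands of Tables 26 and 27 can be sketched as small Python helpers. The handling of values falling exactly between two bands is an assumption, since the tables leave small gaps (e.g., between 25% and 26%):

```python
def interpret_dl(dl):
    """Interpretation bands from Table 26 (DL in percent)."""
    if dl > 75:
        return "very easy"
    if dl >= 40:
        return "reasonably good/average"
    if dl >= 26:
        return "difficult"
    return "very difficult"

def interpret_dp(dp):
    """Interpretation bands from Table 27."""
    if dp >= 0.40:
        return "very good"
    if dp >= 0.30:
        return "reasonably good"
    if dp >= 0.20:
        return "marginal"
    return "poor"

# The essay item worked earlier: DL = 68%, DP about 0.37
print(interpret_dl(68))     # reasonably good/average
print(interpret_dp(0.37))   # reasonably good
```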
An item of maximum positive discriminating power (positive one) would be one where all pupils in
the upper group get the item right and all pupils in the lower group get the item wrong. An item of no
discriminating power (zero) would be one where an equal number of pupils from both the upper and
the lower groups get the item right. Items answered correctly by all examinees, or incorrectly by all,
cannot discriminate at all and should be revised so that they discriminate, or they should be discarded.
The following figure represents the acceptable area for a classroom test item based on its difficulty
level and discrimination power.
Figure 14: Acceptable area of DL and DP of an item (discrimination power plotted against difficulty level, 0 - 100%)
When items are kept in an item file and reused after a period of time, it is good practice to record the
item-analysis data on the card, creating an "item bank" entry each time the item is used. An
accumulation of such data will show the variability in an item's indexes of difficulty and
discriminating power and thus make the information more interpretable. After finishing the
item-analysis procedure, the classroom teacher writes the item history on a card or piece of paper as
presented in Table 28 below.
Table 28: Writing item history for an item bank at school level

Language Focus:            Recall the present perfect tense to express an indefinite time in the past
Cognitive Level:
Level of Difficulty:       Medium (0.50)
Level of Discrimination:
Item:   1. My brother ____________ at Agena for 10 years. He is now planning to go to Wolkite.
        had lived
        has worked
Classroom assessment committees should be organized at the National Educational Assessment and
Examinations Agency, regional education bureaus, zonal education departments, Woreda education
offices, and at school level. The duties and responsibilities can be modified in the context of each level.
9.1. National and Regional Level
The National Educational Assessment and Examinations Agency should conduct classroom related
assessment and assist the relevant regional offices in developing their capacities since it shall have the
powers and duties to:
conduct educational assessment on the basis of analysis of the results of the national exams and
continuous assessment;
based on educational assessment findings and regional and international best practices, provide
professional assistance and advise to the relevant regional organs;
undertake other related activities that are conducive to the attainment of its objectives.
Besides, it can work jointly with regions and teacher training institutions to introduce capacity
building on classroom assessment practices into teacher training programs (both pre- and in-service).
9.2. Teacher Training Colleges and Universities
Colleges and universities should revise their curricula to include more courses on student
assessment and evaluation. The courses delivered in colleges and universities on the assessment
and evaluation of student learning should be supported with practice, so that teachers develop the
knowledge and skills necessary to conduct classroom assessment during pre-service and in-service
training programs.
9.3. Schools/ School Cluster centers
Classroom level – Assessment for learning, assessment as learning and assessment of learning all
occur here. It involves students, parents and teachers by:
 supporting the implementation of the most appropriate learning opportunities for students;
 providing feedback to students and identifying future learning;
 supporting partnerships between parents, students and teachers;
 teachers modifying teaching programs to best support learning.
Whole School Level – Assessment of learning and Assessment for learning:
To make assessment part of the school culture, a committee headed by the assistant head teacher and
the school guidance and counseling officer should be established.
The committee can have the following duties and responsibilities:
1. overseeing/supervising record keeping in school;
2. guiding both the experienced and new teachers on the techniques of classroom assessment;
3. developing time-table for classroom assessment;
4. ensuring availability of appropriate materials for classroom assessment;
5. facilitating the development of assessment instrument and ensuring the validity of instruments;
6. liaising with similar committee of other schools to ensure uniformity of procedures;
7. ensuring the impartiality of teachers as much as possible.
Arter, J. A. & Busick, K. U. (2001). Practice with Student-Involved Classroom Assessment. Portland:
Assessment Training Institute.
Adams, P. (2004). Classroom assessment and social welfare policy: Addressing challenges to
teaching and learning, retrieved on December 22, 2011.
AERA, APA, & NCME (1999). Standards for educational and psychological testing.
Washington, DC: AERA.
Airasian, P. (1994). Classroom Assessment, Second Edition, NY: McGraw-Hill.
Airasian, P., and Russell, M. (2007). Classroom Assessment: Concepts and Applications (6th ed.).
New York: McGraw-Hill.
Alabama Department of Education (2002). Evaluation, 4(4), retrieved on February 21,
Alausa, Y.A. (2004): Continuous Assessment in our schools: advantages and problems, Luwanda:
Kolin Foundation Arandis.
American Academy of Arts and Sciences (2006). Improving Education through Assessment,
Innovation, and Evaluation, retrieved on February 20, 2012.
Anderson, L. W. & Sosniak, L. A. (eds.) (1994). Bloom's Taxonomy: A Forty-Year Retrospective.
Chicago: National Society for the Study of Education/Allyn and Bacon.
Anderson, D. R. et al. (1996). Testing to Learn – Learning to Test. Washington, D.C.:
International Reading Association and AED.
Angelo, T. A. and Cross, K. P. (1993). Classroom Assessment Techniques: A
Handbook for College Teachers. San Francisco: Jossey-Bass, Inc.
Atkin, J. M., Black, P., & Coffey, J. (2001). Classroom Assessment and the National Science
Standards. Washington, DC: National Academies Press.
Bartram, D. (2005) The Great Eight competencies: A criterion-centric approach to validation.
Journal of Applied Psychology, 90, 1185–1203
Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom
assessment. Phi Delta Kappan, 80(2), 139–148.
Black, P. & Wiliam, D. (1999a). Assessment for Learning: Beyond the Black Box. Assessment
Reform Group, University of Cambridge School of Education.
Brualdi, A. (1998). Implementing performance assessment in the classroom. Practical
Assessment, Research & Evaluation, 6(2), retrieved on February 21, 2012.
Braun, H., and Kanjee , A. (2006). Using Assessment to Improve Education in Developing
Nations. In J. Cohen, D. Bloom, and M. Malin, eds., Educating All Children: A Global
Agenda. Cambridge, MA: American Academy of Arts and Sciences.
Breslow, L. (2007) . Teaching and Learning Laboratory, Massachusetts Institute of Technology.
([email protected]), retrieved on December 25, 2011.
Burton, S.J., Sudweeks, R.R., Merrill, F.P., & Wood, B. (1991). How to Prepare Better Multiple-Choice
Test Items: Guidelines for University Faculty. Brigham Young University Testing Services.
Caine, R.N., & Caine, G. (1997). Education on the edge of possibility. Alexandria, VA: ASCD.
Clarke, M. (2012). What matters Most for Student Assessment Systems: A framework Paper.
Working Paper No 1: Washington D.C: The World Bank
Crooks, T. J. (1988). The impact of classroom evaluation practices on students. Review of
Educational Research, 58, 438–481.
Gardner, H. (1991). The unschooled mind. New York: Basic Books.
Chappuis, J. (2007). Learning Team Facilitator Handbook: A Resource for Collaborative Study of
Classroom Assessment for
Student Learning. Portland, OR: Pearson Assessment Training Institute
Department of Education of Republic of South Africa (2008). National Curriculum Statement
Grades 10–12 (General): Subject Assessment Guidelines, Mathematical Literacy. Preface
to Subject Assessment Guidelines, retrieved on December 12, 2012.
Desalegn Chalchisa (2004). Continuous Assessment in lower Cycle Primary schools. In IER
Flambeau: Vol 12 Number 1
Dietel, R.J., Herman, J.L., & Knuth, R.A. (1991). What Does Research Say About Assessment?
Oak Brook, IL: North Central Regional Educational Laboratory.
Earl L. (2004) Assessment As Learning: Using Classroom Assessment to maximize Student
Learning: Experts in Assessment series. Corwin Press Inc., Thousand Oaks. California.
Ethiopian Academy of Sciences (2012). Workshop Report on Quality of Primary Education in
Ethiopia. Addis Ababa
Felder, R. M. and Brent, R. (1999). How to Improve Teaching Quality. Quality
Management Journal, 6(2), 9–21.
Frisbie, D. A., & Waltman, K. K. (1992). Developing a personal grading plan. Educational
measurement: Issues and practice, 11(3), 35-42.
Gallagher, J. D. (1998). Classroom assessment for teachers. Upper Saddle River, NJ: Merrill.
Gipps, C. (1994). Beyond testing: Towards a theory of educational assessment. London: Falmer Press.
Dochy, F. (2001). Educational Assessment: Major Developments. In Smelser, N. J. & Baltes,
P. B. (eds.), International Encyclopedia of the Social & Behavioral
Sciences. New York: Elsevier Science Ltd., p. 4343.
Gronlund, N. E. (1998). Assessment of student achievement. Boston: Allyn and Bacon.
Haladyna, T.M., Downing, S.M., & Rodriguez, M.C. (2002). A Review of
Multiple-Choice Item-Writing Guidelines for Classroom Assessment. Applied
Measurement in Education, 15(3), 309–334. Lawrence Erlbaum Associates, Inc.
Hale, C. D. & Astolfi, D. (2007). Measuring learning and performance: A primer, retrieved on June 17, 2012.
Hamidi, E. (2010). Fundamental Issues in L2 Classroom Assessment Practices, retrieved on
October 26, 2011.
Hattie, J. A. (2009). Visible learning: A synthesis of over 800 meta-analyses related to
achievement. New York: Routledge.
Hibbard, K. M. and others. (1996). A teacher's guide to performance-based learning and
assessment. Alexandria, VA: Association for Supervision and Curriculum Development.
Hopkins, K. D. (1998). Educational and psychological measurement and evaluation. Boston:
Allyn and Bacon.
Hoy,C and Gregg,N (1994). Assessment: The Special Educator’s Role. Pacific Grove:
Brooks/Cole Publishing Company.
ICDR (1999). Teacher Education Handbook. Addis Ababa: Fininne Printing and Publishing S.C
ICDR (2004). Continuous Assessment and Its Application. Unpublished
IEQ (2003). Continuous Assessment: A Practical Guide for Teachers. Washington D.C.:
American Institutes for Research/USAID.
Kehoe, J. (1995). Basic item analysis for multiple-choice tests. Practical Assessment, Research
& Evaluation, 4(10), retrieved on October 25, 2012.
Strickland, K. and Strickland, J. (2002). Engaged in Learning: Teaching English, 6–12. Portsmouth:
Kubiszyn, T. and Borich, G. (1987). Educational Testing and Measurement: Classroom Application
and Practice (2nd ed.). London: Scott, Foresman and Company.
Linn, R. L., & Gronlund, N. E. (1995). Measurement and assessment in teaching. Upper Saddle
River, NJ: Merrill.
Literacy and Numeracy Secretariat (2010). Assessment for Learning Video Series: Descriptive
Feedback Viewer's Guide. A resource to support the implementation of Growing Success:
Assessment, Evaluation and Reporting in Ontario Schools (1st ed.), covering Grades 1–12,
retrieved on February 02, 2013.
Looney, J. W. (2011). "Integrating Formative and Summative Assessment: Progress Toward a
Seamless System?" OECD Education Working Papers, No. 58, OECD Publishing. doi:
10.1787/5kghx3kbl734-en, retrieved on June 12, 2013.
Manitoba Education, Citizenship and Youth (2006). Rethinking Classroom Assessment
with Purpose in Mind. Winnipeg: Crown
Mazibuko, E.Z. & Ginindza, B. (2006). 'Making Every Child a Winner!' The Role of Continuous
Assessment in Enhancing the Quality of Primary Education in Swaziland. University of Swaziland.
McMillan, J. H. (1997). Classroom assessment: Principles and practice for effective instruction.
Boston: Allyn & Bacon
McAlpine, M ,(2002 ) . Principles of Assessment. Robert Clark Centre for Technological
Education, University of Glasgow. Bluepaper Number 1
McKinsey & Company (2007). How the World’s Best Performing School Systems Come Out On
Top. London: McKinsey & Company.
Mehrens, W. and Lehmann, I. (1984). Measurement and Evaluation in Education and Psychology (3rd
ed.). Chicago: Holt Inc.
Wittrock, M. C. (ed.) (2000). Taxonomy for Learning, Teaching, and Assessing: A Revision
of Bloom's Taxonomy of Educational Objectives. Allyn and Bacon.
Meyers, C. & Jones, T.B. (1993). Promoting active learning: Strategies for the college
classroom. San Francisco: Jossey-Bass.
Miller, S. P. (2002). Validated practices for teaching students with diverse needs and abilities. Boston:
Allyn and Bacon.
Miller, M.D., Linn, R.L., & Gronlund, N.E. (2009). Measurement and Assessment in Teaching (10th ed.).
Upper Saddle River, NJ: Pearson Education, Inc.
MoE (2012). Professional Standard for Ethiopian School Teachers. Addis Ababa: MoE.
MoE (2010). Education Sector Development Program IV (ESDP IV): Program Action Plan.
Addis Ababa: MoE.
MoE (2008). General Education Quality Improvement Program (GEQIP) Plan. Addis Ababa: MoE.
Moskal, B. M. (2003). Recommendations for developing classroom performance
assessments and scoring rubrics. Practical Assessment, Research & Evaluation, 8(14),
retrieved on February 23, 2012.
(1990). Classroom assessment and the National Science Education Standards, retrieved
on 21/12/2011. Washington, D.C.: National Academy Press.
Organization of American States (2006). Assessment in Competency Based Education. New
York: Routledge Falmer.
New Brunswick Assessment Program (2013). Framework for Provincial Assessments.
Fredericton: Assessment and Evaluation Branch, Anglophone Sector.
Nitko, A.J (1996). Educational assessment of students (2nd ed). Columbus, OH: Merrill
NOE (2002). Concepts and Techniques of Continuous Assessment and Evaluation: Prepared for
Primary Teachers. Addis Ababa: Birhanina Selam Printing Press.
NOE (2004). Guidelines for Continuous Assessment. Addis Ababa: NOE.
Popham, W. J. (2000). Modern educational measurement (3rd ed.). Boston: Allyn & Bacon.
Popham, W. J. (1999). Classroom assessment: What teachers need to know. Boston: Allyn & Bacon.
Popham, W. J. (1995). Classroom assessment: What teachers need to know. Ohio: Allyn & Bacon.
Price, M., Carroll, J., O’Donovan, B. and Rust, C. (2011). If I was going there, I wouldn’t start
from here: A critical commentary on current assessment practice. Assessment and
Evaluation in Higher Education, 36(4), 479-92.
Rodriguez, M. C. (2004). The Role of Classroom Assessment in Student Performance on
TIMSS. In Applied Measurement in Education 17(1):1-24.
Rudner, L. and Schafer, W. (2002). What Teachers Need to Know About Assessment. Washington,
D.C.: National Education Association.
Sadler, D. R. (1998). Formative assessment: Revisiting the territory. Assessment in Education:
Principles, Policy and Practice 5, 77–84.
Shepherd, E., Godwin, J., & Thalheimer, W. (2004). Assessments through the Learning Process.
White paper.
Smith, K. (2009). Multiple Measures: From Test Takers to Test Makers. Educational
Leadership, 67(3), 26–30.
Stiggins, R.J. (2005). Student-Involved Assessment for Learning (4th ed.). Upper Saddle
River, NJ: Merrill/Prentice Hall.
Stiggins, R. J. (2002). Assessment crisis: The absence of assessment FOR learning. Phi Delta
Kappan, 83(10), 758–765.
Stiggins, R. J. (1994). Student-centered classroom assessment. New York: Macmillan Publishing
Stiggins, R. J., Arter, J. A., Chappuis, J., and Chappuis, S. (2004). Classroom Assessment for
Student Learning: Doing It Right—Using It Well. Portland, OR: ETS Assessment Training Institute.
Stix, A. (1997). Empowering students through negotiable contracting. New York: ERIC
Document Reproduction Service No. ED411274.
TGE (1994). Education and Training Policy. Addis Ababa: Ministry of Education
UNESCO (2009). Continuous Assessment Hand Book and Guidelines for Tutors in Primary
Teacher Education in Uganda. National Commission for UNESCO, Kampala
USAID (2008) .Teacher’s Handbook on Formative Continuous Assessment Grade Two.
Addis Ababa: Amnuel Printing Press
USAID (2012). Continuous Professional Development for Primary and Secondary Teachers,
Leaders and Supervisors in Ethiopia: Teachers Handbook on Formative Continuous
Assessment, Grades 1–4. Addis Ababa: MoE.
UTC (2002). Walker Teaching Resource Center:
Assessment/assessment.html#assumptions/, retrieved on 24/01/2012.
Vanides, J., Yin, Y., Tomita, M., & Ruiz-Primo, M. A. (2005). Using Concept Maps in the
Science Classroom (NSTA), retrieved on 26/08/2013.
Veeravagu, J., Muthusamy, C., Marimuthu, R., & Michael, A. (2010). Using Bloom's taxonomy to
gauge students' reading comprehension performance. Canadian Social Science, 6(3),
Annex 1: Summary of Cognitive domains and descriptions

Knowledge: recalling or remembering something without necessarily understanding, using, or changing it.
Related behaviors: define, describe, identify, label, list, memorize, point to, recall, select, state

Comprehension: understanding something that has been communicated without necessarily relating it to anything else.
Related behaviors: alter, account for, annotate, calculate, change, convert, group, explain, generalize, give examples, infer, interpret, paraphrase, predict, review, summarize, translate

Application: using a general concept to solve problems in a particular situation; using learned material in new and concrete situations.
Related behaviors: apply, adopt, collect, construct, demonstrate, discover, illustrate, interview, make use of, manipulate, relate, show, solve, use

Analysis: breaking something down into its parts; may focus on identification of parts or analysis of relationships between parts, or recognition of organizational principles.
Related behaviors: analyze, compare, contrast, diagram, differentiate, dissect, distinguish, identify, illustrate, infer, outline, point out, select, separate, sort, subdivide

Synthesis: creating something new by putting parts of different ideas together to make a whole.
Related behaviors: blend, build, change, combine, compile, compose, conceive, create, design, formulate, generate, hypothesize, plan, predict, produce, reorder, revise, tell, write

Evaluation: judging the value of material or methods as they might be applied in a particular situation; judging with the use of definite criteria.
Related behaviors: accept, appraise, assess, arbitrate, award, choose, conclude, criticize, defend, evaluate, grade, judge, prioritize, recommend, referee, reject, select
Annex 2: Multiple-choice questions based on Bloom’s Taxonomy
Knowledge questions
Outcome: Identifies the meaning of a term.
Reliability is the same as:
A. consistency.
B. relevancy.
C. representativeness.
D. usefulness.
In the area of physical science, which one of the following definitions describes the term
A. The separation of electric charges by friction.
B. The ionization of atoms by high temperatures.
C. The interference of sound waves in a closed chamber.
D. The excitation of electrons by high frequency light.
E. The vibration of transverse waves in a single plane.
Outcome: Identifies the order of events.
What is the first step in constructing an achievement test?
A. Decide on test length.
B. Identify the intended learning outcomes.
C. Prepare a table of specifications.
D. Select the item types to use.
Comprehension questions
Outcome: Identifies an example of a term.
Which one of the following statements contains a specific determiner?
A. Africa is a continent.
B. Africa has 54 countries.
C. Africa has big lakes and rivers.
D. Africa’s population is increasing.
Outcome: Interprets the meaning of an idea.
The statement that “test reliability is a necessary but not sufficient condition of test validity” means
A. a reliable test will have a certain degree of validity.
B. a valid test will have a certain degree of reliability.
C. a reliable test may be completely invalid and a valid test completely unreliable.
Outcome: Identifies an example of a concept or principle.
Which of the following is an example of a criterion-referenced interpretation?
A. Derik earned the highest score in science.
B. Erik completed his experiment faster than his classmates.
C. Edna’s test score was higher than 50 percent of the class.
D. Tricia set up her laboratory equipment in five minutes.
Which one of the following describes what takes place in the so-called Preparation stage of the
creative process, as applied to the solution of a particular problem?
A. The problem is identified and defined.
B. All available information about the problem is collected.
C. An attempt is made to see if the proposed solution to the problem is acceptable.
D. The person goes through some experience leading to a general idea of how the problem can be solved.
Application questions
Outcome: Distinguishes between properly and improperly stated outcomes
Which one of the following learning outcomes is properly stated in terms of student performance?
A. Develops an appreciation of the importance of testing.
B. Learns how to write good test questions.
C. Realizes the importance of validity.
Outcome: Tests for the application of previously acquired knowledge (the various memory systems).
Which one of the following memory systems does a piano-tuner mainly use in his/her occupation?
A. Echoic memory
B. Short-term memory
C. Long-term memory
D. Mono-auditory memory
Outcome: Improves defective test questions.
Directions: Read the following test question and then indicate the best change to make to improve the question.
Which one of the following types of learning outcomes is most difficult to evaluate objectively?
A. A concept.
B. An application.
C. An appreciation.
D. None of the above.
The best change to make in the previous question would be to:
A. change the stem to incomplete-statement form.
B. use letters instead of numbers for each alternative.
C. remove the indefinite articles “a” and “an” from the alternatives.
D. replace “none of the above” with “an interpretation.”
Analysis questions
Directions: Read the following comments a teacher made about testing. Then answer the questions
that follow by circling the letter of the best answer.
“Students go to school to learn, not to take tests. In addition, tests cannot be used to indicate a
student’s absolute level of learning. All tests can do is rank students in order of achievement, and this
relative ranking is influenced by guessing, bluffing, and the subjective opinions of the teacher doing
the scoring. The teaching-learning process would benefit if we did away with tests and depended on
student self-evaluation.”
Outcome: Recognizes unstated assumptions.
Which one of the following unstated assumptions is this teacher making?
A. Students go to school to learn.
B. Teachers use essay tests primarily.
C. Tests make no contribution to learning.
D. Tests do not indicate a student’s absolute level of learning.
Outcome: Identifies the meaning of a term.
Which one of the following types of test is this teacher primarily talking about?
A. Diagnostic test.
B. Formative test.
C. Pretest.
D. Summative test.
Directions: Read carefully through the paragraph below, and decide which of the options A–D is the
best answer.
can often be answered by determining what the practical consequences of the acceptance of a
particular metaphysical proposition are in this life. Practical consequences are taken as the criterion
for assessing the relevance of all statements or ideas about truth, norm and hope.”
Outcome: Analyze whether a word fits with the accepted definition of pragmatism.
A. The word “acceptance” should be replaced by “rejection.”
B. The word “often” should be replaced by “only.”
C. The word “speculative” should be replaced by “hypothetical.”
D. The word “criterion” should be replaced by “measure.”
Synthesis question:
Directions: Read the following comments a teacher made about testing. Then answer the questions that
follow by circling the letter of the best answer.
“Students go to school to learn, not to take tests. In addition, tests cannot be used to indicate a
student’s absolute level of learning. All tests can do is rank students in order of achievement, and this
relative ranking is influenced by guessing, bluffing, and the subjective opinions of the teacher doing
the scoring. The teaching-learning process would benefit if we did away with tests and depended on
student self-evaluation.”
Outcome: Identifies relationships. Which one of the following propositions is most essential to the
final conclusion?
A. Effective self-evaluation does not require the use of tests.
B. Tests place students in rank order only.
C. Test scores are influenced by factors other than achievement.
D. Students do not go to school to take tests.
Annex 3: Measuring Affective Learning Domain
There are five levels in the affective domain, moving from the lowest-order processes to the highest:
 Receiving: The lowest level; the student passively pays attention. Without this level no
learning can occur.
 Responding: The student actively participates in the learning process; not only attends to a
stimulus, the student also reacts in some way.
 Valuing: The student attaches a value to an object, phenomenon, or piece of information.
 Organizing: The student can put together different values, information, and ideas and
accommodate them within his/her own schema; comparing, relating and elaborating on what
has been learned.
 Characterizing: The student holds a particular value or belief that now exerts influence on
his/her behavior so that it becomes a characteristic.
The affective domain includes the manner in which we deal with things emotionally, such as feelings,
values, appreciation, enthusiasm, motivations, and attitudes.
The five major categories with behavior descriptions

Affective Domain

Receiving: open to experience, willing to hear.
Evidence: listen to teacher or trainer, take interest in session or learning experience, take notes, turn up, make time for learning experience, participate passively.
Key words: ask, listen, focus, attend, take part, discuss, acknowledge, hear, be open to, retain, follow, concentrate, read, do, feel

Responding: react and participate actively.
Evidence: participate actively in group discussion, active participation in activity, interest in outcomes, enthusiasm for action, question.
Key words: react, respond, seek clarification, interpret, clarify, provide other references and examples, contribute, question, present, cite, become animated or excited, help team, write

Valuing: attach values and express personal opinions.
Evidence: decide worth and relevance of ideas and experiences; accept or commit to a particular stance or action.
Key words: argue, challenge, debate, refute, confront, justify, persuade, criticise

Organizing: organize a personal value system.
Evidence: qualify and quantify personal views, state personal position and reasons, state beliefs.
Key words: build, develop, formulate, defend, modify, relate, prioritize, reconcile, contrast, arrange, compare

Characterizing: adopt a belief system and philosophy.
Evidence: self-reliant; behave consistently with personal value set.
Key words: act, display, influence, solve, practice
Annex 4: Measuring Psychomotor Learning Domain
As justified by Airasian (1994), skills in the psychomotor domain describe the ability to physically
manipulate a tool or instrument, such as a hand or a hammer. Psychomotor objectives usually focus on
physical and kinesthetic skills (including keyboarding, using technical instruments and other skills),
since the domain concerns how a student controls or moves his or her body.
Since being able to write cursive style requires the student to manipulate an object, a pencil or pen, to
produce a product, the written letters, this is a psychomotor objective.
The proposed levels are:
I. Imitation (Perception): The ability to use sensory cues to guide motor activity. This ranges from
sensory stimulation, through cue selection, to translation. Examples: Detects non-verbal
communication cues. Estimates where a ball will land after it is thrown and then moves to the correct
location to catch it. Adjusts the heat of a stove to the correct temperature by the smell and taste of food.
Adjusts the height of the forks on a forklift by comparing where the forks are in relation to the pallet.
Key Words: chooses, describes, detects, differentiates, distinguishes, identifies, isolates, relates
II. Manipulation (Follow instructions): Readiness to act. It includes mental, physical, and emotional
sets. These three sets are dispositions that predetermine a person's response to different situations
(sometimes called mindsets). Examples: Knows and acts upon a sequence of steps in a manufacturing
process. Recognizes one's abilities and limitations. Shows desire to learn a new process (motivation).
This subdivision of the psychomotor domain is closely related to the "Responding to phenomena"
subdivision of the affective domain. The early stage in learning a complex skill includes imitation by
trial and error. Adequacy of performance is achieved by practicing. Examples: Performs a mathematical
equation as demonstrated. Follows instructions to build a model. Responds to hand-signals of the
instructor while learning to operate a forklift.
Key Words: copies, traces, follows, reacts, reproduces, responds, displays, moves, proceeds
III. Precision (Mechanism): This is the intermediate stage in learning a complex skill. Learned
responses have become habitual and the movements can be performed with some confidence and
proficiency. The skillful performance of motor acts that involve complex movement patterns.
Proficiency is indicated by a quick, accurate, and highly coordinated performance, requiring a
minimum of energy. This category includes performing without hesitation, and automatic
performance. For example, players will often utter sounds of satisfaction or expletives as soon as they
hit a tennis ball or throw a football, because they can tell by the feel of the act what the result will
produce. Examples: Maneuvers a car into a tight parallel parking spot. Operates a computer quickly
and accurately. Displays competence while playing the piano
Repairs a leaking faucet. Drives a car. Key Words: assembles, builds, calibrates, constructs, dismantles,
displays, fastens, fixes, grinds, heats, manipulates, measures, mends, mixes, organizes
NOTE: The Key Words are the same as Mechanism, but will have adverbs or adjectives that indicate
that the performance is quicker, better, more accurate, etc.
IV. Articulation (Adaptation): Skills are well developed and the individual can modify movement
patterns to fit special requirements. Examples: Responds effectively to unexpected experiences.
Modifies instruction to meet the needs of the learners. Performs a task with a machine that it was not
originally intended to do. Key Words: adapts, alters, changes, rearranges, reorganizes, revises
V. Naturalization (Origination): Creating new movement patterns to fit a particular situation or
specific problem. Learning outcomes emphasize creativity based upon highly developed skills.
Examples: Constructs a new theory. Develops a new and comprehensive training program.
Creates a new gymnastic routine. Key Words: arranges, builds, combines, composes, constructs,
creates, designs, initiates, makes, originates.
The five psychomotor categories and behavior descriptions

Psychomotor Domain

Imitation: copy the action of another; observe and replicate.
Evidence: watch teacher or trainer and repeat action, process or activity.
Key words: copy, follow, replicate, repeat, adhere

Manipulation: reproduce an activity from instruction.
Evidence: carry out a task from written or verbal instruction.
Key words: re-create, build, perform, execute, implement

Precision: execute a skill reliably, independent of help.
Evidence: perform a task or activity with expertise and to high quality without assistance or instruction; able to demonstrate the activity to other learners.
Key words: demonstrate, complete, show, perfect, calibrate, control

Articulation: adapt and integrate expertise to satisfy a non-standard objective.
Evidence: relate and combine associated activities to develop methods to meet varying, novel requirements.
Key words: construct, combine, integrate, adapt, develop, formulate

Naturalization: automated, unconscious mastery of the activity and related skills at strategic level.
Evidence: define aim, approach and strategy for use of activities to meet strategic need.
Key words: design, specify, manage, invent, project-manage
Annex 5: Checklist for best practice of assessment
These questions should help teachers reflect on ways of ensuring best practice.
1. What is the purpose of the assessment?
A. Is it to identify individual strengths and weaknesses?
B. Is it to measure progress and provide feedback?
C. Is it to measure individual attainment against a specific learning goal or standard?
2. What is the role of the assessment in the program of learning?
A. Is there an overall plan for assessment that aligns the learning outcomes, the learning and
teaching process, and the knowledge and skills to be acquired?
B. Does the assessment match the sequence of knowledge acquisition and/or skills development
in the learning/training program?
C. Have you avoided excessive assessment by considering learning workloads both within and
across subjects?
D. Have you (the teacher) reduced over-assessment by identifying opportunities for using
integrated assessment?
E. Is there a regular review of assessment practice and its impact on learners?
F. Have you thought about your own practice?
G. Do you use a variety of assessment methods to minimize the limitation of any one method?
H. Is there an over-emphasis on written assessments that does not reflect the stated learning outcomes?
I. Have the learning outcomes and the criteria for success been explained to learners?
J. Is the level of language used in the assessment appropriate to the learning outcome?
K. Have you ensured that the assessments do not pose any unnecessary barriers to any individual?
L. Has each assessment been scrutinized by colleagues to ensure that it meets the intended learning outcomes?
M. Is the assessment to be taken individually, by a group or a complete class?
3. How will the assessment be marked? Have you developed a set of specimen
solutions/assessment scheme?
4. How is the marking to be standardized?
Annex 6: Samples of Assessment Instruments
Alternative Response Questions
Assertion/Reason Questions
Aural/Oral Tests
Case Studies
Cloze questions
Completion questions
Expressive Activities
Extended Response Questions
Grid Questions
Matching Questions
Multiple Choice Questions
Multiple Response Questions
Oral Questions
Practical Exercises
Professional Discussion
Question Papers
Restricted Response Questions
Role Plays
Self-Report techniques
A. Log Books
B. Personal Interviews
C. Questionnaires
Short Answer Questions
Structured Questions
Annex 7: Record format of Classroom Assessment
Classroom Assessment
Adapted from ICDR (2004)
Annex 8: Student Progress Report Formats based on Statements of MLC for Individual Student
Individual Mark Sheet
Name of the student ___________________
Subject-English Grade One Semester One and Two
Critical Skill
Listen to and respond to greetings
Listen to and carry out classroom instructions
Listen to and show/touch parts of their body
Listen to and show/touch classroom objects
Listen to words and phrases and match them to
pictures of familiar objects
Respond with short answers to simple questions
Give and ask for personal details such as names and
Give details of their family members and talk about
their family
Tell the quantity - numbers 1 – 10
Name people, animals and objects in pictures; identify the main colours of an object
Respond to questions using yes and no
Listen to songs and rhymes and repeat with
appropriate actions
Ask for an object in the classroom
Say the names and sounds of letters
Read initial letters and match them to pictures
Point to letters spoken by the teacher
Read 25 key words related to people, animals, objects
and colours
Handle pens and pencils properly for writing purposes
Copy patterns that will lead to the formation of letters
Copy simple pictures of familiar objects
Copy all 26 small and capital letters of the English
alphabet keeping to the lines
Source: USAID (2012:78)
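The mark sheet above pairs each critical skill with a simple judgement ("does this" / "not yet", as in the Annex 7 record format). As a purely illustrative sketch — the skill strings come from the list above, but the function and status names are invented, not part of the manual — such a record can be kept and summarized like this:

```python
# Hypothetical sketch of an individual mark sheet: each critical skill
# is marked either "does this" or "not yet".
DOES_THIS, NOT_YET = "does this", "not yet"

record = {
    "Listen to and respond to greetings": DOES_THIS,
    "Listen to and carry out classroom instructions": DOES_THIS,
    "Tell the quantity - numbers 1-10": NOT_YET,
    "Say the names and sounds of letters": NOT_YET,
}

def progress_summary(record):
    """Count achieved vs. not-yet-achieved critical skills."""
    achieved = sum(1 for status in record.values() if status == DOES_THIS)
    return {"achieved": achieved,
            "not yet": len(record) - achieved,
            "total": len(record)}

summary = progress_summary(record)
# summary == {'achieved': 2, 'not yet': 2, 'total': 4}
```

A teacher would update the record each semester and report the "not yet" skills as the learner's next targets.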
Annex 9: Assessment Rubrics
Each dimension of the rubric below is rated at three levels:
Level 1 (Poor) – supporting few students
Level 2 (Good) – supporting some students
Level 3 (Best) – supporting most students

Collecting evidence of learning

Level 1 (Poor):
1. The majority of assessments are a poor match for the learning being assessed.
2. There is rarely reference to specific intended learning on 'chunk' assessment tasks.
3. Written tests are the norm, even for learning which is not readily assessed this way.
4. Assessment tasks are rarely differentiated.
5. Assessment tasks are misaligned with what was taught in the classroom.
6. Common assessments are non-existent.
7. Assessment tasks are often designed only after a unit has been taught.
8. Pre-assessment is rare.
9. There is little or no on-going assessment.

Level 2 (Good):
1. There is a general sense that assessment should align with learning, but the practice is inconsistent.
2. While not a required practice, most teachers list the learning standards to be assessed on each 'chunk' assessment.
3. Assessments may occasionally assess learning that was not intended.
4. There is a wide range of assessment in use, but more for the sake of variety than alignment.
5. Differentiated tasks are evident in some classrooms.
6. Some grade-level teams and departments are using common assessments, but these are few.
7. Contextual tasks are in use only sparingly.
8. Pre-assessment is administered only very occasionally – no policy requires it.
9. On-going assessment is in use by many, but the deeper understanding that it is an essential part of practice is not fully understood.

Level 3 (Best):
1. All assessments are aligned with the intended learning (standards/benchmarks).
2. Tasks routinely collect evidence of the most important learning.
3. Assessment tasks are routinely designed ahead of teaching.
4. Many tasks assess 'in context'.
5. Curriculum documents include a full repertoire of on-going assessment tasks for teachers to select from.
6. All 'chunk' assessments are clearly tagged with the intended learning, drawn from the school-wide set of intended learning (standards/benchmarks).
7. Assessment tasks are regularly differentiated.
8. There is a clear 'map' of common assessments.
9. Pre-assessment is routine.
10. Most teachers use on-going assessment strategies (no hands up, exit cards, one-minute essay, etc.) routinely and show from their practice that they understand its essential role; policy is in place and monitored that commits all to routine use.

Feedback to learners

Level 1 (Poor):
1. There are no protocols guiding the timing, type or required use of feedback.
2. Assessment is viewed largely as a way to audit learning, not as an aid to learning.
3. Grades are viewed as adequate feedback for most purposes.

Level 2 (Good):
1. Some protocols are in place to guide its use.
2. A suggested time frame for the return of work may be in place.
3. Some teachers may be recording anecdotal evidence from their informal assessment.
4. There may be a list of suggested ways of offering feedback.
5. Grades are often a preferred form of feedback, with other forms used at teacher discretion.

Level 3 (Best):
1. There are clear protocols guiding the timing and type of required feedback.
2. Clear guidelines for the return of work are in place.
3. Teachers fully understand that learning cannot happen without feedback.
4. Learners are consistently given feedback they can act on and are permitted by policy to do so without penalty.
5. Feedback is at the center of the discussions about improving assessment.
Evaluating Evidence

Level 1 (Poor):
1. Learners typically are unaware of learning expectations.
2. Learners are heavily reliant on teachers to know if and to what extent they are learning.
3. Teachers use their own criteria to determine 'grades'.
4. Grade averaging and the use of zeros are widespread.
5. Although there is a school-wide grading scheme, there is no understanding of what each grade represents.
6. Only academic, easy-to-assess learning is evaluated.
7. 'No second chances' is the predominant practice.
8. 'Penalty' is a strong part of the assessment culture.

Level 2 (Good):
1. Many teachers use criteria and rubrics, but there are no school-wide guidelines.
2. Self-assessment is occasionally a feature of assessment.
3. Exemplars are in use, but there is disagreement about whether they stifle creativity.
4. Department and grade-level teams have established some guidelines for what grades mean.
5. Individual teachers may give learners 'second chances', but there are no guidelines.
6. Many learners would say that teachers are pretty much in charge of the evaluation process.
7. Most of the learning evaluated is based in the curricular standards.
8. Learners occasionally have second opportunities to show their learning, but it is not routine.
9. There is a sense that learning is less important than grades.

Level 3 (Best):
1. Learners are fully aware of what is expected of them.
2. Learners are full participants in the evaluation process.
3. Exemplars, rubrics and criteria are in routine use and given to students ahead of teaching.
4. There are shared rubrics for trans-disciplinary skills.
5. There are clear guidelines on what is meant by each 'grade', and continual examination of work products and processes to refresh understanding.
6. There is no grade averaging or use of zeros in grading.
7. There is as much emphasis on student dispositions as on academic learning.
8. Self-assessment is a standard, required feature for all learners.
9. Evaluation is always criteria-based – comparing learning to the curricular standards.
10. Learners routinely, by policy, have second and third opportunities to show evidence of their learning without penalty.
Recording Evidence

Level 1 (Poor):
1. There is no systematic process for recording evidence of learning; teachers feel they need to generate grades just to have something to report on.
2. Records are kept according to types of tasks rather than types of learning.
3. Records are often incomplete.
4. Records are often just mechanical.
5. Assignments are often considered full evidence of learning.

Level 2 (Good):
1. Grade levels/departments have agreed on similar ways to record learning.
2. Many teachers may keep anecdotal records.
3. Teachers may still be struggling with how much to record.
4. Teachers are recording evidence of learning primarily by task type, not specific learning.
5. Records of dispositions and big understandings are sparse but attempted.

Level 3 (Best):
1. There is a full, systematic, shared process for recording evidence of learning.
2. Teachers record only the evidence which fully supports progress.
3. Records are kept according to learning standards.
4. There are a variety of forms of record keeping addressing the four types of learning.
5. There is a clear distinction between work that is strong evidence of learning and work that is practice.
Communicating Evidence

Level 1 (Poor):
1. Results of learning are given on single occasions.
2. Reports are frequently made when it is too late to make changes.
3. Results of assessment are commonly misused.
4. Learning results are typically not used to adjust teaching.

Level 2 (Good):
1. Traditional reporting processes are in place (report cards at set times, progress reports, parent conferences).
2. Set report times, rather than learner needs, drive the reporting practice.
3. Most reporting processes are aimed at parents, possibly next schools.

Level 3 (Best):
1. All forms of reporting are based on specific learning.
2. Learning results are communicated when there is still time to act on them.
3. Learning results are consistently used to modify teaching.
4. All reports are 'action' oriented, suggesting next steps for learners and teachers.
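A school team can use the rubric above for self-evaluation by rating its current practice on each of the five dimensions at level 1, 2 or 3. The sketch below is illustrative only (the dictionary and function names are invented, not part of the manual); it summarizes such ratings and points to the weakest dimension:

```python
# Hypothetical self-rating against the five rubric dimensions,
# using levels 1 (Poor), 2 (Good), 3 (Best).
LEVELS = {1: "Poor", 2: "Good", 3: "Best"}

ratings = {
    "Collecting evidence of learning": 2,
    "Feedback to learners": 1,
    "Evaluating evidence": 2,
    "Recording evidence": 1,
    "Communicating evidence": 2,
}

def rubric_profile(ratings):
    """Summarize rubric ratings: per-dimension labels, the overall
    average level, and the weakest dimension (first, if tied)."""
    labels = {dim: LEVELS[level] for dim, level in ratings.items()}
    average = sum(ratings.values()) / len(ratings)
    weakest = min(ratings, key=ratings.get)
    return labels, average, weakest

labels, average, weakest = rubric_profile(ratings)
# average == 1.6; the weakest dimension ("Feedback to learners")
# is where improvement effort should focus first.
```

The average is only a rough indicator; the per-dimension labels matter more, since a school at level 3 in recording but level 1 in feedback needs targeted rather than general improvement.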
Annex 10: Round 1 Workshop Participants at DIH- Adama (June 5 -16/2005 EC)
Name of the Participants
Educational Background (Qualification / Specialization)
Work Place
Ato Zerihun Duressa
Ato Arega Mamaru
[email protected]
Ato Bekele Geleta
Ato Fekadu Bogale
[email protected]
[email protected]
Ato Walelign Admasu
[email protected]
Ato Daniel Zewdie
Ato Tamrat Fitie
Educational Research
-Educational Psychology
-MA in Distance Education
Educational Psychology
[email protected]
[email protected]
Ato Yosef Mihret
[email protected]
Ato Fikremariam Regasa
Ato Effa Gurmu
Ato Yilikal Wondimeneh
Ato Tamrat Tefera
Measurement & Evaluation
Measurement & Evaluation
[email protected]
[email protected]
[email protected]
January 2014
Annex 11: Round II CA Manual Validation Workshop Participants' Profile at DIH-Adama (August 15-22/2005 EC)
Participant's Name
Zerihun Duressa
Deputy Director General
Lake Bedlu
Educ. Psych
Yohannes Tigro
Cell Phone
[email protected]
[email protected]
[email protected]
Nigussi Worku
BB Coaching
Hosaena TTC
Misganaw Alene
Henok Jemal
Gon U
Bekele Geleta
Tesfaye Degefa
Fekadu Bogale
Tamirat Fitie
M & E
TDP expert
Yosef Mihret
Tamiru Zerihun
NEAD Director
Arega Mamaru
Gezahagn Degife
Adult &
TP Licensing &
Yilikal Wondimeneh
Fikremaryam Regasa
Measurement & Evaluation
Tamirat Tefera
Zelalem Wole
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
National Educational Assessment and Examinations Agency (NEAEA)
Dire International Hotel, Adama, October 11-14, 2013: Participants' Profile
Name of Participants
Work Place
Cell Phone
PM office
[email protected]
Head, Gambela REB
[email protected]
SNNP REB, Assess Expert
[email protected]
MoE, Expert
[email protected]
AAU, School of Education
[email protected]
Director General, NEAEA
[email protected]
Ato Zerihun Duressa
Deputy Director Gen,NEAEA
[email protected]
Ato Tamiru Zerihun
Head, National Educ assess
[email protected]
Ato Kefelgn Tsigie
Head, Exam dev & Admin
[email protected]
Ato Abera Diriba
[email protected]
Ato Berhanu Woldesemayat
Head, Certificate, NEAEA
[email protected]
Ato Dereje Andargachew
Auditor, NEAEA
Ato Molla Habte
[email protected]
Ato Ayehualem Tereda
Head, Prop& Procur , NEAEA
[email protected]
W/ro Kelemuwa Zegeye
Head, Re& Resul Com NEAEA
[email protected]
Ato Arega Mamaru
NEAEA, Assess Expert
[email protected]
Ato Yilikal Wondimeneh
NEAEA, Assess Expert
[email protected]
Ato Bekele Geleta
NEAEA, Assess Expert
[email protected]
Ato Effa Gurmu
NEAEA, Assess Expert
[email protected]
Ato Abiy Kefyalew
NEAEA, Assess Expert
[email protected]
Ato Mengistu Admassu
NEAEA, Assess Expert
[email protected]
Ato Solomon Teferi
NEAEA, Assess Expert
[email protected]
Ato Fikremariam Regassa
NEAEA, Exam Dev Expert
[email protected]
W/ro Emebet Tesfaye
Office Assist, NEAEA
[email protected]
Ato Fikadu Bogale
NEAEA, Exam Dev Expert
[email protected]
Ato Getnet Mamo
NEAEA, Exam Dev Expert
[email protected]
Ato Getaneh Tarekegn
NEAEA, Exam Dev Expert
[email protected]
Ato Belay Endeshaw
NEAEA, Exam Dev Expert
[email protected]
Ato Minas Gebremeskel
NEAEA, Exam Dev Expert
[email protected]
Ato Worku Gebremikael
NEAEA, Exam Dev Expert
[email protected]
Ato Amare Gebru
NEAEA, Exam Dev Expert
[email protected]
Ato Mulugeta Bejitual
NEAEA, Exam Dev Expert
[email protected]
Ato Tirfe Berhane
NEAEA,Exam Adm Team Lead
[email protected]
Ato Tiruneh Felegehyiwot
NEAEA,Exam Adm Expert
0913 327851
[email protected]
H.E. Ato Bogale Feleke
Ato Ashinie Ogestin
Ato Seifu Bekele
Ato Debebe W/Senbet
Ato Dame Abera (PhD cand.)
Ato Araya Gebregziabher
[email protected]
Ato Ashenafi Tesfaye
NEAEA,Exam Adm Expert
[email protected]
W/ro Aynalem Hailu
NEAEA,Exam Adm Expert
[email protected]
Ato Sahilu Bayisa
Director, Licen & Relic , MoE
[email protected]
W/ro Abebech Negash
Director, Inspection, MoE
[email protected]
Dr. Getnet Demissie
Dean , Haro Uni, sch of educ
[email protected] /
[email protected]
Dr. Menna Olango
Dean , Hawasa Uni, sch of ed
[email protected]
Ato Berhanu Mekonnen
Dean , Dilla Uni, sch of educ
[email protected]
Ato Tadesse Regassa
Dean , Jima Uni, sch of educ
[email protected]
Ato Mebratu Belete
Dean , Wolayta Uni, sch of ed
[email protected]
Ato Lake Bedlu
Rep Dean , BDU Uni, sch of ed
[email protected]
Ato Nuredin Mohammed
Dean , Wollo Uni, sch of educ
[email protected]
Ato Thomas Ayana
Dean , Welega Uni, sch of ed
[email protected]
Professor Derebssa Dufera
Director, IER , AAU
[email protected]
Dr. Tsehai Jemberu
[email protected]
Dr. Mulu Nega
[email protected]
Dr. Dessalegn Chalchissa
[email protected]
Dr. Firdissa Jebessa
[email protected]
Ato Tayachew Ayalew
Advisor, MoE , General Edu
[email protected]
Ato Girma Woldetsadik
World Bank, Ethiopia
[email protected]