Sunday, January 31, 2010

Evaluating student use of technology tools

How should teachers evaluate student use of technology tools? For projects, use a rubric derived from the learning goals of the project. Assess the quality of the artifact's content, the quality of its form, and maybe the quality of the collaboration. For the results of a computer-based direct instruction tool, assess the student's mastery of the concepts using either the assessments built into the tool or some teacher-generated pre- and post-assessment matched to the content the tool presents. Time spent on the exercise might be another assessment criterion.

[You can stop reading now if you want. The rest just wanders around. That first paragraph was an automated response; then I started to think about it some more; but if you read to the end, you will see that I ended up back at the beginning anyway. Sigh.]

But wait. I am equating assessment with evaluation. Some of the academic literature, though, distinguishes between the two. Wiggins and McTighe (2006), in the glossary of their Understanding by Design (2nd ed.), view assessment as a more general category of attempting to determine a student's progress in a rather human way ("the implication is that in an assessment, the teacher makes thoughtful observations and disinterested judgments, and offers clear and helpful feedback"). Evaluation, on the other hand, suggests attaching a value to student performance, hence a quantification and possibly a ranking. "A teacher can assess a student's strengths and weaknesses without placing a value or grade on the performance." A Google search on "assessment vs. evaluation" turns up ten hits. One hit, the Assessment Center Toolbox page, outlines a number of distinctions. Generally, evaluation leads to a grade; assessment leads to an adjustment of teaching and perhaps of learning. (Although if all of the students received low grades, one might want to reconsider teaching strategies, no?)

On the other hand, Linn and Gronlund (1995), Measurement and Assessment in Teaching (7th ed.), make no obvious distinction. Nor does Nitko (2001), Educational Assessment of Students. I can't find any attempt to distinguish between them in the NCTM Grades 6-8 Mathematics Assessment Sampler (2005), which refers back to the NCTM 1995 Assessment Standards for School Mathematics. The NCTM assessment standards use "evaluating" as a rationale for assessment.

Underneath the distinction is a whiff of the eternal tussle between the direct instruction/behaviorist camp and the constructivist camp. Objectivists struggle to assign a quantity to everything as a proxy for reality. Constructivists struggle to find more elusive indicators of growth and mastery through authentic student artifacts. The implication is that objectivists evaluate; constructivists assess. (My personal take: (1) "objectivist", used here because it is in common use in education circles (see, e.g., Roblyer and Doering, 2010, Integrating Educational Technology into Teaching, 5th ed.), is a poor choice of term, but that leads to an entirely different digression; (2) both strategies for teaching in fact have a valid place in the overall teaching process, and I think a really good teacher knows when to use which.)

This leads to another subtlety in the posed question. If the goal of teaching is to push student thinking up the ladder of Bloom's taxonomy, a teacher would want to see evidence of analysis, evaluation, and creation (or analysis, synthesis, and evaluation in the original scheme). Recognizing evidence of higher-order thinking, or of genuine, relevant novelty, requires intellectual flexibility on the part of the teacher. The best a good teacher can do is to hint to the student at what that looks like, because it (by definition?) cannot be quantified.

This difficulty (if not impossibility) of quantifying novelty or true understanding is reflected in the maddeningly vague rubric terms in Wiggins and McTighe (2006, see above). Two rubrics are needed (or two distinct parts to the rubric): a rubric for understanding, and a rubric for performance. The rubric for understanding must, by necessity, consist of soft, subjective, Bloomish terms; and the teacher must really work at expressing what real understanding looks like.

So, following from these two dimensions (understanding vs. performance, which can roughly be translated into content vs. form), evaluating (or assessing) student use of technology tools, at least in the problem-solving / inquiry / discovery / constructivist mode, falls into two categories. First and foremost is the content of the project: does it reflect real understanding of the material? Second, does the form, the presentation or performance, reflect creativity or facility with the chosen tools? And we might add a third category, related to the social dimension of the knowledge construction / acquisition process: how well did the student do with the collaboration part?
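
(As a purely hypothetical sketch of what such a three-part rubric could look like when laid out explicitly, here is one way to organize it as a simple structure. The category names, criteria, and level descriptors below are invented for illustration only, not taken from Wiggins and McTighe or NCTM; a real rubric would be derived from the project's actual learning goals.)

# A hypothetical sketch of a three-part rubric for a technology project.
# All category names, criteria, and level descriptors are invented for
# illustration; a real rubric would come from the project's learning goals.
rubric = {
    "understanding (content)": {
        # Soft, Bloomish descriptors -- deliberately not reduced to numbers.
        "explains the concept in own words": ["novice", "apprentice", "practitioner", "expert"],
        "applies the concept to a new situation": ["novice", "apprentice", "practitioner", "expert"],
    },
    "performance (form)": {
        # Easier to quantify: facility with the chosen tools.
        "organization and clarity of the artifact": [1, 2, 3, 4],
        "effective use of the tool's features": [1, 2, 3, 4],
    },
    "collaboration": {
        "contribution to the group product": [1, 2, 3, 4],
        "responsiveness to peer feedback": [1, 2, 3, 4],
    },
}

def show(rubric):
    """Print each category, its criteria, and the possible levels."""
    for category, criteria in rubric.items():
        print(category)
        for criterion, levels in criteria.items():
            print("  %s: %s" % (criterion, levels))

show(rubric)

(Note how the understanding criteria resist clean numeric levels, which is exactly the point made above about soft, subjective, Bloomish terms.)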

Which brings this back to the beginning.

jd
