Sunday, January 31, 2010

Evaluating student use of technology tools

How should teachers evaluate student use of technology tools? For projects, use a rubric derived from the learning goals of the project. Assess the quality of the artifact content, the quality of the form of the artifact, and maybe the quality of collaboration. For the result of a computer-based direct instruction tool, assess the student on mastery of the concepts using either the assessments built into the tool, or some teacher-generated pre- and post-assessment matched to the content presented in the tool. Time spent on the exercise might be another assessment criterion.

[You can stop reading now if you want. The rest just wanders around. That first paragraph was an automated response; then I started to think about it some more; but if you read to the end, you will see that I ended up back at the beginning anyway. Sigh.]

But wait. I am equating assessment with evaluation. Some of the academic literature, though, distinguishes between the two. Wiggins and McTighe (2006), in the glossary of their Understanding by Design (2nd ed.), view assessment as a more general category of attempting to determine a student's progress in a rather human way ("the implication is that in an assessment, the teacher makes thoughtful observations and disinterested judgments, and offers clear and helpful feedback"). Evaluation, on the other hand, suggests attaching a value to student performance, hence a quantification and possibly a ranking. "A teacher can assess a student's strengths and weaknesses without placing a value or grade on the performance." A Google search on "assessment vs. evaluation" turns up ten hits. One hit, the Assessment Center Toolbox page, outlines a number of distinctions. Generally, evaluation leads to a grade; assessment leads to an adjustment of teaching and perhaps of learning. (Although if all of the students received low grades, one might want to reconsider teaching strategies, no?)

On the other hand, Linn and Gronlund (1995), Measurement and Assessment in Teaching, 7th ed., make no obvious distinction. Nor does Nitko (2001), Educational Assessment of Students. I can't find any attempt to distinguish between them in the NCTM Grades 6-8 Mathematics Assessment Sampler (2005), which refers to the NCTM 1995 Assessment Standards for School Mathematics. The NCTM assessment standards use "evaluating" as a rationale for assessment.

Underneath the distinction is a whiff of the eternal tussle between the direct instruction/behaviorist camp and the constructivist camp. Objectivists struggle to assign a quantity to everything as a proxy for reality. Constructivists struggle to find more elusive indicators of growth and mastery through authentic student artifacts. The implication is that objectivists evaluate; constructivists assess. (My personal take: (1) "objectivist", used here because it is in common use in education circles (see, e.g., Roblyer and Doering, 2010, Integrating Educational Technology into Teaching, 5th ed.), is a poor choice of term, but that leads to an entirely other digression; (2) both strategies for teaching in fact have a valid place in the overall teaching process, and I think a really good teacher knows when to use which.)

This leads to another subtlety in the posed question. If the goal of teaching is to push student thinking up the ladder of Bloom's taxonomy, a teacher would want to see evidence of analysis, evaluation and creation (or analysis, synthesis and evaluation in the original scheme). Recognizing evidence of higher-order thinking, or of genuine, relevant novelty, requires intellectual flexibility on the part of the teacher. The best a good teacher can do is to hint to the student at what that looks like, because it (by definition?) cannot be quantified.

This difficulty (if not impossibility) of quantifying novelty or true understanding is reflected in the maddeningly vague rubric terms in Wiggins and McTighe (2006, see above). Two rubrics are needed (or two distinct parts to the rubric): a rubric for understanding, and a rubric for performance. The rubric for understanding must, by necessity, consist of soft, subjective, Bloomish terms; and the teacher must really work at expressing what real understanding looks like.

So following from these two dimensions (understanding vs. performance, which can roughly be translated into content vs. form), evaluating (or assessing) student use of technology tools, at least in the problem-solving / inquiry / discovery / constructivist mode, falls into two categories. First and foremost is the content of the project -- does it reflect real understanding of the material? Second, does the form, the presentation or performance, reflect creativity or facility with the chosen tools? And we might add a third category, related to the social dimension of the knowledge construction / acquisition process: how well did the student do with the collaboration part?

Which brings this back to the beginning.

jd

Saturday, January 30, 2010

More essential questions

Here are some more essential questions, perhaps more practical than my first list, though not as engaging, I think. By themselves they probably aren't useful to a student as the start of an inquiry-based unit, but they can be made concrete for a particular unit. For example, question #2 could be expanded and concretized into "What are different ways to represent North Lawndale? Create three different portraits of your neighborhood -- one that would appeal to a poet, one to a statistician, and one to an ecologist." (Or something like that.)

The letters and numbers in parentheses refer to different learning or teaching standards. If no other abbreviation is shown, they refer to the Illinois Learning Standards. NETS-T refers to the National Educational Technology Standards for Teachers. IPTS refers to the Illinois Professional Teaching Standards.

jd


Math

1. How can algebra be used to solve real world problems (including rates and proportions, percent of change, formulas and weighted averages)? (6D, 8D)

2. What are different ways of representing the world? (5C, 8B, 10A, 25A, 27B)

3. What are different ways of representing the world mathematically? (8B, 10A)

4. How can math representations help us in understanding the world? (8B, 10A)

5. How can graphs represent linear / direct relationships? (8B)

6. What does this mean? "The map is not the territory." (5C, 7C, 8B, 10A, 25A, 27B)

7. Is an algebraic problem-solving strategy better than, say, "guess or check" or "draw a diagram"? (6D, 8D, 11A)

8. Should students be allowed to use calculators in class? (6C, 7C)

9. In plane geometry, why do the angles of a triangle add up to 180º? (9A, 9B, 9C)

10. What would geometry look like if the angles of a triangle added up to more than 180º? (9A, 9B, 9C)

11. What would geometry look like if the angles of a triangle added up to less than 180º? (9A, 9B, 9C)

12. Why are vertical angles congruent? (6B, 8D, 9C)

13. How is algebra different from the arithmetic you have learned up to now? (6A, 6B, 8A, 8C)

14. How can you find the value of pi (without looking it up)? (7B, 7C, 8D, 9A)


Science:

15. What does an ecosystem need to be self-regulating? (12B)

16. What happens to an ecosystem when a new species is introduced? (12B)


Social Science / Technology

17. How do new tools affect society? (11A, 13B, 16C, 18C, 27B)

18. Why do new tools affect society? (11A, 13B, 16C, 18C, 27B)

19. How do new tools affect the way we see things? (11A, 13B, 16C, 18C, 27B)

20. Do new tools change the way people think, or are new tools the result of a change in people's thinking? (11A, 13B, 16C, 18C, 27B)


General:

21. Does your ISAT score show how smart you are? (10A, 10B, 11A, 26B)


For teacher education:

22. How do people learn? (IPTS 1A, 1C, 2A, 2B, 2C, 2D)

23. Do children learn differently at different ages, and if so, how? (2B, 2C)

24. What is an essential question? (1H)


For technology teachers:

25. What is literacy? (IPTS 1A, 1B)

26. What is technology literacy? (NETS-T 1, 2, 3, 4, 5)

27. Should technical skills be taught separately or as part of a regular lesson? (NETS-T 1, 2, 3, 4, 5)

28. How should schools deal with the increased media use / exposure of today's students? (NETS-T 4, 5)

29. At what age should children start using computers? (IPTS 2B; NETS-T 4, 5)

30. Do computers help students learn? (IPTS 1F, 6C)

Essential questions

Essential questions are considered a key aspect of inquiry-based learning. Here is my first pass at creating a list of essential questions:

What are the 39 Steps?

Who do you love?

Why is the ocean near the shore?

How many seas must a white dove sail before she sleeps in the sand?

How many holes does it take to fill the Albert Hall?

Who wrote the Book of Love?

Who will stop the rain?

How come you say you will when you won't?

How come you say you do when you don't?

How many ears must one man have before he can hear people cry?

How many deaths will it take till he knows that too many people have died?

Will the circle be unbroken?

¿Que es mas macho? Light bulb or school bus?

Why does a cube have six sides?

"Hey buddy -- why the long face?" (A horse walks into a bar. Bartender says, ...)

When the DVD says that the image has been re-formatted to fit my TV screen, how do they know what size my TV is?

Knock, knock. Who's there?

What ails thee?

I have some feedback for you. Would you like to hear it?

jd

Friday, January 29, 2010

Generation M2: Media in the lives of young people

The Kaiser Family Foundation released an important study earlier this month on the media use of young people. Titled Generation M²: Media in the Lives of 8- to 18-Year-Olds, the study is the latest in a series of surveys of media use of young people over a 10-year-period.

A few quotes from the "Key Findings" section jumped out at me:
"Over the past five years, young people have increased the amount of time they spend consuming media by an hour and seventeen minutes daily, from 6:21 to 7:38—almost the amount of time most adults spend at work each day."

"Youth who spend more time with media report lower grades and lower levels of personal contentment... Heavy media users are also more likely to say they get into trouble a lot, are often sad or unhappy, and are often bored."

"Two groups of young people stand out for their high levels of media consumption: those in the tween and early teen years (11- to 14-year-olds), and Blacks and Hispanics."
In an interesting discussion with technology teachers and principals, some felt that we should embrace this phenomenon, and find ways to engage students and deliver instruction in more mediated ways. Others suggested media zones at schools, like the smoking patio of old (my high school had one), where kids would be free to use their cell phones and MP3 players. Still others had an opposite response: schools should provide more unmediated spaces, like basketball programs or chess clubs or cheerleading, and more opportunities to engage with the outdoors.

I see another implication of the study: we should be emphasizing media literacy (e.g., how to "read" a television show or an advertisement or a music video or even a video game) more. We have a detailed set of standards and goals and performance objectives for the written word. Perhaps an expanded set of learning goals for other media, so that students can become more critical consumers of all of the media they are exposed to...

jd

Monday, January 25, 2010

Teacher evaluations

The speech by Randi Weingarten, president of the American Federation of Teachers, giving the union's okay to teacher evaluations based on student test performance, may have given Illinois governor Pat Quinn the political cover to sign the "Performance Evaluation Reform Act of 2010" on January 15. The bill "requires every school district to incorporate student performance as a significant factor in teacher and principal evaluations," according to the governor's press release.

The bill is an example of the extortions demanded by the Obama/Duncan administration from cash-strapped states and school districts to qualify for "Race to the Top" funds ("Dangling Money, Obama Pushes Education Shift"; "State Looks at Doubling Cap on Charter Schools"). There have been some brave "just say no"s from some states and districts. See e.g. "Texas Shuts Door on Millions in Education Grants", "In Race for U.S. School Grants Is a Fear of Winning").

An article in the Atlantic Monthly this month, "What Makes a Great Teacher?", praising the Teach for America alternative certification program, also gives a nod to teacher evaluations based on student performance (referring to the District of Columbia, which is rolling out a new evaluation system in which half of a teacher's performance "score" is based on student standardized test performance).

On a related note, someone passed along a link to a 2008 article from The New Yorker by Malcolm Gladwell, "Most Likely to Succeed: How do we hire when we can’t tell who’s right for the job?". I hope to dig into the reasoning in the article real soon now. The article is especially interesting because it references some of the (academic) economic research behind the push for teacher evaluations based on student test performance.

jd

Saturday, January 23, 2010

Three caucuses challenge Stewart

For reference, here are links to the websites of three Chicago Teacher Union caucuses that are challenging the current leadership in the upcoming May, 2010 election.

Caucus of Rank and File Educators (CORE)

Caucus for a Strong Democratic Union (CDSU)

Pro-Active Chicago Teachers and School Employees (PACT)

I can't find a web site for current CTU president Marilyn Stewart's United Progressive Caucus (UPC).

jd

Teaching technology

Posed in class: Should technology (computer skills, typing, software instruction, etc.) be taught as separate skills or as part of an integrated lesson?

What do we really mean by technology? Certainly computers are technology, as is the software that runs on them. But technology can be read in a much larger sense, a la Marshall McLuhan, as the tools we use to interact with the world, the artifacts that mediate our relationship with the world -- media. In this sense, the phonetic alphabet is technology, as are writing, the printed word, the arts in general, number systems and so on.

Given a broad definition of technology, one could argue that teaching has always involved both the teaching of technology as well as concepts, understandings, cultures mediated by the technology (and hence shaped if not determined by the technology). So much of early schooling is devoted to mastering the phonetic alphabet and the written word and numeration. When teachers teach phonemes and the alphabet and addition, they are teaching technology. Running with that idea, what is the best way to teach those skills? Isn't it a mix of direct instruction of basic skills, supported by guided and independent practice which can eventually lead to inquiry-based learning to extend and deepen the skills and cement their real-world relevancy? I think so.

So the short answer to the question above is: both. Skills taught without practical application are quickly lost. Trying to incorporate skills into a project without some prior introduction to those skills is inefficient and probably counter-productive. [I am thinking here of an article by Kirschner, Sweller and Clark, "Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching", from Educational Psychologist, 2006. In a teeny nutshell, the authors argue that constructivist (and discovery, problem-based, etc.) learning places a heavy load on cognitive functions, and if basic knowledge isn't there to support it, the learning is inefficient and ineffective. Guided learning is critical to create the structures that experiential learning can build on: "The advantage of guidance begins to recede only when learners have sufficiently high prior knowledge to provide 'internal' guidance." (p. 75)]

One immediate question that arises when talking about new (i.e., unfamiliar) technologies is who will do the technology instruction? The NETS for Teachers standards expect the classroom teacher to be able to do the instruction ("design and develop digital-age learning experiences and assessments" and "model digital age work and learning"). This expectation is also implicit in the Chicago Public Schools Technology Academy Program (TAP, aka Technology Magnet Cluster Program; see for example the CPS Tech Academy wiki), which is investing heavily in teacher training programs. However, in practice (and following from the distinction in instructional modes described by Kirschner et al. above), depending on the mix of teacher skills and infrastructure, it may make more sense to separate technology direct-skill instruction and do it in a separate lab setting, but coordinate it with traditional subject lesson units (albeit technology-enhanced) in the classroom. That is, let the technology teacher introduce technologies and basic skills (e.g., in the lab during a prep period). Students then deepen their skills working on technology-rich (and inquiry-rich) projects, supported in constructivist style by their more tech-savvy peers, the regular classroom teacher (as able), or a technology coach. The teacher's priority would be encouraging quality content and higher order thinking.

Over time ideally the regular classroom teacher has the time and support to deepen technology skills to better support students in their new media work. But the complexity and mix of new technologies, as well as good pedagogical practice, calls for a division of teaching labor.

jd

Qualitative science as a theory of problem-solving


This is a concept map of qualitative science as a model for doing "inquiry-based learning". It is perhaps mis-titled as "qualitative science as a theory of problem-solving." "Problem-solving" to me is a negative way of framing inquiry -- "problems" indicate something is wrong, they are obstacles to be overcome as opposed to opportunities to be embraced. The spirit of qualitative science (or Goethean science) is much more in the inquiry-as-opportunity domain than the problem-as-obstacle one.

The most complete expression of qualitative science in the education world can be found in the Waldorf movement.

The references for the concept map are below.

Davis, J. (2006). "The Goethean approach and human artifacts." Retrieved from http://www.gocatgo.com/texts/goethe.artifact.pdf January 17, 2010.

Holdrege, C. (Summer, 2005). Doing Goethean science. Janus Head. 8(1).

Miller, D. ed. and trans. (1988). Johann Wolfgang von Goethe: Scientific studies. New York: Suhrkamp Publishers.

Root, C. (Spring, 2006). Conversation between friends: An inspiration for Goethe's phenomenological method. In context. 15. Ghent, NY: The Nature Institute.

Steiner, R. (1996). The education of the child. Great Barrington, MA: The Anthroposophic Press.

The map was done with Kidspiration (mainly to develop skills with the tool, since we use it at my school). Images are from Wikipedia (Creative Commons license or Wikimedia Commons), from http://www.clipartof.com/gallery/clipart/conversation.html, or distributed with Kidspiration.

jd

Bad PD

I am thinking about the last awful professional development session I attended, mainly because I am taking a course this winter in staff development and educational technology, and that's what the first assignment asks us to do.

This of course is a difficult assignment, because they are all generally bad, although usually not for the same reasons. And, mea culpa, some of them are ones I have done. To pick one (not mine) at random: the organizer asked everyone to be sure to be at the training by 8:00 a.m. on a Saturday morning, but had failed to tell the trainer to be there then. So the PD started an hour late, to a group of grumpy teachers. The first hour (or at least it seemed like an hour) was spent going over a tedious history of the organization behind the topic of the PD. All the material we needed to cover could probably have been presented in an hour, but it was successfully stretched out to three. No follow-up. And it was several months before everything was in place to actually apply what we learned about in the training.

Another session, one that I did facilitate, turned out to be severely overbooked by the training organizers. There was a serious failure to think through ahead of time which sessions training attendees would go to (and for many of them, there were no options). The assigned room for the training did not have enough seats, let alone tables or work surfaces. The session was supposed to provide an overview of features of the MacBook and Mac OS X for teachers new to the Mac. A session like this will have lots of questions and confusions, so the ratio of presenters and helpers to attendees is critical. In this case there were two of us in a crowded room trying to help forty or more people. I'm not sure what anyone got out of the session.

Some takeaways: There is some formula governing the technical content of the material, the expertise of the learners, and, from those two, what the ratio of teachers to learners should be. In general, I think, the lower the learners' expertise, the more teachers and helpers are needed per learner. Adequate facilities are important (room size, seating, line-of-sight, sound, heating/cooling, lighting, table space if relevant, etc.). Starting on time and respecting the time of the learners is just good manners. Follow-up would be nice.
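That "some formula" can be made concrete with a toy sketch. Everything here is an assumption invented for illustration -- the constant, the 1-5 ratings, and the function itself are not drawn from any staff-development research:

```python
import math

# Hypothetical heuristic: the less expert the learners and the more
# technical the material, the fewer learners each helper can support.
# The constant 20 and the 1-5 rating scales are made up for illustration.

def helpers_needed(attendees, expertise, technicality):
    """expertise and technicality are rough 1-5 ratings."""
    learners_per_helper = 20 * expertise / technicality
    return math.ceil(attendees / learners_per_helper)

# Forty Mac novices (expertise 1) on fairly technical material (4):
print(helpers_needed(40, expertise=1, technicality=4))  # prints 8
```

By this made-up yardstick, the overbooked MacBook session above would have needed about eight helpers, not two.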

The National Staff Development Council has developed standards for staff development. Most of the problems described above failed on the process standard of design (trainings incorporated inappropriate or weak learning strategies) and weak content (failed to deepen content knowledge given the amount of time).

jd

Sunday, January 17, 2010

CORE launches a Testing Task Force

I joined some 400 other people on January 9 at the Caucus of Rank and File Educators (CORE) Educational Summit. Currents and events at CPS seem to be building to a head. Between the funding crisis, Huberman's performance management mania, hit lists and turnarounds and closings, and, well, the general social crisis that comes through the front door of the school every day, some sort of resistance has to, needs to develop. The Chicago Teachers Union is an obvious organized force to address what's going on, but as far as I can tell, it is a no show. One of CORE's main principles is a member-driven union which I think at this point is sorely needed. (CORE announced a slate for the upcoming CTU elections.)

Since "teacher evaluation" is the big stick to beat teachers with right now, and the big end of that stick is standardized multiple-choice tests, I was especially interested in CORE's new Testing Task Force: "CORE is launching a testing task force to look into how the misuse of standardized tests interferes with making schools that serve the best interests of our students."

The announcement flyer continues: "Our first task will be to encourage teachers that they are not alone in their feeling that tests are terrorizing students, parents, teachers and entire schools. The current testing regime replaces the joy of learning with the bureaucracy of learning."

As I have written in some earlier posts, CPS teachers are working on two testing fronts. ISAT remains the main measure of Adequate Yearly Progress mandated by No Child Left Behind. ISAT is still the main "high stakes" event for the student (it affects their promotion) and for the school (it affects probation, etc.). The second front is the new Scantron Performance Series, which will be administered three times a year, is done online, and provides immediate scores. It is being phased in this year and will replace the Benchmark Assessment in 2010-11. It is a "computer adaptive" test, which means that questions will increase or decrease in complexity until the software determines a score for the student. The test is standards-based, and will provide a number of metrics, including a scaled score, approximate grade-level performance, a normed national percentile ranking and a Lexile score.
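The "computer adaptive" mechanism can be sketched as a toy staircase loop. This is not Scantron's actual algorithm (which is not public); the starting score, step sizes, and stopping rule are all invented to show the general idea of question difficulty converging on a student's level:

```python
# Toy computer-adaptive test: difficulty rises after a correct answer,
# falls after a miss, and the step shrinks until the score stabilizes.
# All the numbers here are illustrative, not Scantron's.

def adaptive_test(answers_correctly, start=500, step=128, floor=4):
    """answers_correctly(difficulty) -> bool simulates the student.
    Returns an estimated scaled score."""
    difficulty = start
    while step >= floor:
        if answers_correctly(difficulty):
            difficulty += step   # got it right: present harder items
        else:
            difficulty -= step   # missed it: present easier items
        step //= 2               # narrow in on the student's level
    return difficulty

# A simulated student who can answer anything at or below level 650:
print(adaptive_test(lambda d: d <= 650))  # prints 648, close to 650
```

The appeal of this approach for the district is obvious: far fewer questions are needed than on a fixed-form test, and the score is available immediately.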

I believe the Scantron test will become the main tool for assessing teachers. CPS will expect a student's score to increase throughout the year. And depending on what statistical voodoo the district uses, the amount of expected increase may vary from school to school. A student's score could increase absolutely, but if the increase does not meet CPS expectations, the teacher has not added enough "value" to the student, and so has failed. Since classroom teachers are being targeted as the sole agent responsible for student advance, the scores will be reflected back on the classroom teacher.
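The "value added" worry above reduces to simple arithmetic, sketched here with hypothetical scores and a stand-in for whatever growth model the district actually uses:

```python
# A student's score can rise in absolute terms and still fall short of
# the model's expected gain -- which gets read as the teacher "failing".
# The scores and the expected gain below are hypothetical.

def value_added(fall_score, spring_score, expected_gain):
    actual_gain = spring_score - fall_score
    return actual_gain - expected_gain  # negative = "no value added"

# Score rose 40 points over the year, but the model expected 55:
print(value_added(fall_score=2100, spring_score=2140, expected_gain=55))  # prints -15
```

Note that everything hinges on how `expected_gain` is computed, which is exactly the "statistical voodoo" that teachers never get to see.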

Describing the standardized testing regime as a form of terrorism is on the mark I think, given the pressure put on teachers who are presented with the spectre of turnaround, closure and unemployment. Teachers are not being motivated, they are just being demoralized.

From the CORE workshop on January 9, I heard two messages coming from panelists about testing. One message says that high stakes multiple choice tests are okay, but the opacity around them makes them suspect. And so the pushback is to make it possible to see the tests and challenge the kinds of questions on them and the scoring methodology -- i.e., to make "better" tests.

A second message challenges the whole premise of standardized testing -- that they are, in fact, designed to "sort and track," as one panelist put it. They are very limited assessment tools. This is of course a much more radical position.

As teachers we recognize the importance of student assessment. But it needs to be responsible and effective assessment that matches the student. Multiple choice tests can have a useful role, under specific circumstances, but for many reasons, right now the standardized testing regime is an education disaster.

In a major speech to the National Press Club last week, Randi Weingarten, the president of the American Federation of Teachers, outlined her "New Path Forward for Public Education." In the speech, Weingarten opened the door to significant changes in teacher evaluation, including assessing the "student performance" of their charges. The evaluation of teachers is a valid expectation (but keep the term "accountability" out of it! Reject the intrusion of market-speak into the sphere of teaching!). But teaching takes place within a complex matrix of factors. The teacher is not a lone actor. To her credit, Weingarten added that her "new path forward calls on principals, administrators and elected officials to ensure that teachers get the tools, time and trust they need to do their jobs."

BUT BUT BUT. Without a strong, effective organization (read "union"), that second part will not happen. Teachers will be expected to perform miracles because they do not have the "tools, time and trust". "Tools, time and trust" are the preconditions for successful teaching. This second part of her message will be lost, and all the dogs will hear is "the union says it is okay to evaluate teachers on test scores." The particularities of a given school and community will be disappeared. And teachers, especially the experienced ones at the top of the pay grade, will be hung out to dry.

jd