Sunday, May 29, 2011

Drunk driving

We had a staff meeting at my school on Friday. Classroom teachers received forms for sorting their students into Tier 1, 2 and 3 groups for implementing Response to Intervention, or RTI. K-2 grades will use DIBELS data to sort students, and grades 3-8 will use Scantron data. Since I work on organizing the Scantron testing at my school, and end up sifting through the data, I was especially interested in the choice of Scantron data to slot students into RTI tiers.

The form passed out came from our Area office, and uses Scantron National Percentile Rankings (NPR) for sorting. Students at or above the 25th percentile fall into Tier 1 (the lowest priority for special interventions), students from the 11th through the 24th percentile fall into Tier 2, and students below the 11th percentile fall into Tier 3. (I may be off by a percentile or two -- I'm doing this from memory.)
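For concreteness, here is that sorting rule as a small code sketch. The cutoffs are from memory, so treat the exact numbers as approximate:

```python
# A minimal sketch of the RTI tier-sorting rule described above.
# Cutoffs are from memory and may be off by a percentile or two.

def rti_tier(npr):
    """Map a Scantron National Percentile Ranking (1-99) to an RTI tier."""
    if npr >= 25:
        return 1   # Tier 1: lowest priority for special interventions
    elif npr >= 11:
        return 2   # Tier 2: targeted interventions
    else:
        return 3   # Tier 3: most intensive interventions

print(rti_tier(40))  # -> 1
print(rti_tier(7))   # -> 3
```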

I have written about the problems of standardized testing and "data-based decision making" before, likening it to the drunk looking for his car keys under a streetlight, not because that is where he dropped them, but because that is where the light is. The light in this case is the pale beam of standardized test data -- it doesn't tell us nearly enough about the student, and for many of the neediest students, it is hopelessly distorting. We won't find the keys to student success with the data, but it's easy for administrators to collect, and to pretend that it means much more than it does. Drunk on data, and wandering off course from the outset.

For example: After we completed the math testing this Spring, about 8 percent of students at my school dropped more than 100 points from their Fall scores. The numbers were worse for reading -- almost 11 percent of students had dropped more than 100 points. These are big drops, well outside of the statistical error range. Barring an epidemic of brain injury, such a drop can only be attributed to subjective student factors: boredom, disinterest, a desire to be done with the test, difficulty with the computer medium, etc.
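The check itself is trivial. Here is a sketch of the sort of thing I ran against the data; the student names and scores below are invented for illustration:

```python
# A hypothetical sketch of flagging suspect scores: any student whose
# Spring score fell more than 100 points below their Fall score.
# Student names and scores are made up for illustration.

scores = [
    {"student": "A", "fall": 2350, "spring": 2210},  # dropped 140: flagged
    {"student": "B", "fall": 2400, "spring": 2425},  # gained: fine
    {"student": "C", "fall": 2280, "spring": 2150},  # dropped 130: flagged
]

flagged = [s for s in scores if s["fall"] - s["spring"] > 100]
pct = 100 * len(flagged) / len(scores)
print(f"{len(flagged)} of {len(scores)} students ({pct:.0f}%) dropped more than 100 points")
```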

The belief that these initial numbers were faulty was confirmed when we made students retake the test. Most of the re-takers did much better, erasing most of the drop, and in many cases swinging into solid gains for the year.

The point, again, is that testing students is not the same as taking their temperature, and so much depends on the subjective factor. But the RTI strategy doesn't appreciate the subjective factor -- it's all about the data, as goofy as the data may be. One student at my school went from dropping over 200 points, landing in the 7th percentile, to gaining over 50 points for the year (not a great gain, but enough to keep him at grade level) after he retook the test -- moving from Tier 3 (deserving special interventions) to Tier 1 (no special interventions). [Value-added alert!] It should be said that spuriously testing into a higher percentile is much less likely than spuriously testing into a lower one, so the general danger is that students will be assigned to tiers they should not be in, consuming time and resources that perhaps should go to needier students.

One of the ironies in all of this is that all of the teachers I have talked to already have a sense of their students' abilities, and could quickly categorize their students without the Scantron numbers in front of them. After all, they assess their students every day. And the teachers can tell which numbers are off. And all would probably say that the time and the resources to provide interventions to all the students who really need them are just not nearly enough, and that the process to get the help needed is too long and places too much burden on the already over-stretched teacher.

jd

Saturday, May 28, 2011

Clarification

I want to clarify my previous post, as a reminder of one of the many pitfalls awaiting the tech-heavy lesson.

I am teaching a course at Dominican U., Integrating Technology Into the Curriculum. On the first night of class, I have the "candidates" (how DU refers to folks in the teacher education program, to distinguish from "students", whom the future teachers will be teaching) get set up with the basic Web 2.0 tools. They create a blog if they don't have one already, set up a wikispaces account to work on a course wiki we create, and they set up a Diigo account to begin a professional library of web resources and also to experience social bookmarking.

For blogging, I suggest Google's Blogger, mainly because it is what I am familiar with. I haven't created a new Blogger account in a while, but "it worked fine when I tried it".

In class, however, things went differently. After the candidates created their blogs, Google prompted them to enter a phone number as a final confirmation step. I assume this is to prevent mass creation of bogus blogs for whatever spammish purpose. The confirmation process was a surprise to me, beyond the inscrutable, illegible "type these letters" images Google usually uses. The privacy warning flags went up immediately in most everyone's mind, I think, compounded by having watched, a few minutes earlier, the Onion's Google Opt-Out Village hilarity, which only deepened the distrust. "Okay, I'll sacrifice myself for the class. You can use my cell number if you don't want to provide your number." Except we were in the "Lower Level" of Parmer Hall (translation: "basement"), with no cell signal. So, big embarrassment. As soon as I did get a cell signal, I received a dozen or so texts from Google for the new blogs, but the texts did not identify which blogs they went to, so they were useless.


[Video: "Google Opt Out Feature Lets Users Protect Privacy By Moving To Remote Village" -- The Onion]

I understand why Google has the additional confirmation steps. But the cell phone number request seems like too much, especially if you are trying to create accounts someplace where there is no cellphone coverage. Perhaps an email address confirmation is too easy to automate, and thus too easy a way around Google's defenses. I don't know what a better mechanism might be, but this one certainly interfered with what I hoped to do.

And hence the previous post, done during class to illustrate how to make a blog post, and how to comment on one.

Now, I think a fundamental rule of using tech in the classroom is to go through all of the steps first, before class -- a dress rehearsal. Which of course I didn't do. On the other hand, perhaps the multiple attempts to create new blogs from the same IP address, or pool of IP addresses, triggered some additional confirmation process. A classic quality assurance engineering problem -- not testing under the actual conditions of use -- and how do you easily simulate, ahead of time, a class of students doing the same thing at the same time? Yes, I know there are special apps to simulate multiple users doing something at the same time, but I don't see using them for the case described above. Experience is perhaps the better guide -- I have seen similar problems when creating GMail accounts in a class, so I should have known there might be issues.
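For what it's worth, those simulation tools boil down to something like the toy sketch below, which also shows why one wouldn't have caught this case: all the simulated signups come from a single machine, not from a room of students behind the school's pool of IP addresses. (The signup() function here is a made-up stand-in, not a real API.)

```python
# A toy sketch of simulating many users doing the same thing at once.
# signup() is a hypothetical stand-in for a real signup request; an
# actual load test would issue real HTTP requests to the service.

import threading

def signup(student_id):
    print(f"student {student_id} creating a blog...")

threads = [threading.Thread(target=signup, args=(i,)) for i in range(30)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```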

Memo to self: In the future, ask adult students to create their accounts before class. For younger students, stick with sites where I can create their accounts.

jd

Sunday, May 8, 2011

Common Core and math

The new Common Core State Standards (CCSS) are coming, ready or not. Illinois and 43 other states have adopted them. As if everything else going on in education wasn't enough, CCSS will be a Big Deal for teachers in all grades when they go into effect in the 2014-15 school year (which right now sounds like it is sometime in the 23rd century).

For a good write-up about CCSS in relation to math education, see the latest column by J. Michael Shaughnessy, president of the National Council of Teachers of Mathematics, titled "CCSSM and Curriculum and Assessment: NOT Business as Usual". From his write-up, expect new curricula, lots of PD and powerpoints, and much general wailing and gnashing of teeth as the oil tanker of math education is turned.

Two things stand out for me re: the math standards (especially the way Shaughnessy explains them).

One is the emphasis on both math content and math practice. According to CCSS, there are eight math practices that students should master:
  1. Make sense of problems and persevere in solving them.
  2. Reason abstractly and quantitatively.
  3. Construct viable arguments and critique the reasoning of others.
  4. Model with mathematics.
  5. Use appropriate tools strategically.
  6. Attend to precision.
  7. Look for and make use of structure.
  8. Look for and express regularity in repeated reasoning.
With the exception of "model with mathematics" (#4), the math practices outlined in the new core standards are really general, lifelong skills in thinking and solving problems. If teachers are allowed to organically infuse the classroom with these practices, education may well look very different.

The other thing about Shaughnessy's write-up that stands out for me is how standardized testing will change to reflect the new standards. He includes links to some initial draft assessments, including the Math Assessment Project (MAP) and the Inside Mathematics initiative. The sample assessments are much more about solving problems -- "performance tasks" -- than simple skill assessment (as is the case with most of the current standardized tests).

This emphasis on performance tasks is supposed to be reflected in the two initiatives to revamp standardized testing. (Illinois is supporting the Partnership for Assessment of Readiness for College and Careers (PARCC) initiative; the other is the SMARTER Balanced Assessment Consortium (SBAC); both are supported by the Department of Education.) Both initiatives will be administered online. PARCC calls their tests, to be administered four times a year, "next-gen assessments".

I am not sure how "performance tasks" (think of the ISAT's extended response as a possible example) will be done online, if at all. Both consortia are talking about computer-adaptive tests for portions of their assessments, which makes me wonder how those will be different from, say, the Scantron Performance Series tests we are taking right now. From PARCC's powerpoint, it appears that the assessment process will drive the instructional frameworks that will end up driving the implementation of CCSS. This doesn't have to be a bad thing (that is, the tail wagging the dog); it all depends on whether really good assessments can be developed from CCSS that can actually assess a student's math practice.
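For readers who haven't met the term, "computer-adaptive" just means the test adjusts item difficulty as the student answers. Here is a bare-bones sketch of the idea -- real engines (Scantron's included) use item-response theory, and everything below (the function name, the 0-100 difficulty scale, the step sizes) is invented for illustration:

```python
# A toy sketch of a computer-adaptive test loop: each answer nudges an
# ability estimate, and the next item is chosen near that estimate.
# Real CAT engines use item-response theory; this is only the skeleton.

def run_adaptive_test(items, answer_fn, num_items=10):
    """items: dict mapping difficulty (0-100) to a question;
    answer_fn(question) returns True if answered correctly."""
    ability = 50.0            # start at an average ability estimate
    step = 16.0               # how far each answer moves the estimate
    for _ in range(min(num_items, len(items))):
        # choose the remaining item closest in difficulty to the estimate
        difficulty = min(items, key=lambda d: abs(d - ability))
        question = items.pop(difficulty)
        if answer_fn(question):
            ability += step   # right answer: try something harder
        else:
            ability -= step   # wrong answer: try something easier
        step *= 0.7           # shrink the steps toward a final estimate
    return ability
```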

The general drift in math standards, to me, reflects similar changes made to the National Educational Technology Standards (NETS), revised in 2007. NETS focuses not on technical skills, but on how the tools are used. Only one of the six NETS for Students standards refers to technique ("Technology Operations and Concepts"); the other five emphasize creativity, communication and collaboration, information fluency, critical thinking, and citizenship. That is, both CCSS math practices and NETS emphasize a meta-approach to learning -- how to think, how to create.

The mere existence of NETS does not mean, of course, that they are implemented in a deep way. Nor will the mere existence of thoughtful standards for math practice mean that they will improve math education. So much depends on whether teachers will be allowed to implement them.

jd