Saturday, March 14, 2009

I am enrolled in the Technology in Education program at National-Louis University. I may add comments about the program itself in a future post, but not now. The program requires me to take two one-credit workshops, and today, Saturday, I am in the first session of a workshop on Web 2.0. The workshop is being taught by Randy Hansen, who also happens to be the Program Director.
I know that Web 2.0 is one of those ideas that has been out there for a while now. It is more of an idea or a concept than a specific technology -- or maybe it is a bundle of technologies that makes the realization of a general idea possible. And that General Idea is that we are not just the objects of the web, but also the subjects (in sort-of-Brechtian terms); or not just consumers, but also producers (in ho-hum market terms).
Our first assignment today is to create a blog in Blogger, but I am recycling this already-existing blog.
Posting now...
jd
Sunday, March 1, 2009
A tangled mess
No, I am not referring to the several milk crates of external PC speakers we removed from the classrooms, with their accompanying tangle of wires, now in one -- I was tempted to say "Gordian knot", but that implies there is some prize for untangling the mess -- umm ... knotted mess. What a silly thing to provide with every classroom computer. Much better to have provided headsets...
No, I am referring to the mess of state math benchmarks, performance descriptors and assessment frameworks. (I understand that each of these is designed with a different goal in mind but...)
In Illinois, each content standard, including math, has five benchmarks "that describe what students should know and be able to do" (from the Illinois State Board of Education's (ISBE) "Introduction - Design for Performance Standards" document). The benchmarks are organized into five broad grade-level categories: early elementary, late elementary, middle/junior high, early high school, and late high school.
The standards and benchmarks explain "what" students should know at different periods of their education. "Performance descriptors" were added around 2002 to specify "how well students perform at various points on an educational development continuum." The performance descriptors use "stages", labelled A through J. Stages A-C map to the "early elementary" level, D-E to "late elementary", F-H to middle/junior high, I to early high school, and J to late high school. These stages can span grades. For example, stages E, F and G correspond to sixth grade, to address students that are a bit behind schedule, on schedule, and ahead of schedule. Three of the stages correspond to the state achievement test (ISAT) expectations: C for third grade, E for fifth grade, and H for eighth grade. "The other stages are not meant to explicitly correspond to the missing grades between." (Introduction referenced above, emphasis in the original.) The document also provides a clear pyramid showing the relationship of goals, standards, benchmarks and performance descriptors (see graphic).
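(For the code-inclined, here is how I picture the stage-to-level mapping as a simple data structure. This is just a sketch in Python; the stage groupings come from the ISBE document quoted above, but the names and layout are my own invention.)

```python
# Sketch: ISBE performance-descriptor stages mapped to benchmark grade levels.
# Groupings are from the ISBE "Introduction - Design for Performance
# Standards" document; the variable names are my own.

STAGE_TO_LEVEL = {
    "A": "early elementary", "B": "early elementary", "C": "early elementary",
    "D": "late elementary",  "E": "late elementary",
    "F": "middle/junior high", "G": "middle/junior high", "H": "middle/junior high",
    "I": "early high school",
    "J": "late high school",
}

# The three stages that correspond to ISAT expectations (stage -> grade tested).
ISAT_ANCHOR_STAGES = {"C": 3, "E": 5, "H": 8}

print(STAGE_TO_LEVEL["F"])      # middle/junior high
print(ISAT_ANCHOR_STAGES["E"])  # 5
```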
Still with me? The first problem is that the performance descriptors use a different numbering scheme, one that does not map cleanly onto the next level up, that is, onto the benchmarks. For example, consider Benchmark 6.C.2a.
6.C.2a Deciphered:
- "6" is a learning goal -- Illinois learning goal 6 is "number sense";
- "6.C" is an Illinois learning standard -- "Compute and estimate using mental mathematics, paper-and-pencil methods, calculators and computers";
- "2" indicates the benchmark level (1 = early elementary, 2 = late elementary, 3 = middle school, etc.);
- "a" identifies the specific benchmark: "Select and perform computational procedures to solve problems with whole numbers, fractions and decimals." Standards that have only one corresponding benchmark omit the lower-case a.
And Benchmark 6.C.2b (so same standard and level) states "Show evidence that computational results using whole numbers, fractions and decimals are correct and/or that estimates are reasonable."
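Since the code unpacks so mechanically, here is a quick sketch in Python of how one might parse a benchmark code into its parts. The regular expression and the function name are mine, not anything ISBE publishes.

```python
import re

# Sketch: unpack an Illinois benchmark code such as "6.C.2a" into its parts.
# The pattern and field names are my own, not an official ISBE format spec.
BENCHMARK_RE = re.compile(
    r"^(?P<goal>\d+)\.(?P<standard>[A-Z])\.(?P<level>\d)(?P<benchmark>[a-z]?)$"
)

def parse_benchmark(code):
    match = BENCHMARK_RE.match(code)
    if match is None:
        raise ValueError(f"not a benchmark code: {code!r}")
    parts = match.groupdict()
    # Standards with only one benchmark omit the trailing lower-case letter.
    parts["benchmark"] = parts["benchmark"] or None
    return parts

print(parse_benchmark("6.C.2a"))
# {'goal': '6', 'standard': 'C', 'level': '2', 'benchmark': 'a'}
print(parse_benchmark("6.C.2"))
# {'goal': '6', 'standard': 'C', 'level': '2', 'benchmark': None}
```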
But the Performance Descriptors are not mapped to the benchmarks. Or at least I have not been able to find such a document from ISBE. The closest things I found were documents from the Chicago Public Schools (CPS) Standards-Based Curriculum Initiative (SBCI). Unfortunately, the initiative appears to have been shut down, and I can no longer find the documents it produced anywhere on the Internet. I have copies of the English Language Arts documents for Grades 6 and 7, and the math documents for Grades 6, 7 and 8. [If anyone reads this and has access to the other grade documents, I would very much like to see them!] The SBCI documents matched the Performance Descriptors with the Benchmarks (although they did not number them as the ISBE documents do). Some Performance Descriptors apply to multiple benchmarks, a problem the SBCI authors tried to address. For example:
Performance Descriptor 6.C, stage F, number 1 in the ISBE descriptor document, says "Select and use appropriate operations, methods, and tools to compute or estimate using whole numbers with natural number exponents." SBCI lists this Performance Descriptor with both Benchmark 6.C.2a and Benchmark 6.C.2b.
I suppose this is a relatively minor problem, and at least the SBCI documents exist to aid in translating between the benchmarks and the performance descriptors, if one has a copy of their document for the grade in question.
The problem is more difficult with the "Assessment Framework", which is one more Illinois system for organizing math learning tasks. According to the ISBE Assessment Framework website:

"The Illinois Assessment Frameworks are designed to assist educators, test developers, policy makers, and the public by clearly defining those elements of the Illinois Learning Standards that are suitable for state testing. They are not designed to replace local curricula and should not be considered state curricula. They define the content that may be assessed on ISAT..."

The Assessment Frameworks (AF from here on) are organized by standard, but that is the extent of the attempt (again, as far as I can tell) to correlate the framework with either benchmarks or performance descriptors. The AF has its own coding scheme. All items (referred to as "objectives") for a particular standard are numbered sequentially by grade (see the Illinois Mathematics Assessment Framework Grades 3–8, 2006). For example, for item 6.6.12:
- The first "6" indicates the standard (in this case, "number sense");
- The second "6" indicates the grade level (grade 6);
- The "12" indicates the learning objective to be assessed, numbered sequentially from 1 for standard 6, grade 6.
There are at least two practical problems that arise from these parallel coding schemes. I have been putting together an online searchable database of online math activities (let me know if you would like to see the work-in-progress). A user can assign benchmarks, performance descriptors and assessment framework items to resources he or she has located on the Internet. It would be very nice if setting one type (e.g. an assessment framework objective) could automatically set a corresponding (or closely related) benchmark and performance descriptor (the latter especially, since it is more specific). But to do this, I need to create my own translation table. I started on this dreary task, but my brain quickly fogged over with the mind-numbing vocabulary of learning standards, benchmarks, etc. I would hope that the designers of the standards and task schemes would have done this already [please let me know, anyone!].
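To make the idea concrete, here is a sketch in Python of what such a translation table might look like. The descriptor-to-benchmark pairing comes from the SBCI example above; the assessment-framework row is purely illustrative, since that is exactly the mapping I have not been able to find.

```python
# Sketch of a hand-built translation table between the coding schemes.
# The descriptor-to-benchmark pairing is the SBCI example from this post;
# the AF_TO_BENCHMARKS entry is illustrative only -- not an official mapping.

# Descriptor (standard, stage, number) -> benchmarks it applies to.
DESCRIPTOR_TO_BENCHMARKS = {
    ("6.C", "F", 1): ["6.C.2a", "6.C.2b"],
}

# Assessment framework objective -> benchmarks (hypothetical!).
AF_TO_BENCHMARKS = {
    "6.6.12": ["6.C.2a"],
}

def related_descriptors(af_objective):
    """Suggest performance descriptors for an AF objective via shared benchmarks."""
    benchmarks = set(AF_TO_BENCHMARKS.get(af_objective, []))
    return [descriptor
            for descriptor, mapped in DESCRIPTOR_TO_BENCHMARKS.items()
            if benchmarks & set(mapped)]

print(related_descriptors("6.6.12"))  # [('6.C', 'F', 1)]
```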
A second problem comes when trying to use CPS math assessments to do data-driven instruction. The CPS Math Benchmark Assessment (CMBA) only categorizes its questions to the standard level. This is too blunt -- the questions should be categorized by performance descriptor and assessment framework objective (especially since the sample ISAT test questions are matched to assessment framework objectives), so that a teacher can know what general skill or concept to teach students who struggle with particular questions. Yes, the teacher should be able to determine that from the question, but teachers are notoriously short of time, so why not provide some direction for them already?
In addition, the numbering schemes provide a handy key for locating resources to address specific learning tasks, whether in the commonly used curricula or online. Without such a key, the teacher must do a lot of extra work. (I should note that the CPS Reading Benchmark Assessment does categorize questions by assessment framework; and the CMBA does provide distractors for each question, which are a big help in narrowing down what may have been the student's misunderstanding.)
As I noted, the standards, benchmarks, performance descriptors and assessment frameworks are designed for different things -- what a student should know, vs. how well a student should know it at a given stage, vs. what tasks they will be assessed on in a standardized test. But these categories are interrelated, and should correlate. According to "understanding by design" principles (and probably good educational principles in general), assessment should follow from the Big Goals of learning, so it should be possible (or rather, it should be inherent in the design) to correlate the assessment framework items with specific benchmarks and performance descriptors.
And these correlation tables could exist for all I know. But I have been unable to locate them online. If anyone knows of such a thing, please let me know!
I could expand this further to the babel of different state standards and the various additional categories each state has come up with. (As an aside, see The State of State Math Standards 2005 by David Klein et al., via the Thomas B. Fordham Institute, for an evaluation of different state math standards. Illinois received a C, up from a D in 2000.) Matching different state standards is sufficiently complex that it has opened up space for at least one commercial service, Academic Benchmarks, which helps curriculum publishers match their materials to different state frameworks. I assume their general methodology is to develop a lingua franca of standards/benchmarks/etc. and then build a database of state standards keyed to that common dialect. The National Council of Teachers of Mathematics standards, expectations and "focal points" may be sufficient for that purpose, or at least a good starting point. I made a weak attempt to do this using BrainPOP videos as a common reference point, since they have been mapped to different state benchmarks, etc. ("powered by Academic Benchmarks"), but their videos don't hit every benchmark or descriptor, so the result was spotty (not to mention that it was an incredibly tedious task).
I think an open-source, online tool is needed, something like a currency converter. Need to translate Tennessee "Student Performance Indicator" 6.1.1 to a similar Illinois Performance Descriptor? Enter the Tennessee number on one side, select the source state and the destination state, and click the "Translate" button...
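A sketch, in Python, of the shape I imagine for such a tool. The Tennessee-to-Illinois pairing below is a made-up placeholder -- the whole point is that the real crosswalk data does not yet exist in the open.

```python
# Sketch of a "currency converter" for state math standards. Every mapping
# here is a hypothetical placeholder; a real tool would need a vetted,
# community-maintained crosswalk database behind it.

CROSSWALK = {
    # (source state, code, target state) -> roughly equivalent target codes
    ("TN", "6.1.1", "IL"): [("6.C", "F", 1)],  # hypothetical pairing
}

def translate(code, source_state, target_state):
    """Translate one state's standard code into another state's scheme."""
    return CROSSWALK.get((source_state, code, target_state), [])

print(translate("6.1.1", "TN", "IL"))  # [('6.C', 'F', 1)]
```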
jd
Wednesday, December 31, 2008
Kidspiration vs Inspiration

[Graphic: side-by-side comparison of Kidspiration and Inspiration]
I wrote an explanation of the above graphic, was careful to save it as I went, and then published it -- and the text had disappeared. Thanks, Blogger. So this is a re-write, which I will keep brief.
The graphic above is my attempt to get a feel for Kidspiration, and also to compare it with its big sister, Inspiration. I wanted to see what we would be missing if we purchased Kidspiration for our Mac lab instead of Inspiration. It turns out not much, for our students (we are a pre-K through 8th grade school). Inspiration provides more formatting tools and export options, geared towards creating serious, quality presentations. But Kidspiration provides the same basic concept-mapping tools, and adds a lot of teacher-friendly extras, like the ability to create activities and include teacher comments. It also includes a bundle of activities, plus some special math learning tools, including digital versions of pattern blocks, fraction tiles and base ten blocks.
Our older kids (say, 7th and 8th graders) may be put off by the "kid" part of Kidspiration, but I think that can be worked around, while still getting the benefits of "visual learning".
jd
Sunday, December 14, 2008
Using iChat for video conferencing
We did our first school-to-school video conference a week ago (12/5). Our Technology Magnet Cluster partner school is Kellman Elementary, near Sacramento and Polk. Shane Jonas, the lead tech teacher at Kellman, proposed using Apple's iChat as an easy way to have students at the two schools meet. And guess what? It really was an easy, easy way to have the students interact. We recorded the Dvorak side of the conversation, and you can see it on the Dvorak website.
Our building has a funky wireless infrastructure (only one access point on the third floor), and I ran into some network issues at the beginning (not shown on the video). It was a classic error of testing out the conferencing on the second floor, where the connection is strong (three strategically placed access points), and assuming that it would work the same on the third floor, and not allowing enough time to set things up on the third floor before starting the chat. Shame. Once we started the conference, though, everything worked just fine. The students were very patient.
Kellman had a video camera with a FireWire connection, which seems to be a requirement for an external video camera for iChat. This allowed the Kellman side to be much more flexible with the video (though this is hard to see on the video). We used the built-in camera on the MacBook (only USB cameras here), so the students had to scrunch together so that they could all be seen by Kellman, and the Dvorak teacher, Ms. Minter, had to remember to step in front of the MacBook to be seen by our camera. Ms. Minter's classroom has a wireless "audio enhancement system", which amplified the sound nicely for the microphone in the MacBook and enabled the Dvorak students to also easily hear the Kellman students. Kellman used an external microphone; I'm not sure how they did the audio.
Except for the initial technical difficulties, I think it was a very successful first time out. The students seemed fascinated by the whole thing, and there are a lot of possibilities for this kind of interaction and collaboration going forward.
For next time: Allow more time for set-up and pre-conference testing; also help the students understand the importance of preparing remarks ahead of the conference.
jd
Sunday, November 30, 2008
Edubuntu and thin client computing
The link below goes to a project I had to do as part of my work towards a technology specialist certification through National-Louis University.
The project lays out how to set up a thin-client computer lab using Edubuntu. I am excited about the possibilities of this: it is cost-efficient; it saves on labor time; it extends the life of otherwise obsolete desktop computers; and it taps into the growing open source movement and the great software coming out of that world.
Of geeky techno interest might be the last appendix, where I describe creating a mini thin-client setup on my Macintosh laptop (MacBook), using VMware Fusion to create a virtual environment inside which I installed Edubuntu, and then using the Ethernet adapter and the AirPort adapter as my two network interfaces. The wired connection went to a switch, into which I plugged an aging Dell laptop which served as my thin client. And it worked! Maybe the technical details will be useful to somebody out there.
Here is a link to the document: Thin Client Computer Lab Project
It is a PDF, and on the fat side -- almost 1 MB -- pictures!
jd
Saturday, November 15, 2008
Here's my part to promote the 2008 "Give One Get One" (aka G1G1) program of the One Laptop per Child (aka OLPC) project.
The program begins 11/17/2008; orders are being handled through http://www.amazon.com/xo.
The following is lifted from the OLPC wiki:
* Join our community mailing list, grassroots@lists.laptop.org, to discuss how to get the word out about the new campaign.
* Blog it, add a comment about it to every article about OLPC and the XO.
* Social site updates -- Facebook, Twitter, MySpace: there are OLPC accounts on many of these sites which need maintenance and regular updating. For instance some 2007-era badges and promotions need to be updated to link to the Amazon site.
* Viral marketing. Put http://www.amazon.com/xo in your e-mail signature. Mention G1G1 in blog posts. Comment on misinformed or incomplete articles online, and include the link and the date, Nov. 17.
-jd
Sunday, November 2, 2008
Use of audience response system in 7th grade class
Our use of technology at Dvorak is moving along much faster than this blog would suggest. I will try to update the blog more often!
Below is some text that accompanies a rough three-minute video I put on our website (dvoraktech.org) about the Turning Technologies TurningPoint audience response system that we are using in some of our classes.
Link to audience response system video
This clip shows the use of the TurningPoint audience response system in Ms. Minter's 7th grade class. I think this clip is interesting for a couple reasons.
First, although the PowerPoint slides are pretty basic, they indicate the best practices of a master teacher: Ms. Minter took the initiative to see how the "clickers" (which is how the response system has come to be known) would work in her class. The slides only had A, B, C and D on them, but this worked because the students were given a text to refer to (last year's benchmark assessment test booklet). The students were given the question number; answering the question involved some reading and referring to the test text. It was exciting to see a teacher take up the new technology and play around with it to see how it would work in practice. Ms. Minter came up with a simple, workable -- and successful -- way of putting brand new technology to use.
Second, Ms. Minter and I were surprised to see how engaged students were with the clickers. This is evident in the clip when they start doing the New Year's Eve countdown, and their response when the correct answer is shown. I'm not sure why the clickers are so popular -- perhaps it is a combination of the instant feedback and recognizing oneself as being part of a group.
The entire clip is about 3-1/2 minutes long, and shows one complete question sequence. Ms. Minter set the timer at three minutes, which in practice turned out to be a bit long. But that's all part of good teaching practice -- design, try, reflect, revise, try again, and so on. The camera work is a bit shaky -- I didn't have a tripod at the time. And I haven't mastered editing with iMovie, so the ending is a bit messy; on the other hand maybe it adds to the authentic feel...
jd