This page last changed on Jun 01, 2009 by kbell.

Sample student work

This loop gives teachers a selection of student work on a particular step, either in real time during class or at home in the dashboard/portal. Perhaps the teacher clicks a "teacher edition" map that pops into a different view of that step, prompting for "view all work," "view a random sample," or "view work for selected students."

The loop would be useful for "slowing down" (as described by Bob) during class, giving teachers a chance to see student work without peering over shoulders. It would also be useful for professional development, as a kind of report that could be called up again over the summer.

Technical/operational considerations: This is probably one of the easiest loops to build, requiring mostly reporting capabilities from the tools we already have. It could be integrated into the existing TELS portal for teachers. We could also imagine a separate dashboard application that could run on a handheld (an iPhone or a tablet PC, say) for use in the classroom.
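As a sketch of the three "view work" options above, assuming a hypothetical `step_responses` mapping from student id to that student's work on one step (the names and data shape are illustrative, not an existing portal API):

```python
import random

def sample_student_work(step_responses, mode="random", n=5, students=None):
    """Select student work for one step.

    step_responses: dict mapping student id -> response text (hypothetical shape).
    mode: "all", "random", or "selected", mirroring the three
    "view work" options a teacher could choose from.
    """
    if mode == "all":
        return dict(step_responses)
    if mode == "selected":
        # keep only the teacher-picked students that actually have work
        return {s: step_responses[s] for s in (students or []) if s in step_responses}
    # "random": sample up to n responses without replacement
    ids = random.sample(sorted(step_responses), min(n, len(step_responses)))
    return {s: step_responses[s] for s in ids}
```

The same function could back both the in-class popup and the at-home report view, since only the selection mode differs.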

Bob's 11 loops

  1. Progress data. Reports where students are in the project.
  2. Inquiry index. A measure of how systematic students are in inquiry tasks.
  3. Basic automatic performance measures. Results from embedded assessments and homework based on multiple-choice questions. Include number right, number of tries/amount of help required.
  4. Basic performance measures that require teacher scoring. Short answers. Annotated graph and model snapshots.
  5. Specialized automatic performance measures. Results from numerical questions, KI tasks using Principle Maker, and graph interpretation skills using Smart Graphs.
  6. Levels of difficulty. Nothing motivates students more than providing levels of difficulty of some task. Some activities can have built-in levels and these can be reported to teachers.
  7. Results from conceptual probes. Data on conceptual understanding obtained by having the teacher ask all students to respond to a question, possibly involving a rich object.
  8. Advanced automatic performance measures. Automatic semantic analysis of open response items. Measures of student use of spelling and grammar checkers. Teachers would know how many errors the software detected and how many of these were edited away.
  9. Measures of peer discussions. Assign tasks to groups of student pairs and record the amount of interaction, the number of rich objects exchanged, and the number of key words relevant to the task that are used.
  10. Measures of advice from student experts. The project would encourage students to become expert in something and provide an expert certification process (possibly using "level" challenges). Once certified, students would help others. The teacher would want to know who is certified in what area and how much help they provided.
  11. Trouble tickets. Perhaps students who are having problems could submit an anonymous trouble ticket, and the teacher, or anonymous expert students, could respond.
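Loop 1 (progress data) could be as simple as mapping each student's last completed step onto the project's ordered step list; a minimal sketch with hypothetical data shapes:

```python
def progress_report(step_order, last_step_by_student):
    """Report where each student is, as a fraction of steps completed.

    step_order: ordered list of step ids in the project (illustrative).
    last_step_by_student: student id -> last completed step id.
    """
    # 1-based position of each step within the project
    index = {step: i + 1 for i, step in enumerate(step_order)}
    total = len(step_order)
    return {
        student: index.get(step, 0) / total
        for student, step in last_step_by_student.items()
    }
```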

Marcia's 10 loops

I think each teacher could select their top 10. Which are feasible?

During Instruction

Offer "alerts" for certain types of information. Anything collected each day could be displayed during instruction, but that is too much information. Alerts could identify the student groups that have a given characteristic, and could also report the percentage of the class with it.
The teacher could select 1–3 characteristics, such as:

Reflection notes: for selected notes:

  1. Use of complete sentences
  2. Use a set of key words
  3. Sample or all responses

Visualizations: for selected visualization

  1. Number of runs
  2. Number of variables changed
  3. Use of controlled experiments [if relevant]
  4. Number of revisits from other steps 
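A minimal sketch of the alert idea, assuming per-group metrics (metric names like "runs" are illustrative) and teacher-chosen minimums for the 1–3 selected characteristics:

```python
def check_alerts(groups, thresholds):
    """Flag groups whose metrics fall below teacher-chosen thresholds.

    groups: group id -> dict of metric name -> value (e.g. {"runs": 3}).
    thresholds: metric name -> minimum acceptable value.
    Returns per-group alerts plus the class-wide % of groups flagged
    for each metric, matching the two alert displays described above.
    """
    alerts = {}
    for gid, metrics in groups.items():
        low = [m for m, floor in thresholds.items() if metrics.get(m, 0) < floor]
        if low:
            alerts[gid] = low
    pct = {
        m: 100.0 * sum(1 for g in groups.values() if g.get(m, 0) < floor) / len(groups)
        for m, floor in thresholds.items()
    }
    return alerts, pct
```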

Each day

  1. Create a report for all classes [typically 150 students, in 75 two-person groups]. Offer separate reports by class period.
  2. Could produce the report as soon as the class logs out.
  3. Information the teacher could select to have in the report:

Class characteristics

  1. List of all students who did not log in (absentees)
  2. Histogram of time spent logged in
  3. Histogram of overall progress by activity and step
  4. Histogram of progress by activity and step since the last day logged in [by group, class, all classes]
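The histograms above only need simple fixed-width binning; for example, time logged in (in minutes) could be bucketed like this (a sketch, not tied to any existing portal code):

```python
def histogram(values, bin_width):
    """Bucket values (e.g. minutes logged in) into fixed-width bins.

    Returns a dict mapping bin start -> count, suitable for a simple
    dashboard bar chart.
    """
    counts = {}
    for v in values:
        start = (v // bin_width) * bin_width  # floor to the bin's left edge
        counts[start] = counts.get(start, 0) + 1
    return counts
```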

Run characteristics

  1. Time required to download all curnits [histogram]
  2. Number of failures to load something.
  3. Frequency of returns to the main server.
  4. Progress/time required to get all information back to the main server

Reflection notes: for each note:

  1. Number of words written
  2. Use of complete sentences
  3. Use a set of key words
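The per-note measures above could start as crude string heuristics; a sketch (the complete-sentence check here is only "starts capitalized, ends with terminal punctuation" — a real scorer would need actual language processing):

```python
def note_metrics(text, key_words):
    """Crude per-note measures: word count, a complete-sentence
    heuristic, and which of the teacher's key words appear."""
    words = text.split()
    stripped = text.strip()
    # heuristic only: capitalized start plus terminal punctuation
    complete = bool(stripped) and stripped[0].isupper() and stripped[-1] in ".!?"
    used = [w for w in key_words if w.lower() in text.lower()]
    return {"words": len(words), "complete_sentence": complete, "key_words": used}
```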


  1. Responses [all written work, work of selected students, random selection of notes]
  2. Histogram of answer length
  3. Number with complete sentences
  4. Number using each key word

Visualizations: for each visualization

  1. Number of runs
  2. Number of variables changed
  3. Frequency of changing variables [or some other indicator of interactivity]
  4. Use of controlled experiments [if relevant]
  5. Number of revisits from other steps


  1. Histogram of number of runs, variables changed, revisits
  2. Percent of controlled experiments

Principle maker: for each principle

  1. Number of tries
  2. Accuracy of final principle
  3. Sample alternative principles

Graphs, drawings, other productions

  1. Number of trials [if relevant]
  2. Sample responses [all, selected students, random selection]


Discussions: for each discussion

  1. Percent of groups participating; average notes by group
  2. Length of notes, histogram of note length
  3. Frequency of revisits to discussion
  4. Summary of comment types [question, and, but, or]
  5. Frequency of comments on notes submitted by others [can we expect that someone has commented on your idea?]

Embedded Test questions

Pretest-posttests; challenge questions

  1. Frequency of response to each question
  2. Answer length, complete sentences, key words for each explanation.
  3. Sample explanations [all, selected students, random selection]
  4. Accuracy and distractor selection for each multiple-choice or short-answer response.
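Per-question accuracy and distractor frequencies could be tallied directly from the logged responses; a sketch assuming a hypothetical list of per-student answer dicts and an answer key:

```python
def item_analysis(responses, answer_key):
    """Per-question accuracy and distractor-choice frequencies.

    responses: list of dicts, one per student, question id -> chosen option.
    answer_key: question id -> correct option (both shapes illustrative).
    """
    report = {}
    for q, correct in answer_key.items():
        chosen = [r[q] for r in responses if q in r]
        freq = {}
        for c in chosen:
            freq[c] = freq.get(c, 0) + 1
        right = freq.get(correct, 0)
        report[q] = {
            "n": len(chosen),
            "accuracy": right / len(chosen) if chosen else 0.0,
            # everything except the correct option counts as a distractor
            "distractors": {o: k for o, k in freq.items() if o != correct},
        }
    return report
```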

For each unit

  1. Summary of the information for each day
  2. Organize by day or by run.
  3. Organize by class or all classes.
Document generated by Confluence on Jan 27, 2014 16:42