Projects: UDL: Reports for teachers & researchers
This page last changed on May 20, 2008 by sfentress.
via the Portal
First, a vocabulary problem has arisen that needs to be fixed. Sometimes units (like friction34, i.e. friction blue) are referred to as activities. This will cause confusion. In the screenshot below (from the portal), the columns to the right of students' names are labeled Activity 1, Activity 2, and Activity 3, but a mouseover shows that the columns in fact represent the UDL grades 5-6 electricity, friction, and plants units. The labels in the top row should read Unit 1, Unit 2, and Unit 3, or, if feasible, electricity 56, friction 56, plants 56.
(There might be more than one way to use the portal to find this type of information. For example, the portal currently includes a screen for an individual student showing which units he or she has used; clicking a check mark on that screen might lead to a detailed view of the student's engagement in the 10 activities for that unit, per above.)
Through the portal, it is currently feasible for a teacher or researcher to see a particular student's saved work on a selected unit (e.g., friction 34). That's useful. A variety of ways to look at saved work might be useful to teachers and researchers, of which this is one.
In several of these cases (items 2 and 4 above, particularly), it will be useful to develop a UI for teachers and researchers to quickly move from one student to another within a class without having to navigate back and forth among several screens.
Presumably all or most of the UDL-related reports would be for researchers, although our vision is that teachers need to be able to assign features to students (as shown, for example, in some of the original PowerPoint slides showing Setup by Class and by Student; see the last page).
We have discussed the need for data collection through the portal that will allow us to do research associating students with their special needs (e.g., English language learner; identified as Special Ed with an IEP; poor reading skills). Next year, a teacher should be required to enter this type of data for all students in his or her class or classes. We need the information in order to associate behaviors (e.g., clicking "help" buttons) and outcomes (e.g., post-test scores) with students' learning needs. (We also need a way to handle this information that protects students' confidentiality, and we need to brainstorm some more about that. Perhaps these data are only provided to researchers in files that replace students' names with ID numbers.)
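One way to brainstorm the confidentiality idea above is a sketch of the name-to-ID replacement step. This is a minimal illustration, not a committed design; the field names (`name`, `need`, `score`), the salt, and the record format are all hypothetical.

```python
import hashlib

def pseudonymize(records, salt="project-secret"):
    """Replace student names with stable, non-reversible ID codes.

    `records` is a list of dicts like {"name": ..., "need": ..., "score": ...}.
    The field names and the salt value are illustrative assumptions only.
    """
    out = []
    for rec in records:
        # Same (salt, name) pair always yields the same ID, so a researcher
        # can link behaviors and outcomes across files without seeing names.
        digest = hashlib.sha256((salt + rec["name"]).encode()).hexdigest()[:8]
        anon = dict(rec)
        anon.pop("name")
        anon["student_id"] = digest
        out.append(anon)
    return out

roster = [
    {"name": "Alice", "need": "ELL", "score": 85},
    {"name": "Bob", "need": "IEP", "score": 72},
]
anon = pseudonymize(roster)
```

Because the mapping is salted and one-way, the research file carries special-needs categories and outcomes without names; only whoever holds the salt (or a separately stored lookup table) could reconnect IDs to students.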
My understanding is that the MAC project produced hundreds of thousands or millions of bits of information, making it a challenge to analyze those data. Our work on UDL will benefit from developing an analysis plan ahead of time: what data do we especially need to collect, why, and how do we expect to use them?
For example, if we offer a choice of language, and if teachers get to designate which students use a language other than English, we would want to know which teachers took advantage of this feature, for which students, and on which units. We would analyze: do teachers tend to do this for all students in a class whose first language is not English, or only some? We would want to include a survey or interview question for teachers who have used UDL units asking them if this feature was useful.
Will students be allowed to turn the non-English language on and off at will? That would raise more complex data collection and analysis issues; e.g., it would be far more difficult to associate a score, or time spent, or satisfaction on a unit with the language used by the student. Our research questions might be simple: how often do teachers use this feature, for which students, why, and how useful do they believe it is to those students? (A complex research question, surely beyond the scope of this project, would be to test whether students who study a UDL science unit in their native language have better outcomes than those who study it in English if that is not their native language. As usual, random assignment of limited-English proficient students to a condition would strengthen credibility of the findings.)
If teachers assign scaffolding levels, we would want similar information in some type of report: which teachers take advantage of the feature, for which students, and on which units. Later, we might ask them how and why they made those decisions, and how useful they believe the feature was to students. If students can also select different scaffolding levels, we would want to know on which items they do that, and how (e.g., how many times they change to a different level).
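Counting scaffolding-level changes per student could be a simple aggregation over an event log. The sketch below assumes a hypothetical log format (one entry per level change, with student, unit, item, and new level); none of these field names come from the actual portal.

```python
from collections import Counter

# Hypothetical event log: one entry each time a student changes scaffolding level.
events = [
    {"student": "S1", "unit": "friction 34", "item": 3, "new_level": 2},
    {"student": "S1", "unit": "friction 34", "item": 5, "new_level": 1},
    {"student": "S2", "unit": "plants 56", "item": 2, "new_level": 3},
]

# How many times each student changed levels (the "how many times" question).
changes_per_student = Counter(e["student"] for e in events)

# On which items each student made a change (the "on which items" question).
items_changed = sorted({(e["student"], e["item"]) for e in events})
```

If the portal logged each change as one row like this, both report questions reduce to a counter and a set, which keeps the analysis plan simple.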
For coaches and technical help, we would want to know who uses them and at what point in which unit (i.e., on which page), and we would want to be able to aggregate easily (X% of this class, and Y% of that one, used a coach for such-and-such unit, and most often they used it for this or that page). We would want to gather information from teachers and students (via surveys and/or interviews or focus groups) about how useful they found the coaches and technical help.
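The "X% of this class used a coach, most often on page N" aggregation could look like the sketch below. The click-log format, class roster structure, and unit names are all assumptions for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical roster and coach-click log.
class_roster = {"class A": ["S1", "S2", "S3", "S4"], "class B": ["S5", "S6"]}
coach_clicks = [
    {"student": "S1", "class": "class A", "unit": "electricity 56", "page": 4},
    {"student": "S2", "class": "class A", "unit": "electricity 56", "page": 4},
    {"student": "S5", "class": "class B", "unit": "electricity 56", "page": 2},
]

def coach_usage(clicks, roster, unit):
    """Percent of each class that used a coach on `unit`, plus the top page."""
    users = defaultdict(set)   # class -> set of distinct students who clicked
    pages = Counter()          # page -> total clicks
    for c in clicks:
        if c["unit"] == unit:
            users[c["class"]].add(c["student"])
            pages[c["page"]] += 1
    pct = {cls: 100.0 * len(users[cls]) / len(students)
           for cls, students in roster.items()}
    return pct, pages.most_common(1)

pct, top_page = coach_usage(coach_clicks, class_roster, "electricity 56")
# pct: {"class A": 50.0, "class B": 50.0}; most-used page: 4
```

Counting distinct students (a set) rather than raw clicks keeps the class percentage honest when one student clicks a coach repeatedly.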
Technical help may be multi-dimensional, and we want as much detail as feasible. E.g., a student might be able to click on a smart graph to get different types of help.
We should keep this very simple, perhaps collecting nothing more than the total cumulative time a student spends on a unit, from the time they start the pre-test to the time they complete the post-test. Their time using a unit may well accumulate over several school days.
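Summing cumulative time across several school days amounts to adding up session durations; a minimal sketch, assuming the portal logged a start and end timestamp per session (the timestamps and format here are invented):

```python
from datetime import datetime

# Hypothetical session log: (start, end) pairs accumulated over several days,
# spanning from the start of the pre-test to completion of the post-test.
sessions = [
    ("2008-05-12 09:00", "2008-05-12 09:40"),
    ("2008-05-13 10:15", "2008-05-13 10:45"),
    ("2008-05-14 09:05", "2008-05-14 09:35"),
]

fmt = "%Y-%m-%d %H:%M"
total_minutes = sum(
    (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60
    for start, end in sessions
)
# 40 + 30 + 30 = 100 minutes of cumulative time on the unit
```

Storing per-session start/end pairs (rather than one running total) keeps the raw data simple while still letting researchers recompute totals or look at per-day patterns later.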
We might be able to insert some research questions in each unit's wrap-up, such as asking whether the unit was too easy, too hard, or just right; or whether the technology worked.
It will be a challenge to learn what features are useful, not just how often they are used. Asking teachers and students is one feasible approach; analyzing which students used what features and correlating to their needs is another.
Document generated by Confluence on Jan 27, 2014 16:49