Archives for June 2007
Looking for InqScribe work?
If you’re a transcriber who uses InqScribe and are looking for job opportunities, send us an email and we’ll pass your name on to folks looking for transcribers.
Some Research on Showing Evidence Tool Starting to Appear
It looks like some researchers are starting to mine the data in our Showing Evidence Tool (OK, OK it's not 'ours', it's Intel's, but we designed it and developed it and it's our baby).
Issam Abi-El-Mona and Barbara Hug of the University of Illinois, Champaign did a study of student data across grade levels. Here's the paper from ACM: Showing Evidence: Analysis of students' arguments in a range of settings (you'll need a free ACM Web Account to download the PDF).
Given that Intel now has a database of thousands of student projects using Showing Evidence, it's certainly tempting to mine the data to evaluate students' ability to construct arguments. And I'm certainly very happy to see people doing research on the tool, and I look forward to seeing the findings. But...
...In all of my work with technology-rich curricula, the most interesting evidence of student learning is never represented in the tool or the artifacts that students create with the tool. Usually, all the interesting stuff happens in the discussions and arguments between students as they use the tool, and between students and teachers as they work or present their findings. Who knows what was going on in the classroom, how the teacher may have been pushing the students in different directions, what kinds of interactions the students may have been having, etc. The tool really is just a catalyst for all of these interesting interactions.
Setting that aside, there must be SOMETHING we can do with that dataset. One way to approach it would be to identify a constraint, a common baseline for comparison (beyond just the tool itself, or even a classroom). I believe (though I could be wrong) that there are a few pre-fab (Intel-crafted) activities that teachers are using. It could be interesting to compare intra-class and inter-class arguments to see how they differ on the same topic. Another possibility might be to look for similar projects (e.g., funding NASA vs. funding NEH).
The embarrassing thing is I was at this conference and completely missed the paper.