Archives for February 2006

AERA, NARST Programs Available

Advance programs for NARST (pdf) and AERA (searchable website) are out. Have a look.


posted February 28, 2006 by eric | permalink | Add comment

Exploratorium News: Dennis Bartels Named Director

The current president of TERC is named Director of the Exploratorium effective this spring. Sherry says Bartels will be the first Director without a “hard sciences” background.

Culminating an international search, the Board of Directors of the Exploratorium has selected Dr. Dennis M. Bartels, a nationally known science education and policy expert, to be the new director of San Francisco’s acclaimed museum of science, art and human perception – an informal science education institution. Dr. Bartels will be joining the museum on May 1, 2006. Virginia Carollo Rubin has served as Acting Director since September 2005.


posted February 24, 2006 by eric | permalink | Add comment

Yahoo's JavaScript Tools

Yahoo has posted some really useful developer tools for web page interactivity. Their User Interface library is a set of utilities and controls for JavaScript and AJAX development. And their Design Pattern Library summarizes Yahoo’s approach to a number of common challenges in web design. (Some of the patterns seem a bit too obvious, but all in all it’s a great start to documenting common solutions to recurring problems.)


posted February 23, 2006 by eric | permalink | Add comment

Using Cornrows to Learn Fractals

In the tradition of using cultural knowledge as a touchstone for situating and motivating learning, here is a clever project at RPI that uses cornrow design to teach about fractals. A related book, African Fractals: Modern Computing and Indigenous Design, is also available.
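The core idea, if you haven’t seen the RPI tool, is that a braid can be modeled as an iterated transformation: each plait is a copy of the previous one, slightly scaled, rotated, and translated, and repeating that rule produces the self-similar spirals that make the fractal connection. Here is a minimal sketch of that iteration; the parameter values are invented for illustration and are not taken from the RPI software.

```python
import math

def cornrow_plaits(n, scale=0.95, turn_deg=5.0, step=1.0):
    """Iteratively place plaits along a braid: each plait is the previous
    one scaled, rotated, and translated (the cornrow-as-fractal idea)."""
    x, y = 0.0, 0.0      # position of the current plait
    heading = 0.0        # direction of the braid, in radians
    size = 1.0           # size of the current plait
    plaits = []
    for _ in range(n):
        plaits.append((x, y, size))
        x += size * step * math.cos(heading)   # move forward one plait
        y += size * step * math.sin(heading)
        size *= scale                          # shrink a little...
        heading += math.radians(turn_deg)      # ...and turn a little
    return plaits

for px, py, s in cornrow_plaits(10):
    print(f"plait at ({px:.2f}, {py:.2f}), size {s:.2f}")
```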


posted February 23, 2006 by eric | permalink | Add comment

Campfire debuts

The folks at 37signals have released a new web tool called Campfire that provides a group chat solution for businesses. Campfire provides a persistent website where users can create chat rooms, post files, and log transcripts. As with other 37signals tools, it is designed with elegant simplicity, has a pleasing look and feel, and requires only a web browser. I’m eager to test it out.


posted February 22, 2006 by matt | permalink | Add comment

ICLS Registration Open

You can now register online for ICLS 2006, which runs June 27 to July 1 in Bloomington, Indiana. Submissions to the doctoral consortium are due March 1, and early registration ends May 15. If you haven’t been to ICLS, it’s a fun, smallish conference with a very multidisciplinary feel.

ICLS is the official conference of the International Society of the Learning Sciences. The Learning Sciences takes an interdisciplinary approach to the study of learning, cognition, and development in real world contexts. Learning scientists believe that any investigation of teaching and learning must consider context, cognition, and learning architecture, which we treat as inextricably intertwined. All who are interested in the study of learning in context and the design of learning environments are invited to participate in ICLS 2006, where the leading researchers in the field will discuss their work. Consistent with the theme, Making a Difference, we welcome proposals that examine the scalability of learning environments and the impact of Learning Sciences on educational policy and learning theory.


posted February 20, 2006 by eric | permalink | Add comment

Making Decisions: Guts vs Numbers

As I mentioned in a previous post, we (my family) are in the process of considering a move to another city, primarily because my wife is exploring other jobs, and this is an opportunity for us to move closer to family (what with a two-year-old and a second brewing in amniotic soup). But the decision to move is always a toughie, especially when you’ve become entrenched in a city as wonderful as Chicago. (Moving doesn’t mean leaving Inquirium; it just means expanding our sphere of operations from two cities to three.)

This was a perfect job for our own decision-making tool: SeeSaw.

Our SeeSaw question was: Where should we live? We had four solutions:
1. Stay in Chicago
2. Move to Los Angeles
3. Move to San Diego
4. Move to the Bay Area

California is the target because that’s where all of our immediate family are.

It was really helpful to be able to talk through the individual criteria together as we created the list. For example, we started by listing “professional opportunities” as a generic criterion, but in working through the details, we realized that there were really multiple dimensions of professional opportunities: mine vs my wife’s, and current jobs-in-hand vs future job opportunities.

Ranking the criteria let us articulate their relative importance, e.g. being close to family and professional opportunities are equally important, while weather matters less. We had a slight difference of opinion about the importance of weather…if this were only me, I’d probably rate weather higher than the 4:Medium.

The cool thing about doing this with SeeSaw is that we are always confronted with the question of whether our stated preferences actually match our gut feeling. As always happens when using SeeSaw, we found a few surprises. Given the criteria we were looking at, I totally expected “Move to San Diego” to come out on top. But instead “Stay in Chicago” came out on top, with “Move to the Bay Area” coming in dead last.

The Bay Area was no surprise, given the ridiculous housing market, commute issues, and current lack of promising job leads. LA’s lackluster performance was no surprise either.

But taking a closer look at San Diego vs Chicago revealed that the lack of real job opportunities was probably the biggest strike against San Diego. Given that we just got a bite for a job there, if we bump professional opportunities up from Neutral to For (i.e. she gets the job), then San Diego would come out ahead, slightly edging out Chicago.
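For the curious, the kind of trade-off SeeSaw surfaces boils down to a weighted comparison across criteria. The sketch below is not SeeSaw’s actual scoring model, and the weights and ratings are made up rather than our real data; it just shows how a single new piece of information can reorder the options.

```python
# Toy weighted-sum comparison; not SeeSaw's real algorithm or our real data.
RATING = {"For": 1, "Neutral": 0, "Against": -1}

WEIGHTS = {"family": 6, "her job": 6, "my job": 5, "housing": 5, "weather": 4}

OPTIONS = {
    "Stay in Chicago":      {"family": "Against", "her job": "For", "my job": "For",
                             "housing": "For", "weather": "Against"},
    "Move to Los Angeles":  {"family": "For", "her job": "Neutral", "my job": "Against",
                             "housing": "Against", "weather": "For"},
    "Move to San Diego":    {"family": "For", "her job": "Neutral", "my job": "Neutral",
                             "housing": "Against", "weather": "For"},
    "Move to the Bay Area": {"family": "For", "her job": "Against", "my job": "Neutral",
                             "housing": "Against", "weather": "For"},
}

def score(ratings):
    """Sum of criterion weight times rating, over all criteria."""
    return sum(WEIGHTS[c] * RATING[r] for c, r in ratings.items())

for name in sorted(OPTIONS, key=lambda n: -score(OPTIONS[n])):
    print(f"{score(OPTIONS[name]):>4}  {name}")

# One change -- the San Diego job coming through -- flips the ranking:
OPTIONS["Move to San Diego"]["her job"] = "For"
print("San Diego after the job offer:", score(OPTIONS["Move to San Diego"]))
```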

Still, the fact that staying in Chicago ranked so high is making us reflect more carefully on whether we actually want to move.

If you want to try out SeeSaw for yourself, check out our web site: http://www.inquirium.net/seesaw/


posted February 14, 2006 by ben | permalink | Add comment

Showing History in other contexts

One of the things that I really like about the Pandora music service is that, in addition to having a really simple interface, they also display a history of the songs that you played. With so many other streaming music services (Rhapsody comes to mind), what you played is lost as soon as it’s over. (Pandora streams music to you based on “stations” you define. They then use data from the Music Genome Project to find other similar music.)

Browsers have made the idea of having a history relatively commonplace, and it’s hard to imagine doing any kind of surfing without it. But history is still relatively rare in other applications; Photoshop, a few word processing applications, and version control software are the exceptions that come to mind. Mostly, history is used as a means to undo some action.

But in a lot of educational software, especially inquiry software, you need the history to see what you were thinking or trying previously. You don’t necessarily want to undo the action; you just need to see what you did. For example, if you’re setting up a simulation of planetary orbits, it’s important to know what you’ve tried and what worked, because you might have to try a lot of different approaches before you get something that works.

We’re currently trying to build some kind of useful history into our historical GIS map application so that you can revisit maps and queries that you’ve generated previously. This is still very rough, but the idea is to provide a time stamp so that you (and the teacher) can see when you hit certain pages, a “Layer” field showing the active data layer you’re retrieving data from, and a “Selected Data” field showing the data you clicked on.
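To make that concrete, a single history entry really only needs to carry those three pieces of information. The sketch below is hypothetical — the class and field names are mine, not the application’s — and the example values are made up; only the time stamp, layer, and selected data come from the description above.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict

@dataclass
class HistoryEntry:
    """One row in the map history: when, which layer, and what was clicked."""
    layer: str                      # the active data layer being queried
    selected_data: Dict[str, Any]   # whatever the student clicked on
    timestamp: datetime = field(default_factory=datetime.now)

    def summary(self) -> str:
        return f"{self.timestamp:%H:%M:%S}  [{self.layer}]  {self.selected_data}"

# made-up example: a student checks a county's population on one layer
entry = HistoryEntry(layer="Total Population, 1860",
                     selected_data={"county": "Cook", "population": 100000})
print(entry.summary())
```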

One of the interesting design challenges with implementing history is: do you auto-capture everything, or do you let the user decide what to capture? If you automatically capture everything, you run the risk of having too much information and not being able to find what you need. If you let the user decide what to capture, then they may fail to capture important events, either because they forgot (being too caught up in the investigation) or because they didn’t recognize them as important at the time. We used user-triggered capture with the Progress Portfolio because we felt it was important for students to learn to recognize what was valuable and to consciously grab it.

With the historical GIS application, the jury’s still out. I think it’s going to be important to be able to see the complete history of what you’ve selected, especially if you’re grabbing a lot of data points (e.g. checking total population in 3 or 4 counties). You don’t necessarily want to be bogged down with clicking “Save this data” every time.
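The two capture policies are easy to state in code, even if the right answer isn’t obvious. Again, this is just a sketch under my own naming, not the GIS application’s implementation:

```python
class MapHistory:
    """Holds history entries under either capture policy."""

    def __init__(self, auto_capture: bool):
        self.auto_capture = auto_capture   # True: log every query automatically
        self.entries = []

    def on_query(self, entry):
        """Called on every map query; only records in auto-capture mode."""
        if self.auto_capture:
            self.entries.append(entry)     # risk: too much to wade through later

    def save_this(self, entry):
        """Called only when the student explicitly clicks 'Save this data'."""
        self.entries.append(entry)         # risk: the important moment never gets saved
```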


posted February 10, 2006 by ben | permalink | Add comment

Web Page Design circa 2006

Ooh, a great summary of current best practices in web page design. Go look. Lots of examples.

This is where I try to sum up the current state-of-the-art in graphic design for web pages, and identify the distinctive features that make a web page look fresh, appealing and easy to use.

I’ll update this article over time as new features stick out.

Here’s the take-home list of key design features:


posted February 10, 2006 by eric | permalink | Add comment

Intel-based Macs, Classic, and HyperCard

Ever since the Intel-based Macs were announced, there has been concern about whether those new machines would be able to run old, pre-Mac OS X software. Now the first Intel-based iMacs are shipping, and that concern has shifted from theoretical to real.

The problem is that Apple is not supporting the Classic environment on Intel-based Macs. This essentially means, [as Jorg Brown put it](http://apple.slashdot.org/comments.pl?sid=171546&threshold=1&commentsort=0&mode=thread&cid=14292525), that

…no Mac program written prior to 1999 will run - at all - on the new Intel-based Macs. In fact, most 2001 programs won’t either. (By contrast, many 1984 apps do run on today’s machines.)

Well, this isn’t good. I’ve used Macs since 1984, so I have a huge archive of files that I want to be able to continue to access. But in my case it’s an archive, not stuff I’m using every day, so it’s not a big deal to keep an older machine around just to access those files.

But it’s a different story if you have software you rely on every day that was never ported to OS X. For example, HyperCard stacks. HyperCard stacks were unusual in that they contained both data and programming code. HyperCard was also very popular in educational settings because it provided a fairly easy way to design and create custom software. For its time, HyperCard enabled a great deal of innovation within the educational technology community.

Porting a stack from HyperCard to something else wasn’t simply a matter of exporting the data; someone also had to reengineer the stack’s HyperTalk source code by hand. That made the process more expensive and explains why a lot of HyperCard stacks are still in use.

People who had the time and money probably moved away from HyperCard years ago. SuperCard eased the migration to OS X, and MetaCard (later Runtime Revolution) even supported limited porting of HyperCard stacks to Windows and Unix. In fact, Runtime Revolution has been suggested as a migration path for HyperCard users who want a new Intel-based Mac.

In some cases, that may work, but there’s one caveat. Many HyperCard stacks relied on small compiled plugins called XCMDs that handled certain tasks that HyperTalk could not. There were several freely available XCMD libraries, including the Dartmouth XCMDs, Frederic Rinaldi’s XCMDs, a set from Apple Developers, and others. Anyway, the ease with which you could grab these XCMD libraries and integrate them into your own stacks meant that a lot of stacks depended on these XCMDs. And here’s the problem: virtually all of these XCMDs consist of 68k code, not PowerPC. That means that while Runtime Rev may be able to port the bulk of a HyperCard stack and run it on Intel hardware, it will not be able to port the XCMDs, leaving a nice, gaping hole in the middle of the stack.

Some developers will be able to work around this limitation, but again, those folks probably moved away from HyperCard years ago. What’s really needed is a simple, bombproof way for non-developers to run those stacks (and, while I’ve been running on about HyperCard, all those other pre-1999 applications too). Something kind of like, well, the Classic environment.

There are a couple of emulation projects that look very promising for doing this. SheepShaver has the current buzz, as it basically emulates a PowerPC processor (PearPC is another open-source project emulating a PPC). Add a ROM from an older Mac and an older Mac OS (supposedly 9.0.4 is ideal) and you’re set. The installation and setup process is a bit ugly right now, but I’d expect that to improve rapidly now that it’s getting so much attention.

But for HyperCard (and actually for a lot of software that is pre-1994 or so) SheepShaver really isn’t the right solution. That’s because pre-1994 software was written for 68k CPUs, not PowerPCs. When Apple made the transition from 68k to PowerPC, it made sure to include emulation code that let the PowerPC-based Macs run 68k code — albeit slowly.

So if you use SheepShaver to run old 68k code on an Intel-based computer (Mac, Dell, whatever) what you’re doing is emulating a PowerPC chip emulating a 68k chip. You’d better believe it’s going to be slow… supposedly PowerPC-based apps run at about an eighth of the native processor speed, so 68k-based apps are going to be even slower.

That’s why I think Basilisk II is going to turn out to be a much better solution for running ancient Mac apps. Basilisk is a 68k emulator, so we’re only dealing with one level of emulation, not two. Basilisk shares some code with SheepShaver and takes a similar approach. The main difference is that Basilisk targets a 68k CPU running Mac OS 7.5/8.1, where SheepShaver targets a PowerPC running Mac OS 9.0.4.

The only case in which Basilisk won’t work is if the application you need to run is PowerPC-only. But I’d bet that most, if not all, of the software people run under Classic today is either a fat binary (PowerPC code and 68k code in one application) or a 68k-based application. (I’d welcome any counterexamples.) So while it may be counterintuitive to downgrade to the older 68k binary, doing so may actually improve performance.

Hopefully, with Basilisk and SheepShaver, we’ll be able to continue to run legacy Mac applications. In some ways, having Apple drop Classic support is a blessing, because it means these emulators, which run on Apple and non-Apple hardware, are getting more attention. Just remember to grab your old Mac’s ROM before you retire it for good!


posted February 10, 2006 by eric | permalink | View comment

Languishing educational software

Two separate events got me a-ponderin’…

Tom Hoffman at eSchoolNews suggested that Intel release Showing Evidence (which, in case you’re new to this site, we designed and developed) as open source to ensure its survival over the long run. I’m also in the process of putting together a portfolio of our work in anticipation of meeting a client, so I’ve been revisiting our old projects.

…our software is languishing!

We’ve now been in business long enough that some of the software we created no longer runs on recent operating systems (e.g. we have software that works on Mac Classic, but not OS X. Thank goodness Microsoft is so slow.). Other software has barely seen the light of day (e.g. there weren’t enough resources to release and support the software).

Let’s face it, it costs money to keep technology curricula updated and supported. It’s great to have a big pot of money to fund development, but these projects need a plan to be self-sustaining in the long run. As much as I’d like to see these projects succeed, we’re simply not set up to do both the maintenance and support needed to keep such curricula up and running. And even if we wanted to be, our clients rarely have the resources to support such an endeavor. Intel is actually a rare example of an organization large enough to potentially keep a project running.

This seems to imply that there ought to be SOME cost associated with using the materials, rather than giving them away for free. The guys at 37signals mentioned that when they raised their prices from $49 to $99, they actually got some emails THANKING them for the price increase, because the customers felt more assured of the long-term viability of the product. Granted, their customers are businesses with more money to burn.

I wonder whether, for future projects, we should add a clause to our contracts saying that if the software languishes, we (Inquirium) will be given the opportunity to release it.

So let’s say that we do use the community barn-raising approach to keep things running: Is it feasible to create enough of a community around each and every piece of software? Who would make up such a community? Commercial educational developers like us? Researchers with development chops? Dedicated teacher users who plan on using the software every year in their classes? Moonlighting curriculum developers who don’t get enough satisfaction out of their daily work? Who would be able to provide the disk space and bandwidth?


posted February 03, 2006 by ben | permalink | Add comment