Learning from failure

The problem with edtech evangelism is that it assumes the most valuable lessons are learned from other people’s success. This is why our lives fill up with stories of exciting tools that have transformed this, that, or the other thing. Exhausting, really.

Given the importance of failure to innovation, it’s strikingly rare to find blogs, lists, journals, or conferences focused on failure, in any field. “The Ten Most Awful Mistakes in Online Course Design.” “Ten Tools I’ll Never Try Again.” Seriously, I’d find those useful. But we don’t do it this way. We don’t fly keynote speakers around the world to tell us what they did wrong, from the cartwheeling Prezi that nauseated the audience to the webinars doomed by lag, and the drop box assignments that were impossible to return—or the MOOC in which experienced and highly engaged learners left class early and set up an alternate public discussion of the misalignment of course purpose to course environment, for all to see.

But perhaps we should promote these experiences, because failure is how most of us learn. The only way to test whether or not a new tool does what it promises to do, for example, is to try it in a populated course. This doesn’t always produce splendid results, as Curt Bonk and his Blackboard partners are currently finding out. Lisa M. Lane’s original blog post on her decision to quit their experiment is worth reading, but it’s the appreciative discussion in the comments between the MOOC’s course leaders and the unconvinced that’s really gripping.

Could this awkward experience have been predicted?  Should either Blackboard or Bonk have given some thought to the possibility that an all night party with an open bar might attract a bigger crowd than would comfortably fit the venue?  Was the suck-it-and-see communication strategy really the way to go, given the sensitivity and confusion around openness (and Blackboard’s investment in open) right at the moment? Were expectations driven too high by the opportunity to interact directly with such a well-known eLearning pioneer?

It seems that for some, the problems were embedded in the environment itself—the unchanging assumptions behind the design of online forum tools (“very 1999”, as Lisa Lane puts it). This is the tiny, familiar failure that’s difficult to avoid. Most LMS discussion tools pass the test of course planning simply by being there.  Blog? Wiki? Forum? Yup, we’re good.  It’s only when you try to use them with 50, 100, 1000 students or more, that you start to see how awful they are.

And this isn’t a trivial limitation. Just as in the face-to-face classroom, presence is the hinge on which the online gate swings: it’s the simple, human claim that learners make on one another’s time, that enables us all to feel that we’re exchanging ideas with people, not just testing ourselves against prescribed content. If this is important with a small class, imagine how much more vital it is once enrolment cranks up to the thousands. Without it, really, what differentiates a massive online class from a difficult day at the mall?

The comments on Lisa Lane’s blog will ring bell after bell with anyone who has tried to use the communication tools of a standard LMS to engage students in discussion. Somehow, the leading vendors have managed to miss the fact that users want to tailor their online presence, just as they choose how to present their physical appearance. The capacity to craft a personality that works for you is really critical to anyone’s sense of composure in a community of strangers. Users expect to control how their name appears, to use a thumbnail of their choice as their visual avatar, to share images, videos, feeds and links, and to be given some kind of personal site in which they customise, organise and publish what’s meaningful to them—not only to show what they think, but to share information about the public online networks through which they move.

This dominoes into the second critical issue. Anyone trying to engage with an online class needs to keep up with a snowballing volume of input from others. Messages need to bundle intuitively, to thread and scroll and quote properly. Users expect to be able to flag, sort and prioritise, to use powerful search tools, and to send quick notes of appreciation with likes, favourites and reposts. Some will save and print, because that’s the way they like to work; and others will need it all to be readable on a mobile screen half the size of a postcard. Everyone will need to skim for new messages, and some will want push notification. The singular, standout value of learning through online discussion is that it’s a self-transcribing conversation that fosters review and reflection—but none of this works if you can’t find what you’re looking for.

When we undertook an institutional RFP for a new LMS last year, we expected that big companies whose R&D focus must be on the ways in which people use technology and networks to communicate, learn and manage their lives would develop social learning tools that would synchronise with these trends. Any campus LMS has to sit open on a student laptop where the next tab is Facebook (or Tumblr, Twitter and so on).  Students multitask their online learning against a background of rapid, pervasive interaction with friends they genuinely care about; if we’re going to ask them to concentrate instead on staying in the class space, couldn’t LMS designers at least help us out with a social design that’s halfway engaging?

They could—of course they could—so the fact that they don’t is revealing. My guess is that LMS designers who don’t develop their social tools have made a rational calculation that it’s better for them if we handle this bit. The institutional shift to learning platforms, described so well by Phil Hill in a recent post for e-Literate, has really been driven by desperate teachers looking for ways to compensate for awful LMS design. So platforming is a strategy of accommodation that works reasonably well for institutions and exceptionally well for LMS vendors; it’s the admission that L is rapidly uncoupling from M. The inclusion of a few engaging social options like WordPress or Google Apps doesn’t expose the big vendors to much risk, provided everything’s bolted to the platform: in fact, it perpetuates dependency on whoever provides the core tools for managing the platform itself.

But as more institutions embrace the platform model, there’s going to be much more churn in social features, tools and options, because this is a wide open space for edupreneurs. Academics will come under increasing pressure to try new things, pushed by articulate, informed student demand. Negative feedback won’t wait for teacher evaluations—it’ll be on Facebook.  This is why it’s so helpful that Lisa Lane, Curtis Bonk and their colleagues are modelling their failure-to-communicate debate in the open, in the truer sense.

Because there’s a whole world of learning through failure up ahead, and we’re going to appreciate these pointers from our peers that show us both how to give feedback constructively, and how to respond to it openly.

Update:

Michael Feldstein’s fresh post that touches on the distinction between edupreneurs and teacherpreneurs (besides a number of other excellent points) really helped me think through some ways forward. As ever, Instructure come off well—but will they be able to sustain their personal approach as they grow? Time will tell.

7 thoughts on “Learning from failure”

  1. Maybe then in our training sessions we should complement our best practices discussions with “worst practices” ones as well? 🙂

    As a side note, in openness initiatives failure actually *is* celebrated. As Clay Shirky puts it in Failure For Free (chapter 10 of Here Comes Everybody):

    “Open source is a profound threat, not because the open source ecosystem is outsucceeding commercial efforts but because it is outfailing them. Because the open source ecosystem…relies on peer production, the work on those systems can be considerably more experimental, at considerably less cost, than any firm can afford….” (p. 245)

    Maybe, in its recent foray into open source, and its apparent failure in serving Lisa Lane, Blackboard is on to something? 😉


  2. That’s a lovely point, Luke, and thanks so much also for the pointer to ‘Failure for Free’. The possibility that Blackboard’s open source ventures are just a cunning front for its real mission, which is to own the global market in failing in public, is a compelling new way of looking at their rise!
    Seriously, though, I think we should be starting new conversations about “worst practices” and epic fails as part of a much more open-minded training conversation. Somehow we need to encourage those who’ve been working online for a while to tell their worst stories. And we do need to invite more students into this conversation, which I think means we need to figure out how to create the right atmosphere for such a sensitive exchange. Not so easy.


  3. I’m curious to know: have you been inside the course? Yes, the Blackboard discussion forums are old school. I knew that coming in. Yes, it is not as wide open as the change11 or PLENK events. (I recall that at least in PLENK10 some of the discussions were not wide open but inside Moodle, and you had to be registered to participate.) From observing within the course, many people are getting a great deal out of the course materials and discussions. Curt’s live session was worthy of a keynote at a conference, and people around the globe saw it at no cost. If something does not work exactly as planned, is it a failure? I think this closing remark from Fred Haas sums it up in a way that reflects my own reaction to the course so far.
    “There definitely seems to be some serious criticism flying around the blogosphere, which I have joined. Still, in spite of any criticisms I am still fascinated and want to continue.”
    http://haaslearning.wordpress.com/2012/05/05/thoughts-on-the-curtis-bonk-mooc-and-learning-management-systems/


    1. Thank you so much for asking this question, Chris; you’re absolutely right that I should clear up something that’s misleading in the title of this post. I haven’t been inside the course (although I went to have a look because I found everyone’s discussion so interesting, and I couldn’t get in without subscribing, whereas I can read the blog discussions very readily, a point that Vanessa Vaile has also made), and I should be clearer that I’m not commenting either way on the attributes of the course itself.

      It would be fairer to say “Learning from Constructive Criticism”, or “Learning When Something Doesn’t Quite Do What It Says On The Tin”.

      What drew me into this is the way that the course convenors and participants have responded so openly and substantially to the issues that were raised, creating a significant pop-up opportunity for all of us to stop and reflect on a really critical puzzle in the LMS market: why are the social tools so awful? Their openness seems to me to be both professionally generous and courageous, on all fronts: it’s exactly the way to respond. Most of us reading would have had variants of the exact same experiences — an aspect of our course planning that looked much better at the recipe stage than it did when it came out of the oven. Many of us have taken the same remedial steps as some of the commenters, and many of us are wondering why we had to.

      So the real target of my interest in failure is that I think LMS design has failed to keep up with the expectations created by standard social networking environments, leaving many educators awkwardly strung between systems that handle enrolment and assessment, and systems that students actually want to use. And in many tiny instances this triggers a failure by educators to stop and think about how a system will work when a large number of people pile into it.

      To me, this failure starts with the standard rubric-based practices of an RFP—educators need to contribute some tougher standards at this point, or at least a clearer understanding of why the presence of nominally social tools in an LMS may not amount to a hill of beans.

      Thank you so much for the link to Fred Haas’s blog, Chris. Really interesting.


  4. Really good post, Kate. Besides the learning from failure concept, you capture the concept of presence or identity quite well. Many of the feature problems in legacy LMS solutions, or the lack of social tools, stem from system design organised around courses and not around learners.


    1. That’s it in a nutshell, Phil. I hadn’t seen it half so clearly, but yes, it’s the effort to replicate the implied architecture of a course, rather than to understand the everyday practices of a learner. I honestly think this area is wide open for edtech entrepreneurs: anyone who can figure out an engaging system capable of scaling up will have a global army of educators on their side.


      1. For all the talk of disruption, I don’t think we’ve seen “an engaging [learner focused] system capable of scaling up” as yet. Clearly the precursors are all there but the absolute game-changer doesn’t quite feel like it has arrived. Importantly, an understanding of failures is critical in edging towards the elusive disruption.

