Trying out Zeega for an #OpenEd12 Recap

A few days ago a new storytelling/mashup/presentation tool named Zeega came across my RSS reader. It is still in private alpha (not even beta!) but I was intrigued and so submitted a request for an account, and to my pleasant surprise the next day I had an account.

Zeega is slightly more complicated than your standard presentation tool – not a lot more, but it uses the idea of different “sequences” that can branch, so a quick look at some short video tutorials was very helpful for getting going with the software. I can see how one could use this as a straightforward presentation tool quite easily, but they also included a series of examples of other projects people had made to show how this can be much more than a linear presentation tool (I quite liked the one on Geodesic Domes and drew some inspiration from it for my own.)

Another thing that makes Zeega stand out is its media harvesting mechanism. You can link Zeega up with your Dropbox account, but more interesting is the bookmarklet, which lets you add media from flickr, youtube and soundcloud (or indeed any regular media asset) to your “library”; once inserted into a resulting animation, each item includes a reference back to the original (a nice-to-have feature I could see in the future would be to choose only CC licensed materials, and also to allow users to specify how attributions should be made, but for now the current way works great.) Once you’ve gathered materials into your library, it’s a simple thing to drag and drop them on any frame of your show, where they can then act as links, background soundtracks, etc. Zeega also has maps integrated with it, a feature I didn’t explore in the first story I created but which I can see adding a useful element at times.

Zeega is definitely still in alpha, but it is another great example of how far web-based applications have come. It wasn’t that long ago that the same sort of functionality could only be found in a thick desktop client, and one that was no doubt web-unaware. But even in its early stages, zeega is another example of a new breed of mashup storytelling tool that I believe any instructor with a bit of gumption could use to create much more engaging materials – or any student, for that matter. It gets both the authoring and workflow pieces mostly right. Check out the example I created as my test drive, my own recap of #opened12 using sounds and images from all over the web.

Open Textbook Authoring Tools Part 4 – The Rest

Well, we are nearing the end of this series on Open Textbooks, just one or two posts to go. Before we leave off this section on Authoring Tools, though, I wanted to provide some annotated links to a host of others I have discovered in my travels in case they were of use to someone. Some of these exemplify the model of web-based, multi-format output open tools that I have focused on, and could easily have been written up in more detail themselves had I the time, while others are stand-alone tools that, while they have their uses, fall short of my own vision for an authoring platform that breeds openness, sharing and remixes.

Web-based platforms

Pandamian

Had I found Pandamian before today, I would have included it in the writeup on WordPress and Pressbooks as open textbook authoring tools. While I don’t know that it is in fact wordpress-based, functionally it is almost identical to pressbooks. While I couldn’t find a simple way to reorganize chapters, it allows for web-based authoring of books that are exportable with a single click in all of PDF, .epub, and .mobi. The web-based version allows for commenting just like Pressbooks. Also like Pressbooks, it is currently only available as a service (though my hope is that Pressbooks, having been built on wordpress, offers more exit strategies and potentially self-hosting in the future.)

Annotum

Also likely should have been mentioned in the WordPress write-up is Annotum, an open source, wordpress-based solution for scholarly publication. It is focused on scholarly writing specifically, but it is able to output to multiple formats and so is deserving of note here.

Connexions

Connexions is a bit of an odd duck in my opinion, yet it definitely deserves a nod as it has been going for many years now and is one of the few “repositories” to have continued growing and improving over time. There are a number of Open Textbooks that have already been built in Connexions, including some adopted by the Community College Open Textbook Collaborative (e.g. Finite Applied Mathematics.) One of the big advantages is that it is built from the ground up around structured, well-marked-up text. Not only does this allow for export to different formats, it allows for an ecosystem of reuse within the system. As with much OER, it is not clear to what extent people actually remix and recombine existing content into new stuff, even when the system allows for it to be done quite easily, but there does seem to be some evidence of it in Connexions.

Rice University, the home of Connexions, also just announced it was entering into the Open Textbook playing field in an even bigger way with its OpenStax project (great name!), which aims at first to deliver “publisher quality” open textbooks for the 5 highest enrollment courses in the state.

iPad- or eBook-only tools

I hopefully don’t need to explain again my thoughts on targeting the iPad on its own as a platform for open textbooks, or for that matter looking only to eBooks. Sheer madness, regardless of how nice the results look. Still, there are those who will be tempted. For them, I simply offer links to the following three authoring tools. And my best wishes.

  • Demibooks Composer – http://demibooks.com/composer/
  • Redjumper Book Creator – http://www.redjumper.net/bookcreator/
  • Genwi – http://genwi.com/

I should add thanks to the New Kind of Book site, whose two bookmaking roundups provided some invaluable references and turned up a couple of these that were new to me.

Desktop Tools, Readers, Clippers, Transformers and Other Gadgets

Very briefly I’ll mention a number of other tools that are handy for the fledgling open textbook author to know about:

  • sigil – an open source, cross-platform, desktop WYSIWYG authoring tool for ePubs. Very handy for cleaning up some of the results from the other automated approaches I have mentioned. I don’t know that I would advocate authoring solely in sigil, but it is very handy to have.
  • e-cub – another free desktop authoring tool for ePubs. Not as powerful as sigil, but it’s my go-to when other tools have failed – it seems to open anything I throw at it and is otherwise dependable.
  • calibre – in the earlier days, this was one of the only free desktop eBook tools around. It’s still worth installing, but I regularly find it borking on things and usually only turn to it when nothing else has worked.
  • Open Office eBook extension – In a pinch, authors could compose work in Open Office, the open source word processor, export as HTML/Mediawiki or ePub, and print to PDF. It’s not ideal, but for a fast, cheap and easy way to get going, it’s worth knowing about.
  • GrabMyBooks epub creator browser extension – I like the idea of this partly because I am always advocating that people look at incorporating tools that fit into their existing workflow rather than tools that bolt on in addition to what they already do. That’s the main idea behind the Open Educator as DJ and Augmenting OER with Client-side Tools. This plugin lets you grab content as you surf the web and add it to a collection that can then be published as an ePub.
  • ePub reader browser add-on – I love this add-on for Firefox because when I click on ePub links, this renders them right in my browser. Meaning I don’t need to use a special reader if I don’t want to, but also I can check to see if the book is really something I want to read before downloading it, synching it to my eReader, etc.

Additional Reading

Finally, a couple of recent reports and books that you may find useful in thinking about both book authoring and the future of what we call “books.” The first is the just-released JISC Digital Monograph Technical Landscape study, which ultimately is a much more thorough look at many of the issues I have tried to cover over the last 5 posts. If you are really interested in the topic of formats and tools, then you should spend the time to read it. The other is a free early edition (the first three chapters) of an in-process book called Breaking the Page by Peter Meyers, who also maintains the above-mentioned “New Kind of Book” site. If the first three chapters are any indication, the finished book will be worth the wait, as Peter speculates on ways in which technology can improve and change the way we interact with books. The first three chapters deal with Browsing (think new forms of Table of Contents), Searching (think new forms of Indexes) and Navigating (what happens when linear isn’t the only dimension you can arrange things in). It has given me all sorts of ideas on how ebooks will transform in ways that still respect all of the useful things real physical books allow us to do.

Open Textbook Authoring Tools Part 2 – WordPress and Pressbooks

I moved this blog on to the wordpress platform in 2007 (I think.) I built an open learning search portal on wordpress in 2009. I have participated in and helped organize a bunch of different “wordpress in education” events here in BC, and maintain wordpress installations for both BCcampus and etug. So I probably don’t need to tell you, I <3 wordpress.

A few years back, the folks at the Center for History & New Media at George Mason University spearheaded a very cool project to build a new digital humanities tool in one week. The result was the Anthologize plugin for WordPress which allows you to collect together a set of blog posts and publish them in a variety of web, print and eReader friendly formats.

Not long after (maybe even before) I became aware of both Comment Press and its successor digress.it. While neither of these are in and of themselves publishing or packaging tools, in greatly expanding the ways in which readers and editors could comment on a text at the paragraph level, they added to an emerging vision of WordPress as a web platform for authoring multi-format books in a dynamic, networked way.

So when I began last year thinking about platforms that met all my goals for an open textbook platform, I pretty much knew I had these in my back pocket and that with not too much finesse or effort they could serve quite well.

And I still think that. But before we really got underway with our Open Textbook pilot, I kept scanning the horizon to see what other options might have come up since I found these. And boy am I glad I did, because I stumbled on Pressbooks.

Pressbooks

Pressbooks is the work of Hugh McGuire, who also previously founded LibriVox, the biggest site in the world for audio versions of public domain works. Pressbooks is built on top of WordPress, and offers the same simplicity for authoring books that those of us who blog have come to know and love. Actually, it offers a BETTER system – the Pressbooks folks have customized the backend dashboard and interface of wordpress to suit it even better to authoring books specifically (see figure 1.) At first I had thought they had simply taken Anthologize and further customized it, but I recently learned that this was not the case. In addition, they have created a couple of custom post types to accommodate all of the additional book metadata fields that have accrued over the years (see figure 2.)

Output is where Pressbooks really shines. To test it out I created my own book using the same “Intro to PowerPoint” content I tried porting to mediawiki. Again, there was no simple IMS CP to Blog import functionality, but given the fairly small amount of content, it didn’t take much more than an hour to set up the basic pages and copy the content over.

Actually, this point deserves some attention, because even more so than Mediawiki, Pressbooks didn’t like crufty HTML. And when your legacy content is coming via Word-to-HTML via Desire2Learn output, crufty is the order of the day!

But after a few go-rounds to clean it up (and no small effort on Hugh’s part – thanks!!) I had a web-based version of the text, as well as both an ePub and a printable PDF. As in the case of the mediawiki experiment, these results were produced automagically but could similarly be manually massaged after the fact. But more than this, Pressbooks also supports export to the native format of the industry-standard Adobe InDesign application, meaning that you can deliver the content of your book, properly marked up, to a professional designer and save them a ton of hassle. Similarly, Pressbooks supports uploading custom CSS to style ePubs, which means you can style these to your heart’s content (see figure 3 for all export formats.)

Tale of the Tape

So how does this approach fare? Let’s run it through the criteria I outlined in the last post and see:

  • collaborative authoring – whether via multiple authors on a single chapter, or by divvying up the book, this is no problem
  • can be done “out in the open” – absolutely, though one can make it private if one chooses to
  • results in all of a web, print and eBook version – definitely
  • is easy for authors and readers alike to use – I’m maybe biased, but I thought it was dead simple
  • is free/cheap and open/extensible (and produces open standards-based content) – yep, yep and yep (but let’s revisit below)
  • limits the choices upstream of what authors and reusers want to do with the book as little as possible – I’d say the answer was absolutely yes – this does not seem to be a “lock in” game at all.

When I have shown this to a few trusted colleagues, one of the first questions they’ve asked has been “have the components that customize wordpress to make it pressbooks themselves been open sourced?” It’s a fair question and an obvious one in the circles I run in. The answer currently is no. This is being offered as a service, albeit currently a free one. This is slightly troubling, but something that I hope to discuss further with Hugh and team to see what the way forward looks like. That said, given the wide variety of export formats, and my affinity for letting others man the widgets if I can, I absolutely hope and expect there is a way to use this as a service while remaining diligent about exit strategies, flexibility and autonomy.

There is ultimately no one solution that will work for everyone and every scenario when it comes to open textbooks. As I try to describe in my talk on February 7th (slides here, or else feel free to join us online at 1:30pm PDT), it is a question of balancing affordances with what your users need, what you can do, and what you’d love to enable. But for now, Pressbooks has risen VERY quickly to the top of my list of approaches that I think do a good job of balancing all of these and providing a self-service, inexpensive platform to move forward with open textbooks. – SWL

A Day in the life of an “OER Librarian”

OK, so “OER Librarian” is a bit of a stretch – much as I might secretly harbour a desire to be a librarian, I don’t even play one on TV. But recently I was asked to help find some suitable Open Textbook alternatives for a collaborative program in ICT here in BC, and I wanted to reflect on this process and this potential role of “OER Librarian.”

The Request

The initial request was to find suitable open textbook replacements for the “Foundations of Web Development” course and two database courses, “Database Design” and “Database Management.” These are but 3 of 18 courses that make up the program, all of which have both course outlines and learning outcomes well described and existing commercial textbooks in use. Both of these are VERY handy to have as reference when looking for alternatives.

As an aside, one thing I found odd was searching for “textbooks” at all in these areas. We’ll leave aside the whole question of what, in the networked, digital and open age, constitutes a “textbook” anymore – that’s an issue I plan to pick up in my next post. But when it comes to “ICT” and specifically technologies like web development and MySQL, the furthest thing from my mind when I think of learning these is to turn to a “textbook.” The web is literally strewn with good tutorials and references in these areas, ones that aren’t static but live and grow with new releases and learning by their communities. And these were for courses delivered entirely online! And yet…

As my contact explained to me, students were themselves asking for a physical textbook to accompany their course in cases where one didn’t already exist. Fair enough. And in addition, while the instructors were well aware of the reams of materials available for free online and how they could simply point to these, increasingly they were tiring of the ever-present link-rot, finding that each term whole sections of their course would contain broken links due to the seemingly natural decay on the web. Hence – open textbooks!

The Process

I started searching specifically for materials for these 3 courses and quickly realized that I was finding candidates not only for these subjects, but for many of the related courses and topics in the program. This led to my first insight and action: while you may be searching for one specific thing, it would be foolish to simply discard these related quality results. So I expanded the page where I was capturing all of the candidates I’d found to include all of the courses in the ICT Program. It turns out this was a wise thing to do; even though I hadn’t been asked to find replacements for these others, in two cases when the instructors saw how well the free and open candidates fit the course, they felt these would be easy choices. Score one for the good guys!

I am not ashamed to say that one of the first places I turned to was freelearning.ca, the OER search portal I built on top of delicious and Google CSE. The first thing I learned was that it had broken (doh!) The whole delicious move had caused some things to go out of whack. Once fixed, I found a few resources, but even though we’ve tried to constrain the open textbook search to just textbook sources, I admit a fair bit of cruft still gets through.

The next obvious (to me) place to turn was College Open Textbooks. They have a large collection of open textbooks classified by Subject that is up to date and added to regularly, in my experience. This turned up some decent possibilities.

College Open Textbooks is a curated collection, and one of the sources they pull from is Connexions. But a direct search of Connexions didn’t find anything particularly different. Similarly, wikieducator and wikibooks, other sources College Open Textbooks aggregates, didn’t offer a lot more than I had found earlier.

I kept trying a bunch of individual sites – FlatWorld Knowledge, Free Technology Academy, and FLOSS Manuals. In each I found a few good candidate open textbooks. But still no motherlode. I decided to turn to some of the major aggregators/OER portals, the two biggest IMO being OER Commons and MERLOT. I was encouraged to see in MERLOT that “open textbooks” had become a category I could refine a search on, and did find some decent choices. But the specificity of this filter is thwarted by a lack of quality control on what gets classified as such, and by the seeming desire to be as inclusive and comprehensive as possible. OER Commons suffers from a similar fate, and in each case finding duplicate upon duplicate would easily discourage most faculty.

The full set of sources I searched is available here. To that list I would add both straight-ahead Google and Bing searches, which were by and large not very productive – lots of results, very few of which were either textbooks or open.

What I Found

There are 18 courses in the ICT Program (not including the Capstone project, which doesn’t use a text). In around 7 hours of searching I managed to turn up 41 potential candidates for 12 of the courses. Not all of these are explicitly “textbooks”; maybe half are, the other half being courses or manuals that could serve this purpose. The informal feedback I received from my contact at the ICT Program was that in two cases the candidates seemed holus-bolus like good replacements. In two others there were ones that with work might serve as the basis for a new textbook.

So let’s say, for argument’s sake, that this effort results in 4 commercial texts being replaced with free and open alternatives. These courses are delivered by 4 partnering institutions. So maybe, conservatively, 50 students/year x 4 courses? At $100/textbook? That’s $20,000/year. Even if we include a $10,000 one-time cost to transform 2 of these to be more suitable, that’s still a potential $10,000 savings in the first year passed on to these students. It is of course not as simple as that, but it seems very easy to illustrate the value of this exercise and approach.
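For what it’s worth, the back-of-the-envelope arithmetic runs like this (all figures are the rough illustrative estimates from the paragraph above, not actual enrollment data):

```python
# Illustrative estimates only -- not actual enrollment or pricing data
students_per_course = 50   # conservative students/year across the 4 partner institutions
courses_replaced = 4       # commercial texts replaced with open alternatives
textbook_cost = 100        # dollars per commercial textbook

annual_savings = students_per_course * courses_replaced * textbook_cost
one_time_cost = 10_000     # one-time cost to adapt 2 of the candidate texts

print(annual_savings)                  # 20000 per year
print(annual_savings - one_time_cost)  # 10000 net in the first year
```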

What I Learned

  • Google on its own won’t save you – generic Google search was nearly useless for this; using “textbook” as one of your search terms doesn’t particularly help, nor does its advanced search with usage rights particularly guarantee that you will actually be able to reuse the results.
  • Open is as Open Does – if you want free (and open) textbooks about ICT, teach open source platforms and apps. While there was an absolute dearth of open textbooks around proprietary platforms like Microsoft, there were several high quality open textbooks on Linux easily available. This sounds obvious, and clearly there is proprietary software that people still want formal instruction on, but we have to remember that one of the freedoms preserved by free software is the freedom to LEARN.
  • OER Portals can help… – there are a few decent OER portals, but there is definitely no single “one stop shop.” If asked to recommend to faculty only a couple of general portals (e.g. not specific to any one discipline) then I’d likely focus on OER Commons and Merlot as the two best candidates. That said, discipline-specific engines and portals will almost by definition be better at helping you find what you need, and to the extent that one exists in your specific area, you are fortunate.
  • …and yet have some conflicts of their own – in their haste to bulk up their collections, repositories and other search engines have in fact done themselves a disservice, as there is often now a lot of crap in them, or a lot of resources of huge heterogeneity of granularity, usability and quality.
  • The “reusability paradox” is a real thing – and its corollary is also true: the larger the granularity of thing you are trying to find, the less likely you are to find an “exact match” on any of the specific items.
  • It’s all about “the flow” – even if you have some subject matter expertise, if you are not the person ultimately responsible for assembling the curriculum or teaching the course, there will need to be at least one more pass by the people who are, as ultimately they are the ones who will use it. This is not to say that this “OER Librarian” role is useless, far from it, but the ideal for me remains a persistent workflow like I described in the “Open Educator as DJ,” where seeking & collecting open content is not something that happens once a year for a few days but an ongoing part of the open educator’s workflow. Serendipity does not work to a schedule!
  • But until then… – Still, what it is showing me is the possibilities of some hybrids; I can foresee a dynamic approach, supported by any number of systems (a wiki might work well) in which, say, a course description and basic outline is first shared, and various content found at that level by someone with some search expertise, and then both the course units and corresponding searches iterated by instructor/subject matter expert and “oer librarian.” If done in something that allowed for easy “clipping” and republishing of collected work into a new textbook, this iterated approach could go a long way to the creation of a new text that worked at all the levels of granularity it needed to.

Your Favourite Open Textbook Examples?

While I predicted that 2011 would be the “Year of the Open Textbook” (and I don’t think I got that wrong), for me personally it’s looking more like 2012 will be. BCcampus is hoping to help catalyze the production of a number of open textbooks here in BC. While we’re still working on the funding, we’ve created a site to document the work and have been doing research on potential authoring models & platforms (see also the draft of my upcoming talk) as well as existing sources of open textbook content.

Another step that seems obvious to me is to find good examples (regardless of discipline) to be inspired by. Which is where you come in – I would really appreciate links to your favourite examples of open textbooks. Of especially great interest are examples of what I think of as “hybrid” open textbooks, ones that are available in all of web, print and eBook formats. While writing once but reproducing in many forms used to be just a dream, it is increasingly a reality, and one I’d like to see good examples of.

So, what are your favourite examples of Open Textbooks? – SWL

SEO as Enclosure – Another Real World Example

Wikipedia Device, aimed at the elderly

I know in the past people have given Stephen and others lots of grief about their stance on the Non-Commercial clause. And I admit that, while I understood the theoretical possibilities Stephen was concerned about, that commercial entities often seek to obscure or enclose free resources so that even if the original is still literally “open” it becomes effectively lost, I initially wrote that off as edge-case fear mongering.

But over the last few years I have come to see this not as an edge case at all but is actually a real practice that we see emerging over and over, whether it be in various threats to “net neutrality” or SEO practices that effectively bury the free versions of content. This post is just a brief note about yet another example that came up in conversation with a potential partner in government who wants to share openly some training resources aimed at helping immigrants to Canada have their foreign credentials accepted and become members of professional organizations in Canada.

I raised the question of “flavours” of Creative Commons license simply because the current configuration of SOL*R supports the 2.0 Attribution Share-Alike license and I wanted them to realize they had a choice. This gave them some pause, and they then mentioned that actually, one of the challenges faced when communicating with new immigrant populations in general is that there are certain groups (e.g. immigration lawyers and others who “facilitate” the process) who have a strong motive to short circuit official channels so that they can communicate “on behalf” of new immigrant clients (read – “charge them lots of money for things the government actually provides for free.”) Fair play to Google, the top unsponsored hits for “Immigrate to Canada” are indeed government websites, but the first one is a sponsored commercial link, and on that same first page of results are a number of commercial “immigration consulting” services pretty much masquerading as government sites.

All of which is simply to add yet another to what seems to me to be the long and ever-expanding list of examples of ways in which commercial entities, usually through legal if not totally ethical means, obscure what should be free and public resources. This is not make believe or edge case. This is in fact the modus operandi of capital. – SWL

OLNet Fellowship Week 2 – Initial Thoughts on Tracking Downloaded OERs

As I mentioned when I first posted that I was coming to the UK for this fellowship, my main focus is how to generate some data on OER usage after it has been downloaded from a repository. In looking at the issue, it became clear that the primary mechanism to do so is actually the same one used to track content use on sites themselves: a “web bug,” in the same sort of way that many web analytics apps work, but instead of the tracking code being inserted into the repository software/site itself, it needs to be inserted into each piece of content. The trick then becomes:

  • how do we get authors to insert these as part of their regular workflow
  • how do we make sure they are all unique / at what level do they need to be unique
  • how do we easily give the tracking data back to the authors.

My goal was to do all this without really altering the current workflow in SOL*R nor requiring any additional user accounts.

The solution I’ve struck upon (in conversation with folks here at the OU) is to use piwik, an open source analytics package with an extensive API, to do the majority of the work, and to then work on how to insert this into the existing SOL*R workflow. So the scenario looks like this:

1a. Content owners are encouraged (as we do now) to use the BC Commons license generator to insert a license tag into their content. As part of the revised license generator, we insert an additional question – “Do you wish to enable tracking for this resource?”

1b. If they answer yes, the license code is amended with a small HTML comment –

<!-- insert tracking code here -->

1c. The content owner then pastes the license code and tracking placeholder into their content as they normally would. We let them know that the more places they place it into their content, the more detailed the tracking data will be. We also can note that this is *only* for web-based (e.g. html) content.

2. The content owner then uploads the finished product as they normally would.

3a. Each night a script (that I am writing now) runs on the server. It goes through the filesystem, and every time it finds the tracking placeholder:

  • based on the file’s location in the filesystem, it deconstructs the UUID assigned to it in SOL*R
  • uses the UUID to get the resource name from SOL*R through the Equella web services
  • re-constructs the resource home url from its UUID
  • sends both of these to the piwik web service, which in return creates a new tracking site as well as the javascript to insert in the resource
  • finally writes this javascript where the tracking placeholder was.
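A minimal sketch of what that nightly pass might look like, in Python for illustration only – the content-root layout, hostnames, and helper names here are my assumptions, not the actual SOL*R setup (the SitesManager.addSite method is part of piwik’s real HTTP API):

```python
import os
from urllib.parse import urlencode

PLACEHOLDER = "<!-- insert tracking code here -->"

# Hypothetical content root -- the real SOL*R filesystem layout may differ
CONTENT_ROOT = "/data/solr/content"

def uuid_from_path(path, content_root=CONTENT_ROOT):
    """Deconstruct the SOL*R UUID from a resource's location on disk,
    assuming a <content_root>/<uuid>/<version>/... layout."""
    rel = os.path.relpath(path, content_root)
    return rel.split(os.sep)[0]

def piwik_addsite_url(piwik_base, site_name, resource_url, token_auth):
    """Build the piwik SitesManager.addSite API call that registers a new
    tracked 'site' for this resource; the response carries the numeric
    site id used to generate the javascript tracking snippet."""
    query = urlencode({
        "module": "API",
        "method": "SitesManager.addSite",
        "siteName": site_name,
        "urls": resource_url,
        "format": "json",
        "token_auth": token_auth,
    })
    return f"{piwik_base}/index.php?{query}"

def inject_tracking(html, tracking_js):
    """Swap the placeholder comment for the generated tracking snippet."""
    return html.replace(PLACEHOLDER, tracking_js)
```

In the real script, the addSite call would be made over HTTP each time the placeholder is found, the resource name fetched from SOL*R via the Equella web services, and `inject_tracking` applied to every file containing the placeholder.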

4a. Finally, in modifying the SOL*R records, we also include a link to the new tracking results for each record that has it enabled.

4b. For tracking data, the main things we will get are:

  • what are the new servers this content lives on
  • how many times each page of content in the resource (depending on how extensively they have pasted the tracking code) has been viewed, both total and unique views
  • other details about the end users of the content, for instance their location and other client details

I ran a test last week. This resource has a tracking code in it. The “stock” reports for this resource are at http://u.nu/3q66d It should be noted: we are fully able to customize a dashboard that only shows *useful* reports (without all the cruft) as well as potentially incorporate the data from inside Equella on resource views / license acceptances. One of the HUGE benefits of using the SOL*R UUID in the tracking is that it is consistent both inside and outside of SOL*R.

I am pretty happy with how this is working so far; while I have expressed numerous times that I think the repository model is flawed for a host of reasons, to the extent to which it can be improved, this starts to provide content owners (and funders) details on how often resources are being used after they are downloaded, and (much like links and trackbacks in blogs) offer content owners a way to follow up with re-users, to start conversations that are currently absent.

But… I can hear the objections already. Some are easy to deal with: we plan to implement this in such a way that it will not be totally dependent on javascript. Others are much stickier – does this infringe on the idea of “openness”? What level of disclosure is required? (This last especially given that potentially 2nd and 3rd generation re-users will be sending data back to the original server if the license remains intact.)
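On the javascript point, one well-known option is an image beacon wrapped in `<noscript>`, which Piwik supports through its `piwik.php` tracker endpoint. A minimal sketch (the Piwik base URL here is an assumption; the `idsite`/`rec` parameters follow Piwik's standard image-tracker form):

```python
def noscript_beacon(piwik_base, site_id):
    """Return a <noscript> image beacon that records a page view even when
    javascript is disabled, by requesting Piwik's piwik.php tracker endpoint."""
    src = "%s/piwik.php?idsite=%d&rec=1" % (piwik_base, site_id)
    return '<noscript><img src="%s" style="border:0" alt="" /></noscript>' % src
```

The nightly script could append this beacon alongside the regular tracking javascript, so re-users browsing with javascript off are still counted, just with less detail.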

I do want to respect these concerns, but at the same time, I wonder how valid they are. You are reading this content right now, and it has a number of “web bugs” inserted in it to track usage yet is shared under a license that permits reuse. Even if it is seen as a “cost,” it seems like a small one to pay, with a large potential benefit in terms of reinforcing the motivations of people who have shared. But what do you think – setting aside for a second arguments about “what is OER?” and “the content’s not important,” does this seem like a problem to you? Would you be less likely to use content like this if you knew it sent usage data back? Would anonymizing the data (something piwik can easily do) ease your mind about this?
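On the anonymization point, Piwik can mask the trailing bytes of each visitor's IP address before it is ever stored. A sketch of the relevant config fragment – the setting name is from Piwik's AnonymizeIP plugin, so treat it as an assumption to check against your Piwik version:

```ini
; config/config.ini.php -- mask the last two octets of each visitor IP
; before storage (requires the AnonymizeIP plugin to be enabled)
[Tracker]
ip_address_mask_length = 2
```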

OLNet Fellowship – Week 1 Highlights

At the rate it seems to be going, my month here in Milton Keynes will be over in the blink of an eye, but my first week is coming to a close and I wanted to reflect on some of the things I’ve learned and experienced so far.

Community and Open Education

Two examples I came across on my second day here really spoke to me about new ways of thinking about OER/Open Education in relationship to people and communities. The first is the iSpot project managed by Doug Clow, one of my colleagues here in the Institute of Educational Technology where the OLNet team from the OU is housed.

As Doug explained, the site allows people to post photos they’ve taken of local species, and crowdsources their identification. The site has a sophisticated reputation system that rewards participants and also identifies those with formal expertise in different fields, weighing their input accordingly. The OU have partnered with a number of BBC Television nature shows and radio programmes to popularize the site, so they are attracting an audience who then participate out of an existing passion and interest. The genius is to *then* weave OU courses into/around this community site and content, using it both as potential course content and as a conduit for interested informal learners to find formal learning opportunities if they choose, and to interact with and be supported in their informal learning community by discipline experts. When Doug described this to me my jaw dropped; it is so obvious yet really a brilliant turn. Too often in formal higher ed we have had the “build it and they will come” belief about our OER efforts, and when that hasn’t happened we’ve then shifted our focus to “building communities” around our content. But that is so wrongheaded. Communities exist already, and where they don’t, it’s not simply a matter of them forming around content, per se. By leading with a site that helped users scratch an itch they already had, however small (“I keep spotting this bird in my back yard but I don’t know what it is”), and then building tools to support peer engagement and discussion, as well as personal identity and reputation, they’ve set the stage for community to form and share knowledge, and only THEN woven formal offerings in and around this. It’s probably not perfect, but I think it offers strong suggestions as to how institutions can engage civil society in a way that leads to a permeable boundary between existing informal learning communities and formal learning institutions/scholars.

The second example was a bit different yet still inspiring. Another researcher on the OLNet project, Andreia Santos, gave a short talk on an initiative at the Brazilian university Unisul to experiment with ways to attract new learners through a mixture of Open Education, peer support and social networking. If I understood correctly (and I’m not sure I completely did, so I hope Andreia will see this and chime in with a correction or pointer to a longer write up), the university has begun offering access to a block of 10 courses, a mixture of open resources from the OU and themselves, within their own learning environment (so not just ‘content’ but a full VLE experience…). The part that tickled my fancy was that they do so during one of their “breaks” (in their case the Winter break that happens in June/July) and in part market it to friends and families of existing students. This seems like a smart idea: not only are the ties stronger (and so the message much more convincing), but the existing students end up taking some of these courses too, and because of their familiarity with the environment end up becoming a form of peer support. I understand that this year they have introduced a nominal fee but that students can take as many of the courses as they want and get a form of certificate at the end. Like I said, different than iSpot, but still I think a strong example of interacting with community and existing ‘social networks.’

Repositories – some mothers do ‘ave ’em

Another part of my experience so far has been to listen to talks on a few different repository projects that shall remain nameless. The learning here wasn’t particularly new for me, but it did continue to confirm beliefs I’ve long held about the weak points of this approach: that they typically do not tap into or reinforce individual motivations for sharing; that their model of ripping content out of its original context for download goes against the grain of the web (more on this soon, as part of my Fellowship work on “OER Tracking”); and that they are a solution begged by the questions of VLE/LMS silos and of sharing modeled on “publishing,” only half-heartedly committed to sharing. But… the one good thing I guess is that it made me feel slightly better about my own work, that I’m not the only one who’d hit these problems nor had to learn the hard way that content doesn’t build networks that share, people do.

On being at the OU

If I haven’t already made it clear, it is a HUGE honour for me to be a visiting academic with the OU through the OLNet Fellowship program. This institution has been (and still is) a global leader in the field of distance learning and open education, and there is a tangible passion here for the belief that education can radically improve people’s lives for the better. The opportunity to be physically here for a month is even more special to me because on a day to day basis I work from my home office, and while I am surrounded by a global network of peers who I talk with daily, the chance to be surrounded by so many smart people passionate about open learning, as well as to have access to some fantastic services on this lovely campus, is one I will never forget. I’d be remiss if I did not extend a special thanks to Karen Cropper and Janet Dyson for helping me find my way in the first few days and making me feel really at home, and a special thanks to “Liam and the librarians” for broadening my social horizons.

There’s lots more to tell, especially around my specific project of tracking OERs outside of the bounds of the repository (which I think we’ve now got a plausible model for how to do) but I’ll leave that for another post. For now I’ll leave it at this: it is good to be back in the land of great cheese and delicious warm beer, with so many rich opportunities to learn ahead of me.

What is the most “successful” “formal” “OER” project?

Simple question, right – what is the most “successful” “formal” “OER” project? Except it’s not so simple, which is why the scare quotes. I asked the question on twitter and have gotten some interesting answers so far:

I don’t think there is one “right” answer, but I do think it is a useful question to ask; firstly because it asks us to dig into the assumptions behind each of the terms I scare-quoted. By “successful” do you mean: most accessed/viewed? most re-used? increased the profile of the institution the most? provided the best return on investment? improved student learning the most? decreased some of the crises facing the world the most? All of the above? (good luck with that!) And what’s meant by “formal”? Or “OER” for that matter?

I’m not hoping to spark a definitional skirmish – lord knows we’ve all seen enough of those. But I am sincere in wanting examples, however you choose to define the terms. Because from where I’m sitting, the projects that fulfill the criteria of “successful” “formal” “OER” projects are few and far between, yet I remain absolutely personally committed to the causes of education and open sharing. The tension between these two seemingly contradictory statements (plus the fact that I derive my livelihood working on “formal” “OER” projects) should be plain, and seeking some examples is in a way asking for help both in how I’m approaching my work but also where I am choosing to put my efforts in this life. As The Reverend constantly reminds me, “you can’t live wrong rightly,” and I’m feeling pretty tired of struggling with round holes and square pegs, trying to convince people to let go of The Fear. – SWL

Look out Milton Keynes, here I come! – My OLNet Fellowship on tracking OER Reuse

http://olnet.org/

I’m still not 100% clear on whether I can tell anybody about this, but… too late now. Earlier this year I took a bit of a flyer and submitted an application for an OLNet Fellowship, which offered the chance to work with the folks at the renowned Open University in the UK on issues around Open Education. I am not a full-time Academic and don’t have an enormous publication record, but I’d like to think I’ve paid some dues in the trenches working on, and thinking and writing about, Open Education. Apparently they thought so too, because much to my pleasant surprise I was awarded an “Expert Fellowship,” a category seemingly designed to suit odd-balls like myself who work in the lofty heights of Academia but ain’t got no papers 😉

But there’s a point to this post apart from saying “wohoo Scott” (wohoo!) Actually, 2 points. The first is a shout out to colleagues in the UK: I will be in Milton Keynes from June 23 until July 24th. I am not yet clear what the extent of my mobility will be, but I’m certainly hoping that the month offers some opportunities to visit and learn with colleagues in the UK. If you are interested, please do let me know and we’ll try to make it happen.

The second point of the post is to share a bit of what I am going to be working on. As many of you know, I run an “open educational resource” repository (cue loud groan.) In our model, and it seems far from unique, teaching resources aimed primarily at instructors are typically downloaded and reused in some other context. While it is possible to ‘point’ to content hosted in our system, in most cases this is not how it is used.

One of the problems with this model (and sheesh, don’t I wish there were only one) is that the content owners don’t get a good sense of the popularity of their resources and where else they are being used. As a blogger and long time creator of web content that has been reused, I know that getting feedback on how often your stuff is viewed and from where, whether it be in the form of Trackbacks, or services like Google Analytics, can be a big shot in the arm. Sure, it is hopefully not the only thing that motivates you, but it doesn’t hurt.

So my proposal is to research the myriad different ways this kind of usage tracking can be implemented specifically in the context of OER (with a high sensitivity to finding approaches conducive to freedom and not any sense of ‘restriction’), select one and implement it in my real world repository. It is a big fish to fry and I do not think the problem is exclusive to OERs but in general applies to digital media. While I do hope to report on general approaches I also know that having a specific context to work in will be helpful. So expect to hear more (and get more pleas of “help!”) in the coming months.

Anyways, hope I do end up getting to meet some of you conspirators who ’til now have been just URLs or avatars. And I hear the English countryside is lovely that time of year… – SWL