Another Half-baked Idea (in which Scott dangerously treads on librarian toes) – OPACs, OA and Wikipedia

Back in December I had another one of my half-baked ideas that I want to run by the larger community before doing much more on it. One day, while reading a wikipedia article, I thought “This is a well-known topic (I can’t recall which now) – wouldn’t it be great if students could automatically be prompted that there were full, scholarly BOOKS in their library on this topic in addition to this brief wikipedia article?” (Don’t get me wrong, I LOVE wikipedia, and to get the overview there is often nothing better, but in some instances it offers only a brief glimpse of a deep subject, as is an encyclopedia’s proper role.)

Now you all know of my fondness for client-side mashups and augmenting your web experience with OER; this passion was kindled by projects like Jon Udell’s LibraryLookup bookmarklet (annotate Amazon book pages with links to your local library to see if the book is in) and COSL’s OER Recommender (later Folksemantic, a script that annotates pages with links to relevant Open Educational Resources.) What I love about these and similar projects is that they augment your existing workflow and don’t aim at perfection, just at being “good enough.” In all cases, these types of automated annotation services require two things: 1) some “knowledge” about the “subject” they are trying to annotate (in the LibraryLookup case the ISBN in the URL; with Folksemantic – I’ve never been clear!) and 2) a source to query (your local library OPAC/a database of tagged OER resources), hopefully in a well-structured way with an easily parseable response.

So what struck me while looking at the wikipedia page is that (following the Principle of Good Enough) the URLs by and large follow a standard pattern (e.g. http://en.wikipedia.org/wiki/%page_name%) where %page_name% is very often a usable “keyword” for a search of some system (condition #1 above) and that library OPACs contain a metric shitload of curated metadata including both keyword and title fields (close to condition #2 above.)

So the first iteration of the idea was “Wouldn’t it be great if I could write a combined LibraryLookup/Folksonomic script that annotated wikipedia pages with subject-appropriate links to your local library catalog of books on that subject.”
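To make the idea concrete, here’s a minimal sketch in plain JavaScript (the natural language for a bookmarklet or Greasemonkey script) of both conditions: pull the page name out of the standard Wikipedia URL pattern (condition #1) and turn it into a keyword search link for a catalog (condition #2). The catalog base URL here is purely a placeholder – every real OPAC has its own query syntax, so this is an illustration, not a working lookup:

```javascript
// Extract a usable keyword from a Wikipedia article URL, which follows
// the pattern http://en.wikipedia.org/wiki/%page_name%
function wikipediaPageName(url) {
  var match = url.match(/\/wiki\/([^?#]+)/);
  if (!match) return null;
  // Underscores separate words in the URL; decode any percent-escapes too
  return decodeURIComponent(match[1]).replace(/_/g, ' ');
}

// Build a keyword-search link against a (hypothetical) catalog endpoint
function catalogSearchLink(pageUrl, catalogBase) {
  var keyword = wikipediaPageName(pageUrl);
  if (!keyword) return null;
  return catalogBase + '?q=' + encodeURIComponent(keyword);
}
```

As a bookmarklet this would just be `javascript:location.href=catalogSearchLink(location.href, '...')`; as a Greasemonkey script, the same link could instead be injected into the article page itself.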

Now one of the weaknesses of the LibraryLookup approach was the need for a localized version of the script for each OPAC it needed to talk to. This means it doesn’t spread virally as well as it might and is often limited to tech-savvy users. So the next obvious (well, at least to this non-librarian) iteration was

“Wouldn’t it be great if I could write a combined LibraryLookup/Folksonomic script that annotated wikipedia pages with subject-appropriate links to query WorldCat instead”

in the hopes of performing a single query that can then be localized by the user adding their location data in WorldCat. But… as a number of librarian friends who I ran this by pointed out, WorldCat is pay-to-play for libraries, and in BC at least does not have wide coverage at all. Still, a step in the right direction, because further discussion brought me to the last iteration of…

… “Wouldn’t it be great if I could write a combined LibraryLookup/Folksonomic script that annotated wikipedia pages with subject-appropriate links that, instead of pointing to an OPAC/book references, used fully open resources – but instead of OER (which Folksemantic already does), used a service like OAIster with its catalogue of 23 million Open Access articles and theses.”

Liking this idea more and more, I then realized that OAIster had since been incorporated into WorldCat (though I must admit I did not find it very intuitive to figure out how to query *just* OAIster/open access resources).

So this is where I got to, but I was fortunate to talk through the idea with two fantastic colleagues from the library world, Paul Joseph from UBC and Gordon Coleman from BC’s Electronic Library Network. And I am glad I did, because while they didn’t completely squash this idea, they did refer me to a large number of possible solutions and approaches in the library world to look at.

While it’s not “client side” (which for me is not just a nicety but an increasingly critical implementation detail), a small tweak to WorldCat’s keyword search widget embedded in mediawiki/wikipedia looks like it would do the trick.

Paul pointed me towards an existing toolbar, LibX, that is open source, customizable by institution, and extensible – by the looks of it, it could easily be extended to do this (and who knows, maybe it already does).

Paul also reminded me of the HathiTrust as another potential queryable source, one that is growing all the time.

And the discussion also clued me in to the existence of the OpenURL gateway service, which seems very much to solve the issue of localized versions of the LibraryLookup bookmarklet and the like.
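For anyone unfamiliar with OpenURL, the basic mechanism is just a citation serialized as key-encoded-value pairs appended to a resolver’s base URL (the Z39.88-2004 KEV format); the localization problem is exactly that each institution has its own resolver base. A hedged sketch – the resolver URL and citation fields below are made up for illustration:

```javascript
// Build an OpenURL 1.0 (Z39.88-2004) link in KEV format for a journal
// article citation. resolverBase would be your institution's resolver
// (or a gateway that routes to it); the one used in tests is fictional.
function buildOpenURL(resolverBase, cite) {
  var params = {
    'url_ver': 'Z39.88-2004',
    'rft_val_fmt': 'info:ofi/fmt:kev:mtx:journal',
    'rft.genre': 'article',
    'rft.atitle': cite.title,   // article title
    'rft.jtitle': cite.journal, // journal title
    'rft.date': cite.year
  };
  var kev = Object.keys(params)
    .filter(function (k) { return params[k] != null; })
    .map(function (k) { return k + '=' + encodeURIComponent(params[k]); })
    .join('&');
  return resolverBase + '?' + kev;
}
```

The appeal for something like LibraryLookup is obvious: one script can emit a single standard query, and the gateway worries about which local resolver should answer it.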

So… is this worth pursuing? Quite possibly not – it seems like pretty well covered ground by the libraries, as it should be, and it’s the type of idea that if it hasn’t been done, I am MORE than happy for someone else to run with it. I am looking for tractable problems like this to ship code on, but I’m just as happy when these ideas inspire others to make improvements to their existing projects. The important things to me are:

  • approaches which meet the users where they already are (in this case Wikipedia or potentially mediawiki)
  • approaches that don’t let existing mounds of expensive metadata go to waste (heck, might as well use it!)
  • approaches that place personalization aspects on the client side; increasingly we will be surfing a “personalized” web, but approaches that require you to store extensive information *on their servers* in order to get that effect are less desirable; the client is a perfect spot, under the end user’s control (look, I’m not naive)
  • approaches that fit into existing workflow in the “good enough” or 80/20 approach

I think this fits all of the above; if you have other criteria I’d love to hear them (certainly these aren’t MY only ones either.) If you do know where this idea has been implemented, please let me know. And if my unschooled approach to the wonderful world of online library services ticks any librarians off, my sincerest apologies – I’ve always said that “librarians are the goaltenders of our institutions” (I was a defenceman, this is a big compliment) and my only goal is to bridge what feels like a massive divide between educational technologists, librarians and, most importantly, learners. – SWL

Sharing your PLE just got a little bit easier

Big hat tip to Gerry Paille for knowing me well enough to realize that the huge Firefox Add-On nut that I am would be extremely excited to learn about a new feature/service for Firefox called “Collections.”

Basically, the Collections part of the site (and the related Add-On Collector Add-On – ha!) allows people to create collections of add-ons, annotate each of the add-ons with commentary, and share these with other users, who can subscribe to the collections!

So, for instance, if you are interested in some of the key add-ons to help yourself become an Open Educational DJ (ahem) you may want to check out my “Open Educator as DJ” collection which I just published, and better yet, subscribe to it, so that as new tools get added they are pushed to you.

Clearly, the PLE is more than just one tool, more than just the browser, and definitely more than MY use of either of these. But for me, the browser, and the various ways I can pimp it out, are a big component of my workflow as both an educational DJ and network learner, but one which has always been really challenging to share with people. With Firefox Collections, that just got a lot easier. – SWL

MOCSL Tools and focusing on user empowerment

http://cosl.usu.edu/projects/mocsl/

So Day 3 at Open Ed 2007 is underway. DJ RSS just rocked the mic with a free form exposition on openness, free tools, mashups and remix culture.

The conference has been really worthwhile on a number of levels – opening my mind to how the needs for localization might be reconciled with the ongoing divergence of needs between learners from the developed and developing world, and opening my heart to the moral imperative that is OER.

But there was one low point for me, which was learning that funding for COSL’s “Making Open Content Support Learning” toolset had not been renewed in this current round of funding.

Funders will fund what they fund; not much I can do about that. I just wanted to write this post so that I could say publicly what I’ve said privately to a number of folks here, which is that I am really sad to hear about this. I think these tools and this effort were really promising and important because they focus on individual learner empowerment in the networked world of OER resources – something that, as you can probably tell from the short movie on client-side tools I released yesterday, I believe to be an important aspect of improving the chances of OER content efforts to effect learning across the internet as a whole. Funded or not, I plan to continue pointing people to these tools, evangelizing them and their like, and finding my own ways to continue working on these kinds of learner-centric tools. And hopefully this post is understood in the spirit it was written, simply the honest words of an often impolitic blogger. – SWL

My OpenEd Demonstrator – Augmenting OER with Client-Side Tools

http://www.edtechpost.ca/gems/opened.htm

Back in June I submitted a paper proposal to OpenEd 2007. In August, the day before I was to go camping, I heard back that while my proposal hadn’t been accepted, I was invited to participate in a ‘Demonstrator’ session (basically a Poster session set up at the end of Day 2).

I have to admit that I was a bit crushed at first. But very quickly I turned this around; not only did I realize that this was a good decision by the organizers in terms of my proposal’s content and the general tenor of the accepted presentations, I also realized that doing a ‘demonstrator’ in the right way would give me an opportunity to reach a wider audience than doing a straight presentation.

So the result is this 10 minute Flash movie demonstrating a few of the ways learners can augment their experience of OERs (in fact the web in general) using client-side (mostly) tools that they control. This idea of client-side tools (by which I mean extensions, bookmarklets and Greasemonkey scripts) really appeals to me because it starts to shift the locus of control back to the learner and away from centrally provisioned server tools. The point in doing this? Well, in addition to simply raising awareness of these techniques, presenting this specifically at OpenEd is meant as a small challenge to what I see as a past tendency towards monolithic (and not mashup-friendly) content in some of the formal OER projects, and to counter what seems to me like the chauvinistic assumption that people are going to consume your OER courses on your site, in the way you dictate. In my mind, OERs will really start to succeed when they can augment our experience of the learning space that is the entire internet, instead of sitting off to the side and requiring learners to self-identify that they want an OER. As I say in my final slide, “People need their OER even when they are not on an OER site!”
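For the curious, the skeleton of the kind of Greasemonkey script I mean is tiny: a metadata block saying which pages to run on, plus a bit of DOM injection. This is only an illustrative sketch – the resource-search URL is a placeholder and the topic extraction is deliberately crude:

```javascript
// ==UserScript==
// @name     Open resources link for Wikipedia (sketch)
// @include  http://en.wikipedia.org/wiki/*
// ==/UserScript==

// Pure helper, kept separate from the DOM work so it can be tested alone.
// Wikipedia page titles look like "Jazz - Wikipedia, the free encyclopedia"
function topicFromTitle(docTitle) {
  return docTitle.split(' - ')[0];
}

// DOM injection only runs in a browser context (guarded so the helper
// above can also run standalone). The search endpoint is hypothetical.
if (typeof document !== 'undefined') {
  var topic = topicFromTitle(document.title);
  var link = document.createElement('a');
  link.href = 'https://oer-search.example/?q=' + encodeURIComponent(topic);
  link.textContent = 'Find open resources on "' + topic + '"';
  var heading = document.getElementById('firstHeading');
  (heading || document.body).appendChild(link);
}
```

The point of the pattern is that the learner installs and controls it – no server-side changes to Wikipedia or the OER site are needed.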

Was this a successful experiment? Well, in my mind, not totally. I really wanted to show more examples, like WikiProxy, of Greasemonkey scripts that dynamically link to supplemental resources without a lot of semantic underpinnings. You know, loosely connected. But I couldn’t get WikiProxy working properly and ran out of time in my own development efforts (but more on this soon), and as much as I think the new OER Recommender by COSL is a good illustration of this technique, it felt kind of superfluous to demo it where it was actually developed 😉

I also think one can validly challenge the extent to which the techniques I demonstrate actually enhance learning. I think they do, but I can see how others would disagree. So my question to you – what other ‘client side enhancements’ have you found that learners can use independently to augment existing content and improve their learning experience on the web? I am really interested to hear more ideas!

There are other pieces that I didn’t get to show, but if you are interested you can find out more in my del.icio.us links for the presentation. Specifically: how you can perform some of these tricks in other browsers (through things like Turnabout and Creammonkey), how organizations can distribute these tools through mechanisms like custom toolbars and customized portable apps on cheap thumb drives, and how you can turn Greasemonkey scripts into proper extensions. Enjoy! – SWL

Google Scholar & OpenURL Firefox Extension

http://www.ualberta.ca/~pbinkley/gso/

As soon as Google Scholar hit the streets there was quite a stir in the library community and various ponderings about how to tie it into existing library systems, so it was inevitable that someone would develop this – but this quickly!!! A Firefox extension which, when you perform a Google Scholar query, also sends queries to your institution’s OpenURL resolver and, in cases where your university owns a licensed copy of the cited article, creates a link directly to it. Too cool! – SWL