OLNet Fellowship – Week 2 Reflections

So I’m a little behind on this (since I’m now in Week 3) but still wanted to jot a few notes down, as I had some fantastic discussions last week.

Meeting with JORUM – Using DSpace as a Learning Content Repository

One of the highlights last week was a trip to Manchester to meet with Gareth Waller and Laura Shaw of the JORUM project. Back when we started our own repository work in BC, I liaised with folks from JORUM, setting up a few conference calls to share how we were tackling our similar problems, but we'd fallen out of touch. Meeting Jackie Carter last January at ELI helped re-establish contact, and this visit was a chance to renew the connections.

One reason I wanted to meet was that JORUM's model is very similar to our own, so I wanted to see if my ideas on how to track OERs after they've been downloaded from a repository resonated with them, and whether they were already employing some other technique to do so. It turns out the ideas were of interest, and, as I had suspected, these are numbers they are not currently collecting but are eager to have, so that was a useful vote of confidence.

But the other major reason for my visit was to learn more about the work they had done on JORUM Open to turn DSpace into a platform for sharing learning resources. It had been almost 4 years since I last looked at DSpace and concluded that while you could try to jimmy a LOR into it, it wasn't an ideal fit – DSpace "out of the box" really caters to the deposit and archiving of documents and isn't optimized to deal with the specialized (read "arcane") formats of learning content.

Which is why I wanted to see how the JORUM folks were doing it; sure enough, Gareth Waller has coded many new features into the product that make it a much better fit for "learning" content. While I'm not yet certain it provides a simple exit strategy out of our existing commercial platform, the work Gareth has done represents a big step towards that, and I would highly recommend that any other institutions using DSpace specifically for learning content contact him.

Planning for Succession – How to enable what comes after the LMS

The rest of the week was spent with my nose to the grindstone, trying to code up the hooks to incorporate Piwik tracking codes into resources uploaded to SOL*R. As a treat that weekend, I travelled to Cardiff, Wales, my old stomping grounds from my graduate degree days, to spend 3 nights with Martin Weller and his family.

We spent most of the weekend biking around the city and a good deal of time in Llandaff Fields, near Martin’s home. On Sunday afternoon we did a large circuit of the park while Martin’s daughter was at riding lessons, and it was one of those settings and strolls that beg for epic conversation. And this did not disappoint. Two ideas in particular resonated with me.

The first was the notion of "succession" of technology, to borrow a metaphor from ecology. Martin has written on this a number of times before, both in articles and in his book on VLEs. But we were discussing it in the context of the recent acquisition of Wimba and Elluminate by Blackboard (as well as in light of my recent reading of Lanier's "You Are Not a Gadget," in which he discusses the ideas of "technological lock-in" and "sedimentation"), so we put a slightly new spin on it, I think.

Now metaphors can both enable and obscure, but to follow this one for a bit: one can look at the current institutional ed tech landscape as a maturing ecosystem in which variety is diminishing and certain species are becoming dominant. But far from reaching an ultimate stable climax, there are disruptions, the latest and possibly largest being the financial crisis. These disturbances open the opportunity for new species to flourish. But… unless we're suggesting the disturbances are so large as to restart the entire succession process (which some indeed do suggest), we're likely instead to see adaptations to this specific force, often in the form of seeking cheaper options.

So far, a pretty conventional story – mature open source options scoop up some existing customers when the price point gets too high. Except this is where I see a real opportunity for the next-generation approach to creep in (I'm pretty much going to abandon the metaphor here, as I'm no ecologist, that's for sure). Some of us have been enthused by the prospect of Loosely Coupled Gradebooks as a technology that can unseat the dominant, monolithic LMS. But to date there have been only a few convincing examples, and it seems like a bit of a "can't get there from here" problem (made worse by Blackboard's predatory acquisition strategy).

Which is where a bridging strategy comes in – we need to take Moodle (and I guess Sakai, though I am a lot less keen on that prospect) and focus on isolating and improving its gradebook function. As it stands, Moodle already represents a very viable alternative (as the increasing defections to it show), but it doesn't represent a Next Step, nor will adopting it "as-is" move online learning in formal contexts further. Adopting it while developing its gradebook functionality into the hub for a loosely coupled set of tools just might. Maybe this isn't that revelatory, but it became clear to me that a path forward for schools looking to leave not just Blackboard but LMS/VLEs in general goes through Moodle as it is transformed into something else. At least that seems doable to me, and something I hope to discuss with folks in BC as a strategy.

A new Network Literacy – Sharing Well

Throughout our walk, the second recurring theme was how, for scholars and students, bloggers and wiki creators, open source software developers and crowdsourcers of every ilk, there is a real talent to sharing in such a way that it catalyzes further action, be it comments, remixes or code contributions.

Howard Rheingold uses the term "collaboration literacy" for one of the 5 new network literacies he proposes, and barring any other contender I guess it's not a bad term. But it does strike me that there is a real (and teachable) skill here, one that many of us have experienced: either in the "lazyweb" tweet so ill-conceived that it generates no responses at all, or in marvelling with envy at bloggers who manage to generate deep discussion from what seems like the barest of posts, yet one which clearly strikes the right note. "Shareability"? Ugh, right, maybe leave it alone – do we really need another neologism? Still, it does seem worthy of note as a discrete skill that people can increasingly cultivate in our networked, mash-up world.

OLNet Fellowship Week 2 – Initial Thoughts on Tracking Downloaded OERs

As I mentioned when I first posted that I was coming to the UK for this fellowship, my main focus is how to generate some data on OER usage after a resource has been downloaded from a repository. In looking at the issue, it became clear that the primary mechanism for doing so is actually the same one used to track usage on websites themselves: a "web bug," working the same way many web analytics apps do, except that instead of the tracking code being inserted into the repository software/site itself, it needs to be inserted into each piece of content. The trick then becomes:

  • how do we get authors to insert these as part of their regular workflow?
  • how do we make sure they are all unique, and at what level do they need to be unique?
  • how do we easily give the tracking data back to the authors?

My goal was to do all this without really altering the current workflow in SOL*R or requiring any additional user accounts.

The solution I've struck upon (in conversation with folks here at the OU) is to use Piwik, an open source analytics package with an extensive API, to do the majority of the work, and then to figure out how to insert this into the existing SOL*R workflow. So the scenario looks like this:

1a. Content owners are encouraged (as we do now) to use the BC Commons license generator to insert a license tag into their content. As part of the revised license generator, we insert an additional question – “Do you wish to enable tracking for this resource?”

1b. If they answer yes, the license code is amended with a small HTML comment:

<!--insert tracking code here-->

1c. The content owner then pastes the license code and tracking placeholder into their content as they normally would. We let them know that the more places they paste it into their content, the more detailed the tracking data will be. We can also note that this is *only* for web-based (e.g. HTML) content.

2. The content owner then uploads the finished product as they normally would.

3a. Each night a script (which I am writing now) runs on the server. It goes through the filesystem, and every time it finds the tracking placeholder (a rough sketch of this script follows the scenario below):

  • based on the file's location in the filesystem, it derives the UUID assigned to it in SOL*R
  • uses the UUID to get the resource name from SOL*R through the Equella web services
  • reconstructs the resource's home URL from its UUID
  • sends both of these to the Piwik web service, which in return creates a new tracking site as well as the JavaScript to insert in the resource
  • finally, writes this JavaScript where the tracking placeholder was.

4a. Finally, in modifying the SOL*R records, we also include a link to the new tracking results for each record that has it enabled.

4b. The main things we will get from the tracking data are:

  • which new servers this content lives on
  • how many times each page of content in the resource has been viewed (depending on how extensively they have pasted the tracking code), both total and unique views
  • other details about the end users of the content, for instance their location and other client details
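
For concreteness, here is a minimal sketch of what that nightly script (step 3a) might look like, written in Python under a pile of stated assumptions: the content root, the way the file path encodes the SOL*R UUID, the resource URL base, the Piwik install address and token, and the get_resource_name() helper (a stand-in for the actual Equella web-service call) are all illustrative. The Piwik SitesManager.addSite API method is real; the tracking snippet is a simplified version of what Piwik generates, with a noscript image fallback so that tracking isn't wholly dependent on JavaScript.

# Rough sketch only (Python). Assumptions: the filestore path, the UUID-in-path
# layout, the resource URL base, the Piwik address/token and the Equella helper
# are all illustrative; only the Piwik SitesManager.addSite API call is real.
import json
import os
import re
import urllib.parse
import urllib.request

CONTENT_ROOT = "/var/equella/filestore"                  # illustrative
PIWIK_URL = "https://piwik.example.org/"                 # illustrative Piwik install
PIWIK_TOKEN = "xxxxxxxxxxxxxxxx"                         # Piwik API auth token
RESOURCE_BASE = "https://solr.example.ca/items/"         # illustrative SOL*R item URL base
PLACEHOLDER = "<!--insert tracking code here-->"

# Simplified Piwik tracking snippet written where the placeholder was; the
# noscript image fallback keeps tracking from being wholly JavaScript-dependent.
TRACKING_TEMPLATE = """<!-- Piwik -->
<script type="text/javascript" src="{piwik}piwik.js"></script>
<script type="text/javascript">
  try {{
    var tracker = Piwik.getTracker("{piwik}piwik.php", {site_id});
    tracker.trackPageView();
  }} catch (err) {{}}
</script>
<noscript><img src="{piwik}piwik.php?idsite={site_id}&rec=1" style="border:0" alt="" /></noscript>
<!-- End Piwik -->"""


def uuid_from_path(path):
    """Derive the SOL*R UUID from the file's location (the layout is an assumption)."""
    match = re.search(r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", path)
    return match.group(0) if match else None


def get_resource_name(uuid):
    """Stand-in for the Equella web-service call that returns the resource title."""
    return "SOL*R resource " + uuid


def create_piwik_site(name, url):
    """Register a new tracked 'site' in Piwik; assumes the usual {"value": id} JSON reply."""
    query = urllib.parse.urlencode({
        "module": "API", "method": "SitesManager.addSite",
        "siteName": name, "urls": url,
        "format": "json", "token_auth": PIWIK_TOKEN,
    })
    with urllib.request.urlopen(PIWIK_URL + "index.php?" + query) as resp:
        return json.load(resp)["value"]


def main():
    for dirpath, _dirs, files in os.walk(CONTENT_ROOT):
        for filename in files:
            if not filename.endswith((".htm", ".html")):
                continue
            path = os.path.join(dirpath, filename)
            with open(path, encoding="utf-8") as fh:
                html = fh.read()
            uuid = uuid_from_path(path)
            if PLACEHOLDER not in html or uuid is None:
                continue
            # One Piwik "site" per resource, named and addressed by its SOL*R UUID.
            site_id = create_piwik_site(get_resource_name(uuid), RESOURCE_BASE + uuid)
            snippet = TRACKING_TEMPLATE.format(piwik=PIWIK_URL, site_id=site_id)
            with open(path, "w", encoding="utf-8") as fh:
                fh.write(html.replace(PLACEHOLDER, snippet))


if __name__ == "__main__":
    main()

The key design choice is simply that the UUID does double duty: it names the Piwik "site" and it reconstructs the resource's home URL, which is what keeps the numbers consistent inside and outside of SOL*R.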

I ran a test last week. This resource has a tracking code in it. The "stock" reports for this resource are at http://u.nu/3q66d. It should be noted that we are fully able to customize a dashboard that shows only *useful* reports (without all the cruft), as well as potentially incorporate the data from inside Equella on resource views / license acceptances. One of the HUGE benefits of using the SOL*R UUID in the tracking is that it is consistent both inside and outside of SOL*R.
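
On the "customize a dashboard" point: because each resource becomes its own Piwik site, pulling out only the reports we care about is a couple of calls to Piwik's Reporting API. A hedged sketch follows, reusing the illustrative Piwik URL and token from the script above; VisitsSummary.get and Actions.getPageUrls are real Piwik report methods, while the site id shown is just whatever SitesManager.addSite returned for the resource.

# Rough sketch only (Python); PIWIK_URL, PIWIK_TOKEN and the site id are illustrative.
import json
import urllib.parse
import urllib.request

PIWIK_URL = "https://piwik.example.org/"   # same illustrative install as above
PIWIK_TOKEN = "xxxxxxxxxxxxxxxx"


def piwik_report(method, site_id, **extra):
    """Call one Piwik Reporting API method for one resource's site and return the JSON."""
    params = {"module": "API", "method": method, "idSite": site_id,
              "period": "month", "date": "today",
              "format": "json", "token_auth": PIWIK_TOKEN}
    params.update(extra)
    url = PIWIK_URL + "index.php?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


# e.g. for one tracked resource (site id = whatever SitesManager.addSite returned):
visits = piwik_report("VisitsSummary.get", 3)            # totals: visits, unique visitors, etc.
pages = piwik_report("Actions.getPageUrls", 3, flat=1)   # per-page rows for the copies out there
print(visits)
print([row.get("label") for row in pages])

From there it is ordinary glue code to merge these numbers with the Equella-side view and license-acceptance counts into a single page keyed by UUID.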

I am pretty happy with how this is working so far; while I have said numerous times that I think the repository model is flawed for a host of reasons, to the extent that it can be improved, this starts to provide content owners (and funders) with details on how often resources are being used after they are downloaded, and (much like links and trackbacks in blogs) it offers content owners a way to follow up with re-users, to start conversations that are currently absent.

But… I can hear the objections already. Some are easy to deal with: we plan to implement this in such a way that it will not be totally dependent on JavaScript. Others are much stickier – does this infringe on the idea of "openness"? What level of disclosure is required? (This last especially given that 2nd- and 3rd-generation re-users will potentially be sending data back to the original server if the license block remains intact.)

I do want to respect these concerns, but at the same time, I wonder how valid they are. You are reading this content right now, and it has a number of "web bugs" inserted in it to track usage, yet it is shared under a license that permits reuse. Even if tracking is seen as a "cost," it seems like a small one to pay, with a large potential benefit in terms of reinforcing the motivations of people who have shared. But what do you think – setting aside for a second arguments about "what is OER?" and "the content's not important," does this seem like a problem to you? Would you be less likely to use content like this if you knew it sent usage data back? Would anonymizing the data (something Piwik can easily do) ease your mind about this?

Sni.ps Attribution Tool

http://sni.ps/

Sni.ps is a service that has come across my desk a dozen times in the last year, referred on to me by everyone from trusted colleagues to the director of my org to the developer himself (with whom, I should note, I have worked before and consider a friend). I had looked at it briefly before, but the last time someone sent me a prompt I thought it time to take a better look and write it up.

The premise is simple enough – the service provides a bookmarklet that, when clicked, creates an overlay of whatever page you were looking at. This overlay allows you to then select content on that page, for which it generates "embed code" to paste on your own site. Doing so will reproduce the content along with an annotated attribution link back to the original source.

There are a few other small twists: the attribution link uses a microformat that describes it as an "attribution," it looks like RDF data is being created which will associate the cited content with both source and destination, and, if you create an optional account, that account becomes a central storage spot for all of your snipped content.

So the idea seems appealing. And to give credit to the developers, it is quite easy to use, and while there might have been ways to reduce the steps even further, it is reasonably slick. The fact that you can use it without an account is very cool. And it's free.

But like so many things, a large part of whether it gets adopted comes down to whether the effort of using the tool (or any change to your existing workflow that the tool asks you to make) is worth the payoff: whether it makes it easier to accomplish something you were already doing, or makes it easy to accomplish something you weren't already doing but might, if it were made easy enough.

The act of copying the content itself doesn't seem to be made particularly easier, so sni.ps' value proposition seems to lie in providing an easier way to create attributions. Morally this seems to resonate – aside from what seem like a few fringe cases, there doesn't appear to be any real resistance in the open content/open education community to the idea that attribution is a reasonable requirement for reuse. So we seem to be saying we want to attribute original sources, and indeed the practice of the bloggers (and educators) I respect would also seem to support this. Indeed, Alan even coined a neologism for it:

Linktribution

But the word “Attribution” sounds vague.

So I tossed out a new word – Linktribution – attribution via a web link, or offering a "linktribute".

So does sni.ps make it any easier to do? Well, in my limited experience so far, not particularly. Neither the microformat nor the RDF is of any immediate benefit to me that I can see (though I am not opposed to creating them if it's easy enough, which this is). Having a store of "attributed" content – yes, I could see that having some value. Enough to make me change my workflow? Not sure.

I *want* to like sni.ps. But I’m not sure. I’m going to keep trying to use it for a few more weeks, see if it rubs off. The reason I am blogging it, though, is partly so that others can have a look and give me their sense as to its usefulness, and their willingness to adapt their workflow to include something like this. What do you think? As a blogger, would you use this? As someone working on open content or open education, would you evangelize this to your users? – SWL

BC “Learning Content Strategies” meeting

http://tinyurl.com/5tqmz8 

Most of you will know that one of my long-term projects has been to help share online learning resources across BC and beyond. One of the main stumbling blocks to effective sharing has been the diverse (divisive?) environments in which the materials are produced/housed/assembled (at last count there are at least 5 major flavours of LMS in our 26 institutions, plus sundry other systems and non-LMS approaches as well).

I've always held that a top-down "standards" approach isn't the answer; not only is my project not big enough to compel that kind of change, I am thoroughly sceptical that any of the current standards-based approaches would actually work across all of these LMSs. Plus, for any "solution" to be adopted, it needs to reflect local realities and priorities at institutions, and be seen to solve local problems before (or at least as) it solves the problems of sharing outside the institution.

Add to this the fact that I am loath to highlight only solutions that would simply further entrench LMS-based approaches, or that don't take into account the learning we've all been doing about the role of openness, or the new possibilities that social software and other loosely coupled technologies can offer, and we faced a quandary: how to frame a meeting that brought up the issues, highlighted the common pain points, and ALSO presented both LMS-oriented and other approaches to learning content/learning environments?

Thanks to a suggestion from Michelle Lamberson, we decided that framing the day around the conceit of "Learning Content Strategies" was the perfect way to bring all of this together (it seems obvious now, but we struggled for a while to find the right frame).

After a very brief intro from me, we kicked off the day with an hour-long discussion of common problems and challenges around learning content. I facilitated this, getting the discussion going with a set of questions that people answered using iClickers. (As an aside, while I recognize lots of potential problems with clickers, I was frankly blown away by how well the iClicker technology itself worked. Truly simple to use, and it functioned flawlessly.) It felt to me like a good start to highlighting some of the common problems people are facing, and it laid the groundwork for the rest of the day.

The next step was to showcase the work of a few institutions around the province who, in my experience, have developed different approaches to developing content independent of their LMS environments. Katy Chan from UVic, Enid McCauley from Thompson Rivers and Rob Peregoodoff from Vancouver Island University all graciously shared with us some insight into their content development processes and the factors that shaped their choices. The important thing that came out of this for me is that none of these approaches is the "right" one, just the "right" one for their context – they ranged from standalone HTML development, to industrial XML production, to Macromedia Contribute, and each had its strengths but also its possible complications. It's a tradeoff, you see, like any choice. But they certainly gave their peers in the audience lots to think about.

After lunch I trotted out my dog and pony show, highlighting some of our offerings from BCcampus as well as launching the new Free Learning site. I still live in hope that some of these offerings will resonate with our system partners (a boy can dream), and already there seems to be some renewed interest, which is heartening.

The afternoon was given over to a completely different set of approaches to the problem. Like I said, while the vast majority of our institutions use an LMS as their primary online learning platform, that is not the future, or at least not the future I hope for, so we wanted to expose people to some approaches already happening in the province that sit outside the LMS, ones that use loosely coupled technologies or "openness" as an enabler.

First up were Brian Lamb and Novak Rogic from UBC, and I'm pretty sure their demos of moving content to and fro using WordPress and MediaWiki, with their fabulous "JSON includes" and "MediaWiki embeds" techniques, left some jaws on the floor. A hard act to follow indeed, but Grant Potter from UNBC did a great job, showing off their own work with blogs and wikis for shared and distributed content development.

Finally, since all the presentations to that point had been from a somewhat "institutional" perspective, I thought it important to get an instructor up there to show what a single person can do with the current technologies, and who better to do so than Richard Smith from SFU. Worried though he claimed to be about following @brlamb and co. on stage, he needn't have been – his session was a blast, showing off many web 2.0 tools that he uses with his students. I think some of the biggest value from that session came from challenging the notion of the hand-held instructor, the assumption that media must have high production values to be useful, and the idea that this tech is just for "distance" learners. Richard basically made the case that he is able to offer more than 100% of the seats in his class by always having remote and archived materials for the students. I'm pretty sure this turned more than a few heads.

In the end, my nicely laid plans for orderly roundtable discussions were thrown out the window, and I tried as best I could to facilitate a whole-room discussion on the fly. I think it went pretty well; we tore through many of the real challenges people face, from single sign-on to copyright, offering some new ways to think about them and identifying what I hope are some things we can keep working on together as a province.

In all honesty, this meeting went as well as, even better than, I had hoped. My goal was not to propose a single solution (as I do not believe there is just one) but to bring the problems to light, to get people to acknowledge they exist, and to give them a chance to see some different ways to deal with them and talk amongst themselves. My experience with this group, and with the ed tech professionals in BC in general, is that if you give them a chance to talk and share, you shouldn't be surprised at the number of collaborations and shared solutions that emerge. I have great hope that this is just the start of the conversation and of renewed efforts. – SWL

BCcampus OER site – Free Learning

http://freelearning.bccampus.ca/

If you read ed tech blogs, especially the ones I read, then conversations about “open content” and “open education” feel like they have been going on forever. Indeed, at the Open Education conference this year, we celebrated 10 years of Open Education, so it’s been at least that long.

But my experience travelling around my own province for the last few years is that OER is still not a widely publicized phenomenon, and that faculty and ed tech support staff are still living with "scarcity mentalities" when it comes to the availability of free and open educational resources.

So as one small step to address this, we built this new site, Free Learning. There are many other good OER portals out there. If faculty and students were already using these, then we wouldn’t have a problem. But, in my experience, they are not, and as someone who works for the Province of BC, I have a hard time justifying marketing budgets for sites like those. So we built this one, also to give more play to locally developed resources that are Fully Open.

But in building this, I did not want to create a monster we would then have to maintain forevermore. I wanted something that was simple to use and provided straightforward value to end users, but was also simple (and free) for us to maintain. Thus we built the site in WordPress. Using the Exec-PHP plugin allowed us to include some additional PHP web service calls to the SOL*R repository to display its Creative Commons resources in a tag cloud, something that system does not do natively.

I am especially proud of the OER and Open Textbook search pages. These provide a tag cloud of sites stored in Delicious, and then allow users to perform a constrained Google search over just those sites. You are guaranteed that the sites you search are explicitly "open educational resources" from high-quality, well-known producers. Adding new sites to the list, to the tag cloud and to the Google Co-op search engine is as easy as tagging them in Delicious. Since I already do this… it means no extra work. The site ticks along simply because I am online and find new OER sites all the time.
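
For anyone curious about the plumbing: the live pages do this with a few lines of PHP (via Exec-PHP) against the Delicious tags, but the idea is simple enough to sketch generically. Here is a rough, hypothetical Python version that takes a list of tagged bookmarks (however you happen to export them) and emits a weighted tag cloud as HTML; the data shape, the /tag/ link target and the sizing rule are all illustrative rather than what the site actually uses, and the constrained search itself is just a Google custom search engine whose site list is fed from the same bookmarks.

# Rough, hypothetical sketch (Python): build a weighted tag cloud from tagged bookmarks.
import collections
import html

# Illustrative data; on the live site the equivalent list comes from Delicious tags.
bookmarks = [
    {"url": "http://www.oercommons.org/", "tags": ["oer", "portal"]},
    {"url": "http://ocw.mit.edu/", "tags": ["oer", "opencourseware"]},
    {"url": "http://en.wikibooks.org/", "tags": ["opentextbook"]},
]


def tag_cloud(bookmarks, min_px=12, max_px=32):
    """Return an HTML snippet that sizes each tag by how many bookmarks carry it."""
    counts = collections.Counter(tag for b in bookmarks for tag in b["tags"])
    top = max(counts.values())
    links = []
    for tag, n in sorted(counts.items()):
        size = min_px + (max_px - min_px) * n // top       # crude linear weighting
        links.append('<a href="/tag/%s" style="font-size:%dpx">%s</a>'
                     % (html.escape(tag), size, html.escape(tag)))
    return " ".join(links)


print(tag_cloud(bookmarks))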

Total time on the project (including wonderful work by my colleagues Victor Chen and Eric Deis) was maybe a week. The bigger job now is getting it known and used around the province. I demoed it to folks from around BC at the "Learning Content Strategies" meeting we held last Friday. Hopefully that is the start. We also mentioned that this site could itself be a service for them – using WPMU, we can easily spawn another version of this site that responds to their own domain name, with their own branding, yet still uses the same background engine (meaning their effort is almost none, if that's what they want). The cost is close to zero, and for very little trouble they get a custom-branded OER portal they can market to their own faculty. See – I DON'T CARE if you use THIS site or SOME OTHER SITE. I just care that people ACCESS OPEN EDUCATIONAL RESOURCES. And if re-branding this makes it easier for them to sell it to the people they support, great. I'll happily provide you a version, or give you any of the code. That is what building it on top of open source technologies (and freely available services) allows us to do.

Please have at ‘er, let me know if it is useful, or any criticisms or complaints you might have. After all, we aim to please. – SWL

Making the case for “Fully Open” Content

I've asked twitterites a few times but haven't got much of a reply yet, so I'm hoping readers have a reference or two to throw my way. Here's the question – I work on a project that helps share educational resources. We currently support two licenses: a Creative Commons license and a regional consortial license called the "BC Commons," which facilitates sharing amongst the public post-secondary institutions in BC. Obviously this latter is not a "fully open" license, as it does limit who can see and reuse the content. We've always seen it, I think, as an interim step, a way to get people into the habit of sharing their content in a "safe" way (and one whose immediate benefits the funders – the BC government and taxpayers – could be convinced of).

Increasingly we are looking to increase the use of "fully open" licenses like Creative Commons, but in order to take this step we need to make the case to funders (and ultimately to the content owners) as to why publishing under a fully open license is a better idea for them, for the funders, and ultimately for taxpayers.

So, I am looking for as many good references as I can find to help make the case. I wish it were enough to simply point people at David Wiley's BCNet talk from 2007 [audio here | video here] (heck, it was given here in BC), because if you ask me, it's a slam dunk!

Unfortunately, I need more, especially actual studies of the benefits or effects of sharing in a fully open way (and especially where a group moved from a more closed to more open model of sharing). Anything that can support or illustrate these kinds of arguments:

  • making resources fully open increases the number of accesses (and reuse) of resources, both within and outside of the original constituency
  • resources that are made fully open will have more improvements made to them than resources that aren't, and thus end up as higher quality resources at no extra cost
  • making resources fully open can provide additional returns for the organizations that do so in the form of increased brand recognition, increased student enrollments, better prepared existing students, etc.
  • making resources fully open leads to increased opportunities for partnership
  • making resources fully open does not substantially impact revenues for the content owner or institution (and indeed may increase them)

Anything is helpful, and I assume there are others trying to make this case in their own jurisdictions. Do you know of any studies we can cite to substantiate the above propositions? Or other propositions we should be basing the case on? – SWL

New Round of BC’s Online Program Development Fund

http://www.bccampus.ca/EducatorServices/CourseDevelopment/OPDF/CallForProposals.htm

So while this may be of interest mostly to local readers, I thought I’d post on it because I think there’s a few things we are doing in this round that may be of wider interest.

This is the 5th round of BC's Online Program Development Fund (OPDF), a province-wide fund that BCcampus (my employer) administers on behalf of the provincial Ministry of Advanced Education.

This year's $750K call is notable, I think, first off for its inclusion of "Co-created Content" as one of the funding categories. This is an effort to acknowledge this phenomenon and support the co-creation of learning resources by students and faculty, under a license that offers these resources for successive groups of students to build on.

The second thing possibly of more general interest is a new question that asks proponents to describe their strategy for seeking out existing freely reusable learning resources that could be leveraged in their project. This is an effort to promote one of the values underlying the fund: that good, free content should be reused where appropriate. The call does not dictate that existing content must be reused, but instead simply asks proponents what efforts they have made in this direction. It also does not stipulate where this content might come from – sure, we'd love people to look in SOL*R for suitable reusable content, but we hope they'll bring in pieces from the thousands of other places you can find free learning resources online.

Finally, another small innovation in the call is around how to promote interoperability practices. Like it or not, the majority of the content that's been produced through past funds has been developed in one of the course management systems supported in our province (WebCT 4, 6 and Vista, Blackboard, Desire2Learn and Moodle, plus a few home-grown ones, are the current crop). While it is seductive to think one could simply specify a "standard" for content, this is problematic for me because a) it would be a top-down approach that would likely not reflect actual practices in the province, and b) it almost certainly wouldn't "just work" anyway, because of the uneven support across these CMSs for even basic specs like Content Packaging. Instead, this call is an attempt to get people to at least factor the issue into their planning and describe how they plan to address it. From my perspective there is not ONE way to get content to work across these systems, nor does it have to be in any of these systems at all. What it does need to be is as useful as possible to other faculty in the province (and ideally outside it too, but the fund's mandate is specifically to foster content development in the province), regardless of the choices they make on their own, and the call simply asks people to describe their strategy for achieving this.

Blogging about “official” work stuff always makes me uncomfortable – not only have I been known to cock up before, it’s not an “official” part of my job. As is always the case, the words here represent my personal views and do not necessarily reflect those of my employer. If you want to know more about the OPDF, then read the call directly, don’t just take my word on it! – SWL

Have you told your faculty about the Creative Commons?

I run a repository service in B.C. (god, why does that always feel like the start of a stereotypical A.A. confessional, “My name is Scott and I am a recovering Learning Object Repository manager…”) We currently support sharing materials under two different licenses, either the Internet-wide Creative Commons (specifically the Attribution-Share Alike 2.5 Canada flavour) or a BC-specific consortial license called the BC Commons.

Part of my job is to take the dog and pony show around to institutions, so increasingly I am in front of faculty from across the province presenting on this. Typically, to introduce the idea of these licenses, I start with the Creative Commons, because given its massive adoption, clearly everyone will have heard of it, right?

WRONG!!! In well over a dozen presentations recently, I have NEVER had more than a 35% recognition rate for the Creative Commons (and that's including librarian conferences!), and sometimes as few as 1 in 20 will have heard of it.

I know I've gone off on this before in cogdog's comment area, but this is still just staggering to me. And I don't really mean that as a critique of the faculty themselves, though neither do I want to praise inattentiveness. But seriously, we, and by that I mean both those of us supporting faculty in general and those working on "openness and sharing," need to do a better job of communicating basic things like the very existence of the Creative Commons. It shocks me to have to write that in 2007, but that's my reality.

How about you – what’s the awareness level of Creative Commons in your organization? Any ideas on simple (free, easy) ways of increasing this awareness? Or is “mass retirement” the solution to your information literacy woes? Love to hear your ideas or stories to the contrary. – SWL

XERTE – Free Visual Editor for SCORM compliant Flash Learning Objects

http://www.nottingham.ac.uk/~cczjrt/Editor/

Wow, I feel really torn about posting about this at all. When I stumbled across this today I was quite excited; while the promise of content interoperability has been there for quite a while now, the availability of easy to use tools for producing such content outside of the CMS delivery environments has been scarce. So any time I see a tool like this I am anxious to check it out.

UBC Arts Flash-Based Learning Tools available for download

http://www.learningtools.arts.ubc.ca/index.htm

must…get…back…to…work…just…one…more…post…

Like I said, "affable tools for rich media manipulation" – a few years back I wrote about the availability of some Flash-based authoring tools from the UBC Arts Computing group. Since then, they have created many more; in addition to the original timeline tool, they've developed a multimedia learning object authoring tool, a vocabulary memorization platform, a language pronunciation tool and a very cool character stroke recorder for Asian characters.

In the past these had all been freely available, but only in a version hosted on the UBC server. Now all of these tools are available for free download so you can install them on your own server and offer them to faculty for use in your own environment. I am also looking forward to working with these guys to integrate these tools with SOL*R and to see them work with other environments. – SWL