Tag Archives: evaluation

Effects of Information Distributions Strategies on Student Performance in a CMS


This is one of those papers where I find myself thinking “freakin’ amazing, I can’t believe it” (yes, I really think like that) but by the end I’ve been reduced to, “ok, but a sample of 50 students? And all of them graduate students of education from 3 courses?” I’m not saying that invalidates the results, and the paper itself actually seems well written. But if you do buy into its arguments, then this SHOULD be sending shockwaves (at least shivers) through ed tech departments (and the people who fund them) across the world. Why? Because it throws into serious doubt the value of course management systems when used (predominantly, as other studies, like Morgan’s, have shown) as really expensive web filing or content management systems in support of face to face courses. This doesn’t necessarily sound the death knell for CMS; as the study concludes, instead one could draw the conclusion that if you want to see positive effects on pedagogy by using a CMS then use them, well, pedagogically, not as a glorified filing cabinet. But still, it does start to put to the test the conventional wisdom that simply giving people access to reading materials ahead of time will inevitably increase their learning. (First seen in Distance Educator.) – SWL

U of T's CMS Selection Consultation Process


Early this year the University of Toronto issued an RFI to select an organization-wide CMS. This site is part of the public documentation of the process. The results of the faculty and student surveys are of particular interest to me – in some places there seems to be a slight disconnect (say, between the faculty's and students' perceptions of the need for quiz and test support), while in others I feel rather vindicated by the results (in particular, the overwhelmingly lackluster demand for PDA and mobile device access to the CMS.)

I have so far not been able to track down anything public on the results of their RFI process, but a news item posted today on the Sakai site, which states "Our intended long-range goal is to use Sakai as the educational platform for its more than 65,000 students and 6,000 faculty members. A pilot group of units (including FIS) have committed themselves to adopting it immediately and demonstrating its long-term viability in the U of T context," as well as the nomination of Jutta Treviranus to the Sakai Board of Directors, seems like a strong indication of what the results might be. Expect more of these types of competitions in the next year as people are faced with license renewals and the need for large-scale change management processes to facilitate product "upgrades." – SWL

SUNY Learning Network's Next-Generation Technology Strategy Recommendations


Over on e-Literate, Michael Feldstein shares this link as well as some back story to the above document, a report which lays out the goals, principles, and key functional requirements for a next-generation learning environment for the SUNY Learning Network. It's well worth a look, and the principles it holds up are laudable. I was especially pleased by the use they made of the Edutools CMS comparative framework. They seem to have taken and used it much as it is intended, as a factual and non-evaluative description of current CMS functionality (and not as the prescriptive or evaluative document some folks have on occasion misunderstood it as). – SWL

Updated "Framework for the Pedagogical Evaluation of eLearning Environments"


Presented as an 'update' to an earlier 1999 report of the same title, this is actually both an update and a re-working, and in my opinion greatly improved.

After an overview of the current (2003) state of affairs in VLE adoption in the UK (interesting in its own right), the authors go on to explicate their framework. They base it on two theoretical models of teaching and learning – Stafford Beer's Viable System Model (coming from a cybernetic perspective) and Diana Laurillard's conversational framework. The explication is a bit of a slog but worth the read, and critical if you were going to buy into their framework.

They then go on to establish a set of evaluative questions built around the structural or recursive levels of "The Module," "The Learner" and "The Programme." Finally they look at a number of current systems in the light of this framework, including WebCT Vista, Blackboard Academic Suite, Granada Learnwise, FirstClass, LAMS, COSE and Moodle. These last three are particularly interesting as all have been heralded for the ways in which they challenge conventional VLE/CMS models. As a credit to the report, if not the framework, it manages to recognize the innovations in these systems and the value they bring without forsaking the important developments in dealing with enterprise-level problems that the larger commercial CMS have been focusing on.

Finally, they sum up their findings and point to some of the key developments in VLEs since their 1999 report, including: increased programme level support, a greater level of flexibility, more thought given to supporting pedagogical innovation, a greater variety of student tools, more “Open systems” and some improvement in accessibility. All of which seems about right. – SWL

CMS Evaluation report for the Swiss Virtual Campus


Summary report of an evaluation of six CMS platforms (a strangish mix of well-known CMS, European ones and very training-focused Learning Management Systems) performed for the Swiss Virtual Campus in spring 2003. They looked at Blackboard, Clix, IBT-Server, Qualilearning/Luvit, Globalteach, and WebCT Vista. None of them seemed to match exactly what they were looking for, but the recommendation seems to have been to provisionally go with Vista until something better came along. The full evaluation site, along with detailed reports on each of the products, is also available. – SWL

LMS Selection Site from Simon Fraser University


Site to support the selection of a new CMS that is still ongoing at SFU. If you dig around there's some generally useful material – these results of a technical comparison between WebCT and Desire2Learn by B.C.'s Ministry of Education (though I'm not sure how they got it), as well as this latest progress report that contains a lot of interesting anecdotal feedback gathered from various stakeholder consultation sessions on what they are actually looking for (short answer – 'better' systems that are 'easier' to use, look nicer, and are infinitely customizable ;-)

Guide to Institutional Repository Software


Really helpful report from George Soros’ Open Society Institute that looks at the currently available open source institutional repository systems that comply with the Open Archives Initiative metadata harvesting protocols. (Note these aren’t ‘learning object’ repositories per se – these are typically more focused on archiving scholarly publishing and other institutional materials, though through things like z39.50 and the IMS digital repositories interoperability spec it may end up that your searches go against these repositories and more.)

You’ll have seen this already over at OLDaily (you do read Stephen already, don’t you?) – this post was more a personal note, as this was one of those ‘just in time’ nuggets that float through the blogosphere and land on your desktop seconds before you knew you needed them. Hurray! – SWL

Assessment of 5 leading open source CMS from Commonwealth of Learning


Not sure how this one got past me (must have been the summer doldrums), but back in June this report, commissioned by the Commonwealth of Learning, evaluating the field of current open source course management systems was released. It provides a fairly extensive analysis of the 5 shortlisted products (Moodle, LON-CAPA, ILIAS, dotLRN and ATutor) and ends up recommending ATutor for adoption, with ILIAS coming in second.

It’s an interesting recommendation. One could contrast it with the recent piece from Rob Reynolds at xplana that looked at some of the same products, but with a very different evaluation framework. It’s also a bit unfortunate that it wasn’t able to assess Stanford’s Coursework, which was only then being released. The other small fault I would find with it is that it takes a naive view of product selection based on feature assessment: it simply totals the various assessments, thus treating all features as being of equal weight, though it does separate out systemic issues from functional features. And if I were the Moodle guy, I think I might cry foul over a few of their ‘subjective assessments.’ Still, a very worthwhile resource and reference. – SWL
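The equal-weighting problem is easy to see with a toy example. The scores and feature names below are entirely made up for illustration (they are not the report's actual data): two products that tie, or nearly tie, on a raw total can rank quite differently once an institution applies its own weights.

```python
# Hypothetical feature ratings (0-3) for two products -- invented data,
# not taken from the Commonwealth of Learning report.
scores = {
    "ProductA": {"quizzing": 3, "standards": 3, "community": 2},
    "ProductB": {"quizzing": 2, "standards": 2, "community": 3},
}

# Institution-specific importance weights (also hypothetical): here an
# active user community matters three times as much as the other features.
weights = {"quizzing": 1.0, "standards": 1.0, "community": 3.0}

def unweighted_total(product):
    """Naive total: every feature counts equally."""
    return sum(scores[product].values())

def weighted_total(product):
    """Weighted total: each feature scaled by its importance."""
    return sum(weights[f] * v for f, v in scores[product].items())

for p in scores:
    print(p, unweighted_total(p), weighted_total(p))
# ProductA wins the naive total (8 vs 7), but ProductB wins once
# the community weight is applied (13 vs 12).
```

The point isn't that one weighting is right, but that an unweighted sum silently embeds the assumption that every feature matters equally, which is rarely true for any given institution.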