Automagic Map of #opened12 Attendees

As registration filled up for OpenEd 2012, I began to wonder where people were coming from, and what kind of representation we were getting across the globe.

Step 1 – Geocoding the Attending Organizations

When people registered, we did not collect physical address info, just names, email addresses and organization names. Still, I thought, that has to be enough, right?

I knew that using a query like http://maps.google.com/maps/geo?output=csv&q=Vancouver would return CSV values for that location, yet I couldn’t think of a simple way to turn an entire list of organization names into a map (this was one of those “I’m bored in this meeting and want to do something in 5 minutes” exercises.)

Enter the network to the rescue, mainly in the form of Tony Hirst (who I knew would know the answer) and Alec Couros. Tony pointed me to a post he had written earlier this year highlighting the Google Docs function =ImportData. By concatenating the Google Maps API query string with each placename/organization name I already had, it really was simple to get all of the organizations geocoded and then place them on a map.
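
To make that concrete – assuming, hypothetically, that the organization names sit in column A – a formula like this in each row pulls back the geocode for the name beside it (the SUBSTITUTE is just a crude stand-in for proper URL encoding of spaces):

    =ImportData("http://maps.google.com/maps/geo?output=csv&q="&SUBSTITUTE(A2," ","+"))

Each call returns a single line of CSV – status code, accuracy, latitude and longitude, if memory serves – which the spreadsheet splits neatly across adjacent cells.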

Two caveats

  1. Google Spreadsheets limits the use of the =ImportData function to 50 calls per workbook (not per sheet), so with around 170 distinct names to geocode, there was a bit of futzing around to split these across different workbooks, run the function, then copy/paste the resulting geocodes into a master sheet.
  2. Automatic geocoding based on organization name is not an exact science – using the names exactly as entered in the registration forms did produce 140 good addresses out of 170, but the rest either returned no results or, in a few cases, bad ones – BCcampus, the organization I work for, was placed somewhere in the Strait of Taiwan! Still, that’s about an 82% success rate with no effort, and the failures were easily fixed by replacing the org name with either a city name or a specific address.

Step 2 – Mapping these coordinates

Once you have the resulting sheet of organization names and longitude & latitude data from the first step, the next step is fairly easy. I had already stumbled upon Google Fusion Tables myself, an experimental feature aimed at combining datasets and visualizing them in new ways. Tony mentioned these would handle my data automatically, and sure enough they did, importing the existing Google spreadsheet with one click and, with another, turning it into a map.

But I actually ended up going with another approach suggested by Alec Couros, MapAList. MapAList is a third-party service that also works off of Google documents; a simple wizard lets you select the spreadsheet, worksheet and values you want to map, and it generates a map along with nice HTML embed code to use. I think either way works fine; I just ended up liking this one, as Fusion’s URLs confused me and I ended up sharing one on Twitter that pointed to the unvisualized data.

Below is the resulting map. The big learning here for me – the power of the =ImportData function. Without something like this, you end up having to write some code (not complicated code, but code nonetheless) that steps through your list, makes an HTTP request for each item to the API endpoint, receives the response, parses it, and compiles the outputs into some format you can use. Not a super complicated program, but 95% of end users aren’t going to write it. The above approach, though, seems really feasible, and given the availability of HTTP-based APIs that return CSV or JSON, it opens up a huge realm of data to non-programmers who can still handle a spreadsheet (which, as you’ll recall, was the home computer’s first killer app). – SWL
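
For the curious, here is a minimal sketch of the sort of script =ImportData saves you from writing – the file names are my own invention, and the old CSV geocoder it calls has long since been retired, so treat it as illustrative only:

    # Sketch: geocode a list of organization names via an HTTP API that
    # returns CSV. Endpoint behaviour and file names are assumptions.
    import csv
    import urllib.parse
    import urllib.request

    API = "http://maps.google.com/maps/geo?output=csv&q="

    with open("orgs.txt") as names, open("geocoded.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["organization", "latitude", "longitude"])
        for line in names:
            org = line.strip()
            # One request per name; the response was status,accuracy,lat,lng
            with urllib.request.urlopen(API + urllib.parse.quote(org)) as resp:
                status, accuracy, lat, lng = resp.read().decode().split(",")
            writer.writerow([org, lat, lng])

Fifteen lines, give or take – but fifteen more than most spreadsheet users will ever write.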

By the time I get to Phoenix…

…I’ll hopefully have the materials finished for the pre-conference workshop on Personal Learning Environments I am leading along with Chris Lott and Jared Stein at this year’s WCET Conference. If not, I figure I’m always good for a bit of song and dance (though I must admit I’ve always been more fond of Isaac Hayes’ version):

[youtube:http://www.youtube.com/watch?v=8MMRTahbQSw]

The day is shaping up, though, to be a good one. We are going to try two streams. The first, mainly led by Chris, is for people new to blogging, RSS and syndication techniques (as these seem fundamental to many people’s notion of a PLE). The second, which Jared and I will share, is split between “Growing Your Network by Moving Your Office Online” and my session on Mashing up your PLE. The sessions will be very hands-on, the hope being that people walk away with their PLE tuned up and more able to accommodate this method of network learning in both their own practice and with their students.

If you are planning on attending the WCET conference, consider joining us for this full-day session on Wednesday, November 5th. If the past is any indicator, it will be a funky good time in Phoenix that day. – SWL

My interview with the CogDog

http://cogdogblog.com/conversations/scott-leslie.mp3

As part of the cogdog’s recent tour down under, he interviewed a number of blog colleagues for quotable quotes. I only just found the one I did with him and listened to it for the first time (what, like this is a revealing admission, coming from a blogger?)

I must admit I’m actually kind of proud of how it turned out – I must have had my coffee that day and been slightly less sleep-deprived than usual, because this is probably as coherent a statement of what I think and what I am interested in right now as I’ve produced. Thanks for the great questions, Alan, and for helping me frame these scattered thoughts a little better. – SWL

Heave ho, scallywags, there’s events listings o’er thar to liberate

(Avast, me hearties, this is the last of the pirate postings. Just be glad they weren’t podcasts 😉)

So the other ‘mashups’ itch I’ve been wanting to scratch recently revolves around events listings, specifically a list of ed tech conferences that’s been around for a few years. Now before ye raise the topsails and give chase, hear me out – the landlubber who created and maintains this list every year is to be much praised, as I have done in the past, as are the folks at CIDER for posting it as HTML.

But in this age of participatory media and user generated content, does it make any sense for lists like these to get created and maintained by one person, in a Word document?

Aye, you say, but it was probably the easiest tool at hand for what was a selfless act of giving back to the community. Right you are; but howseabout I shows ya how to take this page, database-enable it and allow others to add new events to it in about 5 minutes with free, easy-to-use web-based tools. Come aboard all ye who’s coming aboard… Continue reading “Heave ho, scallywags, there’s events listings o’er thar to liberate”

Back on my feet and ready to sail the seas of trapped information, ya scurvy dogs!

Ahoy mateys, so that “moose fever” turned into pneumonia for me! On top of which my entire family got sick too. But we’re finally over that now, so time to break the silence and set sail on the seas of end-user mashups.

As much as I felt some small discouragement over the NV mashups workshop – certain technologies blowing up during the session, and us not sticking with a more hands-on format – I have not given up on the dream of exploring mashups for non-programmers, and have continued on, scratching a few of my personal itches.

Continue reading “Back on my feet and ready to sail the seas of trapped information, ya scurvy dogs!”

Mashups for Non-Programmers – an experiment gone slightly awry

So, we were one of the first sessions up at this morning’s Moosecamp. At the last minute we decided to change the format; originally we had wanted to stay true to the ‘camp’ ethos and do very little presenting and a lot of co-creating with the audience. But competition for attention is fierce at Northern Voice, and there were too many good sessions that I wanted to attend too, so we cut it down from the originally planned 1 1/2 to 2 hours to a quick 45-minute show and tell, with the hope that anyone who got really inspired would meet us later to get hands-on with the tools.

D’Arcy kicked it off and his set of examples worked pretty well, but right at the end, Pipes failed. Hard to tell if it was the Pipes app itself or an overloaded network connection. I was up next, and even though I had a few Pipes-based examples to show, I luckily had a few others in my bag too. Unfortunately, one of the services, OpenKapow, seemed to stop responding at the same moment, and Dapper, which I was using to illustrate how to create data sources where none exist, was sooo slooow that we had to move on. Oh demoitis, you cruel beast.

We at least tried to turn it into a teachable moment, illustrating that while there has been a true explosion of services, as “non-programmers” we are largely subject to the whims of their availability.

Brian followed on with a parable of his efforts over the years with Aggrssive, which, while I know he is hard on the results, I still think was and is a valiant effort to create a software package that lets us host our own feed mashups – something many of us at institutions require if we want to introduce these techniques into production.

And finally, Chris Lott brought a rock-solid performance, with his various experiments in Ning and Google Co-op working great.

I don’t know how many people we convinced that the potential for non-programmers to mash up content is there; that wasn’t so much our goal. For me the session was meant as an experiment in how far non-programmers could in fact go, and hopefully there were at least a few in the crowd who were inspired to push on further. If you are interested, the wiki page that we used to organize the session is chock full of additional examples and technologies to start creating your own mashups. Good luck! – SWL

Visio version of Scott Wilson’s UML Mashup Stencil

http://www.edtechpost.ca/gems/mashup.vss

I liked the UML icons that Scott Wilson produced and shared for the OmniGraffle tool, but couldn’t use them ‘as is’ because OmniGraffle doesn’t exist for the PC. So I asked Scott if he could share the source with me so I might somehow get them into Visio, the tool I most often use to whip up such drawings. He kindly went one better and produced an exported stencil for Visio, out of which I created the same set of mashup shapes for Visio. Just drop it in “My Documents – My Shapes” to be able to use it in Visio. Happy diagramming! And thanks again, Scott! – SWL

Library Mashup Competition Winners

http://www.talis.com/tdn/forum/84

I am currently participating in a cool exercise in prognostication on emerging technologies and learning, and one of my votes/pleas for a disruptive technology in the academy is “mashups” (which I realize aren’t properly a specific “technology” so much as a technique, but whatever.)

So it was with great pleasure that I stumbled on Jenny Levine’s post on the Talis Library Mashup competition. The full list of entries is here, and while it feels a bit tame, it is definitely a start. The library seems like one of the likeliest on-campus sources to be mashed up. What are the others? Well, to serve as the basis for a mashup, on my read at least, you need to be providing two things: some data, and a way to get at it (an API, web service/XML feeds, screenscraping, or some other mechanism for access – the more public the better). And there’s the rub, it seems. While more and more Web 2.0 companies (holy cow – 291 on this list) are offering APIs that are being mashed up (arguably often with a still-unknown value proposition), is your IS department publishing the API for your SIS on your campus website? Your CMS? Why would they do this? Well, that’s the other side of the mashup phenomenon – often the companies making their data available don’t yet know all the ways it could be used, but they appear to be correct in the belief that if you publish it, it will get used, often in unexpected or improved ways.

It’s likely that the on-campus sources serving mashups anytime soon won’t be the “enterprise” systems but departmental or discipline-based ones (various GIS-based systems seem ripe for combining with the Google and Yahoo Maps of the world; text collections with things like Yahoo’s term extraction service, etc.). And I don’t want to trivialize the challenges to security and privacy in accessing some of the enterprise data. But right now it feels like a brick wall – ask and you’ll get a strong ‘No’; not a considered one, but the idea rejected out of hand. But you know the trick: keep asking, and eventually you’ll wear them down (or they’ll retire 😉) – SWL

Dynamically Wikipedia-fying Text: Drawdoc and Wikiproxy Greasemonkey script

http://nagle.u1i.net/drawdoc/autolinker.php and http://wikiproxy.whitelabel.org/greasemonkey.html

Both of these accomplish pretty similar things – they take an existing web page and automatically turn proper nouns/key terms into links to Wikipedia.

Drawdoc is currently a web-based app (though it’s not hard to see how it could be a service instead) that employs Yahoo’s term extraction service to identify the salient terms in a document, then offers possible image matches from Yahoo Images and annotates the selected terms with links to either Wikipedia, Google or Yahoo.

Wikiproxy works slightly differently, as a Greasemonkey script that appears to just look for ‘Proper Nouns’ on a page and annotate them with links to Wikipedia as the page is rendered. So it works on the client side, but the effect is similar: a text automatically annotated with links from key words to Wikipedia.

In both cases what seems lacking is a connection to Wikipedia that actually confirms there is something to link to before creating the link. Not surprising – that’s not how they are intended to work; they are lightweight mashups. But the IDEA here is important – start thinking about collections you have on your campus that are pedagogically significant to your students. How tough would it be to code a Greasemonkey script that rendered key terms in your online course as links to that collection… of anatomical images? Of learning objects? …you get the idea. Why do this? Well, in the case of an approach like Drawdoc, it could become an automated annotator for your CMS-based courses, saving time and effort. With a Greasemonkey-type approach, it could potentially become a tool that augments the students’ experience of materials you didn’t create and don’t control with links to content in collections you trust.
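
As a rough illustration of the idea (a sketch only – the crude capitalized-word matching stands in for a real term extraction service, and I’m assuming Wikipedia’s standard query API for the existence check):

    # Sketch: auto-link capitalized terms in a text to Wikipedia, but only
    # after confirming an article exists. Naive term spotting is a stand-in
    # for a real term extraction service.
    import json
    import re
    import urllib.parse
    import urllib.request

    WIKI_API = "https://en.wikipedia.org/w/api.php?action=query&format=json&titles="

    def article_exists(term):
        # The missing piece noted above: ask Wikipedia before linking
        with urllib.request.urlopen(WIKI_API + urllib.parse.quote(term)) as resp:
            pages = json.load(resp)["query"]["pages"]
        return "-1" not in pages  # a page id of -1 means no such article

    def annotate(text):
        def link(match):
            term = match.group(0)
            if article_exists(term):
                href = "https://en.wikipedia.org/wiki/" + urllib.parse.quote(term)
                return '<a href="%s">%s</a>' % (href, term)
            return term
        # Crude 'proper noun' detection: runs of up to three capitalized words
        return re.sub(r"[A-Z][a-z]+(?: [A-Z][a-z]+){0,2}", link, text)

    print(annotate("The Fraser River flows past Vancouver."))

Point the existence check at your own campus collection’s API instead of Wikipedia and you’re most of the way to the course annotator described above.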

Mashups are here. They’re even commonplace, almost. But just wait until they start invading the academy. You can already get a list of the available ‘web 2.0 APIs’ (that is almost inevitably incomplete) – do you know what’s available inside your own institution? …you’re either on the bus, or it’s running over you… exciting times indeed. – SWL