Failing and flailing at code4lib north
At the recent code4lib north meeting at McMaster University (shouts to Nick Ruest and John Fink for the idea and for making it a reality!), I tossed in “fun with failure” as a lightning talk. I had no idea how prophetic this title would prove to be. Not only did I have technical troubles of my own making (the flailing), but I also spoke for nearly 15 minutes instead of five (the failing, though other lightning talkers did that too, so it was group failing). It was a fun talk, but it’s a serious topic, I think, if we want innovative libraries.
See the rest of the awesome videos (less flailing) here.
500 Student Voices: an idea
While I was listening to a student panel at the Future of Academic Libraries Symposium at McMaster University, an idea for a mass feedback round hit me. The audience was really grooving to what the students had to say, but, as someone asked the group, were these panelists really representative? With that question, the idea was born.
Here’s how it could work. At about the same time on the same day (no need for precision), 100 libraries across North America (or the world) gather five students in a room, ask them to respond to a series of questions about libraries, film the whole thing, and ultimately post the results on YouTube or any video platform. What do we get? A cross-sectional slice of student views, and a chance to see if students are saying the same things everywhere, or if important differences emerge based on institutional type, location, country, etc.
What we need is a better name, and a tiny bit of central organization. The rest would be up to participants to pull off, and that’s the beauty. At the very least, we could have a lot of fun, and who knows, getting 500 students involved in a project like this might yield benefits we cannot even imagine.
Anyone game?
Twitter and Europop
Like a lot of people I’ve spoken with about social media, I tend to use Facebook to interact with friends and family, and try to keep Twitter a bit more professional in tone. That said, we all let our hair down once in a while, and yesterday saw a lot of hair downletting. Why? The 2011 Eurovision Song Contest, that’s why.
Allow me to apologize to anyone who follows me on Twitter and had to endure my steady Eurovision smack talk (sample: British expat singing cheesy song for Romania = bathroom break). Then again, when I see “serious” library folk such as Lorcan Dempsey tweeting about it (expressing his wish to be watching rather than sitting in an airport), I think perhaps we’re all alike in our ability to enjoy some pretty silly events.
Eurovision is pure, unadulterated camp. As such, it brings out the best in the Twitterverse. Below, a sample of my favorites of the day:
- podwangler: Oooo, Azerbaijan. Arm waviness. Oh no, it just went all awful and duettish. Ew.
- TomFidler: #Azerbaijan cracking out the white trousers. It was only a matter of time….
- Konnolsky: There is man in Smolensk who can do what sand artist do but with piss in snow
- roryenglish: Waiting for sand lady to write “Can I defect? I need asylum” #Ukraine
- misterebby: “we are angels we are crystal meth” ? Am I hearing things wrong?
- simon_ryan: Great song Serbia. The 60’s only got there a few weeks ago but valiant effort
- Yessica89: Really, seriously now.. That little dance Spain did.. What IS that?! Looks like some kind of monkey hop or something
- des2k: Greece. And to think we bailed them out to the tune of €3bn
Responsive and responsible developers: a tale of Twitter
I chose that title hoping to write a positive story about how a group of software developers did the right thing when it comes to the terms of use they attach to their products. As with so many intellectual property stories these days, this one appears to have gone off the rails a bit, although not all is lost at this point.
Thanks to a March 14 exchange on Twitter, I learned about colwiz, a research and collaboration tool being developed in the UK. It’s pretty slick, and clearly has found popularity among researchers in short order. Being a copyright/intellectual property geek, I went first to their terms of service and read up. As I replied via Twitter:
CNI Spring 2011 notes
After a several-year hiatus, I made it back to a CNI meeting and attended a number of interesting sessions. Below are some notes, with links to the posted presentation materials. My commentary on the talks is set in square brackets to distinguish it from the summary notes.
Transformational technologies in libraries
When I interviewed here at McMaster, I was asked to speak on the topic of transformational technologies in libraries. Generally, I think the phrase “transformational technology” is overapplied by a tech-enraptured press, so I geared my talk toward making distinctions between cool and transformational.
Who’s zoomin’ who?
Enough already with the U.S. government’s habit of naming a “czar” to solve problems. It’s a ridiculous moniker, but that’s beside the point. Instead of rational debate leading to sensible policy, presidents throw a czar at some perceived crisis. Generally, the results are poor (cf.- War on Drugs).
Somehow I had overlooked that the U.S. now has an intellectual property czar (czarina?), Victoria Espinel. I learned that fact today when I read that the Obama administration wishes to make unauthorized streaming a federal offense. What’s next from this administration? A Minister of Crushing Users’ Rights?
TEDxMcMasterU 2011
As one of the people fortunate enough to get a ticket, this past Saturday I attended the first TEDxMcMasterU. A band of McMaster students brought it all together and did a fantastic job, given that they had to live up to the TED name. It’s great to see this kind of initiative and ambition in students.
At some point during the day, in fact, it struck me that the exercise of planning a TED event is to some degree an end in itself, as much as the talks are. TED events aren’t just symposia; they are happenings with high expectations. One borrows the TED name, but I imagine it was a ton of work for the students to put this together, so again I offer my thanks for their efforts. Read more…
Paying Google
Within days of Google’s 2004 launch of their ambitious book scanning plan, cynical librarians (myself included) wondered how long it would take for Google to market their new toy to libraries.
As it turns out, a long time. It’s now 2011, and we’re still waiting for Google to come knocking, offering content for money. Is this strategic, or perhaps related to their epic struggles with rightsholders? Likely the latter.
This occurred to me today when I recalled–for the 300th time–that Google offers no API for Scholar. While noodling around for information on the current state of that issue, I found a comment on Jonathan Rochkind’s blog that made something go bing in my head. Commenter Marty pointed out that Google relies on publisher largesse to get at the article metadata, and that many of these publishers have an interest in driving use toward their own tools (e.g.- SciVerse). It would follow that Google faces many publisher-imposed restrictions about what they can do with Scholar, which would explain the lack of development and an API.
That leads to a question. Money makes everything move, so would we be willing to pay for this API? Google could make the publishers happy by paying them for access to their data, and we could pay Google for use of their API. It would be like licensing a database, except I don’t want a crappy interface and a sales call; I want an API so that my library can access the data from our own interface and manipulate it to fit our needs. In a nutshell, we’d be paying Google to aggregate publisher data, something we currently do in a variety of semi-satisfactory ways.
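To make the pitch concrete, here’s a minimal sketch of what a library might do with such a paid Scholar API. To be clear, no such API exists: the endpoint, the key parameter, and the response shape below are all invented for illustration.

```python
import requests  # third-party HTTP library (pip install requests)

# Everything below is hypothetical: Google offers no Scholar API,
# so the endpoint, key, and response fields are assumptions.
SCHOLAR_API = "https://scholar.googleapis.example/v1/search"
API_KEY = "issued-to-subscribing-library"  # imagined licensing model

def search_scholar(query, limit=20):
    """Fetch article metadata from the imagined paid Scholar API."""
    resp = requests.get(
        SCHOLAR_API,
        params={"q": query, "limit": limit, "key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    # Assumed response shape: {"results": [{"title": ..., "doi": ...}, ...]}
    return [
        {"title": rec.get("title"), "doi": rec.get("doi")}
        for rec in resp.json().get("results", [])
    ]

# The records land in our own code, ready for our own discovery layer:
for article in search_scholar("metadata aggregation"):
    print(article["title"], article["doi"])
```

The point is simply that the metadata would arrive as data, not as a destination site, so each library could blend it into whatever interface its users actually see.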
Is this crazy talk?
Critical online library services?
What seems like many years ago (it was 2004), a science librarian colleague said that if we had to pull the plug on online library services for lack of funds, the link resolver should go last. At the time, this flew in the face of conventional wisdom, which would have placed the public catalog at the top of the pile.
It’s 2011, and I wonder how many librarians would still rank the catalog first. I know many would, having had such conversations in recent months.
Here’s the question: if one were to toss four common online services into the mix, specifically:
- public catalog
- link resolver
- digital collections (of locally digitized materials – not IR files)
- proxy solution (remote access; most commonly EZproxy)
how would one prioritize this list? Without hesitation I would put EZproxy at the top, followed very closely by the link resolver (the latter is largely useless without the former for most users). The catalog is a ways behind, while digital collections vary wildly by institution. At the library I just left, they are utterly inconsequential, while at my present employer more than just the odd librarian would notice if they went dark.
What gives me pause are the expenses associated with these various tools. Granted, it’s not fair to compare EZproxy straight up with the catalog, not least since the latter is just part of an ILS behemoth. But–and this is the crux–should we begin retooling/staffing our libraries to reflect user priorities rather than our internal priorities? Many people talk around this point, but show me the library that treats EZproxy with nearly the attention they spend on, say, whether to display some random MARC field in the public catalog. Yet, EZproxy is the gatekeeper to an increasingly large portion of our collections (i.e.- all licensed materials), which represent collectively an annual investment in the millions. Outages are like lost productivity in a factory.
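Since outages are the worry, the minimal first step is simply noticing them. Here’s a rough sketch of a scheduled check a library could run; the URLs are placeholders to be swapped for your own EZproxy login page and link resolver base URL.

```python
import urllib.request
import urllib.error

# Placeholder URLs: substitute your own EZproxy login page and
# link resolver base URL.
CRITICAL_SERVICES = {
    "EZproxy": "https://proxy.example.edu/login",
    "Link resolver": "https://resolver.example.edu/",
}

def check(name, url):
    """Return True if the service answers with an HTTP success code."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            ok = 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        ok = False
    print(f"{name}: {'up' if ok else 'DOWN'}")
    return ok

# Run from cron every few minutes; wire the exit code to email/SMS alerts.
if __name__ == "__main__":
    statuses = [check(name, url) for name, url in CRITICAL_SERVICES.items()]
    raise SystemExit(0 if all(statuses) else 1)
```

Crude as it is, even this would tell you more about user-facing downtime than most MARC display debates ever will.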
Would appreciate hearing your comments and/or your rankings.