
Access 2014 Calgary Wednesday notes

October 1, 2014


Tuesday’s notes
Thursday’s notes

We’re All Disabled! Part 2: Building Accessible Web Services with Universal Design
Cynthia Ng, BC Libraries Coop

Offered an introduction to the topic, asking the audience to contemplate their own practices and sites. Then shifted to a more practical or applied section, where she reviewed a number of tools and practices that will help us build accessible sites.

This was the second talk I’ve seen here that mentioned ARIA, something I had not heard about previously. ARIA (Accessible Rich Internet Applications) is a suite of attributes for making rich Internet applications accessible, rather than, I gather, offering users who need accessible sites just a stripped-down text version of your content.

Showed us a media clip with an audio description, i.e. a spoken narration that explains the non-verbal action in the video. Also noted that media that autoplays is a problem, including carousels. Her conclusion: “death to the carousel.” She admitted that one can create an accessible carousel, but these are rare, and even then they are often used in an inaccessible fashion (autoplay again).

Repeated what many of us have preached for years when it comes to content: be brief, use solid structure, avoid tables, etc. The Achilles heel of library websites is perhaps our collective tendency toward prolixity.

When Campus IT Comes Knocking: A New Model for UBC Library IT in the 21st Century
Paul Joseph, UBC

Started with some wry preambles, including one noting that UBC IT doesn’t know what to call him, so they call him the ‘business architect.’ Lots of laughs.

He boiled the entire story down to one of cash; this is about “streamlining” and saving money. Consolidation results from a chronic state of underfunding and lingering budget crises.

The department once had a staff of 11, but six resignations in 16 months were met with only two contract employees in return, which of course weakened their ability to do their work.

The third reason for this was a budget deficit reduction program that the library was enduring. The fourth was the ‘enterprise architecture framework’ used at UBC, which encourages consolidation, in a nutshell.

The entire library was subjected to a business analysis, which sounds like an interesting process. It puts people into general categories rather than the departmental framework of the library organization. The process took five months, and he spoke positively about how it encouraged him to view libraries differently. The result was a capability map with three categories: collections, engagement, and management. Each of these broke down further into three elements, yielding a matrix of nine elements. Alas, in this matrix IT gets stuffed into one element under management, which is predictable but lamentable. The nine elements themselves have functions listed under them, which are fairly granular and recognizable by people working in libraries.

On the IT side, they mapped 134 applications to the capability map. The vast majority fall under collections, which isn’t really surprising, with a dropoff under engagement and management. They did identify 23 that support IT or facilities, and one can argue that such applications should, in fact, be run at the enterprise level. Of the applications, nearly a third are UBC-developed; 61 are server-based, and most run on Linux. Key: 65% of their applications are complex, requiring multiple servers or integration with other applications. This result surprised central IT and made them nervous. 53% of their applications are public facing, i.e. they have a Web frontend; by comparison, one of the faculties had only 12% of its applications public. He showed further results that indicated how complex their application suite is, including an absolute mess of a diagram (which got laughs) that could make no sense to a non-library IT person. Other diagrams, such as one for course reserves, were equally complex and convoluted.
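The mapping exercise described above is easy to picture as a simple tally. Here is a minimal sketch, with entirely invented application names and an invented (much smaller) inventory; the real exercise covered 134 applications against the nine-element map.

```python
from collections import Counter

# Hypothetical sketch of the capability-mapping exercise: each application
# is tagged with the top-level capability it supports, then tallied.
# Application names and their assignments are invented for illustration.
app_capabilities = {
    "ILS": "collections",
    "institutional repository": "collections",
    "discovery layer": "engagement",
    "room booking": "engagement",
    "staff intranet": "management",
    "ticketing system": "management",
}

tally = Counter(app_capabilities.values())
print(tally)
```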

They have 19 physical servers, and in general a fairly large and complex hardware environment. Central IT felt as though this could have been consolidated. They also had a number of security issues related to application versioning, etc.

He enumerated the challenges that were identified, as well as the partnership opportunities that emerged, such as leveraging central services and standardizing where possible. Risks were also identified, such as data loss, resource loss, security concerns, and a lack of scalability and flexibility.

He was not able to speak about the recommendations that resulted, since they are confidential. They are in the midst of a transition as well.

In response to a question I asked about doing things differently to fend off consolidation, the main point he made was that they could have done more to articulate their value and importance inside the library. Our own organizations often see us as “drones” who work in the basement and keep the desktops running, and the more critical contributions are entirely overlooked. This is a valuable response for those of us tasked with managing this work and building the case for more resources.

Taking Control of Discovery: In-house development to improve student experience and break down silos
Sonya Betz, MacEwan; Sam Popowich, U Alberta

Sonya began with a quick review of the existing problems we have with our discovery systems. Highlighted the fact that while most of our traffic still comes from desktops/laptops, that traffic has plateaued or even dropped, while mobile and tablet traffic has grown significantly.

Sam noted that they have similar issues, with a few extra thrown in. For one, they have data silos, knowledge bases, etc. They also have some sustainability issues. He noted they ask themselves the question “who uses our site anyway?”

He ran through the technologies that they’re using to build a discovery tool. Not surprisingly, it’s open source heavy. Neat list. As he noted later in his talk, this focus on open source is fairly new.

At MacEwan, they’re using vendor APIs and trying to integrate them into a single interface. They did this a number of years ago with an iOS app, but that had the inherent limitation of being iOS-only. It was valuable experience, though, as she noted. Building on that, they wanted to create a tool that works on all devices. It integrates various vendor APIs along with local content sources into a single search: they want the vendor data, but not the vendor UI. They roll all of this into a custom Drupal search module and use the ZURB Foundation theme for Drupal 7 to get a fully responsive design.
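The pattern described above — query several sources, keep the data, discard the vendor UI — can be sketched in a few lines. The fetcher functions here are stand-ins; a real version would call each vendor’s search API and map its response fields into the common shape.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in fetchers: in reality each would hit a vendor API or a local
# content source and normalize the response. Names and data are invented.
def search_articles(query):
    return [{"title": f"Article about {query}", "source": "articles"}]

def search_catalogue(query):
    return [{"title": f"Book about {query}", "source": "catalogue"}]

def unified_search(query, fetchers):
    # Fan the query out to all sources in parallel, then merge the
    # normalized results into one list for a single-search display.
    with ThreadPoolExecutor() as pool:
        batches = list(pool.map(lambda f: f(query), fetchers))
    return [hit for batch in batches for hit in batch]

results = unified_search("open access", [search_articles, search_catalogue])
print(results)
```

The design point is that normalization happens per source, so adding a new vendor means writing one more fetcher rather than touching the merge logic.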

One challenge they face is that not everything can be integrated, so they have to “pick and choose based on impact.” They are launching it in beta in the next few weeks and are aiming for production by January.

U of Alberta is also looking to launch their service in January. To support the shift to open source, they’ve written a developer handbook that offers best practices, policies, and procedures, e.g. how best to use GitHub, Vagrant, and Ansible (a configuration-management tool in the Chef/Puppet vein that allows scripting of complex operations). He noted that they have developers and sysadmins together in a team; it’s “not quite scrum, not quite DevOps,” but they are working more effectively than they used to. They use birds-of-a-feather sessions to foster this, and find new ways to communicate and share information (new platforms, getting everyone an account, etc.).

His three main components:

  • how to break down data silos
  • how to improve the user experience
  • how to get information out into the user space (discoverable content)

Breaking down data silos means using a set of tools to get data into a form that Blacklight can ingest and understand. Described the tools and procedures that he uses to do this, including Solrizer, RSolr, etc., but noted that this doesn’t solve all problems.
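Since Blacklight sits on top of a Solr index, the silo-breaking chore boils down to flattening heterogeneous source records into Solr-style field/value documents. A minimal sketch of that transformation follows; the field names use Solr’s dynamic-field naming convention (`_t` for text, `_facet` for facetable strings) but are invented for illustration, not taken from the talk.

```python
import json

def to_solr_doc(record):
    # Flatten a hypothetical source record into a Solr-ingestable
    # document. Real pipelines (Solrizer, RSolr, etc.) do far more:
    # analysis chains, multivalued fields, per-source field mappings.
    return {
        "id": record["identifier"],
        "title_t": record["title"],
        "author_t": record.get("creator", ""),
        "format_facet": record.get("type", "unknown"),
    }

record = {"identifier": "alta:123", "title": "Prairie History", "type": "book"}
print(json.dumps(to_solr_doc(record)))
```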

For the user experience, they want a single search box that leads to the established bento box display of results. He noted it has to work both for first-year students and for established university researchers with higher-order needs. Do these higher-order needs actually exist? Do these kinds of people use our interfaces?
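The bento box idea reduces to a group-by: one query, merged results, then each “compartment” (catalogue, articles, guides, …) rendered separately. A minimal sketch, with invented result data:

```python
def bento(results):
    # Group merged search results by their source so each compartment
    # of the bento display can be rendered as its own panel.
    boxes = {}
    for hit in results:
        boxes.setdefault(hit["source"], []).append(hit)
    return boxes

results = [
    {"title": "Intro to GIS", "source": "catalogue"},
    {"title": "GIS in libraries", "source": "articles"},
    {"title": "GIS research guide", "source": "guides"},
    {"title": "GIS data sets", "source": "articles"},
]

for source, hits in bento(results).items():
    print(source, len(hits))
```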

They concluded with a set of questions:

  • is there an industry shift towards vendors providing services rather than interfaces?
  • do users still use our sites to search for, discover, and access our materials?
  • how do we integrate our information into their workflow so that they don’t have to come to us?
  • how can we provide our data and services in easily reusable forms? (open APIs, open data – let users build their own interfaces)

Adding e-resources license information to library systems: three libraries’ approaches
Jenny Jing, Queens U; Marc Lalonde, U of Toronto; Amaz Taufique, Scholars Portal; Christina Zoricic, Western U

Talking about OUR: Online Usage Rights. Amaz gave a quick review of how the 2011 copyright changes impacted library information systems. The solutions on hand at the time (Verde, homegrown systems) didn’t do the job, so they set out to create something easy to implement, multilingual, and preferably free or at least cheap. The only thing at the time that came close was UBC’s Mondo License Grinder. UBC had open-sourced it, so it was identified as the only viable option.

They consulted with librarians on desired features, and with lawyers on the language. The license information comes from CRKN data that is then shared down to the consortial level, i.e. to OCUL and others.

Christina noted that Western’s work was driven by the end of the Access Copyright agreement on December 31, 2013. Their goal was to get all of their licensed content linked to a license record. Their team had two librarians, two contract library assistants, and support from Library ITS, their customer service committee, and the copyright advisor. One decision they made was to abandon the licensing module inside Sierra, and opted instead to use OUR from Scholars Portal. They had to clean up nearly 1000 ERM resource records, representing over 1500 electronic licensed products.

Jenny described how they integrated OUR into various interfaces at Queens. Many of the issues had to do with proprietary vendor interfaces, where it’s hard to get information from an external source into the interface, not least since each uses a different framework. Marc described Toronto’s project, which differed in the details but was likewise about getting this license data attached to resources in all of their Web services.
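Underneath all three projects is the same join problem: license terms live in one system (OUR) and resource records in another, so usage rights must be attached to each resource by a shared identifier before display. A minimal sketch, joining on ISSN; all data and field names are invented for illustration.

```python
# Hypothetical license store keyed by ISSN, as might be pulled from OUR.
licenses = {
    "1234-5678": {"course_packs": True, "interlibrary_loan": False},
}

resources = [
    {"title": "Journal of Examples", "issn": "1234-5678"},
    {"title": "Unlicensed Quarterly", "issn": "0000-0000"},
]

# Attach usage rights to each resource record; fall back to an explicit
# "not on file" marker so the interface can say so rather than stay silent.
for r in resources:
    r["usage_rights"] = licenses.get(r["issn"], "license not on file")

print(resources)
```

The hard part the speakers described is not this join but injecting the joined data into proprietary vendor interfaces, each with its own framework.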

Linked Data is People: Using Linked Data to Reshape the Library Staff Directory
Jason Clark, Scott Young, Montana State

Their project was essentially a demonstration of using linked data to take something human readable (in this case, a staff directory) and make it machine readable. Gave an overview of LOD and noted the five-star model described by Berners-Lee, which sets out an ascending model of implementation, starting with simply getting your material onto the Web under an open license and culminating in linking your data to other data to provide context.

Scott went into what others are doing with LOD in libraries, noting some talks at CNI (one example from UNLV), DLF (upcoming panel), and elsewhere about exciting and interesting projects that are underway.

Scott pointed out that there are a lot of acronyms out there representing tools for and ways of working with linked data, and one has to choose which tools to use for a given project. In their case, for example, VIAF didn’t work, but the ORCID database did. They also found that DBpedia had most of the elements they needed, so they employed it too. The idea, simplified, is to link people (disambiguated via ORCID) to concepts (e.g. data management) that exist in DBpedia.
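That person-to-concept linking can be pictured as a small JSON-LD document: a staff-directory entry as a schema.org Person, disambiguated by an ORCID iD and pointed at a DBpedia concept. The person, the placeholder ORCID, and the exact property choices here are my own illustrative assumptions, not taken from their implementation.

```python
import json

# A hypothetical staff-directory entry as JSON-LD. "sameAs" carries the
# ORCID iD (placeholder shown) and "knowsAbout" links a research interest
# to the corresponding DBpedia resource for machine-readable context.
person = {
    "@context": "http://schema.org",
    "@type": "Person",
    "name": "Jane Librarian",
    "jobTitle": "Data Services Librarian",
    "sameAs": "https://orcid.org/0000-0000-0000-0000",
    "knowsAbout": {"@id": "http://dbpedia.org/resource/Data_management"},
}

print(json.dumps(person, indent=2))
```

Embedded in the directory page, a document like this is what lets search engines and other consumers treat the entry as data rather than prose.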

Their results were improved search ranking (SEO), a new UI, almost a five-star LOD example, and a visualization. Top ranking was not the primary objective, but it has been one of the outcomes when searching for the name of a person who works there. They also just released a network visualization view, currently in beta, which uses D3.js.

Unlocking the Door
Bobbi Fox, Gloria Korsman, Harvard

They used student focus groups to inform this project. Even though they were asking about the library homepage, they got a lot of feedback about course reserves, which reside in Harvard’s dated LMS. The students indicated dissatisfaction with the interface and preferred to search for things rather than being presented with a list (ctrl-F isn’t a thing for them). They used the catalogue more than the LMS to look for reserves, which isn’t an optimal solution. They also had a set of standard questions they often get from students with regard to reserves, all of which indicated issues with locating the content.

Their response was to create a new course reserves interface that streamlines access and only requires authentication when necessary. They named it Course Reserves Unleashed, which then got the acronym ecru (the e is obvious).

One of their challenges was to create a tool that would work with the variety of systems in use at Harvard, which has many libraries operating somewhat autonomously from the sound of it. They also wanted it to be open source (it is, available on GitHub). Written in Java, for which she apologized, of course, being at Access.

Piping Hot: Little Bins in Big Workflows
Alex Garnett, Simon Fraser

Gave a brief overview of the Linux tradition of writing programs that do one thing and do it very well, rather than complex programs that do many things at once (e.g. MS Access). Showed how terminal commands can do simple and elegant things with the myriad executables found in a Linux install.

Showed a concrete example, where two lines of commands convert PDF files to PDF/A, turning what sounds like a hellish problem (when, for example, ProQuest wants PDF/A and you’ve got PDF) into a simple matter. Showed other examples as well, where one could have built a Web app to achieve the same results, but a quick shell command is easier and more reliable. The third example involved redacting PDFs, i.e. blacking out certain text in a PDF. This was typically done by hand, but using Ghostscript it is possible from a terminal session. It’s a five-line script that uses five programs, so once he figured out the steps, it’s dead simple. It’s mainly a matter of reading the documentation.
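To keep the flavour of the talk without reproducing his exact scripts, here is the classic pipe idiom driven from Python: a word-frequency count built by chaining four tiny single-purpose tools (`tr`, `sort`, `uniq`) rather than writing one big program. The pipeline itself is standard shell; the commands are present on any ordinary Linux install.

```python
import subprocess

# Split on non-letters, lowercase, sort, count duplicates, and rank:
# each stage is one small tool doing one job, glued together with pipes.
pipeline = "tr -cs 'A-Za-z' '\\n' | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn"

text = "the cat and the hat"
out = subprocess.run(pipeline, shell=True, input=text,
                     capture_output=True, text=True, check=True).stdout
print(out)
```

This is the same spirit as his PDF examples: the hard part is discovering which small tools exist and reading their documentation, not the glue.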

The User Experience Study: Student Views on the Principles of Legal Research Website of the University of Ottawa’s Brian Dickson Law Library
Margo Jeske, U Ottawa

She presented work done largely by her colleague Channarong Intahchomphoo. The thrust of their work was to create a site that would help students develop better research and information literacy skills, while alleviating the workload of librarians that results from a heavy teaching load (it’s the largest law school in Canada). They created four modules that replaced three weeks’ worth of class time. As an aside, she pointed out that this was all done bilingually.

Students liked the modules, but the chief complaint was that they were too long; based on how often one hears this, it seems to be a chronic issue with this type of initiative. One reason they liked the modules was that they had no class the week they were doing a module.
