Day 1 of ELUNA: Highlights included Ross Singer’s umlauts, Oren Beit-Arie’s industry evaluation, and SFX user experiences.
There was a general meeting in the morning with a presentation by the heads of Ex Libris. Matti Shem-Tov and Dan Trajman of Ex Libris talked about the company's general vision, including:

- Primo, their product to unify publishing, the front end, and the end-user experience across all of Ex Libris' products
- Increased use of existing open-source software, for example Lucene
- Reducing the number of supported versions of their software by improving upgrade paths

They hinted at changes in their development methodology, and at major changes in customer support, with a unified customer support system. Emphasized was the importance of separating the administrative layers from the presentation layer: in other words, modularization, giving customers the opportunity to select whatever pieces they actually needed.
First breakout session for me was a series of three different applications/services that took advantage of the SFX API. All were logical extensions and served as good examples of why we need APIs to these extremely expensive products. CDL presented some of the pieces of their Common Framework. I found this to be the best of the three implementations, since it places the API within a larger set of tools to provide services.
I went to a presentation by #code4lib's Ross Singer detailing Georgia Tech's project to widen the scope of the link resolver's services. In my mind the Umlaut goes ahead and tries to clean up a lot of the messiness that silos create. It queries a whole range of services at once to assemble, in a sense, a composite result set for any given query. It'll query Amazon, Google, social bookmarking services, and probably anything else that has a functional query-and-present API, and pull results/metadata/the kitchen sink from them. There is some relevancy stuff thrown in, and the results looked relevant and useful in the example queries.
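The fan-out pattern described above can be sketched in a few lines. This is not Umlaut's actual code or API, just a toy illustration of the idea: submit the same citation to several services concurrently and merge whatever comes back into one composite result set. The service functions here are placeholders.

```python
# Hypothetical sketch of an Umlaut-style fan-out: query several services
# at once and merge their results. The service functions are stand-ins,
# not real APIs; a real version would make HTTP calls to each target.
from concurrent.futures import ThreadPoolExecutor

def query_amazon(citation):
    # placeholder for a call to Amazon's item search
    return [{"source": "amazon", "title": citation}]

def query_google(citation):
    # placeholder for a web search call
    return [{"source": "google", "title": citation}]

def query_bookmarks(citation):
    # placeholder for a social bookmarking lookup
    return [{"source": "bookmarks", "title": citation}]

SERVICES = [query_amazon, query_google, query_bookmarks]

def composite_results(citation):
    """Fan out to every service concurrently, collect all results."""
    with ThreadPoolExecutor(max_workers=len(SERVICES)) as pool:
        futures = [pool.submit(svc, citation) for svc in SERVICES]
        results = []
        for future in futures:
            results.extend(future.result())
        return results
```

Querying in parallel matters because the slowest target, not the sum of all targets, bounds the response time the end user sees.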
On the other hand, I'm not sure Ross' approach of layering more things onto the link resolver is the best one, but in the end code talks and everything else walks. Also, honestly, I haven't really studied how people are doing relevancy for library resources. My initial thought is that the semantics of the queries an academic audience sends are completely different from the ones the average googlezon user sticks in the text box. That would mean a lot of the assumptions about what is relevant have to be carefully examined and weighed. I also imagine relevancy is one of those problems where fifty different groups can be working on it, fifty solutions get made, and all of them are right.
At the end of Ross' presentation, Oren Beit-Arie of Ex Libris, who was lurking in the crowd, asked a pointed question about the scalability and scope of harvesting: how many different targets/services do you need or want to assimilate? I think this definitely led into the next session, where Oren Beit-Arie gave a presentation on Ex Libris' industry vision, library trends, and the company's response to them. Spearheading that response is Primo, what could be termed Ex Libris' next-generation public face for resource discovery and delivery. He gave a quick demo of Primo's functionality. One thing that I'm not sure librarians understand, and which cannot be emphasized enough, is the power of harvesting, repurposing, using, and manipulating library resource metadata. There is a reason Google works so well: they throw tons of really cheap machines at basically mirroring the entire Internet, index it, and then do really cool things. Primo seems to be a step in that direction for library metadata. Of course, my main hopes/concerns are that once Ex Libris decouples presentation from the administration and management layers, there should be very little to stop a site from rolling its own discovery and delivery face. On top of that, there is a lot of pressure within an academic institution to have your own customized portal for subject, special collection, or project slices. It makes a lot of sense to me for unique resources to just expose themselves to Google and other search engines and take care of discovery that way.
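The harvest-then-index pattern behind Primo-style discovery can be shown with a toy example. This is not how Primo actually works internally, just a sketch of the idea: pull metadata records into one local store, build an inverted index over them, then answer searches locally instead of broadcasting live queries to every silo. The records here are simplified dicts standing in for harvested MARC or Dublin Core metadata.

```python
# Toy sketch of harvest-then-index: records are assumed to have already
# been harvested (e.g. via a protocol like OAI-PMH) into local dicts.
from collections import defaultdict

def build_index(records):
    """Build an inverted index mapping each title word to record ids."""
    index = defaultdict(set)
    for rec in records:
        for word in rec["title"].lower().split():
            index[word].add(rec["id"])
    return index

def search(index, records, term):
    """Look the term up locally; no remote silo is queried at search time."""
    ids = index.get(term.lower(), set())
    return [rec for rec in records if rec["id"] in ids]
```

The point is the same one made about Google above: once the metadata is harvested and indexed centrally, search is fast and you can do "really cool things" with it that a live federated query never allows.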
I have a bit of concern about the emerging groupthink on Web 2.0 and libraries that Oren's talk covered. There is plenty of smoke and a bit of fire burning, but where it is all going, whether it is becoming an uncontrollable conflagration or exploding into something significant, I have no idea.
Anyway, a lot to chew on.