LIS 855: Reading Journal Week 10

Unit topic: Data Standards & Silos

Reading list:

  1. Todd Carpenter (2008). "Improving Information Distribution Through Standards." Presentation at ER&L 2008.
  2. Oliver Pesch (2008). "Library Standards and E-Resource Management: A Survey of Current Initiatives and Standards Efforts." The Serials Librarian, vol. 55, no. 3, pp. 481-486.
  3. Paoshan W. Yue. "Standards for the Management of Electronic Resources." In Mark Jacobs (ed.), Electronic Resources Librarianship and Management of Digital Information: Emerging Professional Roles. Binghamton, NY: Haworth Press, pp. 155-171.
  4. Peter T. Shepard (2010). "COUNTER: Current Developments and Future Plans." In The E-Resources Management Handbook (2006-present), edited by Graham Stone, Rick Anderson, and Jessica Feinstein.

I am amazed by the human analytical-obsessive instinct that makes us want to parse value out into the smallest possible units. I'm thinking of the increasing trend among e-publishers (now spreading to COUNTER's plans for the future) of trying to measure usage not just at the level of journal titles, or even journal issues, but right down to individual journal articles, as described in Shepard's article. Why?! Does this do anything besides help tenure-track scholars say to their tenure committees: hey, look, I may only have published [insert low #] articles, but they were downloaded/clicked on/clicked from [insert impressively high #] times! If journals are still published as journals and not as individual articles, how does it help us to develop usage tracking for individual articles? Is the e-information industry thinking of heading towards some kind of package deal where they single out the most highly used articles, bundle them together, and sell them at a premium price? This reminds me, rather funnily, of the old practice (at least I think it's no longer done) in the rare book trade of buying a complete manuscript and then slicing it up into individual pieces that the dealer could sell at exorbitant prices adding up to more than he would have gotten for the book as a whole. Information technology may change, but people don't, eh?

Ugh. The soreness in my neck and upper back is not conducive to my blogging today. Okay, okay, how about this: search clicks, a credible measure of database utility/value or not? My brain's going all kinds of places today, so this is reminding me of the 5-clicks-to-Jesus game on Wikipedia. Ever played that one? Basically I'm saying: it's hard to know what people are using databases for. What if institutions are using certain databases specifically for deep-linking to e-reserves pages? That's no clicks (or possibly one click), but high utility. Also, even if people are searching in databases to explore the literature on a topic, whether they investigate more or fewer links from a search results page could depend on lots of things. If a user looks at a search results page and can tell immediately which items are relevant to their interests, that could mean fewer clicks, since they can zoom straight in on the links they need. If they can't tell, they may end up clicking on more things, but the utility/value of the database goes down. Good abstracts could lead to fewer clicks or more: fewer if they help the user know exactly which articles are useful and thus end their exploration sooner, more if they are SO good that the user figures they'd be well enough off just reading all the abstracts, regardless of whether the articles were terribly relevant to their topic. So…yeah…I'm kind of thinking this might be a rather shape-shifting tree to be barking up.

My question is: how much net profit are the vendors/publishers actually making out of all these counting pyrotechnics?  And how much convenience or efficiency do libraries and users get out of it?

Today I am tired and not in the mood to indulge the tech craze.
