I am not sure that I could select a single "most important earlier entry," but I can say that I learned a great deal about electronic resources, licensing, and ERMs over the course of the semester. One way I have noticed a change in my thinking is when I am looking for articles using Find It, or when I see the "SFX Admin" link on the library staff page. I feel that I now have a fairly clear understanding of how link resolvers, A-to-Z listing services, and OpenURLs function. To that end, I would point to my entry for week 11 as one of my more important posts. It was there that I explored content discovery tools and really began to put together the bigger picture of how Find It / SFX and Ex Libris work. Related to that post are those from weeks 8 and 9, where I discussed TPMs and the different types of ERM systems. After the readings for week 11 and two presentations on SFX and Find It, the readings from weeks 8 and especially 9 began to make more sense. Although week 8 is not exactly about ERM systems and their functions, knowing how systems may determine who is an authorized user, and how authentication works, helped me further understand ERM systems.
In general, I found the second half of the course to be less intuitive than the first half. This is probably because I have a background in electronic resources and had studied copyright and Fair Use prior to this course, so the earlier material was more familiar. Still, it was helpful to further develop my understanding of the earlier units before diving into ERM systems and functions; it would be difficult to discuss how ERM systems work without a base knowledge of licensing and copyright issues. Of particular importance is the TEACH Act, which I knew very little about, so that unit was quite informative, particularly during the Q and A session with Lipinski. Of the later entries, I was particularly interested in perpetual access and digital archiving. I am glad that some organizations are trying to make sure that publishers' websites are archived, but institutions having perpetual access to a purchased or licensed product is just as important a topic.
I truly enjoyed the course, thank you!
Thursday, December 9, 2010
Unit 14: Perpetual Access
Perpetual access, not to be confused with digital preservation (Stemper and Barribeau), is the right to access the content of a licensed or subscription resource after the subscription ends. The reading for this week addresses the problems and concerns relating to perpetual access, as well as digital preservation practices. The issues stem from a basic question: once a library cancels online access to a journal, what happens to the content for which it has already paid? Perpetual access is a key component of preserving online/electronic resources, and the issue ties directly into the access versus ownership debate. As more institutions move toward a pay-per-view model, fewer institutions are concerned with perpetual access.
There are several reasons for this trend, including patron pressure to have the article now, the fact that a licensed online issue may be available at an earlier date than the print version, and, of course, financial considerations. Many libraries, as we are all aware, are canceling print versions of journals that are available online due to cost, and not just the price of the journals themselves, but the cost of maintaining adequate shelf space and staff. Another concern relates to license negotiations, which take more time and effort than many ER librarians have. Furthermore, what happens when the licensed content is purchased by a new publisher or vendor? The terms of agreement frequently change, possibly rendering any perpetual access clause useless. In addition, few aggregators offer perpetual access, which is too bad, as they are usually more affordable.
Some of the key issues, as outlined by Watson and others, include changing formats and data migration, along with a few possible solutions. Outdated software and hardware, as well as pricing, are big concerns. While it seems easy enough to say, hey, let's just preserve that content digitally, in actuality this is a big undertaking, and not just in the technical arena. There is the initial cost, including staff training, followed by the need to transfer the content to new media every ten years or so in order to avoid digital decay. And that is just for the content that is still accessible and not on outdated equipment, media, or software.
What about emails, blogs, and other website-type information? While there is the Wayback Machine, wikis and blogs, just to name a couple, are not archived well because they are frequently changing. Surprisingly, online reference resources also fall into this "frequently changing and therefore difficult to archive" category. Not everything can be preserved, so organizations like the Digital Preservation Coalition have developed tools to help librarians decide which types of electronic resources they should preserve.
Certain third-party organizations are key in helping to preserve electronic content, including JSTOR, LOCKSS, and Portico. JSTOR is a subscription service that provides access to back issues of journals, and subscriptions may be purchased on a collection basis. Each journal has a rolling embargo on its content, which varies based on the publisher. JSTOR is also considered very trustworthy and reliable among librarians and patrons alike. Since it only provides access to back issues, it works like a digital archive of journals, minus the most recent years. LOCKSS (Lots Of Copies Keep Stuff Safe) provides an archiving service in which participating libraries maintain a LOCKSS server that preserves copies of free and subscribed materials. Each server also serves as a backup for the others in the event of a catastrophe. There is a concern about the data format becoming obsolete, but so far LOCKSS has proven itself to be a reliable service.
Portico, also funded through the Mellon Foundation, archives content, but not at the library itself, on the assumption that libraries do not wish to house or maintain the appropriate infrastructure. The archive is "dark," meaning that participating libraries do not have access to the content unless the original source (the publisher) stops providing it. Furthermore, the data is normalized in order to ease migration to new, and possibly more stable, formats in the event that file formats change. Normalization is not meant to change the content, but to keep it accessible. According to Fenton, Portico is more concerned with preserving the intellectual content than the original webpage or digital source, as no one (apparently) is interested in preservation for preservation's sake. (Did Fenton really need to roll her eyes when she mentioned librarians and preservation while she discussed the pricing model?) PubMed and Google Book Search (formerly Google Print) also provide digitization services, although some may disagree as to the success of Google Books in particular.
Libraries are also banding together to help preserve and archive content. For example, consortia like the CIC may go in together on content, such as all of the Springer Verlag and Wiley publications, and house it in an off-site, environmentally controlled storage facility for preservation's sake. Institutional repositories are also a viable, yet underused, option for archiving content. Steering committees have been formed to work out possible solutions to the preservation of electronic content and its print versions, with national libraries taking the lead role in providing those services. In addition, due to pressure from libraries, publishers are taking a more active role in digital preservation by signing on with the three main third-party systems or by depositing into a national archive or repository.
Seadle looks to address the question, "what is a trusted repository in a digital environment?" While the same preservation vocabulary is used for digital and hard copies of materials, the processes and needs are quite different. The initial and primary focus is usually on technology, especially in the case of electronic journals, but according to Seadle this may come at the expense of social organization, the idea behind keeping rare books in vaults and locked cases. Nothing can damage a physical object like an unrestrained reader. Therefore, we need trust for successful archiving, whether physical or digital. While it may be easy to trust a museum to keep a rare manuscript safe, what about digital content? Perhaps we need a level of distrust in order to ensure a safe copy of the content. If we distrust the media, for example, we may create multiple copies in multiple formats housed in different locations, including off-site servers. Not bad for an untrustworthy format.
We should also distrust proprietary software and use more open source software and systems, as their developers tend not to be in it for the money. Furthermore, Seadle addresses integrity and authenticity in relation to digital objects. If a digital object is marked up or its format changes, it may lose integrity, which somehow devalues the item. Thus, preserving digital content as a bitstream is a more stable method, at least in terms of format, of maintaining an object's integrity. Authenticity is trickier: there usually is no single genuine copy of a digital work. However, using bitstreams with timestamps and checksums may help determine the authenticity of a digital object.
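To make the bitstream-plus-checksum idea concrete, here is a minimal Python sketch (my own illustration, not code from LOCKSS or any other repository): hash the file's bitstream, note when the hash was taken, and compare later to detect silent changes.

import hashlib
from datetime import datetime, timezone

def fixity_record(path):
    """Hash a file's bitstream and note when the hash was captured."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return {
        "path": path,
        "sha256": sha256.hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

def unchanged(path, earlier_record):
    """True if the bitstream still matches the earlier fixity record."""
    return fixity_record(path)["sha256"] == earlier_record["sha256"]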
So, should we trust LOCKSS? It is a community-supported project, as opposed to a corporate entity, and a network of collaborators using open source software; Seadle compares it to the trustworthy reputation of Linux, on which the majority of library servers run. LOCKSS archives content as a bitstream, without normalizing it. Even though normalization may help with possible future formatting issues, that process inherently loses some content through compression. Each bitstream has a timestamp that aids in determining the first, and therefore authentic, version of the content. LOCKSS also gets permission from publishers and content providers prior to crawling their websites for content. Basically, according to Seadle, we really should trust LOCKSS, as it is the best system around.
Unit 13: Reflections on the Worklife of an ER librarian
If I could sum up this week's readings in one sentence, it would be: don't panic, and take advantage of any training opportunities, including paying attention to your colleagues' skill sets and reaching out to other areas and departments. We had three different, but closely related, articles addressing the changing nature of the ER librarian. Albitz and Shelbern trace the evolution of the roles and responsibilities of the ER librarian, Griffin provides a great list of resources for help and education, and Affifi looks to the business world for a framework for working through the process of acquiring and using electronic resources.
Albitz and Shelbern began by looking at articles on position descriptions and job postings for ER librarians, then followed up with a survey conducted through the ARL membership group to determine how closely the actual tasks and responsibilities matched the job announcements, which also indicates how much ER management and implementation has changed since around 1997-98. The main focus of the electronic resource management literature was on the resource or database itself, not on staffing issues or on who should and can administer the systems.
However, the authors found two good articles, one of which analyzed position descriptions between 1990 and 2000 in which the words digital and/or electronic appeared in the title. The second article, by Fisher, looked at 298 position announcements from American Libraries, finding that ER positions tend toward public services and reference-type positions, as opposed to technical services. A third article, by Albitz, analyzed position announcements for academic libraries published in College & Research Libraries News between 1997 and 2001. ER librarians are found in many departments, and prior knowledge was not necessarily a prerequisite for getting the job. So, how do the actual tasks reflect the position descriptions?
Albitz and Shelbern found that while ER librarians typically worked in technical services, with or without also working in public services, they typically reported to public services, and their primary responsibilities had been reference and user education. However, public services were no longer highlighted in the descriptions, as the most common responsibility according to the surveys was ER coordination. The second most common activity was a combination (or equal frequency) of acquisitions, license negotiations, and technical support. This was highlighted by Judith's presentation, when she stated that she frequently needed to figure out why a link was not functioning properly. According to the surveys, however, reference and bibliographic instruction were down. Keep in mind that the survey was conducted in 2005, while the articles only went up to 2001. Everything has changed since then, at least technologically speaking, so this difference is not a surprise.
ERs are still relatively new, and while standards are being worked on (DLF, NISO), there are always delays, and it takes time to get the appropriate stakeholders to adopt new standards. Meanwhile, back at the library, it is not always clear which skills are necessary for the effective management and utilization of ERs. Affifi approaches this problem by looking at process mapping as a way to deal with these issues.
Similar to business process re-engineering (BPR), which has roots in total quality management (TQM), process mapping breaks down a process, such as acquiring and administering an ER, into steps with clear beginning and end points. Basically, it helps to analyze a process and how to utilize it more effectively. Several studies and examples indicate that process mapping can be useful to libraries and librarians. One process map's output can be the input of another, which shows how the processes are connected; a very useful tool in a complicated system. Affifi provides a sample flow chart and textual output of a possible process map for ER systems and products, beginning with the vendor contacting a library about acquiring a resource. What is interesting is that Affifi leaves out "routine product maintenance," which includes troubleshooting, from the process map; that is one of the more common tasks performed by ER librarians, according to the other articles. Still, libraries have long borrowed models from the business world, and I can see how this one could easily benefit an ER librarian, as illustrated by the case study.
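As a rough illustration of that linking idea, here is a small Python sketch; the step names, inputs, and outputs are invented for the example and are not taken from Affifi's actual map.

from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

trial_setup = Step("Arrange trial access",
                   inputs=["vendor offer"],
                   outputs=["trial URL", "trial end date"])
evaluation = Step("Evaluate resource",
                  inputs=["trial URL", "usage feedback"],
                  outputs=["purchase recommendation"])

def linked(a, b):
    """A link exists where some output of step a is an input of step b."""
    return any(out in b.inputs for out in a.outputs)

print(linked(trial_setup, evaluation))  # True: the trial URL connects the two steps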
Unit 12: E-Books: Audio and Text
Electronic and audio books sure do come in a variety of sizes, styles, and services. The two required articles for this week provided a nice summation and overview of what is available and what librarians should look for in selecting an appropriate e-book/audiobook service. I had not considered how the catalog might look or what it may contain, which I do realize is quite silly. The articles primarily covered Audible, OverDrive, NetLibrary, TumbleBooks, and TumbleTalkingBooks. An interesting note on the publishers in relation to the catalog is that some services use the date a book became digital to classify it as a frontlist title, when it may really have been originally published in 1957. OverDrive is an example of such a practice: when hard copy books are digitized and put into its catalog, the date of the digital copy is what the librarian and/or patron sees when searching.
Other considerations a collection development librarian may need to address when selecting a service include content characteristics, file formats and usability, whether to purchase or lease the content, integration with the OPAC, circulation models, sound quality, and administrative modules. In terms of content characteristics, a librarian should consider abridged vs. unabridged versions and their availability, as well as the narrator and narration style. Is the narrator the author or an actor, and can the patron search by narrator? Furthermore, is the narration done by one person, a celebrity, or a cast of characters? It was interesting to think of the digital audiobook as a cross between a motion picture and a print book.
Here are the basics for each of the systems that may also address more of the above considerations:
Audible: Some suppliers allow patrons to burn CDs, but not all, which is why it is a good idea to check the content suppliers prior to selecting a service. In terms of circulation, the library must have a player to circulate, and only one copy of a title may be checked out at a time. Audible uses a proprietary format, but it is compatible with most mp3 players and computers. There is no administrative module, and there are about 23,000 titles, although not all suppliers allow access to libraries.
OverDrive: the most usable format, with no proprietary formats aside from Bill Gates's, of course. Titles are delivered as DRM-protected WMA files in one-hour parts. Patrons may play the titles on a PC with the OverDrive Media Console and, depending on the supplier, may burn the files to a CD. The library can decide the circulation procedure and the number of titles a patron may check out at a time. OverDrive offers an unlimited-simultaneous-users plan as an option for best sellers and also offers leasing in 50-title increments. In addition, libraries can add a title to their collection prior to actually owning or leasing it in order to gauge demand. In terms of the catalog and other administrative tasks, OverDrive does provide MARC records, but at $1.50 a title. In terms of administrative modules, OverDrive is great. The purchased title report, in particular, will aid with collection development, as it states how many titles are checked out, the turnover, and the cost per circulation. The report is also available as a spreadsheet, where the librarian could sort by author, for instance. The website statistics report is also interesting, as it tracks what patrons are searching for and puts those results into a patron interest report. However, how is that done, and what about privacy? OverDrive also uses 82 subject headings, which come from the suppliers, and many books within the master collection have at least two headings. While this seems good, what about controlled vocabulary? Are there several headings that really could be one? Still, this seems a very good option for most libraries.
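Since cost per circulation is just spending divided by checkouts, here is a quick Python sketch of the calculation; the column names are assumptions for the example, not OverDrive's actual export format.

import csv

def cost_per_circulation(report_path):
    """Compute cost per checkout for each title in a report CSV with
    hypothetical 'title', 'cost', and 'checkouts' columns."""
    results = {}
    with open(report_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            checkouts = int(row["checkouts"])
            results[row["title"]] = (float(row["cost"]) / checkouts
                                     if checkouts else None)
    return results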
NetLibrary and Recorded Books: this service is primarily useful for academic libraries, although the interface is not the greatest, at least for the e-books. At the time of the articles, NetLibrary had about 850 titles, with at least 30 more added per month. A library pays an annual subscription fee, which provides access to all the titles; the fee is based on overall circulation and total population served and comes with free MARC records. In terms of formats, the audiobooks are DRM-protected WMA files that may be played on any media player and may also be transferred to other devices. Patrons, however, may not burn the files onto CDs. For some reason, patrons must have a personal account with NetLibrary in order to use its recorded books. Why? In any case, by doing so, patrons have access to a preview version of the title and two full versions: a web browser format and the media player version. NetLibrary has its site reviewed for accessibility by a third party and has representatives who support screen-reading software.
TumbleBooks and TumbleTalkingBooks: both are from Tumbleweed Press and provide animated audiobooks for children. The first option (TumbleBooks) provides online streaming of the titles using Flash, but no downloading, so it requires a computer. The titles are also available on CD-ROM. In terms of pricing, there is no consortial model, but there are group rates. As for TumbleTalkingBooks, a subset of TumbleBooks, libraries may acquire titles for about $20 per book and, in some instances, may return poorly circulating books. The titles still must be read online (no CD-ROM option), but a downloading option is being built. Patrons may not download or transfer the titles to other devices, including CDs. TumbleTalkingBooks provides unlimited simultaneous access, as well as some large-print options, up to 34-point font, and some highlighting options. Few of the providers have highlighting or placeholding features, so this was a nice touch.
I really liked Frank Kurt Cylke's three-page strategic plan for approaching audiobooks, which involved staying on top of the literature and consumer groups (i.e., the target audience). I also appreciate their forward-thinking approach of looking beyond the CD as a medium and using digital books on flash memory. I wonder what will happen now that Flash, the browser plug-in, may be on its way out in favor of HTML5. Still, as discussed by the gentleman following Frank (Dan?), I am intrigued by the player used for the flash memory system. It is about the size of a standard hardcover book, it has few, if any, movable parts, which makes it easily repairable and reliable, and it uses reusable formats. That last point is an important one, especially after learning more about the Playaway, which is essentially a disposable form of reading. Furthermore, how good will the sound quality of a disposable system be? Sound quality should be a key consideration when selecting a service or system.
Wednesday, December 8, 2010
Unit 11: Finding Content: Discovery Tools
A to Z lists
On the surface, this seems like a simple list where a library's electronic resources are organized alphabetically and provided online. Okay, so far so good: a staff member just updates the lists and links from time to time, no problem. But what happens when a journal title is listed in numerous licensed resources with varying dates of overall availability and full text access? The example provided by Weddle and Grogg on full text access to The Journal of Black Studies shows exactly what kind of complications may occur. That title is available from six vendors, with dates ranging from 1970 to 2004 depending on the vendor, and in the case of Sage Publications it was not necessarily clear which exact title was covered. It would be challenging, and probably a full-time job, to maintain that kind of list for every journal title, especially at a large institution. Therefore, many libraries are opting for an A-to-Z listing service like Serials Solutions to maintain their lists. With this type of service, an ERM librarian checks off the titles his or her library subscribes to, and the service populates the local knowledgebase, which is basically the back end for journal title and database searches.
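To picture what the knowledgebase is tracking, here is a toy Python sketch of a holdings entry for one journal; the provider names and coverage dates are made up and are not Serials Solutions' actual data model.

holdings = {
    "Journal of Black Studies": [
        {"provider": "Provider A", "fulltext_from": 1970, "fulltext_to": 1999},
        {"provider": "Provider B", "fulltext_from": 1995, "fulltext_to": 2004},
    ]
}

def providers_for(title, year):
    """Return the providers whose full-text coverage includes the requested year."""
    return [h["provider"]
            for h in holdings.get(title, [])
            if h["fulltext_from"] <= year <= h["fulltext_to"]]

print(providers_for("Journal of Black Studies", 1998))  # ['Provider A', 'Provider B']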
So what happens with the list? How might patrons Find It useful? While they may not directly see what is happening, patrons do experience the benefits of OpenURLs and link resolvers. Link resolvers, like SFX from Ex Libris, use the information from the knowledgebase and pull together the connections among an institution's e-resources in order to point the patron to an appropriate copy of the article he or she wishes to access. So, a link resolver takes the information about the source from the OpenURL and redirects the patron to the article, the target. An OpenURL is a structure for sharing metadata about a resource using particular standards, such as NISO Z39.88. Basically, an OpenURL is like a MARC record with the added feature of dynamic (i.e., not static) linking: the OpenURL recognizes the user, is sent to that user's link resolver and knowledgebase, the metadata in the knowledgebase is compared against the query for a match, and the result is linked back to the user. This is what we see on the front end in Find It.
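Here is a minimal sketch of what an OpenURL 1.0-style link can look like when the citation metadata is packed into key-value pairs; the resolver base URL is a hypothetical placeholder for an institution's own Find It address.

from urllib.parse import urlencode

RESOLVER_BASE = "https://resolver.example.edu/findit"  # hypothetical link resolver

citation = {
    "url_ver": "Z39.88-2004",                       # the OpenURL standard version
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # "this describes a journal article"
    "rft.jtitle": "Journal of Black Studies",
    "rft.volume": "28",
    "rft.issue": "4",
    "rft.date": "1998",
}

# The link resolver reads these keys, checks the knowledgebase for matching
# full-text coverage, and redirects the user to an appropriate copy (the target).
print(f"{RESOLVER_BASE}?{urlencode(citation)}")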
How are the appropriate objects linked back to the user? Magical metadata fairies? Sometimes it appears so. In actuality, there are these things called Digital Object Identifiers (DOIs), registered through an agency like CrossRef. A DOI is a persistent unique identifier and can be used as, or within, the metadata of the OpenURL, which according to Brand is becoming more common. A publisher assigns an object a DOI and then deposits that DOI, with the corresponding URL, into the CrossRef database. CrossRef connects the DOI with the URL, providing the user with an appropriate copy of the desired resource. However, it should be noted that this works only for items on publishers' websites. If the resource is from an aggregator or an Open Access source, then a link resolver is still needed.
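And here is a hedged sketch of DOI resolution from the user's side: asking the DOI proxy to follow a DOI to wherever the publisher registered it. The example DOI is hypothetical, and the publisher's site may of course still require a subscription once you arrive.

import urllib.request

def resolve_doi(doi):
    """Ask the DOI proxy to resolve a DOI and return the URL it lands on."""
    with urllib.request.urlopen(f"https://doi.org/{doi}") as response:
        return response.geturl()  # urllib follows the proxy's redirects

# Example (hypothetical identifier): resolve_doi("10.1234/example") would return
# the publisher URL deposited for that DOI, if it exists.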
What about searching across multiple e-resources, you might ask? Enter federated searching. While this is still relatively new, the idea is that by using a federated searching tool, librarians and users will be able to search across multiple e-resources at once. However, there are varying standards and usability issues. To that end, NISO started the Metasearch Initiative in order to bring together vendors, libraries, and others to create a set of standards and protocols that would make federated searching more productive. Recently, UW–Madison added a federated searching function (Xerxes?) to its database page, which I have found quite useful.
Unit 10: Data standards
Wow, do we have a lot of standards; we almost need an ERM system just to keep them straight. Pesch provides some nice charts that aid in understanding the standards and how they are utilized. Here is the breakdown, based on the life cycle of electronic resources according to Pesch:
Acquire:
ONIX SPS
ONIX SOH
ONIX PL
ICEDIS
ERMI license terms
SERU
Access:
Z39.2 (MARC)
ONIX SOH
ICEDIS
Z39.50
MXG
Z39.91
Z39.88 (OpenURL)
Administer:
ERMI license terms
ONIX PL
ONIX SOH
Z39.2
TRANSFER
ONIX SRN
Support:
None
Evaluate:
COUNTER
Z39.93
SUSHI
ONIX SPS
Renew:
None, usually built-in with acquisition
So why do we need all of these standards? Well, since the way libraries acquire and support electronic resources is complicated, with multiple vendors and platforms, having standards should create a higher level of interoperability. Now, if we could get all of the vendors and publishers on board with standards, and perhaps actually using the same standards (I know that might be asking a lot), we could really get somewhere. However, as addressed by Carpenter of NISO, creating the standards and best practices, like SUSHI and SERU, is the easy part; getting institutions, vendors, and publishers to use them can be challenging.
As Yue argues, and I agree, we must have at least a basic understanding of the common standards and initiatives for each phase of the life cycle (selection, acquisition, administration, bibliographic/access control, and assessment), which include DLF ERMI, NISO/EDItEUR (ONIX), XML-based schemas, and Project COUNTER.
The DLF ERMI was established in October 2002, following a Web Hub in 2001 and a workshop on ERMS standards in May 2002. That workshop was attended by librarians, agents, and vendors, so basically all of the appropriate stakeholders. The aim of the project and workshop was to address the development of ERMS through interrelated documents defining functional requirements and data standards that could be used by the stakeholders. They came up with 47 functional requirements for ER management, charts outlining the processes associated with managing ERs through their life cycles, an ERD (ha, I know what that is!) of the relationships between the entities of an ERMS, and a report on how to apply XML. Overall, the final report received very positive feedback, but it also identified areas for further research: consortium support and functionality, usage data, and data standards. Phase 2 (2005) focuses on data standards, license expression, and usage data. Another interesting tidbit from this article, echoed in many of our readings, is that MARC does not work well for ERs and we really should just be using XML, so I am happy to be taking that course next term.
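As a toy illustration of why XML suits this kind of data better than fixed MARC fields, here is a short Python sketch that expresses a few license terms as named, nested elements; the element names are invented for the example and are not the ERMI or ONIX-PL schema.

import xml.etree.ElementTree as ET

license_terms = ET.Element("license", attrib={"resource": "Example Journal Package"})
ET.SubElement(license_terms, "authorizedUsers").text = "current students, faculty, and staff"
ET.SubElement(license_terms, "interlibraryLoan").text = "permitted via secure electronic transmission"
ET.SubElement(license_terms, "perpetualAccess").text = "yes, for subscribed volumes"

print(ET.tostring(license_terms, encoding="unicode"))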
Tuesday, November 30, 2010
Unit 9: Electronic resource management systems: vendors and functionalities
For this week, we explored more reasons why electronic resource management (ERM) systems exist, their desired qualities and functions, and how libraries may acquire such systems. No problem, right? Well, as per usual, this is more complicated than one might imagine, depending on that person's imagination, of course. Both Maria Collins and Margaret Hogarth provide a nice introduction to ERM systems and the key players. With the growing number of electronic resources and the decline of title-by-title purchasing, librarians have found that they need some kind of system to keep track of when to start license negotiations, to easily generate A-to-Z lists, and to integrate those functions, among others, into the Integrated Library System (ILS) or Public Access Management System (PAMS). In short, it is up to the library to make any system or authentication changes and implementations required by a license while the vendor does nothing, which is why we have ERMs.
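As a tiny example of the bookkeeping an ERM system automates, here is a Python sketch that flags licenses whose renewal dates are coming up so negotiations can start on time; the titles, dates, and 90-day lead window are all invented for the illustration.

from datetime import date, timedelta

subscriptions = [
    {"title": "Example Database A", "renewal_date": date(2011, 1, 15)},
    {"title": "Example Journal Package B", "renewal_date": date(2011, 6, 30)},
]

def due_for_renewal(subs, today, lead_days=90):
    """Return titles whose renewal date falls within the lead window."""
    window_end = today + timedelta(days=lead_days)
    return [s["title"] for s in subs if today <= s["renewal_date"] <= window_end]

print(due_for_renewal(subscriptions, today=date(2010, 12, 1)))  # ['Example Database A']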
So how did this come about, and what do librarians want in an ERM system? An effective ERM should have one "point of maintenance," which most ILSs are capable of, or can be programmed to handle, and should streamline the workflow, including the maintenance of spreadsheets and databases (license status, etc.). Many libraries created their own ERM systems, starting in 1998 with open source systems, and eventually began to work in a more collaborative environment. In 2002, DLF ERMI began in order to look at "the functional requirements desired in an ERM, identify data elements and common definitions for consistency, provide potential XML schemas, and identify and support data standards" (Collins 184). The results of this work were published in a 2004 report outlining the preferred standards, features, and design of an ERM, including data standards, vendor costs, and system migrations. Some of the preferred standards include EDItEUR, ONIX, SUSHI, and COUNTER. The final three are also mentioned in several Against the Grain articles on ERM systems as positive examples of increased cooperation between libraries, vendors, and publishers. Moreover, as mentioned by Hogarth, one of the primary concerns for librarians is the issue of standards.
Since there are several choices when determining which ERM system to choose for your library, how do you choose? Many librarians want flexibility, little manual data entry, good standards in naming conventions, limited staff training and time, interoperability, and a good link resolver. Another consideration is whether to use a third party or to see if your ILS has an integrated ERM system, such as Ex Libris's Verde. However, Collins warns against, or at least advises being wary of, using the same vendor for both the ILS and the ERM system, as the library may end up with an under-supported system. If looking at a one-size-fits-all type of system, make sure to know each vendor's track record on development. A single-vendor system should have a knowledgeable title management database, a link resolver, and A-to-Z listing services. For third-party systems, such as Serials Solutions, how well will the ERM integrate with the ILS? This is the primary concern. A third-party system should be able to easily link to and manage subscriptions, including aggregated resources. There are also subscription agents, such as EBSCO. These systems may be acceptable, but be careful if you use another subscription agent or another A-to-Z listing service or link-resolving tool. If using an open source or home-grown system, the library must be able to sustain its own system resources, but it can tailor the ERM system to local needs.
All of the options involve dedicated staff and typically include the need to enter some data manually. For example, with EBSCO's system, one of the primary complaints in the Against the Grain articles is that it does not support or update non-EBSCO data efficiently. Serials Solutions 360 had difficulty mapping data to and from the link resolver, and it was difficult to get the COUNTER data. Collins suggests that a library conduct an internal and external needs assessment prior to making the final decision. Considerations should include the type of library, the mission statement, the size of the library and collection, technological infrastructure, cost versus need, existing tools and the ILS, and interoperability. In short, what does your library need, and how will the system resolve problems? Hogarth also provides a detailed look at various systems, including open source systems, as well as numerous bullet points on what to consider when choosing a system.