Thursday, December 16, 2010

Final Reflections

I am not sure I could actually select a “most important earlier entry”, but I can say that I learned a great deal about electronic resources, licensing, and ERMs over the course of the semester. One of the ways I have noticed a change in my thinking is when I am looking for articles using Find It, or when I see the “SFX Admin” link on the library staff page. I feel that I now have a fairly clear understanding of how link resolvers, A-to-Z listing services, and OpenURLs function. To that end, I could state that my entry for week 11 is one of my more important posts. It was in that post that I explored content discovery tools and really began to put together the bigger picture of how Find It / SFX and Ex Libris work. Related to that post are those from weeks 8 and 9, where I discussed TPMs and the different types of ERM systems. After the readings for week 11 and two presentations on SFX and Find It, the readings from weeks 8 and especially 9 began to make more sense. Although week 8 is not exactly about ERM systems and their functions, knowing how systems determine who is an authorized user and how authentication works helped me further understand ERM systems.

In general, I found the second half of the course to be less intuitive than the first half. This is probably because I have a background in electronic resources and had studied copyright and Fair Use prior to this course. Still, it was helpful to further develop my understanding of the earlier units before diving into ERM systems and functions. It would be difficult to discuss how ERM systems work without a base knowledge of licensing and copyright issues. Of particular importance is the TEACH Act, which I knew very little about, so that unit was quite informative, particularly during the Q and A session with Lipinski. Of the later entries, I was particularly interested in perpetual access and digital archiving. I am glad that some organizations are trying to make sure that publishers' websites are archived, but ensuring that institutions have perpetual access to a purchased or licensed product remains an important topic.

I truly enjoyed the course, thank you!

Thursday, December 9, 2010

Unit 14: Perpetual Access

Perpetual access, not to be confused with digital preservation (Stemper and Barribeau), is the right to access the content of a licensed or subscription resource. The readings for this week address the problems and concerns relating to perpetual access, as well as digital preservation practices. The issues stem from a simple question: once a library cancels online access to a journal, what happens to the content for which it has already paid? Perpetual access is a key component of the preservation of online/electronic resources. This issue ties directly into the access versus ownership debate. As more institutions move toward a pay-per-view model, fewer institutions are concerned with perpetual access.

There are several reasons for this trend, including patron pressure to have the article now; the availability of content, where a licensed online issue of a journal may be available earlier than the print version; and, of course, financial considerations. Many libraries, as we are all aware, are canceling print versions of journals that are available online due to cost, which includes not just the price of the journals themselves, but the cost of maintaining adequate shelf space and staff. That staffing concern also extends to license negotiations, which take more time and effort than many ER librarians have. Furthermore, what happens when the licensed content is purchased by a new publisher or vendor? Frequently, the terms of agreement may change, possibly rendering any perpetual access clause useless. Plus, few aggregators offer perpetual access, which is too bad, as they are usually more affordable.

Some of the key issues, as outlined by Watson and others, include changing formats and data migration, along with a few possible solutions. Outdated software and hardware, as well as pricing, are big concerns. While it seems easy enough to say, hey, let's just preserve that content digitally, in actuality this is a big undertaking, and not just in the technical arena. There is the initial cost, including staff training, followed by a need to transfer the content to new media every ten years or so in order to avoid digital decay. And that is just for the content that is still accessible and not on outdated equipment, media, or software.

What about emails, blogs, and other website-type information? While there is the Wayback Machine, wikis and blogs, just to name a couple, are not archived well because they are frequently changing. Surprisingly, online reference resources also fall into this “frequently changing and therefore difficult to archive” category. Not everything can be preserved, so tools, like those from the Digital Preservation Coalition, have been developed to help librarians decide which types of electronic resources they should preserve.

Certain third-party organizations are key in helping to preserve electronic content. These include JSTOR, LOCKSS, and Portico. JSTOR is a subscription service that provides access to back issues of journals, and subscriptions may be purchased on a collection basis. Each journal has a rolling embargo on the content, which varies by publisher. JSTOR is also considered very trustworthy and reliable among librarians and patrons alike. Since it only provides access to back issues, it is like a digital archive of journals, minus the most recent years. LOCKSS (Lots Of Copies Keep Stuff Safe) provides an archiving service where participating libraries maintain a LOCKSS server that preserves copies of free and subscribed materials. Each server also serves to back up the others in the event of catastrophe. There is a concern about the data format becoming obsolete, but so far, LOCKSS has proven itself to be a reliable service.

Portico, also funded through the Mellon Foundation, archives content, but not at the library itself, on the assumption that libraries do not wish to house or maintain the appropriate infrastructure. The archive is “dark”, meaning that participating libraries do not have access to the content unless the original source (the publisher) stops providing it. Furthermore, the data is normalized in order to help with migration to new, and possibly more stable, formats in the event that file formats change. Normalization is not meant to change the content, but to keep it accessible. According to Fenton, Portico is more concerned with preserving the intellectual content than the original webpage or digital source, as no one (apparently) is interested in preservation for preservation’s sake. (Did Fenton really need to roll her eyes when she mentioned librarians and preservation while she discussed the pricing model?) PubMed and Google Book Search (formerly Google Print) also provide digitization services, although some may disagree as to the success of Google Books in particular.

Libraries are also banding together to help preserve and archive content. For example, consortia like the CIC may go in together on content, such as all of the Springer Verlag and Wiley publications, and house it in an off-site, environmentally controlled storage facility for preservation's sake. Institutional repositories are also a viable, yet underused, option for archiving content. Steering committees have been started to determine possible solutions to the preservation of electronic content and of the print versions, with national libraries taking the lead role in providing those services. In addition, due to pressure from libraries, publishers are taking a more active role in digital preservation by signing on with the three main third-party systems or by depositing into a national archive or repository.

Seadle looks to address the question, “what is a trusted repository in a digital environment?” While the same preservation vocabulary is used for digital and hard copies of materials, the processes and needs are quite different. The initial and primary focus is usually on technology, especially in the case of electronic journals. But according to Seadle, this may come at the expense of social organization, the thinking behind keeping rare books in vaults and locked cases: nothing can damage a physical object like an unrestrained reader. Therefore, we need trust for successful archiving, whether physical or digital. While it may be easy to trust a museum to keep a rare manuscript safe, what about digital content? Perhaps we need a healthy level of distrust in order to ensure a safe copy of the content. If we distrust the media, for example, we may create multiple copies in multiple formats housed in different locations, including off-site servers. Not bad for an untrustworthy format.

We should also distrust proprietary software and use more open source software and systems, as their developers tend not to be in it for the money. Furthermore, Seadle addresses integrity and authenticity in relation to digital objects. If a digital object is marked up or its format changes, it may lose integrity, which devalues the item. Thus, preserving digital content as a bitstream is a more stable method, at least in terms of format, of maintaining an object's integrity. Authenticity is trickier: there usually is no single genuine copy of a digital work. However, using bitstreams with timestamps and checksums may help determine the authenticity of a digital object.
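To make the checksum-and-timestamp idea concrete, here is a minimal sketch of my own (not anything from Seadle, and the file name is a made-up placeholder): record a fingerprint of the bitstream when it is ingested, then re-check it later to see whether the copy has changed.

```python
import hashlib
from datetime import datetime, timezone

def fingerprint(path: str) -> dict:
    """Compute a SHA-256 checksum of a file's bitstream and note when it was taken."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return {"checksum": sha.hexdigest(),
            "timestamp": datetime.now(timezone.utc).isoformat()}

def is_unchanged(path: str, original: dict) -> bool:
    """An archived copy is presumed authentic if its checksum still matches the original."""
    return fingerprint(path)["checksum"] == original["checksum"]

# Hypothetical usage: fingerprint at ingest, verify later.
# record = fingerprint("issue_1957_vol3.pdf")
# print(is_unchanged("issue_1957_vol3.pdf", record))
```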

So, should we trust LOCKSS? It is a community-supported project, as opposed to a corporate entity. It is a network of collaborators using open source software, and Seadle compares it to the apparent trustworthiness of Linux, on which the majority of library servers run. LOCKSS archives content as a bitstream, without normalizing it. Even though normalization may help with possible future formatting issues, that process inherently loses some content through compression. Each bitstream has a timestamp that aids in determining the first, and therefore authentic, version of the content. LOCKSS also gets permission from publishers and content providers prior to crawling their websites for content. Basically, according to Seadle, we really should trust LOCKSS, as it is the best system around.

Unit 13: Reflections on the Worklife of an ER librarian

If I could sum up this week's readings in one sentence, it would be: don't panic, and take advantage of any training opportunities, including paying attention to your colleagues' skill sets and reaching out to other areas and departments. We had three different, but very related, articles addressing the changing nature of the ER librarian. Albitz and Shelbern trace the evolution of the roles and responsibilities of the ER librarian, Griffin provides a great list of resources for help and education, and Affifi looks to the business world for a framework for working through the process of acquiring and using electronic resources.

Albitz and Shelbern began by looking at articles on position descriptions and job postings for ER librarians, following that up with a survey conducted through the ARL membership group to determine how closely the actual tasks and responsibilities matched the job announcements, which also indicates how much ER management and implementation has changed since around 1997-98. The main focus of the earlier electronic resource management articles was on the resource or database itself, not on staffing issues or on who should and can administer the systems.

However, the authors found two good articles: one in which the authors analyzed position descriptions between 1990 and 2000 where the words digital and/or electronic appeared in the title. The second article, by Fisher, looked at 298 position announcements from American Libraries, finding that ER positions tend toward public services and reference-type positions, as opposed to technical services. A third article, by Albitz herself, analyzed position announcements for academic libraries published in College & Research Libraries News between 1997 and 2001. ER librarians are found in many departments, and prior knowledge was not necessarily a prerequisite for getting the job. So, how do the actual tasks reflect the position descriptions?

Albitz and Shelbern found that while ER librarians typically worked in technical services, with or without also working in public services, they typically reported to public services, and their primary responsibilities had been reference and user education. However, public services were no longer highlighted in the descriptions; the most common responsibility according to the surveys was ER coordination. The second most common activity was a combination (or equal frequency) of acquisitions, license negotiations, and technical support. This was highlighted by Judith’s presentation, when she stated that she frequently needed to figure out why a link was not functioning properly. Reference and bibliographic instruction, by contrast, were down in the surveys. Keep in mind that the survey was conducted in 2005, while the articles only went up to 2001. Everything has changed since then, at least technologically speaking, so this difference is not a surprise.

ERs are still relatively new and standards are being worked on (DLF, NISO), but there are always delays, and it takes time to get the appropriate stakeholders to adopt new standards. Meanwhile, back at the library, it is not always clear which skills are necessary for the effective management and utilization of ERs. Therefore, Affifi approaches this problem by looking at process mapping as a way to deal with these issues.

Similar to business process re-engineering (BPR), which has roots in total quality management (TQM), process mapping breaks down a process, such as acquiring and administering an ER, into steps with clear beginning and end points. Basically, it helps to analyze a product or process and how to utilize it more effectively. Several studies and examples indicate that process mapping can be useful to libraries and librarians. One process map's output can be the input of another, which shows how the processes are connected; a very useful tool in a complicated system. Affifi provides a sample flow chart and textual output of a possible process map for ER systems and products, beginning with the vendor contacting a library about acquiring a resource. What is interesting is that Affifi leaves “routine product maintenance”, which includes troubleshooting, out of the process mapping. That is one of the more common tasks performed by ER librarians, according to the other articles. Still, libraries have been borrowing models from the business world, and I can see how this model could easily benefit an ER librarian, as illustrated by the case study.

Unit 12: E-Books: Audio and Text

Electronic and audio books sure do come in a variety of sizes, styles, and services. The two required articles for this week provided a nice summation and overview of what is available and what librarians should look for in selecting an appropriate e-book/audiobook service. I had not considered how the catalog might look or what it may contain, which I do realize is quite silly. The articles primarily covered Audible, OverDrive, NetLibrary, TumbleBooks, and TumbleTalkingBooks. An interesting note on the publishers in relation to the catalog is that some services use the date a book became digital to classify it as a frontlist title, when really, it may have been originally published in 1957. OverDrive is an example of such a practice: when hard copy books are digitized and put into its catalog, the date of the digital copy is what the librarian and/or patron sees when searching.

Other considerations a collection development librarian may need to address when selecting a service include content characteristics, file formats and usability, whether to purchase or lease the content, integration with the OPAC, circulation models, sound quality, and administrative modules. In terms of content characteristics, a librarian should consider abridged vs. unabridged versions and their availability, and the narrator/narration style. Is the narrator the author or an actor, and can the patron search by narrator? Furthermore, is the narration done by one person, a celebrity, or a cast of characters? It was interesting to think of the digital book as a cross between a motion picture and a print book.
Here are the basics for each of the systems that may also address more of the above considerations:

Audible: Some suppliers allow patrons to burn CDs, but not all, which is why it is a good idea to check the content suppliers prior to selecting a service. In terms of circulation, the library must have a player to circulate, and only one copy of a title may be checked out at a time. Audible uses a proprietary format, but it is compatible with most mp3 players and computers. There are no administrative modules, and there are about 23,000 titles, but not all suppliers allow access to libraries.

OverDrive: the most usable formats, with no proprietary formats aside from Bill Gates', of course. Titles are delivered in DRM-protected WMA files in 1-hour parts. Patrons may play the titles on a PC with the OverDrive Media Console and, depending on the supplier, may burn the files to a CD. The library can decide the circulation procedure and the number of titles a patron may check out at a time. OverDrive offers an unlimited simultaneous users plan as an option for best sellers and also offers leasing in 50-title increments. In addition, libraries can add a title to their collection prior to actually owning or leasing it in order to gauge demand. In terms of the catalog and other administrative tasks, OverDrive does provide MARC records, but at $1.50 a title. In terms of administrative modules, OverDrive is great. The purchased titles report, in particular, will aid with collection development, as it states how many titles are checked out, the turnover, and the cost per circulation. The report is also available as a spreadsheet, where the librarian could sort by author, for instance. The web site statistics report is also interesting, as it tracks what patrons are searching and puts those results into a patron interest report. However, how is that done, and what about privacy? OverDrive also uses 82 subject headings, which come from the suppliers, and many books within the master collection have at least two headings. While this seems good, what about controlled vocabulary? Are there several headings that really could be one? Still, this seems a very good option for most libraries.

netLibrary and Recorded Books: this service is primarily useful for academic libraries, although the interface is not the greatest, at least for the eBooks. At the time of the articles, netLibrary had about 850 titles, with at least 30 more added per month. A library pays an annual subscription fee, which provides access to all the titles. The fee is based on overall circulation and total population, and comes with free MARC records. In terms of formats, the audio books are DRM-protected WMA files that may be played on any media player and may also be transferred to other devices. Patrons, however, may not burn the files onto CDs. For some reason, patrons must have a personal account with netLibrary in order to use their recorded books. Why? In any case, by doing so, patrons have access to a preview version of the title and two full versions: a web browser format and the media player version. netLibrary has its site reviewed for accessibility by a third party and has representatives who support screen-reading software.

TumbleBooks and TumbleTalkingBooks: both are from Tumbleweed Press and provide animated audio books for children. The first option (TumbleBooks) provides online streaming of the titles using Flash, but no downloading, so it requires a computer. The titles are also available on CD-ROM. In terms of pricing, there is no consortial model, but there are group rates. As for TumbleTalkingBooks, a subset of TumbleBooks, libraries may acquire titles for about $20 per book, yet may return poorly circulating books in some instances. The titles still must be read online (no CD-ROM option), but a downloading option is being built. Patrons may not download or transfer the titles to other devices, including CDs. TumbleTalkingBooks provides unlimited simultaneous access, as well as some large print options, up to 34-point font, and some highlighting options. Few of the providers have highlighting or placeholding features, so this was a nice touch.

I really liked Frank Kurt Cylke’s 3-page strategic plan for approaching audio books, which involved staying on top of the literature and consumer groups (i.e., the target audience). I also appreciate their forward-thinking approach of looking beyond the CD as a medium and using digital books on Flash memory. I wonder what will happen now that Flash may be on its way out in lieu of HTML 5. Still, as discussed by the gentleman following Frank (Dan?), I am intrigued by the player used for the Flash memory system. It is about the size of a standard hard cover book, has few, if any, movable parts, which equates to easily repairable and reliable, and uses reusable formats. That last point is an important one, especially after learning more about the Playaway, which is essentially a disposable form of reading. Furthermore, how good will the sound quality of a disposable system be? Sound quality should be a key consideration when selecting a service or system.

Wednesday, December 8, 2010

Unit 11: Finding Content: Discovery Tools

A to Z lists
On the surface, this seems like a simple list where a library's electronic resources are organized alphabetically and provided online. Okay, so far so good. A staff member may just update the lists and links from time to time, no problem. But what happens when a journal title is listed in numerous licensed resources with varying dates of overall availability and full text access? The example provided by Weddle and Grogg on full text access to The Journal of Black Studies is a prime example of the kind of complications that may occur. That title is available from six vendors, although it was not necessarily clear which exact title in the case of Sage Publications, with dates ranging from 1970 to 2004, depending on the vendor. It would be challenging, and probably a full-time job, to maintain that kind of list for every journal title, especially at a large institution. Therefore, many libraries are opting for an A-to-Z listing service like Serials Solutions in order to maintain their lists. With this type of service, an ERM librarian checks off the titles his or her library subscribes to, and the service populates the local knowledgebase, which is basically the back end for journal title and database searches.
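To see why a knowledgebase earns its keep, here is a rough sketch, entirely my own and not how Serials Solutions actually works, of merging several vendors' coverage ranges for one title into the combined span a patron would see on the list. The year ranges are invented apart from the 1970-2004 spread mentioned above.

```python
def merge_coverage(ranges):
    """Merge per-vendor (start_year, end_year) holdings into combined coverage spans."""
    spans = []
    for start, end in sorted(ranges):
        if spans and start <= spans[-1][1] + 1:   # overlaps or abuts the previous span
            spans[-1][1] = max(spans[-1][1], end)
        else:
            spans.append([start, end])
    return [tuple(s) for s in spans]

# Hypothetical holdings for one journal title across several vendors.
holdings = [(1970, 1999), (1995, 2004), (1998, 2004), (2000, 2004)]
print(merge_coverage(holdings))   # [(1970, 2004)]
```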

So what happens with the list? How might patrons Find It useful? While they may not directly see what is happening, patrons do experience the benefits of OpenURLs and link resolvers. Link resolvers, like SFX from Ex Libris, use the information from the knowledgebase and pull together the connections among an institution's e-resources in order to point the patron to an appropriate copy of the article he or she wishes to access. So, a link resolver takes the information about the source from the OpenURL and redirects the patron to the article, the target. An OpenURL is a structure for sharing metadata about a resource using a particular standard (Z39.88). Basically, an OpenURL is like a MARC record with the added feature of dynamic (i.e., not static) linking: the OpenURL recognizes the user, is sent to that user's link resolver and knowledgebase, the metadata in the knowledgebase is compared for a match to the query, and the result is linked back to the user. This is what we see on the front end in Find It.
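As a small illustration of what “sharing metadata in a URL” amounts to, here is a sketch of assembling an OpenURL-style query string. The key names follow the common key/encoded-value style associated with Z39.88, but this is my own toy example with a made-up resolver address, not a guaranteed-valid request for any particular SFX installation.

```python
from urllib.parse import urlencode

def build_openurl(resolver_base: str, metadata: dict) -> str:
    """Assemble an OpenURL-style query: citation metadata becomes key/value pairs
    that the institution's link resolver matches against its knowledgebase."""
    params = {"url_ver": "Z39.88-2004",
              "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal"}
    params.update(metadata)
    return f"{resolver_base}?{urlencode(params)}"

# Hypothetical resolver address and citation.
print(build_openurl("https://sfx.example.edu/local",
                    {"rft.jtitle": "Journal of Black Studies",
                     "rft.volume": "31", "rft.spage": "3", "rft.date": "2000"}))
```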

How are the appropriate objects linked back to the user? Magical metadata fairies? Sometimes it appears so. In actuality, there are these things called Digital Object Identifiers (DOIs), registered through an agency like CrossRef. A DOI is a persistent unique identifier and can be used as, or within, the metadata of the OpenURL, which according to Brand is becoming more common. A publisher will assign an object a DOI and then deposit that DOI with the corresponding URL into the CrossRef database. CrossRef connects the DOI with the URL, providing the user with an appropriate copy of the desired resource. However, it should be noted that this only works for items at publishers' websites. If the resource is from an aggregator or an Open Access source, then a link resolver is still needed.
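In practice, resolving a DOI just means asking the doi.org resolver (which sits in front of registries like CrossRef) where the identifier currently points. This is my own minimal sketch, not the workflow Brand describes, and the DOI in the comment is only a placeholder.

```python
import urllib.request

def resolve_doi(doi: str) -> str:
    """Ask the doi.org resolver where a DOI currently points and return the final URL."""
    req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    with urllib.request.urlopen(req) as resp:   # redirects are followed automatically
        return resp.geturl()

# Placeholder DOI for illustration only.
# print(resolve_doi("10.1000/xyz123"))
```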

What about searching across multiple e-resources, you might ask? Enter federated searching. While this is still relatively new, the idea is that by using a federated searching tool, librarians and users will be able to search across multiple e-resources at once. However, there are varying standards and usability issues. To that end, NISO started the Metasearch Initiative in order to bring together vendors, libraries, and others to create a set of standards and protocols that would make federated searching more productive. Recently, UW – Madison added a federated searching function (Xerxes?) to its database page, which I have found quite useful.
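Conceptually, a federated search tool just fans one query out to several sources and merges what comes back. Here is a toy sketch of that idea, with made-up stand-in source functions; it is not how the NISO Metasearch protocols or Xerxes actually work.

```python
def search_catalog(query):
    """Stand-in for one source; a real tool would speak Z39.50, SRU, or a vendor API."""
    return [{"title": "Example Record A", "source": "catalog"}]

def search_database(query):
    return [{"title": "Example Record A", "source": "database"},
            {"title": "Example Record B", "source": "database"}]

def federated_search(query, sources):
    """Send one query to every source, then merge and de-duplicate by title."""
    seen, merged = set(), []
    for source in sources:
        for record in source(query):
            if record["title"] not in seen:
                seen.add(record["title"])
                merged.append(record)
    return merged

print(federated_search("black studies", [search_catalog, search_database]))
```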

Unit 10: Data standards

Wow, do we have a lot of standards; we almost need an ERM system just to keep them straight. Pesch provides some nice charts that can aid in understanding the standards and how they are utilized. Here is the breakdown, based on the life cycle of electronic resources according to Pesch:
Acquire:
ONIX SPS
ONIX SOH
ONIX PL
ICEDIS
ERMI license terms
SERU

Access:
Z39.2 (MARC)
ONIX SOH
ICEDIS
Z39.50
MXG
Z39.91
Z39.88 (OpenURL)

Administer:
ERMI license terms
ONIX PL
ONIX SOH
Z39.2
TRANSFER
ONIX SRN

Support:
None

Evaluate:
COUNTER
Z39.93
SUSHI
ONIX SPS

Renew:
None, usually built-in with acquisition

So why do we need all of these standards? Well, since the way in which libraries acquire and support electronic resources is complicated, with multiple vendors and platforms, having standards should create a higher level of interoperability. Now, if we can get all of the vendors and publishers on board with standards, and perhaps actually using the same standards (I know that might be asking a lot), we could really get somewhere. However, as addressed by Carpenter of NISO, creating the standards and best practices, like SUSHI and SERU, is the easy part. Getting institutions, vendors, and publishers to use them can be challenging.
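The payoff of a standard like COUNTER is that usage numbers from different vendors become comparable and machine-readable. As a sketch of what that enables, here is a toy reader for a simplified COUNTER-style CSV; the column names and file name are my simplification, not the exact JR1 layout.

```python
import csv
from collections import defaultdict

def total_requests(report_path: str) -> dict:
    """Sum full-text requests per journal from a simplified COUNTER-style CSV.
    Columns assumed: Journal, Month, FullTextRequests (not the real JR1 layout)."""
    totals = defaultdict(int)
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["Journal"]] += int(row["FullTextRequests"])
    return dict(totals)

# usage = total_requests("vendor_usage_2010.csv")   # hypothetical file name
```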
As Yue argues, and I agree, we must have at least a basic understanding of the common standards and initiatives for each phase of the life cycle (selection, acquisition, administration, bibliographic/access control, and assessment), which include DLF ERMI, NISO/EDItEUR (ONIX), XML-based schemas, and Project COUNTER.

The DLF ERMI was established in October 2002, following a Web Hub from 2001 that resulted in a workshop on ERMS standards in May 2002. The workshop was attended by librarians, agents, and vendors, so basically all of the appropriate stakeholders. The aim of the project and workshop was to support the development of ERM systems through interrelated documents defining functional requirements and data standards that could be used by those stakeholders. They came up with 47 functional requirements for ER management, charts outlining the processes associated with managing ERs through their life cycles, an ERD (ha, I know what that is!) of the relationships between the entities of an ERMS, and a report on how to apply XML. Overall, the final report received very positive feedback, but it also identified areas for further work: consortium support and functionality, usage data, and data standards. Phase 2 (2005) will focus on data standards, license expression, and usage data. Another interesting tidbit from this article that is echoed in many of our readings is that MARC does not work well for ERs and we really should just be using XML, so I am happy to be taking that course next term.
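Just to make the “use XML” point a bit less abstract, here is a toy example of a few license terms expressed as XML, built with Python's standard library. The element names are invented for illustration; they are not the actual DLF ERMI data elements.

```python
import xml.etree.ElementTree as ET

# Toy license-terms record; element names are hypothetical, not the ERMI schema.
license_el = ET.Element("license", {"resource": "Example Journal Package"})
ET.SubElement(license_el, "authorizedUsers").text = "current students, faculty, and staff"
ET.SubElement(license_el, "illPermitted").text = "yes, secure electronic transmission"
ET.SubElement(license_el, "perpetualAccess").text = "post-cancellation access to subscribed years"

print(ET.tostring(license_el, encoding="unicode"))
```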

Tuesday, November 30, 2010

Unit 9: Electronic resource management systems: vendors and functionalities

For this week, we explored more reasons why electronic resource management (ERM) systems exist, their desired qualities and functions, and how libraries may acquire such systems. No problem, right? Well, as per usual, this is more complicated than one might imagine, depending on that person's imagination, of course. Both Maria Collins and Margaret Hogarth provide a nice introduction to ERM systems and the key players. With the growing number of electronic resources and the decline of purchasing on a title-by-title basis, librarians have found that they need some kind of system to help keep track of when to start license negotiations, to easily generate A-to-Z lists, and to integrate these and other functions into the Integrated Library System (ILS) or Public Access Management System (PAMS). In short, it is up to the library to make any system or authentication changes and implementations required by a license while the vendor does nothing, which is why we have ERMs.
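As a concrete, if tiny, illustration of the bookkeeping an ERM system takes over, here is a sketch that produces an A-to-Z list and flags licenses whose renewal dates are approaching. The resource names, dates, and 90-day lead time are all made up; a real ERM obviously does far more than this.

```python
from datetime import date, timedelta

# Hypothetical local records of the kind an ERM system keeps track of.
resources = [
    {"title": "Example Science Package", "renewal": date(2011, 1, 15)},
    {"title": "Demo Humanities Index",   "renewal": date(2011, 6, 30)},
]

def a_to_z(records):
    """Return titles sorted alphabetically, as on an A-to-Z list."""
    return sorted(r["title"] for r in records)

def due_for_negotiation(records, today, lead_days=90):
    """Flag licenses whose renewal date falls within the negotiation lead time."""
    return [r["title"] for r in records
            if r["renewal"] - today <= timedelta(days=lead_days)]

print(a_to_z(resources))
print(due_for_negotiation(resources, today=date(2010, 12, 1)))
```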

So how did this come about, and what do librarians desire in an ERM system? An effective ERM should have one “point of maintenance”, which most ILSs are capable of, or programmed to handle, and should streamline the workflow, including the maintenance of spreadsheets and databases (license status, etc.). Many libraries created their own ERM systems, starting around 1998 with open source systems, and eventually began to work in a more collaborative environment. In 2002, DLF ERMI began in order to look at “the functional requirements desired in an ERM, identify data elements and common definitions for consistency, provide potential XML schemas, and identify and support data standards” (Collins 184). The results of this work were published in a 2004 report outlining the preferred standards, features, and design of an ERM, including data standards, vendor costs, and system migrations, as well as the functions listed above. Some of the preferred standards include EDItEUR, ONIX, SUSHI, and COUNTER. The final three are also mentioned in several Against the Grain articles on ERM systems as positive examples of increased cooperation among libraries, vendors, and publishers. Moreover, as mentioned by Hogarth, one of the primary concerns for librarians is the issue of standards.

As there are several choices when determining which ERM system to choose for your library, how do you choose? Many librarians want flexibility, little manual data entry, good standards in naming conventions, limited staff training and time, interoperability, and a good link resolver. Another consideration is whether to use a third party or to see if your ILS has an integrated ERM system, such as Ex Libris's Verde. However, Collins warns against, or at least advises being wary of, using the same vendor for both the ILS and the ERM system, as the library may end up with an under-supported system. If looking at a one-size-fits-all type of system, make sure to know each vendor's track record on development. A one-vendor system should have a knowledgeable title management database, link resolver, and A-to-Z listing service. For third-party systems, such as Serials Solutions, the primary concern is how well the system will integrate with the ILS. A third-party system should be able to easily link and manage subscriptions, including aggregated resources. There are also subscription agents, such as EBSCO. These systems may be acceptable, but be careful if also using another subscription agent or another A-to-Z listing service or link-resolving tool. If using an open source or home-grown system, the library must be able to sustain its own systems resources, but it can tailor the ERM system to local needs.

All of the options will involve dedicated staff and typically include the need to enter data manually. For example, one of the primary complaints about EBSCO's system in the Against the Grain articles is that it does not support or update non-EBSCO data efficiently. Serials Solutions 360 had difficulty mapping data to and from the link resolver, and it was difficult to get the COUNTER data. Collins suggests that a library should conduct an internal and external needs assessment prior to making the final decision. Considerations should include the type of library, mission statement, size of the library and collection, technological infrastructure, cost vs. need, existing tools and ILS, and interoperability. In short, what does your library need, and how will the system resolve problems? Hogarth also provides a detailed look at various systems, including open source systems, as well as numerous bullet points on what to consider when choosing a system.

Saturday, November 20, 2010

Unit 8: Technological Protection Measures

For this week, we explored the various types of technological protection measures (TPMs) present in licensed resources and how they may pertain to libraries. David Millman provides an introduction to authentication and authorization, while Kristin Eschenfelder explores how TPMs are being used in libraries, including which sorts of implementations are most common in particular types of institutions. In general, libraries tend to use authentication systems to control access to materials. Authentication is the process that determines who the user is; pending that outcome, the user will or will not be granted access to the desired materials, which is part of the authorization process.

Libraries typically provide the publisher or vendor of a licensed resource with some kind of list of IP addresses, including those associated with a proxy server or virtual private network (VPN). Proxy servers and VPNs are used when patrons, such as enrolled students at UW – Madison, attempt to access licensed resources from off campus. The authentication system sees the IP address as a proxy address and routes the request through the proxy server, which has a UW – Madison IP address, and then on to the UW libraries' server or the publisher's server. If logging in through a VPN, the authentication system sees the off-campus address as part of the UW network. So basically, the VPN adds the user to the network, as opposed to simply allowing access. When logging in to access the licensed resource, the system also looks for authorization rights.

In general, on the UW – Madison campus, most users have the same rights, meaning that if they are authorized users (as defined in the license agreement), everyone may more or less view the same materials and resources. Authorization provides permission to use a particular resource, which is typically granted through an IP address range. It is through this range that permissions may also be denied, as is the case with certain medical resources, for example. Furthermore, systems such as Shibboleth may be used to grant or deny access based on the user's department and program. Such systems may help small libraries secure a good price on licensed resources by demonstrating that only a small number of users will be able to access the resource.
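Stripped to its core, IP-range authorization is just a membership test. Here is a minimal sketch using Python's ipaddress module; the ranges below are documentation placeholders, not UW – Madison's actual addresses, and real systems layer proxying, VPNs, and Shibboleth on top of this.

```python
import ipaddress

# Placeholder ranges standing in for a campus's licensed IP blocks.
LICENSED_RANGES = [ipaddress.ip_network("10.0.0.0/16"),
                   ipaddress.ip_network("192.0.2.0/24")]

def is_authorized(addr: str) -> bool:
    """Authorize a request if its source address falls inside a licensed IP range.
    Off-campus users would first pass through a proxy or VPN so their traffic
    appears to come from one of these ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in LICENSED_RANGES)

print(is_authorized("10.0.42.7"))    # True: inside the campus range
print(is_authorized("203.0.113.9"))  # False: would be denied or sent to the proxy login
```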

While most licensed resources necessitate authentication and authorization protocols, TPM tools may also be in place, either within the resource itself or at the library. TPM tools are hardware and software systems that limit access or the range of uses allowed, since users could otherwise redistribute, alter, and republish items from licensed resources. Essentially, since the language in license agreements may be vague or unclear, TPMs may block uses that are not explicitly defined in the agreement. These TPMs may take the form of soft or hard restrictions. Soft restrictions are hardware or software configurations that make it difficult for the authorized user to download, save, print, or copy/paste items from the resource; a savvy user usually finds work-arounds. With hard restrictions, certain uses are outright prevented through hardware or software. Hard restrictions and TPMs should not be confused with Digital Rights Management (DRM) tools, although they are somewhat related, or at least similar.

Eschenfelder, in her three articles for this week, found that soft restrictions are quite common and include warnings, download limits, and “restriction by decomposition”. The most common hard restriction is blocked copy-and-paste functionality, yet there are few examples of hard restrictions actually in use. The most common tools in libraries, museums, and archives are authentication systems, specifically network-ID logins and IP address ranges. By limiting use through authentication, further restrictions may not be as necessary. In terms of use control, Eschenfelder and Agnew found that resolution limits and watermarking are quite common. However, some could argue that this may limit legitimate use of the resource. For example, what if an art student is studying a particular work of art or artist and the only online collection has poor resolution or a big watermark in the middle of the image? It may be difficult for that student to effectively analyze his or her topic. This begs the question of how librarians should proceed. If we continue to allow the publishers and vendors to dictate the type of TPM restrictions, what will they try next?

Sunday, October 24, 2010

Unit 5: Electronic Reserves and Georgia State

The radical nature of the e-reserves policy that was once in place at Georgia State led to an inevitable lawsuit that will set the tone and nature of fair use for e-reserves. Since the lawsuit was brought, the e-reserves policy has changed. As argued by Kenneth Crews, Georgia State is using a “good faith” defense, which is really not applicable due to the sloppy nature of its policy. Still, it is an interesting tactic. Those in charge of the policy had a legal background and a constitutional view of fair use, under which copyright grants only very limited rights. According to this view, copyright is a marketing right and any personal use of copyrighted materials is acceptable. Therefore, the e-reserves policy did not require a password, or any authentication software or authorization requirements for that matter, to access the resources, which made the resources available to anyone, not just those enrolled in that particular course.

While I appreciate the boldness of this policy, I have to wonder what they were thinking. Did they really think they could get away with it? Just take a close look at copyright legislation and you will see that it is really geared toward the copyright holders, not toward helping the public (is it any wonder that I keep typing “copyfight”? Please ignore the placement of the keys on the keyboard.). Perhaps, since the policy makers have a legal background, they thought they could help change some details of the Copyright Act through a radical policy, but obviously they re-thought that quite early: after the case was brought, the policy was changed to require users to log in, so the materials on e-reserve are no longer available to everyone. This also changed the nature of the lawsuit, as the policy is now different. Another point to consider is that I believe the syllabi were also available to the general public, which makes the publishing community able to easily track how their goods may be being used. This is also why I am no longer including the list of readings with these blog postings.

Why is the former Georgia State policy such a big deal? The best counter-argument comes from Sanford Thatcher, who argued that having everything online and accessible to students for free really cuts into the market for academic presses. The primary market for academic presses is, wait for it, academic institutions, such as Georgia State. However, as addressed by the ARL and Russell, effect on the market is only one of the four factors of fair use. Still, this is a valid point. However, with password-protected access to e-reserves limited to those enrolled in the class, the effect on the market is minimized. Furthermore, if the other three factors (character of the use, nature of the work to be used, and the amount used) demonstrate fair use, then factor four is weighted less heavily.

E-reserves are an essential part of the modern academic setting, and we need to find an acceptable way of using them without too much attention from the publishing industry. A little attention is okay, as I am sure Ken Frazier would agree. However, we do not need to have the pants sued off of us, but we do need to provide the best educational experience possible. Students generally like to have access to materials online, whether or not they also want a hard copy.

Unit 7: You may TEACH, but only in a very specific manner

1. Tomas A. Lipinski (2003). “The Climate of Distance Education in the 21st Century: Understanding and Surviving the Changes Brought by the TEACH (Technology, Education, and Copyright Harmonization) Act of 2002.” Journal of Academic Librarianship, 362-374.
2. ARL Issue Brief: Streaming of Films For Educational Purposes
(http://www.arl.org/bm~doc/ibstreamingfilms_021810pdf.pdf)
3. Russell Complete Copyright pg. 200-201: “CONTU Guidelines on Photocopying under Interlibrary Loan Arrangements (1978)”
Guest Lecture by Tomas Lipinski

The TEACH Act was created in 2002 in order to address issues surrounding appropriate materials for distance education. The act applies specifically to accredited, non-profit educational institutions and addresses what may or may not be performed or displayed “in the classroom”. The TEACH Act specifically builds off of sections 110(2) and 112(f) of the 1976 Copyright Act. Tomas Lipinski attempts to address how to wade through the overly complicated and ridiculous nature of the act, while the ARL (the ALA, really) concisely and succinctly addresses the question of what kinds of film may be streamed for educational purposes. These two readings are like night and day in terms of readability and comprehension, which is indicative of the unnecessarily complicated mess that is copyright legislation.

Lipinski is much better in person than on the page, at least as far as this topic is concerned. In his article, he speaks around the issues instead of addressing them directly. At least, that was my initial impression. It could also be that he assumed some prior knowledge of the TEACH Act, which I did not have. Moreover, as the nature and language of copyright legislation can be quite convoluted, how might one really write about it without assuming some kind of prior knowledge? I think it would have been better to read the ARL brief first. In any case, I must get this out of the way before I burst. The process of converting an analog copy of a resource to a digital copy is called digitization, not digitalization. Digitalization refers to the process of administering digitalis, which was once used to help patients with heart problems. I looked this up in several online dictionaries just to be sure. As someone who digitizes sound recordings on a regular basis, I needed to make sure that I have been using the correct term, which Lipinski, a respected professor with a law background, apparently has not. I was really hoping he would change it for his talk, but alas, he did not. I needed to avoid eye contact when he had a large slide with the incorrect term as a bold header projected on the screen. Okay, I got that out of my system. On to the act.

The TEACH Act is a complicated mess addressing the use of copyrighted materials within distance education. The act assumes that distance education is held in discrete installments, with content being available for a limited length of time in a lecture-like package. My first question on learning this was: what about electronic reserves and courseware, like Desire2Learn (Learn@UW)? Could an online meeting space, like a Learn@UW site, be considered a face-to-face meeting? The ARL seems to think so, arguing that an online meeting space is a virtual classroom. Face-to-face meetings, as addressed by Lipinski, have different requirements, or exemptions, than online spaces. For example, a professor may not display more materials online than in the classroom. But what if the classroom is online? That is apparently where Section 110(2) comes into play.

In addition to adding the accreditation requirement for online/distance education, thereby making it quite difficult for home-schooling communities to share resources online, Section 110(2) also replaces the physical meeting space with an online one. Moreover, the following types of materials are excluded from this section, meaning that in order to use them online, the instructor or content provider must find a different justification, such as Fair Use:
Material excluded
1. curricular materials: produced, marketed, displayed for mediated instructional activities
2. supplemental materials: in digital form, such as electronic course-packs, e-reserves, and digital library resources, unrelated background materials, must be REALLY tied into the course
3. “bootleg” materials: must be lawfully made, or at least know that it is not unlawfully made; for 110(2), must be lawfully made AND acquired, the INSTITUTION must know, not just the faculty member or student

Basically, the materials provided online under Section 110(2) must truly be tied into the course, must not be made specifically for mediated instructional activities (again, poor home schoolers), and the institution must know that the materials are not unlawfully made. What really gets me here is the issue of supplemental materials, such as e-reserves and electronic course-packs. How are distance education students supposed to get to the copy shop to purchase a course-pack that has gone through the Copyright Clearance Center, for example?

I was also wondering about making digital copies of materials under the TEACH Act. Is that permissible? It would have to be if the resource is to be put online. According to Lipinski, Section 112(f) allows for making a digital copy in order to stream a resource. The kicker, however, is that the institution must make a new digital copy for each use, even if it is for a different course and used in a different manner. This seems to go against other copyright legislation, as by following Section 112(f), the institution is making multiple (systematic?) copies of a copyrighted resource. Furthermore, it is a waste of time for the employee (me). However, as Lipinski argued nicely in class, you may take the TEACH Act, especially Section 110, to a certain point and then switch to Fair Use; if the institution can successfully argue that the intent is Fair Use, there may be no monetary damages to pay or take-down provisions. Furthermore, there is a history of case law for Fair Use, but not for the TEACH Act.

Thursday, October 7, 2010

Unit 6: Pricing models

Consortia and Pricing, part 1.
For this week, I decided to address each reading individually. So, here are the first few, with more to follow.

Fischer, C. Electronic Resources Pricing: A Variety of Models. Against the Grain 18.3 (2006): 18-22.

Cheaper by the dozen: bundle for a better deal... sound familiar? I know quite a few people who have cable that they do not use, just so they may get a better price on internet service. Since the big publishers are taking after the telecommunication companies, I wonder if libraries will soon be able to build their own bundles a la AT&T U-verse?

Do the services vary based on price model? I am thinking in particular about the size of the institution. In this model, according to Fischer, larger, doctoral-granting institutions pay more than smaller colleges. The assumption is partially based on the perceived amount of research conducted at said institutions, i.e., user statistics. Yes, larger institutions will have more users. However, are the services the same? Just because one library is from a smaller institution does not mean that users are not heavy researchers. Also, I am curious about the budgets for a small private college as opposed to a large public institution. I would think that the latter would have a larger budget in general, but how does that compare to the price per student and should that affect the pricing models? While these pricing concerns are addressed in the consortia model, what about those not part of a consortium? How would the Carnegie Classification of Institutions of Higher Education be factored into the equation? It would be quite difficult to determine as libraries are typically not allowed to discuss pricing with other librarians.

Frazier, K. (2001). The Librarian's Dilemma: Contemplating the Costs of the "Big Deal". D-Lib Magazine 7(3).

Yay for Ken Frazier!
Why should we learn collection management, selection, weeding, and the rest in terms of journals if our institution is just going to sign on to a Big Deal? The incorporation of game theory, “The Prisoners’ Dilemma” in particular, is quite provocative as an analogy. With that analogy, Frazier basically argues that the publishers wish all of us to “defect”, i.e., not cooperate with one another, as then more institutions will purchase a Big Deal, providing the publisher with more money and the institutions with fewer benefits. But then again, what does “cooperation” mean in this model? With whom are we cooperating, and from whom are we defecting?

Rolnik, Z. (2009). Big Deal = Good Deal? The Serials Librarian 57(3), 194-198.

No other option deal? With this technology, did we lose jobs/funding for graduate students?
If distribution is low-cost on the publishers’ end, then why do online subscriptions cost so much? Sure, it looks good on the surface, and some end-users may discover new titles for their research, but at what cost? Librarians are losing their freedom to select and weed resources. The big publishers may incrementally take away more freedoms and control from librarians, and by extension, patrons. So the cost of each journal may decrease in a Big Deal, but are those journals actually used? What about other journals?

Consortia and Pricing, part 2.

The use of scholarly journals as the primary means of presenting research has its roots in 19th century Germany, since 19th century German institutions of higher education are the model for those in America today. Furthermore, pricing and space concerns were noted as early as 1833, but these concerns did not fully emerge until the early 20th century. The prices for books and other print materials coming out of Germany between the World Wars were based on the geographic location of the consumer or purchaser, with the exception of periodicals. However, there was a call for an increase in price for non-German or Austrian clientele. This idea may have set the precedent for high-priced subscriptions to periodicals, which were then thwarted, or at least analyzed by librarians looking for lower prices, in the 1930s. Even though librarians today would also like lower prices, current business models are not conducive to the methods used in the 1930s to effectively lower the cost of subscriptions. This is a catalyst for the recommendations presented at the end of the chapter by Astle and Hamaker. While I applaud the suggestions, I think we may need a new way of budgeting to implement these changes. What service (given budget concerns) might be lost while librarians conduct a cost analysis of their subscriptions? In the long run, that type of analysis will be beneficial, but that is difficult to prove when librarians are overworked and the implications of changing costs might not be seen until the end of the fiscal year.
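At its simplest, the cost analysis recommended here is just arithmetic per title. A tiny sketch, with invented prices and usage figures, of the kind of comparison that can inform a cancellation or pay-per-view decision:

```python
# Hypothetical subscription prices and annual full-text downloads per title.
subscriptions = {
    "Journal A": {"price": 1200.00, "uses": 480},
    "Journal B": {"price": 950.00,  "uses": 19},
}

for title, data in subscriptions.items():
    cost_per_use = data["price"] / data["uses"]
    print(f"{title}: ${cost_per_use:.2f} per use")
# A title with a high cost per use is a candidate for cancellation or pay-per-view.
```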

Would a cost analysis help with the “Big Deal”, and how do consortia fit in? As argued by Ricky Best, part of the issue that led to the serials crisis, especially at research institutions, is the requirement that faculty publish or perish, where publishing must occur in a specific type of journal. The crisis also contributed to the rise of PubMed Central, as NIH-funded research must now be deposited there. Moreover, faculty need to be more proactive in selecting and then actually using print journal titles, instead of simply insisting that the library have them. The consortial approach to licensing took off as many libraries eliminated print versions due to budget constraints. However, electronic formats should supplement print versions, not replace them; plus, librarians lose the opportunity to shape their collections through consortial arrangements and bundled packages. Yet the Big Deal might be better for smaller libraries, as more people may be accessing e-journals the library does not have in print, and having online access may increase circulation, which is good for circulation statistics. Good deals on the Big Deal within consortia should involve larger institutions, yet the primary beneficiaries are the smaller and medium-sized institutions. OhioLINK has successfully negotiated for decreased content, thus setting a precedent for other consortia to re-negotiate. Furthermore, it confirms the ARL survey that indicates a decline in pricing satisfaction, where 81% of bundled deals came through consortia.

Consortia-negotiated deals and arrangements can be quite powerful, as indicated by the possible boycott by UC librarians and faculty leaders, aided by the California Digital Library (CDL) consortium. One of the primary things that came out of this article is the recognition of pricing and budget concerns by faculty members, not just librarians and administrators. Furthermore, it also shows that taking a stand may help to curb costs; most vendors would rather have the business than not, I would imagine. This also demonstrates the positive impact of collaboration between consortium members, which can be a challenge. How this plays out will be insightful. After all, as argued by Clements, library directors and faculty leaders often provide the initial leadership in consortial arrangements, yet leave it up to the librarians themselves to deal with the fallout.

Thursday, September 23, 2010

Unit 4: Licenses, shrink-wrap, and SERU, oh my!

The readings for this week primarily focused on how to negotiate a license agreement, what the terms and clauses mean, and then moved on to “shrink-wrap” licenses and Shared Electronic Resource Understanding (SERU).

1. Harris Licensing Digital Content Chapters 3-8
2. Russell Complete Copyright Chapter 7 “Walter Clicks ‘Yes’…”
3. ALA UCITA 101 http://www.ala.org/ala/aboutala/offices/wo/woissues/copyrightb/ucita/ucita101.cfm
4. Hahn, K.L. (2007). “SERU (Shared Electronic Resource Understanding).” D-Lib Magazine 13(11/12). (http://www.dlib.org/dlib/november07/hahn/11hahn.html)
5. Josh Hadro (8/31/2009) “Texas Attorney General Orders "Big Deal" Bundle Contracts Released” Library Journal http://www.libraryjournal.com/article/CA6686338.html

Harris devotes the majority of her book to outlining and defining terms in license agreements, including key license clauses and “boilerplate” clauses. She also discusses several important factors the library representative needs to keep in mind while negotiating with the licensor (vendor, publisher), including “know when to walk away” (“The Gambler”, anyone? No?). Anyway, while reading through the clauses, I compared them with what I know about how the UW System Libraries work in terms of electronic resources. I am not privy to the agreements, just observant. While it was somewhat difficult to retain all the detail in these sections, I am happy to know where the information is located in case I need to refer to it sometime. However, I think many of her explanations rely on common sense, as well as on having some kind of draft license agreement in place as a guide in negotiation, harkening back to an earlier chapter. The Rights Granted clause is a good example of this idea: what sort of rights would the library desire? Well, what is the normal use of this resource, and what might future uses be?

Some of the more important, or at least new to me, clauses include how to handle possible fears regarding the wide dissemination of materials due to Interlibrary Loan (ILL). Harris notes that some vendors may not allow copying for ILL purposes, so the library representative might point out that one could easily just print and scan an article, disseminating it just as quickly. This suggestion is supposed to alleviate the licensor's fears? Couldn't they just disallow printing in the license agreement as well? And how would that benefit the library? Perhaps a better way to handle this fear is to negotiate a frequency of “copies” per journal per year, although, as discussed in class, that has its drawbacks as well.

Another interesting clause covers the licensee's (library's) obligations. As part of this clause, the licensor might try to require the library to monitor for illegal use. How? Why? When I read this, my body clenched. Would I be required to monitor patron use? I have never needed to do so in my seven years as a (student) librarian. Then I turned the page. Harris suggests arguing for inclusion of the phrase “within reasonable control”; in other words, do not guarantee that no illegal activity, such as copyright violation, will occur. Reasonable control might include posting copyright restrictions and fair use guidelines in a public place. In addition, the staff should be aware of the license terms and agreements. Does this mean everyone? What about a large academic library with student staff? I am unaware of the license agreements for our electronic resources, yet I work the reference desk and interact with patrons. Another problem in this section is the issue of tracking usage, which some licensors might try to require the library to do. Harris suggests that if you do track usage, especially by specific users, you should post a notice that you are tracking. This makes me wonder whether the UW tracks usage. I believe not, but how would I know?

Of the boilerplate clauses, the one I found most interesting, especially in light of the other readings, is governing law. Because of UCITA (more below), the licensor may wish to have jurisdiction in a specific state, such as, oh I don't know, Maryland or Virginia. Harris ends with a section of negotiation tips and a chapter of questions. The tips harken back to earlier chapters (having a working license as a reference), but something she did not address is the use of recording equipment. Obviously you would need permission, but do people record negotiations? I think it could benefit both the licensor and the library: not a replacement for good notes, but a fall-back in case the two sides later tell different stories. The questions chapter, or as I like to call it, the “did you read the book?” section, is a good point of reference, especially in terms of ILL, and does offer a few new ideas, such as what to do if the publisher does not provide a license and how to protect the names of patrons.

Harris briefly addressed software licenses and UCITA, but these were not the primary subject of her book. Software licenses, as addressed by Russell and the ALA, typically fall under the “shrink-wrap” (or click-wrap) license category. This means that when a user clicks “I agree” to the license, that person, or institution (?), must follow those terms in order to use the software fully. The terms are non-negotiable, unlike those of a general license agreement, but may be challenged in court; however, the courts tend to enforce them under contract law. Out of this environment in the late 1990s came the Uniform Computer Information Transactions Act (UCITA).

UCITA, as explained in the ALA reading, is a proposed state contract law, but it has been enacted in only two states: Virginia and Maryland. UCITA favors the software companies by not allowing software to be transferred or donated if the license terms forbid it. Proponents argue that it benefits commerce and is needed to promote a healthy e-economy (I was unaware that e-commerce needed any help). Opponents argue that the legislation is harmful to consumers. UCITA is valid in only two states, and any litigation would happen in those states (Russell); as we learned from Harris, such governing-law terms are perfectly legal and appropriate. To combat UCITA, three states (Iowa, North Carolina, and West Virginia) passed UCITA “bomb shelter” legislation (Russell, ALA) protecting their citizens from litigation under UCITA. In 2003, several amendments were passed in order to appease the opponents. While the opponents (libraries, consumer advocates, many lawyers, financial institutions) would rather see UCITA repealed, one amendment is favorable to libraries: donations or transfers of software to public libraries and schools are now allowed under UCITA.

With all of this legislation and these complicated terms, how necessary are licenses? I think the software companies would say absolutely necessary, but some publishers are beginning to change their minds, which is partly what led to the Shared Electronic Resource Understanding (SERU). Hahn provides a brief history of SERU and how it evolved. Basically, both publishers and libraries realized that negotiating license agreements consumed too much valuable time and too many resources. Furthermore, some argued that license agreements are not legally necessary, so why have them at all? Instead, there could be a set of general concepts and guidelines. In 2006, four groups (the Association of Research Libraries, the Association of Learned and Professional Society Publishers, the Society for Scholarly Publishing, and the Scholarly Publishing and Academic Resources Coalition) came together to explore using electronic resources without a license while still having some kind of agreement or standard. Eventually, NISO formed a working group to continue the discussion, and later that year the working group released a draft (SERU) based on NISO best practices.

While publishers and libraries agree on the general concepts, there is still some disagreement on the specifics. However, SERU (still in a trial period) is believed to reduce overhead and costs, though it may not be applicable to high-transaction or high-priced agreements; basically, it works best for smaller parties. Nevertheless, it is an interesting idea and makes me wonder if license agreements are on their way out. In many ways that would be preferable; however, it would mean a dramatic change away from big business and conglomerates. I just cannot imagine Elsevier agreeing to SERU as opposed to a (probably) lucrative license agreement. I am sure they did not like or appreciate the Texas “Big Deal” bundle ruling either.

Sunday, September 19, 2010

Unit 3: Copyright and licensing in the digital age

In the second half of her book, Jessica Litman explores issues surrounding copyright in the digital age. Building on her discussion of the complicated nature of copyright law and the varied relationships between copyright holders and users, Litman concludes: “People don't obey laws that they don't believe in. Governments find it difficult to enforce laws that only a handful of people obey. Laws that people don't obey and that governments don't enforce are not much use to the interests that persuaded Congress to enact them. If a law is bad enough, even its proponents might be willing to abandon it in favor of a different law that seems more legitimate to the people it is intended to command” (195). So what does this mean for the future of copyright in the digital age, and how might that affect libraries?

The first part of her statement harkens back to the complicated nature of copyright law. Litman repeatedly claims that the law does not make sense to the majority of people, who therefore will not obey it, knowingly or not (112, 113, 169). So why, then, according to Litman's logic, does the law still exist? It is difficult to pursue copyright violators in the digital age, especially with regard to downloading music or sharing TV shows. Still, the RIAA continues to press suits against violators, although the number of lawsuits is dropping, and it may have paid over $64 million in legal costs to recover about $14 million (http://www.electronista.com/articles/10/07/14/riaa.paid.64m.over.three.years.to.get.14m/). Does the drop in lawsuits mean that the RIAA, as well as the major record labels, is backing off in favor of a different law or a different means of retaining control, such as licensing? As Litman argues on pages 177-180, the creators of copyright legislation never meant for copyright holders to have exclusive rights over every digital reproduction, as that conflicts with the idea of copyright as a bargain between holder and user. Therefore, we should no longer rely on reproduction as the trigger for copyright enforcement. The record companies could make up the money for artists by offering more streamed and downloadable digital content, perhaps through a licensed resource.

In the library world, we are already seeing many more licensed resources for music, as well as for many other subjects and media. Licensing is one way in which libraries may gain access to more online content, and it is a method under which the issue of reproduction might not matter. For electronic resources and journals, the number of downloads (reproductions) is usually built into the cost of the service. For music, many of the services provide streamed content rather than putting MP3 files online for download. Licenses, however convenient for users and patrons, may add to the workload and costs of libraries and librarians. As Lesley Ellen Harris describes in Licensing Digital Content, a library should have some sort of licensing policy to which librarians can refer when negotiating the terms of service.

But going back to Litman and the idea of eliminating digital reproduction as the basis for enforcing copyright law, how would this change the outcome of ProCD v. Zeidenberg? Zeidenberg used copyright law as his defense because he was only copying facts. However, he did make a profit off digital reproductions at ProCD's expense, not to mention the fact that he had to click “I agree” to the license terms upon using the program. Setting aside the issue of profit for a moment, I think the licensing agreement (contract law) would still take precedence in this case, even if digital reproductions were made legal, and the courts would still rule in favor of ProCD.

Speaking of profit, I agree with this statement by Litman: “Conventional wisdom tells us that, without the incentives provided by copyright, entrepreneurs will refuse to invest in new media. History tells us that they do invest without paying attention to conventional wisdom...many entrepreneurs conclude that if something is valuable, a way will be found to charge for it, so they concentrate on getting a market share first, and worry about the profits – and the rules for making them – later” (173). (Litman would make a good Ferengi.) This is what is, and has been, happening, I believe, in terms of both copyright law and issues surrounding license agreements. The copyright holders and related stakeholders, such as publishers and intermediaries, have historically secured their market share first and then pushed for legislation to back up their desires (see Litman, chapter 3). I wonder if we will see any legislation surrounding licensing, or if the UCC or other forms of contract law will be altered for the digital age.

Sunday, September 12, 2010

Unit 2: Litman, Russell, and copyright

My initial reaction to Litman's Digital Copyright was quite positive. I felt she adequately covered the very complicated nature of copyright law. Of particular importance is Chapter 3, “Copyright and Compromise,” where she provides a solid overview of how, not what, copyright legislation came into being. I found the methods somewhat surprising, although I should not have, in that Congress basically allowed the industry, along with the Librarian of Congress, to decide how copyright should work. This method led to years of conferences among the invited stakeholders, followed by extensive backlash from the uninvited groups, such as composers and the producers of piano rolls and talking machines.

What is troublesome to me is not the fact that stakeholders were involved; after all, it does make sense to have at least some interaction or feedback from the affected parties. Yet to have the entire legislation dictated by key stakeholders, without some sort of guiding principle(s), is quite problematic. So the idea of Fair Use was thrown in, just to complicate matters further.

This is where it was nice to also read the Fair Use portion of Carrie Russell's Complete Copyright: An Everyday Guide for Librarians. Russell's checklist of Fair Use guidelines is quite helpful, especially considering that, given the nature of copyright legislation, it is best for libraries and related institutions to follow fair use and hope it holds up if there is ever a question of copyright infringement. However, the safest method is to not put anything online unless all permissions have been obtained. Easier said than done.

Still, these readings helped me to solidify my understanding of copyright legislation and fair use guidelines, not to mention the background of lobbying in Washington.