Jody Condit Fagan
James Madison University
Publication
Featured research published by Jody Condit Fagan.
Reference Services Review | 2002
Margie Ruppel; Jody Condit Fagan
This article analyzes survey results of university students who used Morris Messenger, the instant messaging (IM) reference service at Southern Illinois University Carbondale’s Morris Library. It focuses on the complete results of two surveys, including a comparison of IM reference and traditional reference desk experiences. An overview of the IM reference system and usage data are also discussed. Survey respondents indicated overall enthusiasm for the IM reference service and provided useful suggestions for improvements, which are also listed.
Government Information Quarterly | 2004
Jody Condit Fagan; Bryan D Fagan
On-line state legislative information should be equally accessible to all citizens. While state Web sites have been praised for aesthetic design and innovative services, access has rarely been examined despite the legal activity regarding Web site accessibility on the federal level. This study examined selected Web pages for each of the fifty states using a program that determined the number and type of barriers posed to users of assistive technology. Although some states have clearly made an effort to provide equal access to all, there were a number of examples of inaccessible Web sites. Even states that meet the minimum requirements for accessibility have not chosen to follow the full guidelines. As the amount of on-line information continues to increase, adopting accessibility guidelines will become even more crucial.
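The automated checking approach the study describes can be sketched in miniature. The following Python snippet is a hypothetical illustration (not the tool the study used) that counts one common class of barrier, images without alt text, using only the standard library:

```python
# Hypothetical sketch of an automated accessibility barrier check:
# count <img> elements in a page's HTML and flag those missing an
# alt attribute, one of the barriers assistive-technology users hit.
from html.parser import HTMLParser


class AltTextChecker(HTMLParser):
    """Tallies <img> tags and those lacking an alt attribute."""

    def __init__(self):
        super().__init__()
        self.images = 0
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.images += 1
            if "alt" not in dict(attrs):
                self.missing_alt += 1


def count_barriers(html: str) -> tuple[int, int]:
    """Return (total images, images missing alt text) for an HTML string."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.images, checker.missing_alt


page = '<html><body><img src="seal.png" alt="State seal"><img src="map.png"></body></html>'
total, missing = count_barriers(page)
print(f"{missing} of {total} images lack alt text")
```

A real checker such as the one the study describes would test many more barrier types (form labels, frame titles, color contrast), but the pattern of parsing each page and tallying violations is the same.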
Journal of Web Librarianship | 2011
Jody Condit Fagan
Discovery tools offer users a way to search the library catalog and article databases with one search box. But this convenience masks underlying complexities, making the search process seem easier than it is. How does integrating such a tool into the library home page influence the process of becoming information literate? For this editorial, I will assume library Web pages can affect information literacy development, but in actuality, it’s an open question. In a search of Library, Information Science, and Technology Abstracts, I found only articles about library subject guides and other information literacy content on library Web sites, not about the information literacy role of home pages or Web sites themselves. To be information literate, as defined by the American Library Association, “A person must be able to recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information” (ALA 1989). This definition has been used by the Association for College and Research Libraries (ACRL) to create the Information Literacy Competency Standards for Higher Education, which include performance indicators and outcomes for each standard (ACRL 2000). Looking at these standards, how does featuring a discovery tool prominently on a library home page compare with featuring the library catalog and an array of article databases? In this editorial, I will focus on Standards One and Two. Standard Three is mostly concerned with engagement with information sources after selection (e.g., “The information literate student summarizes the main ideas to be extracted from the information gathered”); Standards Four and Five relate to using information effectively for a purpose and understanding the issues surrounding use of information (e.g., economic, legal, social).
Journal of Web Librarianship | 2009
Jody Condit Fagan
This issue, Global Connections showcases The European Library (http://www.theeuropeanlibrary.org/), a free service providing access to both physical and digital resources from 48 European national libraries in 35 languages. A national library is defined as “the library specifically established by a country to store its information database,” and the library’s mission is to provide “equal access to promote world-wide understanding of the richness and diversity of European learning and culture.”
Journal of Web Librarianship | 2011
Jody Condit Fagan
The many names for federated search software (e.g., metasearch, broadcast search) belie its greatest weakness: the user is never searching the databases in question; a piece of software is searching them. This means the real-life experiences of librarians and patrons never match the canned searches shown in product demonstrations. Attempting to revise one’s search strategies never seems to help; what might bring an improvement from one database could seriously muck up the results from another. Who knows if bad results are from the databases searched, the federated search software, or one’s own search strategy? Results are messy and duplicative, and users frequently can’t tell what the items returned actually are. Can you tell I was never a fan? With the emergence of discovery tools, a sufficiently massive quantity of articles, books, reviews, and more is finally indexed in one enormous database, and one thing is clear: discovery tools beat the pants off federated search software. Many discovery tool vendors are selling a federated search add-on to compensate for librarians’ fears that a discovery tool’s knowledge base has some significant gap in subject or format. Discovery tools do have gaps. But as long as your discovery tool is at least an average performer, the federated add-on is not the answer. In fall 2010, James Madison University (JMU) launched EBSCO Discovery Service with the EBSCO Integrated Search federated search add-on. We selected ten databases not covered by our discovery tool subscription, including the MLA International Bibliography, JSTOR, ARTstor, Scopus, the Readex U.S. Serial Set, and several online reference products. At the end of the semester, we investigated the usage statistics to see which federated search databases were the winners.
From September through the end of November 2010, the discovery tool had about 68,000 sessions amounting to about 136,000 searches—a very respectable total, considering similar stats for all other JMU EBSCOhost databases numbered 80,000 and 200,000. During that same three-month period, the federated search links received a total of 187 clicks. That means less than half of one percent of users even tried the federated search connectors—and these numbers include library staff clicks. Forty percent of the clicks were on JSTOR, which is now included in EBSCO’s discovery
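The share quoted in the paragraph above can be checked directly. A minimal Python sketch, using only the figures already given:

```python
# Usage share of the federated-search links, from the figures quoted above.
sessions = 68_000        # discovery tool sessions, September through November 2010
federated_clicks = 187   # total clicks on the federated search links

share = federated_clicks / sessions * 100
print(f"Federated search share: about {share:.2f}% of sessions")
```

The result is roughly a quarter of one percent, comfortably under the "less than half of one percent" stated in the text, and that is before subtracting library staff clicks.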
Research Strategies | 2001
Jody Condit Fagan
When designing tests for library classes, it is important to consider the advantages and disadvantages of different question types. Selected-response test items may be better for evaluating knowledge of library facts; constructed-response questions may elicit internalization of knowledge; and alternative-response items may be good options for evaluating complex library skills, such as developing a search strategy. Library literature seems to lack articles discussing testing of students enrolled in for-credit library classes. This article outlines the advantages and disadvantages of various question types according to the education literature and examines a test case in which students in a for-credit library course were given a take-home quiz with search story problems. Hopefully this article will encourage others to examine their test-writing and to share similar ideas and best practices for developing testing materials for library courses.
Journal of Web Librarianship | 2012
Jody Condit Fagan
My institution has been reviewing discovery tool products to support a decision related to renewal of our current software. We’ve been reading the library literature, talking with vendors, and generally immersing ourselves in the most current information available. As part of this process, I have repeatedly heard statements that just seem wrong. Five years from now, perhaps this editorial will be added to the list of wrongs, but for now, I will use it to offer my perspective, informed by experience, on a few top myths related to discovery software.

Myth #1: My discovery tool is the biggest and/or the best. On the librarian side, there’s a tendency to fall in love with one’s chosen software—or at least those who do, publish. In my experience, which tool you choose seems to relate most to existing subscriptions, whether you have a strong consortial engagement, or whether your ILS vendor sells a tool. It’s also understandable that a discovery tool’s project manager would want to showcase the software in a positive light, to increase buy-in. Vendors, on the other hand, all claim their search tool covers the most content, or finds the most content because of superior metadata, or some similar argument. They brag about the number of items searched or providers included, which are completely incomparable numbers unless you were to thoroughly evaluate just what “items” or “providers” they are counting and how. Even if you could know which was the biggest, does that really matter? Which leads us to . . .

Myth #2: Discovery tools will one day search all of a given library’s collections. I have seen this in library industry publications as well as vendor promotional material. Not only do I think this is unlikely, given the diversity and complexity of most library collections, I think it is undesirable. Neither Target nor Amazon stocks all products, and Google doesn’t search all of the world’s information. See also Myth #8: subject-specific databases are dead.
Myth #3: Federated search software is still useful. No discovery tool indexes all database providers’ content. EBSCO doesn’t index Gale or ProQuest databases, ProQuest doesn’t index EBSCO databases, and OCLC’s WorldCat Local has less complete indexing of major databases than either EBSCO or ProQuest. Federated search software may
The Electronic Library | 2002
Jody Condit Fagan
Server-side include (SSI) codes allow Webmasters to insert content into their Web pages on-the-fly without programming knowledge. Using these codes effectively can mean a dramatic decrease in the time spent maintaining a large or medium-sized Web site. Most Web servers have server-side functionality to some extent; a few allow great flexibility with if-then statements and the ability to set variables. This article describes the functionality of SSI, how to enable the codes on a Web server, and a step-by-step process for implementing them. Examples of their use on a large academic library’s Web site are included for illustration.
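A minimal sketch of the directives the article describes, assuming an Apache-style server with SSI enabled (the file names and the `section` variable are invented for illustration; the conditional uses Apache's legacy SSI expression syntax):

```html
<!-- page.shtml: the server evaluates each directive before sending the page,
     so shared headers and footers live in one file instead of on every page. -->
<!--#include virtual="/includes/header.html" -->

<!--#set var="section" value="reference" -->
<!--#if expr="$section = reference" -->
  <p>You are browsing the Reference section.</p>
<!--#else -->
  <p>Welcome to the library site.</p>
<!--#endif -->

<!--#include virtual="/includes/footer.html" -->
```

On Apache, this typically requires `Options +Includes` and mapping an extension such as `.shtml` to the INCLUDES output filter in the server configuration; editing `/includes/header.html` then updates every page that includes it.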
Journal of Web Librarianship | 2012
Jody Condit Fagan
This two-part column considers how well some of today’s search tools support scholars’ work. The first part of the column, in JWL 5.4, reviewed Google Scholar and Microsoft Academic Search (Fagan 2011) using a modified version of Carole L. Palmer, Lauren C. Teffeau, and Carrie M. Pirmann’s framework (2009). Microsoft Academic Search is a strong contender when considering the interface needs of scholars, but unlike Google Scholar, it is not yet linked into library resources. Google Scholar recently expanded its Scholar Citations service (Cordell 2011) but contains little support for the activities of browsing, assessing, and translating. Neither tool supports gathering or organizing results. Through this column, the author will add to the mix by reviewing two library search engines, summarizing the state of current tools with respect to scholars’ needs, and outlining possibilities for a system incorporating the best features from each. Again, in considering these tools, the focus will remain on interfaces and system features rather than content. The author will continue to use the first part’s definition of “scholar” and summary of scholarly behaviors (Fagan 2011) inspired by Palmer, Teffeau, and Pirmann (2009).
Charleston Library Conference | 2012
Jody Condit Fagan; Meris Mandernach
In August 2010, James Madison University (JMU) implemented EBSCO Discovery Service (EDS) and placed its search widget front and center on the library home page. This paper will examine general usage trends over the tool’s first two semesters, including changes in physical circulation, library catalog searches, home page traffic, and other database usage. Searches, sessions, and full-text downloads of subject-specific databases before and after the implementation of the discovery tool will be compared. Finally, the limitations of the data and our methods will be discussed in order to inform other libraries’ work with similar data. The objective of the paper will be to share information for those considering a discovery tool or those preparing to evaluate a discovery tool that has already been implemented.