-
Wilk, D.: Problems in the use of Library of Congress Subject Headings as the basis for Hebrew subject headings in the Bar-Ilan University Library (2000)
0.06
0.06129535 = product of:
0.2451814 = sum of:
0.2451814 = weight(_text_:headings in 6416) [ClassicSimilarity], result of:
0.2451814 = score(doc=6416,freq=4.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.7581877 = fieldWeight in 6416, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.078125 = fieldNorm(doc=6416)
0.25 = coord(1/4)
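The explain tree above follows Lucene's ClassicSimilarity (tf-idf) scoring: tf is the square root of the term frequency, idf is 1 + ln(maxDocs/(docFreq+1)), queryWeight is idf × queryNorm, fieldWeight is tf × idf × fieldNorm, and the final score is queryWeight × fieldWeight scaled by the coord factor. A minimal sketch that reproduces the numbers (the function name and argument order are my own):

```python
import math

def classic_similarity(freq, doc_freq, max_docs, field_norm, query_norm, coord):
    """Reproduce one weight(...) clause of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                              # tf(freq) = sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                   # queryWeight (boost = 1)
    field_weight = tf * idf * field_norm              # fieldWeight
    return query_weight * field_weight * coord        # score after coord

# Entry above: freq=4, docFreq=942, fieldNorm=0.078125, coord=1/4
score = classic_similarity(4.0, 942, 44421, 0.078125, 0.06664293, 0.25)
print(score)  # ≈ 0.0613, matching the 0.06129535 shown above
```

The fieldNorm differs between records (0.078125, 0.109375, 0.0546875, ...) because it encodes field length, stored with limited precision; that is why records with the same term frequency can still score differently.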
-
Chan, L.M.: Library of Congress Subject Headings : principles and application (2005)
0.06
0.06129535 = product of:
0.2451814 = sum of:
0.2451814 = weight(_text_:headings in 5598) [ClassicSimilarity], result of:
0.2451814 = score(doc=5598,freq=4.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.7581877 = fieldWeight in 5598, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.078125 = fieldNorm(doc=5598)
0.25 = coord(1/4)
- Abstract
- The only comprehensive treatise on the Library of Congress Subject Headings system, now fully updated to address LCSH in the electronic environment.
-
Denda, K.: Beyond subject headings : a structured information retrieval tool for interdisciplinary fields (2005)
0.06
0.060679298 = product of:
0.24271719 = sum of:
0.24271719 = weight(_text_:headings in 1106) [ClassicSimilarity], result of:
0.24271719 = score(doc=1106,freq=2.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.7505675 = fieldWeight in 1106, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.109375 = fieldNorm(doc=1106)
0.25 = coord(1/4)
-
Lipscomb, C.E.: Medical Subject Headings (MeSH) (2000)
0.06
0.060679298 = product of:
0.24271719 = sum of:
0.24271719 = weight(_text_:headings in 4759) [ClassicSimilarity], result of:
0.24271719 = score(doc=4759,freq=2.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.7505675 = fieldWeight in 4759, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.109375 = fieldNorm(doc=4759)
0.25 = coord(1/4)
-
Poll, J.: ¬A question of perspective : assigning Library of Congress Subject Headings to classical literature and ancient history (2001)
0.06
0.060679298 = product of:
0.24271719 = sum of:
0.24271719 = weight(_text_:headings in 438) [ClassicSimilarity], result of:
0.24271719 = score(doc=438,freq=8.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.7505675 = fieldWeight in 438, product of:
2.828427 = tf(freq=8.0), with freq of:
8.0 = termFreq=8.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.0546875 = fieldNorm(doc=438)
0.25 = coord(1/4)
- Abstract
- This article explains the concept of world view and shows how the world view of cataloguers influences the development and assignment of subject headings to works about other cultures and civilizations, using works from Classical literature and Ancient history as examples. Cataloguers are encouraged to evaluate the headings they assign to works in Classical literature and Ancient history in terms of the world views of Ancient Greece and Rome so that headings reflect the contents of the works they describe and give fuller expression to the diversity of thoughts and themes that characterize these ancient civilizations.
-
Marshall, L.: Specific and generic subject headings : increasing subject access to library materials (2003)
0.06
0.060679298 = product of:
0.24271719 = sum of:
0.24271719 = weight(_text_:headings in 497) [ClassicSimilarity], result of:
0.24271719 = score(doc=497,freq=8.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.7505675 = fieldWeight in 497, product of:
2.828427 = tf(freq=8.0), with freq of:
8.0 = termFreq=8.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.0546875 = fieldNorm(doc=497)
0.25 = coord(1/4)
- Abstract
- The principle of specificity for subject headings provides a clear advantage to many researchers for the precision it brings to subject searching. However, for some researchers very specific subject headings hinder an efficient and comprehensive search. An appropriate broader heading, especially when made narrower in scope by the addition of subheadings, can benefit researchers by providing generic access to their topic. Assigning both specific and generic subject headings to a work would enhance the subject accessibility for the diverse approaches and research needs of different catalog users. However, it can be difficult for catalogers to assign broader terms consistently to different works, and without consistency the gathering function of those terms may not be realized.
-
Kulczak, D.E.; Reineka, C.: Marcive GPO records and authority control : an evaluation of name and subject headings at the University of Arkansas libraries (2004)
0.06
0.060679298 = product of:
0.24271719 = sum of:
0.24271719 = weight(_text_:headings in 533) [ClassicSimilarity], result of:
0.24271719 = score(doc=533,freq=8.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.7505675 = fieldWeight in 533, product of:
2.828427 = tf(freq=8.0), with freq of:
8.0 = termFreq=8.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.0546875 = fieldNorm(doc=533)
0.25 = coord(1/4)
- Abstract
- In mid-1999, the University of Arkansas Libraries began loading Marcive GPO records into its Innovative Interfaces catalog. Pursuant to that activity, the Database Maintenance Unit examined five system-generated authority reports in order to evaluate the quality of Marcive headings and to determine whether future GPO records could be loaded into the catalog without further authority processing. Final results indicated that while the overall quality of Marcive headings was good, a significant percentage of headings that appeared on the authority reports required additional attention.
-
Anderson, J.D.; Hofmann, M.A.: ¬A fully faceted syntax for Library of Congress Subject Headings (2006)
0.06
0.060679298 = product of:
0.24271719 = sum of:
0.24271719 = weight(_text_:headings in 350) [ClassicSimilarity], result of:
0.24271719 = score(doc=350,freq=8.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.7505675 = fieldWeight in 350, product of:
2.828427 = tf(freq=8.0), with freq of:
8.0 = termFreq=8.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.0546875 = fieldNorm(doc=350)
0.25 = coord(1/4)
- Abstract
- Moving to a fully faceted syntax would resolve three problems facing Library of Congress Subject Headings (LCSH): 1. Inconsistent syntax rules; 2. Inability to create headings that are coextensive with the topic of a work; and 3. Lack of effective displays for long lists of subdivisions under a single subject heading in OPACs and similar electronic displays. The authors advocate a fully faceted syntax using the facets of a modern faceted library classification (The Bliss Bibliographic Classification, 2d ed.). They demonstrate how this might be accomplished so as to integrate the new syntax with existing headings.
-
Library of Congress Subject Headings (2004)
0.06
0.060056932 = product of:
0.24022773 = sum of:
0.24022773 = weight(_text_:headings in 5048) [ClassicSimilarity], result of:
0.24022773 = score(doc=5048,freq=6.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.7428692 = fieldWeight in 5048, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.0625 = fieldNorm(doc=5048)
0.25 = coord(1/4)
- Abstract
- The new edition adds 7,200 new headings and their references; LCSH now has a total of 270,000 authority records. Instructions on how to use LCSH appear in: Subject Cataloging Manual: Subject Headings (2002 cumulation: 5th ed. 1996 with updates through 2002 interfiled; looseleaf in 4 vols.) with semiannual updates.
-
Roe, S.: Subject access vocabularies in a multi-type library consortium (2001)
0.06
0.060056932 = product of:
0.24022773 = sum of:
0.24022773 = weight(_text_:headings in 443) [ClassicSimilarity], result of:
0.24022773 = score(doc=443,freq=6.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.7428692 = fieldWeight in 443, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.0625 = fieldNorm(doc=443)
0.25 = coord(1/4)
- Abstract
- Madison High School Library joined the South Dakota Library Network (SDLN), a multi-type library consortium with a shared online catalog in 1998. This study compares subject access in this small high school library both before and after the retrospective conversion. Vocabulary mapping between the Library of Congress Subject Headings (LCSH) and the Sears List of Subject Headings is discussed.
- Object
- Sears List of Subject Headings
-
Hoerman, H.L.; Furniss, K.A.: Turning practice into principles : a comparison of the IFLA Principles underlying Subject Heading Languages (SHLs) and the principles underlying the Library of Congress Subject Headings system (2000)
0.06
0.058149874 = product of:
0.2325995 = sum of:
0.2325995 = weight(_text_:headings in 6611) [ClassicSimilarity], result of:
0.2325995 = score(doc=6611,freq=10.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.71928 = fieldWeight in 6611, product of:
3.1622777 = tf(freq=10.0), with freq of:
10.0 = termFreq=10.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.046875 = fieldNorm(doc=6611)
0.25 = coord(1/4)
- Abstract
- The IFLA Section on Classification and Indexing's Working Group on Principles Underlying Subject Heading Languages has identified a set of eleven principles for subject heading languages and excerpted the texts that match each principle from the instructions for each of eleven national subject indexing systems, including excerpts from the LC's Subject Cataloging Manual: Subject Headings. This study compares the IFLA principles with other texts that express the principles underlying LCSH, especially Library of Congress Subject Headings: Principles of Structure and Policies for Application, prepared by Lois Mai Chan for the Library of Congress in 1990, Chan's later book on LCSH, and earlier documents by Haykin and Cutter. The principles are further elaborated for clarity and discussed.
- Source
- The LCSH century: one hundred years with the Library of Congress Subject Headings system. Ed.: A.T. Stone
-
Strottman, T.A.: Some of our fifty are missing : Library of Congress Subject Headings for southwestern cultures and history (2007)
0.06
0.058149874 = product of:
0.2325995 = sum of:
0.2325995 = weight(_text_:headings in 1784) [ClassicSimilarity], result of:
0.2325995 = score(doc=1784,freq=10.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.71928 = fieldWeight in 1784, product of:
3.1622777 = tf(freq=10.0), with freq of:
10.0 = termFreq=10.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.046875 = fieldNorm(doc=1784)
0.25 = coord(1/4)
- Abstract
- The Library of Congress Subject Headings has flaws in the logic and structure of its headings relating to the Southwest. Examples demonstrate aspects of the regional biases that make it frustrating to use LCSH for cataloging Southwest collections. The frustrations experienced by students, researchers, and library patrons trying to find detailed information on the Southwest have significant social consequences, especially for Hispanics and Native Americans. Antonio Gramsci's concepts provide a framework to present the implications of these consequences and the need to correct them. LCSH is a major cataloging and research resource both nationally and internationally. Successfully changing biased and inaccurate LCSH subject headings will exhibit social and political leadership while LCSH is providing technological leadership as a key source for developing cooperative online international authority files for subject headings.
-
Oehlschläger, S.: Aus der 50. Sitzung der Arbeitsgemeinschaft der Verbundsysteme am 24. und 25. April 2006 in Köln (2006)
0.06
0.057968162 = product of:
0.115936324 = sum of:
0.055257026 = weight(_text_:und in 183) [ClassicSimilarity], result of:
0.055257026 = score(doc=183,freq=38.0), product of:
0.1478073 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06664293 = queryNorm
0.37384504 = fieldWeight in 183, product of:
6.164414 = tf(freq=38.0), with freq of:
38.0 = termFreq=38.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.02734375 = fieldNorm(doc=183)
0.060679298 = weight(_text_:headings in 183) [ClassicSimilarity], result of:
0.060679298 = score(doc=183,freq=2.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.18764187 = fieldWeight in 183, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.02734375 = fieldNorm(doc=183)
0.5 = coord(2/4)
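This record matches two of the four query clauses ("und" and "headings"), so the explain tree sums two term weights and applies coord(2/4) = 0.5. Assuming Lucene's ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))), the arithmetic can be verified; the helper name is my own:

```python
import math

def term_weight(freq, doc_freq, max_docs, field_norm, query_norm):
    """One weight(_text_:term ...) clause: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    return (idf * query_norm) * (tf * idf * field_norm)

# Record 183 matches 2 of 4 query clauses, hence coord(2/4) = 0.5
w_und = term_weight(38.0, 13141, 44421, 0.02734375, 0.06664293)
w_headings = term_weight(2.0, 942, 44421, 0.02734375, 0.06664293)
score = (w_und + w_headings) * 0.5
print(score)  # ≈ 0.058, matching the 0.057968162 shown above
```

Note that despite 38 occurrences of "und" against 2 of "headings", the two clauses contribute weights of the same order of magnitude: the sublinear tf and the low idf of the very common term "und" damp its contribution.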
- Abstract
- On 24 and 25 April 2006, the Arbeitsgemeinschaft der Verbundsysteme held its 50th meeting in Berlin at the invitation of the Staatsbibliothek zu Berlin Preußischer Kulturbesitz.
- Content
- Contents: - Cooperation of the union catalog networks, improving data exchange and the use of external data - - MARC 21 as exchange format: Work within the project of migrating to MARC 21 is on schedule. The current focus is the creation of a MAB2 - MARC 21 concordance, which is continuously coordinated with the Expertengruppe Datenformate and will be presented to the networks once completed. It is expected that by 1 January 2007 a point will be reached at which the networks can set concrete dates for the steps required for their migration. From autumn onward, a working group of network representatives is to plan the migration concretely at the operational level. So that the migration can be carried out as a well-prepared, concerted, and comprehensive step, the network head offices are to allocate the necessary capacities. - - Matchkey / cooperative new cataloging - - Catalogue enrichment: Following a resolution at the 49th meeting, a Catalogue Enrichment working group was founded under the lead of the HBZ. The HBZ has presented a draft of a reference database for catalog enrichment, which is currently under discussion. The members of the Arbeitsgemeinschaft der Verbundsysteme have agreed to take stock of, and estimate the volume and content of, what they can feed into the reference database. The HBZ will present a concept for a data and interface model and set up a prototype of the database. Die Deutsche Bibliothek has agreed to negotiate with publishers about additional data such as tables of contents, abstracts, covers, etc., with the aim of taking over these information units, creating them itself where necessary, and delivering them via its data services instead of, as previously, providing them only as links on third-party systems. 
The Deutscher Bibliotheksverband (DBV) is in contact with the Börsenverein des Deutschen Buchhandels and has already obtained agreement in principle; a corresponding formal agreement is currently being pursued. - - Online interface for authority data
- News from the members (a selection, as of April 2006) - - Bibliotheksverbund Bayern (BVB) / head office - - - Expanded deployment of ALEPH - - - Catalogue enrichment ADAM - - - CD-ROM server - - - InfoGuide - - - Application Service Providing (ASP) - - Bibliotheksservice-Zentrum Baden-Württemberg (BSZ) - - - SWB union database - - - Catalogue enrichment - - - OPUS - - - Internet portal for libraries, archives, and museums (BAM) - - - Metadata management for the distributed document server (VDS) - - - Virtual reference service launched with three partner libraries - - Die Deutsche Bibliothek - - - DissOnline Portal - - - DissOnline Tutor - - - CrissCross: The goal of the CrissCross project is to create a multilingual, thesaurus-based, user-oriented retrieval vocabulary. To this end, the subject headings of the Schlagwortnormdatei (SWD) are being linked with the notations of the Dewey Decimal Classification (DDC). Multilinguality is achieved by linking them with their equivalents in the two comprehensive subject authority files Library of Congress Subject Headings (LCSH) and Rameau, building on the results of the MACS project. This gives users access to heterogeneously indexed documents without their having to know the rules of the respective national or international indexing tool. Project partners are the Faculty of Information and Communication Sciences of the Fachhochschule Köln and Die Deutsche Bibliothek. The project began on 1 February 2006 and is scheduled to conclude at the end of January 2008. Technically, the project is being implemented in Die Deutsche Bibliothek's PICA/Iltis system and in "MelvilClass", the working environment for the DDC. - - - DDC-vascoda
- - Gemeinsamer Bibliotheksverbund (GBV) / GBV head office (VZG) - - - WWW databases - - - Virtual subject libraries - - - Union catalog of Internet resources (VKI) - - - Public libraries in the GBV - - Hessisches BibliotheksinformationsSystem (HeBIS) / head office - - - HeBIS portal - - - PND project - - - Catalogue enrichment - - Hochschulbibliothekszentrum des Landes Nordrhein-Westfalen (HBZ) - - - Bibliographic toolbox - - - DigiAuskunft - - - Search engine - - - Search engine technology - - - Availability search - - - Publication systems / Digital Peer Publishing (DiPP) - - Kooperativer Bibliotheksverbund Berlin-Brandenburg (KOBV) / KOBV head office - - - KOBV portal - - - Launch of the new KOBV index - - - KOBV consortium portal - - - OPUS and archiving services - - - Full-text journal articles
- - Österreichische Bibliothekenverbund und Service Ges.m.b.H. (OBVSG): data delivery to the ZDB - - - ZDB as external data source / authority file - - - Austrian dissertation database / eDoc - - - Connection of further local systems - - - Retrospective conversion project - - - Homepage / "network portal" - - Zeitschriftendatenbank (ZDB) - - - ZDB OPAC - - - Integration of the DDB into the ZDB - - - Collection management / flagging of special subject collection journals in the ZDB - - - Sigel directory online / ZDB library file
-
Spink, A.; Greisdorf, H.: Regions and levels : Measuring and mapping users' relevance judgements (2001)
0.06
0.055012353 = product of:
0.22004941 = sum of:
0.22004941 = weight(_text_:judge in 6586) [ClassicSimilarity], result of:
0.22004941 = score(doc=6586,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.42709115 = fieldWeight in 6586, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0390625 = fieldNorm(doc=6586)
0.25 = coord(1/4)
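From this entry onward the matching clause is the much rarer term "judge", and the per-occurrence weight is driven mainly by idf, which under ClassicSimilarity is 1 + ln(maxDocs/(docFreq+1)): "judge" (docFreq=52) earns idf ≈ 7.73, "headings" (docFreq=942) ≈ 4.85, and the common German word "und" (docFreq=13141) only ≈ 2.22. A quick check (the function name is my own):

```python
import math

def idf(doc_freq, max_docs=44421):
    # ClassicSimilarity idf; reproduces the values printed in the explain trees
    return 1.0 + math.log(max_docs / (doc_freq + 1))

for term, df in [("und", 13141), ("headings", 942), ("judge", 52)]:
    print(f"{term:8s} df={df:5d} idf={idf(df):.7f}")
```

Because idf enters the fieldWeight and the queryWeight, it is effectively squared in each clause, which is why a single occurrence of a rare term can outscore many occurrences of a common one.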
- Abstract
- The dichotomous bipolar approach to relevance has produced an abundance of information retrieval (IR) research. However, relevance studies that include consideration of users' partial relevance judgments are moving to a greater relevance clarity and congruity to impact the design of more effective IR systems. The study reported in this paper investigates the various regions across a distribution of users' relevance judgments, including how these regions may be categorized, measured, and evaluated. An instrument was designed using four scales for collecting, measuring, and describing end-user relevance judgments. The instrument was administered to 21 end-users who conducted searches on their own information problems and made relevance judgments on a total of 1059 retrieved items. Findings include: (1) overlapping regions of relevance were found to impact the usefulness of precision ratios as a measure of IR system effectiveness, (2) both positive and negative levels of relevance are important to users as they make relevance judgments, (3) topicality was used more to reject rather than accept items as highly relevant, (4) utility was used more to judge items highly relevant, and (5) the nature of relevance judgment distribution suggested a new IR evaluation measure, the median effect. Findings suggest that the middle region of a distribution of relevance judgments, also called "partial relevance," represents a key avenue for ongoing study. The findings provide implications for relevance theory and the evaluation of IR systems.
-
Pu, H.-T.; Chuang, S.-L.; Yang, C.: Subject categorization of query terms for exploring Web users' search interests (2002)
0.06
0.055012353 = product of:
0.22004941 = sum of:
0.22004941 = weight(_text_:judge in 1587) [ClassicSimilarity], result of:
0.22004941 = score(doc=1587,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.42709115 = fieldWeight in 1587, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0390625 = fieldNorm(doc=1587)
0.25 = coord(1/4)
- Abstract
- Subject content analysis of Web query terms is essential to understand Web searching interests. Such analysis includes exploring search topics and observing changes in their frequency distributions with time. To provide a basis for in-depth analysis of users' search interests on a larger scale, this article presents a query categorization approach to automatically classifying Web query terms into broad subject categories. Because a query is short in length and simple in structure, its intended subject(s) of search is difficult to judge. Our approach, therefore, combines the search processes of real-world search engines to obtain highly ranked Web documents based on each unknown query term. These documents are used to extract cooccurring terms and to create a feature set. An effective ranking function has also been developed to find the most appropriate categories. Three search engine logs in Taiwan were collected and tested. They contained over 5 million queries from different periods of time. The achieved performance is quite encouraging compared with that of human categorization. The experimental results demonstrate that the approach is efficient in dealing with large numbers of queries and adaptable to the dynamic Web environment. Through good integration of human and machine efforts, the frequency distributions of subject categories in response to changes in users' search interests can be systematically observed in real time. The approach has also shown potential for use in various information retrieval applications, and provides a basis for further Web searching studies.
-
Nicholson, S.: Bibliomining for automated collection development in a digital library setting : using data mining to discover Web-based scholarly research works (2003)
0.06
0.055012353 = product of:
0.22004941 = sum of:
0.22004941 = weight(_text_:judge in 2867) [ClassicSimilarity], result of:
0.22004941 = score(doc=2867,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.42709115 = fieldWeight in 2867, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0390625 = fieldNorm(doc=2867)
0.25 = coord(1/4)
- Abstract
- This research creates an intelligent agent for automated collection development in a digital library setting. It uses a predictive model based on facets of each Web page to select scholarly works. The criteria came from the academic library selection literature, and a Delphi study was used to refine the list to 41 criteria. A Perl program was designed to analyze a Web page for each criterion and applied to a large collection of scholarly and nonscholarly Web pages. Bibliomining, or data mining for libraries, was then used to create different classification models. Four techniques were used: logistic regression, nonparametric discriminant analysis, classification trees, and neural networks. Accuracy and return were used to judge the effectiveness of each model on test datasets. In addition, a set of problematic pages that were difficult to classify because of their similarity to scholarly research was gathered and classified using the models. The resulting models could be used in the selection process to automatically create a digital library of Web-based scholarly research works. In addition, the technique can be extended to create a digital library of any type of structured electronic information.
-
White, H.D.: Combining bibliometrics, information retrieval, and relevance theory : part 2: some implications for information science (2007)
0.06
0.055012353 = product of:
0.22004941 = sum of:
0.22004941 = weight(_text_:judge in 1437) [ClassicSimilarity], result of:
0.22004941 = score(doc=1437,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.42709115 = fieldWeight in 1437, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0390625 = fieldNorm(doc=1437)
0.25 = coord(1/4)
- Abstract
- When bibliometric data are converted to term frequency (tf) and inverse document frequency (idf) values, plotted as pennant diagrams, and interpreted according to Sperber and Wilson's relevance theory (RT), the results evoke major variables of information science (IS). These include topicality, in the sense of intercohesion and intercoherence among texts; cognitive effects of texts in response to people's questions; people's levels of expertise as a precondition for cognitive effects; processing effort as textual or other messages are received; specificity of terms as it affects processing effort; relevance, defined in RT as the effects/effort ratio; and authority of texts and their authors. While such concerns figure automatically in dialogues between people, they become problematic when people create or use or judge literature-based information systems. The difficulty of achieving worthwhile cognitive effects and acceptable processing effort in human-system dialogues explains why relevance is the central concern of IS. Moreover, since relevant communication with both systems and unfamiliar people is uncertain, speakers tend to seek cognitive effects that cost them the least effort. Yet hearers need greater effort, often greater specificity, from speakers if their responses are to be highly relevant in their turn. This theme of mismatch manifests itself in vague reference questions, underdeveloped online searches, uncreative judging in retrieval evaluation trials, and perfunctory indexing. Another effect of least effort is a bias toward topical relevance over other kinds. RT can explain these outcomes as well as more adaptive ones. Pennant diagrams, applied here to a literature search and a Bradford-style journal analysis, can model them. Given RT and the right context, bibliometrics may predict psychometrics.
-
Hartley, J.; Betts, L.: ¬The effects of spacing and titles on judgments of the effectiveness of structured abstracts (2007)
0.06
0.055012353 = product of:
0.22004941 = sum of:
0.22004941 = weight(_text_:judge in 2325) [ClassicSimilarity], result of:
0.22004941 = score(doc=2325,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.42709115 = fieldWeight in 2325, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0390625 = fieldNorm(doc=2325)
0.25 = coord(1/4)
- Abstract
- Previous research assessing the effectiveness of structured abstracts has been limited in two respects. First, when comparing structured abstracts with traditional ones, investigators usually have rewritten the original abstracts, and thus confounded changes in the layout with changes in both the wording and the content of the text. Second, investigators have not always included the title of the article together with the abstract when asking participants to judge the quality of the abstracts, yet titles alert readers to the meaning of the materials that follow. The aim of this research was to redress these limitations. Three studies were carried out. Four versions of each of four abstracts were prepared. These versions consisted of structured/traditional abstracts matched in content, with and without titles. In Study 1, 64 undergraduates each rated one of these abstracts on six separate rating scales. In Study 2, 225 academics and research workers rated the abstracts electronically, and in Study 3, 252 information scientists did likewise. In Studies 1 and 3, the respondents rated the structured abstracts significantly more favorably than they did the traditional ones, but the presence or absence of titles had no effect on their judgments. In Study 2, no main effects were observed for structure or for titles. The layout of the text, together with the subheadings, contributed to the higher ratings of effectiveness for structured abstracts, but the presence or absence of titles had no clear effects in these experimental studies. It is likely that this spatial organization, together with the greater amount of information normally provided in structured abstracts, explains why structured abstracts are generally judged to be superior to traditional ones.
-
Xu, Y.; Yin, H.: Novelty and topicality in interactive information retrieval (2008)
0.06
0.055012353 = product of:
0.22004941 = sum of:
0.22004941 = weight(_text_:judge in 2355) [ClassicSimilarity], result of:
0.22004941 = score(doc=2355,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.42709115 = fieldWeight in 2355, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0390625 = fieldNorm(doc=2355)
0.25 = coord(1/4)
- Abstract
- The information science research community is characterized by a paradigm split, with a system-centered cluster working on information retrieval (IR) algorithms and a user-centered cluster working on user behavior. The two clusters rarely leverage each other's insight and strength. One major suggestion from user-centered studies is to treat the relevance judgment of documents as a subjective, multidimensional, and dynamic concept rather than treating it as objective and based on topicality only. This study explores the possibility to enhance users' topicality-based relevance judgment with subjective novelty judgment in interactive IR. A set of systems is developed which differs in the way the novelty judgment is incorporated. In particular, this study compares systems which assume that users' novelty judgment is directed to a certain subtopic area and those which assume that users' novelty judgment is undirected. This study also compares systems which assume that users judge a document based on topicality first and then novelty in a stepwise, noncompensatory fashion and those which assume that users consider topicality and novelty simultaneously and as compensatory to each other. The user study shows that systems assuming directed novelty in general have higher relevance precision, but systems assuming a stepwise judgment process and systems assuming a compensatory judgment process are not significantly different.
-
Hartley, J.; Betts, L.: Revising and polishing a structured abstract : is it worth the time and effort? (2008)
0.06
0.055012353 = product of:
0.22004941 = sum of:
0.22004941 = weight(_text_:judge in 3362) [ClassicSimilarity], result of:
0.22004941 = score(doc=3362,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.42709115 = fieldWeight in 3362, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0390625 = fieldNorm(doc=3362)
0.25 = coord(1/4)
- Abstract
- Many writers of structured abstracts spend a good deal of time revising and polishing their texts - but is it worth it? Do readers notice the difference? In this paper we report three studies of readers using rating scales to judge (electronically) the clarity of an original and a revised abstract, both as a whole and in its constituent parts. In Study 1, with approximately 250 academics and research workers, we found some significant differences in favor of the revised abstract, but in Study 2, with approximately 210 information scientists, we found no significant effects. Pooling the data from Studies 1 and 2, however, in Study 3, led to significant differences at a higher probability level between the perception of the original and revised abstract as a whole and between the same components as found in Study 1. These results thus indicate that the revised abstract as a whole, as well as certain specific components of it, were judged significantly clearer than the original one. In short, the results of these experiments show that readers can and do perceive differences between original and revised texts - sometimes - and that therefore these efforts are worth the time and effort.