-
Wool, G.: Filing and precoordination : how subject headings are displayed in online catalogs and why it matters (2000)
0.07
0.06880386 = product of:
0.27521545 = sum of:
0.27521545 = weight(_text_:headings in 6612) [ClassicSimilarity], result of:
0.27521545 = score(doc=6612,freq=14.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.8510636 = fieldWeight in 6612, product of:
3.7416575 = tf(freq=14.0), with freq of:
14.0 = termFreq=14.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.046875 = fieldNorm(doc=6612)
0.25 = coord(1/4)
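The explanation trees in this listing follow Lucene's ClassicSimilarity (TF-IDF) formula, which the trees themselves spell out: tf = sqrt(freq), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and the final score is coord × queryWeight × fieldWeight. As a minimal sketch (plain Python, no Lucene dependency), the figures of the single-term tree above can be reproduced:

```python
import math

def classic_similarity(freq, idf, query_norm, field_norm, coord=0.25):
    """Recompute a single-term Lucene ClassicSimilarity score.

    tf          = sqrt(term frequency in the field)
    queryWeight = idf * queryNorm
    fieldWeight = tf * idf * fieldNorm
    score       = coord * (queryWeight * fieldWeight)
    """
    tf = math.sqrt(freq)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return coord * query_weight * field_weight

# Figures from the explanation tree above (doc 6612, term "headings"):
score = classic_similarity(freq=14.0, idf=4.8524013,
                           query_norm=0.06664293, field_norm=0.046875)
print(round(score, 8))  # agrees with 0.06880386 to ~1e-7
```

The same function reproduces the other single-term trees in this listing by substituting their freq, idf, and fieldNorm values.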
- Abstract
- Library of Congress Subject Headings retrieved as the results of a search in an online catalog are likely to be filed in straight alphabetical, word-by-word order, ignoring the semantic structures of these headings and scattering headings of a similar type. This practice makes LC headings unnecessarily difficult to use and negates much of their indexing power. Enthusiasm for filing simplicity and postcoordinate indexing are likely contributing factors to this phenomenon. Since the report Headings for Tomorrow (1992) first raised this issue, filing practices favoring postcoordination over precoordination appear to have become more widespread and more entrenched.
- Source
- The LCSH century: one hundred years with the Library of Congress Subject Headings system. Ed.: A.T. Stone
-
(Sears') List of Subject Headings (2000)
0.07
0.06784152 = product of:
0.2713661 = sum of:
0.2713661 = weight(_text_:headings in 112) [ClassicSimilarity], result of:
0.2713661 = score(doc=112,freq=10.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.83916 = fieldWeight in 112, product of:
3.1622777 = tf(freq=10.0), with freq of:
10.0 = termFreq=10.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.0546875 = fieldNorm(doc=112)
0.25 = coord(1/4)
- Content
- Predecessor: 'List of Subject Headings for small libraries, compiled from lists used in nine representative small libraries', Ed.: M.E. Sears. - 1st ed. 1923. - 2nd ed. 1926. - 3rd ed. 1933. - 4th ed. 1939, Ed.: I.S. Monro. - 5th ed. 1944: 'Sears List of Subject Headings', Ed. I. S. Monro. - 6th ed. 1950, Ed.: B.M. Frick. - 7th ed. 1954 - 8th ed. 1959. - 'List of Subject Headings'. - 9th. ed. 1965, Ed.: B.M. Westby. - 10th ed. 1972. - 11th ed. 1977. - 12th ed. 1982. - 13th ed. 1986, Ed.: C. Rovira u. C. Reyes. - 14th ed. 1991. Ed. M.T. Mooney. - 15th ed. 1994, Ed.: J. Miller - 16th ed. 1997, Ed.: J. Miller
- Object
- Sears List of Subject Headings
-
Knowlton, S.A.: Three decades since prejudices and antipathies : a study of changes in the Library of Congress Subject Headings (2005)
0.07
0.06784152 = product of:
0.2713661 = sum of:
0.2713661 = weight(_text_:headings in 5841) [ClassicSimilarity], result of:
0.2713661 = score(doc=5841,freq=10.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.83916 = fieldWeight in 5841, product of:
3.1622777 = tf(freq=10.0), with freq of:
10.0 = termFreq=10.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.0546875 = fieldNorm(doc=5841)
0.25 = coord(1/4)
- Abstract
- The Library of Congress Subject Headings have been criticized for containing biased subject headings. One leading critic has been Sanford Berman, whose 1971 monograph Prejudices and Antipathies: A Tract on the LC Subject Heads Concerning People (P&A) listed a number of objectionable headings and proposed remedies. In the decades since P&A was first published, many of Berman's suggestions have been implemented, while other headings remain unchanged. This paper compiles all of Berman's suggestions and tracks the changes that have occurred; a brief analysis of the remaining areas of bias is included.
-
Ho, J.: Applying form/genre headings to foreign films : a summary of AUTOCAT and OLAC-LIST discussions (2005)
0.07
0.06784152 = product of:
0.2713661 = sum of:
0.2713661 = weight(_text_:headings in 717) [ClassicSimilarity], result of:
0.2713661 = score(doc=717,freq=10.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.83916 = fieldWeight in 717, product of:
3.1622777 = tf(freq=10.0), with freq of:
10.0 = termFreq=10.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.0546875 = fieldNorm(doc=717)
0.25 = coord(1/4)
- Abstract
- In several discussions on two electronic lists (AUTOCAT and OLAC-LIST) from 1993 to 2003, librarians expressed interest in using form/genre headings to provide access to foreign films as a separate category of material, as well as by language and country of production, but observed that existing standards do not accommodate these practices. Various options were discussed, including the adaptation of subject headings intended for topical use, geographical subdivision of existing form/genre headings, and the creation of local headings. This paper summarizes the discussions and describes the local policy at Texas A&M University Libraries.
-
Kabel, S.; Hoog, R. de; Wielinga, B.J.; Anjewierden, A.: ¬The added value of task and ontology-based markup for information retrieval (2004)
0.07
0.06601483 = product of:
0.2640593 = sum of:
0.2640593 = weight(_text_:judge in 3210) [ClassicSimilarity], result of:
0.2640593 = score(doc=3210,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.5125094 = fieldWeight in 3210, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.046875 = fieldNorm(doc=3210)
0.25 = coord(1/4)
- Abstract
- In this report, we investigate how retrieving information can be improved through task-related indexing of documents based on ontologies. Different index types, varying from content-based keywords to structured task-based indexing ontologies, are compared in an experiment that simulates the task of creating instructional material from a database of source material. To be able to judge the added value of task- and ontology-related indexes, traditional information retrieval performance measures are extended with new measures reflecting the quality of the material produced with the retrieved information. The results of the experiment show that a structured task-based indexing ontology improves the quality of the product created from retrieved material only to some extent, but that it certainly improves the efficiency and effectiveness of search and retrieval and the precision of use.
-
Holsapple, C.W.; Joshi, K.D.: ¬A formal knowledge management ontology : conduct, activities, resources, and influences (2004)
0.07
0.06601483 = product of:
0.2640593 = sum of:
0.2640593 = weight(_text_:judge in 3235) [ClassicSimilarity], result of:
0.2640593 = score(doc=3235,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.5125094 = fieldWeight in 3235, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.046875 = fieldNorm(doc=3235)
0.25 = coord(1/4)
- Abstract
- This article describes a collaboratively engineered general-purpose knowledge management (KM) ontology that can be used by practitioners, researchers, and educators. The ontology is formally characterized in terms of nearly one hundred definitions and axioms that evolved from a Delphi-like process involving a diverse panel of over 30 KM practitioners and researchers. The ontology identifies and relates knowledge manipulation activities that an entity (e.g., an organization) can perform to operate on knowledge resources. It introduces a taxonomy for these resources, which indicates classes of knowledge that may be stored, embedded, and/or represented in an entity. It recognizes factors that influence the conduct of KM both within and across KM episodes. The Delphi panelists judge the ontology favorably overall: its ability to unify KM concepts, its comprehensiveness, and its utility. Moreover, various implications of the ontology for the KM field are examined as indicators of its utility for practitioners, educators, and researchers.
-
Maglaughlin, K.L.; Sonnenwald, D.H.: User perspectives on relevance criteria : a comparison among relevant, partially relevant, and not-relevant judgements (2002)
0.07
0.06601483 = product of:
0.2640593 = sum of:
0.2640593 = weight(_text_:judge in 201) [ClassicSimilarity], result of:
0.2640593 = score(doc=201,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.5125094 = fieldWeight in 201, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.046875 = fieldNorm(doc=201)
0.25 = coord(1/4)
- Abstract
- In this issue Maglaughlin and Sonnenwald provided 12 graduate students with searches related to the students' work and asked them to judge the twenty most recent retrieved representations by highlighting passages thought to contribute to relevance, marking out passages detracting from relevance, and providing a relevant, partially relevant, or not-relevant judgement on each. In recorded interviews they were asked how these decisions were made and to describe the three classes of judgement. The union of criteria identified in past studies did not seem to fully capture the information supplied, so a new set was produced and coding agreement was found to be adequate. Twenty-nine criteria were identified and grouped into six categories based upon the focus of the criterion. Multiple criteria are used for most judgements, and most criteria may have either a positive or negative effect. Content was the most frequently mentioned criterion.
-
Karamuftuoglu, M.: Information arts and information science : time to unite? (2006)
0.07
0.06601483 = product of:
0.2640593 = sum of:
0.2640593 = weight(_text_:judge in 330) [ClassicSimilarity], result of:
0.2640593 = score(doc=330,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.5125094 = fieldWeight in 330, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.046875 = fieldNorm(doc=330)
0.25 = coord(1/4)
- Abstract
- This article explicates the common ground between two currently independent fields of inquiry, namely information arts and information science, and suggests a framework that could unite them as a single field of study. The article defines and clarifies the meaning of information art and presents an axiological framework that could be used to judge the value of works of information art. The axiological framework is applied to examples of works of information art to demonstrate its use. The article argues that both information arts and information science could be studied under a common framework; namely, the domain-analytic or sociocognitive approach. It also is argued that the unification of the two fields could help enhance the meaning and scope of both information science and information arts and therefore be beneficial to both fields.
-
Hobson, S.P.; Dorr, B.J.; Monz, C.; Schwartz, R.: Task-based evaluation of text summarization using Relevance Prediction (2007)
0.07
0.06601483 = product of:
0.2640593 = sum of:
0.2640593 = weight(_text_:judge in 1938) [ClassicSimilarity], result of:
0.2640593 = score(doc=1938,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.5125094 = fieldWeight in 1938, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.046875 = fieldNorm(doc=1938)
0.25 = coord(1/4)
- Abstract
- This article introduces a new task-based evaluation measure called Relevance Prediction that is a more intuitive measure of an individual's performance on a real-world task than interannotator agreement. Relevance Prediction parallels what a user does in the real world task of browsing a set of documents using standard search tools, i.e., the user judges relevance based on a short summary and then that same user - not an independent user - decides whether to open (and judge) the corresponding document. This measure is shown to be a more reliable measure of task performance than LDC Agreement, a current gold-standard based measure used in the summarization evaluation community. Our goal is to provide a stable framework within which developers of new automatic measures may make stronger statistical statements about the effectiveness of their measures in predicting summary usefulness. We demonstrate - as a proof-of-concept methodology for automatic metric developers - that a current automatic evaluation measure has a better correlation with Relevance Prediction than with LDC Agreement and that the significance level for detected differences is higher for the former than for the latter.
-
Díaz, A.; Gervás, P.: User-model based personalized summarization (2007)
0.07
0.06601483 = product of:
0.2640593 = sum of:
0.2640593 = weight(_text_:judge in 1952) [ClassicSimilarity], result of:
0.2640593 = score(doc=1952,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.5125094 = fieldWeight in 1952, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.046875 = fieldNorm(doc=1952)
0.25 = coord(1/4)
- Abstract
- The potential of summary personalization is high, because a summary that would be useless for deciding the relevance of a document when generated generically may become useful if the sentences selected match the user's interests. In this paper we defend the use of a personalized summarization facility to maximize the density of relevance of selections sent by a personalized information system to a given user. The personalization is applied to the digital newspaper domain, and it uses a user model that stores long- and short-term interests using four reference systems: sections, categories, keywords, and feedback terms. On the other hand, it is crucial to measure how much information is lost during the summarization process, and how this information loss may affect the ability of the user to judge the relevance of a given document. The results obtained in two personalization systems show that personalized summaries perform better than generic and generic-personalized summaries in terms of identifying documents that satisfy user preferences. We also considered a user-centred direct evaluation that showed a high level of user satisfaction with the summaries.
-
Moreira Orengo, V.; Huyck, C.: Relevance feedback and cross-language information retrieval (2006)
0.07
0.06601483 = product of:
0.2640593 = sum of:
0.2640593 = weight(_text_:judge in 1970) [ClassicSimilarity], result of:
0.2640593 = score(doc=1970,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.5125094 = fieldWeight in 1970, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.046875 = fieldNorm(doc=1970)
0.25 = coord(1/4)
- Abstract
- This paper presents a study of relevance feedback in a cross-language information retrieval environment. We have performed an experiment in which Portuguese speakers are asked to judge the relevance of English documents; documents hand-translated to Portuguese and documents automatically translated to Portuguese. The goals of the experiment were to answer two questions (i) how well can native Portuguese searchers recognise relevant documents written in English, compared to documents that are hand translated and automatically translated to Portuguese; and (ii) what is the impact of misjudged documents on the performance improvement that can be achieved by relevance feedback. Surprisingly, the results show that machine translation is as effective as hand translation in aiding users to assess relevance in the experiment. In addition, the impact of misjudged documents on the performance of RF is overall just moderate, and varies greatly for different query topics.
-
Leroy, G.; Miller, T.; Rosemblat, G.; Browne, A.: ¬A balanced approach to health information evaluation : a vocabulary-based naïve Bayes classifier and readability formulas (2008)
0.07
0.06601483 = product of:
0.2640593 = sum of:
0.2640593 = weight(_text_:judge in 2998) [ClassicSimilarity], result of:
0.2640593 = score(doc=2998,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.5125094 = fieldWeight in 2998, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.046875 = fieldNorm(doc=2998)
0.25 = coord(1/4)
- Abstract
- Since millions seek health information online, it is vital for this information to be comprehensible. Most studies use readability formulas, which ignore vocabulary, and conclude that online health information is too difficult. We developed a vocabulary-based, naïve Bayes classifier to distinguish between three difficulty levels in text. It proved 98% accurate in a 250-document evaluation. We compared our classifier with readability formulas for 90 new documents with different origins and asked representative human evaluators, an expert and a consumer, to judge each document. Average readability grade levels for educational and commercial pages were 10th grade or higher, too difficult according to current literature. In contrast, the classifier showed that 70-90% of these pages were written at an intermediate, appropriate level, indicating that vocabulary usage is frequently appropriate in text considered too difficult by readability formula evaluations. The expert considered the pages more difficult for a consumer than the consumer did.
-
Cosijn, E.: Relevance judgments and measurements (2009)
0.07
0.06601483 = product of:
0.2640593 = sum of:
0.2640593 = weight(_text_:judge in 842) [ClassicSimilarity], result of:
0.2640593 = score(doc=842,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.5125094 = fieldWeight in 842, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.046875 = fieldNorm(doc=842)
0.25 = coord(1/4)
- Abstract
- Users intuitively know which documents are relevant when they see them. Formal relevance assessment, however, is a complex issue. In this entry, relevance assessment is described from both a human perspective and a systems perspective. Humans judge relevance in terms of the relation between the documents retrieved and the way in which these documents are understood and used. This is a subjective and personal judgment and is called user relevance. Systems compute a function between the query and the document features that the system builders believe will cause documents to be ranked by the likelihood that a user will find the documents relevant. This is an objective measurement of relevance in terms of relations between the query and the documents retrieved; this is called system relevance (or sometimes similarity).
-
Niggemann, E.: Magda Heiner-Freiling (1950-2007) (2007)
0.06
0.06387939 = product of:
0.12775879 = sum of:
0.06707949 = weight(_text_:und in 1676) [ClassicSimilarity], result of:
0.06707949 = score(doc=1676,freq=56.0), product of:
0.1478073 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06664293 = queryNorm
0.4538307 = fieldWeight in 1676, product of:
7.483315 = tf(freq=56.0), with freq of:
56.0 = termFreq=56.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.02734375 = fieldNorm(doc=1676)
0.060679298 = weight(_text_:headings in 1676) [ClassicSimilarity], result of:
0.060679298 = score(doc=1676,freq=2.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.18764187 = fieldWeight in 1676, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.02734375 = fieldNorm(doc=1676)
0.5 = coord(2/4)
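When more than one query term matches, ClassicSimilarity sums the per-term scores and scales the sum by coord(matched/total), as the two-term tree above shows. A minimal sketch (plain Python) reproducing the figures for doc 1676, where two of four query terms matched:

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """Per-term ClassicSimilarity contribution: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)
    return (idf * query_norm) * (tf * idf * field_norm)

# Figures from the explanation tree above (doc 1676):
und      = term_score(freq=56.0, idf=2.217899,  query_norm=0.06664293, field_norm=0.02734375)
headings = term_score(freq=2.0,  idf=4.8524013, query_norm=0.06664293, field_norm=0.02734375)
coord = 2 / 4  # two of four query terms matched
print(round(coord * (und + headings), 8))  # agrees with 0.06387939 to ~1e-7
```

Note that a common term like "und" (idf 2.22) contributes about as much as a rare term here only because of its very high in-document frequency (56 occurrences).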
- Content
- "Magda Heiner-Freiling, head of the subject cataloguing department at the Frankfurt site of the Deutsche Nationalbibliothek, died in an accident on 22 July 2007, at the age of 57, during her holiday. She will live on in our memory as a colleague whose enormous expertise we valued as much as her warm-hearted concern for the well-being of her colleagues and staff. She was an excellent expert and a committed librarian, and above all she was a cordial, always helpful, compassionate colleague and superior, prepared if need be to fight on behalf of others. Magda Heiner-Freiling connected and integrated people, creating closeness and familiarity not only in her immediate surroundings but effortlessly across geographical distances as well. Her fighting spirit, her loyalty, her social competence, her capacity for enthusiasm and her refreshing directness were qualities I came to value above all in the past two years, during which she sat across from me as head of department. After her first state examination in German, English and educational science, and further studies in modern German literature, political science and European ethnology at the Johannes Gutenberg-Universität in Mainz and the Philipps-Universität in Marburg, her library career began in 1974 as a trainee (Bibliotheksreferendarin) at the Deutsche Nationalbibliothek in Frankfurt am Main. In 1976 she passed the state examination for the higher service in academic libraries at the Bibliotheksschule Frankfurt am Main. Alongside her work as a subject specialist, Magda Heiner-Freiling contributed to the development of the RSWK from the very first hour.
She looked after the concerns of public libraries with great commitment and in the early 1990s led the expert group on subject cataloguing of children's and young adult literature, fiction, and school and vocational school books ("Expertengruppe Erschließung für Kinder- und Jugendliteratur, Belletristik, Schul- und Berufsschulbücher"); for many years she also contributed to the work of the expert group RSWK/SWD. Her pronounced interest in the other, in other languages and other cultures, was matched by her particular interest in international classification practice and theory and in the multilingual approach to controlled vocabularies. From 1994 to 2000 she was a member of the IFLA "Section on Classification and Indexing / Standing Committee" and always carried out this work with great enthusiasm. Beyond that, she worked actively in the IFLA groups "Working Group of Anonymous Classics", "Working Group on Guidelines for Multilingual Thesauri" and "Working Group 'Survey on Subject Heading Languages in National Bibliographies'".
Magda Heiner-Freiling was the driving force, the initiator, the soul of the introduction of the Dewey Decimal Classification in Germany; she was project leader of the translation of the DDC into German ("DDC Deutsch", 2002-2005), chair of the DDC expert group (since 2001), and co-founder of the DDC consortium. Her delight in languages showed itself in her shaping of, and energetic participation in, the MACS project ("Multilingual Access to Subject Headings"); out of the experience gained with the DDC grew a new project, "CrissCross". Magda Heiner-Freiling regarded library work as a second home, as a living space that, from the perspective of a committed trade unionist, was there to be shaped. She was absorbed in it and shaped her library environment with her knowledge and her professional judgment. At the same time she raised two children and was deeply rooted in the cultural life of Frankfurt. A passionate traveller, she journeyed widely between Morocco and the Silk Road and learned Arabic, but was equally known for her generous, cordial hospitality and regularly hosted guests from the library world at her home. We mourn a wonderful person. Magda Heiner-Freiling will live on in the memory of her colleagues at the Deutsche Nationalbibliothek and of the community of subject cataloguers in Germany and worldwide: as a colleague whose enormous expertise we valued as much as her lively interest in the people around her, her cordiality, her helpfulness, her openness, her commitment to social justice and her concern for the well-being of the people in her professional environment. Such a combination of expertise and humanity is rare. We will miss Magda Heiner-Freiling very much, in every respect."
- Source
- Zeitschrift für Bibliothekswesen und Bibliographie. 54(2007) H.4/5, S.293
-
Dean, R.J.: FAST: development of simplified headings for metadata (2004)
0.06
0.06369999 = product of:
0.25479996 = sum of:
0.25479996 = weight(_text_:headings in 682) [ClassicSimilarity], result of:
0.25479996 = score(doc=682,freq=12.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.7879317 = fieldWeight in 682, product of:
3.4641016 = tf(freq=12.0), with freq of:
12.0 = termFreq=12.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.046875 = fieldNorm(doc=682)
0.25 = coord(1/4)
- Abstract
- The Library of Congress Subject Headings schema (LCSH) is the most commonly used and widely accepted subject vocabulary for general application. It is the de facto universal controlled vocabulary and has been a model for developing subject heading systems by many countries. However, LCSH's complex syntax and rules for constructing headings restrict its application by requiring highly skilled personnel and limit the effectiveness of automated authority control. Recent trends, driven to a large extent by the rapid growth of the Web, are forcing changes in bibliographic control systems to make them easier to use, understand, and apply, and subject headings are no exception. The purpose of adapting the LCSH with a simplified syntax to create FAST (Faceted Application of Subject Terminology) headings is to retain the very rich vocabulary of LCSH while making the schema easier to understand, control, apply, and use. The schema maintains compatibility with LCSH--any valid Library of Congress subject heading can be converted to FAST headings.
-
Kuhr, P.S.: Putting the world back together : mapping multiple vocabularies into a single thesaurus (2003)
0.06
0.06287668 = product of:
0.12575336 = sum of:
0.02173171 = weight(_text_:und in 4813) [ClassicSimilarity], result of:
0.02173171 = score(doc=4813,freq=2.0), product of:
0.1478073 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06664293 = queryNorm
0.14702731 = fieldWeight in 4813, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.046875 = fieldNorm(doc=4813)
0.10402165 = weight(_text_:headings in 4813) [ClassicSimilarity], result of:
0.10402165 = score(doc=4813,freq=2.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.32167178 = fieldWeight in 4813, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.046875 = fieldNorm(doc=4813)
0.5 = coord(2/4)
- Abstract
- This paper describes an ongoing project in which the subject headings contained in twelve controlled vocabularies covering multiple disciplines from the humanities to the sciences, including law and education among others, are being collapsed into a single vocabulary and reference structure. The design of the database, the algorithms created to programmatically link like concepts, and daily maintenance are detailed. The problems and pitfalls of dealing with multiple vocabularies are noted, as well as the difficulties in relying purely on computer-generated algorithms. The application of this megathesaurus to bibliographic records and the methodology of retrieval are explained.
- Theme
- Konzeption und Anwendung des Prinzips Thesaurus
-
Landry, P.: Multilingual subject access : the linking approach of MACS (2004)
0.06
0.06287668 = product of:
0.12575336 = sum of:
0.02173171 = weight(_text_:und in 9) [ClassicSimilarity], result of:
0.02173171 = score(doc=9,freq=2.0), product of:
0.1478073 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06664293 = queryNorm
0.14702731 = fieldWeight in 9, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.046875 = fieldNorm(doc=9)
0.10402165 = weight(_text_:headings in 9) [ClassicSimilarity], result of:
0.10402165 = score(doc=9,freq=2.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.32167178 = fieldWeight in 9, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.046875 = fieldNorm(doc=9)
0.5 = coord(2/4)
- Abstract
- The MACS (Multilingual access to subjects) project is one of the many projects that are currently exploring solutions to multilingual subject access to online catalogs. Its strategy is to develop a Web-based link and search interface through which equivalents between three subject heading languages can be created and maintained: SWD/RSWK (Schlagwortnormdatei/Regeln für den Schlagwortkatalog) for German, RAMEAU (Répertoire d'autorité-matière encyclopédique et alphabétique unifié) for French, and LCSH (Library of Congress Subject Headings) for English; through the same interface, users can access online databases in the language of their choice. Factors that have led to this approach will be examined and the MACS linking strategy will be explained. The trend toward using mapping or linking strategies between different controlled vocabularies to create multilingual access challenges the traditional view of the multilingual thesaurus.
- Theme
- Konzeption und Anwendung des Prinzips Thesaurus
-
Sears' list of subject headings (2007)
0.06
0.06250924 = product of:
0.25003695 = sum of:
0.25003695 = weight(_text_:headings in 617) [ClassicSimilarity], result of:
0.25003695 = score(doc=617,freq=26.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.7732028 = fieldWeight in 617, product of:
5.0990195 = tf(freq=26.0), with freq of:
26.0 = termFreq=26.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.03125 = fieldNorm(doc=617)
0.25 = coord(1/4)
- Footnote
- Rez. in: KO 35(2008) no.1, S.55-58 (M.P. Satija): "The Sears List, first published in 1923, has survived times of destabilizing changes while keeping reasonable continuity with the past. Dr. Joseph Miller, at the helm since 1992 and the longest-serving editor in the eighty-four years of the List's existence, first edited the 15th edition of the Sears (1994). Over the years, the Sears has achieved more than it had hoped for: ever-increasing use the world over. In fact, the turbulent progress of media and information theories has forced the Sears to keep up with the changing times. Knowledge organization is a shifting sand in the electronic era. Vast and varied changes generate not only new information, but also new terms and phrases. It is trite to say that the electronic media have transformed the way in which we access information and knowledge. The new edition of the Sears has absorbed these changes to reflect the times. The 19th edition, released in May 2007, has about 440 new headings, bringing the new total to over 8000 headings, which keeps the growth rate at five percent. Newly added headings generally fall into one of two categories: a) headings for new and current subjects and b) headings previously missed. A few more have been modified. New editions are produced regularly to: - incorporate terms for new subjects, - restructure the form of old headings to suit the changing information needs and information-seeking behaviour of the users, - add new terms to old subject headings to reflect current usage, - delete obsolete subjects, - forge new relations between subjects and their terms. Two major areas of new additions are in the fields of Islam, as might be expected, and the graphic novel - the latter has thirty headings, perhaps drawn from the WilsonWeb Database on Graphic Novels Core Collection. ...
The lapses are minor and could be forgiven; they in no way detract from this continuously-expanding and well-established tool for subject cataloguing in small and medium libraries. The handy List and its lucid introduction make Sears an excellent and convenient tool for teaching the use and principles of subject headings, as well as methods of vocabulary control. With its glossy and flowery cover, clear typeface and high production standards, the new edition is particularly welcome."
- LCSH
- Subject headings
- Object
- Sears List of Subject Headings
- Subject
- Subject headings
-
Cazan, C.: Medizinische Ontologien : das Ende des MeSH (2006)
0.06
0.06177809 = product of:
0.12355618 = sum of:
0.054208413 = weight(_text_:und in 1132) [ClassicSimilarity], result of:
0.054208413 = score(doc=1132,freq=28.0), product of:
0.1478073 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06664293 = queryNorm
0.36675057 = fieldWeight in 1132, product of:
5.2915025 = tf(freq=28.0), with freq of:
28.0 = termFreq=28.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.03125 = fieldNorm(doc=1132)
0.06934777 = weight(_text_:headings in 1132) [ClassicSimilarity], result of:
0.06934777 = score(doc=1132,freq=2.0), product of:
0.32337824 = queryWeight, product of:
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.06664293 = queryNorm
0.21444786 = fieldWeight in 1132, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
4.8524013 = idf(docFreq=942, maxDocs=44421)
0.03125 = fieldNorm(doc=1132)
0.5 = coord(2/4)
- Abstract
- The complexity of medical questions and of medical information management has been an especially important topic since the beginnings of computer science. Despite the failure of artificial intelligence in the 1980s, its core ideas have borne fruit. Through the parallel development of a number of other scientific disciplines and the exponential growth of computer hardware, the high demands placed on medical information retrieval could in the end be met after all. Tim Berners-Lee's programmatic call for a "Semantic Web" in 2000 brought wider attention to the topic of ontologies for machine-readable repositories in both general and specialized language. Since medicine (PubMed) already has a working ontology in operation in the form of a semantic network, the Unified Medical Language System (UMLS) developed by the NLM some twenty years ago, it is high time for medical librarians and medical documentalists to engage with it. Despite their obfuscating informatics terminology, ontologies can in essence be understood as tools of classification. Here, library and documentation science can make substantial contributions. The present report offers an introduction to the topic, explains the essential elements of the UMLS, and closes with an annotated list of notes and references for further study of ontologies.
- Content
- This article is not a swan song for MeSH (= Medical Subject Headings in Medline/PubMed), as one might suspect. Rather, using the Unified Medical Language System developed by the National Library of Medicine, and without informatics-heavy jargon, it explains what requirements ontologies must meet, requirements that are demanded and wished for everywhere in connection with the Semantic Web. A reading for beginners, one that encourages deepening the conceptual confidence gained by following the suggestions for further reading. Since the UMLS serves here above all as an example, librarians, documentalists, and information specialists from other fields will also read the article with profit, and will learn that our professional knowledge of subject indexing and of the use and shaping of authority files and thesauri is in demand in the development of ontologies! (Eveline Pipp, Universitätsbibliothek Innsbruck). - The electronic version of this article is available at: http://www.egms.de/en/journals/mbi/2006-6/mbi000049.shtml.
- Theme
- Konzeption und Anwendung des Prinzips Thesaurus
-
Information ethics : privacy, property, and power (2005)
0.06
0.06141512 = product of:
0.12283024 = sum of:
0.110024706 = weight(_text_:judge in 3392) [ClassicSimilarity], result of:
0.110024706 = score(doc=3392,freq=2.0), product of:
0.5152282 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.06664293 = queryNorm
0.21354558 = fieldWeight in 3392, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.01953125 = fieldNorm(doc=3392)
0.0128055345 = weight(_text_:und in 3392) [ClassicSimilarity], result of:
0.0128055345 = score(doc=3392,freq=4.0), product of:
0.1478073 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06664293 = queryNorm
0.086636685 = fieldWeight in 3392, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.01953125 = fieldNorm(doc=3392)
0.5 = coord(2/4)
- BK
- 06.00 / Information und Dokumentation: Allgemeines
- Classification
- 06.00 / Information und Dokumentation: Allgemeines
- Footnote
- Rez. in: JASIST 58(2007) no.2, S.302 (L.A. Ennis): "This is an important and timely anthology of articles "on the normative issues surrounding information control" (p. 11). Using an interdisciplinary approach, Moore's work takes a broad look at the relatively new field of information ethics. Covering a variety of disciplines including applied ethics, intellectual property, privacy, free speech, and more, the book provides information professionals of all kinds with a valuable and thought-provoking resource. Information Ethics is divided into five parts and twenty chapters or articles. At the end of each of the five parts, the editor has included a few "discussion cases," which allow readers to apply what they have just read to potential real life examples. Part I, "An Ethical Framework for Analysis," provides readers with an introduction to reasoning and ethics. This complex and philosophical section of the book contains five articles and four discussion cases. All five articles are thought-provoking and challenging writings on morality. For instance, in the first article, "Introduction to Moral Reasoning," Tom Regan examines how not to answer a moral question. For example, he argues that appealing to what the majority believes as a means of determining what is and is not moral is flawed. "The Metaphysics of Morals" by Immanuel Kant looks at the reasons behind actions. According to Kant, to be moral one has to do the right thing for the right reasons. By including materials that force the reader to think more broadly and deeply about what is right and wrong, Moore has provided an important foundation and backdrop for the rest of the book. Part II, "Intellectual Property: Moral and Legal Concerns," contains five articles and three discussion cases for tackling issues like ownership, patents, copyright, and biopiracy. This section takes a probing look at intellectual and intangible property from a variety of viewpoints. 
For instance, in "Intellectual Property is Still Property," Judge Frank Easterbrook argues that intellectual property is no different than physical property and should not be treated any differently by law. Tom Palmer's article, "Are Patents and Copyrights Morally Justified," however, uses historical examples to show how intellectual and physical properties differ.