-
Kwok, K.L.: ¬A network approach to probabilistic information retrieval (1995)
0.01
0.0127561055 = product of:
0.051024422 = sum of:
0.051024422 = weight(_text_:how in 6696) [ClassicSimilarity], result of:
0.051024422 = score(doc=6696,freq=2.0), product of:
0.2325812 = queryWeight, product of:
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.07027929 = queryNorm
0.21938327 = fieldWeight in 6696, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.046875 = fieldNorm(doc=6696)
0.25 = coord(1/4)
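The indented blocks in this listing are Lucene ClassicSimilarity "explain" traces, and the same arithmetic recurs for every hit below. As a sanity check, here is a minimal sketch that recomputes the first trace from its printed factors, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))); it is an illustration, not Lucene's source:

```python
import math

# Factors printed in the trace above (doc 6696, term "how").
freq, field_norm, coord, query_norm = 2.0, 0.046875, 0.25, 0.07027929
doc_freq, max_docs = 4411, 44421

idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.3093843
tf = math.sqrt(freq)                             # 1.4142135
query_weight = idf * query_norm                  # 0.2325812
field_weight = tf * idf * field_norm             # 0.21938327 = fieldWeight
print(query_weight * field_weight * coord)       # 0.0127561055...
```

Most of the "how" hits below repeat exactly these factors (freq 2.0, fieldNorm 0.046875, coord 1/4), which is why so many entries share the score 0.0127561055.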
- Abstract
- Shows how probabilistic information retrieval based on document components may be implemented as a feedforward (feedbackward) artificial neural network. The network supports adaptation of connection weights as well as the growing of new edges between queries and terms based on user relevance feedback data for training, and it reflects query modification and expansion in information retrieval. A learning rule is applied that can also be viewed as supporting sequential learning using a harmonic sequence learning rate. Experimental results with 4 standard small collections and a large Wall Street Journal collection show that small query expansion levels of about 30 terms can achieve most of the gains at the low-recall high-precision region, while larger expansion levels continue to provide gains at the high-recall low-precision region of a precision-recall curve.
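The adaptive behaviour described above lends itself to a short illustration. Below is a hedged sketch of a query-term weight update driven by relevance feedback with a harmonic learning-rate schedule (rate 1/k on the k-th feedback sample); the delta-rule form and all names are assumptions made for illustration, not Kwok's actual learning rule:

```python
def update_weights(weights, feedback_samples):
    # weights: term -> weight of the edge from the query node to that term.
    # feedback_samples: sequence of (term_vector, is_relevant) pairs.
    for k, (term_vec, relevant) in enumerate(feedback_samples, start=1):
        rate = 1.0 / k                    # harmonic sequence learning rate
        target = 1.0 if relevant else 0.0
        for term, x in term_vec.items():
            w = weights.get(term, 0.0)    # a new query-term edge "grows" here
            weights[term] = w + rate * (target - w) * x
    return weights
```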
-
Buckley, C.; Allan, J.; Salton, G.: Automatic routing and retrieval using Smart : TREC-2 (1995)
0.01
0.0127561055 = product of:
0.051024422 = sum of:
0.051024422 = weight(_text_:how in 6699) [ClassicSimilarity], result of:
0.051024422 = score(doc=6699,freq=2.0), product of:
0.2325812 = queryWeight, product of:
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.07027929 = queryNorm
0.21938327 = fieldWeight in 6699, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.046875 = fieldNorm(doc=6699)
0.25 = coord(1/4)
- Abstract
- The Smart information retrieval project emphasizes completely automatic approaches to the understanding and retrieval of large quantities of text. The work in the TREC-2 environment continues, performing both routing and ad hoc experiments. The ad hoc work extends investigations into combining global similarities, which give an overall indication of how a document matches a query, with local similarities that identify a smaller part of the document matching the query. The performance of the ad hoc runs is good, but it is clear that full advantage is not yet being taken of the available local information. The routing experiments use conventional relevance feedback approaches, but with a much greater degree of query expansion than previously attempted. The length of a query vector is increased by a factor of 5 to 10 by adding terms found in previously seen relevant documents. This approach improves effectiveness by 30-40% over the original query.
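The expansion step described here, growing the query vector by a factor of 5 to 10 with terms from previously seen relevant documents, can be pictured with a hedged Rocchio-style sketch; the alpha/beta weights and the term-selection rule below are illustrative assumptions, not the actual Smart TREC-2 weighting:

```python
from collections import Counter

def expand_query(query_vec, relevant_docs, alpha=1.0, beta=0.75, n_expand=300):
    # query_vec: term -> weight; relevant_docs: list of token lists.
    centroid = Counter()
    for doc in relevant_docs:
        centroid.update(doc)
    expanded = Counter({t: alpha * w for t, w in query_vec.items()})
    for term, freq in centroid.most_common(n_expand):
        expanded[term] += beta * freq / max(1, len(relevant_docs))
    return expanded
```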
-
Greenberg, J.: Optimal query expansion (QE) processing methods with semantically encoded structured thesaurus terminology (2001)
0.01
0.0127561055 = product of:
0.051024422 = sum of:
0.051024422 = weight(_text_:how in 6750) [ClassicSimilarity], result of:
0.051024422 = score(doc=6750,freq=2.0), product of:
0.2325812 = queryWeight, product of:
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.07027929 = queryNorm
0.21938327 = fieldWeight in 6750, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.046875 = fieldNorm(doc=6750)
0.25 = coord(1/4)
- Abstract
- While researchers have explored the value of structured thesauri as controlled vocabularies for general information retrieval (IR) activities, they have not identified the optimal query expansion (QE) processing methods for taking advantage of the semantic encoding underlying the terminology in these tools. The study reported on in this article addresses this question, examining whether QE via semantically encoded thesaurus terminology is more effective in an automatic or an interactive processing environment. The research found that, regardless of end-users' retrieval goals, synonyms and partial synonyms (SYNs) and narrower terms (NTs) are generally good candidates for automatic QE, and that related terms (RTs) are better candidates for interactive QE. The study also examined end-users' selection of semantically encoded thesaurus terms for interactive QE, and explored how retrieval goals and QE processes may be combined in future thesaurus-supported IR systems.
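The paper's central finding maps onto a simple control flow: add SYNs and NTs automatically, offer RTs for the user to choose. A hedged sketch (the thesaurus layout and all names are illustrative assumptions):

```python
def expand(query_terms, thesaurus, interactive_pick=None):
    # thesaurus: term -> {"SYN": [...], "NT": [...], "RT": [...]}.
    expanded = set(query_terms)
    rt_candidates = set()
    for term in query_terms:
        relations = thesaurus.get(term, {})
        expanded.update(relations.get("SYN", []))   # automatic QE
        expanded.update(relations.get("NT", []))    # automatic QE
        rt_candidates.update(relations.get("RT", []))
    if interactive_pick:                            # interactive QE for RTs
        expanded.update(interactive_pick(sorted(rt_candidates)))
    return expanded
```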
-
Zazo, A.F.; Figuerola, C.G.; Berrocal, J.L.A.; Rodriguez, E.: Reformulation of queries using similarity-thesauri (2005)
0.01
0.0127561055 = product of:
0.051024422 = sum of:
0.051024422 = weight(_text_:how in 2043) [ClassicSimilarity], result of:
0.051024422 = score(doc=2043,freq=2.0), product of:
0.2325812 = queryWeight, product of:
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.07027929 = queryNorm
0.21938327 = fieldWeight in 2043, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.046875 = fieldNorm(doc=2043)
0.25 = coord(1/4)
- Abstract
- One of the major problems in information retrieval is the formulation of queries on the part of the user, which entails specifying a set of words or terms that express the informational need. However, it is well known that two people can assign different terms to refer to the same concepts. The techniques that attempt to reduce this problem as far as possible generally start from a first search and then study how the initial query can be modified to obtain better results. In general, the construction of the new query involves expanding the terms of the initial query and recalculating the importance of each term in the expanded query. Depending on the technique used to formulate the new query, several strategies can be distinguished. These strategies are based on the idea that if two terms are similar (with respect to some criterion), the documents in which both terms appear frequently will also be related. The technique we used in this study is known as query expansion using similarity thesauri.
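As a rough illustration of a similarity thesaurus, the sketch below calls two terms similar when they tend to occur in the same documents (cosine over binary postings). Real similarity thesauri are built from weighted term-document statistics, so treat this as the bare idea only:

```python
import math
from collections import defaultdict

def build_similarity_thesaurus(docs, top_k=5):
    # docs: list of token lists; a term is represented by the set of
    # documents it occurs in.
    postings = defaultdict(set)
    for i, doc in enumerate(docs):
        for term in doc:
            postings[term].add(i)

    def cos(a, b):
        overlap = len(postings[a] & postings[b])
        return overlap / math.sqrt(len(postings[a]) * len(postings[b]))

    return {t: sorted(((cos(t, u), u) for u in postings if u != t),
                      reverse=True)[:top_k]
            for t in postings}
```

Expanding a query then amounts to adding the top-ranked neighbours of each query term and recalculating the term weights.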
-
Johnson, J.D.: On contexts of information seeking (2003)
0.01
0.0127561055 = product of:
0.051024422 = sum of:
0.051024422 = weight(_text_:how in 2082) [ClassicSimilarity], result of:
0.051024422 = score(doc=2082,freq=2.0), product of:
0.2325812 = queryWeight, product of:
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.07027929 = queryNorm
0.21938327 = fieldWeight in 2082, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.046875 = fieldNorm(doc=2082)
0.25 = coord(1/4)
- Abstract
- While surprisingly little has been written about context at a meaningful level, context is central to most theoretical approaches to information seeking. In this essay I explore in more detail three senses of context. First, I look at context as equivalent to the situation in which a process is immersed. Second, I discuss contingency approaches that detail active ingredients of the situation that have specific, predictable effects. Third, I examine major frameworks for meaning systems. Then, I discuss how a deeper appreciation of context can enhance our understanding of the process of information seeking by examining two vastly different contexts in which it occurs: organizational and cancer-related, an exemplar of everyday life information seeking. This essay concludes with a discussion of the value that can be added to information seeking research and theory as a result of a deeper appreciation of context, particularly in terms of our current multi-contextual environment and individuals taking an active role in contextualizing.
-
Lin, J.; DiCuccio, M.; Grigoryan, V.; Wilbur, W.J.: Navigating information spaces : a case study of related article search in PubMed (2008)
0.01
0.0127561055 = product of:
0.051024422 = sum of:
0.051024422 = weight(_text_:how in 3124) [ClassicSimilarity], result of:
0.051024422 = score(doc=3124,freq=2.0), product of:
0.2325812 = queryWeight, product of:
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.07027929 = queryNorm
0.21938327 = fieldWeight in 3124, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.046875 = fieldNorm(doc=3124)
0.25 = coord(1/4)
- Abstract
- The concept of an "information space" provides a powerful metaphor for guiding the design of interactive retrieval systems. We present a case study of related article search, a browsing tool designed to help users navigate the information space defined by results of the PubMed® search engine. This feature leverages content-similarity links that tie MEDLINE® citations together in a vast document network. We examine the effectiveness of related article search from two perspectives: a topological analysis of networks generated from information needs represented in the TREC 2005 genomics track, and a query log analysis of real PubMed users. Together, these data suggest that related article search is a useful feature and that browsing related articles has become an integral part of how users interact with PubMed.
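The content-similarity links can be pictured as a k-nearest-neighbour graph over document vectors. A hedged sketch (cosine similarity and all parameters are assumptions; PubMed's actual related-article ranking is more elaborate):

```python
import numpy as np

def related_articles(doc_vectors, doc_id, k=5):
    # doc_vectors: (n_docs, dim) array of content vectors.
    V = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
    sims = V @ V[doc_id]          # cosine similarity to every document
    sims[doc_id] = -1.0           # exclude the article itself
    return np.argsort(sims)[::-1][:k]
```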
-
Huang, L.; Milne, D.; Frank, E.; Witten, I.H.: Learning a concept-based document similarity measure (2012)
0.01
0.0127561055 = product of:
0.051024422 = sum of:
0.051024422 = weight(_text_:how in 1372) [ClassicSimilarity], result of:
0.051024422 = score(doc=1372,freq=2.0), product of:
0.2325812 = queryWeight, product of:
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.07027929 = queryNorm
0.21938327 = fieldWeight in 1372, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.046875 = fieldNorm(doc=1372)
0.25 = coord(1/4)
- Abstract
- Document similarity measures are crucial components of many text-analysis tasks, including information retrieval, document classification, and document clustering. Conventional measures are brittle: They estimate the surface overlap between documents based on the words they mention and ignore deeper semantic connections. We propose a new measure that assesses similarity at both the lexical and semantic levels, and learns from human judgments how to combine them by using machine-learning techniques. Experiments show that the new measure produces values for documents that are more consistent with people's judgments than people are with each other. We also use it to classify and cluster large document sets covering different genres and topics, and find that it improves both classification and clustering performance.
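The combination step can be sketched directly: given lexical and semantic similarity scores for document pairs, plus human judgments, fit mixing weights by least squares. The linear form is an assumption made for illustration; the paper's machine-learning setup is richer:

```python
import numpy as np

def fit_combiner(lexical_sims, semantic_sims, human_scores):
    # Learn w_lex, w_sem and a bias so that the combined score best
    # matches the human similarity judgments (ordinary least squares).
    X = np.column_stack([lexical_sims, semantic_sims,
                         np.ones(len(lexical_sims))])
    w, *_ = np.linalg.lstsq(X, np.asarray(human_scores), rcond=None)
    return lambda lex, sem: w[0] * lex + w[1] * sem + w[2]
```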
-
Wang, Y.-H.; Jhuo, P.-S.: ¬A semantic faceted search with rule-based inference (2009)
0.01
0.0127561055 = product of:
0.051024422 = sum of:
0.051024422 = weight(_text_:how in 1540) [ClassicSimilarity], result of:
0.051024422 = score(doc=1540,freq=2.0), product of:
0.2325812 = queryWeight, product of:
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.07027929 = queryNorm
0.21938327 = fieldWeight in 1540, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.046875 = fieldNorm(doc=1540)
0.25 = coord(1/4)
- Abstract
- Semantic search has become an active research area of the Semantic Web in recent years. The classification methodology plays a critical role at the beginning of the search process in filtering out irrelevant information. However, applications related to Folksonomy suffer from many obstacles. This study attempts to eliminate the problems resulting from Folksonomy using existing semantic technology. We also focus on how to effectively integrate heterogeneous ontologies over the Internet to preserve the integrity of domain knowledge. A faceted logic layer is abstracted in order to strengthen the category framework and organize existing available ontologies, following a series of steps based on the methodology of faceted classification and ontology construction. The results show that our approach can facilitate the integration of inconsistent or even heterogeneous ontologies. This paper also generalizes the principles of picking appropriate facets, with which our facet browser fully complies, so that better semantic search results can be obtained.
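The faceted logic layer can be illustrated by its most basic operation: narrowing a result set by intersecting the user's facet selections. A hedged sketch with an assumed item layout (the rule-based inference over integrated ontologies goes well beyond this):

```python
def faceted_filter(items, selections):
    # items: list of dicts mapping facet -> value, e.g. {"topic": "IR"}.
    # selections: facet -> set of accepted values.
    return [item for item in items
            if all(item.get(facet) in values
                   for facet, values in selections.items())]
```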
-
Zeng, M.L.; Gracy, K.F.; Zumer, M.: Using a semantic analysis tool to generate subject access points : a study using Panofsky's theory and two research samples (2014)
0.01
0.0127561055 = product of:
0.051024422 = sum of:
0.051024422 = weight(_text_:how in 2464) [ClassicSimilarity], result of:
0.051024422 = score(doc=2464,freq=2.0), product of:
0.2325812 = queryWeight, product of:
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.07027929 = queryNorm
0.21938327 = fieldWeight in 2464, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.046875 = fieldNorm(doc=2464)
0.25 = coord(1/4)
- Abstract
- This paper explores an approach that uses an automatic semantic analysis tool to enhance "subject" access to materials that are not included in the usual library subject cataloging process. Using two research samples, the authors analyzed the access points supplied by OpenCalais, a semantic analysis tool. As an aid in understanding how computerized subject analysis might be approached, this paper suggests using the three-layer framework developed by Erwin Panofsky that has been accepted and applied in image analysis.
-
Järvelin, K.; Niemi, T.: Deductive information retrieval based on classifications (1993)
0.01
0.0127561055 = product of:
0.051024422 = sum of:
0.051024422 = weight(_text_:how in 3229) [ClassicSimilarity], result of:
0.051024422 = score(doc=3229,freq=2.0), product of:
0.2325812 = queryWeight, product of:
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.07027929 = queryNorm
0.21938327 = fieldWeight in 3229, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.046875 = fieldNorm(doc=3229)
0.25 = coord(1/4)
- Abstract
- Modern fact databases contain abundant data classified through several classifications. Typically, users must consult these classifications in separate manuals or files, which makes their effective use difficult. Contemporary database systems provide little support for the deductive use of classifications. In this study we show how deductive data management techniques can be applied to the utilization of data value classifications. Computation of transitive class relationships is of primary importance here. We define a representation of classifications which supports transitive computation and present an operation-oriented deductive query language tailored for classification-based deductive information retrieval. The operations of this language are on the same abstraction level as relational algebra operations and can be integrated with these to form a powerful and flexible query language for deductive information retrieval. We define the integration of these operations and demonstrate the usefulness of the language in terms of several sample queries.
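The "computation of transitive class relationships" named above as the primary operation can be sketched as a naive fixed-point iteration; the paper packages this as one of several relational-algebra-level operations, so the code below is only the core idea:

```python
def transitive_closure(narrower):
    # narrower: class -> set of its direct subclasses.
    closure = {c: set(subs) for c, subs in narrower.items()}
    changed = True
    while changed:
        changed = False
        for c, subs in closure.items():
            reachable = set().union(*(closure.get(s, set()) for s in subs))
            new = reachable - subs
            if new:
                subs |= new
                changed = True
    return closure
```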
-
Mäkelä, E.; Hyvönen, E.; Saarela, S.; Viljanen, K.: Application of ontology techniques to view-based semantic search and browsing (2012)
0.01
0.0127561055 = product of:
0.051024422 = sum of:
0.051024422 = weight(_text_:how in 4264) [ClassicSimilarity], result of:
0.051024422 = score(doc=4264,freq=2.0), product of:
0.2325812 = queryWeight, product of:
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.07027929 = queryNorm
0.21938327 = fieldWeight in 4264, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.046875 = fieldNorm(doc=4264)
0.25 = coord(1/4)
- Abstract
- We show how the benefits of the view-based search method, developed within the information retrieval community, can be extended with ontology-based search, developed within the Semantic Web community, and with semantic recommendations. As a proof of concept, we have implemented Ontogator, an ontology- and view-based search engine and recommendation system for RDF(S) repositories. Ontogator is innovative in two ways. Firstly, the RDFS-based ontologies used for annotating metadata are used in the user interface to facilitate view-based information retrieval. The views provide the user with an overview of the repository's contents and a vocabulary for expressing search queries. Secondly, a semantic browsing function is provided by a recommender system. This system enriches instance-level metadata by ontologies and provides the user with links to semantically related relevant resources. The semantic linkage is specified in terms of logical rules. To illustrate and discuss the ideas, a deployed application of Ontogator to a photo repository of the Helsinki University Museum is presented.
-
Li, N.; Sun, J.: Improving Chinese term association from the linguistic perspective (2017)
0.01
0.0127561055 = product of:
0.051024422 = sum of:
0.051024422 = weight(_text_:how in 4381) [ClassicSimilarity], result of:
0.051024422 = score(doc=4381,freq=2.0), product of:
0.2325812 = queryWeight, product of:
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.07027929 = queryNorm
0.21938327 = fieldWeight in 4381, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.046875 = fieldNorm(doc=4381)
0.25 = coord(1/4)
- Abstract
- The study aims to solve the problem of how to construct semantic relations between domain-specific terms by applying linguistic rules. Semantic structure analysis at the morpheme level was used for semantic measurement, and a morpheme-based term association model was proposed by improving and combining a literal-based similarity algorithm with co-occurrence relatedness methods. This study provides a novel insight into semantic analysis and calculation by morpheme parsing, and the proposed solution is feasible for the automatic association of compound terms. The results show that this approach can be used to construct appropriate term associations and form a reasonable structural knowledge graph. However, due to linguistic differences, the viability and effectiveness of our method in non-Chinese linguistic environments remains to be verified.
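A hedged sketch of the combination idea: score a pair of terms by blending a literal (character-overlap) similarity with a co-occurrence relatedness. The linear blend and all parameters are illustrative assumptions; the morpheme-level semantic parsing the study actually performs is far richer:

```python
def term_association(t1, t2, cooccur, total_pairs, lam=0.5):
    # Literal similarity: Jaccard overlap of the characters (a crude
    # stand-in for morpheme overlap in compound terms).
    a, b = set(t1), set(t2)
    literal = len(a & b) / len(a | b)
    # Co-occurrence relatedness: relative frequency of the pair.
    co = cooccur.get((t1, t2), 0) / total_pairs
    return lam * literal + (1 - lam) * co
```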
-
Hauer, M.: Silicon Valley Vorarlberg : Maschinelle Indexierung und semantisches Retrieval verbessert den Katalog der Vorarlberger Landesbibliothek (2004)
0.01
0.012632083 = product of:
0.050528333 = sum of:
0.050528333 = weight(_text_:und in 3489) [ClassicSimilarity], result of:
0.050528333 = score(doc=3489,freq=14.0), product of:
0.15587237 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.07027929 = queryNorm
0.32416478 = fieldWeight in 3489, product of:
3.7416575 = tf(freq=14.0), with freq of:
14.0 = termFreq=14.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0390625 = fieldNorm(doc=3489)
0.25 = coord(1/4)
- Abstract
- Ten years of the Internet have radically changed the world around libraries. The web OPAC was one answer from the libraries. But is a web OPAC still enough in the age of the Internet? Apart from the web front end, it is still the same old catalogue. Roughly 90% of all library searches performed by users are subject searches, and a share of these searches returns nothing. It is easy to measure that zero media were found. The reasons for this have been investigated again and again: plural instead of singular forms, overly specific search terms, typing or handling errors. What has been studied too little, however, are the searches that do not end in a loan, for in many of these cases a retrieval failure can be assumed as well. Finally: of the books that are borrowed, many librarians estimate that 80% are read no further than the table of contents (except in reference libraries) and are returned only weeks later. A politician would call this "a communication problem"; a controller, an insufficient use of capital. More and more students and researchers take the easier way: their exchange of knowledge increasingly takes place elsewhere. Libraries (as a function) are indispensable for scholarly communication. The task is therefore to find, and to follow, paths that bring the treasures of libraries (as institutions) to their target groups more efficiently. The use of information retrieval technology, new indexing methods and new content are steps in that direction. Yet the existing union-catalogue structures and dependencies have in no way fostered the innovative project presented here. As innovation research shows, innovation almost always arises at the periphery: and so it began in Bregenz.
- Source
- Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 57(2004) H.3/4, S.33-38
-
Renker, L.: Exploration von Textkorpora : Topic Models als Grundlage der Interaktion (2015)
0.01
0.012632083 = product of:
0.050528333 = sum of:
0.050528333 = weight(_text_:und in 3380) [ClassicSimilarity], result of:
0.050528333 = score(doc=3380,freq=14.0), product of:
0.15587237 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.07027929 = queryNorm
0.32416478 = fieldWeight in 3380, product of:
3.7416575 = tf(freq=14.0), with freq of:
14.0 = termFreq=14.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0390625 = fieldNorm(doc=3380)
0.25 = coord(1/4)
- Abstract
- The Internet holds an almost endless supply of information; a central problem today is making it accessible. Formulating the right queries in a full-text search requires solid domain knowledge, which is often lacking, so much time has to be spent just gaining an overview of the topic in question. In such situations users find themselves in an exploratory search process in which they have to work their way towards a topic step by step. Machine learning techniques are by now routinely used to organize data, but in most cases they remain invisible to the user. Using them interactively within exploratory search processes could couple human judgment more closely to the machine processing of large data sets. Topic models are precisely such a technique: they uncover hidden themes in a text corpus that humans can interpret fairly well, which makes them promising for exploratory search, where they can support users in making sense of unfamiliar sources. A review of the relevant research shows that topic models have mostly been used to produce static visualizations. Sensemaking, although an essential part of exploratory search, is drawn upon only to a very limited extent to motivate algorithmic innovations and to place them in a broader context. This suggests that applying models of sensemaking, together with a user-centred design of exploratory search, can yield new functions for interacting with topic models and provide a context for the corresponding research.
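As a concrete illustration of the kind of topic-model output an exploratory interface could build on, here is a minimal sketch using scikit-learn; the toolkit, corpus handling and parameters are illustrative assumptions, not the setup used in the thesis:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def topic_overview(texts, n_topics=10, n_words=8):
    # Fit a topic model and return, per topic, its most probable words:
    # the human-readable themes an exploratory search UI could expose.
    vec = CountVectorizer(max_df=0.9, min_df=2, stop_words="english")
    X = vec.fit_transform(texts)
    lda = LatentDirichletAllocation(n_components=n_topics,
                                    random_state=0).fit(X)
    words = vec.get_feature_names_out()
    return [[words[i] for i in comp.argsort()[-n_words:][::-1]]
            for comp in lda.components_]
```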
- Footnote
- Master's thesis submitted for the degree of Master of Science (M.Sc.) at the Fachhochschule Köln / Fakultät für Informatik und Ingenieurswissenschaften, degree programme Medieninformatik.
- Imprint
- Gummersbach : Fakultät für Informatik und Ingenieurswissenschaften
-
Surfing versus Drilling for knowledge in science : When should you use your computer? When should you use your brain? (2018)
0.01
0.012026572 = product of:
0.048106287 = sum of:
0.048106287 = weight(_text_:how in 564) [ClassicSimilarity], result of:
0.048106287 = score(doc=564,freq=4.0), product of:
0.2325812 = queryWeight, product of:
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.07027929 = queryNorm
0.20683652 = fieldWeight in 564, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
3.3093843 = idf(docFreq=4411, maxDocs=44421)
0.03125 = fieldNorm(doc=564)
0.25 = coord(1/4)
- Abstract
- For this second Special Issue of Infozine, we have invited students, teachers, researchers, and software developers to share their opinions about one aspect or another of this broad topic: how to balance drilling (for depth) versus surfing (for breadth) in scientific learning, teaching, research, and software design, and how the modern digital-liberal system affects our ability to strike this balance. This special issue is meant to provide a wide and unbiased spectrum of possible viewpoints on the topic, helping readers to define their own position and information-use behavior lucidly.
-
Rädler, K.: In Bibliothekskatalogen "googlen" : Integration von Inhaltsverzeichnissen, Volltexten und WEB-Ressourcen in Bibliothekskataloge (2004)
0.01
0.011695037 = product of:
0.046780147 = sum of:
0.046780147 = weight(_text_:und in 3432) [ClassicSimilarity], result of:
0.046780147 = score(doc=3432,freq=12.0), product of:
0.15587237 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.07027929 = queryNorm
0.30011827 = fieldWeight in 3432, product of:
3.4641016 = tf(freq=12.0), with freq of:
12.0 = termFreq=12.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0390625 = fieldNorm(doc=3432)
0.25 = coord(1/4)
- Abstract
- Starting point: catalogue searches over the Internet, that is, from outside the library, are increasing sharply, as expected, and are by now the rule. With this, the need has grown to obtain content information beyond the title, information that makes it much easier to judge whether a work is suitable, to place an order, or perhaps to drive 50 km to the library to borrow a book. This information deficit is increasingly felt as a serious shortcoming. Tables of contents summarize a work briefly and concisely; they are the first thing consulted when judging relevance, and almost all the relevant terms of a specialist monograph already appear there. On the other hand, it is becoming ever clearer that the intellectual indexing of individual documentary units with the narrowest encompassing controlled-vocabulary terms (subject headings, classes), as the librarian paradigm prescribes, is a necessary but by no means sufficient method for activating the expensively acquired information held by libraries and offering it as an information service tailored to the user's specific problem. Information on very specific questions, often discussed only in shorter passages (chapters), can at present be found only indirectly, at great cost of time, or not at all; it lies fallow, so to speak. Extending the depth of intellectual indexing down to individual details of content cannot be justified in terms of staff and hence of cost. Libraries are therefore falling further and further behind in the perception of information seekers: the enormous wealth of information lies beyond the retrieval horizon of the bibliographic records in the catalogue.
-
Gödert, W.; Lepsky, K.: Semantische Umfeldsuche im Information Retrieval (1998)
0.01
0.011577496 = product of:
0.046309985 = sum of:
0.046309985 = weight(_text_:und in 1606) [ClassicSimilarity], result of:
0.046309985 = score(doc=1606,freq=6.0), product of:
0.15587237 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.07027929 = queryNorm
0.29710194 = fieldWeight in 1606, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=1606)
0.25 = coord(1/4)
- Abstract
- Subject searches in library online catalogues often end with unsatisfactory results. One cause is that the design of the search process does not take the semantic environment of a query into account: carrying the conditions of conventional catalogues over into the online world, it clings to the paradigm of word matching between search term and index entry. Instead, this paper develops the concept of a semantic environment search and shows what role the use of structured vocabulary can play in it. In particular, it describes what dictionary-based automatic indexing methods can contribute in this context. The discussion is illustrated with examples.
- Source
- Zeitschrift für Bibliothekswesen und Bibliographie. 45(1998) H.4, S.401-423
-
Heinz, S.: Realisierung und Evaluierung eines virtuellen Bibliotheksregals für die Informationswissenschaft an der Universitätsbibliothek Hildesheim (2003)
0.01
0.011577496 = product of:
0.046309985 = sum of:
0.046309985 = weight(_text_:und in 982) [ClassicSimilarity], result of:
0.046309985 = score(doc=982,freq=6.0), product of:
0.15587237 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.07027929 = queryNorm
0.29710194 = fieldWeight in 982, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=982)
0.25 = coord(1/4)
- Content
- [Master's thesis in the degree programme Internationales Informationsmanagement at the Fachbereich Informations- und Kommunikationswissenschaften, Universität Hildesheim]
- Imprint
Hildesheim : Fachbereich Informations- und Kommunikationswissenschaften
-
red: Alles Wissen gleich einer großen Stadt (2002)
0.01
0.011458748 = product of:
0.045834992 = sum of:
0.045834992 = weight(_text_:und in 2484) [ClassicSimilarity], result of:
0.045834992 = score(doc=2484,freq=8.0), product of:
0.15587237 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.07027929 = queryNorm
0.29405463 = fieldWeight in 2484, product of:
2.828427 = tf(freq=8.0), with freq of:
8.0 = termFreq=8.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.046875 = fieldNorm(doc=2484)
0.25 = coord(1/4)
- Content
- "Das rasant wachsende Wissen muss gut verwaltet werden, um es zu nutzen. Dies erfordert intelligente Wissensmanagementsysteme, wie sie Andreas Rauber von der Technischen Uni Wien über digitale Bibliotheken konzipiert hat. Seine "Wissenslandkarte" erlaubt es, große Datenmengen übersichtlich darzustellen, Wissen rasch auffindbar und damit optimal einsetzbar zu machen. Dafür erhielt er nun den Cor Baayen Award 2002 für aussichtsreiche Nachwuchsforscher im Bereich der Informationstechnologie vom European Research Consortium for Informatics and Mathematics. Rauber entwickelte eine Bibliothek, die auf einer sich selbst organisierenden Landkarte basiert: Einer geographischen Landkarte gleich, ist themenverwandtes Wissen in Form eines Clusters abgebildet, quasi als städtischer Ballungsraum. Damit verbundene Inhalte sind räumlich gesehen in kurzer Distanz dazu abgebildet, vergleichbar den Randgebieten des Ballungsraumes. So ist auf einen Blick ersichtlich, wo bestimmte Themenkomplexe und damit verbundene Inhalte in der Bibliothek abgelegt sind. Die Wissenslandkarte bedient sich der Forschungen zu neuronalen Netzen. Durch ein Verfahren erlernt die "Self-Organizing-Map" (SOM) die Inhalte der einzelnen Dokumente und schafft es, mit zunehmender Datenmenge selbst eine Struktur des vorhandenen Wissens zu erstellen. Dieses Verfahren ist sprachunabhängig und daher weltweit einsetzbar."
-
Beier, H.: Vom Wort zum Wissen : Semantische Netze als Mittel gegen die Informationsflut (2004)
0.01
0.011458748 = product of:
0.045834992 = sum of:
0.045834992 = weight(_text_:und in 3302) [ClassicSimilarity], result of:
0.045834992 = score(doc=3302,freq=8.0), product of:
0.15587237 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.07027929 = queryNorm
0.29405463 = fieldWeight in 3302, product of:
2.828427 = tf(freq=8.0), with freq of:
8.0 = termFreq=8.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.046875 = fieldNorm(doc=3302)
0.25 = coord(1/4)
- Abstract
- "Thesaurus linguae latinae" - so heißt eine der frühesten Wort-Sammlungen. Seit Alters her beschäftigen sich Menschen mit der qualifizierten Aufbereitung von Information. Noch älter ist sogar das Konzept der Ontologie (wörtlich: die "Lehre vom Sein"), die sich als Disziplin der Philosophie bereits seit Aristoteles (384-322 v. Chr.) mit einer objektivistischen Beschreibung der Wirklichkeit beschäftigt. Ontologien - als Disziplin des modernen Wissensmanagements-sind eine Methode, in möglichst kompakter Form, d.h. unter Verwendung von Konzepten in verschiedenen Meta-Ebenen die reale Welt zu beschreiben. Thesaurus und Ontologie stellen zwei Konzepte dar, die auch heute noch in der Wissenschaft - und in jüngster Zeit mit zunehmender Bedeutung auch in der Wirtschaft - im Bereich des Informationsund Wissensmanagements zum Einsatz kommen. Beide spannen gewissermaßen den konzeptionellen Bogen, an dem sich ein pragmatisches Wissensmanagement heutzutage ausrichtet und sich in Form sogenannter semantischer Netze - auch Wissensnetze genannt - wiederfindet.
- Source
- Information - Wissenschaft und Praxis. 55(2004) H.3, S.133-138