-
Rusho, Y.; Raban, R.R.: Hands on : information experiences as sources of value (2020)
0.05
0.05100803 = product of:
0.20403212 = sum of:
0.20403212 = weight(_text_:hands in 872) [ClassicSimilarity], result of:
0.20403212 = score(doc=872,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.42211097 = fieldWeight in 872, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.0390625 = fieldNorm(doc=872)
0.25 = coord(1/4)
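The relevance figures shown for each record follow a Lucene-style TF-IDF "explain" breakdown ([ClassicSimilarity]): the field weight is tf * idf * fieldNorm, the query weight is idf * queryNorm, and the final score is their product times the coord factor. A small Java check, using the values printed for the first record above, reproduces the 0.05 score; the class name is ours, and the numbers are simply copied from the listing.

```java
// Recomputes the first relevance score above from the factors shown in the
// explain output (classic Lucene TF-IDF similarity): fieldWeight = tf*idf*fieldNorm,
// queryWeight = idf*queryNorm, final score = coord * queryWeight * fieldWeight.
public class ClassicSimilarityCheck {
    public static void main(String[] args) {
        double idf = 7.6410246;        // idf(docFreq=57, maxDocs=44421)
        double queryNorm = 0.06325871;
        double tf = Math.sqrt(2.0);    // 1.4142135 = tf(freq=2.0)
        double fieldNorm = 0.0390625;
        double coord = 0.25;           // coord(1/4): one matching clause out of four

        double queryWeight = idf * queryNorm;        // ~0.48336133
        double fieldWeight = tf * idf * fieldNorm;   // ~0.42211097
        double score = coord * queryWeight * fieldWeight;

        System.out.printf("score = %.8f%n", score);  // ~0.05100803
    }
}
```

Every other explain block in this list combines the same factors; only idf, fieldNorm and the term change.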
-
Chen, H.; Chung, Y.-M.; Ramsey, M.; Yang, C.C.: ¬A smart itsy bitsy spider for the Web (1998)
0.04
0.04339168 = product of:
0.17356671 = sum of:
0.17356671 = weight(_text_:java in 1871) [ClassicSimilarity], result of:
0.17356671 = score(doc=1871,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.38932347 = fieldWeight in 1871, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0390625 = fieldNorm(doc=1871)
0.25 = coord(1/4)
- Abstract
- As part of the ongoing Illinois Digital Library Initiative project, this research proposes an intelligent agent approach to Web searching. In this experiment, we developed two Web personal spiders based on best-first search and genetic algorithm techniques, respectively. These personal spiders can dynamically take a user's selected starting homepages and search for the most closely related homepages on the Web, based on the links and keyword indexing. A graphical, dynamic, Java-based interface was developed and is available for Web access. A system architecture for implementing such an agent-spider is presented, followed by detailed discussions of benchmark testing and user evaluation results. In benchmark testing, although the genetic algorithm spider did not outperform the best-first search spider, we found both results to be comparable and complementary. In user evaluation, the genetic algorithm spider obtained a significantly higher recall value than the best-first search spider. However, their precision values were not statistically different. The mutation process introduced in genetic algorithms allows users to find other potentially relevant homepages that cannot be explored via a conventional local search process. In addition, we found the Java-based interface to be a necessary component for the design of a truly interactive and dynamic Web agent.
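The best-first strategy mentioned in the abstract amounts to crawling with a priority queue: the most promising link is expanded next rather than the next link found. The sketch below (in Java, with placeholder fetching and scoring functions that are not the authors' implementation) shows only that control structure.

```java
import java.util.*;

// Minimal sketch of best-first-search crawling: pages are expanded in order of
// a relevance estimate, so the most promising links are followed first.
// Fetching and scoring are placeholders, not the authors' actual spider.
public class BestFirstSpider {

    record Page(String url, double score) {}

    public static List<String> crawl(List<String> startUrls, int limit) {
        PriorityQueue<Page> frontier =
            new PriorityQueue<>(Comparator.comparingDouble((Page p) -> p.score()).reversed());
        Set<String> visited = new HashSet<>();
        List<String> results = new ArrayList<>();

        startUrls.forEach(u -> frontier.add(new Page(u, 1.0)));
        while (!frontier.isEmpty() && results.size() < limit) {
            Page page = frontier.poll();
            if (!visited.add(page.url())) continue;   // skip already-expanded pages
            results.add(page.url());
            for (String link : extractLinks(page.url()))
                frontier.add(new Page(link, keywordScore(link)));
        }
        return results;
    }

    // Placeholders: a real spider would fetch the page and score it against the
    // user's keywords and link structure.
    static List<String> extractLinks(String url) { return List.of(); }
    static double keywordScore(String url) { return 0.5; }
}
```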
-
Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006)
0.04
0.04339168 = product of:
0.17356671 = sum of:
0.17356671 = weight(_text_:java in 272) [ClassicSimilarity], result of:
0.17356671 = score(doc=272,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.38932347 = fieldWeight in 272, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0390625 = fieldNorm(doc=272)
0.25 = coord(1/4)
- Abstract
- This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases. A research front is defined as an emergent and transient grouping of concepts and underlying research issues. The intellectual base of a research front is its citation and co-citation footprint in scientific literature - an evolving network of scientific publications cited by research-front concepts. Kleinberg's (2002) burst-detection algorithm is adapted to identify emergent research-front concepts. Freeman's (1979) betweenness centrality metric is used to highlight potential pivotal points of paradigm shift over time. Two complementary visualization views are designed and implemented: cluster views and time-zone views. The contributions of the approach are that (a) the nature of an intellectual base is algorithmically and temporally identified by emergent research-front terms, (b) the value of a co-citation cluster is explicitly interpreted in terms of research-front concepts, and (c) visually prominent and algorithmically detected pivotal points substantially reduce the complexity of a visualized network. The modeling and visualization process is implemented in CiteSpace II, a Java application, and applied to the analysis of two research fields: mass extinction (1981-2004) and terrorism (1990-2003). Prominent trends and pivotal points in visualized networks were verified in collaboration with domain experts, who are the authors of pivotal-point articles. Practical implications of the work are discussed. A number of challenges and opportunities for future studies are identified.
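Freeman's betweenness centrality, which CiteSpace II uses to flag candidate pivotal points, counts how often a node lies on shortest paths between other nodes. Below is a self-contained sketch of the measure, computed with Brandes' algorithm on an unweighted, undirected network; the edge-list input is illustrative and not CiteSpace's own data structure.

```java
import java.util.*;

// Sketch of Freeman's betweenness centrality via Brandes' algorithm on an
// unweighted, undirected graph (e.g. a co-citation network). Returns one
// centrality value per node; high values mark potential pivotal points.
public class Betweenness {
    public static double[] compute(List<int[]> edges, int n) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }

        double[] cb = new double[n];
        for (int s = 0; s < n; s++) {
            Deque<Integer> stack = new ArrayDeque<>();
            List<List<Integer>> pred = new ArrayList<>();
            for (int i = 0; i < n; i++) pred.add(new ArrayList<>());
            double[] sigma = new double[n]; sigma[s] = 1;          // shortest-path counts
            int[] dist = new int[n]; Arrays.fill(dist, -1); dist[s] = 0;
            Queue<Integer> queue = new ArrayDeque<>(List.of(s));
            while (!queue.isEmpty()) {                              // BFS from s
                int v = queue.poll(); stack.push(v);
                for (int w : adj.get(v)) {
                    if (dist[w] < 0) { dist[w] = dist[v] + 1; queue.add(w); }
                    if (dist[w] == dist[v] + 1) { sigma[w] += sigma[v]; pred.get(w).add(v); }
                }
            }
            double[] delta = new double[n];
            while (!stack.isEmpty()) {                              // back-propagate dependencies
                int w = stack.pop();
                for (int v : pred.get(w))
                    delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w]);
                if (w != s) cb[w] += delta[w];
            }
        }
        for (int i = 0; i < n; i++) cb[i] /= 2.0;  // undirected: each pair counted twice
        return cb;
    }
}
```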
-
Eddings, J.: How the Internet works (1994)
0.04
0.04339168 = product of:
0.17356671 = sum of:
0.17356671 = weight(_text_:java in 2514) [ClassicSimilarity], result of:
0.17356671 = score(doc=2514,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.38932347 = fieldWeight in 2514, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0390625 = fieldNorm(doc=2514)
0.25 = coord(1/4)
- Abstract
- How the Internet Works promises "an exciting visual journey down the highways and byways of the Internet," and it delivers. The book's high-quality graphics and simple, succinct text make it the ideal book for beginners; however, it still has much to offer Net vets. This book is jam-packed with cool ways to visualize how the Net works. The first section visually explores how TCP/IP, Winsock, and other Net connectivity mysteries work. This section also helps you understand how e-mail addresses and domains work, what file types mean, and how information travels across the Net. Part 2 unravels the Net's underlying architecture, including good information on how routers work and what is meant by client/server architecture. The third section covers your own connection to the Net through an Internet Service Provider (ISP), and how ISDN, cable modems, and Web TV work. Part 4 discusses e-mail, spam, newsgroups, Internet Relay Chat (IRC), and Net phone calls. In part 5, you'll find out how other Net tools, such as gopher, telnet, WAIS, and FTP, can enhance your Net experience. The sixth section takes on the World Wide Web, including everything from how HTML works to image maps and forms. Part 7 looks at other Web features such as push technology, Java, ActiveX, and CGI scripting, while part 8 deals with multimedia on the Net. Part 9 shows you what intranets are and covers groupware, and shopping and searching the Net. The book wraps up with part 10, a chapter on Net security that covers firewalls, viruses, cookies, and other Web tracking devices, plus cryptography and parental controls.
-
Wu, D.; Shi, J.: Classical music recording ontology used in a library catalog (2016)
0.04
0.04339168 = product of:
0.17356671 = sum of:
0.17356671 = weight(_text_:java in 4179) [ClassicSimilarity], result of:
0.17356671 = score(doc=4179,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.38932347 = fieldWeight in 4179, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0390625 = fieldNorm(doc=4179)
0.25 = coord(1/4)
- Abstract
- In order to improve the organization of classical music information resources, we constructed a classical music recording ontology, on top of which we then designed an online classical music catalog. Our construction of the classical music recording ontology consisted of three steps: identifying the purpose, analyzing the ontology, and encoding the ontology. We identified the main classes and properties of the domain by investigating classical music recording resources and users' information needs. We implemented the ontology in the Web Ontology Language (OWL) using five steps: transforming the properties, encoding the transformed properties, defining ranges of the properties, constructing individuals, and standardizing the ontology. In constructing the online catalog, we first designed the structure and functions of the catalog based on investigations into users' information needs and information-seeking behaviors. Then we extracted classes and properties of the ontology using the Apache Jena application programming interface (API), and constructed a catalog in the Java environment. The catalog provides a hierarchical main page (built using the Functional Requirements for Bibliographic Records (FRBR) model), a classical music information network and integrated information service; this combination of features greatly eases the task of finding classical music recordings and more information about classical music.
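The abstract mentions extracting the ontology's classes and properties through the Apache Jena API before building the Java catalog. A minimal illustration of that kind of call is sketched below; the ontology file name is a placeholder, and the real catalog naturally does far more than print names.

```java
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.rdf.model.ModelFactory;

// Illustrative sketch only: load an OWL ontology with Jena and enumerate its
// classes and properties, the raw material a catalog front end could consume.
public class OntologyReader {
    public static void main(String[] args) {
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        model.read("classical-music-recording.owl");  // hypothetical file name

        model.listClasses().forEachRemaining(c ->
            System.out.println("Class: " + c.getLocalName()));
        model.listOntProperties().forEachRemaining(p ->
            System.out.println("Property: " + p.getLocalName()));
    }
}
```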
-
Fogg, B.J.: Persuasive technology : using computers to change what we think and do (2003)
0.04
0.040806424 = product of:
0.1632257 = sum of:
0.1632257 = weight(_text_:hands in 2877) [ClassicSimilarity], result of:
0.1632257 = score(doc=2877,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.33768877 = fieldWeight in 2877, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.03125 = fieldNorm(doc=2877)
0.25 = coord(1/4)
- Footnote
- Rez. in: JASIS 54(2003) no.12, S.1168-1170 (A.D. Petrou): "Computers as persuasive technology, or Captology, is the topic of the ten chapters in B.J. Fogg's book. As the author states, the main focus of Captology is not on computer-mediated communication (CMC), but rather on human-computer interaction (HCI). Furthermore, according to the author, "captology focuses on the design, research, and analysis of interactive computing products created for the purpose of changing people's attitudes or behaviors. It describes the areas where technology and persuasion overlap" (p. 5). Each of the book's chapters presents theories, arguments, and examples to convince readers of the large and growing part that computing products play in persuading people to change their behaviors for the better in a variety of areas. Currently, some of the areas for which B.J. Fogg considers computing products persuasive or influential in motivating individuals to change their behaviors include quitting smoking, practicing safer sex, eating healthier, staying in shape, improving study habits, and helping doctors develop richer empathy for the pain experienced by their patients. In the wrong hands, however, B.J. Fogg warns, the computer's power to persuade can be enlisted to support unethical social ends and to serve corporate interests that deliver no real benefits to consumers. While Captology's concerns about the ethical side of computing products as persuasive tools are summarized in a chapter on ethics, they are also incorporated as short reminders throughout the book's ten chapters. A strength of the book, however, is that the author does not take it for granted that readers will agree with him on the persuasive power of computers. In addition to the technical and social theories he articulates, B.J. Fogg presents empirical evidence from his own research and also provides many examples of computing products designed to persuade people to change their behaviors. Computers can be designed to be highly interactive and to include many modalities for persuasion to match different situations and human personalities, such as submissive or dominant. Furthermore, computers may allow for anonymity in use and can be ubiquitous. ... Yet, there is no denying the effectiveness of the arguments and empirical data put forth by B.J. Fogg about Captology's power to explain how a merging of technology with techniques of persuasion can help change human behavior for the better. The widespread influence of computing products and the need to ethically manage such influence over human behavior should command our attention as users and researchers and, most importantly, as designers and producers of computing products."
-
Pettee, J.: ¬The subject approach to books and the development of the dictionary catalog (1985)
0.04
0.040806424 = product of:
0.1632257 = sum of:
0.1632257 = weight(_text_:hands in 4624) [ClassicSimilarity], result of:
0.1632257 = score(doc=4624,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.33768877 = fieldWeight in 4624, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.03125 = fieldNorm(doc=4624)
0.25 = coord(1/4)
- Abstract
- Julia Pettee's contribution to classification theory came about as part of her work on subject headings. Pettee (1872-1967) was for many years librarian of the Union Theological Seminary in New York and was best known for the classification system she developed for the seminary and as the author of the book Subject Headings. She was one of the first to call attention to the fact that there was a classification system in subject headings. It was, as she put it, "completely concealed when scattered through the alphabetical sequence" (p. 98). On the other hand, she recognized that an index entry was a pointing device and existed to show users specific terms. Index terms, unlike subject headings, could be manipulated, inverted, repeated, and stated in as many words as might be desired. The subject heading, she reiterated, had in it "some idea of classification," but was designed to pull together like material and, unlike the index term, would have limited capability for supplying access by way of synonyms, catchwords, or other associative forms. It is interesting that she also thought of the subject heading in context as forming a three-dimensional system. Logically this is the case whenever one attempts to reach beyond the conventional hierarchy as described on a plane surface, and, in fact, thought out as if the classification were on a plane surface. Pettee described this dimension variously as names "reaching up and over the surface ... hands clasp[ing] in the air" from an individual term (pp. 99-100). Or, in another context, as the mapping of "the many third-dimensional criss-crossing relationships of subject headings" (p. 103). Investigations following Pettee's insight have shown the nature and the degree of the classification latent in subject headings and also in the cross-references of all indexing systems using cross-references of the associative type ("see also" or equivalent terminology). More importantly, study of this type of connection has revealed jumps in logic and meaning caused by homographs or homonyms and resulting in false connections in classification. Standardized rules for making thesauri have prevented some of the more glaring non sequiturs, but much more still needs to be done. The whole area of "related terms", for example, needs to be brought under control, especially in terms of classification mapping.
-
Pettee, J.: Public libraries and libraries as purveyors of information (1985)
0.04
0.040806424 = product of:
0.1632257 = sum of:
0.1632257 = weight(_text_:hands in 4630) [ClassicSimilarity], result of:
0.1632257 = score(doc=4630,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.33768877 = fieldWeight in 4630, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.03125 = fieldNorm(doc=4630)
0.25 = coord(1/4)
- Abstract
- Julia Pettee's contribution to classification theory came about as part of her work on subject headings. Pettee (1872-1967) was for many years librarian of the Union Theological Seminary in New York and was best known for the classification system she developed for the seminary and as the author of the book Subject Headings. She was one of the first to call attention to the fact that there was a classification system in subject headings. It was, as she put it, "completely concealed when scattered through the alphabetical sequence" (p. 98). On the other hand, she recognized that an index entry was a pointing device and existed to show users specific terms. Index terms, unlike subject headings, could be manipulated, inverted, repeated, and stated in as many words as might be desired. The subject heading, she reiterated, had in it "some idea of classification," but was designed to pull together like material and, unlike the index term, would have limited capability for supplying access by way of synonyms, catchwords, or other associative forms. It is interesting that she also thought of the subject heading in context as forming a three-dimensional system. Logically this is the case whenever one attempts to reach beyond the conventional hierarchy as described on a plane surface, and, in fact, thought out as if the classification were on a plane surface. Pettee described this dimension variously as names "reaching up and over the surface ... hands clasp[ing] in the air" from an individual term (pp. 99-100). Or, in another context, as the mapping of "the many third-dimensional criss-crossing relationships of subject headings" (p. 103). Investigations following Pettee's insight have shown the nature and the degree of the classification latent in subject headings and also in the cross-references of all indexing systems using cross-references of the associative type ("see also" or equivalent terminology). More importantly, study of this type of connection has revealed jumps in logic and meaning caused by homographs or homonyms and resulting in false connections in classification. Standardized rules for making thesauri have prevented some of the more glaring non sequiturs, but much more still needs to be done. The whole area of "related terms", for example, needs to be brought under control, especially in terms of classification mapping.
-
Pettee, J.: Fundamental principles of the dictionary catalog (1985)
0.04
0.040806424 = product of:
0.1632257 = sum of:
0.1632257 = weight(_text_:hands in 4633) [ClassicSimilarity], result of:
0.1632257 = score(doc=4633,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.33768877 = fieldWeight in 4633, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.03125 = fieldNorm(doc=4633)
0.25 = coord(1/4)
- Abstract
- Julia Pettee's contribution to classification theory came about as part of her work on subject headings. Pettee (1872-1967) was for many years librarian of the Union Theological Seminary in New York and was best known for the classification system she developed for the seminary and as the author of the book Subject Headings. She was one of the first to call attention to the fact that there was a classification system in subject headings. It was, as she put it, "completely concealed when scattered through the alphabetical sequence" (p. 98). On the other hand, she recognized that an index entry was a pointing device and existed to show users specific terms. Index terms, unlike subject headings, could be manipulated, inverted, repeated, and stated in as many words as might be desired. The subject heading, she reiterated, had in it "some idea of classification," but was designed to pull together like material and, unlike the index term, would have limited capability for supplying access by way of synonyms, catchwords, or other associative forms. It is interesting that she also thought of the subject heading in context as forming a three-dimensional system. Logically this is the case whenever one attempts to reach beyond the conventional hierarchy as described on a plane surface, and, in fact, thought out as if the classification were on a plane surface. Pettee described this dimension variously as names "reaching up and over the surface ... hands clasp[ing] in the air" from an individual term (pp. 99-100). Or, in another context, as the mapping of "the many third-dimensional criss-crossing relationships of subject headings" (p. 103). Investigations following Pettee's insight have shown the nature and the degree of the classification latent in subject headings and also in the cross-references of all indexing systems using cross-references of the associative type ("see also" or equivalent terminology). More importantly, study of this type of connection has revealed jumps in logic and meaning caused by homographs or homonyms and resulting in false connections in classification. Standardized rules for making thesauri have prevented some of the more glaring non sequiturs, but much more still needs to be done. The whole area of "related terms", for example, needs to be brought under control, especially in terms of classification mapping.
-
Blowers, H.; Bryan, R.: Weaving a library Web : a guide to developing children's websites (2004)
0.04
0.040806424 = product of:
0.1632257 = sum of:
0.1632257 = weight(_text_:hands in 5240) [ClassicSimilarity], result of:
0.1632257 = score(doc=5240,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.33768877 = fieldWeight in 5240, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.03125 = fieldNorm(doc=5240)
0.25 = coord(1/4)
- Footnote
- Rez. in: JASIST 56(2005) no.14, S.1555-1556 (K. Reuter): "Blowers and Bryan open their book with the image of the Internet as "a pied piper luring children away from books" (p. xiii), nicely capturing the struggle that some children's librarians face to keep their libraries vital and appealing to today's increasingly technologically savvy children. Part idea book, part primer on Web production, Weaving a Library Web encourages and supports children's librarians in expanding their children's services to the Web, "to bring the library to children, and to bring children into the library" (p. xiii). Blowers and Bryan's guidance grows out of their own work on a family of highly appealing and well-received Web sites for children through the Public Library of Charlotte and Mecklenburg County in North Carolina. Though a number of guides already exist for offering library services online and for creating appropriate online services for children, the authors note that their book is the first to combine these two areas to focus on the development of library service Web sites for children. The book is organized into eight chapters that can be read through for an overview of the Web production process or can be used individually as stand-alone references. The first half of the book offers ideas for and general principles of Web design for children, and the second half provides hands-on advice for undertaking a Web project. Blowers and Bryan cover each topic thoughtfully in a collegial, often breezy tone. ..."
-
Clyde, L.A.: ¬The teaching librarian : a literature review and content analysis of job advertisements (2005)
0.04
0.040806424 = product of:
0.1632257 = sum of:
0.1632257 = weight(_text_:hands in 3999) [ClassicSimilarity], result of:
0.1632257 = score(doc=3999,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.33768877 = fieldWeight in 3999, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.03125 = fieldNorm(doc=3999)
0.25 = coord(1/4)
- Abstract
- The "teaching librarian" or "librarian as teacher" is a professional role that has been discussed in the literature of library and information science in recent decades, particularly in relation to bibliographic instruction and information literacy development. This paper reports on a small-scale research project, undertaken in 2002, that investigated the demand for library professionals with knowledge of or skills in instructional techniques and strategies. The project was based on an extensive literature review, plus content analysis of library and information science job advertisements on the international LIBJOBS listserv. The literature review has been updated for this BOBCATSSS paper, as have aspects of the content analysis, in order to provide delegates with more recent information. The idea of a teaching role for librarians is far from new. Michael Lorenzen (2002) has traced academic library-based instruction as far back as the seventeenth century when German academic libraries provided instructional programmes for library users. In academic and school libraries in the nineteenth century it usually took the form of "library orientation" - making sure that students and faculty knew how to find the books and other material for their courses. In the United States, some American university librarians were lecturing to students as early as the 1880s (Lorenzen, 2002). In nineteenth and early twentieth century public libraries, library instruction often took the form of literature promotion or reading promotion activities for children and young people, and even of "lessons" on how to look after books, right down to the need for washing hands before handling books. The introduction of card catalogues and classification systems such as the Dewey Decimal Classification resulted in a need for user education in all kinds of libraries, with sessions based on topics such as "The card catalogue: The key to the library" and "How to find a book on the shelves". The introduction of automated catalogues from the 1960s, and later, databases on CD-ROMs, online information services for end users, and the Internet, have increased the need and demand for formal and informal user education, regardless of the type and size of library. Indeed, there is no doubt that interest in library-based instruction has increased in recent decades: on the basis of an analysis of the literature related to the instructional role of librarians, Edwards (1994) noted that "during the past quarter century, interest and concern for library instruction has grown dramatically", while Marcum said in 2002 that "Over the past decade ... information literacy has emerged as a central purpose for librarians, particularly academic librarians".
-
Miller, S.J.: Metadata for digital collections : a how-to-do-it manual (2011)
0.04
0.040806424 = product of:
0.1632257 = sum of:
0.1632257 = weight(_text_:hands in 911) [ClassicSimilarity], result of:
0.1632257 = score(doc=911,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.33768877 = fieldWeight in 911, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.03125 = fieldNorm(doc=911)
0.25 = coord(1/4)
- Abstract
- More and more libraries, archives, and museums are creating online collections of digitized resources. Where can those charged with organizing these new collections turn for guidance on the actual practice of metadata design and creation? "Metadata for Digital Collections: A How-to-do-it Manual" is suitable for libraries, archives, and museums. This practical, hands-on volume will make it easy for readers to acquire the knowledge and skills they need, whether they use the book on the job or in a classroom. Author Steven Miller introduces readers to fundamental concepts and practices in a style accessible to beginners and LIS students, as well as experienced practitioners with little metadata training. He also takes account of the widespread use of digital collection management systems such as CONTENTdm. Rather than surveying a large number of metadata schemes, Miller covers only three of the schemes most commonly used in general digital resource description, namely, Dublin Core, MODS, and VRA. By limiting himself, Miller is able to address the chosen schemes in greater depth. He is also able to include numerous practical examples that clarify common application issues and challenges. He provides practical guidance on applying each of the Dublin Core elements, taking special care to clarify those most commonly misunderstood. The book includes a step-by-step guide on how to design and document a metadata scheme for local institutional needs and for specific digital collection projects. The text also serves well as an introduction to broader metadata topics, including XML encoding, mapping between different schemes, metadata interoperability and record sharing, OAI harvesting, and the emerging environment of Linked Data and the Semantic Web, explaining their relevance to current practitioners and students. Each chapter offers a set of exercises, with suggestions for instructors. A companion website includes additional practical and reference resources.
-
Peters, C.; Braschler, M.; Clough, P.: Multilingual information retrieval : from research to practice (2012)
0.04
0.040806424 = product of:
0.1632257 = sum of:
0.1632257 = weight(_text_:hands in 1361) [ClassicSimilarity], result of:
0.1632257 = score(doc=1361,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.33768877 = fieldWeight in 1361, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.03125 = fieldNorm(doc=1361)
0.25 = coord(1/4)
- Abstract
- We are living in a multilingual world and the diversity in languages which are used to interact with information access systems has generated a wide variety of challenges to be addressed by computer and information scientists. The growing amount of non-English information accessible globally and the increased worldwide exposure of enterprises also necessitates the adaptation of Information Retrieval (IR) methods to new, multilingual settings. Peters, Braschler and Clough present a comprehensive description of the technologies involved in designing and developing systems for Multilingual Information Retrieval (MLIR). They provide readers with broad coverage of the various issues involved in creating systems to make accessible digitally stored materials regardless of the language(s) they are written in. Details on Cross-Language Information Retrieval (CLIR) are also covered that help readers to understand how to develop retrieval systems that cross language boundaries. Their work is divided into six chapters and accompanies the reader step-by-step through the various stages involved in building, using and evaluating MLIR systems. The book concludes with some examples of recent applications that utilise MLIR technologies. Some of the techniques described have recently started to appear in commercial search systems, while others have the potential to be part of future incarnations. The book is intended for graduate students, scholars, and practitioners with a basic understanding of classical text retrieval methods. It offers guidelines and information on all aspects that need to be taken into consideration when building MLIR systems, while avoiding too many 'hands-on details' that could rapidly become obsolete. Thus it bridges the gap between the material covered by most of the classical IR textbooks and the novel requirements related to the acquisition and dissemination of information in whatever language it is stored.
-
Mirizzi, R.; Ragone, A.; Noia, T. Di; Sciascio, E. Di: ¬A recommender system for linked data (2012)
0.04
0.040806424 = product of:
0.1632257 = sum of:
0.1632257 = weight(_text_:hands in 1436) [ClassicSimilarity], result of:
0.1632257 = score(doc=1436,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.33768877 = fieldWeight in 1436, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.03125 = fieldNorm(doc=1436)
0.25 = coord(1/4)
- Abstract
- Peter and Alice are at home, it is a calm winter night, snow is falling, and it is too cold to go outside. "Why don't we just order a pizza and watch a movie?" says Alice wrapped in her favorite blanket. "Why not?"-Peter replies-"Which movie do you wanna watch?" "Well, what about some comedy, romance-like one? Com'on Pete, look on Facebook, there is that nice application Kara suggested me some days ago!" answers Alice. "Oh yes, MORE, here we go, tell me a movie you like a lot," says Peter excited. "Uhm, I wanna see something like the Bridget Jones's Diary or Four Weddings and a Funeral, humour, romance, good actors..." replies his beloved, rubbing her hands. Peter is a bit concerned, he is more into the fantasy genre, but he wants to please Alice, so he looks on MORE for movies similar to the Bridget Jones's Diary and Four Weddings and a Funeral: "Here we are my dear, MORE suggests the sequel or, if you prefer, Love Actually." "I would prefer the second." "Great! Let's rent it!" nods Peter in agreement. The scenario just presented highlights an interesting and useful feature of a modern Web application. There are tasks where users look for items similar to the ones they already know. Hence, we need systems that recommend items based on user preferences. In other words, systems should allow an easy and friendly exploration of the information/data related to a particular domain of interest. Such characteristics are well known in the literature and in common applications such as recommender systems. Nevertheless, new challenges in this field arise when the information used by these systems exploits the huge amount of interlinked data coming from the Semantic Web. In this chapter, we present MORE, a system for 'movie recommendation' in the Web of Data.
-
Scott, M.L.: Dewey Decimal Classification, 21st edition : a study manual and number building guide (1998)
0.04
0.035705622 = product of:
0.14282249 = sum of:
0.14282249 = weight(_text_:hands in 2454) [ClassicSimilarity], result of:
0.14282249 = score(doc=2454,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.2954777 = fieldWeight in 2454, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.02734375 = fieldNorm(doc=2454)
0.25 = coord(1/4)
- Content
- This work is a comprehensive guide to Edition 21 of the Dewey Decimal Classification (DDC 21). The previous edition was edited by John Phillip Comaromi, who also was the editor of DDC 20 and thus was able to impart in its pages information about the inner workings of the Decimal Classification Editorial Policy Committee, which guides the Classification's development. The manual begins with a brief history of the development of Dewey Decimal Classification (DDC) up to this edition and its impact internationally. It continues on to a review of the general structure of DDC and the 21st edition in particular, with emphasis on the framework ("Hierarchical Order," "Centered Entries") that aids the classifier in its use. An extensive part of this manual is an in-depth review of how DDC is updated with each edition, such as reductions and expansions, and detailed lists of such changes in each table and class. Each citation of a change indicates the previous location of the topic, usually in parentheses but also in textual explanations ("moved from 248.463"). A brief discussion of the topic moved or added provides substance to what otherwise would be lists of numbers. Where the changes are so dramatic that a new class or division structure has been developed, Comparative and Equivalence Tables are provided in volume 1 of DDC 21 (such as Life sciences in 560-590); any such list in this manual would only be redundant. In these cases, the only references to changes in this work are those topics that were moved from other classes. Besides these citations of changes, each class is introduced with a brief background discussion about its development or structure or both to familiarize the user with it. A new aspect in this edition of the DDC study manual is that it is combined with Marty Bloomberg and Hans Weber's An Introduction to Classification and Number Building in Dewey (Libraries Unlimited, 1976) to provide a complete reference for the application of DDC. Detailed examples of number building for each class will guide the classifier through the process that results in classifications for particular works within that class. In addition, at the end of each chapter, lists of book summaries are given as exercises in number analysis, with Library of Congress-assigned classifications to provide benchmarks. The last chapter covers book, or author, numbers, which-combined with the classification and often the date-provide unique call numbers for circulation and shelf arrangement. Guidelines in the application of Cutter tables and Library of Congress author numbers complete this comprehensive reference to the use of DDC 21. As with all such works, this was a tremendous undertaking, which coincided with the author completing a new edition of Conversion Tables: LC-Dewey, Dewey-LC (Libraries Unlimited, forthcoming). Helping hands are always welcome in our human existence, and this book is no exception. Grateful thanks are extended to Jane Riddle, at the NASA Goddard Space Flight Center Library, and to Darryl Hines, at SANAD Support Technologies, Inc., for their kind assistance in the completion of this study manual.
-
Noerr, P.: ¬The Digital Library Tool Kit (2001)
0.03
0.03471334 = product of:
0.13885336 = sum of:
0.13885336 = weight(_text_:java in 774) [ClassicSimilarity], result of:
0.13885336 = score(doc=774,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.31145877 = fieldWeight in 774, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.03125 = fieldNorm(doc=774)
0.25 = coord(1/4)
- Footnote
- This Digital Library Tool Kit was sponsored by Sun Microsystems, Inc. to address some of the leading questions that academic institutions, public libraries, government agencies, and museums face in trying to develop, manage, and distribute digital content. The evolution of Java programming, digital object standards, Internet access, electronic commerce, and digital media management models is causing educators, CIOs, and librarians to rethink many of their traditional goals and modes of operation. New audiences, continuous access to collections, and enhanced services to user communities are enabled. As one of the leading technology providers to education and library communities, Sun is pleased to present this comprehensive introduction to digital libraries
-
Herrero-Solana, V.; Moya Anegón, F. de: Graphical Table of Contents (GTOC) for library collections : the application of UDC codes for the subject maps (2003)
0.03
0.03471334 = product of:
0.13885336 = sum of:
0.13885336 = weight(_text_:java in 3758) [ClassicSimilarity], result of:
0.13885336 = score(doc=3758,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.31145877 = fieldWeight in 3758, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.03125 = fieldNorm(doc=3758)
0.25 = coord(1/4)
- Abstract
- The representation of information content by graphical maps is an extended, ongoing research topic. In this paper we introduce the application of UDC codes for the development of subject maps. We use the following graphic representation methodologies: 1) Multidimensional scaling (MDS), 2) Cluster analysis, 3) Neural networks (Self-Organizing Map - SOM). Finally, we draw conclusions about the viability of each kind of map. 1. Introduction Advanced techniques for Information Retrieval (IR) currently make up one of the most active areas for research in the field of library and information science. New models representing document content are replacing the classic systems in which the search terms supplied by the user were compared against the indexing terms existing in the inverted files of a database. One of the topics most often studied in recent years is bibliographic browsing, a good complement to querying strategies. Since the 1980s, many authors have treated this topic. For example, Ellis establishes that browsing is based on three different types of tasks: identification, familiarization and differentiation (Ellis, 1989). On the other hand, Cove indicates three different browsing types: searching browsing, general purpose browsing and serendipity browsing (Cove, 1988). Marcia Bates presents six different types (Bates, 1989), although the classification of Bawden is the one that really interests us: 1) similarity comparison, 2) structure driven, 3) global vision (Bawden, 1993). Global vision browsing implies the use of graphic representations, which we will call map displays, that allow the user to get a global idea of the nature and structure of the information in the database. In the 1990s, several authors worked on this research line, developing different types of maps. One of the most active was Xia Lin, who introduced the concept of the Graphical Table of Contents (GTOC), comparing the maps to true tables of contents based on graphic representations (Lin 1996). Lin applied the SOM algorithm to his own personal bibliography, analyzed as a function of the words in the title and abstract fields, and represented it in a two-dimensional map (Lin 1997). Later on, Lin applied this type of map to create website GTOCs, through a Java application.
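Of the three map-making methods listed, the Self-Organizing Map is the least self-explanatory. The toy sketch below shows only the core SOM training step that pulls similar document vectors into nearby grid cells; the grid size, learning rate and the omitted neighbourhood update are simplifications for illustration, not the authors' settings.

```java
import java.util.Random;

// Toy sketch of Self-Organizing Map training for map displays: each document
// vector attracts its best-matching grid cell, so similar documents end up in
// nearby cells of a two-dimensional map. The neighbourhood update that a full
// SOM also applies to surrounding cells is omitted for brevity.
public class SomSketch {
    public static double[][][] train(double[][] docs, int rows, int cols, int epochs) {
        int dim = docs[0].length;
        Random rnd = new Random(42);
        double[][][] grid = new double[rows][cols][dim];
        for (double[][] row : grid)
            for (double[] cell : row)
                for (int d = 0; d < dim; d++) cell[d] = rnd.nextDouble();  // random init

        for (int e = 0; e < epochs; e++) {
            double rate = 0.5 * (1.0 - (double) e / epochs);   // decaying learning rate
            for (double[] doc : docs) {
                int bi = 0, bj = 0;
                double best = Double.MAX_VALUE;
                for (int i = 0; i < rows; i++)                  // find best-matching cell
                    for (int j = 0; j < cols; j++) {
                        double dist = 0;
                        for (int d = 0; d < dim; d++)
                            dist += Math.pow(grid[i][j][d] - doc[d], 2);
                        if (dist < best) { best = dist; bi = i; bj = j; }
                    }
                for (int d = 0; d < dim; d++)                   // pull the winner towards the doc
                    grid[bi][bj][d] += rate * (doc[d] - grid[bi][bj][d]);
            }
        }
        return grid;
    }
}
```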
-
Vlachidis, A.; Binding, C.; Tudhope, D.; May, K.: Excavating grey literature : a case study on the rich indexing of archaeological documents via natural language-processing techniques and knowledge-based resources (2010)
0.03
0.03471334 = product of:
0.13885336 = sum of:
0.13885336 = weight(_text_:java in 935) [ClassicSimilarity], result of:
0.13885336 = score(doc=935,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.31145877 = fieldWeight in 935, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.03125 = fieldNorm(doc=935)
0.25 = coord(1/4)
- Abstract
- Purpose - This paper sets out to discuss the use of information extraction (IE), a natural language-processing (NLP) technique to assist "rich" semantic indexing of diverse archaeological text resources. The focus of the research is to direct a semantic-aware "rich" indexing of diverse natural language resources with properties capable of satisfying information retrieval from online publications and datasets associated with the Semantic Technologies for Archaeological Resources (STAR) project. Design/methodology/approach - The paper proposes use of the English Heritage extension (CRM-EH) of the standard core ontology in cultural heritage, CIDOC CRM, and exploitation of domain thesauri resources for driving and enhancing an Ontology-Oriented Information Extraction process. The process of semantic indexing is based on a rule-based Information Extraction technique, which is facilitated by the General Architecture of Text Engineering (GATE) toolkit and expressed by Java Annotation Pattern Engine (JAPE) rules. Findings - Initial results suggest that the combination of information extraction with knowledge resources and standard conceptual models is capable of supporting semantic-aware term indexing. Additional efforts are required for further exploitation of the technique and adoption of formal evaluation methods for assessing the performance of the method in measurable terms. Originality/value - The value of the paper lies in the semantic indexing of 535 unpublished online documents often referred to as "Grey Literature", from the Archaeological Data Service OASIS corpus (Online AccesS to the Index of archaeological investigationS), with respect to the CRM ontological concepts E49.Time Appellation and P19.Physical Object.
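The rule-based extraction described here is written in GATE's JAPE language; the plain-Java sketch below only conveys the flavour of such a rule (a gazetteer lookup wrapped in a surrounding pattern) for the E49.Time Appellation case. It is not GATE or JAPE itself, and the gazetteer terms and pattern are invented for illustration.

```java
import java.util.*;
import java.util.regex.*;

// Sketch of a lookup-plus-pattern extraction rule in plain Java: a small
// gazetteer of period terms seeds a pattern that annotates candidate
// Time Appellation mentions in an excavation report.
public class TimeAppellationRule {
    static final List<String> GAZETTEER = List.of("Roman", "Iron Age", "medieval", "Bronze Age");

    public static List<String> annotate(String text) {
        String alternatives = String.join("|", GAZETTEER);
        Pattern rule = Pattern.compile("\\b(early|late|post-)?\\s*(" + alternatives + ")\\b",
                                       Pattern.CASE_INSENSITIVE);
        List<String> mentions = new ArrayList<>();
        Matcher m = rule.matcher(text);
        while (m.find()) mentions.add(m.group().trim());
        return mentions;
    }

    public static void main(String[] args) {
        // prints [late Roman, Iron Age]
        System.out.println(annotate("Pottery of late Roman date sealed an Iron Age ditch."));
    }
}
```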
-
Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007)
0.03
0.03471334 = product of:
0.13885336 = sum of:
0.13885336 = weight(_text_:java in 709) [ClassicSimilarity], result of:
0.13885336 = score(doc=709,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.31145877 = fieldWeight in 709, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.03125 = fieldNorm(doc=709)
0.25 = coord(1/4)
- Content
- "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
-
Piros, A.: Automatic interpretation of complex UDC numbers : towards support for library systems (2015)
0.03
0.03471334 = product of:
0.13885336 = sum of:
0.13885336 = weight(_text_:java in 3301) [ClassicSimilarity], result of:
0.13885336 = score(doc=3301,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.31145877 = fieldWeight in 3301, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.03125 = fieldNorm(doc=3301)
0.25 = coord(1/4)
- Abstract
- Analytico-synthetic and faceted classifications, such as the Universal Decimal Classification (UDC), express the content of documents with complex, pre-combined classification codes. Without classification authority control that would help manage and access structured notations, the use of UDC codes in searching and browsing is limited. Existing UDC parsing solutions are usually created for a particular database system or a specific task and are not widely applicable. The approach described in this paper provides a solution by which the analysis and interpretation of UDC notations are stored in an intermediate format (in this case, XML) by automatic means, without any data or information loss. Due to its richness, the output file can be converted into different formats, such as standard mark-up and data exchange formats, or simple lists of the recommended entry points of a UDC number. The program can also be used to create authority records containing complex UDC numbers which can be comprehensively analysed in order to be retrieved effectively. The Java program, as well as the corresponding schema definition it employs, is under continuous development. The current version of the interpreter software is now available online for testing purposes at the following web site: http://interpreter-eto.rhcloud.com. The future plan is to implement conversion methods for standard formats and to create standard online interfaces in order to make it possible to use the features of the software as a service. This would allow the algorithm to be employed in both existing and future library systems to analyse UDC numbers without any significant programming effort.
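As a rough illustration of what "interpreting" a pre-combined UDC number into an intermediate XML format can mean, the sketch below splits a notation on a few common connector symbols (+ for coordination, / for ranges, : and :: for relations) and emits one entry point per component. The element names and the symbol set are invented for this example; the actual interpreter handles the full UDC syntax, which this toy does not.

```java
// Toy sketch: tokenize a pre-combined UDC notation on common connectors and
// emit an XML record listing the recommended entry points. Not the authors'
// interpreter; element names and the connector set are illustrative only.
public class UdcSketch {
    public static String toXml(String notation) {
        StringBuilder xml = new StringBuilder("<udc notation=\"" + notation + "\">\n");
        for (String part : notation.split("::|[+/:]")) {   // +, /, :, :: connectors
            if (!part.isBlank())
                xml.append("  <entryPoint>").append(part.trim()).append("</entryPoint>\n");
        }
        return xml.append("</udc>").toString();
    }

    public static void main(String[] args) {
        // e.g. mining plus metallurgy, with a place auxiliary on the second number
        System.out.println(toXml("622+669(485)"));
    }
}
```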