Search (1230 results, page 3 of 62)

  • Active filter: language_ss:"e"
  1. Schaefer, A.; Jordan, M.; Klas, C.-P.; Fuhr, N.: Active support for query formulation in virtual digital libraries : a case study with DAFFODIL (2005) 0.06
    0.06122243 = product of:
      0.24488972 = sum of:
        0.24488972 = weight(_text_:handles in 296) [ClassicSimilarity], result of:
          0.24488972 = score(doc=296,freq=2.0), product of:
            0.5225971 = queryWeight, product of:
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.061608184 = queryNorm
            0.4686014 = fieldWeight in 296, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.0390625 = fieldNorm(doc=296)
      0.25 = coord(1/4)
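    The breakdown above is Lucene's ClassicSimilarity "explain" output. As a sanity check on the arithmetic, the following minimal Python sketch recomputes the final score shown for entry 1 from the quantities in the tree (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), then the query, field, and coordination factors); the same recipe applies to every other explain tree on this page:

      import math

      # Quantities copied from the explain tree for doc 296 and the term "handles"
      freq, doc_freq, max_docs = 2.0, 24, 44421
      field_norm = 0.0390625    # per-field length normalization, fieldNorm(doc=296)
      query_norm = 0.061608184  # query normalization factor shared by all clauses
      coord = 0.25              # coord(1/4): 1 of 4 query clauses matched

      tf = math.sqrt(freq)                             # 1.4142135
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.482592
      query_weight = idf * query_norm                  # 0.5225971
      field_weight = tf * idf * field_norm             # 0.4686014
      score = coord * query_weight * field_weight
      print(f"{score:.8f}")                            # ≈ 0.06122243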
    
    Abstract
    Daffodil is a front-end to federated, heterogeneous digital libraries that aims at strategic support of users during the information-seeking process. It does so by offering a variety of functions for searching, exploring, and managing digital library objects. However, distributed search increases response time, and the conceptual model of the underlying search processes is inherently weaker. This makes query formulation harder, and the resulting waiting times can be frustrating. In this paper, we investigate the concept of proactive support during the user's query formulation. To improve user efficiency and satisfaction, we implemented annotations, proactive support, and error markers on the query form itself. These functions decrease the probability of syntactic or semantic errors in queries. Furthermore, the user is able to make better tactical decisions and feels more confident that the system handles the query properly. Evaluations with 30 subjects showed that user satisfaction improved, whereas no conclusive results were obtained for efficiency.
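    To make the error markers described above concrete, a small illustrative Python sketch follows (not the DAFFODIL implementation; the field list and rules are hypothetical). It proactively checks a fielded query for the kinds of syntactic slips the paper targets, such as unbalanced quotes, unbalanced parentheses, or unknown field names:

      KNOWN_FIELDS = {"author", "title", "year"}  # hypothetical field list

      def check_query(query: str) -> list[str]:
          """Return proactive error markers for a fielded query string."""
          errors = []
          if query.count('"') % 2:
              errors.append("unbalanced quotation marks")
          if query.count("(") != query.count(")"):
              errors.append("unbalanced parentheses")
          for token in query.split():
              if ":" in token:
                  field = token.split(":", 1)[0].lower()
                  if field not in KNOWN_FIELDS:
                      errors.append(f"unknown field '{field}'")
          return errors

      print(check_query('author:fuhr AND titel:"daffodil'))
      # ['unbalanced quotation marks', "unknown field 'titel'"]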
  2. Raieli, R.: The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.06
    0.06122243 = product of:
      0.24488972 = sum of:
        0.24488972 = weight(_text_:handles in 888) [ClassicSimilarity], result of:
          0.24488972 = score(doc=888,freq=2.0), product of:
            0.5225971 = queryWeight, product of:
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.061608184 = queryNorm
            0.4686014 = fieldWeight in 888, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.0390625 = fieldNorm(doc=888)
      0.25 = coord(1/4)
    
    Abstract
    This paper centres on tools for the management of new digital documents, which are not only textual but also visual, video, audio, or multimedia in the full sense. Among its aims is to demonstrate that operating within the terms of generic Information Retrieval through textual language alone is limiting, and that it is instead necessary to consider broader criteria, such as those of MultiMedia Information Retrieval (MMIR), according to which every type of digital document can be analyzed and searched through the elements of language proper to its nature. MMIR is presented as the organic complex of the systems of Text Retrieval, Visual Retrieval, Video Retrieval, and Audio Retrieval, each of which takes an approach to information management that directly handles the concrete textual, visual, audio, or video content of the documents, here defined as content-based. In conclusion, the limits of this content-based objective access to documents are underlined. The discrepancy known as the semantic gap is the one that occurs between semantic-interpretive access and content-based access. Finally, the integration of these conceptions is explained, gathering and combining the merits and advantages of each approach and of the respective systems of access to information.
  3. Djioua, B.; Desclés, J.-P.; Alrahabi, M.: Searching and mining with semantic categories (2012) 0.06
    0.06122243 = product of:
      0.24488972 = sum of:
        0.24488972 = weight(_text_:handles in 1099) [ClassicSimilarity], result of:
          0.24488972 = score(doc=1099,freq=2.0), product of:
            0.5225971 = queryWeight, product of:
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.061608184 = queryNorm
            0.4686014 = fieldWeight in 1099, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.0390625 = fieldNorm(doc=1099)
      0.25 = coord(1/4)
    
    Abstract
    A new model is proposed to retrieve information by automatically building a semantic metatext structure for texts, which allows searching and extracting discourse and semantic information according to certain linguistic categorizations. This paper presents approaches for searching and mining full text with semantic categories. The model is built from two engines. The first, called EXCOM (Djioua et al., 2006; Alrahabi, 2010), is an automatic system for text annotation related to discourse and semantic maps, which are specifications of general linguistic ontologies founded on Applicative and Cognitive Grammar. The annotation layer uses a linguistic method called Contextual Exploration, which handles the polysemic values of a term in texts. Several 'semantic maps' underlying 'points of view' for text mining guide this automatic annotation process. The second engine uses the previously produced semantically annotated texts to create a semantic inverted index, which is able to retrieve relevant documents for queries associated with discourse and semantic categories such as definition, quotation, causality, relations between concepts, etc. (Djioua & Desclés, 2007). This semantic indexation process builds a metatext layer for textual contents. Some data and linguistic rule sets, as well as the general architecture that extends third-party software, are provided as supplementary information.
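    The semantic inverted index described above can be illustrated with a small Python sketch (a simplification over invented data, not the EXCOM system itself): postings are keyed by the discourse/semantic category that the annotation layer assigned, in addition to the term, so a query can ask for documents in which a term occurs inside, say, a definition:

      from collections import defaultdict

      # Hypothetical output of the annotation engine: (doc_id, category, sentence)
      annotated = [
          (1, "definition", "An ontology is a specification of a conceptualization"),
          (1, "quotation",  "As the author writes, categories guide annotation"),
          (2, "causality",  "Polysemy arises because contexts shift"),
          (2, "definition", "A metatext is a structured layer over a text"),
      ]

      # Semantic inverted index: category -> term -> set of doc ids
      index = defaultdict(lambda: defaultdict(set))
      for doc_id, category, sentence in annotated:
          for term in sentence.lower().split():
              index[category][term].add(doc_id)

      # Retrieve documents where "ontology" occurs inside a definition
      print(sorted(index["definition"]["ontology"]))  # [1]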
  4. Ferro, N.; Silvello, G.; Keskustalo, H.; Pirkola, A.; Järvelin, K.: The twist measure for IR evaluation : taking user's effort into account (2016) 0.06
    0.06122243 = product of:
      0.24488972 = sum of:
        0.24488972 = weight(_text_:handles in 3771) [ClassicSimilarity], result of:
          0.24488972 = score(doc=3771,freq=2.0), product of:
            0.5225971 = queryWeight, product of:
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.061608184 = queryNorm
            0.4686014 = fieldWeight in 3771, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.0390625 = fieldNorm(doc=3771)
      0.25 = coord(1/4)
    
    Abstract
    We present a novel measure for ranking evaluation, called Twist (t). It is a measure for informational intents which handles both binary and graded relevance. t stems from the observation that searching is currently taken for granted: it is natural for users to assume that search engines are available and work well. As a consequence, users may take for granted the utility they gain from finding relevant documents, which is the focus of traditional measures. On the contrary, they may feel uneasy when the system returns nonrelevant documents, because they are then forced to do additional work to get the desired information, and this causes avoidable effort. The latter is the focus of t, which evaluates the effectiveness of a system from the point of view of the effort required of users to retrieve the desired information. We provide a formal definition of t, a demonstration of its properties, and introduce the notion of effort/gain plots, which complement traditional utility-based measures. By means of an extensive experimental evaluation, t is shown to grasp different aspects of system performance, to not require extensive and costly assessments, and to be a robust tool for detecting differences between systems.
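    The abstract does not reproduce the formula for t, but the effort/gain idea behind it can be sketched generically: walking down a ranked result list, relevant documents accumulate gain while nonrelevant ones accumulate avoidable effort, and the cumulative pairs yield the points of an effort/gain plot. The Python fragment below illustrates that concept only; it is not the published definition of t:

      def effort_gain_points(ranking: list[int]) -> list[tuple[int, int]]:
          """ranking holds a graded relevance value per rank (0 = nonrelevant).
          Returns the cumulative (effort, gain) pair after each rank."""
          effort = gain = 0
          points = []
          for rel in ranking:
              if rel > 0:
                  gain += rel   # finding a relevant document adds utility
              else:
                  effort += 1   # each nonrelevant document costs extra work
              points.append((effort, gain))
          return points

      print(effort_gain_points([2, 0, 1, 0, 0, 3]))
      # [(0, 2), (1, 2), (1, 3), (2, 3), (3, 3), (3, 6)]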
  5. Ortega, J.L.: The presence of academic journals on Twitter and its relationship with dissemination (tweets) and research impact (citations) (2017) 0.06
    0.06122243 = product of:
      0.24488972 = sum of:
        0.24488972 = weight(_text_:handles in 410) [ClassicSimilarity], result of:
          0.24488972 = score(doc=410,freq=2.0), product of:
            0.5225971 = queryWeight, product of:
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.061608184 = queryNorm
            0.4686014 = fieldWeight in 410, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.0390625 = fieldNorm(doc=410)
      0.25 = coord(1/4)
    
    Abstract
    Purpose: The purpose of this paper is to analyze the relationship between the dissemination of research papers on Twitter and their research impact. Design/methodology/approach: Four types of journal Twitter accounts (journal, owner, publisher, and no Twitter account) were defined to observe differences in the number of tweets and citations. In total, 4,176 articles from 350 journals were extracted from Plum Analytics; this altmetric provider tracks the number of tweets and citations for each paper. Student's t-test for paired samples was used to detect significant differences between each group of journals. Regression analysis was performed to detect which variables may influence the number of tweets and citations obtained. Findings: The results show that journals with their own Twitter account obtain more tweets (46 percent) and citations (34 percent) than journals without one. The number of followers is the variable that attracts the most tweets (β=0.47) and citations (β=0.28), but the effect is small and the fit is not good for tweets (R²=0.46) and insignificant for citations (R²=0.18). Originality/value: This is the first study to test the performance of research journals on Twitter according to their handles, observing how the dissemination of content on this microblogging network influences the citation of their papers.
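    As a sketch of the two statistical steps named in the abstract (a paired t-test between journal groups and a regression of tweets on followers), the following Python fragment uses scipy with synthetic placeholder numbers; the data values are illustrative only, not the study's:

      import numpy as np
      from scipy import stats

      # Placeholder data: tweets per paper for matched journals with/without account
      with_account    = np.array([12, 7, 9, 15, 4, 11])
      without_account = np.array([ 8, 5, 6, 10, 3,  7])
      t_stat, p_value = stats.ttest_rel(with_account, without_account)
      print(f"paired t-test: t={t_stat:.2f}, p={p_value:.3f}")

      # Simple linear regression: do follower counts predict tweets?
      followers = np.array([100, 250, 400, 800, 50, 300])
      slope, intercept, r, p, se = stats.linregress(followers, with_account)
      print(f"beta={slope:.3f}, R^2={r**2:.2f}")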
  6. Arms, W.Y.; Blanchi, C.; Overly, E.A.: An architecture for information in digital libraries (1997) 0.06
    0.060607117 = product of:
      0.24242847 = sum of:
        0.24242847 = weight(_text_:handles in 2260) [ClassicSimilarity], result of:
          0.24242847 = score(doc=2260,freq=4.0), product of:
            0.5225971 = queryWeight, product of:
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.061608184 = queryNorm
            0.46389174 = fieldWeight in 2260, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.02734375 = fieldNorm(doc=2260)
      0.25 = coord(1/4)
    
    Abstract
    Flexible organization of information is one of the key design challenges in any digital library. For the past year, we have been working with members of the National Digital Library Project (NDLP) at the Library of Congress to build an experimental system to organize and store library collections. This is a report on the work. In particular, we describe how a few technical building blocks are used to organize the material in collections, such as the NDLP's, and how these methods fit into a general distributed computing framework. The technical building blocks are part of a framework that evolved as part of the Computer Science Technical Reports Project (CSTR). This framework is described in the paper, "A Framework for Distributed Digital Object Services", by Robert Kahn and Robert Wilensky (1995). The main building blocks are: "digital objects", which are used to manage digital material in a networked environment; "handles", which identify digital objects and other network resources; and "repositories", in which digital objects are stored. These concepts are amplified in "Key Concepts in the Architecture of the Digital Library", by William Y. Arms (1995). In summer 1995, after earlier experimental development, work began on the implementation of a full digital library system based on this framework. In addition to Kahn/Wilensky and Arms, several working papers further elaborate on the design concepts. A paper by Carl Lagoze and David Ely, "Implementation Issues in an Open Architectural Framework for Digital Object Services", delves into some of the repository concepts. The initial repository implementation was based on a paper by Carl Lagoze, Robert McGrath, Ed Overly and Nancy Yeager, "A Design for Inter-Operable Secure Object Stores (ISOS)". Work on the handle system, which began in 1992, is described in a series of papers that can be found on the Handle Home Page. The National Digital Library Program (NDLP) at the Library of Congress is a large-scale project to convert historic collections to digital form and make them widely available over the Internet. The program is described in two articles by Caroline R. Arms, "Historical Collections for the National Digital Library". The NDLP itself draws on experience gained through the earlier American Memory Program. Based on this work, we have built a pilot system that demonstrates how digital objects can be used to organize complex materials, such as those found in the NDLP. The pilot was demonstrated to members of the library in July 1996. The pilot system includes the handle system for identifying digital objects, a pilot repository to store them, and two user interfaces: one designed for librarians to manage digital objects in the repository, the other for library patrons to access the materials stored in the repository. Materials from the NDLP's Coolidge Consumerism compilation have been deposited into the pilot repository. They include a variety of photographs and texts, converted to digital form. The pilot demonstrates the use of handles for identifying such material, the use of meta-objects for managing sets of digital objects, and the choice of metadata. We are now implementing an enhanced prototype system for completion in early 1997.
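    Handles of the kind described above are still resolvable today through the public hdl.handle.net proxy, which exposes a JSON API over HTTP. A minimal Python sketch follows; the handle string is a placeholder (any valid handle works, e.g. a DOI, which is a handle under prefix 10):

      import json
      import urllib.request

      def resolve_handle(handle: str) -> list[dict]:
          """Resolve a handle via the public hdl.handle.net JSON API."""
          url = f"https://hdl.handle.net/api/handles/{handle}"
          with urllib.request.urlopen(url) as resp:
              record = json.load(resp)
          return record.get("values", [])

      # Placeholder handle for illustration
      for value in resolve_handle("10.1000/1"):
          print(value["type"], "->", value["data"].get("value"))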
  7. Braeckman, J.: The integration of library information into a campus wide information system (1996) 0.06
    0.05916332 = product of:
      0.23665328 = sum of:
        0.23665328 = weight(_text_:java in 729) [ClassicSimilarity], result of:
          0.23665328 = score(doc=729,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.5450528 = fieldWeight in 729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=729)
      0.25 = coord(1/4)
    
    Abstract
    Discusses the development of Campus Wide Information Systems with reference to the work of Leuven University Library. A fourth phase can now be distinguished in the evolution of CWISs as they evolve towards intranets. WWW technology is applied to organise a consistent interface to different types of information, databases and services within an institution. WWW servers now exist via which queries and query results are translated from the Web environment to the specific database query language and vice versa. The integration of Java will enable programs to be executed from within the Web environment. Describes each phase of CWIS development at KU Leuven.
  8. Chang, S.-F.; Smith, J.R.; Meng, J.: Efficient techniques for feature-based image / video access and manipulations (1997) 0.06
    0.05916332 = product of:
      0.23665328 = sum of:
        0.23665328 = weight(_text_:java in 756) [ClassicSimilarity], result of:
          0.23665328 = score(doc=756,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.5450528 = fieldWeight in 756, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=756)
      0.25 = coord(1/4)
    
    Abstract
    Describes two research projects aimed at studying the parallel issues of image and video indexing, information retrieval and manipulation: VisualSEEK, a content-based image query system and a Java-based WWW application supporting localised colour and spatial similarity retrieval; and CVEPS (Compressed Video Editing and Parsing System), which supports video manipulation with indexing support of individual frames from VisualSEEK and a new hierarchical video browsing and indexing system. In both media forms, these systems address the problem of heterogeneous unconstrained collections.
  9. Lo, M.L.: Recent strategies for retrieving chemical structure information on the Web (1997) 0.06
    0.05916332 = product of:
      0.23665328 = sum of:
        0.23665328 = weight(_text_:java in 3611) [ClassicSimilarity], result of:
          0.23665328 = score(doc=3611,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.5450528 = fieldWeight in 3611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3611)
      0.25 = coord(1/4)
    
    Abstract
    Discusses various structural searching methods available on the Web. Some databases, such as the Brookhaven Protein Database, use keyword searching, which does not provide the desired substructure search capabilities. Others, like CS ChemFinder and MDL's Chemscape, use graphical plug-in programs. Although plug-in programs provide more capabilities, users first have to obtain a copy of the programs. Due to this limitation, Tripos's WebSketch and ACD Interactive Lab adopt a different approach. Using Java applets, users create and display a structure query of the molecule on the web page without using other software. The new technique is likely to extend itself to other electronic publications.
  10. Kirschenbaum, M.: Documenting digital images : textual meta-data at the Blake Archive (1998) 0.06
    0.05916332 = product of:
      0.23665328 = sum of:
        0.23665328 = weight(_text_:java in 4287) [ClassicSimilarity], result of:
          0.23665328 = score(doc=4287,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.5450528 = fieldWeight in 4287, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=4287)
      0.25 = coord(1/4)
    
    Abstract
    Describes the work undertaken by the William Blake Archive, University of Virginia, to document the metadata tools for handling digital images of illustrations accompanying Blake's work. Images are encoded in both JPEG and TIFF formats. Image Documentation (ID) records are slotted into that portion of the JPEG file reserved for textual metadata. Because the textual content of the ID record becomes part of the image file itself, the documentary metadata travels with the image even if it is downloaded from one location to another. The metadata is invisible when viewing the image but becomes accessible to users via the 'info' button on the control panel of the Java applet.
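    The mechanism described, an ID record carried inside the JPEG file itself, can be imitated with a short Python sketch that inserts a text string as a JPEG comment (COM, marker 0xFFFE) segment directly after the start-of-image marker. This follows the JPEG file format; it is not the Blake Archive's own tooling, and the file names are placeholders:

      import struct

      def embed_comment(src: str, dst: str, text: str) -> None:
          """Insert `text` as a JPEG COM segment right after the SOI marker."""
          data = open(src, "rb").read()
          assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
          payload = text.encode("utf-8")
          # the 2-byte length field counts itself plus the payload
          segment = b"\xff\xfe" + struct.pack(">H", len(payload) + 2) + payload
          with open(dst, "wb") as out:
              out.write(data[:2] + segment + data[2:])

      # embed_comment("plate.jpg", "plate_with_id.jpg",
      #               "ID record: illustration, plate 4, copy B")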
  11. Priss, U.: A graphical interface for conceptually navigating faceted thesauri (1998) 0.06
    0.05916332 = product of:
      0.23665328 = sum of:
        0.23665328 = weight(_text_:java in 658) [ClassicSimilarity], result of:
          0.23665328 = score(doc=658,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.5450528 = fieldWeight in 658, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=658)
      0.25 = coord(1/4)
    
    Abstract
    This paper describes a graphical interface for the navigation and construction of faceted thesauri that is based on formal concept analysis. Each facet of a thesaurus is represented as a mathematical lattice that is further subdivided into components. Users can graphically navigate through the Java implementation of the interface by clicking on terms that connect facets and components. Since there are many applications for thesauri in the knowledge representation field, such a graphical interface has the potential to be very useful.
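    Formal concept analysis, on which the interface rests, derives its lattice from a binary object-attribute context. The compact Python sketch below (with a made-up toy context standing in for thesaurus facets) computes all formal concepts, i.e. the extent/intent pairs that determine each other and form the lattice's nodes:

      from itertools import combinations

      # Toy context: terms and their facet attributes (hypothetical data)
      context = {
          "archive": {"institution", "collection"},
          "library": {"institution", "collection", "public"},
          "museum":  {"institution", "public"},
      }
      objects = set(context)
      attributes = set().union(*context.values())

      def intent_of(objs):  # attributes common to a set of objects
          return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

      def extent_of(attrs):  # objects carrying all given attributes
          return {o for o in objects if attrs <= context[o]}

      concepts = set()
      for r in range(len(objects) + 1):
          for objs in combinations(sorted(objects), r):
              intent = intent_of(set(objs))
              extent = extent_of(intent)
              concepts.add((frozenset(extent), frozenset(intent)))

      for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
          print(sorted(extent), "|", sorted(intent))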
  12. Renehan, E.J.: Science on the Web : a connoisseur's guide to over 500 of the best, most useful, and most fun science Websites (1996) 0.06
    0.05916332 = product of:
      0.23665328 = sum of:
        0.23665328 = weight(_text_:java in 1211) [ClassicSimilarity], result of:
          0.23665328 = score(doc=1211,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.5450528 = fieldWeight in 1211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1211)
      0.25 = coord(1/4)
    
    Abstract
    Written by the author of the best-selling 1001 really cool Web sites, this fun and informative book enables readers to take full advantage of the Web. More than a mere directory, it identifies and describes the best sites, guiding surfers to such innovations as VRML 3-D and Java. Aside from downloads of Web browsers, Renehan points the way to free compilers and interpreters as well as free online access to major scientific journals.
  13. Friedrich, M.; Schimkat, R.-D.; Küchlin, W.: Information retrieval in distributed environments based on context-aware, proactive documents (2002) 0.06
    0.05916332 = product of:
      0.23665328 = sum of:
        0.23665328 = weight(_text_:java in 4608) [ClassicSimilarity], result of:
          0.23665328 = score(doc=4608,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.5450528 = fieldWeight in 4608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=4608)
      0.25 = coord(1/4)
    
    Abstract
    In this position paper we propose a document-centric middleware component called Living Documents to support context-aware information retrieval in distributed communities. A Living Document acts as a micro server for a document which contains computational services, a semi-structured knowledge repository to uniformly store and access context-related information, and finally the document's digital content. Our initial prototype of Living Documents is based on the concept of mobile agents and implemented in Java and XML.
  14. Hancock, B.; Giarlo, M.J.: Moving to XML : Latin texts XML conversion project at the Center for Electronic Texts in the Humanities (2001) 0.06
    0.05916332 = product of:
      0.23665328 = sum of:
        0.23665328 = weight(_text_:java in 5801) [ClassicSimilarity], result of:
          0.23665328 = score(doc=5801,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.5450528 = fieldWeight in 5801, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=5801)
      0.25 = coord(1/4)
    
    Abstract
    The delivery of documents on the Web has moved beyond the restrictions of the traditional Web markup language, HTML. HTML's static tags cannot deal with the variety of data formats now beginning to be exchanged between various entities, whether corporate or institutional. XML solves many of the problems by allowing arbitrary tags, which describe the content for a particular audience or group. At the Center for Electronic Texts in the Humanities, the Latin texts of Lector Longinquus are being transformed to XML in readiness for the expected new standard. To allow existing browsers to render these texts, a Java program is used to transform the XML to HTML on the fly.
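    The on-the-fly XML-to-HTML step described here is conventionally expressed as an XSLT transformation. The abstract does not reproduce the Center's Java code, so the sketch below shows the idea in Python with lxml, using a made-up minimal stylesheet and input:

      from lxml import etree

      # Minimal stylesheet: wrap each <line> of a <text> document in HTML
      xslt_doc = etree.XML(
          b'<xsl:stylesheet version="1.0" '
          b'xmlns:xsl="http://www.w3.org/1999/XSL/Transform">'
          b'<xsl:template match="/text">'
          b'<html><body><p><xsl:value-of select="line"/></p></body></html>'
          b'</xsl:template></xsl:stylesheet>')

      transform = etree.XSLT(xslt_doc)
      doc = etree.XML(b"<text><line>Arma virumque cano</line></text>")
      print(str(transform(doc)))
      # -> <html><body><p>Arma virumque cano</p></body></html>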
  15. Calishain, T.; Dornfest, R.: Google hacks : 100 industrial-strength tips and tools (2003) 0.06
    0.059001118 = product of:
      0.118002236 = sum of:
        0.084519036 = weight(_text_:java in 134) [ClassicSimilarity], result of:
          0.084519036 = score(doc=134,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.19466174 = fieldWeight in 134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.01953125 = fieldNorm(doc=134)
        0.033483204 = weight(_text_:und in 134) [ClassicSimilarity], result of:
          0.033483204 = score(doc=134,freq=32.0), product of:
            0.13664074 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.061608184 = queryNorm
            0.24504554 = fieldWeight in 134, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.01953125 = fieldNorm(doc=134)
      0.5 = coord(2/4)
    
    Footnote
    Rev. in: nfd - Information Wissenschaft und Praxis 54(2003) H.4, p.253 (D. Lewandowski): "With "Google Hacks", the most comprehensive work to date is available that addresses itself exclusively to the advanced Google user. This book therefore does not contain the usual beginners' tips that tend to make search engine books and other guides to Internet research uninteresting for the professional user. In Tara Calishain, an author has been found who has been publishing her own search engine newsletter (www.researchbuzz.com) for almost five years and who has written or co-written several books on the subject of online research. Rael Dornfest is responsible for the program examples in the book. The first chapter ("Searching Google") gives an insight into advanced search options and the specifics of the search engine under discussion. The author's research approach becomes clear here: the best method is to narrow down the number of hits yourself until a manageable set remains that can actually be inspected. To this end, the field-specific search options in Google are explained, tips for special searches (for journal archives, technical definitions, etc.) are given, and special functions of the Google Toolbar are described. While reading, one notices with pleasure that even the experienced Google user still learns something new. The only shortcoming of this chapter is its failure to look beyond Google itself: it is, for example, possible to narrow a date search in Google more precisely than through the selection field provided in the advanced search form, but the solution shown is exceedingly cumbersome and of only limited use in everyday research. A note is missing here that other search engines offer far more convenient options for such restrictions. The work under review is, of course, a book exclusively about Google; nevertheless, a pointer to its weaknesses would have been helpful. In later chapters, alternative search engines are in fact mentioned for solving individual problems. The second chapter is devoted to the data collections Google offers in addition to classic Web search. These are the directory entries, newsgroups, images, the news search, and the (in Germany) less well-known areas Catalogs (search in printed mail-order catalogues), Froogle (a shopping search engine launched this year), and Google Labs (where new functions developed by Google are released for public testing). After the first two chapters have dealt extensively with Google's own offerings, from chapter three onwards the book is concerned with the possibilities of using Google's data holdings for one's own purposes by means of programming. On the one hand, programs already available on the Web are presented; on the other hand, the book contains many listings with explanations for programming one's own applications. The interface between the user and the Google database is the Google API ("Application Programming Interface"), which allows registered users to send up to 1,000 queries per day to Google via their own search interface. The results are returned in a form that can be processed by machine. In addition, the database can be queried in a more extensive way than via the Google search mask.
    Since Google, unlike other search engines, prohibits automated querying of its database in its terms of use, the API is the only way to build applications on top of Google. A separate chapter describes the possibilities of using the API with different programming languages such as PHP, Java, Python, etc. The examples in the book, however, are all written in Perl, so for one's own experiments it seems sensible to start out working in that language as well.
    The sixth chapter contains 26 applications of the Google API, some developed by the authors of the book themselves, some put on the Net by other authors. Among the applications singled out as particularly useful are the Touchgraph Google Browser for visualizing the hits and an application that allows a Google search with proximity operators. It is striking that the more interesting of these applications were not programmed by the book's authors, who confined themselves to simpler applications such as counting hits by top-level domain. Nevertheless, these too are for the most part useful. A further chapter presents pranks and games realized with the Google API. Their usefulness is of course questionable, but for the sake of completeness they may belong in the book. More interesting, in turn, is the last chapter, "The Webmaster Side of Google". Here site operators are told how Google works, how best to formulate and place advertisements, which rules to observe if one wants one's pages to be placed in Google, and finally also how to remove pages from the Google index again. These remarks are kept very brief and therefore do not replace works that deal in depth with search engine marketing. In contrast to some other books on the subject, however, they are thoroughly serious and promise no miracles with regard to the placement of one's own pages in the Google index. "Google Hacks" can also be recommended to those who do not want to occupy themselves with programming via the API. As the most extensive collection to date of tips and techniques for a more targeted use of Google, it is suitable for every advanced Google user. Some of the hacks may simply have been included so that the total of 100 is reached; other tips clearly extend the possibilities of research. In this respect, the book also helps to compensate a little for Google's query language, which is unfortunately inadequate for professional needs." - Bergische Landeszeitung no.207 of 6.9.2003, p.RAS04A/1 (Rundschau am Sonntag: Netzwelt) by P. Zschunke: Richtig googeln (see there)
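    One of the "simpler applications" the review mentions, counting hits by top-level domain, reduces to a loop over result URLs. Since the SOAP-era Google API the book relies on has long been retired, the sketch below runs against a stubbed search function; fake_search is an invented placeholder, not a real client:

      from collections import Counter
      from urllib.parse import urlparse

      def fake_search(query: str) -> list[str]:
          """Stand-in for an API call; returns result URLs (placeholder data)."""
          return [
              "https://example.edu/paper.html",
              "https://example.com/index.html",
              "https://example.org/about.html",
              "https://another.edu/notes.html",
          ]

      def hits_by_tld(query: str) -> Counter:
          tlds = Counter()
          for url in fake_search(query):
              host = urlparse(url).hostname or ""
              tlds[host.rsplit(".", 1)[-1]] += 1
          return tlds

      print(hits_by_tld("information retrieval"))
      # Counter({'edu': 2, 'com': 1, 'org': 1})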
  16. Gibson, P.: Professionals' perfect Web world in sight : users want more information on the Web, and vendors attempt to provide (1998) 0.05
    0.05071142 = product of:
      0.20284568 = sum of:
        0.20284568 = weight(_text_:java in 2656) [ClassicSimilarity], result of:
          0.20284568 = score(doc=2656,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.46718815 = fieldWeight in 2656, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=2656)
      0.25 = coord(1/4)
    
    Abstract
    Many information professionals feel that the time is still far off when the WWW can offer the combined functionality and content of traditional online and CD-ROM databases, but there have been a number of recent Web developments to reflect on. Describes the testing and launch by Ovid of its Java client which, in effect, allows access to its databases on the Web with full search functionality, and the initiative of Euromonitor in providing Web access to its whole collection of consumer research reports and its entire database of business sources. Also reviews the service of a newcomer to the information scene, Information Quest (IQ), founded by Dawson Holdings, which has made an agreement with Infonautics to offer access to its Electric Library database, thus adding over 1,000 reference, consumer and business publications to its Web-based journal service.
  17. Nieuwenhuysen, P.; Vanouplines, P.: Document plus program hybrids on the Internet and their impact on information transfer (1998) 0.05
    0.05071142 = product of:
      0.20284568 = sum of:
        0.20284568 = weight(_text_:java in 2893) [ClassicSimilarity], result of:
          0.20284568 = score(doc=2893,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.46718815 = fieldWeight in 2893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=2893)
      0.25 = coord(1/4)
    
    Abstract
    Examines some of the advanced tools, techniques, methods and standards related to the Internet and WWW which consist of hybrids of documents and software, called 'document program hybrids'. Early Internet systems were based on having documents on one side and software on the other, neatly separated, apart from one another and without much interaction, so that the static document could also exist without computers and networks. Document program hybrids blur this classical distinction, and all components are integrated, interwoven and exist in synergy with each other. Illustrates the techniques with particular reference to practical examples, including: data collections and dedicated software; advanced HTML features on the WWW; multimedia viewer and plug-in software for Internet and WWW browsers; VRML; interaction through a Web server with other servers and with instruments; adaptive hypertext provided by the server; 'webbots' or 'knowbots' or 'searchbots' or 'metasearch engines' or intelligent software agents; Sun's Java; Microsoft's ActiveX; program scripts for HTML and Web browsers; cookies; and Internet push technology with Webcasting channels.
  18. Mills, T.; Moody, K.; Rodden, K.: Providing world wide access to historical sources (1997) 0.05
    0.05071142 = product of:
      0.20284568 = sum of:
        0.20284568 = weight(_text_:java in 3697) [ClassicSimilarity], result of:
          0.20284568 = score(doc=3697,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.46718815 = fieldWeight in 3697, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=3697)
      0.25 = coord(1/4)
    
    Abstract
    A unique collection of historical material covering the lives and events of an English village between 1400 and 1750 has been made available via a WWW-enabled information retrieval system. Since the expected readership of the documents ranges from school children to experienced researchers, providing this information in an easily accessible form has offered many challenges, requiring tools to aid searching and browsing. The file structure of the document collection was replaced by a database, enabling query results to be presented on the fly. A Java interface displays each user's context in a form that allows for easy and intuitive relevance feedback.
  19. Maarek, Y.S.: WebCutter : a system for dynamic and tailorable site mapping (1997) 0.05
    0.05071142 = product of:
      0.20284568 = sum of:
        0.20284568 = weight(_text_:java in 3739) [ClassicSimilarity], result of:
          0.20284568 = score(doc=3739,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.46718815 = fieldWeight in 3739, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=3739)
      0.25 = coord(1/4)
    
    Abstract
    Presents an approach that integrates searching and browsing in a manner that improves both paradigms. When browsing is the primary task, it enables semantic content-based tailoring of Web maps in both the generation and the visualization phases. When search is the primary task, it enables contextualization of the results by augmenting them with the documents' neighbourhoods. This approach is embodied in WebCutter, a client-server system fully integrated with Web software. WebCutter consists of a map generator running off a standard Web server and a map visualization client implemented as a Java applet runnable from any standard Web browser, requiring no installation or external plug-in application. WebCutter is in beta stage and is in the process of being integrated into the Lotus Domino application product line.
  20. Pan, B.; Gay, G.; Saylor, J.; Hembrooke, H.: One digital library, two undergraduate classes, and four learning modules : uses of a digital library in classrooms (2006) 0.05
    0.05071142 = product of:
      0.20284568 = sum of:
        0.20284568 = weight(_text_:java in 907) [ClassicSimilarity], result of:
          0.20284568 = score(doc=907,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.46718815 = fieldWeight in 907, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=907)
      0.25 = coord(1/4)
    
    Abstract
    The KMODDL (kinematic models for design digital library) is a digital library based on a historical collection of kinematic models made of steel and bronze. The digital library contains four types of learning modules including textual materials, QuickTime virtual reality movies, Java simulations, and stereolithographic files of the physical models. The authors report an evaluation study on the uses of the KMODDL in two undergraduate classes. This research reveals that the users in different classes encountered different usability problems, and reported quantitatively different subjective experiences. Further, the results indicate that depending on the subject area, the two user groups preferred different types of learning modules, resulting in different uses of the available materials and different learning outcomes. These findings are discussed in terms of their implications for future digital library design.

Languages

  • d 32
  • m 3
  • nl 1

Types

  • a 794
  • m 310
  • el 107
  • s 93
  • i 21
  • n 17
  • x 12
  • r 10
  • b 7
  • ? 1
  • v 1