Search (1451 results, page 5 of 73)

  • language_ss:"e"
  1. Studwell, W.E.; Hamilton, D.A.: Library of Congress Subject Heading period subdivisions for the Soviet Union : some proposed additions (1986) 0.07
    0.0669184 = product of:
      0.2676736 = sum of:
        0.2676736 = weight(_text_:heading in 478) [ClassicSimilarity], result of:
          0.2676736 = score(doc=478,freq=2.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.668324 = fieldWeight in 478, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.078125 = fieldNorm(doc=478)
      0.25 = coord(1/4)
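The indented tree under each hit is Lucene "explain" output for the ClassicSimilarity (TF-IDF) scoring model. Its arithmetic can be checked directly; the sketch below is a minimal Python reconstruction, assuming the standard ClassicSimilarity definitions tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)) (the listing shows the resulting values but not these formulas):

```python
import math

# Statistics from the first explain tree above (doc 478, term "heading").
doc_freq, max_docs = 284, 44421
query_norm = 0.06621197   # normalizes queryWeight across query terms
field_norm = 0.078125     # encoded length norm of the matched field
freq = 2.0                # term frequency of "heading" in the document
coord = 1 / 4             # 1 of the 4 query terms matched

idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 6.0489783 in the tree
tf = math.sqrt(freq)                              # 1.4142135 in the tree
query_weight = idf * query_norm                   # 0.40051475 in the tree
field_weight = tf * idf * field_norm              # 0.668324 in the tree
score = coord * query_weight * field_weight       # 0.0669184, the hit's score
```

The same pattern accounts for every single-term tree in this listing; only freq, fieldNorm, and the idf statistics change from entry to entry.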
    
  2. Frost, C.O.; Dede, B.A.: Subject heading compatibility between LCSH and catalog files of a large research library : a suggested model for analysis (1988) 0.07
    0.06624585 = product of:
      0.2649834 = sum of:
        0.2649834 = weight(_text_:heading in 654) [ClassicSimilarity], result of:
          0.2649834 = score(doc=654,freq=4.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.661607 = fieldWeight in 654, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.0546875 = fieldNorm(doc=654)
      0.25 = coord(1/4)
    
    Abstract
    Assigned topical and geographic subject headings from a sample of 3,814 bibliographic records in the University of Michigan Library's catalog were analyzed to determine the degree of match with LCSH, 10th edition, and to identify types of heading conflicts that lend themselves to automatic subject authority control. The findings showed a surprising degree of agreement: 44 percent of headings matched LCSH 10th completely. For headings without subdivisions, the match was 88.4 percent. Since 93.6 percent of the topical subdivisions that did not match LCSH were found on the free-floating lists, some consideration should be given to developing a machine-readable file of free-floating subdivisions for matching purposes.
  3. Lopes, M.I.: Principles underlying subject heading languages : an international approach (1996) 0.07
    0.06624585 = product of:
      0.2649834 = sum of:
        0.2649834 = weight(_text_:heading in 6608) [ClassicSimilarity], result of:
          0.2649834 = score(doc=6608,freq=4.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.661607 = fieldWeight in 6608, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.0546875 = fieldNorm(doc=6608)
      0.25 = coord(1/4)
    
    Abstract
    Discusses the problems in establishing commonly accepted principles for subject retrieval between different bibliographic systems. The Working Group on Principles Underlying Subject Heading Languages was established to devise general principles for any subject retrieval system and to review existing real systems in the light of such principles, comparing them in order to evaluate the extent of their coverage and their application in current practices. Provides a background and history of the Working Group. Discusses the principles underlying subject headings and their purposes, the state of the work, and major findings.
  4. Studwell, W.E.: Library of Congress Subject Heading period subdivisions for Southeast Asia : some proposed additions (1982) 0.07
    0.06624585 = product of:
      0.2649834 = sum of:
        0.2649834 = weight(_text_:heading in 417) [ClassicSimilarity], result of:
          0.2649834 = score(doc=417,freq=4.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.661607 = fieldWeight in 417, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.0546875 = fieldNorm(doc=417)
      0.25 = coord(1/4)
    
    Abstract
    Southeast Asia is an important and populous region with an extensive literature. Many libraries in the United States and Canada have large or moderate collections of Southeast Asia materials. Yet the Library of Congress has not provided sufficient subject heading period subdivisions for the area. Additional subdivisions are proposed in detail for: Asia, Southeastern; Indonesia; Malaysia; Singapore; Philippines; Thailand; Indochina; Vietnam; Cambodia; Laos; and Burma. The historical and/or logical justification for the proposed additions follows each area. The function of the essay is not to present absolute answers, but to promote awareness of the problem and to suggest reasonable alternatives.
  5. Palmer, J.W.: Subject authority control and syndetic structure - myth and realities : an inquiry into certain subject heading practices and some questions about their implications (1986) 0.07
    0.06624585 = product of:
      0.2649834 = sum of:
        0.2649834 = weight(_text_:heading in 501) [ClassicSimilarity], result of:
          0.2649834 = score(doc=501,freq=4.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.661607 = fieldWeight in 501, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.0546875 = fieldNorm(doc=501)
      0.25 = coord(1/4)
    
    Abstract
    An examination of subject heading practices in the card catalogs of libraries in one New York State county and an analysis of selected subject headings found that only the largest libraries were able to provide any kind of subject authority control. Furthermore, not even the largest libraries were able to provide the "See Also" references upon which the Library of Congress assignment of subject headings is based. Changes in LCSH headings resulted in great confusion and a dispersal of resources in the smaller libraries. Is this situation typical of practices at other libraries in other parts of the United States? If so, the implications could be very serious. The study offers no answers, but raises important questions.
  6. Salas-Tull, L.; Halverson, J.: Subject heading revision : a comparative study (1987) 0.07
    0.06624585 = product of:
      0.2649834 = sum of:
        0.2649834 = weight(_text_:heading in 507) [ClassicSimilarity], result of:
          0.2649834 = score(doc=507,freq=4.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.661607 = fieldWeight in 507, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.0546875 = fieldNorm(doc=507)
      0.25 = coord(1/4)
    
    Abstract
    Cataloging departments must weigh the goal of high-quality cataloging against the need to make materials available to the patron in a timely, cost-effective fashion. Many cataloging departments still review and revise subject headings assigned by OCLC member libraries to achieve quality cataloging for their libraries. This study evaluates this procedure and compares the number of subject heading revisions made to OCLC cooperative cataloging copy input by research, academic and public libraries. Percentages of revisions did not differ greatly among the three types of libraries and were lower than expected. A reassessment of the library's procedures was recommended and several issues that all libraries should consider were enumerated.
  7. McGarry, D.: Magda Heiner-Freiling and her work in the IFLA Section on Classification and Indexing : ein Erfahrungsbericht (2008) 0.07
    0.06624585 = product of:
      0.2649834 = sum of:
        0.2649834 = weight(_text_:heading in 3201) [ClassicSimilarity], result of:
          0.2649834 = score(doc=3201,freq=4.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.661607 = fieldWeight in 3201, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3201)
      0.25 = coord(1/4)
    
    Abstract
    Magda Heiner-Freiling was an exceptional person, and her participation with the Section on Classification and Indexing helped to produce valuable publications and contributed to a very pleasant working environment. She participated in and contributed to a satellite meeting on subject indexing, to a Working Group on Principles Underlying Subject Heading Languages, and to surveying national libraries and national bibliographies on the subject heading languages and classification systems used. She brought many excellent qualities to her work.
  8. Ferris, A.M.: Results of an expanded survey on the use of Classification Web : they will use it, if you buy it! (2009) 0.07
    0.06624585 = product of:
      0.2649834 = sum of:
        0.2649834 = weight(_text_:heading in 3991) [ClassicSimilarity], result of:
          0.2649834 = score(doc=3991,freq=4.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.661607 = fieldWeight in 3991, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3991)
      0.25 = coord(1/4)
    
    Abstract
    This paper presents the results of a survey examining the extent to which working catalogers use Classification Web, the Library of Congress' online resource for subject heading and classification documentation. An earlier survey analyzed Class Web's usefulness on an institutional level. This broader survey expands on that analysis and provides information on such questions as: what types of institutions subscribe to Class Web; what are the reasons for using Class Web when performing original or copy cataloging; and what other resources do catalogers use for classification/subject heading analysis?
  9. Biswas, P.: Rooted in the past : use of "East Indians" in Library of Congress Subject Headings (2018) 0.07
    0.06624585 = product of:
      0.2649834 = sum of:
        0.2649834 = weight(_text_:heading in 167) [ClassicSimilarity], result of:
          0.2649834 = score(doc=167,freq=4.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.661607 = fieldWeight in 167, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.0546875 = fieldNorm(doc=167)
      0.25 = coord(1/4)
    
    Abstract
    This article argues that the use of the Library of Congress subject heading "East Indians" in reference to individuals from India represents not only a problematic vestige of colonialism, but also a failure of the principle of literary warrant. It provides an overview of the term's historical roots and then examines whether the term is still widely used in published resources. Although assigning a subject heading is not easy and can involve a choice between contested realities of diverse peoples, the author contends that a rejection of outdated terminology is central to providing any culturally sensitive tool for resource organization.
  10. Coates, E.J.: Significance and term relationship in compound headings (1985) 0.07
    0.065566376 = product of:
      0.2622655 = sum of:
        0.2622655 = weight(_text_:heading in 4634) [ClassicSimilarity], result of:
          0.2622655 = score(doc=4634,freq=12.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.6548211 = fieldWeight in 4634, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.03125 = fieldNorm(doc=4634)
      0.25 = coord(1/4)
    
    Abstract
    In the continuing search for criteria for determining the form of compound headings (i.e., headings containing more than one word), many authors have attempted to deal with the problem of entry element and citation order. Among the proposed criteria are Cutter's concept of "significance," Kaiser's formula of "concrete/process," Prevost's "noun rule," and Farradane's categories of relationships (q.v.). One of the problems in applying the criteria has been the difficulty in determining what is "significant," particularly when two or more words in the heading all refer to concrete objects. In the following excerpt from Subject Catalogues: Headings and Structure, a widely cited book on the alphabetical subject catalog, E. J. Coates proposes the concept of "term significance," that is, "the word which evokes the clearest mental image," as the criterion for determining the entry element in a compound heading. Since a concrete object generally evokes a clearer mental image than an action or process, Coates' theory is in line with Kaiser's theory of "concrete/process" (q.v.), which Coates renamed "thing/action." For determining the citation order of component elements in a compound heading where the elements are equally "significant" (i.e., both or all evoking clear mental images), Coates proposes the use of "term relationship" as the determining factor. He has identified twenty different kinds of relationships among terms and set down the citation order for each. Another frequently encountered problem related to citation order is the determination of the entry element for a compound heading which contains a topic and a locality. Entering such headings uniformly under either the topic or the locality has proven to be infeasible in practice. Many headings of this type have the topic as the main heading, subdivided by the locality; others are entered under the locality as the main heading with the topic as the subdivision.
No criteria or rules have been proposed that ensure consistency or predictability. In the following selection, Coates attempts to deal with this problem by ranking the "main areas of knowledge according to the extent to which they appear to be significantly conditioned by locality." The theory Coates expounded in his book was put into practice in compiling the British Technology Index for which Coates served as the editor from 1961 to 1977.
  11. Braeckman, J.: ¬The integration of library information into a campus wide information system (1996) 0.06
    0.06358441 = product of:
      0.25433764 = sum of:
        0.25433764 = weight(_text_:java in 729) [ClassicSimilarity], result of:
          0.25433764 = score(doc=729,freq=2.0), product of:
            0.46662933 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06621197 = queryNorm
            0.5450528 = fieldWeight in 729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=729)
      0.25 = coord(1/4)
    
    Abstract
    Discusses the development of Campus Wide Information Systems with reference to the work of Leuven University Library. A 4th phase can now be distinguished in the evolution of CWISs as they evolve towards Intranets. WWW technology is applied to organise a consistent interface to different types of information, databases and services within an institution. WWW servers now exist via which queries and query results are translated from the Web environment to the specific database query language and vice versa. The integration of Java will enable programs to be executed from within the Web environment. Describes each phase of CWIS development at KU Leuven
  12. Chang, S.-F.; Smith, J.R.; Meng, J.: Efficient techniques for feature-based image / video access and manipulations (1997) 0.06
    0.06358441 = product of:
      0.25433764 = sum of:
        0.25433764 = weight(_text_:java in 756) [ClassicSimilarity], result of:
          0.25433764 = score(doc=756,freq=2.0), product of:
            0.46662933 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06621197 = queryNorm
            0.5450528 = fieldWeight in 756, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=756)
      0.25 = coord(1/4)
    
    Abstract
    Describes 2 research projects aimed at studying the parallel issues of image and video indexing, information retrieval and manipulation: VisualSEEK, a content-based image query system and a Java-based WWW application supporting localised colour and spatial similarity retrieval; and CVEPS (Compressed Video Editing and Parsing System), which supports video manipulation with indexing of individual frames from VisualSEEK and a new hierarchical video browsing and indexing system. In both media forms, these systems address the problem of heterogeneous unconstrained collections.
  13. Lo, M.L.: Recent strategies for retrieving chemical structure information on the Web (1997) 0.06
    0.06358441 = product of:
      0.25433764 = sum of:
        0.25433764 = weight(_text_:java in 3611) [ClassicSimilarity], result of:
          0.25433764 = score(doc=3611,freq=2.0), product of:
            0.46662933 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06621197 = queryNorm
            0.5450528 = fieldWeight in 3611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3611)
      0.25 = coord(1/4)
    
    Abstract
    Discusses various structural searching methods available on the Web. Some databases, such as the Brookhaven Protein Database, use keyword searching, which does not provide the desired substructure search capabilities. Others, like CS ChemFinder and MDL's Chemscape, use graphical plug-in programs. Although plug-in programs provide more capabilities, users first have to obtain a copy of the programs. Due to this limitation, Tripos's WebSketch and ACD Interactive Lab adopt a different approach. Using Java applets, users create and display a structure query of the molecule on the web page without using other software. The new technique is likely to extend itself to other electronic publications.
  14. Kirschenbaum, M.: Documenting digital images : textual meta-data at the Blake Archive (1998) 0.06
    0.06358441 = product of:
      0.25433764 = sum of:
        0.25433764 = weight(_text_:java in 4287) [ClassicSimilarity], result of:
          0.25433764 = score(doc=4287,freq=2.0), product of:
            0.46662933 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06621197 = queryNorm
            0.5450528 = fieldWeight in 4287, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=4287)
      0.25 = coord(1/4)
    
    Abstract
    Describes the work undertaken by the William Blake Archive, University of Virginia, to document the metadata tools for handling digital images of illustrations accompanying Blake's work. Images are encoded in both JPEG and TIFF formats. Image Documentation (ID) records are slotted into that portion of the JPEG file reserved for textual metadata. Because the textual content of the ID record now becomes part of the image file itself, the documentary metadata travels with the image even if it is downloaded from one file to another. The metadata is invisible when viewing the image but becomes accessible to users via the 'info' button on the control panel of the Java applet.
  15. Priss, U.: ¬A graphical interface for conceptually navigating faceted thesauri (1998) 0.06
    0.06358441 = product of:
      0.25433764 = sum of:
        0.25433764 = weight(_text_:java in 658) [ClassicSimilarity], result of:
          0.25433764 = score(doc=658,freq=2.0), product of:
            0.46662933 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06621197 = queryNorm
            0.5450528 = fieldWeight in 658, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=658)
      0.25 = coord(1/4)
    
    Abstract
    This paper describes a graphical interface for the navigation and construction of faceted thesauri that is based on formal concept analysis. Each facet of a thesaurus is represented as a mathematical lattice that is further subdivided into components. Users can graphically navigate through the Java implementation of the interface by clicking on terms that connect facets and components. Since there are many applications for thesauri in the knowledge representation field, such a graphical interface has the potential of being very useful
  16. Renehan, E.J.: Science on the Web : a connoisseur's guide to over 500 of the best, most useful, and most fun science Websites (1996) 0.06
    0.06358441 = product of:
      0.25433764 = sum of:
        0.25433764 = weight(_text_:java in 1211) [ClassicSimilarity], result of:
          0.25433764 = score(doc=1211,freq=2.0), product of:
            0.46662933 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06621197 = queryNorm
            0.5450528 = fieldWeight in 1211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1211)
      0.25 = coord(1/4)
    
    Abstract
    Written by the author of the best-selling 1001 really cool Web sites, this fun and informative book enables readers to take full advantage of the Web. More than a mere directory, it identifies and describes the best sites, guiding surfers to such innovations as VRML 3-D and Java. Aside from downloads of Web browsers, Renehan points the way to free compilers and interpreters as well as free online access to major scientific journals.
  17. Friedrich, M.; Schimkat, R.-D.; Küchlin, W.: Information retrieval in distributed environments based on context-aware, proactive documents (2002) 0.06
    0.06358441 = product of:
      0.25433764 = sum of:
        0.25433764 = weight(_text_:java in 4608) [ClassicSimilarity], result of:
          0.25433764 = score(doc=4608,freq=2.0), product of:
            0.46662933 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06621197 = queryNorm
            0.5450528 = fieldWeight in 4608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=4608)
      0.25 = coord(1/4)
    
    Abstract
    In this position paper we propose a document-centric middleware component called Living Documents to support context-aware information retrieval in distributed communities. A Living Document acts as a micro server for a document which contains computational services, a semi-structured knowledge repository to uniformly store and access context-related information, and finally the document's digital content. Our initial prototype of Living Documents is based on the concept of mobile agents and implemented in Java and XML.
  18. Hancock, B.; Giarlo, M.J.: Moving to XML : Latin texts XML conversion project at the Center for Electronic Texts in the Humanities (2001) 0.06
    0.06358441 = product of:
      0.25433764 = sum of:
        0.25433764 = weight(_text_:java in 5801) [ClassicSimilarity], result of:
          0.25433764 = score(doc=5801,freq=2.0), product of:
            0.46662933 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06621197 = queryNorm
            0.5450528 = fieldWeight in 5801, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=5801)
      0.25 = coord(1/4)
    
    Abstract
    The delivery of documents on the Web has moved beyond the restrictions of the traditional Web markup language, HTML. HTML's static tags cannot deal with the variety of data formats now beginning to be exchanged between various entities, whether corporate or institutional. XML solves many of the problems by allowing arbitrary tags, which describe the content for a particular audience or group. At the Center for Electronic Texts in the Humanities the Latin texts of Lector Longinquus are being transformed to XML in readiness for the expected new standard. To allow existing browsers to render these texts, a Java program is used to transform the XML to HTML on the fly.
  19. Calishain, T.; Dornfest, R.: Google hacks : 100 industrial-strength tips and tools (2003) 0.06
    0.06341009 = product of:
      0.12682018 = sum of:
        0.09083488 = weight(_text_:java in 134) [ClassicSimilarity], result of:
          0.09083488 = score(doc=134,freq=2.0), product of:
            0.46662933 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06621197 = queryNorm
            0.19466174 = fieldWeight in 134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.01953125 = fieldNorm(doc=134)
        0.0359853 = weight(_text_:und in 134) [ClassicSimilarity], result of:
          0.0359853 = score(doc=134,freq=32.0), product of:
            0.14685147 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06621197 = queryNorm
            0.24504554 = fieldWeight in 134, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.01953125 = fieldNorm(doc=134)
      0.5 = coord(2/4)
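Entry 19 is the first hit where two of the four query terms match, so the coord factor rises from 1/4 to 2/4 and the per-term weights are summed before scaling. A small Python sketch, taking the idf, freq, and norm values displayed in the two clauses above as given, reproduces the combined score:

```python
import math

query_norm = 0.06621197
field_norm = 0.01953125
# Per-term statistics copied from the two explain clauses above (doc 134).
terms = {
    "java": {"idf": 7.0475073, "freq": 2.0},
    "und":  {"idf": 2.217899,  "freq": 32.0},
}

total = 0.0
for stats in terms.values():
    query_weight = stats["idf"] * query_norm
    field_weight = math.sqrt(stats["freq"]) * stats["idf"] * field_norm
    total += query_weight * field_weight   # sums to 0.12682018 in the tree

coord = 2 / 4            # 2 of the 4 query terms matched this document
score = coord * total    # 0.06341009, the hit's score
```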
    
    Footnote
    Rez. in: nfd - Information Wissenschaft und Praxis 54(2003) H.4, S.253 (D. Lewandowski): "Mit "Google Hacks" liegt das bisher umfassendste Werk vor, das sich ausschließlich an den fortgeschrittenen Google-Nutzer wendet. Daher wird man in diesem Buch auch nicht die sonst üblichen Anfänger-Tips finden, die Suchmaschinenbücher und sonstige Anleitungen zur Internet-Recherche für den professionellen Nutzer in der Regel uninteressant machen. Mit Tara Calishain hat sich eine Autorin gefunden, die bereits seit nahezu fünf Jahren einen eigenen Suchmaschinen-Newsletter (www.researchbuzz.com) herausgibt und als Autorin bzw. Co-Autorin einige Bücher zum Thema Recherche verfasst hat. Für die Programmbeispiele im Buch ist Rael Dornfest verantwortlich. Das erste Kapitel ("Searching Google") gibt einen Einblick in erweiterte Suchmöglichkeiten und Spezifika der behandelten Suchmaschine. Dabei wird der Rechercheansatz der Autorin klar: die beste Methode sei es, die Zahl der Treffer selbst so weit einzuschränken, dass eine überschaubare Menge übrig bleibt, die dann tatsächlich gesichtet werden kann. Dazu werden die feldspezifischen Suchmöglichkeiten in Google erläutert, Tips für spezielle Suchen (nach Zeitschriftenarchiven, technischen Definitionen, usw.) gegeben und spezielle Funktionen der Google-Toolbar erklärt. Bei der Lektüre fällt positiv auf, dass auch der erfahrene Google-Nutzer noch Neues erfährt. Einziges Manko in diesem Kapitel ist der fehlende Blick über den Tellerrand: zwar ist es beispielsweise möglich, mit Google eine Datumssuche genauer als durch das in der erweiterten Suche vorgegebene Auswahlfeld einzuschränken; die aufgezeigte Lösung ist jedoch ausgesprochen umständlich und im Recherchealltag nur eingeschränkt zu gebrauchen. Hier fehlt der Hinweis, dass andere Suchmaschinen weit komfortablere Möglichkeiten der Einschränkung bieten. 
Natürlich handelt es sich bei dem vorliegenden Werk um ein Buch ausschließlich über Google, trotzdem wäre hier auch ein Hinweis auf die Schwächen hilfreich gewesen. In späteren Kapiteln werden durchaus auch alternative Suchmaschinen zur Lösung einzelner Probleme erwähnt. Das zweite Kapitel widmet sich den von Google neben der klassischen Websuche angebotenen Datenbeständen. Dies sind die Verzeichniseinträge, Newsgroups, Bilder, die Nachrichtensuche und die (hierzulande) weniger bekannten Bereichen Catalogs (Suche in gedruckten Versandhauskatalogen), Froogle (eine in diesem Jahr gestartete Shopping-Suchmaschine) und den Google Labs (hier werden von Google entwickelte neue Funktionen zum öffentlichen Test freigegeben). Nachdem die ersten beiden Kapitel sich ausführlich den Angeboten von Google selbst gewidmet haben, beschäftigt sich das Buch ab Kapitel drei mit den Möglichkeiten, die Datenbestände von Google mittels Programmierungen für eigene Zwecke zu nutzen. Dabei werden einerseits bereits im Web vorhandene Programme vorgestellt, andererseits enthält das Buch viele Listings mit Erläuterungen, um eigene Applikationen zu programmieren. Die Schnittstelle zwischen Nutzer und der Google-Datenbank ist das Google-API ("Application Programming Interface"), das es den registrierten Benutzern erlaubt, täglich bis zu 1.00o Anfragen über ein eigenes Suchinterface an Google zu schicken. Die Ergebnisse werden so zurückgegeben, dass sie maschinell weiterverarbeitbar sind. Außerdem kann die Datenbank in umfangreicherer Weise abgefragt werden als bei einem Zugang über die Google-Suchmaske. Da Google im Gegensatz zu anderen Suchmaschinen in seinen Benutzungsbedingungen die maschinelle Abfrage der Datenbank verbietet, ist das API der einzige Weg, eigene Anwendungen auf Google-Basis zu erstellen. Ein eigenes Kapitel beschreibt die Möglichkeiten, das API mittels unterschiedlicher Programmiersprachen wie PHP, Java, Python, usw. zu nutzen. 
Die Beispiele im Buch sind allerdings alle in Perl geschrieben, so dass es sinnvoll erscheint, für eigene Versuche selbst auch erst einmal in dieser Sprache zu arbeiten.
    Das sechste Kapitel enthält 26 Anwendungen des Google-APIs, die teilweise von den Autoren des Buchs selbst entwickelt wurden, teils von anderen Autoren ins Netz gestellt wurden. Als besonders nützliche Anwendungen werden unter anderem der Touchgraph Google Browser zur Visualisierung der Treffer und eine Anwendung, die eine Google-Suche mit Abstandsoperatoren erlaubt, vorgestellt. Auffällig ist hier, dass die interessanteren dieser Applikationen nicht von den Autoren des Buchs programmiert wurden. Diese haben sich eher auf einfachere Anwendungen wie beispielsweise eine Zählung der Treffer nach der Top-Level-Domain beschränkt. Nichtsdestotrotz sind auch diese Anwendungen zum großen Teil nützlich. In einem weiteren Kapitel werden pranks and games ("Streiche und Spiele") vorgestellt, die mit dem Google-API realisiert wurden. Deren Nutzen ist natürlich fragwürdig, der Vollständigkeit halber mögen sie in das Buch gehören. Interessanter wiederum ist das letzte Kapitel: "The Webmaster Side of Google". Hier wird Seitenbetreibern erklärt, wie Google arbeitet, wie man Anzeigen am besten formuliert und schaltet, welche Regeln man beachten sollte, wenn man seine Seiten bei Google plazieren will und letztlich auch, wie man Seiten wieder aus dem Google-Index entfernen kann. Diese Ausführungen sind sehr knapp gehalten und ersetzen daher keine Werke, die sich eingehend mit dem Thema Suchmaschinen-Marketing beschäftigen. Allerdings sind die Ausführungen im Gegensatz zu manch anderen Büchern zum Thema ausgesprochen seriös und versprechen keine Wunder in Bezug auf eine Plazienung der eigenen Seiten im Google-Index. "Google Hacks" ist auch denjenigen zu empfehlen, die sich nicht mit der Programmierung mittels des APIs beschäftigen möchten. Dadurch, dass es die bisher umfangreichste Sammlung von Tips und Techniken für einen gezielteren Umgang mit Google darstellt, ist es für jeden fortgeschrittenen Google-Nutzer geeignet. 
Some of the hacks may simply have been included to reach the total of 100. Other tips, on the other hand, clearly extend what is possible in searching. In this respect the book also helps to compensate somewhat for Google's query language, which is unfortunately inadequate for professional needs." - Bergische Landeszeitung No. 207, 6 Sept. 2003, p. RAS04A/1 (Rundschau am Sonntag: Netzwelt), by P. Zschunke: Richtig googeln (see there)
  20. Rolland-Thomas, P.: Thesaural codes : an appraisal of their use in the Library of Congress Subject Headings (1993) 0.06
    0.060731784 = product of:
      0.12146357 = sum of:
        0.014394118 = weight(_text_:und in 674) [ClassicSimilarity], result of:
          0.014394118 = score(doc=674,freq=2.0), product of:
            0.14685147 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06621197 = queryNorm
            0.098018214 = fieldWeight in 674, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.03125 = fieldNorm(doc=674)
        0.10706945 = weight(_text_:heading in 674) [ClassicSimilarity], result of:
          0.10706945 = score(doc=674,freq=2.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.2673296 = fieldWeight in 674, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.03125 = fieldNorm(doc=674)
      0.5 = coord(2/4)
    
    Abstract
    LCSH has been known by that name since 1975. It has always created headings to serve the LC collections rather than on a theoretical basis. In 1986 it began, in a mechanical fashion, to replace cross-reference codes with thesaural codes; it was in no way transformed into a thesaurus. Its encyclopedic coverage and its pre-coordinate concepts make it substantially distinct, since thesauri usually map a restricted field of knowledge and use uniterms. The questions raised are whether the new symbols comply with thesaurus standards and whether they are true to one or to several models. Explanations and definitions from other lists of subject headings and thesauri, and literature in the field of classification and subject indexing, will provide some answers. For instance, a see reference leads from a subject heading not used to one or more used headings. Exceptionally it will lead from a specific term to a more general one. Some equate a see reference with the equivalence relationship; such relationships are marked by USE in LCSH. See also references are made from the broader subject to narrower parts of it and also between associated subjects. They suggest lateral or vertical connexions as well as reciprocal relationships. They serve a coordination purpose for some, and lay down a methodical search itinerary for others. Since their inception in the 1950s, thesauri have been devised for indexing and retrieving information in the fields of science and technology. Eventually they extended to a number of social sciences and humanities. Research derived from thesauri was voluminous, and numerous guidelines were designed. They did not discriminate between the "hard" sciences and the social sciences. RT relationships are widely but diversely used in numerous controlled vocabularies. LCSH's aim is to achieve a list almost free of RT and SA references. It thus restricts relationships to BT/NT, USE and UF.
This raises the question as to whether all fields of knowledge can "fit" in the Procrustean bed of BT/NT, i.e., genus/species relationships. Standard codes were devised. It was soon realized that BT/NT, well suited to the genus/species couple, could not signal a whole-part relationship. In LCSH, BT and NT function as reciprocals; the whole-part relationship is taken into account by ISO and amply elaborated upon by authors, and the part-whole connexion is sometimes studied separately. The decision to replace cross-reference codes was an improvement: relations can now be distinguished, though the distinct needs of numerous fields of knowledge are not attended to. Topic inclusion, and topic-subtopic, could provide the missing link where genus/species or whole/part are inadequate. Distinct codes for BT/NT and whole/part should be provided. Sorting relationships with mechanical means can only lead to confusion.
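The relationship codes discussed in the abstract (USE/UF for equivalence, BT/NT for hierarchy, RT for association) can be pictured with a toy data structure. This is a minimal sketch with invented headings, not LCSH data; RT is included for contrast even though LCSH aims to avoid it.

```python
# Toy thesaurus: each heading maps relationship codes to related headings.
# All headings and links below are invented examples.
thesaurus = {
    "Felines": {"USE": "Cats"},           # unused term -> preferred heading
    "Cats": {
        "UF": ["Felines"],                # reciprocal of USE
        "BT": ["Mammals"],                # broader term (genus)
        "NT": ["Siamese cats"],           # narrower term (species)
        "RT": ["Pets"],                   # related (associative) term
    },
    "Siamese cats": {"BT": ["Cats"]},
    "Mammals": {"NT": ["Cats"]},
    "Pets": {"RT": ["Cats"]},
}

def preferred(term):
    """Follow USE references until a preferred heading is reached."""
    seen = set()
    while term in thesaurus and "USE" in thesaurus[term]:
        if term in seen:                  # guard against cyclic references
            break
        seen.add(term)
        term = thesaurus[term]["USE"]
    return term

print(preferred("Felines"))  # -> Cats
```

The abstract's point about whole-part relations amounts to saying that a single BT/NT code conflates two link types that such a structure could keep distinct (e.g. a separate "part-of" key alongside "BT").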
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
