Search (1405 results, page 4 of 71)

  • Active filter: language_ss:"e"
  1. Keenan, S.; Johnston, C.: Concise dictionary of library and information science (2000) 0.07
    0.07151692 = product of:
      0.2860677 = sum of:
        0.2860677 = weight(_text_:handling in 2354) [ClassicSimilarity], result of:
          0.2860677 = score(doc=2354,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.69297814 = fieldWeight in 2354, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.078125 = fieldNorm(doc=2354)
      0.25 = coord(1/4)
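    The explain tree above can be reproduced with a short sketch of Lucene's ClassicSimilarity arithmetic: tf = sqrt(freq), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and the final score is queryWeight × fieldWeight × coord. A minimal illustration, using the values from this first result (the function name is illustrative, not part of Lucene's API):

    ```python
    import math

    def classic_similarity(freq, idf, query_norm, field_norm, coord):
        """Recompute a Lucene ClassicSimilarity explain tree (illustrative helper)."""
        tf = math.sqrt(freq)                   # 1.4142135 for freq=2.0
        query_weight = idf * query_norm        # 0.4128091
        field_weight = tf * idf * field_norm   # 0.69297814
        score = query_weight * field_weight    # 0.2860677
        return score * coord                   # coord(1/4) scales by 0.25

    final = classic_similarity(freq=2.0, idf=6.272122,
                               query_norm=0.0658165,
                               field_norm=0.078125, coord=0.25)
    print(final)  # ≈ 0.07151692, matching the displayed document score
    ```

    The same arithmetic, with different freq and fieldNorm values, accounts for every explain tree in this result list; multi-clause results (e.g. items 5 and 6) sum the per-term weights before applying coord.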
    
    Abstract
    Contains about 5,000 terms in one alphabetical sequence, incorporating six themes (information sources, information handling, computers and telecommunications, management, research methodology, and publishing)
  2. Bjerregaard, T.D.: Experiences from an IRM project in three Danish industrial companies (1989) 0.07
    0.07079814 = product of:
      0.28319255 = sum of:
        0.28319255 = weight(_text_:handling in 2927) [ClassicSimilarity], result of:
          0.28319255 = score(doc=2927,freq=4.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.68601334 = fieldWeight in 2927, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0546875 = fieldNorm(doc=2927)
      0.25 = coord(1/4)
    
    Abstract
    The concept of Information Resource Management has been used as a working model in a project which aims at improving the awareness of information handling in Danish industrial companies. Describes the content of the project and the results obtained so far, almost halfway through the project. An important part of the project has been to analyse and describe the actual situation of information handling in three companies and to come up with proposals for improvements to the problems encountered. Some of the practical experiences gained during that process are described.
  3. Barker, P.: Living books and dynamic electronic libraries (1996) 0.07
    0.07079814 = product of:
      0.28319255 = sum of:
        0.28319255 = weight(_text_:handling in 150) [ClassicSimilarity], result of:
          0.28319255 = score(doc=150,freq=4.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.68601334 = fieldWeight in 150, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0546875 = fieldNorm(doc=150)
      0.25 = coord(1/4)
    
    Abstract
    Libraries have become an established part of scientific and social cultures and provide an essential mechanism for storing, preserving and sharing documentary records of various types of human endeavour. In recent years, new information handling technologies have emerged; these have significantly influenced the basic nature of conventional paper-based libraries and have created a need for new types of 'electronic library'. Discusses some of the changes that have taken place within library systems as a consequence of the emergence of new computerized information handling techniques, and presents case studies outlining various developments at the Human-Computer Interaction Laboratory, School of Computing and Mathematics, Teesside University, UK, relating to the creation of electronic books and dynamic electronic libraries, including the Open Access Student Information Service (OASIS)
  4. Shaw, R.R.: Mechanical storage, handling, retrieval and supply of information (1958) 0.07
    0.07079814 = product of:
      0.28319255 = sum of:
        0.28319255 = weight(_text_:handling in 697) [ClassicSimilarity], result of:
          0.28319255 = score(doc=697,freq=4.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.68601334 = fieldWeight in 697, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0546875 = fieldNorm(doc=697)
      0.25 = coord(1/4)
    
    Abstract
    The technical and administrative problems involved in the storage, handling, and retrieval of library information are emphasized throughout this detailed account of the equipment in present use. Reference is made to previous studies, and suggestions are given for future research. Particular attention is paid to the need for fundamental systems studies and for full investigation of the requirements of the scholar. The author concludes that the problem has been approached in a piecemeal and 'gadget' fashion and stresses the need for more detailed analysis of the usefulness and economic justification of each separate piece of machinery, without, however, losing sight of the problem in its entirety. By way of practical illustration, a method for making the resources of Harvard University's Lamont Library available to all colleges is suggested at the end.
  5. Hill, L.: New Protocols for Gazetteer and Thesaurus Services (2002) 0.06
    0.06436761 = product of:
      0.12873521 = sum of:
        0.014308145 = weight(_text_:und in 2206) [ClassicSimilarity], result of:
          0.014308145 = score(doc=2206,freq=2.0), product of:
            0.14597435 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0658165 = queryNorm
            0.098018214 = fieldWeight in 2206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.03125 = fieldNorm(doc=2206)
        0.114427075 = weight(_text_:handling in 2206) [ClassicSimilarity], result of:
          0.114427075 = score(doc=2206,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.27719125 = fieldWeight in 2206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.03125 = fieldNorm(doc=2206)
      0.5 = coord(2/4)
    
    Abstract
    The Alexandria Digital Library Project announces the online publication of two protocols to support querying and response interactions using distributed services: one for gazetteers and one for thesauri. These protocols have been developed for our own purposes and also to support the general interoperability of gazetteers and thesauri on the web. See <http://www.alexandria.ucsb.edu/~gjanee/gazetteer/> and <http://www.alexandria.ucsb.edu/~gjanee/thesaurus/>. For the gazetteer protocol, we have provided a page of test forms that can be used to experiment with the operational functions of the protocol in accessing two gazetteers: the ADL Gazetteer and the ESRI Gazetteer (ESRI has participated in the development of the gazetteer protocol). We are in the process of developing a thesaurus server and a simple client to demonstrate the use of the thesaurus protocol.
    We are soliciting comments on both protocols. Please remember that we are seeking protocols that are essentially "simple" and easy to implement and that support basic operations - they should not duplicate all of the functions of specialized gazetteer and thesaurus interfaces. We continue to discuss ways of handling various issues and to further develop the protocols.
    For the thesaurus protocol, outstanding issues include the treatment of multilingual thesauri and the degree to which the language attribute should be supported; whether the Scope Note element should be changed to a repeatable Note element; the best way to handle the hierarchical report for multi-hierarchies where portions of the hierarchy are repeated; and whether support for searching by term identifiers is redundant and unnecessary given that the terms themselves are unique within a thesaurus. For the gazetteer protocol, we continue to work on validation of query and report XML documents and on implementing the part of the protocol designed to support the submission of new entries to a gazetteer.
We would like to encourage open discussion of these protocols through the NKOS discussion list (see the NKOS webpage at <http://nkos.slis.kent.edu/>) and the CGGR-L discussion list that focuses on gazetteer development (see ADL Gazetteer Development page at <http://www.alexandria.ucsb.edu/gazetteer>).
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  6. Information science in transition (2009) 0.06
    0.06321683 = product of:
      0.12643366 = sum of:
        0.025293468 = weight(_text_:und in 1634) [ClassicSimilarity], result of:
          0.025293468 = score(doc=1634,freq=16.0), product of:
            0.14597435 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0658165 = queryNorm
            0.17327337 = fieldWeight in 1634, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.01953125 = fieldNorm(doc=1634)
        0.10114019 = weight(_text_:handling in 1634) [ClassicSimilarity], result of:
          0.10114019 = score(doc=1634,freq=4.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.24500476 = fieldWeight in 1634, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.01953125 = fieldNorm(doc=1634)
      0.5 = coord(2/4)
    
    Abstract
    Are we at a turning point in digital information? The expansion of the internet was unprecedented; search engines dealt with it in the only way possible - scan as much as they could and throw it all into an inverted index. But now search engines are beginning to experiment with deep web searching and attention to taxonomies, and the semantic web is demonstrating how much more can be done with a computer if you give it knowledge. What does this mean for the skills and focus of the information science (or sciences) community? Should information designers and information managers work more closely to create computer based information systems for more effective retrieval? Will information science become part of computer science and does the rise of the term informatics demonstrate the convergence of information science and information technology - a convergence that must surely develop in the years to come? Issues and questions such as these are reflected in this monograph, a collection of essays written by some of the most pre-eminent contributors to the discipline. These peer reviewed perspectives capture insights into advances in, and facets of, information science, a profession in transition. 
 With an introduction from Jack Meadows the key papers are: Meeting the challenge, by Brian Vickery; The developing foundations of information science, by David Bawden; The last 50 years of knowledge organization, by Stella G Dextre Clarke; On the history of evaluation in IR, by Stephen Robertson; The information user, by Tom Wilson; The sociological turn in information science, by Blaise Cronin; From chemical documentation to chemoinformatics, by Peter Willett; Health informatics, by Peter A Bath; Social informatics and sociotechnical research, by Elisabeth Davenport; The evolution of visual information retrieval, by Peter Enser; Information policies, by Elizabeth Orna; Disparity in professional qualifications and progress in information handling, by Barry Mahon; Electronic scholarly publishing and open access, by Charles Oppenheim; Social software: fun and games, or business tools? by Wendy A Warr; and, Bibliometrics to webometrics, by Mike Thelwall. This monograph previously appeared as a special issue of the "Journal of Information Science", published by Sage. Reproduced here as a monograph, this important collection of perspectives on a skill set in transition from a prestigious line-up of authors will now be available to information studies students worldwide and to all those working in the information science field.
    Content
    Contents: Fifty years of UK research in information science - Jack Meadows / Smoother pebbles and the shoulders of giants: the developing foundations of information science - David Bawden / The last 50 years of knowledge organization: a journey through my personal archives - Stella G. Dextre Clarke / On the history of evaluation in IR - Stephen Robertson / The information user: past, present and future - Tom Wilson / The sociological turn in information science - Blaise Cronin / From chemical documentation to chemoinformatics: 50 years of chemical information science - Peter Willett / Health informatics: current issues and challenges - Peter A. Bath / Social informatics and sociotechnical research - a view from the UK - Elisabeth Davenport / The evolution of visual information retrieval - Peter Enser / Information policies: yesterday, today, tomorrow - Elizabeth Orna / The disparity in professional qualifications and progress in information handling: a European perspective - Barry Mahon / Electronic scholarly publishing and Open Access - Charles Oppenheim / Social software: fun and games, or business tools ? - Wendy A. Warr / Bibliometrics to webometrics - Mike Thelwall / How I learned to love the Brits - Eugene Garfield
    Footnote
    Rez. in: Mitt VÖB 62(2009) H.3, S.95-99 (O. Oberhauser): "This handsome volume collects 16 contributions and two editorials that first appeared in 2008 as a special issue of the Journal of Information Science, marking the 50th anniversary of the founding of the Institute of Information Scientists (IIS), which has not existed as an independent body since 2002. Generally speaking, the essays reflect the state of information science (IS) then, now, and over the course of those 50 years, with an emphasis on developments in the United Kingdom. The contributors are established and renowned representatives of British information science and practice; the sole exception is Eugene Garfield (USA), who closes the volume with personal reminiscences. With this reissue of the collection as a hardcover publication, editors and publisher wanted above all to reach a wider readership, but also to give libraries that hold the journal the opportunity to shelve the work additionally as a monograph. . . . The question remains whether renewed publication as a book is justified. In terms of content the volume is beyond doubt impressive. Anyone interested in information science will profit from the texts collected here. And of course it is convenient to have a solid book publication in hand, one which, unlike the journal volume, can also be borrowed from many libraries. Everything else is really just a question of budget." Further reviews in: IWP 61(2010) H.2, S.148 (L. Weisel); JASIST 61(2010) no.7, S.1505 (M. Buckland); KO 38(2011) no.2, S.171-173 (P. Matthews): "Armed then with tools and techniques often applied to the structural analysis of other scientific fields, this volume frequently sees researchers turning this lens on themselves and ranges in tone from the playfully reflexive to the (parentally?) overprotective. What is in fact revealed is a rather disparate collection of research areas, all making a valuable contribution to our understanding of the nature of information. As is perhaps the tendency with overzealous lumpers (see http://en.wikipedia.org/wiki/Lumpers_and_splitters), some attempts to bring these areas together seem a little forced. The splitters help draw attention to quite distinct specialisms, IS's debts to other fields, and the ambition of some emerging subfields to take up intellectual mantles established elsewhere. In the end, the multidisciplinary nature of information science shines through. With regard to future directions, the subsumption of IS into computer science is regarded as in many ways inevitable, although there is consensus that the distinct infocentric philosophy and outlook which has evolved within IS is something to be retained." Further review in: KO 39(2012) no.6, S.463-465 (P. Matthews)
    RSWK
    Informations- und Dokumentationswissenschaft / Aufsatzsammlung
    Subject
    Informations- und Dokumentationswissenschaft / Aufsatzsammlung
  7. Braeckman, J.: ¬The integration of library information into a campus wide information system (1996) 0.06
    0.06320464 = product of:
      0.25281855 = sum of:
        0.25281855 = weight(_text_:java in 729) [ClassicSimilarity], result of:
          0.25281855 = score(doc=729,freq=2.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.5450528 = fieldWeight in 729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=729)
      0.25 = coord(1/4)
    
    Abstract
    Discusses the development of Campus Wide Information Systems with reference to the work of Leuven University Library. A 4th phase can now be distinguished in the evolution of CWISs as they evolve towards Intranets. WWW technology is applied to organise a consistent interface to different types of information, databases and services within an institution. WWW servers now exist via which queries and query results are translated from the Web environment to the specific database query language and vice versa. The integration of Java will enable programs to be executed from within the Web environment. Describes each phase of CWIS development at KU Leuven
  8. Chang, S.-F.; Smith, J.R.; Meng, J.: Efficient techniques for feature-based image / video access and manipulations (1997) 0.06
    0.06320464 = product of:
      0.25281855 = sum of:
        0.25281855 = weight(_text_:java in 756) [ClassicSimilarity], result of:
          0.25281855 = score(doc=756,freq=2.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.5450528 = fieldWeight in 756, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=756)
      0.25 = coord(1/4)
    
    Abstract
    Describes 2 research projects aimed at studying the parallel issues of image and video indexing, information retrieval and manipulation: VisualSEEK, a content based image query system and a Java based WWW application supporting localised colour and spatial similarity retrieval; and CVEPS (Compressed Video Editing and Parsing System) which supports video manipulation with indexing support of individual frames from VisualSEEK and a hierarchical new video browsing and indexing system. In both media forms, these systems address the problem of heterogeneous unconstrained collections
  9. Lo, M.L.: Recent strategies for retrieving chemical structure information on the Web (1997) 0.06
    0.06320464 = product of:
      0.25281855 = sum of:
        0.25281855 = weight(_text_:java in 3611) [ClassicSimilarity], result of:
          0.25281855 = score(doc=3611,freq=2.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.5450528 = fieldWeight in 3611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3611)
      0.25 = coord(1/4)
    
    Abstract
    Discusses various structural searching methods available on the Web. Some databases, such as the Brookhaven Protein Database, use keyword searching, which does not provide the desired substructure search capabilities. Others, like CS ChemFinder and MDL's Chemscape, use graphical plug-in programs. Although plug-in programs provide more capabilities, users first have to obtain a copy of the programs. Due to this limitation, Tripos' WebSketch and ACD Interactive Lab adopt a different approach. Using Java applets, users create and display a structure query of the molecule on the web page without using other software. The new technique is likely to extend itself to other electronic publications
  10. Priss, U.: ¬A graphical interface for conceptually navigating faceted thesauri (1998) 0.06
    0.06320464 = product of:
      0.25281855 = sum of:
        0.25281855 = weight(_text_:java in 658) [ClassicSimilarity], result of:
          0.25281855 = score(doc=658,freq=2.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.5450528 = fieldWeight in 658, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=658)
      0.25 = coord(1/4)
    
    Abstract
    This paper describes a graphical interface for the navigation and construction of faceted thesauri that is based on formal concept analysis. Each facet of a thesaurus is represented as a mathematical lattice that is further subdivided into components. Users can graphically navigate through the Java implementation of the interface by clicking on terms that connect facets and components. Since there are many applications for thesauri in the knowledge representation field, such a graphical interface has the potential of being very useful
  11. Renehan, E.J.: Science on the Web : a connoisseur's guide to over 500 of the best, most useful, and most fun science Websites (1996) 0.06
    0.06320464 = product of:
      0.25281855 = sum of:
        0.25281855 = weight(_text_:java in 1211) [ClassicSimilarity], result of:
          0.25281855 = score(doc=1211,freq=2.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.5450528 = fieldWeight in 1211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1211)
      0.25 = coord(1/4)
    
    Abstract
    Written by the author of the best-selling 1001 really cool Web sites, this fun and informative book enables readers to take full advantage of the Web. More than a mere directory, it identifies and describes the best sites, guiding surfers to such innovations as VRML 3-D and Java. Aside from downloads of Web browsers, Renehan points the way to free compilers and interpreters as well as free online access to major scientific journals
  12. Friedrich, M.; Schimkat, R.-D.; Küchlin, W.: Information retrieval in distributed environments based on context-aware, proactive documents (2002) 0.06
    0.06320464 = product of:
      0.25281855 = sum of:
        0.25281855 = weight(_text_:java in 4608) [ClassicSimilarity], result of:
          0.25281855 = score(doc=4608,freq=2.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.5450528 = fieldWeight in 4608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=4608)
      0.25 = coord(1/4)
    
    Abstract
    In this position paper we propose a document-centric middleware component called Living Documents to support context-aware information retrieval in distributed communities. A Living Document acts as a micro server for a document which contains computational services, a semi-structured knowledge repository to uniformly store and access context-related information, and finally the document's digital content. Our initial prototype of Living Documents is based on the concept of mobile agents and implemented in Java and XML.
  13. Hancock, B.; Giarlo, M.J.: Moving to XML : Latin texts XML conversion project at the Center for Electronic Texts in the Humanities (2001) 0.06
    0.06320464 = product of:
      0.25281855 = sum of:
        0.25281855 = weight(_text_:java in 5801) [ClassicSimilarity], result of:
          0.25281855 = score(doc=5801,freq=2.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.5450528 = fieldWeight in 5801, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=5801)
      0.25 = coord(1/4)
    
    Abstract
    The delivery of documents on the Web has moved beyond the restrictions of the traditional Web markup language, HTML. HTML's static tags cannot deal with the variety of data formats now beginning to be exchanged between various entities, whether corporate or institutional. XML solves many of the problems by allowing arbitrary tags, which describe the content for a particular audience or group. At the Center for Electronic Texts in the Humanities the Latin texts of Lector Longinquus are being transformed to XML in readiness for the expected new standard. To allow existing browsers to render these texts, a Java program is used to transform the XML to HTML on the fly.
  14. Calishain, T.; Dornfest, R.: Google hacks : 100 industrial-strength tips and tools (2003) 0.06
    0.06303135 = product of:
      0.1260627 = sum of:
        0.09029234 = weight(_text_:java in 134) [ClassicSimilarity], result of:
          0.09029234 = score(doc=134,freq=2.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.19466174 = fieldWeight in 134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.01953125 = fieldNorm(doc=134)
        0.035770364 = weight(_text_:und in 134) [ClassicSimilarity], result of:
          0.035770364 = score(doc=134,freq=32.0), product of:
            0.14597435 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0658165 = queryNorm
            0.24504554 = fieldWeight in 134, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.01953125 = fieldNorm(doc=134)
      0.5 = coord(2/4)
    
    Footnote
    Rez. in: nfd - Information Wissenschaft und Praxis 54(2003) H.4, S.253 (D. Lewandowski): "Mit "Google Hacks" liegt das bisher umfassendste Werk vor, das sich ausschließlich an den fortgeschrittenen Google-Nutzer wendet. Daher wird man in diesem Buch auch nicht die sonst üblichen Anfänger-Tips finden, die Suchmaschinenbücher und sonstige Anleitungen zur Internet-Recherche für den professionellen Nutzer in der Regel uninteressant machen. Mit Tara Calishain hat sich eine Autorin gefunden, die bereits seit nahezu fünf Jahren einen eigenen Suchmaschinen-Newsletter (www.researchbuzz.com) herausgibt und als Autorin bzw. Co-Autorin einige Bücher zum Thema Recherche verfasst hat. Für die Programmbeispiele im Buch ist Rael Dornfest verantwortlich. Das erste Kapitel ("Searching Google") gibt einen Einblick in erweiterte Suchmöglichkeiten und Spezifika der behandelten Suchmaschine. Dabei wird der Rechercheansatz der Autorin klar: die beste Methode sei es, die Zahl der Treffer selbst so weit einzuschränken, dass eine überschaubare Menge übrig bleibt, die dann tatsächlich gesichtet werden kann. Dazu werden die feldspezifischen Suchmöglichkeiten in Google erläutert, Tips für spezielle Suchen (nach Zeitschriftenarchiven, technischen Definitionen, usw.) gegeben und spezielle Funktionen der Google-Toolbar erklärt. Bei der Lektüre fällt positiv auf, dass auch der erfahrene Google-Nutzer noch Neues erfährt. Einziges Manko in diesem Kapitel ist der fehlende Blick über den Tellerrand: zwar ist es beispielsweise möglich, mit Google eine Datumssuche genauer als durch das in der erweiterten Suche vorgegebene Auswahlfeld einzuschränken; die aufgezeigte Lösung ist jedoch ausgesprochen umständlich und im Recherchealltag nur eingeschränkt zu gebrauchen. Hier fehlt der Hinweis, dass andere Suchmaschinen weit komfortablere Möglichkeiten der Einschränkung bieten. 
Natürlich handelt es sich bei dem vorliegenden Werk um ein Buch ausschließlich über Google, trotzdem wäre hier auch ein Hinweis auf die Schwächen hilfreich gewesen. In späteren Kapiteln werden durchaus auch alternative Suchmaschinen zur Lösung einzelner Probleme erwähnt. Das zweite Kapitel widmet sich den von Google neben der klassischen Websuche angebotenen Datenbeständen. Dies sind die Verzeichniseinträge, Newsgroups, Bilder, die Nachrichtensuche und die (hierzulande) weniger bekannten Bereichen Catalogs (Suche in gedruckten Versandhauskatalogen), Froogle (eine in diesem Jahr gestartete Shopping-Suchmaschine) und den Google Labs (hier werden von Google entwickelte neue Funktionen zum öffentlichen Test freigegeben). Nachdem die ersten beiden Kapitel sich ausführlich den Angeboten von Google selbst gewidmet haben, beschäftigt sich das Buch ab Kapitel drei mit den Möglichkeiten, die Datenbestände von Google mittels Programmierungen für eigene Zwecke zu nutzen. Dabei werden einerseits bereits im Web vorhandene Programme vorgestellt, andererseits enthält das Buch viele Listings mit Erläuterungen, um eigene Applikationen zu programmieren. Die Schnittstelle zwischen Nutzer und der Google-Datenbank ist das Google-API ("Application Programming Interface"), das es den registrierten Benutzern erlaubt, täglich bis zu 1.00o Anfragen über ein eigenes Suchinterface an Google zu schicken. Die Ergebnisse werden so zurückgegeben, dass sie maschinell weiterverarbeitbar sind. Außerdem kann die Datenbank in umfangreicherer Weise abgefragt werden als bei einem Zugang über die Google-Suchmaske. Da Google im Gegensatz zu anderen Suchmaschinen in seinen Benutzungsbedingungen die maschinelle Abfrage der Datenbank verbietet, ist das API der einzige Weg, eigene Anwendungen auf Google-Basis zu erstellen. Ein eigenes Kapitel beschreibt die Möglichkeiten, das API mittels unterschiedlicher Programmiersprachen wie PHP, Java, Python, usw. zu nutzen. 
The examples in the book, however, are all written in Perl, so it seems sensible to start one's own experiments in that language as well.
    The sixth chapter contains 26 applications of the Google API, some developed by the book's authors themselves, others put online by other authors. Among the applications singled out as particularly useful are the Touchgraph Google Browser for visualizing results and an application that allows Google searches with proximity operators. It is striking that the more interesting of these applications were not programmed by the book's authors, who confined themselves to simpler applications such as counting hits by top-level domain. Nonetheless, most of these applications are useful. A further chapter presents pranks and games realized with the Google API. Their usefulness is questionable, of course, though for completeness' sake they may belong in the book. More interesting again is the final chapter, "The Webmaster Side of Google". Here site operators learn how Google works, how best to word and place ads, which rules to observe if they want their pages listed by Google, and finally how to remove pages from the Google index again. These remarks are kept very brief and are therefore no substitute for works that deal in depth with search-engine marketing. In contrast to many other books on the subject, however, they are thoroughly serious and promise no miracles regarding the placement of one's own pages in the Google index. "Google Hacks" can also be recommended to those who do not wish to engage in programming with the API. As the most comprehensive collection to date of tips and techniques for a more targeted use of Google, it is suitable for every advanced Google user.
Admittedly, some of the hacks may have been included simply to reach the total of 100. Other tips, in turn, clearly extend what is possible in searching. In this respect the book also helps to compensate a little for Google's query language, which is unfortunately inadequate for professional needs." - Bergische Landeszeitung Nr.207 vom 6.9.2003, S.RAS04A/1 (Rundschau am Sonntag: Netzwelt) von P. Zschunke: Richtig googeln (s. dort)
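The simplest kind of application attributed to the book's authors above, tallying search hits by top-level domain, can be sketched without the long-retired Google SOAP API by counting domains in any list of result URLs. This is an illustrative sketch only; the function name and the sample URLs are assumptions, not code from the book.

```python
from urllib.parse import urlparse
from collections import Counter

def count_by_tld(urls):
    """Tally a list of search-result URLs by their top-level domain."""
    tlds = Counter()
    for url in urls:
        host = urlparse(url).hostname or ""
        if "." in host:
            # The part after the last dot is the top-level domain.
            tlds[host.rsplit(".", 1)[-1]] += 1
    return tlds

# Hypothetical result list standing in for live search output.
results = [
    "http://www.example.com/page",
    "http://archive.org/item",
    "http://www.museum.de/info",
    "http://catalog.example.com/record",
]
print(count_by_tld(results))  # Counter({'com': 2, 'org': 1, 'de': 1})
```

The same tally could be fed by any search API that returns result URLs; only the input list would change.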
  15. Drabenstott, K.M.: Classification to the rescue : handling the problems of too many and too few retrievals (1996) 0.06
    0.06068412 = product of:
      0.24273647 = sum of:
        0.24273647 = weight(_text_:handling in 5232) [ClassicSimilarity], result of:
          0.24273647 = score(doc=5232,freq=4.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.58801144 = fieldWeight in 5232, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=5232)
      0.25 = coord(1/4)
    
    Abstract
The first studies of online catalog use demonstrated that the problems of too many and too few retrievals plagued the earliest online catalog users. Despite 15 years of system development, implementation, and evaluation, these problems still adversely affect the subject searches of today's online catalog users. In fact, the large-retrievals problem has grown more acute due to the growth of online catalog databases. This paper explores the use of library classifications for consolidating and summarizing high-posted subject searches and for handling subject searches that result in no or too few retrievals. Findings are presented in the form of generalizations about retrievals and library classifications, needed improvements to classification terminology, and suggestions for improved functionality to facilitate the display of retrieved titles in online catalogs
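The consolidation strategy this abstract describes, summarizing a high-posted search by class number, can be sketched as a simple grouping of retrieved records under truncated classification numbers. The sample records, the three-digit cut-off, and the function name are assumptions for illustration, not Drabenstott's actual method.

```python
from collections import defaultdict

def summarize_by_class(records, prefix_len=3):
    """Group retrieved titles by a truncated class number,
    giving a compact overview of a large result set."""
    groups = defaultdict(list)
    for title, class_no in records:
        groups[class_no[:prefix_len]].append(title)
    return dict(groups)

# Hypothetical retrievals: (title, classification number) pairs.
retrievals = [
    ("Online catalogs today", "025.04"),
    ("Subject searching", "025.49"),
    ("History of printing", "686.2"),
]
summary = summarize_by_class(retrievals)
# Two titles fall under class 025, one under 686.
```

A catalog interface could then display one line per class group instead of hundreds of individual titles, letting the user drill down into the relevant class.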
  16. Sitas, A.: ¬The classification of byzantine literature in the Library of Congress classification (2001) 0.06
    0.06068412 = product of:
      0.24273647 = sum of:
        0.24273647 = weight(_text_:handling in 957) [ClassicSimilarity], result of:
          0.24273647 = score(doc=957,freq=4.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.58801144 = fieldWeight in 957, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=957)
      0.25 = coord(1/4)
    
    Abstract
    Topics concerning the Classification of Byzantine literature and, generally, of Byzantine texts are discussed, analyzed and made clear. The time boundaries of this period are described as well as the kinds of published material. Schedule PA (Supplement) of the Library of Congress Classification is discussed and evaluated as far as the handling of Byzantine literature is concerned. Schedule PA is also mentioned, as well as other relevant categories. Based on the results regarding the manner of handling Classical literature texts, it is concluded that a) Early Christian literature and the Fathers of the Church must be excluded from Class PA and b) in order to achieve a uniform, continuous, consistent and reliable classification of Byzantine texts, they must be treated according to the method proposed for Classical literature by the Library of Congress in Schedule PA.
  17. Williamson, N.: ¬An interdisciplinary world and discipline based classification (1998) 0.06
    0.06068412 = product of:
      0.24273647 = sum of:
        0.24273647 = weight(_text_:handling in 1085) [ClassicSimilarity], result of:
          0.24273647 = score(doc=1085,freq=4.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.58801144 = fieldWeight in 1085, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=1085)
      0.25 = coord(1/4)
    
    Abstract
The major classification systems continue to remain discipline based despite significant changes in the structure of knowledge, particularly in the latter part of the 20th century. While it would be desirable to have these systems replaced by systems in keeping with the changes, the probability of this happening in the near future is very slim indeed. Problems of handling interdisciplinarity among subjects in conjunction with existing systems are addressed. The nature of interdisciplinarity is defined and general problems discussed. Principles and methods of handling are examined and new approaches to the problems are proposed. Experiments are currently being carried out to determine how some of the possibilities might be implemented in the existing systems. Experimental examples are under development. Efforts have been made to propose practical solutions and to suggest directions for further theoretical and experimental research
  18. Ritzler, C.: Comparative study of PC-supported thesaurus software (1990) 0.06
    0.057213537 = product of:
      0.22885415 = sum of:
        0.22885415 = weight(_text_:handling in 2218) [ClassicSimilarity], result of:
          0.22885415 = score(doc=2218,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.5543825 = fieldWeight in 2218, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=2218)
      0.25 = coord(1/4)
    
    Abstract
This article presents the results of a comparative study of three PC-supported software packages (INDEX, PROTERM and TMS) for the development, construction and management of thesauri and other word material, with special regard to hardware and software requirements, handling and user interface, and functionality and reliability. Advantages and disadvantages are discussed. The result shows that all three software products comply with the minimum standards of a thesaurus software. After inclusion of additional features distinct differences become visible
  19. Cawkell, A.E.: Selected aspects of image processing and management : review and future prospects (1992) 0.06
    0.057213537 = product of:
      0.22885415 = sum of:
        0.22885415 = weight(_text_:handling in 2658) [ClassicSimilarity], result of:
          0.22885415 = score(doc=2658,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.5543825 = fieldWeight in 2658, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=2658)
      0.25 = coord(1/4)
    
    Abstract
State-of-the-art review of techniques applied to various aspects of image processing in information systems, including: indexing of images in electronic form (manual and computerised indexing and automatic indexing by content); image handling using microcomputers; and descriptions of 8 British Library funded research projects. The article is based on a BLRD report
  20. Claassen, W.T.: Transparent hypermedia? (1992) 0.06
    0.057213537 = product of:
      0.22885415 = sum of:
        0.22885415 = weight(_text_:handling in 4263) [ClassicSimilarity], result of:
          0.22885415 = score(doc=4263,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.5543825 = fieldWeight in 4263, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=4263)
      0.25 = coord(1/4)
    
    Abstract
Considers why the use of hypermedia has not been more widely accepted and applied in practice, given that it is such a powerful information handling technique and has been commercially available for 5 years. Argues that hypermedia is not sufficiently open or transparent to users, enabling them to find relevant information relatively easily and at a high level of sophistication. Suggests that a higher degree of transparency can be obtained by taking into account a variety of issues which can best be accommodated by the designation 'information ecology'

Authors

Languages

  • d 32
  • m 3
  • nl 1

Types

  • a 940
  • m 323
  • el 107
  • s 104
  • i 22
  • n 17
  • r 15
  • x 14
  • b 7
  • ? 1
  • v 1

Themes

Subjects

Classifications