Search (1230 results, page 2 of 62)

  • language_ss:"e"
  1. Laegreid, J.A.: SIFT: a Norwegian information retrieval system (1993) 0.10
    0.09795589 = product of:
      0.39182356 = sum of:
        0.39182356 = weight(_text_:handles in 7700) [ClassicSimilarity], result of:
          0.39182356 = score(doc=7700,freq=2.0), product of:
            0.5225971 = queryWeight, product of:
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.061608184 = queryNorm
            0.74976224 = fieldWeight in 7700, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.0625 = fieldNorm(doc=7700)
      0.25 = coord(1/4)
    
    Abstract
    Describes SIFT (Search in Free Text), an information retrieval system originally developed for administering governmental documents in Norway but now being applied elsewhere. SIFT handles structured information well. A library system, SIFT-BIBL, is now available. SIFT's retrieval engine and search facilities are powerful. Its user interface is limited but being improved. An application programmer interface has been released which will allow programmers to develop their own interfaces. A Windows-based client-server version is now being beta tested.
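    A note on the relevance figures: the breakdown printed above each entry is a Lucene ClassicSimilarity "explain" tree. The document score is coord (the fraction of query clauses matched) times the sum of per-term weights, and each term weight is queryWeight (idf x queryNorm) multiplied by fieldWeight (sqrt(tf) x idf x fieldNorm). The following minimal Python sketch, with our own variable names, reproduces the numbers shown for entry 1 (the term "handles" in doc 7700):

      from math import sqrt, isclose

      # Values copied from the explain tree for "handles" in doc 7700.
      idf = 8.482592           # idf(docFreq=24, maxDocs=44421)
      query_norm = 0.061608184
      freq = 2.0               # term frequency within the field
      field_norm = 0.0625
      coord = 1 / 4            # 1 of 4 query clauses matched

      query_weight = idf * query_norm               # 0.5225971
      field_weight = sqrt(freq) * idf * field_norm  # 0.74976224
      term_weight = query_weight * field_weight     # 0.39182356
      score = coord * term_weight                   # 0.09795589

      assert isclose(score, 0.09795589, rel_tol=1e-6)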
  2. Bazuzi, J.; Wüst, R.: Integrating images into the OPAC : issues in distributed multimedia libraries (1994) 0.10
    0.09795589 = product of:
      0.39182356 = sum of:
        0.39182356 = weight(_text_:handles in 76) [ClassicSimilarity], result of:
          0.39182356 = score(doc=76,freq=2.0), product of:
            0.5225971 = queryWeight, product of:
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.061608184 = queryNorm
            0.74976224 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.0625 = fieldNorm(doc=76)
      0.25 = coord(1/4)
    
    Abstract
    Presents VTLS InfoStation, a multimedia workstation which handles video, audio, text and graphics in an integrated manner. It offers a standard environment which supports library applications and integrates multimedia into the library's OPAC. Discusses technical aspects as well as management issues in setting up a multimedia environment.
  3. Bhasker, U.: Languages of India : cataloguing issues (1993) 0.10
    0.09795589 = product of:
      0.39182356 = sum of:
        0.39182356 = weight(_text_:handles in 693) [ClassicSimilarity], result of:
          0.39182356 = score(doc=693,freq=2.0), product of:
            0.5225971 = queryWeight, product of:
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.061608184 = queryNorm
            0.74976224 = fieldWeight in 693, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.0625 = fieldNorm(doc=693)
      0.25 = coord(1/4)
    
    Abstract
    Issues confronting the cataloger of materials from India include problems relating to numerous languages with diverse scripts, confusion with variant name entries and how the name authority file handles this problem, some subjects pertaining to India and the people of India, and how the use of obsolete terms such as East Indies is an ongoing reference-service problem. Appendices list some of the most common problematic names and propose changes in some subject headings.
  4. Ovid announces strategic partnerships : Java-based interface (1997) 0.10
    0.09562237 = product of:
      0.38248947 = sum of:
        0.38248947 = weight(_text_:java in 397) [ClassicSimilarity], result of:
          0.38248947 = score(doc=397,freq=4.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.8809384 = fieldWeight in 397, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=397)
      0.25 = coord(1/4)
    
    Abstract
    Reports agreements between Ovid Technologies and 5 publishing companies (Blackwell Science, Lippincott-Raven, Munksgaard, Plenum, Williams and Wilkins) to secure the rights to the full text of over 400 leading periodicals. Once the periodicals are loaded on Ovid, they will be linked with other full-text electronic periodicals and with bibliographic databases to produce a web of related documents and threaded information. Concludes with notes on the Ovid Java Client graphical user interface, which offers increased speed when searching the WWW.
  5. Wolf, S.: Automating authority control processes (2020) 0.09
    0.085711405 = product of:
      0.34284562 = sum of:
        0.34284562 = weight(_text_:handles in 680) [ClassicSimilarity], result of:
          0.34284562 = score(doc=680,freq=2.0), product of:
            0.5225971 = queryWeight, product of:
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.061608184 = queryNorm
            0.656042 = fieldWeight in 680, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.0546875 = fieldNorm(doc=680)
      0.25 = coord(1/4)
    
    Abstract
    Authority control is an important part of cataloging since it helps provide consistent access to names, titles, subjects, and genre/forms. There are a variety of methods for providing authority control, ranging from manual, time-consuming processes to automated processes. However, the automated processes often seem out of reach for small libraries when they require a pricey vendor or an expert cataloger. This paper introduces ideas on how to handle authority control using a variety of tools, both paid and free. The author describes how their library handles authority control; compares vendors and programs that can be used to provide varying levels of authority control; and demonstrates authority control using MarcEdit.
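    As a rough illustration of the kind of automated check described above (Wolf demonstrates it with MarcEdit, whose workflow is not reproduced here), the sketch below uses the pymarc library with a made-up authority list and an illustrative file name to flag personal-name headings that have no match in the authority file:

      from pymarc import MARCReader

      # Hypothetical authority entries; in practice these would come from LC/NACO records.
      AUTHORIZED_NAMES = {"Austen, Jane, 1775-1817.", "Woolf, Virginia, 1882-1941."}

      def unauthorized_headings(marc_path):
          """Yield 100-field headings (subfields $a and $d) not found in the authority list."""
          with open(marc_path, "rb") as fh:
              for record in MARCReader(fh):
                  for field in record.get_fields("100"):
                      heading = " ".join(field.get_subfields("a", "d")).strip()
                      if heading and heading not in AUTHORIZED_NAMES:
                          yield heading

      for name in unauthorized_headings("bibliographic_records.mrc"):  # illustrative file name
          print("needs review:", name)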
  6. Hawk, J.: OCLC SiteSearch (1998) 0.08
    0.08366957 = product of:
      0.3346783 = sum of:
        0.3346783 = weight(_text_:java in 3079) [ClassicSimilarity], result of:
          0.3346783 = score(doc=3079,freq=4.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.7708211 = fieldWeight in 3079, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3079)
      0.25 = coord(1/4)
    
    Abstract
    Feature on OCLC's SiteSearch suite of software, first introduced in 1992, and how it is helping over 250 libraries integrate and manage their electronic library collections. Describes the new features of version 4.0, released in Apr 1997, which include a new interface, Java based architecture, and an online documentation and training site. Gives an account of how Java is helping the Georgia Library Learning Online (GALILEO) project to keep pace on the WWW; the use of SiteSearch by libraries to customize their interface to electronic resources; and gives details of Project Athena (Assessing Technological Horizons to Educate the Nashville Area), which is using OCLC SiteSearch to allow area library users to search the holdings of public and university libraries simultaneously
  7. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.08
    0.08366957 = product of:
      0.3346783 = sum of:
        0.3346783 = weight(_text_:java in 2673) [ClassicSimilarity], result of:
          0.3346783 = score(doc=2673,freq=4.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.7708211 = fieldWeight in 2673, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=2673)
      0.25 = coord(1/4)
    
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib.
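    The WWLib classifier itself is written in Java and its rules are not given in the abstract; the Python fragment below is only a schematic of the general keyword-matching idea behind automatic DDC assignment, with invented keyword profiles:

      # Toy keyword profiles for a few DDC classes (illustrative only, not WWLib's rules).
      DDC_PROFILES = {
          "004": {"computer", "software", "internet", "web"},
          "020": {"library", "cataloguing", "classification", "bibliography"},
          "510": {"mathematics", "algebra", "geometry"},
      }

      def classify(text):
          """Return the DDC class whose keyword profile overlaps the text most, or None."""
          words = set(text.lower().split())
          overlaps = {ddc: len(words & keywords) for ddc, keywords in DDC_PROFILES.items()}
          best = max(overlaps, key=overlaps.get)
          return best if overlaps[best] > 0 else None

      print(classify("a web search engine for library classification research"))  # -> "020"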
  8. Gee, Q.: Review of script displays of African languages by current software (2005) 0.07
    0.07346691 = product of:
      0.29386765 = sum of:
        0.29386765 = weight(_text_:handles in 3463) [ClassicSimilarity], result of:
          0.29386765 = score(doc=3463,freq=2.0), product of:
            0.5225971 = queryWeight, product of:
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.061608184 = queryNorm
            0.56232166 = fieldWeight in 3463, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.046875 = fieldNorm(doc=3463)
      0.25 = coord(1/4)
    
    Abstract
    All recorded African languages that have a writing system have orthographies which use the Roman or Arabic scripts, with a few exceptions. While Unicode successfully handles the encoding of both these scripts, current software, in particular Web browsers, takes little account of users wishing to operate in a minority script. Their use for displaying African languages has been limited by the availability of facilities and the desire to communicate with the 'world' through major languages such as English and French. There is a need for more use of the indigenous languages to strengthen their language communities and the use of the local scripts in enhancing the learning, teaching, and general use of their own languages by their speaking communities.
  9. Elichirigoity, F.; Malone, C.K.: Measuring the new economy : industrial classification and open source software production (2005) 0.07
    0.07346691 = product of:
      0.29386765 = sum of:
        0.29386765 = weight(_text_:handles in 39) [ClassicSimilarity], result of:
          0.29386765 = score(doc=39,freq=2.0), product of:
            0.5225971 = queryWeight, product of:
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.061608184 = queryNorm
            0.56232166 = fieldWeight in 39, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.046875 = fieldNorm(doc=39)
      0.25 = coord(1/4)
    
    Abstract
    This paper analyzes the way in which the North American Industry Classification System (NAICS) handles the categorization of open source software production, foregrounding theoretical and political aspects of knowledge organization. NAICS is the industry classification scheme used by the governments of Canada, Mexico and the United States to carry out their respective economic censuses. NAICS is considered a rational system that uses the underlying economic principle of similar production processes as the basis for its classes. For the Information Sector of the economy, as formulated in NAICS, a key production process is the acquisition and defense of copyright. With open source, copyleft licensing eliminates copyright acquisition and protection as major production processes, suggesting that the open source software industry warrants a separate NAICS category. More importantly, our analysis suggests that NAICS cannot be understood as a taxonomy of objective economic activity but is instead a politically and historically contingent system of data classification.
  10. Kumpulainen, S.; Keskustalo, H.; Zhang, B.; Stefanidis, K.: Historical reasoning in authentic research tasks : mapping cognitive and document spaces (2020) 0.07
    0.07346691 = product of:
      0.29386765 = sum of:
        0.29386765 = weight(_text_:handles in 621) [ClassicSimilarity], result of:
          0.29386765 = score(doc=621,freq=2.0), product of:
            0.5225971 = queryWeight, product of:
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.061608184 = queryNorm
            0.56232166 = fieldWeight in 621, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.046875 = fieldNorm(doc=621)
      0.25 = coord(1/4)
    
    Abstract
    To support historians in their work, we need to understand their work-related needs and propose what is required to support those needs. Although the quantity of digitized historical documents available is increasing, historians' ways of working with digital documents have not been widely studied, particularly in authentic work settings. To better support historians' reasoning processes, we investigate history researchers' work tasks as the context of information interaction and examine their cognitive access points into information. The analysis is based on longitudinal observational research and interviews in a task-based research setting. Based on these findings in the historians' cognitive space, we build bridges into the document space. By studying information interactions in real task contexts, we facilitate the provision of task-specific handles into documents that can be used in designing digital research tools for historians.
  11. Juhne, J.; Jensen, A.T.; Gronbaek, K.: Ariadne: a Java-based guided tour system for the World Wide Web (1998) 0.07
    0.07171678 = product of:
      0.2868671 = sum of:
        0.2868671 = weight(_text_:java in 4593) [ClassicSimilarity], result of:
          0.2868671 = score(doc=4593,freq=4.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.6607038 = fieldWeight in 4593, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=4593)
      0.25 = coord(1/4)
    
    Abstract
    Presents a guided tour system for the WWW, called Ariadne, which implements the ideas of trails and guided tours originating from the hypertext field. Ariadne appears as a Java applet to the user and it stores guided tours in a database format separated from the WWW documents included in the tour. Its main advantages are: an independent user interface which does not affect the layout of the documents that form part of the tour; branching tours where the user may follow alternative routes; composition of existing tours into aggregate tours; an overview map indicating which parts of a tour have been visited; and support for getting back on track. Ariadne is available as a research prototype, and it has been tested among a group of university students as well as casual users on the Internet.
  12. Reed, D.: Essential HTML fast (1997) 0.07
    0.067615226 = product of:
      0.2704609 = sum of:
        0.2704609 = weight(_text_:java in 6851) [ClassicSimilarity], result of:
          0.2704609 = score(doc=6851,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.62291753 = fieldWeight in 6851, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=6851)
      0.25 = coord(1/4)
    
    Abstract
    This book provides a quick, concise guide to the issues surrounding the preparation of a well-designed, professional web site using HTML. Topics covered include: how to plan your web site effectively; effective use of hypertext, images, audio and video; layout techniques using tables and lists; and how to use style sheets, font sizes and plans for mathematical equation mark-up. Integration of CGI scripts, Java and ActiveX into your web site is also discussed.
  13. Lord Wodehouse: ¬The Intranet : the quiet (r)evolution (1997) 0.07
    0.067615226 = product of:
      0.2704609 = sum of:
        0.2704609 = weight(_text_:java in 171) [ClassicSimilarity], result of:
          0.2704609 = score(doc=171,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.62291753 = fieldWeight in 171, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=171)
      0.25 = coord(1/4)
    
    Abstract
    Explains how the Intranet (in effect an Internet limited to the computer systems of a single organization) developed out of the Internet, and what its uses and advantages are. Focuses on the Intranet developed in the Glaxo Wellcome organization. Briefly discusses a number of technologies in development, e.g. Java, Real audio, 3D and VRML, and summarizes the issues involved in the successful development of the Intranet, that is, bandwidth, searching tools, security, and legal issues
  14. Wang, J.; Reid, E.O.F.: Developing WWW information systems on the Internet (1996) 0.07
    0.067615226 = product of:
      0.2704609 = sum of:
        0.2704609 = weight(_text_:java in 604) [ClassicSimilarity], result of:
          0.2704609 = score(doc=604,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.62291753 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=604)
      0.25 = coord(1/4)
    
    Abstract
    Gives an overview of Web information system development. Discusses some basic concepts and technologies such as HTML, HTML FORM, CGI and Java, which are associated with developing WWW information systems. Further discusses the design and implementation of Virtual Travel Mart, a Web based end user oriented travel information system. Finally, addresses some issues in developing WWW information systems
  15. Ameritech releases Dynix WebPac on NT (1998) 0.07
    0.067615226 = product of:
      0.2704609 = sum of:
        0.2704609 = weight(_text_:java in 2782) [ClassicSimilarity], result of:
          0.2704609 = score(doc=2782,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.62291753 = fieldWeight in 2782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=2782)
      0.25 = coord(1/4)
    
    Abstract
    Ameritech Library Services has released Dynix WebPac on NT, which provides access to a Dynix catalogue from any Java compatible Web browser. Users can place holds, cancel and postpone holds, view and renew items on loan and sort and limit search results from the Web. Describes some of the other features of Dynix WebPac
  16. OCLC completes SiteSearch 4.0 field test (1998) 0.07
    0.067615226 = product of:
      0.2704609 = sum of:
        0.2704609 = weight(_text_:java in 3078) [ClassicSimilarity], result of:
          0.2704609 = score(doc=3078,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.62291753 = fieldWeight in 3078, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=3078)
      0.25 = coord(1/4)
    
    Abstract
    OCLC has announced that 6 library systems have completed field tests of the OCLC SiteSearch 4.0 suite of software, paving the way for its release. Traces the beta site testing programme from its beginning in November 1997 and notes that OCLC SiteServer components have been written in the Java programming language, which will increase libraries' ability to extend the functionality of the SiteSearch software to create new features specific to local needs.
  17. Robinson, D.A.; Lester, C.R.; Hamilton, N.M.: Delivering computer assisted learning across the WWW (1998) 0.07
    0.067615226 = product of:
      0.2704609 = sum of:
        0.2704609 = weight(_text_:java in 4618) [ClassicSimilarity], result of:
          0.2704609 = score(doc=4618,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.62291753 = fieldWeight in 4618, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=4618)
      0.25 = coord(1/4)
    
    Abstract
    Demonstrates a new method of providing networked computer assisted learning to avoid the pitfalls of traditional methods. This was achieved using Web pages enhanced with Java applets, MPEG video clips and Dynamic HTML
  18. Bates, C.: Web programming : building Internet applications (2000) 0.07
    0.067615226 = product of:
      0.2704609 = sum of:
        0.2704609 = weight(_text_:java in 130) [ClassicSimilarity], result of:
          0.2704609 = score(doc=130,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.62291753 = fieldWeight in 130, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=130)
      0.25 = coord(1/4)
    
    Object
    Java
  19. Zschunke, P.: Richtig googeln : Ein neues Buch hilft, alle Möglichkeiten der populären Suchmaschine zu nutzen (2003) 0.07
    0.0673691 = product of:
      0.1347382 = sum of:
        0.10142284 = weight(_text_:java in 55) [ClassicSimilarity], result of:
          0.10142284 = score(doc=55,freq=2.0), product of:
            0.43418413 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061608184 = queryNorm
            0.23359407 = fieldWeight in 55, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0234375 = fieldNorm(doc=55)
        0.033315368 = weight(_text_:und in 55) [ClassicSimilarity], result of:
          0.033315368 = score(doc=55,freq=22.0), product of:
            0.13664074 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.061608184 = queryNorm
            0.24381724 = fieldWeight in 55, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0234375 = fieldNorm(doc=55)
      0.5 = coord(2/4)
    
    Content
    "Fünf Jahre nach seiner Gründung ist Google zum Herz des weltweiten Computernetzes geworden. Mit seiner Konzentration aufs Wesentliche hat die Suchmaschine alle anderen Anbieter weit zurück gelassen. Aber Google kann viel mehr, als im Web nach Texten und Bildern zu suchen. Gesammelt und aufbereitet werden auch Beiträge in Diskussionsforen (Newsgroups), aktuelle Nachrichten und andere im Netz verfügbare Informationen. Wer sich beim "Googeln" darauf beschränkt, ein einziges Wort in das Suchformular einzutippen und dann die ersten von oft mehreren hunderttausend Treffern anzuschauen, nutzt nur einen winzigen Bruchteil der Möglichkeiten. Wie man Google bis zum letzten ausreizt, haben Tara Calishain und Rael Dornfest in einem bislang nur auf Englisch veröffentlichten Buch dargestellt (Tara Calishain/Rael Dornfest: Google Hacks", www.oreilly.de, 28 Euro. Die wichtigsten Praxistipps kosten als Google Pocket Guide 12 Euro). - Suchen mit bis zu zehn Wörtern - Ihre "100 Google Hacks" beginnen mit Google-Strategien wie der Kombination mehrerer Suchbegriffe und enden mit der Aufforderung zur eigenen Nutzung der Google API ("Application Programming Interface"). Diese Schnittstelle kann zur Entwicklung von eigenen Programmen eingesetzt werden,,die auf die Google-Datenbank mit ihren mehr als drei Milliarden Einträgen zugreifen. Ein bewussteres Suchen im Internet beginnt mit der Kombination mehrerer Suchbegriffe - bis zu zehn Wörter können in das Formularfeld eingetippt werden, welche Google mit dem lo-gischen Ausdruck "und" verknüpft. Diese Standardvorgabe kann mit einem dazwischen eingefügten "or" zu einer Oder-Verknüpfung geändert werden. Soll ein bestimmter Begriff nicht auftauchen, wird ein Minuszeichen davor gesetzt. Auf diese Weise können bei einer Suche etwa alle Treffer ausgefiltert werden, die vom Online-Buchhändler Amazon kommen. Weiter gehende Syntax-Anweisungen helfen ebenfalls dabei, die Suche gezielt einzugrenzen: Die vorangestellte Anweisung "intitle:" etwa (ohne Anführungszeichen einzugeben) beschränkt die Suche auf all diejenigen Web-Seiten, die den direkt danach folgenden Begriff in ihrem Titel aufführen. Die Computer von Google bewältigen täglich mehr als 200 Millionen Anfragen. Die Antworten kommen aus einer Datenbank, die mehr als drei Milliarden Einträge enthält und regelmäßig aktualisiert wird. Dazu Werden SoftwareRoboter eingesetzt, so genannte "Search-Bots", die sich die Hyperlinks auf Web-Seiten entlang hangeln und für jedes Web-Dokument einen Index zur Volltextsuche anlegen. Die Einnahmen des 1998 von Larry Page und Sergey Brin gegründeten Unternehmens stammen zumeist von Internet-Portalen, welche die GoogleSuchtechnik für ihre eigenen Dienste übernehmen. Eine zwei Einnahmequelle ist die Werbung von Unternehmen, die für eine optisch hervorgehobene Platzierung in den GoogleTrefferlisten zahlen. Das Unternehmen mit Sitz im kalifornischen Mountain View beschäftigt rund 800 Mitarbeiter. Der Name Google leitet sich ab von dem Kunstwort "Googol", mit dem der amerikanische Mathematiker Edward Kasner die unvorstellbar große Zahl 10 hoch 100 (eine 1 mit hundert Nullen) bezeichnet hat. Kommerzielle Internet-Anbieter sind sehr, daran interessiert, auf den vordersten Plätzen einer Google-Trefferliste zu erscheinen.
    Da Google im Unterschied zu Yahoo oder Lycos nie ein auf möglichst viele Besuche angelegtes Internet-Portal werden wollte, ist die Suche in der Datenbank auch außerhalb der Google-Web-Site möglich. Dafür gibt es zunächst die "Google Toolbar" für den Internet Explorer, mit der dieser Browser eine eigene Leiste, für die Google-Suche erhält. Freie Entwickler bieten im Internet eine eigene Umsetzung: dieses Werkzeugs auch für den Netscape/ Mozilla-Browser an. Daneben kann ein GoogleSucheingabefeld aber auch auf die eigene WebSeite platziert werden - dazu sind nur vier Zei-len HTML-Code nötig. Eine Google-Suche zu starten, ist übrigens auch ganz ohne Browser möglich. Dazu hat das Unternehmen im Aprilvergangenen Jahres die API ("Application Programming Interface") frei gegeben, die in eigene Programme' eingebaut wird. So kann man etwa eine Google-Suche mit einer E-Mail starten: Die Suchbegriffe werden in die Betreff Zeile einer ansonsten leeren EMail eingetragen, die an die Adresse google@capeclear.com geschickt wird. Kurz danach trifft eine automatische Antwort-Mail mit den ersten zehn Treffern ein. Die entsprechenden Kenntnisse vorausgesetzt, können Google-Abfragen auch in Web-Services eingebaut werden - das sind Programme, die Daten aus dem Internet verarbeiten. Als Programmiertechniken kommen dafür Perl, PHP, Python oder Java in Frage. Calishain und Dornfest stellen sogar eine Reihe von abgedrehten Sites vor, die solche Programme für abstrakte Gedichte oder andere Kunstwerke einsetzen."
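    To make the operator syntax mentioned above concrete, here is a small Python sketch that assembles a Google web-search URL from several terms (implicitly ANDed), an excluded term and an intitle: restriction; only the public q parameter of www.google.com/search is used, and the example terms are our own:

      from urllib.parse import urlencode

      def google_url(*terms):
          """Join search terms (Google ANDs them by default) into a web-search URL."""
          return "https://www.google.com/search?" + urlencode({"q": " ".join(terms)})

      # Pages with "retrieval" in the title, also containing "thesaurus",
      # excluding hits that mention amazon.
      print(google_url("intitle:retrieval", "thesaurus", "-amazon"))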
  20. Widyantoro, D.H.; Ioerger, T.R.; Yen, J.: Learning user interest dynamics with a three-descriptor representation (2001) 0.06
    0.06122243 = product of:
      0.24488972 = sum of:
        0.24488972 = weight(_text_:handles in 185) [ClassicSimilarity], result of:
          0.24488972 = score(doc=185,freq=2.0), product of:
            0.5225971 = queryWeight, product of:
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.061608184 = queryNorm
            0.4686014 = fieldWeight in 185, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.482592 = idf(docFreq=24, maxDocs=44421)
              0.0390625 = fieldNorm(doc=185)
      0.25 = coord(1/4)
    
    Abstract
    The use of documents ranked high by user feedback to profile user interests is commonly done with Rocchio's algorithm, which uses a single list of attribute-value pairs, called a descriptor, to carry term value weights for an individual. Negative feedback on old preferences or positive feedback on new preferences adjusts the descriptor at a fixed, predetermined, and often slow pace. Widyantoro et al. suggest a three-descriptor model which adds two short-term interest descriptors, one each for positive and negative feedback. User short-term interest in a particular document is computed by subtracting the similarity measure with the negative descriptor from the similarity measure with the positive descriptor. Using a constant to represent the desired impact of long- and short-term interests, these values may be summed for a single interest value. Using the Reuters-21578 1.0 test collection split into training and test sets, topics with at least 100 documents in a tight cluster were chosen. The TDR handles change well, showing better recovery speed and accuracy than the single-descriptor model. The nearest-neighbor update strategy appears to keep the category concept relatively consistent when multiple TDRs are used.
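    A minimal numeric sketch of the interest computation as the abstract describes it, assuming cosine similarity over term-weight vectors and a single mixing constant c (the actual TDR update rules and parameter settings are in the paper and are not reproduced here):

      import numpy as np

      def cos(a, b):
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      def interest(doc, long_term, positive, negative, c=0.5):
          """Blend long-term interest with short-term interest
          (positive-descriptor similarity minus negative-descriptor similarity)."""
          short_term = cos(doc, positive) - cos(doc, negative)
          return c * cos(doc, long_term) + (1.0 - c) * short_term

      doc = np.array([0.9, 0.1, 0.0])
      print(interest(doc,
                     long_term=np.array([0.7, 0.3, 0.0]),
                     positive=np.array([1.0, 0.0, 0.0]),
                     negative=np.array([0.0, 0.0, 1.0])))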

Languages

  • d 32
  • m 3
  • nl 1

Types

  • a 794
  • m 310
  • el 107
  • s 93
  • i 21
  • n 17
  • x 12
  • r 10
  • b 7
  • ? 1
  • v 1
