Search (1218 results, page 2 of 61)

  • Filter: language_ss:"e"
  1. Rousseau, R.: Robert Fairthorne and the empirical power laws (2005) 0.09
    0.093597345 = product of:
      0.37438938 = sum of:
        0.37438938 = weight(_text_:hyperbolic in 5398) [ClassicSimilarity], result of:
          0.37438938 = score(doc=5398,freq=2.0), product of:
            0.5421551 = queryWeight, product of:
              8.928879 = idf(docFreq=15, maxDocs=44421)
              0.060719278 = queryNorm
            0.6905577 = fieldWeight in 5398, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.928879 = idf(docFreq=15, maxDocs=44421)
              0.0546875 = fieldNorm(doc=5398)
      0.25 = coord(1/4)
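
    The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown, and the same pattern repeats in every result below: each matching clause scores queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(termFreq) * idf * fieldNorm, and the clause sum is scaled by the coordination factor coord(matching clauses / total clauses). A minimal sketch that recomputes this record's score from the constants shown above (the class and method names are illustrative, not part of Lucene's API):

        // Recomputes the ClassicSimilarity score shown in the explain tree for doc 5398.
        public class ExplainCheck {
            public static void main(String[] args) {
                double freq = 2.0;                         // termFreq of "hyperbolic"
                double idf = 8.928879;                     // ln(44421 / (15 + 1)) + 1
                double queryNorm = 0.060719278;
                double fieldNorm = 0.0546875;              // encodes field-length normalization
                double coord = 1.0 / 4.0;                  // 1 of 4 query clauses matched

                double tf = Math.sqrt(freq);               // 1.4142135
                double queryWeight = idf * queryNorm;      // 0.5421551
                double fieldWeight = tf * idf * fieldNorm; // 0.6905577

                // ~0.093597345; Lucene works in float precision, so last digits may differ
                System.out.printf("score = %.9f%n", queryWeight * fieldWeight * coord);
            }
        }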
    
    Abstract
    Purpose - Aims to review Fairthorne's classic article "Empirical hyperbolic distributions (Bradford-Zipf-Mandelbrot) for bibliometric description and prediction" (Journal of Documentation, Vol. 25, pp. 319-343, 1969), as part of a series marking the Journal of Documentation's 60th anniversary. Design/methodology/approach - Analysis of article content, qualitative evaluation of its subsequent impact, citation analysis, and diffusion analysis. Findings - The content of this landmark paper, its further developments, and its influence on the field of informetrics are explained. Originality/value - A review is given of the contents of Fairthorne's original article and its influence on the field of informetrics. Its transdisciplinary reception is measured through a diffusion analysis.
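    The "empirical hyperbolic distributions" of the title are rank-frequency laws of the Zipf-Mandelbrot form. As a reminder (a standard statement of the law, not a quotation from Fairthorne's paper):
      f(r) = \frac{C}{(r + b)^{a}}, \qquad a > 0, \; b \ge 0
    where f(r) is the frequency of the item at rank r; with b = 0 and a = 1 this reduces to Zipf's law, and Bradford's law of scattering is closely related as a cumulative, journal-ranking counterpart.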
  2. Hawk, J.: OCLC SiteSearch (1998) 0.08
    0.082462355 = product of:
      0.32984942 = sum of:
        0.32984942 = weight(_text_:java in 3079) [ClassicSimilarity], result of:
          0.32984942 = score(doc=3079,freq=4.0), product of:
            0.42791957 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.060719278 = queryNorm
            0.7708211 = fieldWeight in 3079, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3079)
      0.25 = coord(1/4)
    
    Abstract
    Feature on OCLC's SiteSearch suite of software, first introduced in 1992, and how it is helping over 250 libraries integrate and manage their electronic library collections. Describes the new features of version 4.0, released in Apr 1997, which include a new interface, a Java-based architecture, and an online documentation and training site. Gives an account of how Java is helping the Georgia Library Learning Online (GALILEO) project to keep pace on the WWW; the use of SiteSearch by libraries to customize their interface to electronic resources; and gives details of Project Athena (Assessing Technological Horizons to Educate the Nashville Area), which is using OCLC SiteSearch to allow area library users to search the holdings of public and university libraries simultaneously
  3. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.08
    0.082462355 = product of:
      0.32984942 = sum of:
        0.32984942 = weight(_text_:java in 2673) [ClassicSimilarity], result of:
          0.32984942 = score(doc=2673,freq=4.0), product of:
            0.42791957 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.060719278 = queryNorm
            0.7708211 = fieldWeight in 2673, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=2673)
      0.25 = coord(1/4)
    
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib
  4. Barton, P.: A missed opportunity : why the benefits of information visualisation seem still out of sight (2005) 0.08
    0.080226295 = product of:
      0.32090518 = sum of:
        0.32090518 = weight(_text_:hyperbolic in 2293) [ClassicSimilarity], result of:
          0.32090518 = score(doc=2293,freq=2.0), product of:
            0.5421551 = queryWeight, product of:
              8.928879 = idf(docFreq=15, maxDocs=44421)
              0.060719278 = queryNorm
            0.5919066 = fieldWeight in 2293, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.928879 = idf(docFreq=15, maxDocs=44421)
              0.046875 = fieldNorm(doc=2293)
      0.25 = coord(1/4)
    
    Object
    Hyperbolic tree
  5. Juhne, J.; Jensen, A.T.; Gronbaek, K.: Ariadne: a Java-based guided tour system for the World Wide Web (1998) 0.07
    0.07068202 = product of:
      0.28272808 = sum of:
        0.28272808 = weight(_text_:java in 4593) [ClassicSimilarity], result of:
          0.28272808 = score(doc=4593,freq=4.0), product of:
            0.42791957 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.060719278 = queryNorm
            0.6607038 = fieldWeight in 4593, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=4593)
      0.25 = coord(1/4)
    
    Abstract
    Presents a guided tour system for the WWW, called Ariadne, which implements the ideas of trails and guided tours originating from the hypertext field. Ariadne appears as a Java applet to the user, and it stores guided tours in a database format separate from the WWW documents included in the tour. Its main advantages are: an independent user interface which does not affect the layout of the documents that are part of the tour; branching tours where the user may follow alternative routes; composition of existing tours into aggregate tours; an overview map indicating which parts of a tour have been visited; and support for getting back on track. Ariadne is available as a research prototype, and it has been tested among a group of university students as well as casual users on the Internet
  6. Dodge, M.: What does the Internet look like, Jellyfish perhaps? : Exploring a visualization of the Internet by Young Hyun of CAIDA (2001) 0.07
    0.06685525 = product of:
      0.267421 = sum of:
        0.267421 = weight(_text_:hyperbolic in 2554) [ClassicSimilarity], result of:
          0.267421 = score(doc=2554,freq=8.0), product of:
            0.5421551 = queryWeight, product of:
              8.928879 = idf(docFreq=15, maxDocs=44421)
              0.060719278 = queryNorm
            0.49325553 = fieldWeight in 2554, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              8.928879 = idf(docFreq=15, maxDocs=44421)
              0.01953125 = fieldNorm(doc=2554)
      0.25 = coord(1/4)
    
    Content
    "The Internet is often likened to an organic entity and this analogy seems particularly appropriate in the light of some striking new visualizations of the complex mesh of Internet pathways. The images are results of a new graph visualization tool, code-named Walrus, being developed by researcher, Young Hyun, at the Cooperative Association for Internet Data Analysis (CAIDA) [1]. Although Walrus is still in early days of development, I think these preliminary results are some of the most intriguing and evocative images of the Internet's structure that we have seen in last year or two. A few years back I spent an enjoyable afternoon at the Monterey Bay Aquarium and I particularly remember a stunning exhibit of jellyfish, which were illuminated with UV light to show their incredibly delicate organic structures, gently pulsing in tanks of inky black water. Jellyfish are some of the strangest, alien, and yet most beautiful, living creatures [2]. Having looked at the Walrus images I began to wonder, perhaps the backbone networks of the Internet look like jellyfish? The image above is a screengrab of a Walrus visualization of a huge graph. The graph data in this particular example depicts Internet topology, as measured by CAIDA's skitter monitor [3] based in London, showing 535,000-odd Internet nodes and over 600,000 links. The nodes, represented by the yellow dots, are a large sample of computers from across the whole range of Internet addresses. Walrus is an interactive visualization tool that allows the analyst to view massive graphs from any position. The graph is projected inside a 3D sphere using a special kind of space based hyperbolic geometry. This is a non-Euclidean space, which has useful distorting properties of making elements at the center of the display much larger than those on the periphery. You interact with the graph in Walrus by selecting a node of interest, which is smoothly moved into the center of the display, and that region of the graph becomes greatly enlarged, enabling you to focus on the fine detail. Yet the rest of the graph remains visible, providing valuable context of the overall structure. (There are some animations available on the website showing Walrus graphs being moved, which give some sense of what this is like.) Hyperbolic space projection is commonly know as "focus+context" in the field of information visualization and has been used to display all kinds of data that can be represented as large graphs in either two and three dimensions [4]. It can be thought of as a moveable fish-eye lens. The Walrus visualization tool draws much from the hyperbolic research by Tamara Munzner [5] as part of her PhD at Stanford. (Map of the Month examined some of Munzner's work from 1996 in an earlier article, Internet Arcs Around The Globe.) Walrus is being developed as a general-purpose visualization tool able to cope with massive directed graphs, in the order of a million nodes. Providing useful and interactively useable visualization of such large volumes of graph data is a tough challenge and is particularly apposite to the task of mapping of Internet backbone infrastructures. In a recent email Map of the Month asked Walrus developer Young Hyun what had been the hardest part of the project thus far. "The greatest difficulty was in determining precisely what Walrus should be about," said Hyun. Crucially "... we had to face the question of what it means to visualize a large graph. 
It would defeat the aim of a visualization to overload a user with the large volume of data that is likely to be associated with a large graph." I think the preliminary results available show that Walrus is heading in right direction tackling these challenges.
    However, Hyun points out that it is still early days and over the next six months or so Walrus will be extended to include core functions beyond just visualizing raw topology graphs. For CAIDA, it is important to see performance measurements associated with the links; as Hyun notes, "you can imagine how important this is to our visualizations, given that we are almost never interested in the mere topology of a network." Walrus has not revealed much new scientific knowledge of the Internet thus far, although Hyun commented that the current visualization of topology "did make it easy to see the degree to which the network is in tangles how some nodes form large clusters and how they are seemingly interconnected in random ways." This random connectedness is perhaps what gives the Internet its organic look and feel. Of course this is not real shape of the Internet. One must always be wary when viewing and interpreting any particular graph visualization as much of the final "look and feel" results from subjective decisions of the analyst, rather than inherent in the underlying phenomena. As Hyun pointed out, "... the organic quality of the images derives almost entirely from the particular combination of the layout algorithm used and hyperbolic distortion." There is no inherently "natural" shape when visualizing massive data, such as the topology of the global Internet, in an abstract space. Somewhat like a jellyfish, maybe? ----
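    As a concrete anchor for the "focus+context" idea described above: in the Poincaré model of hyperbolic space (a standard formulation; Walrus's actual projection may differ in detail), a node at hyperbolic distance d from the current focus is drawn at Euclidean radius
      r = \tanh(d/2), \qquad 0 \le r < 1
    so nodes near the focus are placed almost linearly, while the rest of the graph is compressed exponentially toward the unit boundary, where it remains visible as context.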
  7. Hood, W.W.; Wilson, C.S.: Overlap in bibliographic databases (2003) 0.07
    0.06685525 = product of:
      0.267421 = sum of:
        0.267421 = weight(_text_:hyperbolic in 2868) [ClassicSimilarity], result of:
          0.267421 = score(doc=2868,freq=2.0), product of:
            0.5421551 = queryWeight, product of:
              8.928879 = idf(docFreq=15, maxDocs=44421)
              0.060719278 = queryNorm
            0.49325553 = fieldWeight in 2868, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.928879 = idf(docFreq=15, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2868)
      0.25 = coord(1/4)
    
    Abstract
    Bibliographic databases contain surrogates for a particular subset of the complete set of literature; some databases are very narrow in their scope, while others are multidisciplinary. These databases overlap in their coverage of the literature to a greater or lesser extent. The topic of Fuzzy Set Theory is examined to determine the overlap of coverage in the databases that index this topic. It was found that about 63% of records in the data set are unique to only one database, and the remaining 37% are duplicated in from two to 12 different databases. The overlap distribution is found to conform to a Lotka-type plot. The records with maximum overlap are identified; however, further work is needed to determine the significance of the high level of overlap in these records. The unique records are plotted using a Bradford-type form of data presentation and are found to conform (visually) to a hyperbolic distribution. The extent and causes of intra-database duplication (records duplicated within the one database) are also examined. Finally, the overlap among the top databases in the data set was examined, and a high correlation was found between overlapping records and overlapping DIALOG OneSearch categories.
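    For reference, a "Lotka-type plot" means the data follow an inverse-power law (this reading is an interpretation of the abstract, not a formula quoted from the paper):
      f(n) = \frac{C}{n^{\alpha}}
    where f(n) would here be the number of records indexed by exactly n databases; in Lotka's original setting of author productivity, \alpha \approx 2.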
  8. Baeza-Yates, R.; Boldi, P.; Castillo, C.: Generalizing PageRank : damping functions for linkbased ranking algorithms (2006) 0.07
    0.06685525 = product of:
      0.267421 = sum of:
        0.267421 = weight(_text_:hyperbolic in 3565) [ClassicSimilarity], result of:
          0.267421 = score(doc=3565,freq=2.0), product of:
            0.5421551 = queryWeight, product of:
              8.928879 = idf(docFreq=15, maxDocs=44421)
              0.060719278 = queryNorm
            0.49325553 = fieldWeight in 3565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.928879 = idf(docFreq=15, maxDocs=44421)
              0.0390625 = fieldNorm(doc=3565)
      0.25 = coord(1/4)
    
    Abstract
    This paper introduces a family of link-based ranking algorithms that propagate page importance through links. In these algorithms there is a damping function that decreases with distance, so a direct link implies more endorsement than a link through a long path. PageRank is the most widely known ranking function of this family. The main objective of this paper is to determine whether this family of ranking techniques has some interest per se, and how different choices for the damping function impact rank quality and convergence speed. Even though our results suggest that PageRank can be approximated with other, simpler forms of ranking that may be computed more efficiently, our focus is of a more speculative nature, in that it aims at separating the kernel of PageRank, that is, link-based importance propagation, from the way propagation decays over paths. We focus on three damping functions, having linear, exponential, and hyperbolic decay on the lengths of the paths. The exponential decay corresponds to PageRank, and the other functions are new. Our presentation includes algorithms, analysis, comparisons and experiments that study their behavior under different parameters in real Web graph data. Among other results, we show how to calculate a linear approximation that induces a page ordering that is almost identical to PageRank's using a fixed small number of iterations; comparisons were performed using Kendall's tau on large domain datasets.
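    The family sketched in the abstract can be written as a single damped sum over path lengths (a reconstruction from the abstract; the symbols are illustrative):
      \mathbf{r}^{T} = \sum_{t=0}^{\infty} \mathrm{damping}(t)\, \mathbf{v}^{T} P^{t}
    where P is the row-normalized link matrix and \mathbf{v} a preference vector. Exponential decay, \mathrm{damping}(t) = (1-\alpha)\alpha^{t}, recovers PageRank; the linear and hyperbolic variants replace this with linearly and hyperbolically decaying weights, so endorsement fades differently over long paths.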
  9. Reed, D.: Essential HTML fast (1997) 0.07
    0.06663965 = product of:
      0.2665586 = sum of:
        0.2665586 = weight(_text_:java in 6851) [ClassicSimilarity], result of:
          0.2665586 = score(doc=6851,freq=2.0), product of:
            0.42791957 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.060719278 = queryNorm
            0.62291753 = fieldWeight in 6851, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=6851)
      0.25 = coord(1/4)
    
    Abstract
    This book provides a quick, concise guide to the issues surrounding the preparation of a well-designed, professional web site using HTML. Topics covered include: how to plan your web site effectively; effective use of hypertext, images, audio and video; layout techniques using tables and lists; how to use style sheets and font sizes; and plans for mathematical equation markup. Integration of CGI scripts, Java and ActiveX into your web site is also discussed
  10. Lord Wodehouse: The Intranet : the quiet (r)evolution (1997) 0.07
    0.06663965 = product of:
      0.2665586 = sum of:
        0.2665586 = weight(_text_:java in 171) [ClassicSimilarity], result of:
          0.2665586 = score(doc=171,freq=2.0), product of:
            0.42791957 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.060719278 = queryNorm
            0.62291753 = fieldWeight in 171, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=171)
      0.25 = coord(1/4)
    
    Abstract
    Explains how the Intranet (in effect an Internet limited to the computer systems of a single organization) developed out of the Internet, and what its uses and advantages are. Focuses on the Intranet developed in the Glaxo Wellcome organization. Briefly discusses a number of technologies in development, e.g. Java, RealAudio, 3D and VRML, and summarizes the issues involved in the successful development of an Intranet, namely bandwidth, searching tools, security, and legal issues
  11. Wang, J.; Reid, E.O.F.: Developing WWW information systems on the Internet (1996) 0.07
    0.06663965 = product of:
      0.2665586 = sum of:
        0.2665586 = weight(_text_:java in 604) [ClassicSimilarity], result of:
          0.2665586 = score(doc=604,freq=2.0), product of:
            0.42791957 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.060719278 = queryNorm
            0.62291753 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=604)
      0.25 = coord(1/4)
    
    Abstract
    Gives an overview of Web information system development. Discusses some basic concepts and technologies such as HTML, HTML FORM, CGI and Java, which are associated with developing WWW information systems. Further discusses the design and implementation of Virtual Travel Mart, a Web-based, end-user-oriented travel information system. Finally, addresses some issues in developing WWW information systems
  12. Ameritech releases Dynix WebPac on NT (1998) 0.07
    0.06663965 = product of:
      0.2665586 = sum of:
        0.2665586 = weight(_text_:java in 2782) [ClassicSimilarity], result of:
          0.2665586 = score(doc=2782,freq=2.0), product of:
            0.42791957 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.060719278 = queryNorm
            0.62291753 = fieldWeight in 2782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=2782)
      0.25 = coord(1/4)
    
    Abstract
    Ameritech Library Services has released Dynix WebPac on NT, which provides access to a Dynix catalogue from any Java-compatible Web browser. Users can place holds, cancel and postpone holds, view and renew items on loan, and sort and limit search results from the Web. Describes some of the other features of Dynix WebPac
  13. OCLC completes SiteSearch 4.0 field test (1998) 0.07
    0.06663965 = product of:
      0.2665586 = sum of:
        0.2665586 = weight(_text_:java in 3078) [ClassicSimilarity], result of:
          0.2665586 = score(doc=3078,freq=2.0), product of:
            0.42791957 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.060719278 = queryNorm
            0.62291753 = fieldWeight in 3078, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=3078)
      0.25 = coord(1/4)
    
    Abstract
    OCLC has announced that 6 library systems have completed field tests of the OCLC SiteSearch 4.0 suite of software, paving the way for its release. Traces the beta site testing programme from its beginning in November 1997 and notes that OCLC SiteServer components have been written in the Java programming language, which will increase libraries' ability to extend the functionality of the SiteSearch software to create new features specific to local needs
  14. Robinson, D.A.; Lester, C.R.; Hamilton, N.M.: Delivering computer assisted learning across the WWW (1998) 0.07
    0.06663965 = product of:
      0.2665586 = sum of:
        0.2665586 = weight(_text_:java in 4618) [ClassicSimilarity], result of:
          0.2665586 = score(doc=4618,freq=2.0), product of:
            0.42791957 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.060719278 = queryNorm
            0.62291753 = fieldWeight in 4618, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=4618)
      0.25 = coord(1/4)
    
    Abstract
    Demonstrates a new method of providing networked computer assisted learning to avoid the pitfalls of traditional methods. This was achieved using Web pages enhanced with Java applets, MPEG video clips and Dynamic HTML
  15. Bates, C.: Web programming : building Internet applications (2000) 0.07
    0.06663965 = product of:
      0.2665586 = sum of:
        0.2665586 = weight(_text_:java in 130) [ClassicSimilarity], result of:
          0.2665586 = score(doc=130,freq=2.0), product of:
            0.42791957 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.060719278 = queryNorm
            0.62291753 = fieldWeight in 130, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=130)
      0.25 = coord(1/4)
    
    Object
    Java
  16. Zschunke, P.: Richtig googeln : Ein neues Buch hilft, alle Möglichkeiten der populären Suchmaschine zu nutzen (2003) 0.07
    0.06639708 = product of:
      0.13279416 = sum of:
        0.09995948 = weight(_text_:java in 55) [ClassicSimilarity], result of:
          0.09995948 = score(doc=55,freq=2.0), product of:
            0.42791957 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.060719278 = queryNorm
            0.23359407 = fieldWeight in 55, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0234375 = fieldNorm(doc=55)
        0.03283468 = weight(_text_:und in 55) [ClassicSimilarity], result of:
          0.03283468 = score(doc=55,freq=22.0), product of:
            0.13466923 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.060719278 = queryNorm
            0.24381724 = fieldWeight in 55, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0234375 = fieldNorm(doc=55)
      0.5 = coord(2/4)
    
    Content
    "Fünf Jahre nach seiner Gründung ist Google zum Herz des weltweiten Computernetzes geworden. Mit seiner Konzentration aufs Wesentliche hat die Suchmaschine alle anderen Anbieter weit zurück gelassen. Aber Google kann viel mehr, als im Web nach Texten und Bildern zu suchen. Gesammelt und aufbereitet werden auch Beiträge in Diskussionsforen (Newsgroups), aktuelle Nachrichten und andere im Netz verfügbare Informationen. Wer sich beim "Googeln" darauf beschränkt, ein einziges Wort in das Suchformular einzutippen und dann die ersten von oft mehreren hunderttausend Treffern anzuschauen, nutzt nur einen winzigen Bruchteil der Möglichkeiten. Wie man Google bis zum letzten ausreizt, haben Tara Calishain und Rael Dornfest in einem bislang nur auf Englisch veröffentlichten Buch dargestellt (Tara Calishain/Rael Dornfest: Google Hacks", www.oreilly.de, 28 Euro. Die wichtigsten Praxistipps kosten als Google Pocket Guide 12 Euro). - Suchen mit bis zu zehn Wörtern - Ihre "100 Google Hacks" beginnen mit Google-Strategien wie der Kombination mehrerer Suchbegriffe und enden mit der Aufforderung zur eigenen Nutzung der Google API ("Application Programming Interface"). Diese Schnittstelle kann zur Entwicklung von eigenen Programmen eingesetzt werden,,die auf die Google-Datenbank mit ihren mehr als drei Milliarden Einträgen zugreifen. Ein bewussteres Suchen im Internet beginnt mit der Kombination mehrerer Suchbegriffe - bis zu zehn Wörter können in das Formularfeld eingetippt werden, welche Google mit dem lo-gischen Ausdruck "und" verknüpft. Diese Standardvorgabe kann mit einem dazwischen eingefügten "or" zu einer Oder-Verknüpfung geändert werden. Soll ein bestimmter Begriff nicht auftauchen, wird ein Minuszeichen davor gesetzt. Auf diese Weise können bei einer Suche etwa alle Treffer ausgefiltert werden, die vom Online-Buchhändler Amazon kommen. Weiter gehende Syntax-Anweisungen helfen ebenfalls dabei, die Suche gezielt einzugrenzen: Die vorangestellte Anweisung "intitle:" etwa (ohne Anführungszeichen einzugeben) beschränkt die Suche auf all diejenigen Web-Seiten, die den direkt danach folgenden Begriff in ihrem Titel aufführen. Die Computer von Google bewältigen täglich mehr als 200 Millionen Anfragen. Die Antworten kommen aus einer Datenbank, die mehr als drei Milliarden Einträge enthält und regelmäßig aktualisiert wird. Dazu Werden SoftwareRoboter eingesetzt, so genannte "Search-Bots", die sich die Hyperlinks auf Web-Seiten entlang hangeln und für jedes Web-Dokument einen Index zur Volltextsuche anlegen. Die Einnahmen des 1998 von Larry Page und Sergey Brin gegründeten Unternehmens stammen zumeist von Internet-Portalen, welche die GoogleSuchtechnik für ihre eigenen Dienste übernehmen. Eine zwei Einnahmequelle ist die Werbung von Unternehmen, die für eine optisch hervorgehobene Platzierung in den GoogleTrefferlisten zahlen. Das Unternehmen mit Sitz im kalifornischen Mountain View beschäftigt rund 800 Mitarbeiter. Der Name Google leitet sich ab von dem Kunstwort "Googol", mit dem der amerikanische Mathematiker Edward Kasner die unvorstellbar große Zahl 10 hoch 100 (eine 1 mit hundert Nullen) bezeichnet hat. Kommerzielle Internet-Anbieter sind sehr, daran interessiert, auf den vordersten Plätzen einer Google-Trefferliste zu erscheinen.
    Da Google im Unterschied zu Yahoo oder Lycos nie ein auf möglichst viele Besuche angelegtes Internet-Portal werden wollte, ist die Suche in der Datenbank auch außerhalb der Google-Web-Site möglich. Dafür gibt es zunächst die "Google Toolbar" für den Internet Explorer, mit der dieser Browser eine eigene Leiste, für die Google-Suche erhält. Freie Entwickler bieten im Internet eine eigene Umsetzung: dieses Werkzeugs auch für den Netscape/ Mozilla-Browser an. Daneben kann ein GoogleSucheingabefeld aber auch auf die eigene WebSeite platziert werden - dazu sind nur vier Zei-len HTML-Code nötig. Eine Google-Suche zu starten, ist übrigens auch ganz ohne Browser möglich. Dazu hat das Unternehmen im Aprilvergangenen Jahres die API ("Application Programming Interface") frei gegeben, die in eigene Programme' eingebaut wird. So kann man etwa eine Google-Suche mit einer E-Mail starten: Die Suchbegriffe werden in die Betreff Zeile einer ansonsten leeren EMail eingetragen, die an die Adresse google@capeclear.com geschickt wird. Kurz danach trifft eine automatische Antwort-Mail mit den ersten zehn Treffern ein. Die entsprechenden Kenntnisse vorausgesetzt, können Google-Abfragen auch in Web-Services eingebaut werden - das sind Programme, die Daten aus dem Internet verarbeiten. Als Programmiertechniken kommen dafür Perl, PHP, Python oder Java in Frage. Calishain und Dornfest stellen sogar eine Reihe von abgedrehten Sites vor, die solche Programme für abstrakte Gedichte oder andere Kunstwerke einsetzen."
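    A few illustrative queries using the operators described above (the examples are mine, not taken from the book):

        information retrieval -amazon     all terms required; hits mentioning Amazon excluded
        bibliometrics OR informetrics     either term may match
        intitle:metadata                  the term must appear in the page title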
  17. Braeckman, J.: ¬The integration of library information into a campus wide information system (1996) 0.06
    0.058309693 = product of:
      0.23323877 = sum of:
        0.23323877 = weight(_text_:java in 729) [ClassicSimilarity], result of:
          0.23323877 = score(doc=729,freq=2.0), product of:
            0.42791957 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.060719278 = queryNorm
            0.5450528 = fieldWeight in 729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=729)
      0.25 = coord(1/4)
    
    Abstract
    Discusses the development of Campus Wide Information Systems with reference to the work of Leuven University Library. A 4th phase can now be distinguished in the evolution of CWISs as they evolve towards Intranets. WWW technology is applied to organise a consistent interface to different types of information, databases and services within an institution. WWW servers now exist via which queries and query results are translated from the Web environment to the specific database query language and vice versa. The integration of Java will enable programs to be executed from within the Web environment. Describes each phase of CWIS development at KU Leuven
  18. Chang, S.-F.; Smith, J.R.; Meng, J.: Efficient techniques for feature-based image / video access and manipulations (1997) 0.06
    0.058309693 = product of:
      0.23323877 = sum of:
        0.23323877 = weight(_text_:java in 756) [ClassicSimilarity], result of:
          0.23323877 = score(doc=756,freq=2.0), product of:
            0.42791957 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.060719278 = queryNorm
            0.5450528 = fieldWeight in 756, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=756)
      0.25 = coord(1/4)
    
    Abstract
    Describes 2 research projects aimed at studying the parallel issues of image and video indexing, information retrieval and manipulation: VisualSEEK, a content-based image query system and a Java-based WWW application supporting localised colour and spatial similarity retrieval; and CVEPS (Compressed Video Editing and Parsing System), which supports video manipulation with indexing support of individual frames from VisualSEEK and a new hierarchical video browsing and indexing system. In both media forms, these systems address the problem of heterogeneous unconstrained collections
  19. Lo, M.L.: Recent strategies for retrieving chemical structure information on the Web (1997) 0.06
    0.058309693 = product of:
      0.23323877 = sum of:
        0.23323877 = weight(_text_:java in 3611) [ClassicSimilarity], result of:
          0.23323877 = score(doc=3611,freq=2.0), product of:
            0.42791957 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.060719278 = queryNorm
            0.5450528 = fieldWeight in 3611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3611)
      0.25 = coord(1/4)
    
    Abstract
    Discusses various structural searching methods available on the Web. Some databases, such as the Brookhaven Protein Database, use keyword searching, which does not provide the desired substructure search capabilities. Others, like CS ChemFinder and MDL's Chemscape, use graphical plug-in programs. Although plug-in programs provide more capabilities, users first have to obtain a copy of the programs. Due to this limitation, Tripos' WebSketch and ACD Interactive Lab adopt a different approach. Using Java applets, users create and display a structure query of the molecule on the web page without using other software. The new technique is likely to extend itself to other electronic publications
  20. Kirschenbaum, M.: Documenting digital images : textual meta-data at the Blake Archive (1998) 0.06
    0.058309693 = product of:
      0.23323877 = sum of:
        0.23323877 = weight(_text_:java in 4287) [ClassicSimilarity], result of:
          0.23323877 = score(doc=4287,freq=2.0), product of:
            0.42791957 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.060719278 = queryNorm
            0.5450528 = fieldWeight in 4287, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=4287)
      0.25 = coord(1/4)
    
    Abstract
    Describes the work undertaken by the William Blake Archive, Virginia University, to document the metadata tools for handling digital images of illustrations accompanying Blake's work. Images are encoded in both JPEG and TIFF formats. Image Documentation (ID) records are slotted into that portion of the JPEG file reserved for textual metadata. Because the textual content of the ID record now becomes part of the image file itself, the documentary metadata travels with the image even if it is downloaded from one file to another. The metadata is invisible when viewing the image but becomes accessible to users via the 'info' button on the control panel of the Java applet
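    The mechanism this relies on is the JPEG format's provision for textual segments inside the file itself. A minimal sketch of the idea, assuming the text is carried in a standard COM (comment, marker 0xFFFE) segment; the Blake Archive's actual ID record layout is not given in the abstract, so the payload here is purely illustrative:

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;

        public class JpegTextualMetadata {
            // Inserts a COM (0xFFFE) comment segment directly after the SOI marker,
            // so the text travels inside the image file wherever it is copied.
            public static void embed(Path src, Path dst, String text) throws IOException {
                byte[] jpeg = Files.readAllBytes(src);
                if (jpeg.length < 2 || (jpeg[0] & 0xFF) != 0xFF || (jpeg[1] & 0xFF) != 0xD8)
                    throw new IOException("not a JPEG file");
                byte[] payload = text.getBytes("ISO-8859-1");
                int len = payload.length + 2;               // length field counts its own 2 bytes
                if (len > 0xFFFF) throw new IOException("comment too long for one segment");
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                out.write(jpeg, 0, 2);                      // SOI marker
                out.write(0xFF); out.write(0xFE);           // COM marker
                out.write((len >> 8) & 0xFF); out.write(len & 0xFF);
                out.write(payload, 0, payload.length);
                out.write(jpeg, 2, jpeg.length - 2);        // rest of the original file
                Files.write(dst, out.toByteArray());
            }
        }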

Languages

  • d 32
  • m 3
  • nl 1

Types

  • a 780
  • m 309
  • el 106
  • s 92
  • i 21
  • n 17
  • x 13
  • r 10
  • b 7
  • ? 1
  • v 1
