Search (5998 results, page 2 of 300)

  • Filter: language_ss:"e"
  1. Shafer, K.E.; Surface, T.R.: Java Server Side Interpreter and OCLC SiteSearch (2001) 0.12
    0.11598724 = product of:
      0.46394897 = sum of:
        0.46394897 = weight(_text_:java in 2050) [ClassicSimilarity], result of:
          0.46394897 = score(doc=2050,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.9343763 = fieldWeight in 2050, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.09375 = fieldNorm(doc=2050)
      0.25 = coord(1/4)
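    Every hit on this page carries the same kind of Lucene ClassicSimilarity explain tree, so it is worth unpacking once. Recomputing the first result's numbers by hand (all values are taken from the breakdown above; the only outside fact used is Lucene's documented TF-IDF formula):

      \mathrm{idf} = 1 + \ln\frac{\mathrm{maxDocs}}{\mathrm{docFreq}+1} = 1 + \ln\frac{44421}{104+1} \approx 7.0475073
      \mathrm{tf} = \sqrt{\mathrm{freq}} = \sqrt{2.0} \approx 1.4142135
      \mathrm{queryWeight} = \mathrm{idf} \cdot \mathrm{queryNorm} = 7.0475073 \cdot 0.07045517 \approx 0.49653333
      \mathrm{fieldWeight} = \mathrm{tf} \cdot \mathrm{idf} \cdot \mathrm{fieldNorm} = 1.4142135 \cdot 7.0475073 \cdot 0.09375 \approx 0.9343763
      \mathrm{score} = \mathrm{coord} \cdot \mathrm{queryWeight} \cdot \mathrm{fieldWeight} = 0.25 \cdot 0.49653333 \cdot 0.9343763 \approx 0.11598724

    Hits that match more than one query clause (results 5-8 and 15-17 below) sum one such term weight per matched clause and scale by coord(matched/total), e.g. coord(2/4) = 0.5.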
    
  2. Ovid announces strategic partnerships : Java-based interface (1997) 0.11
    0.10935382 = product of:
      0.43741527 = sum of:
        0.43741527 = weight(_text_:java in 397) [ClassicSimilarity], result of:
          0.43741527 = score(doc=397,freq=4.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.8809384 = fieldWeight in 397, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=397)
      0.25 = coord(1/4)
    
    Abstract
    Reports agreements between Ovid Technologies and 5 publishing companies (Blackwell Science, Lippincott-Raven, Munksgaard, Plenum, Williams and Wilkins) to secure the rights to the full text of over 400 leading periodicals. Once the periodicals are loaded on Ovid, they will be linked, along with other full text electronic periodicals, to bibliographic databases to produce a web of related documents and threaded information. Concludes with notes on the Ovid Java Client graphical user interface, which offers increased searching speeds on the WWW
  3. Hawk, J.: OCLC SiteSearch (1998) 0.10
    0.095684595 = product of:
      0.38273838 = sum of:
        0.38273838 = weight(_text_:java in 3079) [ClassicSimilarity], result of:
          0.38273838 = score(doc=3079,freq=4.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.7708211 = fieldWeight in 3079, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3079)
      0.25 = coord(1/4)
    
    Abstract
    Feature on OCLC's SiteSearch suite of software, first introduced in 1992, and how it is helping over 250 libraries integrate and manage their electronic library collections. Describes the new features of version 4.0, released in Apr 1997, which include a new interface, Java-based architecture, and an online documentation and training site. Gives an account of how Java is helping the Georgia Library Learning Online (GALILEO) project to keep pace on the WWW; describes the use of SiteSearch by libraries to customize their interfaces to electronic resources; and gives details of Project Athena (Assessing Technological Horizons to Educate the Nashville Area), which is using OCLC SiteSearch to allow area library users to search the holdings of public and university libraries simultaneously
  4. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.10
    0.095684595 = product of:
      0.38273838 = sum of:
        0.38273838 = weight(_text_:java in 2673) [ClassicSimilarity], result of:
          0.38273838 = score(doc=2673,freq=4.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.7708211 = fieldWeight in 2673, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=2673)
      0.25 = coord(1/4)
    
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib
  5. Herrero-Solana, V.; Moya Anegón, F. de: Graphical Table of Contents (GTOC) for library collections : the application of UDC codes for the subject maps (2003) 0.09
    0.09280376 = product of:
      0.18560752 = sum of:
        0.15464966 = weight(_text_:java in 3758) [ClassicSimilarity], result of:
          0.15464966 = score(doc=3758,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.31145877 = fieldWeight in 3758, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=3758)
        0.030957857 = weight(_text_:have in 3758) [ClassicSimilarity], result of:
          0.030957857 = score(doc=3758,freq=2.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.13935146 = fieldWeight in 3758, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.03125 = fieldNorm(doc=3758)
      0.5 = coord(2/4)
    
    Abstract
    The representation of information contents by graphical maps is an extended ongoing research topic. In this paper we introduce the application of UDC codes for the development of subject maps. We use the following graphic representation methodologies: 1) Multidimensional scaling (MDS), 2) Cluster analysis, 3) Neural networks (Self Organizing Map - SOM). Finally, we draw conclusions about the viability of applying each kind of map. 1. Introduction Advanced techniques for Information Retrieval (IR) currently make up one of the most active areas for research in the field of library and information science. New models representing document content are replacing the classic systems in which the search terms supplied by the user were compared against the indexing terms existing in the inverted files of a database. One of the topics most often studied in recent years is bibliographic browsing, a good complement to querying strategies. Since the 80s, many authors have treated this topic. For example, Ellis establishes that browsing is based on three different types of tasks: identification, familiarization and differentiation (Ellis, 1989). On the other hand, Cove indicates three different browsing types: search browsing, general purpose browsing and serendipity browsing (Cove, 1988). Marcia Bates presents six different types (Bates, 1989), although the classification of Bawden is the one that really interests us: 1) similarity comparison, 2) structure driven, 3) global vision (Bawden, 1993). Global vision browsing implies the use of graphic representations, which we will call map displays, that allow the user to get a global idea of the nature and structure of the information in the database. In the 90s, several authors worked on this research line, developing different types of maps. One of the most active was Xia Lin, who introduced the concept of the Graphical Table of Contents (GTOC), comparing the maps to true tables of contents based on graphic representations (Lin 1996). Lin applied the SOM algorithm to his own personal bibliography, analyzed in terms of the words of the title and abstract fields, and represented in a two-dimensional map (Lin 1997). Later on, Lin applied this type of map to create website GTOCs, through a Java application.
  6. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.09
    0.09280376 = product of:
      0.18560752 = sum of:
        0.15464966 = weight(_text_:java in 709) [ClassicSimilarity], result of:
          0.15464966 = score(doc=709,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.31145877 = fieldWeight in 709, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=709)
        0.030957857 = weight(_text_:have in 709) [ClassicSimilarity], result of:
          0.030957857 = score(doc=709,freq=2.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.13935146 = fieldWeight in 709, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.03125 = fieldNorm(doc=709)
      0.5 = coord(2/4)
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
  7. Rosenfeld, L.; Morville, P.: Information architecture for the World Wide Web : designing large-scale Web sites (1998) 0.09
    0.08681342 = product of:
      0.17362684 = sum of:
        0.13531844 = weight(_text_:java in 1493) [ClassicSimilarity], result of:
          0.13531844 = score(doc=1493,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.2725264 = fieldWeight in 1493, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.02734375 = fieldNorm(doc=1493)
        0.03830839 = weight(_text_:have in 1493) [ClassicSimilarity], result of:
          0.03830839 = score(doc=1493,freq=4.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.17243862 = fieldWeight in 1493, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.02734375 = fieldNorm(doc=1493)
      0.5 = coord(2/4)
    
    Abstract
    Some web sites "work" and some don't. Good web site consultants know that you can't just jump in and start writing HTML, the same way you can't build a house by just pouring a foundation and putting up some walls. You need to know who will be using the site, and what they'll be using it for. You need some idea of what you'd like to draw their attention to during their visit. Overall, you need a strong, cohesive vision for the site that makes it both distinctive and usable. Information Architecture for the World Wide Web is about applying the principles of architecture and library science to web site design. Each web site is like a public building, available for tourists and regulars alike to breeze through at their leisure. The job of the architect is to set up the framework for the site to make it comfortable and inviting for people to visit, relax in, and perhaps even return to someday. Most books on web development concentrate either on the aesthetics or the mechanics of the site. This book is about the framework that holds the two together. With this book, you learn how to design web sites and intranets that support growth, management, and ease of use. Special attention is given to: * The process behind architecting a large, complex site * Web site hierarchy design and organization Information Architecture for the World Wide Web is for webmasters, designers, and anyone else involved in building a web site. It's for novice web designers who, from the start, want to avoid the traps that result in poorly designed sites. It's for experienced web designers who have already created sites but realize that something "is missing" from their sites and want to improve them. It's for programmers and administrators who are comfortable with HTML, CGI, and Java but want to understand how to organize their web pages into a cohesive site. The authors are two of the principals of Argus Associates, a web consulting firm. At Argus, they have created information architectures for web sites and intranets of some of the largest companies in the United States, including Chrysler Corporation, Barron's, and Dow Chemical.
  8. Tennant, R.: Library catalogs : the wrong solution (2003) 0.08
    0.07810134 = product of:
      0.15620267 = sum of:
        0.11598724 = weight(_text_:java in 2558) [ClassicSimilarity], result of:
          0.11598724 = score(doc=2558,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.23359407 = fieldWeight in 2558, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0234375 = fieldNorm(doc=2558)
        0.040215436 = weight(_text_:have in 2558) [ClassicSimilarity], result of:
          0.040215436 = score(doc=2558,freq=6.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.18102285 = fieldWeight in 2558, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.0234375 = fieldNorm(doc=2558)
      0.5 = coord(2/4)
    
    Content
    "MOST INTEGRATED library systems, as they are currently configured and used, should be removed from public view. Before I say why, let me be clean that I think the integrated library system serves a very important, albeit limited, role. An integrated library system should serve as a key piece of the infrastructure of a library, handling such tasks as ma terials acquisition, cataloging (including holdings, of course), and circulation. The integrated library system should be a complete and accurate recording of a local library's holdings. It should not be presented to users as the primary system for locating information. It fails badly at that important job. - Lack of content- The central problem of almost any library catalog system is that it typically includes only information about the books and journals held by a parficular library. Most do not provide access to joumal article indexes, web search engines, or even selective web directories like the Librarians' Index to the Internet. If they do offen such access, it is only via links to these services. The library catalog is far from onestop shopping for information. Although we acknowledge that fact to each other, we still treat it as if it were the best place in the universe to begin a search. Most of us give the catalog a place of great prominente an our web pages. But Information for each book is limited to the author, title, and a few subject headings. Seldom can book reviews, jacket summaries, recommendations, or tables of contents be found-or anything at all to help users determine if they want the material. - Lack of coverage - Most catalogs do not allow patrons to discover even all the books that are available to them. If you're lucky, your catalog may cover the collections of those libraries with which you have close ties-such as a regional network. But that leaves out all those items that could be requested via interlibrary loan. As Steve Coffman pointed out in his "Building Earth's Largest Library" article, we must show our users the universe that is open to them, highlight the items most accessible, and provide an estimate of how long it would take to obtain other items. - Inability to increase coverage - Despite some well-meaning attempts to smash everything of interest into the library catalog, the fact remains that most integrated library systems expect MARC records and MARC records only. This means that whatever we want to put into the catalog must be described using MARC and AACR2 (see "Marc Must Die," LJ 10/15/02, p. 26ff.). This is a barrier to dramatically increasing the scope of a catalog system, even if we decided to do it. How would you, for example, use the Open Archives Initiative Harvesting Protocol to crawl the bibliographic records of remote repositories and make them searchable within your library catalog? It can't be dope, and it shouldn't. The library catalog should be a record of a given library's holdings. Period.
    - User interface hostility - Recently I used the library catalogs of two public libraries, new products from two major library vendors. A link on one catalog said "Knowledge Portal," whatever that was supposed to mean. Clicking on it brought you to two choices: Z39.50 Bibliographic Sites and the World Wide Web. No public library user will have the faintest clue what Z39.50 is. The other catalog launched a Java applet that before long froze my web browser so badly I was forced to shut the program down. Pick a popular book and pretend you are a library patron. Choose three to five libraries at random from the lib-web-cats site (pick catalogs that are not using your system) and attempt to find your book. Try as much as possible to see the system through the eyes of your patrons: a teenager, a retiree, or an older faculty member. You may not always like what you see. Now go back to your own system and try the same thing. - What should the public see? - Our users deserve an information system that helps them find all different kinds of resources (books, articles, web pages, working papers in institutional repositories) and gives them the tools to focus in on what they want. This is not, and should not be, the library catalog. It must communicate with the catalog, but it will also need to interface with other information systems, such as vendor databases and web search engines. What will such a tool look like? We are seeing the beginnings of such a tool in the current offerings of cross-database search tools from a few vendors (see "Cross-Database Search," LJ 10/15/01, p. 29ff). We are in the early stages of developing the kind of robust, user-friendly tool that will be required before we can pull our catalogs from public view. Meanwhile, we can begin by making what we have easier to understand and use."
  9. Reed, D.: Essential HTML fast (1997) 0.08
    0.07732483 = product of:
      0.30929932 = sum of:
        0.30929932 = weight(_text_:java in 6851) [ClassicSimilarity], result of:
          0.30929932 = score(doc=6851,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.62291753 = fieldWeight in 6851, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=6851)
      0.25 = coord(1/4)
    
    Abstract
    This book provides a quick, concise guide to the issues surrounding the preparation of a well-designed, professional web site using HTML. Topics covered include: how to plan your web site effectively; effective use of hypertext, images, audio and video; layout techniques using tables and lists; and how to use style sheets, font sizes and plans for mathematical equation markup. Integration of CGI scripts, Java and ActiveX into your web site is also discussed
  10. Lord Wodehouse: ¬The Intranet : the quiet (r)evolution (1997) 0.08
    0.07732483 = product of:
      0.30929932 = sum of:
        0.30929932 = weight(_text_:java in 171) [ClassicSimilarity], result of:
          0.30929932 = score(doc=171,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.62291753 = fieldWeight in 171, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=171)
      0.25 = coord(1/4)
    
    Abstract
    Explains how the Intranet (in effect an Internet limited to the computer systems of a single organization) developed out of the Internet, and what its uses and advantages are. Focuses on the Intranet developed in the Glaxo Wellcome organization. Briefly discusses a number of technologies in development, e.g. Java, RealAudio, 3D and VRML, and summarizes the issues involved in the successful development of the Intranet, that is, bandwidth, searching tools, security, and legal issues
  11. Wang, J.; Reid, E.O.F.: Developing WWW information systems on the Internet (1996) 0.08
    0.07732483 = product of:
      0.30929932 = sum of:
        0.30929932 = weight(_text_:java in 604) [ClassicSimilarity], result of:
          0.30929932 = score(doc=604,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.62291753 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=604)
      0.25 = coord(1/4)
    
    Abstract
    Gives an overview of Web information system development. Discusses some basic concepts and technologies such as HTML, HTML FORM, CGI and Java, which are associated with developing WWW information systems. Further discusses the design and implementation of Virtual Travel Mart, a Web based end user oriented travel information system. Finally, addresses some issues in developing WWW information systems
  12. Ameritech releases Dynix WebPac on NT (1998) 0.08
    0.07732483 = product of:
      0.30929932 = sum of:
        0.30929932 = weight(_text_:java in 2782) [ClassicSimilarity], result of:
          0.30929932 = score(doc=2782,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.62291753 = fieldWeight in 2782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=2782)
      0.25 = coord(1/4)
    
    Abstract
    Ameritech Library Services has released Dynix WebPac on NT, which provides access to a Dynix catalogue from any Java-compatible Web browser. Users can place holds, cancel and postpone holds, view and renew items on loan and sort and limit search results from the Web. Describes some of the other features of Dynix WebPac
  13. Robinson, D.A.; Lester, C.R.; Hamilton, N.M.: Delivering computer assisted learning across the WWW (1998) 0.08
    0.07732483 = product of:
      0.30929932 = sum of:
        0.30929932 = weight(_text_:java in 4618) [ClassicSimilarity], result of:
          0.30929932 = score(doc=4618,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.62291753 = fieldWeight in 4618, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=4618)
      0.25 = coord(1/4)
    
    Abstract
    Demonstrates a new method of providing networked computer assisted learning to avoid the pitfalls of traditional methods. This was achieved using Web pages enhanced with Java applets, MPEG video clips and Dynamic HTML
  14. Bates, C.: Web programming : building Internet applications (2000) 0.08
    0.07732483 = product of:
      0.30929932 = sum of:
        0.30929932 = weight(_text_:java in 130) [ClassicSimilarity], result of:
          0.30929932 = score(doc=130,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.62291753 = fieldWeight in 130, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=130)
      0.25 = coord(1/4)
    
    Object
    Java
  15. Zschunke, P.: Richtig googeln : Ein neues Buch hilft, alle Möglichkeiten der populären Suchmaschine zu nutzen (2003) 0.08
    0.07704336 = product of:
      0.15408672 = sum of:
        0.11598724 = weight(_text_:java in 55) [ClassicSimilarity], result of:
          0.11598724 = score(doc=55,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.23359407 = fieldWeight in 55, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0234375 = fieldNorm(doc=55)
        0.038099483 = weight(_text_:und in 55) [ClassicSimilarity], result of:
          0.038099483 = score(doc=55,freq=22.0), product of:
            0.15626246 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.07045517 = queryNorm
            0.24381724 = fieldWeight in 55, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0234375 = fieldNorm(doc=55)
      0.5 = coord(2/4)
    
    Content
    "Fünf Jahre nach seiner Gründung ist Google zum Herz des weltweiten Computernetzes geworden. Mit seiner Konzentration aufs Wesentliche hat die Suchmaschine alle anderen Anbieter weit zurück gelassen. Aber Google kann viel mehr, als im Web nach Texten und Bildern zu suchen. Gesammelt und aufbereitet werden auch Beiträge in Diskussionsforen (Newsgroups), aktuelle Nachrichten und andere im Netz verfügbare Informationen. Wer sich beim "Googeln" darauf beschränkt, ein einziges Wort in das Suchformular einzutippen und dann die ersten von oft mehreren hunderttausend Treffern anzuschauen, nutzt nur einen winzigen Bruchteil der Möglichkeiten. Wie man Google bis zum letzten ausreizt, haben Tara Calishain und Rael Dornfest in einem bislang nur auf Englisch veröffentlichten Buch dargestellt (Tara Calishain/Rael Dornfest: Google Hacks", www.oreilly.de, 28 Euro. Die wichtigsten Praxistipps kosten als Google Pocket Guide 12 Euro). - Suchen mit bis zu zehn Wörtern - Ihre "100 Google Hacks" beginnen mit Google-Strategien wie der Kombination mehrerer Suchbegriffe und enden mit der Aufforderung zur eigenen Nutzung der Google API ("Application Programming Interface"). Diese Schnittstelle kann zur Entwicklung von eigenen Programmen eingesetzt werden,,die auf die Google-Datenbank mit ihren mehr als drei Milliarden Einträgen zugreifen. Ein bewussteres Suchen im Internet beginnt mit der Kombination mehrerer Suchbegriffe - bis zu zehn Wörter können in das Formularfeld eingetippt werden, welche Google mit dem lo-gischen Ausdruck "und" verknüpft. Diese Standardvorgabe kann mit einem dazwischen eingefügten "or" zu einer Oder-Verknüpfung geändert werden. Soll ein bestimmter Begriff nicht auftauchen, wird ein Minuszeichen davor gesetzt. Auf diese Weise können bei einer Suche etwa alle Treffer ausgefiltert werden, die vom Online-Buchhändler Amazon kommen. Weiter gehende Syntax-Anweisungen helfen ebenfalls dabei, die Suche gezielt einzugrenzen: Die vorangestellte Anweisung "intitle:" etwa (ohne Anführungszeichen einzugeben) beschränkt die Suche auf all diejenigen Web-Seiten, die den direkt danach folgenden Begriff in ihrem Titel aufführen. Die Computer von Google bewältigen täglich mehr als 200 Millionen Anfragen. Die Antworten kommen aus einer Datenbank, die mehr als drei Milliarden Einträge enthält und regelmäßig aktualisiert wird. Dazu Werden SoftwareRoboter eingesetzt, so genannte "Search-Bots", die sich die Hyperlinks auf Web-Seiten entlang hangeln und für jedes Web-Dokument einen Index zur Volltextsuche anlegen. Die Einnahmen des 1998 von Larry Page und Sergey Brin gegründeten Unternehmens stammen zumeist von Internet-Portalen, welche die GoogleSuchtechnik für ihre eigenen Dienste übernehmen. Eine zwei Einnahmequelle ist die Werbung von Unternehmen, die für eine optisch hervorgehobene Platzierung in den GoogleTrefferlisten zahlen. Das Unternehmen mit Sitz im kalifornischen Mountain View beschäftigt rund 800 Mitarbeiter. Der Name Google leitet sich ab von dem Kunstwort "Googol", mit dem der amerikanische Mathematiker Edward Kasner die unvorstellbar große Zahl 10 hoch 100 (eine 1 mit hundert Nullen) bezeichnet hat. Kommerzielle Internet-Anbieter sind sehr, daran interessiert, auf den vordersten Plätzen einer Google-Trefferliste zu erscheinen.
    Since Google, unlike Yahoo or Lycos, never wanted to become an Internet portal designed to attract as many visits as possible, its database can also be searched from outside the Google web site. For this there is, first of all, the "Google Toolbar" for Internet Explorer, which gives that browser its own toolbar for Google searches. Independent developers offer their own implementation of this tool for the Netscape/Mozilla browser. In addition, a Google search input field can also be placed on one's own web page - only four lines of HTML code are needed for this. Incidentally, a Google search can even be started entirely without a browser. For this purpose the company released the API ("Application Programming Interface") in April of last year, which can be built into custom programs. A Google search can be started with an e-mail, for example: the search terms are entered in the subject line of an otherwise empty e-mail, which is sent to the address google@capeclear.com. Shortly afterwards an automatic reply arrives with the first ten hits. Given the necessary knowledge, Google queries can also be built into web services - programs that process data from the Internet. Suitable programming techniques include Perl, PHP, Python or Java. Calishain and Dornfest even present a number of offbeat sites that use such programs for abstract poems or other works of art."
  16. Zhang, J.; Mostafa, J.; Tripathy, H.: Information retrieval by semantic analysis and visualization of the concept space of D-Lib® magazine (2002) 0.07
    0.07392389 = product of:
      0.14784779 = sum of:
        0.09665604 = weight(_text_:java in 2211) [ClassicSimilarity], result of:
          0.09665604 = score(doc=2211,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.19466174 = fieldWeight in 2211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.01953125 = fieldNorm(doc=2211)
        0.051191743 = weight(_text_:have in 2211) [ClassicSimilarity], result of:
          0.051191743 = score(doc=2211,freq=14.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.23043081 = fieldWeight in 2211, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.01953125 = fieldNorm(doc=2211)
      0.5 = coord(2/4)
    
    Abstract
    In this article we present a method for retrieving documents from a digital library through a visual interface based on automatically generated concepts. We used a vocabulary generation algorithm to generate a set of concepts for the digital library and a technique called the max-min distance technique to cluster them. Additionally, the concepts were visualized in a spring embedding graph layout to depict the semantic relationship among them. The resulting graph layout serves as an aid to users for retrieving documents. An online archive containing the contents of D-Lib Magazine from July 1995 to May 2002 was used to test the utility of an implemented retrieval and visualization system. We believe that the method developed and tested can be applied to many different domains to help users get a better understanding of online document collections and to minimize users' cognitive load during execution of search tasks. Over the past few years, the volume of information available through the World Wide Web has been expanding exponentially. Never has so much information been so readily available and shared among so many people. Unfortunately, the unstructured nature and huge volume of information accessible over networks have made it hard for users to sift through and find relevant information. To deal with this problem, information retrieval (IR) techniques have gained more intensive attention from both industrial and academic researchers. Numerous IR techniques have been developed to help deal with the information overload problem. These techniques concentrate on mathematical models and algorithms for retrieval. Popular IR models such as the Boolean model, the vector-space model, the probabilistic model and their variants are well established.
    From the user's perspective, however, it is still difficult to use current information retrieval systems. Users frequently have problems expressing their information needs and translating those needs into queries. This is partly due to the fact that information needs cannot be expressed appropriately in systems terms. It is not unusual for users to input search terms that are different from the index terms information systems use. Various methods have been proposed to help users choose search terms and articulate queries. One widely used approach is to incorporate into the information system a thesaurus-like component that represents both the important concepts in a particular subject area and the semantic relationships among those concepts. Unfortunately, the development and use of thesauri is not without its own problems. The thesaurus employed in a specific information system has often been developed for a general subject area and needs significant enhancement to be tailored to the information system where it is to be used. This thesaurus development process, if done manually, is both time consuming and labor intensive. Usage of a thesaurus in searching is complex and may raise barriers for the user. For illustration purposes, let us consider two scenarios of thesaurus usage. In the first scenario the user inputs a search term and the thesaurus then displays a matching set of related terms. Without an overview of the thesaurus - and without the ability to see the matching terms in the context of other terms - it may be difficult to assess the quality of the related terms in order to select the correct term. In the second scenario the user browses the whole thesaurus, which is organized as an alphabetically ordered list. The problem with this approach is that the list may be long, and it does not show users the global semantic relationships among all the listed terms.
    Nevertheless, because thesaurus use has been shown to improve retrieval, for our method we integrate functions in the search interface that permit users to explore built-in search vocabularies to improve retrieval from digital libraries. Our method automatically generates the terms and their semantic relationships representing relevant topics covered in a digital library. We call these generated terms the "concepts", and the generated terms and their semantic relationships we call the "concept space". Additionally, we used a visualization technique to display the concept space and allow users to interact with this space. The automatically generated term set is considered to be more representative of the subject area in a corpus than an "externally" imposed thesaurus, and our method has the potential of saving a significant amount of time and labor for those who have been manually creating thesauri as well. Information visualization is an emerging discipline that developed very quickly in the last decade. With growing volumes of documents and associated complexities, information visualization has become increasingly important. Researchers have found information visualization to be an effective way to use and understand information while minimizing a user's cognitive load. Our work was based on an algorithmic approach of concept discovery and association. Concepts are discovered using an algorithm based on an automated thesaurus generation procedure. Subsequently, similarities among terms are computed using the cosine measure, and the associations among terms are established using a method known as max-min distance clustering. The concept space is then visualized in a spring embedding graph, which roughly shows the semantic relationships among concepts in a 2-D visual representation. The semantic space of the visualization is used as a medium for users to retrieve the desired documents. In the remainder of this article, we present our algorithmic approach of concept generation and clustering, followed by a description of the visualization technique and interactive interface. The paper ends with key conclusions and discussions of future work.
    Content
    The JAVA applet is available at <http://ella.slis.indiana.edu/~junzhang/dlib/IV.html>. A prototype of this interface has been developed and is available at <http://ella.slis.indiana.edu/~junzhang/dlib/IV.html>. The D-Lib search interface is available at <http://www.dlib.org/Architext/AT-dlib2query.html>.
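    The similarity step named in the abstract above (cosine measure between generated term vectors, with associations then drawn by max-min distance clustering) can be illustrated compactly. A minimal sketch follows, assuming terms are represented as dense double[] co-occurrence vectors; the class and method names are illustrative, not the authors' code:

      // TermSimilarity.java - cosine similarity over term vectors, a sketch of the
      // measure the abstract names (not the paper's implementation).
      public class TermSimilarity {

          // Cosine similarity: dot(a, b) / (|a| * |b|); 1.0 means identical direction.
          static double cosine(double[] a, double[] b) {
              double dot = 0, normA = 0, normB = 0;
              for (int i = 0; i < a.length; i++) {
                  dot += a[i] * b[i];
                  normA += a[i] * a[i];
                  normB += b[i] * b[i];
              }
              if (normA == 0 || normB == 0) return 0; // empty vector: define similarity as 0
              return dot / (Math.sqrt(normA) * Math.sqrt(normB));
          }

          // Pairwise similarity matrix; associations for the concept space would then
          // be drawn between term pairs whose similarity clears a clustering threshold.
          static double[][] similarityMatrix(double[][] vectors) {
              int n = vectors.length;
              double[][] sim = new double[n][n];
              for (int i = 0; i < n; i++)
                  for (int j = i; j < n; j++)
                      sim[i][j] = sim[j][i] = cosine(vectors[i], vectors[j]);
              return sim;
          }

          public static void main(String[] args) {
              double[][] vectors = { {1, 2, 0}, {2, 4, 0}, {0, 1, 3} }; // toy co-occurrence counts
              System.out.println(cosine(vectors[0], vectors[1])); // 1.0 (parallel vectors)
              System.out.println(cosine(vectors[0], vectors[2])); // ~0.283
          }
      }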
  17. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.07
    0.06960282 = product of:
      0.13920563 = sum of:
        0.11598724 = weight(_text_:java in 378) [ClassicSimilarity], result of:
          0.11598724 = score(doc=378,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.23359407 = fieldWeight in 378, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0234375 = fieldNorm(doc=378)
        0.023218391 = weight(_text_:have in 378) [ClassicSimilarity], result of:
          0.023218391 = score(doc=378,freq=2.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.10451359 = fieldWeight in 378, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.0234375 = fieldNorm(doc=378)
      0.5 = coord(2/4)
    
    Abstract
    The W3C OWL Web Ontology Language has been a W3C recommendation since 2004, and the specification of its successor, OWL 2, is being finalised. OWL plays an important role in an increasing number and range of applications, and as experience using the language grows, new ideas for further extending its reach continue to be proposed. The OWL: Experiences and Directions (OWLED) workshop series is a forum for practitioners in industry and academia, tool developers, and others interested in OWL to describe real and potential applications, to share experience, and to discuss requirements for language extensions and modifications. The workshop will bring users, implementors and researchers together to measure the state of need against the state of the art, and to set an agenda for research and deployment in order to incorporate OWL-based technologies into new applications. The 2009 OWLED workshop will be co-located with the Eighth International Semantic Web Conference (ISWC) and the Third International Conference on Web Reasoning and Rule Systems (RR2009). It will be held in Chantilly, VA, USA on October 23-24, 2009. The workshop will concentrate on issues related to the development and W3C standardization of OWL 2 and beyond, but other issues related to OWL are also of interest, particularly those related to the task forces set up at OWLED 2007. As usual, the workshop will try to encourage participants to work together and will give space for discussions on various topics, to be decided and published at some point in the future. We ask participants to have a look at these topics and the accepted submissions before the workshop, and to prepare single "slides" that can be presented during these discussions. There will also be formal presentation of submissions to the workshop.
    Content
    Long Papers * Suggestions for OWL 3, Pascal Hitzler. * BestMap: Context-Aware SKOS Vocabulary Mappings in OWL 2, Rinke Hoekstra. * Mechanisms for Importing Modules, Bijan Parsia, Ulrike Sattler and Thomas Schneider. * A Syntax for Rules in OWL 2, Birte Glimm, Matthew Horridge, Bijan Parsia and Peter Patel-Schneider. * PelletSpatial: A Hybrid RCC-8 and RDF/OWL Reasoning and Query Engine, Markus Stocker and Evren Sirin. * The OWL API: A Java API for Working with OWL 2 Ontologies, Matthew Horridge and Sean Bechhofer. * From Justifications to Proofs for Entailments in OWL, Matthew Horridge, Bijan Parsia and Ulrike Sattler. * A Solution for the Man-Man Problem in the Family History Knowledge Base, Dmitry Tsarkov, Ulrike Sattler and Robert Stevens. * Towards Integrity Constraints in OWL, Evren Sirin and Jiao Tao. * Processing OWL2 ontologies using Thea: An application of logic programming, Vangelis Vassiliadis, Jan Wielemaker and Chris Mungall. * Reasoning in Metamodeling Enabled Ontologies, Nophadol Jekjantuk, Gerd Gröner and Jeff Z. Pan.
  18. Braeckman, J.: ¬The integration of library information into a campus wide information system (1996) 0.07
    0.06765922 = product of:
      0.2706369 = sum of:
        0.2706369 = weight(_text_:java in 729) [ClassicSimilarity], result of:
          0.2706369 = score(doc=729,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.5450528 = fieldWeight in 729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=729)
      0.25 = coord(1/4)
    
    Abstract
    Discusses the development of Campus Wide Information Systems with reference to the work of Leuven University Library. A 4th phase can now be distinguished in the evolution of CWISs as they evolve towards Intranets. WWW technology is applied to organise a consistent interface to different types of information, databases and services within an institution. WWW servers now exist via which queries and query results are translated from the Web environment to the specific database query language and vice versa. The integration of Java will enable programs to be executed from within the Web environment. Describes each phase of CWIS development at KU Leuven
  19. Chang, S.-F.; Smith, J.R.; Meng, J.: Efficient techniques for feature-based image / video access and manipulations (1997) 0.07
    0.06765922 = product of:
      0.2706369 = sum of:
        0.2706369 = weight(_text_:java in 756) [ClassicSimilarity], result of:
          0.2706369 = score(doc=756,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.5450528 = fieldWeight in 756, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=756)
      0.25 = coord(1/4)
    
    Abstract
    Describes 2 research projects aimed at studying the parallel issues of image and video indexing, information retrieval and manipulation: VisualSEEK, a content-based image query system and a Java-based WWW application supporting localised colour and spatial similarity retrieval; and CVEPS (Compressed Video Editing and Parsing System), which supports video manipulation with indexing support of individual frames from VisualSEEK and a new hierarchical video browsing and indexing system. In both media forms, these systems address the problem of heterogeneous unconstrained collections
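    The localised colour similarity mentioned in the abstract is, in content-based image retrieval systems of this era, typically computed between colour histograms of image regions. A minimal sketch using histogram intersection follows; the choice of metric and all names are illustrative assumptions, since the paper's exact measure is not given here:

      // ColorSimilarity.java - histogram intersection, a classic colour-based image
      // similarity measure (an illustrative stand-in for VisualSEEK's own metric).
      public class ColorSimilarity {

          // Fraction of the query histogram covered by the region histogram:
          // sum of bin-wise minima over the query's total mass (1.0 = full match).
          static double intersection(double[] query, double[] region) {
              double shared = 0, total = 0;
              for (int i = 0; i < query.length; i++) {
                  shared += Math.min(query[i], region[i]);
                  total += query[i];
              }
              return total == 0 ? 0 : shared / total;
          }

          public static void main(String[] args) {
              double[] q = {0.5, 0.3, 0.2}; // toy 3-bin normalised colour histograms
              double[] r = {0.4, 0.4, 0.2};
              System.out.println(intersection(q, r)); // 0.9
          }
      }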
  20. Kirschenbaum, M.: Documenting digital images : textual meta-data at the Blake Archive (1998) 0.07
    0.06765922 = product of:
      0.2706369 = sum of:
        0.2706369 = weight(_text_:java in 4287) [ClassicSimilarity], result of:
          0.2706369 = score(doc=4287,freq=2.0), product of:
            0.49653333 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.07045517 = queryNorm
            0.5450528 = fieldWeight in 4287, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=4287)
      0.25 = coord(1/4)
    
    Abstract
    Describes the work undertaken by the William Blake Archive, Virginia University, to document the metadata tools for handling digital images of illustrations accompanying Blake's work. Images are encoded in both JPEG and TIFF formats. Image Documentation (ID) records are slotted into that portion of the JPEG file reserved for textual metadata. Because the textual content of the ID record now becomes part of the image file itself, the documentary metadata travels with the image even if it is downloaded from one file to another. The metadata is invisible when viewing the image but becomes accessible to users via the 'info' button on the control panel of the Java applet
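    The embedding described above (a textual ID record carried inside the JPEG file itself) can be illustrated with the JPEG comment segment (marker 0xFFFE), which image decoders ignore. A minimal sketch; note that using the COM segment is an assumption, as the abstract does not say which textual-metadata segment the Archive actually used:

      // JpegIdRecord.java - embed a textual record as a JPEG COM segment after SOI.
      // Viewers ignore COM data, so the picture is unchanged, but the record now
      // travels inside the file wherever it is copied or downloaded.
      import java.io.IOException;
      import java.nio.charset.StandardCharsets;
      import java.nio.file.Files;
      import java.nio.file.Path;

      public class JpegIdRecord {
          static void embed(Path in, Path out, String record) throws IOException {
              byte[] jpeg = Files.readAllBytes(in);
              if (jpeg.length < 2 || (jpeg[0] & 0xFF) != 0xFF || (jpeg[1] & 0xFF) != 0xD8)
                  throw new IOException("not a JPEG (missing SOI marker)");
              byte[] payload = record.getBytes(StandardCharsets.US_ASCII);
              int len = payload.length + 2;                     // length field counts itself
              if (len > 0xFFFF)
                  throw new IOException("record too long for one COM segment");
              byte[] result = new byte[jpeg.length + 4 + payload.length];
              result[0] = (byte) 0xFF; result[1] = (byte) 0xD8; // SOI
              result[2] = (byte) 0xFF; result[3] = (byte) 0xFE; // COM marker
              result[4] = (byte) (len >> 8);                    // big-endian segment length
              result[5] = (byte) (len & 0xFF);
              System.arraycopy(payload, 0, result, 6, payload.length);
              System.arraycopy(jpeg, 2, result, 6 + payload.length, jpeg.length - 2);
              Files.write(out, result);
          }
      }

    A caller would invoke something like embed(Path.of("plate.jpg"), Path.of("plate-id.jpg"), "ID: ...") with hypothetical paths; the output file displays identically but carries the record.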

Languages

  • d 32
  • m 3
  • nl 1

Types

  • a 5173
  • m 533
  • el 369
  • s 165
  • r 30
  • b 28
  • i 24
  • x 23
  • n 21
  • p 8
  • ? 1
  • ag 1
  • v 1
