Search (2209 results, page 2 of 111)

  • Filter: language_ss:"e"
  1. Trenner, L.: ¬A comparative survey of the friendliness of online "help" in interactive information retrieval systems (1989) 0.09
    0.09461337 = product of:
      0.3784535 = sum of:
        0.3784535 = weight(_text_:help in 798) [ClassicSimilarity], result of:
          0.3784535 = score(doc=798,freq=16.0), product of:
            0.32182297 = queryWeight, product of:
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.06841661 = queryNorm
            1.1759679 = fieldWeight in 798, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.0625 = fieldNorm(doc=798)
      0.25 = coord(1/4)
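
    The breakdown above follows Lucene's ClassicSimilarity (TF-IDF) scoring: each matching clause contributes the product of queryWeight (idf times queryNorm) and fieldWeight (sqrt(termFreq) times idf times fieldNorm), and the clause sum is scaled by the coordination factor. A minimal Python sketch, using the constants copied from the tree above (the function name is illustrative, not part of any Lucene API):

    ```python
    from math import sqrt

    def classic_similarity_weight(freq, idf, query_norm, field_norm):
        """Reproduce one weight(_text_:term) clause of a ClassicSimilarity explain tree."""
        query_weight = idf * query_norm                 # idf(docFreq, maxDocs) * queryNorm
        field_weight = sqrt(freq) * idf * field_norm    # tf(freq) * idf * fieldNorm
        return query_weight * field_weight

    # Constants copied from the explanation for doc 798, term "help":
    w = classic_similarity_weight(freq=16.0, idf=4.7038717,
                                  query_norm=0.06841661, field_norm=0.0625)
    score = w * (1 / 4)                  # coord(1/4): 1 of 4 query clauses matched
    print(round(w, 7), round(score, 8))  # ~0.3784535 and ~0.09461337
    ```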
    
    Abstract
    This article discusses the provision of "help" in interactive information retrieval systems (IIRS) and describes a comparative survey of the "help" facilities of 16 such systems. Six guidelines for the design of a "help" facility are drawn up, and these are used to evaluate the quality and friendliness of the "help" provided by each system. The scores indicate that "help" on IIRS is often inadequate, especially on the commercial online systems. The article concludes by discussing why "help" is so unfriendly and by suggesting some ways in which online "help" could be improved
  2. Mu, X.; Lu, K.; Ryu, H.: Explicitly integrating MeSH thesaurus help into health information retrieval systems : an empirical user study (2014) 0.09
    0.09292307 = product of:
      0.18584614 = sum of:
        0.018591745 = weight(_text_:und in 3703) [ClassicSimilarity], result of:
          0.018591745 = score(doc=3703,freq=2.0), product of:
            0.15174113 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06841661 = queryNorm
            0.12252277 = fieldWeight in 3703, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0390625 = fieldNorm(doc=3703)
        0.16725439 = weight(_text_:help in 3703) [ClassicSimilarity], result of:
          0.16725439 = score(doc=3703,freq=8.0), product of:
            0.32182297 = queryWeight, product of:
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.06841661 = queryNorm
            0.5197093 = fieldWeight in 3703, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.0390625 = fieldNorm(doc=3703)
      0.5 = coord(2/4)
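
    This record matches two of the four query clauses ("und" and "help"), so the clause weights above are summed and then scaled by coord(2/4). A brief check in the spirit of the sketch after the first record, with the weights copied from the tree:

    ```python
    # Clause weights copied from the explain tree for doc 3703:
    w_und  = 0.018591745   # weight(_text_:und in 3703)
    w_help = 0.16725439    # weight(_text_:help in 3703)
    print(round((w_und + w_help) * (2 / 4), 8))   # coord(2/4) -> ~0.09292307
    ```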
    
    Abstract
    When consumers search for health information, a major obstacle is their unfamiliarity with medical terminology. Even though medical thesauri such as the Medical Subject Headings (MeSH) and related tools (e.g., the MeSH Browser) were created to help consumers find medical term definitions, the lack of direct and explicit integration of these help tools into health retrieval systems has prevented them from effectively achieving their objectives. To explore this issue, we conducted an empirical study with two systems: One is a simple interface system supporting query-based searching; the other is an augmented system with two new components supporting MeSH term searching and MeSH tree browsing. A total of 45 subjects were recruited to participate in the study. The results indicated that the augmented system is more effective than the simple system in terms of improving user-perceived topic familiarity and question-answer performance, even though we did not find that users spent more time on the augmented system. The two new MeSH help components played a critical role in participants' health information retrieval and were found to allow them to develop new search strategies. The findings of the study enhanced our understanding of consumers' search behaviors and shed light on the design of future health information retrieval systems.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  3. Hawk, J.: OCLC SiteSearch (1998) 0.09
    0.09291604 = product of:
      0.37166417 = sum of:
        0.37166417 = weight(_text_:java in 3079) [ClassicSimilarity], result of:
          0.37166417 = score(doc=3079,freq=4.0), product of:
            0.48216656 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06841661 = queryNorm
            0.7708211 = fieldWeight in 3079, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3079)
      0.25 = coord(1/4)
    
    Abstract
    Feature on OCLC's SiteSearch suite of software, first introduced in 1992, and how it is helping over 250 libraries integrate and manage their electronic library collections. Describes the new features of version 4.0, released in Apr 1997, which include a new interface, a Java-based architecture, and an online documentation and training site. Gives an account of how Java is helping the Georgia Library Learning Online (GALILEO) project keep pace on the WWW; describes the use of SiteSearch by libraries to customize their interfaces to electronic resources; and gives details of Project Athena (Assessing Technological Horizons to Educate the Nashville Area), which is using OCLC SiteSearch to allow area library users to search the holdings of public and university libraries simultaneously
  4. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.09
    0.09291604 = product of:
      0.37166417 = sum of:
        0.37166417 = weight(_text_:java in 2673) [ClassicSimilarity], result of:
          0.37166417 = score(doc=2673,freq=4.0), product of:
            0.48216656 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06841661 = queryNorm
            0.7708211 = fieldWeight in 2673, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=2673)
      0.25 = coord(1/4)
    
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib
  5. Valauskas, E.J.: Creating an online help system (1994) 0.08
    0.083627194 = product of:
      0.33450878 = sum of:
        0.33450878 = weight(_text_:help in 8186) [ClassicSimilarity], result of:
          0.33450878 = score(doc=8186,freq=8.0), product of:
            0.32182297 = queryWeight, product of:
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.06841661 = queryNorm
            1.0394186 = fieldWeight in 8186, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.078125 = fieldNorm(doc=8186)
      0.25 = coord(1/4)
    
    Abstract
    Details how the ON-LINE Help Construction Kit (version 2.2) can be used to create a help system for searching online catalogues. Describes the planning process when preparing such a system and how to use the kit to create help panels
  6. Zhang, J.; Mostafa, J.; Tripathy, H.: Information retrieval by semantic analysis and visualization of the concept space of D-Lib® magazine (2002) 0.08
    0.08314133 = product of:
      0.16628265 = sum of:
        0.09385938 = weight(_text_:java in 2211) [ClassicSimilarity], result of:
          0.09385938 = score(doc=2211,freq=2.0), product of:
            0.48216656 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06841661 = queryNorm
            0.19466174 = fieldWeight in 2211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.01953125 = fieldNorm(doc=2211)
        0.07242328 = weight(_text_:help in 2211) [ClassicSimilarity], result of:
          0.07242328 = score(doc=2211,freq=6.0), product of:
            0.32182297 = queryWeight, product of:
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.06841661 = queryNorm
            0.22504075 = fieldWeight in 2211, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.01953125 = fieldNorm(doc=2211)
      0.5 = coord(2/4)
    
    Abstract
    In this article we present a method for retrieving documents from a digital library through a visual interface based on automatically generated concepts. We used a vocabulary generation algorithm to generate a set of concepts for the digital library and a technique called the max-min distance technique to cluster them. Additionally, the concepts were visualized in a spring embedding graph layout to depict the semantic relationship among them. The resulting graph layout serves as an aid to users for retrieving documents. An online archive containing the contents of D-Lib Magazine from July 1995 to May 2002 was used to test the utility of an implemented retrieval and visualization system. We believe that the method developed and tested can be applied to many different domains to help users get a better understanding of online document collections and to minimize users' cognitive load during execution of search tasks. Over the past few years, the volume of information available through the World Wide Web has been expanding exponentially. Never has so much information been so readily available and shared among so many people. Unfortunately, the unstructured nature and huge volume of information accessible over networks have made it hard for users to sift through and find relevant information. To deal with this problem, information retrieval (IR) techniques have gained more intensive attention from both industrial and academic researchers. Numerous IR techniques have been developed to help deal with the information overload problem. These techniques concentrate on mathematical models and algorithms for retrieval. Popular IR models such as the Boolean model, the vector-space model, the probabilistic model and their variants are well established.
    From the user's perspective, however, it is still difficult to use current information retrieval systems. Users frequently have problems expressing their information needs and translating those needs into queries. This is partly due to the fact that information needs cannot be expressed appropriately in system terms. It is not unusual for users to input search terms that are different from the index terms information systems use. Various methods have been proposed to help users choose search terms and articulate queries. One widely used approach is to incorporate into the information system a thesaurus-like component that represents both the important concepts in a particular subject area and the semantic relationships among those concepts. Unfortunately, the development and use of thesauri is not without its own problems. The thesaurus employed in a specific information system has often been developed for a general subject area and needs significant enhancement to be tailored to the information system where it is to be used. This thesaurus development process, if done manually, is both time consuming and labor intensive. Usage of a thesaurus in searching is complex and may raise barriers for the user. For illustration purposes, let us consider two scenarios of thesaurus usage. In the first scenario the user inputs a search term and the thesaurus then displays a matching set of related terms. Without an overview of the thesaurus - and without the ability to see the matching terms in the context of other terms - it may be difficult to assess the quality of the related terms in order to select the correct term. In the second scenario the user browses the whole thesaurus, which is organized as an alphabetically ordered list. The problem with this approach is that the list may be long, and it does not show users the global semantic relationships among all the listed terms.
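    The abstract above mentions a "max-min distance technique" for clustering the generated concepts. The paper's exact variant is not given here; the following is a hedged Python sketch of a generic max-min (farthest-first) selection over concept vectors, with made-up data:

    ```python
    import numpy as np

    def max_min_select(vectors, k):
        """Generic max-min (farthest-first) selection: repeatedly pick the vector
        whose minimum distance to the already selected seeds is largest.
        The paper's own clustering variant may differ from this sketch."""
        vectors = np.asarray(vectors, dtype=float)
        selected = [0]                               # arbitrary first seed
        while len(selected) < k:
            dists = np.linalg.norm(
                vectors[:, None, :] - vectors[selected][None, :, :], axis=2)
            min_to_selected = dists.min(axis=1)      # distance to nearest chosen seed
            selected.append(int(min_to_selected.argmax()))
        return selected

    # Toy usage with random "concept" vectors (illustrative only):
    concepts = np.random.rand(20, 5)
    print(max_min_select(concepts, k=4))
    ```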
    Content
    The JAVA applet is available at <http://ella.slis.indiana.edu/~junzhang/dlib/IV.html>. A prototype of this interface has been developed and is available at <http://ella.slis.indiana.edu/~junzhang/dlib/IV.html>. The D-Lib search interface is available at <http://www.dlib.org/Architext/AT-dlib2query.html>.
  7. Taylor, R.S.: Question negotiation and information seeking in libraries (1968) 0.08
    0.08211508 = product of:
      0.16423015 = sum of:
        0.022310091 = weight(_text_:und in 3694) [ClassicSimilarity], result of:
          0.022310091 = score(doc=3694,freq=2.0), product of:
            0.15174113 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06841661 = queryNorm
            0.14702731 = fieldWeight in 3694, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.046875 = fieldNorm(doc=3694)
        0.14192006 = weight(_text_:help in 3694) [ClassicSimilarity], result of:
          0.14192006 = score(doc=3694,freq=4.0), product of:
            0.32182297 = queryWeight, product of:
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.06841661 = queryNorm
            0.44098797 = fieldWeight in 3694, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.046875 = fieldNorm(doc=3694)
      0.5 = coord(2/4)
    
    Abstract
    Seekers of information in libraries either go through a librarian intermediary or they help themselves. When they go through librarians they must develop their questions through four levels of need, referred to here as the visceral, conscious, formalized, and compromised needs. In his pre-search interview with an information seeker the reference librarian attempts to help him arrive at an understanding of his 'compromised' need by determining: (1) the subject of his interest; (2) his motivation; (3) his personal characteristics; (4) the relationship of the inquiry to file organization; and (5) anticipated answers. The author contends that research is needed into the techniques of conducting this negotiation between the user and the reference librarian
    Footnote
    See also in: Reference and information services: a reader. Ed.: B. Katz and A. Tarr. New York: Scarecrow Press 1978. See also the article on its reception and impact: Chang, Y.-W.: The influence of Taylor's paper, Question-Negotiation and Information-Seeking in Libraries. In: Information processing and management. 49(2013) no.5, pp.983-994.
  8. Eckert, K.: ¬The ICE-map visualization (2011) 0.08
    0.08177515 = product of:
      0.1635503 = sum of:
        0.02974679 = weight(_text_:und in 743) [ClassicSimilarity], result of:
          0.02974679 = score(doc=743,freq=2.0), product of:
            0.15174113 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06841661 = queryNorm
            0.19603643 = fieldWeight in 743, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0625 = fieldNorm(doc=743)
        0.13380352 = weight(_text_:help in 743) [ClassicSimilarity], result of:
          0.13380352 = score(doc=743,freq=2.0), product of:
            0.32182297 = queryWeight, product of:
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.06841661 = queryNorm
            0.41576743 = fieldWeight in 743, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.0625 = fieldNorm(doc=743)
      0.5 = coord(2/4)
    
    Abstract
    In this paper, we describe in detail the Information Content Evaluation Map (ICE-Map Visualization, formerly referred to as IC Difference Analysis). The ICE-Map Visualization is a visual data mining approach for all kinds of concept hierarchies that uses statistics about the concept usage to help a user in the evaluation and maintenance of the hierarchy. It consists of a statistical framework that employs the notion of information content from information theory, as well as a visualization of the hierarchy and of the result of the statistical analysis by means of a treemap.
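    As a hedged illustration of the information-content notion the abstract refers to, the sketch below estimates IC(c) = -log2 p(c) from concept usage counts aggregated over a concept's subtree; the ICE-Map framework's exact estimator and weighting may differ, and the hierarchy here is a toy example:

    ```python
    from math import log2

    def subtree_count(usage, children, concept):
        """Usage count of a concept plus all of its descendants."""
        return usage.get(concept, 0) + sum(
            subtree_count(usage, children, c) for c in children.get(concept, []))

    def information_content(usage, children, concept, root):
        """IC(c) = -log2 p(c), with p(c) estimated from how often the concept
        (or any narrower concept) is used for indexing."""
        total = subtree_count(usage, children, root)
        return -log2(subtree_count(usage, children, concept) / total)

    # Toy hierarchy and usage counts (illustrative only):
    children = {"root": ["A", "B"], "A": ["A1", "A2"], "B": []}
    usage = {"A1": 8, "A2": 2, "B": 10}
    print(information_content(usage, children, "A", "root"))   # 1.0
    ```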
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  9. Tennant, R.: Library catalogs : the wrong solution (2003) 0.08
    0.081403784 = product of:
      0.16280757 = sum of:
        0.112631254 = weight(_text_:java in 2558) [ClassicSimilarity], result of:
          0.112631254 = score(doc=2558,freq=2.0), product of:
            0.48216656 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06841661 = queryNorm
            0.23359407 = fieldWeight in 2558, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0234375 = fieldNorm(doc=2558)
        0.050176315 = weight(_text_:help in 2558) [ClassicSimilarity], result of:
          0.050176315 = score(doc=2558,freq=2.0), product of:
            0.32182297 = queryWeight, product of:
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.06841661 = queryNorm
            0.15591279 = fieldWeight in 2558, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.0234375 = fieldNorm(doc=2558)
      0.5 = coord(2/4)
    
    Content
    "MOST INTEGRATED library systems, as they are currently configured and used, should be removed from public view. Before I say why, let me be clean that I think the integrated library system serves a very important, albeit limited, role. An integrated library system should serve as a key piece of the infrastructure of a library, handling such tasks as ma terials acquisition, cataloging (including holdings, of course), and circulation. The integrated library system should be a complete and accurate recording of a local library's holdings. It should not be presented to users as the primary system for locating information. It fails badly at that important job. - Lack of content- The central problem of almost any library catalog system is that it typically includes only information about the books and journals held by a parficular library. Most do not provide access to joumal article indexes, web search engines, or even selective web directories like the Librarians' Index to the Internet. If they do offen such access, it is only via links to these services. The library catalog is far from onestop shopping for information. Although we acknowledge that fact to each other, we still treat it as if it were the best place in the universe to begin a search. Most of us give the catalog a place of great prominente an our web pages. But Information for each book is limited to the author, title, and a few subject headings. Seldom can book reviews, jacket summaries, recommendations, or tables of contents be found-or anything at all to help users determine if they want the material. - Lack of coverage - Most catalogs do not allow patrons to discover even all the books that are available to them. If you're lucky, your catalog may cover the collections of those libraries with which you have close ties-such as a regional network. But that leaves out all those items that could be requested via interlibrary loan. As Steve Coffman pointed out in his "Building Earth's Largest Library" article, we must show our users the universe that is open to them, highlight the items most accessible, and provide an estimate of how long it would take to obtain other items. - Inability to increase coverage - Despite some well-meaning attempts to smash everything of interest into the library catalog, the fact remains that most integrated library systems expect MARC records and MARC records only. This means that whatever we want to put into the catalog must be described using MARC and AACR2 (see "Marc Must Die," LJ 10/15/02, p. 26ff.). This is a barrier to dramatically increasing the scope of a catalog system, even if we decided to do it. How would you, for example, use the Open Archives Initiative Harvesting Protocol to crawl the bibliographic records of remote repositories and make them searchable within your library catalog? It can't be dope, and it shouldn't. The library catalog should be a record of a given library's holdings. Period.
    - User interface hostility - Recently I used the library catalogs of two public libraries, new products from two major library vendors. A link on one catalog said "Knowledge Portal," whatever that was supposed to mean. Clicking on it brought you to two choices: Z39.50 Bibliographic Sites and the World Wide Web. No public library user will have the faintest clue what Z39.50 is. The other catalog launched a Java applet that before long froze my web browser so badly I was forced to shut the program down. Pick a popular book and pretend you are a library patron. Choose three to five libraries at random from the lib-web-cats site (pick catalogs that are not using your system) and attempt to find your book. Try as much as possible to see the system through the eyes of your patrons - a teenager, a retiree, or an older faculty member. You may not always like what you see. Now go back to your own system and try the same thing. - What should the public see? - Our users deserve an information system that helps them find all different kinds of resources - books, articles, web pages, working papers in institutional repositories - and gives them the tools to focus in on what they want. This is not, and should not be, the library catalog. It must communicate with the catalog, but it will also need to interface with other information systems, such as vendor databases and web search engines. What will such a tool look like? We are seeing the beginnings of such a tool in the current offerings of cross-database search tools from a few vendors (see "Cross-Database Search," LJ 10/15/01, p. 29ff). We are in the early stages of developing the kind of robust, user-friendly tool that will be required before we can pull our catalogs from public view. Meanwhile, we can begin by making what we have easier to understand and use."
  10. Juhne, J.; Jensen, A.T.; Gronbaek, K.: Ariadne: a Java-based guided tour system for the World Wide Web (1998) 0.08
    0.07964232 = product of:
      0.31856927 = sum of:
        0.31856927 = weight(_text_:java in 4593) [ClassicSimilarity], result of:
          0.31856927 = score(doc=4593,freq=4.0), product of:
            0.48216656 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06841661 = queryNorm
            0.6607038 = fieldWeight in 4593, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=4593)
      0.25 = coord(1/4)
    
    Abstract
    Presents a guided tour system for the WWW, called Ariadne, which implements the ideas of trails and guided tours originating from the hypertext field. Ariadne appears as a Java applet to the user, and it stores guided tours in a database format separated from the WWW documents included in the tour. Its main advantages are: an independent user interface which does not affect the layout of the documents being part of the tour; branching tours where the user may follow alternative routes; composition of existing tours into aggregate tours; an overview map with an indication of which parts of a tour have been visited; and support for getting back on track. Ariadne is available as a research prototype, and it has been tested among a group of university students as well as casual users on the Internet
  11. Reed, D.: Essential HTML fast (1997) 0.08
    0.0750875 = product of:
      0.30035 = sum of:
        0.30035 = weight(_text_:java in 6851) [ClassicSimilarity], result of:
          0.30035 = score(doc=6851,freq=2.0), product of:
            0.48216656 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06841661 = queryNorm
            0.62291753 = fieldWeight in 6851, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=6851)
      0.25 = coord(1/4)
    
    Abstract
    This book provides a quick, concise guide to the issues surrounding the preparation of a well-designed, professional web site using HTML. Topics covered include: how to plan your web site effectively; effective use of hypertext, images, audio and video; layout techniques using tables and lists; and how to use style sheets, font sizes and plans for mathematical equation markup. Integration of CGI scripts, Java and ActiveX into your web site is also discussed
  12. Lord Wodehouse: ¬The Intranet : the quiet (r)evolution (1997) 0.08
    0.0750875 = product of:
      0.30035 = sum of:
        0.30035 = weight(_text_:java in 171) [ClassicSimilarity], result of:
          0.30035 = score(doc=171,freq=2.0), product of:
            0.48216656 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06841661 = queryNorm
            0.62291753 = fieldWeight in 171, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=171)
      0.25 = coord(1/4)
    
    Abstract
    Explains how the Intranet (in effect an Internet limited to the computer systems of a single organization) developed out of the Internet, and what its uses and advantages are. Focuses on the Intranet developed in the Glaxo Wellcome organization. Briefly discusses a number of technologies in development, e.g. Java, Real audio, 3D and VRML, and summarizes the issues involved in the successful development of the Intranet, that is, bandwidth, searching tools, security, and legal issues
  13. Wang, J.; Reid, E.O.F.: Developing WWW information systems on the Internet (1996) 0.08
    0.0750875 = product of:
      0.30035 = sum of:
        0.30035 = weight(_text_:java in 604) [ClassicSimilarity], result of:
          0.30035 = score(doc=604,freq=2.0), product of:
            0.48216656 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06841661 = queryNorm
            0.62291753 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=604)
      0.25 = coord(1/4)
    
    Abstract
    Gives an overview of Web information system development. Discusses some basic concepts and technologies such as HTML, HTML FORM, CGI and Java, which are associated with developing WWW information systems. Further discusses the design and implementation of Virtual Travel Mart, a Web based end user oriented travel information system. Finally, addresses some issues in developing WWW information systems
  14. Ameritech releases Dynix WebPac on NT (1998) 0.08
    0.0750875 = product of:
      0.30035 = sum of:
        0.30035 = weight(_text_:java in 2782) [ClassicSimilarity], result of:
          0.30035 = score(doc=2782,freq=2.0), product of:
            0.48216656 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06841661 = queryNorm
            0.62291753 = fieldWeight in 2782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=2782)
      0.25 = coord(1/4)
    
    Abstract
    Ameritech Library Services has released Dynix WebPac on NT, which provides access to a Dynix catalogue from any Java compatible Web browser. Users can place holds, cancel and postpone holds, view and renew items on loan and sort and limit search results from the Web. Describes some of the other features of Dynix WebPac
  15. OCLC completes SiteSearch 4.0 field test (1998) 0.08
    0.0750875 = product of:
      0.30035 = sum of:
        0.30035 = weight(_text_:java in 3078) [ClassicSimilarity], result of:
          0.30035 = score(doc=3078,freq=2.0), product of:
            0.48216656 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06841661 = queryNorm
            0.62291753 = fieldWeight in 3078, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=3078)
      0.25 = coord(1/4)
    
    Abstract
    OCLC has announced that 6 library systems have completed field tests of the OCLC SiteSearch 4.0 suite of software, paving the way for its release. Traces the beta site testing programme from its beginning in November 1997 and notes that OCLC SiteServer components have been written in the Java programming language, which will increase libraries' ability to extend the functionality of the SiteSearch software to create new features specific to local needs
  16. Robinson, D.A.; Lester, C.R.; Hamilton, N.M.: Delivering computer assisted learning across the WWW (1998) 0.08
    0.0750875 = product of:
      0.30035 = sum of:
        0.30035 = weight(_text_:java in 4618) [ClassicSimilarity], result of:
          0.30035 = score(doc=4618,freq=2.0), product of:
            0.48216656 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06841661 = queryNorm
            0.62291753 = fieldWeight in 4618, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=4618)
      0.25 = coord(1/4)
    
    Abstract
    Demonstrates a new method of providing networked computer assisted learning to avoid the pitfalls of traditional methods. This was achieved using Web pages enhanced with Java applets, MPEG video clips and Dynamic HTML
  17. Bates, C.: Web programming : building Internet applications (2000) 0.08
    0.0750875 = product of:
      0.30035 = sum of:
        0.30035 = weight(_text_:java in 130) [ClassicSimilarity], result of:
          0.30035 = score(doc=130,freq=2.0), product of:
            0.48216656 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06841661 = queryNorm
            0.62291753 = fieldWeight in 130, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=130)
      0.25 = coord(1/4)
    
    Object
    Java
  18. Zschunke, P.: Richtig googeln : Ein neues Buch hilft, alle Möglichkeiten der populären Suchmaschine zu nutzen (2003) 0.07
    0.07481418 = product of:
      0.14962836 = sum of:
        0.112631254 = weight(_text_:java in 55) [ClassicSimilarity], result of:
          0.112631254 = score(doc=55,freq=2.0), product of:
            0.48216656 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06841661 = queryNorm
            0.23359407 = fieldWeight in 55, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0234375 = fieldNorm(doc=55)
        0.036997102 = weight(_text_:und in 55) [ClassicSimilarity], result of:
          0.036997102 = score(doc=55,freq=22.0), product of:
            0.15174113 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06841661 = queryNorm
            0.24381724 = fieldWeight in 55, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0234375 = fieldNorm(doc=55)
      0.5 = coord(2/4)
    
    Content
    "Fünf Jahre nach seiner Gründung ist Google zum Herz des weltweiten Computernetzes geworden. Mit seiner Konzentration aufs Wesentliche hat die Suchmaschine alle anderen Anbieter weit zurück gelassen. Aber Google kann viel mehr, als im Web nach Texten und Bildern zu suchen. Gesammelt und aufbereitet werden auch Beiträge in Diskussionsforen (Newsgroups), aktuelle Nachrichten und andere im Netz verfügbare Informationen. Wer sich beim "Googeln" darauf beschränkt, ein einziges Wort in das Suchformular einzutippen und dann die ersten von oft mehreren hunderttausend Treffern anzuschauen, nutzt nur einen winzigen Bruchteil der Möglichkeiten. Wie man Google bis zum letzten ausreizt, haben Tara Calishain und Rael Dornfest in einem bislang nur auf Englisch veröffentlichten Buch dargestellt (Tara Calishain/Rael Dornfest: Google Hacks", www.oreilly.de, 28 Euro. Die wichtigsten Praxistipps kosten als Google Pocket Guide 12 Euro). - Suchen mit bis zu zehn Wörtern - Ihre "100 Google Hacks" beginnen mit Google-Strategien wie der Kombination mehrerer Suchbegriffe und enden mit der Aufforderung zur eigenen Nutzung der Google API ("Application Programming Interface"). Diese Schnittstelle kann zur Entwicklung von eigenen Programmen eingesetzt werden,,die auf die Google-Datenbank mit ihren mehr als drei Milliarden Einträgen zugreifen. Ein bewussteres Suchen im Internet beginnt mit der Kombination mehrerer Suchbegriffe - bis zu zehn Wörter können in das Formularfeld eingetippt werden, welche Google mit dem lo-gischen Ausdruck "und" verknüpft. Diese Standardvorgabe kann mit einem dazwischen eingefügten "or" zu einer Oder-Verknüpfung geändert werden. Soll ein bestimmter Begriff nicht auftauchen, wird ein Minuszeichen davor gesetzt. Auf diese Weise können bei einer Suche etwa alle Treffer ausgefiltert werden, die vom Online-Buchhändler Amazon kommen. Weiter gehende Syntax-Anweisungen helfen ebenfalls dabei, die Suche gezielt einzugrenzen: Die vorangestellte Anweisung "intitle:" etwa (ohne Anführungszeichen einzugeben) beschränkt die Suche auf all diejenigen Web-Seiten, die den direkt danach folgenden Begriff in ihrem Titel aufführen. Die Computer von Google bewältigen täglich mehr als 200 Millionen Anfragen. Die Antworten kommen aus einer Datenbank, die mehr als drei Milliarden Einträge enthält und regelmäßig aktualisiert wird. Dazu Werden SoftwareRoboter eingesetzt, so genannte "Search-Bots", die sich die Hyperlinks auf Web-Seiten entlang hangeln und für jedes Web-Dokument einen Index zur Volltextsuche anlegen. Die Einnahmen des 1998 von Larry Page und Sergey Brin gegründeten Unternehmens stammen zumeist von Internet-Portalen, welche die GoogleSuchtechnik für ihre eigenen Dienste übernehmen. Eine zwei Einnahmequelle ist die Werbung von Unternehmen, die für eine optisch hervorgehobene Platzierung in den GoogleTrefferlisten zahlen. Das Unternehmen mit Sitz im kalifornischen Mountain View beschäftigt rund 800 Mitarbeiter. Der Name Google leitet sich ab von dem Kunstwort "Googol", mit dem der amerikanische Mathematiker Edward Kasner die unvorstellbar große Zahl 10 hoch 100 (eine 1 mit hundert Nullen) bezeichnet hat. Kommerzielle Internet-Anbieter sind sehr, daran interessiert, auf den vordersten Plätzen einer Google-Trefferliste zu erscheinen.
    Since Google, unlike Yahoo or Lycos, never wanted to become an Internet portal designed to attract as many visits as possible, searching its database is also possible outside the Google web site. For this there is, first of all, the "Google Toolbar" for Internet Explorer, which gives that browser its own bar for Google searches. Independent developers offer their own implementation of this tool for the Netscape/Mozilla browser as well. In addition, a Google search box can be placed on one's own web page - only four lines of HTML code are needed for that. Starting a Google search is, incidentally, also possible entirely without a browser. For this the company released its API ("Application Programming Interface") in April of last year, which can be built into one's own programs. A Google search can, for example, be started with an e-mail: the search terms are entered in the subject line of an otherwise empty e-mail, which is sent to the address google@capeclear.com. Shortly afterwards an automatic reply arrives with the first ten hits. Given the appropriate knowledge, Google queries can also be built into web services - programs that process data from the Internet. Suitable programming techniques include Perl, PHP, Python, and Java. Calishain and Dornfest even present a number of quirky sites that use such programs to produce abstract poems and other works of art."
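    As a minimal sketch of the query operators described above (terms AND-combined by default, "or" for disjunction, a leading minus to exclude a term, and "intitle:" to restrict matches to page titles): the helper below simply assembles a query string and a standard search URL and does not use the Google API mentioned in the text.

    ```python
    from urllib.parse import urlencode

    def build_google_query(terms, exclude=(), in_title=None, use_or=False):
        """Assemble a query string using the operators described in the article:
        terms are AND-combined by default, use_or switches to OR, a leading minus
        excludes a term, and intitle: restricts matches to page titles."""
        joiner = " OR " if use_or else " "
        parts = [joiner.join(terms)]
        parts += [f"-{t}" for t in exclude]
        if in_title:
            parts.append(f"intitle:{in_title}")
        return "http://www.google.com/search?" + urlencode({"q": " ".join(parts)})

    # Example: search for reviews of "Google Hacks", excluding Amazon hits.
    print(build_google_query(["google", "hacks", "review"],
                             exclude=["amazon"], in_title="review"))
    ```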
  19. Fattahi, R.; Dokhtesmati, M.; Saberi, M.: ¬A survey of internet searching skills among intermediate school students : how librarians can help (2010) 0.07
    0.07227971 = product of:
      0.14455941 = sum of:
        0.026292697 = weight(_text_:und in 673) [ClassicSimilarity], result of:
          0.026292697 = score(doc=673,freq=4.0), product of:
            0.15174113 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06841661 = queryNorm
            0.17327337 = fieldWeight in 673, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0390625 = fieldNorm(doc=673)
        0.118266724 = weight(_text_:help in 673) [ClassicSimilarity], result of:
          0.118266724 = score(doc=673,freq=4.0), product of:
            0.32182297 = queryWeight, product of:
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.06841661 = queryNorm
            0.36749 = fieldWeight in 673, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.0390625 = fieldNorm(doc=673)
      0.5 = coord(2/4)
    
    Abstract
    The advent and development of the Internet have changed students' patterns of information seeking behavior. That is also the case in Iran. The current research was carried out by interviewing and observing 20 intermediate-school girl students to assess their information seeking behavior in the web environment through a qualitative approach. Findings indicate an acceptable level of access to the Internet and extensive use of web search engines by the girl students in Tehran. However, the students' knowledge of the concept of search engines and of how they work, as well as of the methods and tools for retrieving information from electronic sources other than search engines, is poor. The study also shows that, compared to the Internet, the role of libraries and librarians is gradually diminishing in fulfilling the students' information needs. The authors recommend that school librarians provide instructional and information literacy programs to help students improve their information seeking behavior and their knowledge of the Internet.
    Source
    Information und Wissen: global, sozial und frei? Proceedings des 12. Internationalen Symposiums für Informationswissenschaft (ISI 2011) ; Hildesheim, 9. - 11. März 2011. Hrsg.: J. Griesbaum, T. Mandl u. C. Womser-Hacker
  20. Jones, S.; Hancock-Beaulieu, M.: Support strategies for interactive thesaurus navigation (1994) 0.07
    0.07155326 = product of:
      0.14310652 = sum of:
        0.02602844 = weight(_text_:und in 7733) [ClassicSimilarity], result of:
          0.02602844 = score(doc=7733,freq=2.0), product of:
            0.15174113 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06841661 = queryNorm
            0.17153187 = fieldWeight in 7733, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0546875 = fieldNorm(doc=7733)
        0.11707807 = weight(_text_:help in 7733) [ClassicSimilarity], result of:
          0.11707807 = score(doc=7733,freq=2.0), product of:
            0.32182297 = queryWeight, product of:
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.06841661 = queryNorm
            0.3637965 = fieldWeight in 7733, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7038717 = idf(docFreq=1093, maxDocs=44421)
              0.0546875 = fieldNorm(doc=7733)
      0.5 = coord(2/4)
    
    Abstract
    In principle, the 'knowledge' encoded in a thesaurus can be exploited in many ways to help users clarify their information needs and enhance query performance, but attempts to automate this process via AI techniques face many practical difficulties. In the short term it may be more useful to improve support for direct interactive use of thesauri. We discuss some of the issues which have arisen when building an interface for thesaurus navigation and query enhancement, drawing on logs and user feedback from ongoing small-scale experiments
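    As a hedged sketch of the kind of direct, interactive thesaurus support discussed above: collect the broader, narrower and related terms of each query term and offer them to the user as candidate expansions. The dictionary-based thesaurus structure here is illustrative only and is not taken from the paper.

    ```python
    def expand_query(terms, thesaurus):
        """For each query term, gather broader (BT), narrower (NT) and related (RT)
        thesaurus terms as interactive expansion suggestions for the user."""
        suggestions = {}
        for term in terms:
            entry = thesaurus.get(term, {})
            suggestions[term] = (entry.get("BT", []) + entry.get("NT", [])
                                 + entry.get("RT", []))
        return suggestions

    # Toy thesaurus entry (illustrative only):
    thesaurus = {"retrieval": {"BT": ["information systems"],
                               "NT": ["online searching"],
                               "RT": ["indexing"]}}
    print(expand_query(["retrieval"], thesaurus))
    ```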
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus

Languages

  • d 32
  • m 4
  • es 1
  • nl 1

Types

  • a 1654
  • m 384
  • el 163
  • s 114
  • i 21
  • n 17
  • x 17
  • r 12
  • b 9
  • p 3
  • ? 2
  • v 1
