Search (1405 results, page 2 of 71)

  • Filter: language_ss:"e"
  1. Eastman, C.M.: Overlaps in postings to thesaurus terms : a preliminary study (1988) 0.11
    0.11264332 = product of:
      0.22528663 = sum of:
        0.025039254 = weight(_text_:und in 3623) [ClassicSimilarity], result of:
          0.025039254 = score(doc=3623,freq=2.0), product of:
            0.14597435 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0658165 = queryNorm
            0.17153187 = fieldWeight in 3623, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3623)
        0.20024738 = weight(_text_:handling in 3623) [ClassicSimilarity], result of:
          0.20024738 = score(doc=3623,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.48508468 = fieldWeight in 3623, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3623)
      0.5 = coord(2/4)
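    The breakdown above is Lucene's ClassicSimilarity explain output. As a rough cross-check, the sketch below recomputes this first result's score from the displayed factors, using the ClassicSimilarity formulas tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm, and the coord factor for matched clauses; all numbers are taken from the tree above.

    ```python
    import math

    def classic_tfidf(freq, doc_freq, max_docs, query_norm, field_norm):
        """Recompute one weight(...) clause of a ClassicSimilarity explain tree."""
        tf = math.sqrt(freq)                                  # 1.4142135 for freq=2.0
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))       # 2.217899 resp. 6.272122 above
        query_weight = idf * query_norm
        field_weight = tf * idf * field_norm
        return query_weight * field_weight

    # Factors copied from the explain tree of result 1 (doc 3623).
    w_und      = classic_tfidf(2.0, 13141, 44421, 0.0658165, 0.0546875)   # ~0.0250393
    w_handling = classic_tfidf(2.0,   227, 44421, 0.0658165, 0.0546875)   # ~0.2002474
    score = (w_und + w_handling) * 0.5                                    # coord(2/4)
    print(score)   # ~0.1126, matching the displayed 0.11264332 up to float precision
    ```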
    
    Abstract
    The patterns of overlap between terms which are closely related in a thesaurus are considered. The relationships considered are parent/child, in which one term is a broader term of the other, and sibling, in which the two terms share the same broader term. The patterns of overlap observed in the MeSH thesaurus with respect to selected MEDLINE postings are examined. The implications of the overlap patterns are discussed; in particular, the impact of the overlap patterns on the potential effectiveness of a proposed algorithm for handling negation is considered.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus [conception and application of the thesaurus principle]
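    Eastman's study above concerns the overlap between the posting sets of thesaurus terms that stand in parent/child or sibling relationships. The paper's own algorithm is not reproduced here; the following is only a minimal sketch, with hypothetical MeSH-style terms and document-ID sets, of how such pairwise posting overlap could be quantified.

    ```python
    def overlap(postings_a, postings_b):
        """Fraction of the smaller posting set that is shared with the other set."""
        a, b = set(postings_a), set(postings_b)
        if not a or not b:
            return 0.0
        return len(a & b) / min(len(a), len(b))

    # Hypothetical postings (document IDs) for a broader term and two narrower terms.
    postings = {
        "Neoplasms":        {1, 2, 3, 4, 5, 6, 7, 8},   # parent (broader term)
        "Breast Neoplasms": {2, 3, 4, 9},               # child
        "Lung Neoplasms":   {4, 5, 10, 11},             # child, sibling of the above
    }

    print(overlap(postings["Neoplasms"], postings["Breast Neoplasms"]))       # parent/child overlap
    print(overlap(postings["Breast Neoplasms"], postings["Lung Neoplasms"]))  # sibling overlap
    ```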
  2. Fisher, Y.: Spinning the Web : a guide to serving information on the World Wide Web (1996) 0.11
    0.109473646 = product of:
      0.43789458 = sum of:
        0.43789458 = weight(_text_:java in 6014) [ClassicSimilarity], result of:
          0.43789458 = score(doc=6014,freq=6.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.94405925 = fieldWeight in 6014, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=6014)
      0.25 = coord(1/4)
    
    Abstract
    Most books on the Internet describe it from the user's end. This one, however, is unique in its focus on serving information on the WWW. It presents everything from the basics to advanced techniques and will thus prove invaluable to site administrators and developers. The author - an expert developer and researcher at UCSD - covers such topics as HTML 3.0, serving documents, interfaces, WWW utilities and browsers such as Netscape. Fisher also includes an introduction to programming with Java and JavaScript, as well as the complete VRML 1.0 specification
    Object
    JAVA
  3. Varela, C.A.; Agha, G.A.: What after Java? : From objects to actors (1998) 0.11
    0.109473646 = product of:
      0.43789458 = sum of:
        0.43789458 = weight(_text_:java in 4612) [ClassicSimilarity], result of:
          0.43789458 = score(doc=4612,freq=6.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.94405925 = fieldWeight in 4612, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=4612)
      0.25 = coord(1/4)
    
    Abstract
    Discusses drawbacks of the Java programming language, and proposes some potential improvements for concurrent object-oriented software development. Java's passive object model does not provide an effective means for building distributed applications, critical for the future of Web-based next-generation information systems. Suggests improvements to Java's existing mechanisms for maintaining consistency across multiple threads, sending asynchronous messages and controlling resources. Drives the discussion with examples and suggestions from work on the Actor model of computation
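    The abstract above contrasts Java's shared-state, passive-object threading with the Actor model, in which each object owns its state and communicates only through asynchronous messages. Purely as an illustration (not the authors' proposal, and in Python rather than Java), here is a minimal actor with a private mailbox:

    ```python
    import queue
    import threading
    import time

    class CounterActor:
        """Minimal actor: private state, a mailbox, asynchronous message handling."""

        def __init__(self):
            self._count = 0                      # state touched only by the actor's own thread
            self._mailbox = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, message):
            """Asynchronous send: callers never block on the actor's internal work."""
            self._mailbox.put(message)

        def _run(self):
            while True:
                message = self._mailbox.get()
                if message == "increment":
                    self._count += 1
                elif message == "report":
                    print("count =", self._count)

    actor = CounterActor()
    for _ in range(3):
        actor.send("increment")
    actor.send("report")   # prints "count = 3" once the mailbox is drained
    time.sleep(0.1)        # give the daemon thread a moment before the script exits
    ```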
    Object
    Java
  4. Cranefield, S.: Networked knowledge representation and exchange using UML and RDF (2001) 0.11
    0.109473646 = product of:
      0.43789458 = sum of:
        0.43789458 = weight(_text_:java in 6896) [ClassicSimilarity], result of:
          0.43789458 = score(doc=6896,freq=6.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.94405925 = fieldWeight in 6896, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=6896)
      0.25 = coord(1/4)
    
    Abstract
    This paper proposes the use of the Unified Modeling Language (UML) as a language for modelling ontologies for Web resources and the knowledge contained within them. To provide a mechanism for serialising and processing object diagrams representing knowledge, a pair of XSLT stylesheets has been developed to map from XML Metadata Interchange (XMI) encodings of class diagrams to corresponding RDF schemas and to Java classes representing the concepts in the ontologies. The Java code includes methods for marshalling and unmarshalling object-oriented information between in-memory data structures and RDF serialisations of that information. This provides a convenient mechanism for Java applications to share knowledge on the Web
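    Cranefield's pipeline maps XMI-encoded UML class diagrams to RDF schemas and Java classes via XSLT. The original stylesheets are not shown in this entry; as a rough illustration of the RDF side of that mapping only, the sketch below (using Python and rdflib, both assumptions not taken from the paper) emits an RDFS class and property for a simple UML class with one attribute.

    ```python
    from rdflib import Graph, Literal, Namespace, RDF, RDFS, XSD

    ONTO = Namespace("http://example.org/ontology#")   # hypothetical ontology namespace

    g = Graph()
    g.bind("onto", ONTO)

    # UML class "Researcher" with a string attribute "name", rendered as RDFS.
    g.add((ONTO.Researcher, RDF.type, RDFS.Class))
    g.add((ONTO.Researcher, RDFS.label, Literal("Researcher")))

    g.add((ONTO.name, RDF.type, RDF.Property))
    g.add((ONTO.name, RDFS.domain, ONTO.Researcher))
    g.add((ONTO.name, RDFS.range, XSD.string))

    print(g.serialize(format="turtle"))
    ```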
  5. Hickey, T.B.: Guidon Web : Applying Java to Scholarly Electronic Journals (2001) 0.11
    0.108350806 = product of:
      0.43340322 = sum of:
        0.43340322 = weight(_text_:java in 2035) [ClassicSimilarity], result of:
          0.43340322 = score(doc=2035,freq=2.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.9343763 = fieldWeight in 2035, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.09375 = fieldNorm(doc=2035)
      0.25 = coord(1/4)
    
  6. Shafer, K.E.; Surface, T.R.: Java Server Side Interpreter and OCLC SiteSearch (2001) 0.11
    0.108350806 = product of:
      0.43340322 = sum of:
        0.43340322 = weight(_text_:java in 2050) [ClassicSimilarity], result of:
          0.43340322 = score(doc=2050,freq=2.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.9343763 = fieldWeight in 2050, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.09375 = fieldNorm(doc=2050)
      0.25 = coord(1/4)
    
  7. Ovid announces strategic partnerships : Java-based interface (1997) 0.10
    0.10215412 = product of:
      0.40861648 = sum of:
        0.40861648 = weight(_text_:java in 397) [ClassicSimilarity], result of:
          0.40861648 = score(doc=397,freq=4.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.8809384 = fieldWeight in 397, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=397)
      0.25 = coord(1/4)
    
    Abstract
    Reports agreements between Ovid Technologies and 5 publishing companies (Blackwell Science, Lippincott-Raven, Munksgaard, Plenum, Williams and Wilkins) to secure the rights to the full text of over 400 leading periodicals. Once the periodicals are loaded on Ovid, they will be linked with other full text electronic periodicals and with bibliographic databases to produce a web of related documents and threaded information. Concludes with notes on the Ovid Java Client graphic user interface, which offers increased speeds of searching the WWW
  8. Koenig, M.E.D.: ¬The information controllability explosion (1982) 0.10
    0.10114019 = product of:
      0.40456077 = sum of:
        0.40456077 = weight(_text_:handling in 601) [ClassicSimilarity], result of:
          0.40456077 = score(doc=601,freq=4.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.98001903 = fieldWeight in 601, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.078125 = fieldNorm(doc=601)
      0.25 = coord(1/4)
    
    Abstract
    Information handling technology is an explosive growth area. Librarians need not run faster just to keep up with the information explosion any more, but must now run faster to keep up with the information controllability explosion. If they don't, their place in the information-handling world will be usurped by others who do realise what a growth area it is
  9. Schwarz, C.: Content based text handling (1990) 0.10
    0.09909675 = product of:
      0.396387 = sum of:
        0.396387 = weight(_text_:handling in 5247) [ClassicSimilarity], result of:
          0.396387 = score(doc=5247,freq=6.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.96021867 = fieldWeight in 5247, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=5247)
      0.25 = coord(1/4)
    
    Abstract
    Whereas up to now document analysis was mainly concerned with the handling of formal properties of documents (scanning, editing), AI (artificial intelligence) techniques in the field of Natural Language Processing have shown the possibility of "content based text handling", i.e., a content analysis for textual documents. Research and development in this field at the Siemens Corporate Research Laboratories are described in this article.
  10. Fjällbrant, N.: EDUCATE: a user education program for information retrieval and handling (1995) 0.10
    0.09909675 = product of:
      0.396387 = sum of:
        0.396387 = weight(_text_:handling in 5875) [ClassicSimilarity], result of:
          0.396387 = score(doc=5875,freq=6.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.96021867 = fieldWeight in 5875, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=5875)
      0.25 = coord(1/4)
    
    Abstract
    Describes the EDUCATE (End User Courses in Information Access through Communication Technology) project for end user education in information access, retrieval and handling, a 3 year CEC Libraries Programme Project started in Feb 94. Examines the need for education and training in information retrieval and handling, presents the course design, and gives the goals for the project. Discusses the use of networks in connection with EDUCATE, and the tools and interfaces used. Describes the ways in which the program can be used for a variety of users
  11. Tennant, R.: Library catalogs : the wrong solution (2003) 0.10
    0.09708555 = product of:
      0.1941711 = sum of:
        0.108350806 = weight(_text_:java in 2558) [ClassicSimilarity], result of:
          0.108350806 = score(doc=2558,freq=2.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.23359407 = fieldWeight in 2558, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0234375 = fieldNorm(doc=2558)
        0.0858203 = weight(_text_:handling in 2558) [ClassicSimilarity], result of:
          0.0858203 = score(doc=2558,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.20789343 = fieldWeight in 2558, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0234375 = fieldNorm(doc=2558)
      0.5 = coord(2/4)
    
    Content
    "MOST INTEGRATED library systems, as they are currently configured and used, should be removed from public view. Before I say why, let me be clean that I think the integrated library system serves a very important, albeit limited, role. An integrated library system should serve as a key piece of the infrastructure of a library, handling such tasks as ma terials acquisition, cataloging (including holdings, of course), and circulation. The integrated library system should be a complete and accurate recording of a local library's holdings. It should not be presented to users as the primary system for locating information. It fails badly at that important job. - Lack of content- The central problem of almost any library catalog system is that it typically includes only information about the books and journals held by a parficular library. Most do not provide access to joumal article indexes, web search engines, or even selective web directories like the Librarians' Index to the Internet. If they do offen such access, it is only via links to these services. The library catalog is far from onestop shopping for information. Although we acknowledge that fact to each other, we still treat it as if it were the best place in the universe to begin a search. Most of us give the catalog a place of great prominente an our web pages. But Information for each book is limited to the author, title, and a few subject headings. Seldom can book reviews, jacket summaries, recommendations, or tables of contents be found-or anything at all to help users determine if they want the material. - Lack of coverage - Most catalogs do not allow patrons to discover even all the books that are available to them. If you're lucky, your catalog may cover the collections of those libraries with which you have close ties-such as a regional network. But that leaves out all those items that could be requested via interlibrary loan. As Steve Coffman pointed out in his "Building Earth's Largest Library" article, we must show our users the universe that is open to them, highlight the items most accessible, and provide an estimate of how long it would take to obtain other items. - Inability to increase coverage - Despite some well-meaning attempts to smash everything of interest into the library catalog, the fact remains that most integrated library systems expect MARC records and MARC records only. This means that whatever we want to put into the catalog must be described using MARC and AACR2 (see "Marc Must Die," LJ 10/15/02, p. 26ff.). This is a barrier to dramatically increasing the scope of a catalog system, even if we decided to do it. How would you, for example, use the Open Archives Initiative Harvesting Protocol to crawl the bibliographic records of remote repositories and make them searchable within your library catalog? It can't be dope, and it shouldn't. The library catalog should be a record of a given library's holdings. Period.
    - User interface hostility - Recently I used the library catalogs of two public libraries, new products from two major library vendors. A link on one catalog said "Knowledge Portal," whatever that was supposed to mean. Clicking on it brought you to two choices: Z39.50 Bibliographic Sites and the World Wide Web. No public library user will have the faintest clue what Z39.50 is. The other catalog launched a Java applet that before long froze my web browser so badly I was forced to shut the program down. Pick a popular book and pretend you are a library patron. Choose three to five libraries at random from the lib-web-cats site (pick catalogs that are not using your system) and attempt to find your book. Try as much as possible to see the system through the eyes of your patrons - a teenager, a retiree, or an older faculty member. You may not always like what you see. Now go back to your own system and try the same thing. - What should the public see? - Our users deserve an information system that helps them find all different kinds of resources - books, articles, web pages, working papers in institutional repositories - and gives them the tools to focus in on what they want. This is not, and should not be, the library catalog. It must communicate with the catalog, but it will also need to interface with other information systems, such as vendor databases and web search engines. What will such a tool look like? We are seeing the beginnings of such a tool in the current offerings of cross-database search tools from a few vendors (see "Cross-Database Search," LJ 10/15/01, p. 29ff). We are in the early stages of developing the kind of robust, user-friendly tool that will be required before we can pull our catalogs from public view. Meanwhile, we can begin by making what we have easier to understand and use."
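    Tennant asks rhetorically how one would use the Open Archives Initiative harvesting protocol to pull bibliographic records from remote repositories into a catalog. For readers unfamiliar with the protocol, this is a minimal OAI-PMH ListRecords request; the repository URL is a placeholder, and error and resumptionToken handling are omitted.

    ```python
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "https://repository.example.org/oai"   # placeholder OAI-PMH endpoint
    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    with urllib.request.urlopen(BASE_URL + "?" + urllib.parse.urlencode(params)) as response:
        tree = ET.parse(response)

    # Print the Dublin Core title of each harvested record.
    for record in tree.iter(OAI + "record"):
        title = record.find(".//" + DC + "title")
        print(title.text if title is not None else "(no title)")
    ```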
  12. Hawk, J.: OCLC SiteSearch (1998) 0.09
    0.089384854 = product of:
      0.35753942 = sum of:
        0.35753942 = weight(_text_:java in 3079) [ClassicSimilarity], result of:
          0.35753942 = score(doc=3079,freq=4.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.7708211 = fieldWeight in 3079, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3079)
      0.25 = coord(1/4)
    
    Abstract
    Feature on OCLC's SiteSearch suite of software, first introduced in 1992, and how it is helping over 250 libraries integrate and manage their electronic library collections. Describes the new features of version 4.0, released in Apr 1997, which include a new interface, a Java-based architecture, and an online documentation and training site. Gives an account of how Java is helping the Georgia Library Learning Online (GALILEO) project to keep pace on the WWW; the use of SiteSearch by libraries to customize their interface to electronic resources; and gives details of Project Athena (Assessing Technological Horizons to Educate the Nashville Area), which is using OCLC SiteSearch to allow area library users to search the holdings of public and university libraries simultaneously
  13. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.09
    0.089384854 = product of:
      0.35753942 = sum of:
        0.35753942 = weight(_text_:java in 2673) [ClassicSimilarity], result of:
          0.35753942 = score(doc=2673,freq=4.0), product of:
            0.46384227 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0658165 = queryNorm
            0.7708211 = fieldWeight in 2673, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=2673)
      0.25 = coord(1/4)
    
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib
  14. Rader, H.B.: User education and information literacy for the next decade : an international perspective (1995) 0.09
    0.08670966 = product of:
      0.34683865 = sum of:
        0.34683865 = weight(_text_:handling in 5416) [ClassicSimilarity], result of:
          0.34683865 = score(doc=5416,freq=6.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.84019136 = fieldWeight in 5416, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0546875 = fieldNorm(doc=5416)
      0.25 = coord(1/4)
    
    Abstract
    In the information age, marked by the global highways and instant information handling and sharing worldwide, all citizens must become knowledgeable about, and efficient in, handling information. People need training in how to organize, evaluate, and analyze the enormous array of information now available in both print and electronic formats. Information skills need to be taught and developed on all levels, from elementary schools through universities. Librarians worldwide are uniquely qualified through education, training, and experience to provide people with the necessary information-handling skills on all levels. Using available data regarding information literacy programs on the international level, Rader proposes a course of action for the next decade
  15. Lukasiewicz, T.: Uncertainty reasoning for the Semantic Web (2017) 0.09
    0.08670966 = product of:
      0.34683865 = sum of:
        0.34683865 = weight(_text_:handling in 4939) [ClassicSimilarity], result of:
          0.34683865 = score(doc=4939,freq=6.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.84019136 = fieldWeight in 4939, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0546875 = fieldNorm(doc=4939)
      0.25 = coord(1/4)
    
    Abstract
    The Semantic Web has attracted much attention, both from academia and industry. An important role in research towards the Semantic Web is played by formalisms and technologies for handling uncertainty and/or vagueness. In this paper, I first provide some motivating examples for handling uncertainty and/or vagueness in the Semantic Web. I then give an overview of some of my own formalisms for handling uncertainty and/or vagueness in the Semantic Web.
  16. Robinson, B.: Electronic document handling using SGML (1994) 0.09
    0.0858203 = product of:
      0.3432812 = sum of:
        0.3432812 = weight(_text_:handling in 1039) [ClassicSimilarity], result of:
          0.3432812 = score(doc=1039,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.8315737 = fieldWeight in 1039, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.09375 = fieldNorm(doc=1039)
      0.25 = coord(1/4)
    
  17. Robinson, B.: Electronic document handling using SGML : hypertext interchange and SGML (1994) 0.09
    0.0858203 = product of:
      0.3432812 = sum of:
        0.3432812 = weight(_text_:handling in 1040) [ClassicSimilarity], result of:
          0.3432812 = score(doc=1040,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.8315737 = fieldWeight in 1040, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.09375 = fieldNorm(doc=1040)
      0.25 = coord(1/4)
    
  18. Barberá, J.: ¬The Intranet : a new concept for corporate information handling (1996) 0.09
    0.0858203 = product of:
      0.3432812 = sum of:
        0.3432812 = weight(_text_:handling in 105) [ClassicSimilarity], result of:
          0.3432812 = score(doc=105,freq=2.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.8315737 = fieldWeight in 105, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.09375 = fieldNorm(doc=105)
      0.25 = coord(1/4)
    
  19. Liang, Z.; Mao, J.; Li, G.: Bias against scientific novelty : a prepublication perspective (2023) 0.09
    0.0858203 = product of:
      0.3432812 = sum of:
        0.3432812 = weight(_text_:handling in 1846) [ClassicSimilarity], result of:
          0.3432812 = score(doc=1846,freq=8.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.8315737 = fieldWeight in 1846, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=1846)
      0.25 = coord(1/4)
    
    Abstract
    Novel ideas often experience resistance from incumbent forces. While evidence of the bias against novelty has been widely identified in science, there is still a lack of large-scale quantitative work to study this problem occurring in the prepublication process of manuscripts. This paper examines the association between manuscript novelty and handling time of publication based on 778,345 articles in 1,159 journals indexed by PubMed. Measuring novelty as the extent to which manuscripts disrupt existing knowledge, we found systematic evidence that higher novelty is associated with longer handling time. Matching and fixed-effect models were adopted to confirm the statistical significance of this pattern. Moreover, submissions from prestigious authors and institutions have the advantage of shorter handling time, but this advantage diminishes as manuscript novelty increases. In addition, we found that longer handling time is negatively related to the impact of manuscripts, while the relationships between novelty and 3- and 5-year citations are U-shaped. This study expands the existing knowledge of the novelty bias by examining its existence in the prepublication process of manuscripts.
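    Liang, Mao, and Li relate manuscript novelty to handling time with matching and fixed-effect models. Their actual specification is not reproduced in this entry; as a hedged sketch of the general approach only, the snippet below fits an OLS model of log handling time on novelty with journal and year fixed effects, using hypothetical column names and an assumed input file in a pandas DataFrame.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical columns: handling_days, novelty, prestige, journal_id, year.
    df = pd.read_csv("manuscripts.csv")               # placeholder input file
    df["log_handling"] = np.log(df["handling_days"])

    # Journal and year fixed effects absorb venue- and time-specific baselines.
    model = smf.ols("log_handling ~ novelty + prestige + C(journal_id) + C(year)",
                    data=df).fit()
    print(model.summary())
    ```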
  20. Stern, B.T.: ¬The new ADONIS (1992) 0.08
    0.08091216 = product of:
      0.32364863 = sum of:
        0.32364863 = weight(_text_:handling in 3743) [ClassicSimilarity], result of:
          0.32364863 = score(doc=3743,freq=4.0), product of:
            0.4128091 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0658165 = queryNorm
            0.78401524 = fieldWeight in 3743, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=3743)
      0.25 = coord(1/4)
    
    Abstract
    Reports on the 2-year trial period of the document delivery system ADONIS, made for the pharmaceutical industry. A market survey reports the needs of the pharmaceutical industry for such a product. Its success as a CD-ROM product depends on rapid conversion from paper in less than 3 weeks and special compression techniques to limit the number of CD-ROMs produced. Discusses handling of source material, the production software, errata handling and the hardware. Considers current developments, the benefits of using ADONIS generally and those for the publishers

Languages

  • d 32
  • m 3
  • nl 1

Types

  • a 940
  • m 323
  • el 107
  • s 104
  • i 22
  • n 17
  • r 15
  • x 14
  • b 7
  • ? 1
  • v 1
