Search (4504 results, page 4 of 226)

  • year_i:[1990 TO 2000}
  1. Scepanski, J.M.: Public services in a telecommuting world (1996) 0.03
    0.034298543 = product of:
      0.13719417 = sum of:
        0.13719417 = weight(_text_:having in 5119) [ClassicSimilarity], result of:
          0.13719417 = score(doc=5119,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.39649835 = fieldWeight in 5119, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.046875 = fieldNorm(doc=5119)
      0.25 = coord(1/4)
    
    Abstract
    Notes the challenges facing public libraries in the provision of library services to users who are becoming increasingly involved in telecommuting and are therefore tending to demand the delivery of information services (including interloans and document delivery) in some kind of telecommuted form. Trends include the increasing bias towards the acquisition of digital materials and information resources at the expense of traditional formats, and the use of facsimile transmission to fax materials from the library to the user's desk at the place of work. Notes the specific example of Ohio State University Library which enables users to call and learn if a particular item is held by the library and have the option of having it paged to them electronically, held for pickup, or mailed to a campus address. Discusses the steps necessary to evaluate the cost benefits of such services
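Every score in this listing is the same ClassicSimilarity product of tf, idf, norms and a coordination factor. As a cross-check, here is a minimal sketch that recomputes the explain tree above from its leaf values (the function name is ours; the formula is standard Lucene TF-IDF):

```python
import math

def classic_score(freq, doc_freq, max_docs, query_norm, field_norm, coord):
    """Recompute one leaf-to-root path of a ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                              # 1.4142135 for freq=2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 5.981156 for docFreq=304
    query_weight = idf * query_norm                   # 0.34601447
    field_weight = tf * idf * field_norm              # 0.39649835 (fieldWeight)
    return coord * query_weight * field_weight        # coord(1/4) = 0.25

score = classic_score(freq=2.0, doc_freq=304, max_docs=44421,
                      query_norm=0.05785077, field_norm=0.046875, coord=0.25)
print(f"{score:.6f}")  # ≈ 0.034299, the value shown for each top 'having' match
```

The items further down the page that score 0.02858212 differ only in their fieldNorm (0.0390625 instead of 0.046875); every other factor is identical.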
  2. El-Ramly, N.; Peterson, R.E.; Volonino, L.: Top ten Web sites using search engines : the case of the desalination industry (1996) 0.03
    0.034298543 = product of:
      0.13719417 = sum of:
        0.13719417 = weight(_text_:having in 1945) [ClassicSimilarity], result of:
          0.13719417 = score(doc=1945,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.39649835 = fieldWeight in 1945, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.046875 = fieldNorm(doc=1945)
      0.25 = coord(1/4)
    
    Abstract
    The desalination industry involves the desalting of sea or brackish water and achieves the purpose of increasing the world's effective water supply. There are approximately 4,000 desalination Web sites. The six major Internet search engines were used to determine, according to each of the six, the top twenty sites for desalination. Each site was visited and the 120 gross returns were pared down to the final ten - the 'Top Ten'. The Top Ten were then analyzed to determine what it was that made the sites useful and informative. The major attributes were: a) currency (up-to-date); b) search site capability; c) access to articles on desalination; d) newsletters; e) databases; f) product information; g) online conferencing; h) valuable links to other sites; i) communication links; j) site maps; and k) case studies. Reasons for having a Web site and the current status and prospects for Internet commerce are discussed
  3. Nieuwenhuysen, P.; Vanouplines, P.: Document plus program hybrids on the Internet and their impact on information transfer (1998) 0.03
    0.034298543 = product of:
      0.13719417 = sum of:
        0.13719417 = weight(_text_:having in 2893) [ClassicSimilarity], result of:
          0.13719417 = score(doc=2893,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.39649835 = fieldWeight in 2893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.046875 = fieldNorm(doc=2893)
      0.25 = coord(1/4)
    
    Abstract
    Examines some of the advanced tools, techniques, methods and standards related to the Internet and WWW which consist of hybrids of documents and software, called 'document program hybrids'. Early Internet systems were based on having documents on one side and software on the other, neatly separated, apart from one another and without much interaction, so that the static document can also exist without computers and networks. Document program hybrids blur this classical distinction and all components are integrated, interwoven and exist in synergy with each other. Illustrates the techniques with particular reference to practical examples, including: data collections and dedicated software; advanced HTML features on the WWW; multimedia viewer and plug-in software for Internet and WWW browsers; VRML; interaction through a Web server with other servers and with instruments; adaptive hypertext provided by the server; 'webbots' or 'knowbots' or 'searchbots' or 'metasearch engines' or intelligent software agents; Sun's Java; Microsoft's ActiveX; program scripts for HTML and Web browsers; cookies; and Internet push technology with Webcasting channels
  4. Newton, R.; Maclennan, A.; Allison, J.D.C.: Public libraries on the Internet (1998) 0.03
    0.034298543 = product of:
      0.13719417 = sum of:
        0.13719417 = weight(_text_:having in 3174) [ClassicSimilarity], result of:
          0.13719417 = score(doc=3174,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.39649835 = fieldWeight in 3174, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.046875 = fieldNorm(doc=3174)
      0.25 = coord(1/4)
    
    Abstract
    Reports the results of a questionnaire survey conducted in Autumn 1996 which sought to establish the situation regarding Internet provision in Scottish public libraries, and identify key issues likely to affect the further development of such provision. Librarians were asked what they perceived to be the main benefits (if any) from providing Internet access, and how they envisaged future trends. Examines reasons why Internet access should be considered an important aspect of public library provision. Of 25 responding libraries, 14 were currently making use of the Internet, and 11 others envisaged connection within 1-3 years. However, the overall picture is of a relatively small number of libraries which are extremely active, with the majority only having a very basic level of activity. Reference work was by far the most common Internet application, but there was also significant use for educational purposes. Other applications noted were communications, community information and publicity, and recreation
  5. Ucoluk, G.; Toroslu, I.H.: ¬A genetic algorithm approach for verification of the syllable-based text compression technique (1997) 0.03
    0.034298543 = product of:
      0.13719417 = sum of:
        0.13719417 = weight(_text_:having in 3601) [ClassicSimilarity], result of:
          0.13719417 = score(doc=3601,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.39649835 = fieldWeight in 3601, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.046875 = fieldNorm(doc=3601)
      0.25 = coord(1/4)
    
    Abstract
    It is possible to decompose any text into strings that have lengths greater than 1 and occur frequently, provided that an easy mechanism exists for it. Having in one hand the set of such frequently occurring strings and in the other the set of letters and symbols, it is possible to compress the text using Huffman coding over an alphabet which is a subset of the union of these 2 sets. Observations reveal that, in most cases, the maximal inclusion of the strings leads to an optimal length of the compressed text. However, the verification of this prediction requires the consideration of all subsets in order to find the one that leads to the best compression. Describes a genetic algorithm devised and used for this process and concludes that Turkish texts, because of the agglutinative nature of the language and the highly regular syllable formation, provide a useful test bed for this technique
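The compression scheme the abstract describes (Huffman coding over an alphabet mixing frequent multi-character strings with single letters) can be sketched as follows. The frequent-string set here is an invented toy example, and the genetic-algorithm subset search is replaced by a fixed choice for brevity:

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a prefix-free Huffman code from a symbol -> frequency mapping."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + bits for s, bits in c1.items()}
        merged.update({s: "1" + bits for s, bits in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def tokenize(text, strings):
    """Greedy longest-match split over the chosen frequent strings plus single chars."""
    by_len = sorted(strings, key=len, reverse=True)
    out, i = [], 0
    while i < len(text):
        token = next((s for s in by_len if text.startswith(s, i)), text[i])
        out.append(token)
        i += len(token)
    return out

text = "banana bandana banana"
tokens = tokenize(text, {"ban", "ana"})        # a toy 'frequent strings' subset
code = huffman_code(Counter(tokens))
compressed_bits = sum(len(code[t]) for t in tokens)
# 9 tokens at 2 bits each here, versus 21 characters * 8 bits uncompressed
```

Trying every such subset, as the abstract notes, is what makes exhaustive verification expensive and motivates the genetic-algorithm search.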
  6. Cousins, S.A.: COPAC: the new national OPAC service based on the CURL database (1997) 0.03
    0.034298543 = product of:
      0.13719417 = sum of:
        0.13719417 = weight(_text_:having in 3834) [ClassicSimilarity], result of:
          0.13719417 = score(doc=3834,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.39649835 = fieldWeight in 3834, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.046875 = fieldNorm(doc=3834)
      0.25 = coord(1/4)
    
    Abstract
    Presents a brief description of the operation of COPAC, the new OPAC providing a unified interface to the consolidated database and online catalogues of the UK's Consortium of University Research Libraries (CURL). COPAC is seen as the partial realization of the aims of earlier projects, such as the UK Libraries Database System (UKLDS). Provides a brief overview of the background to the CURL OPAC and the COPAC project, describing the main content of the COPAC database. Considers the effect of having multiple contributors to the database and the consequent need for deduplication and record consolidation to cope with the inevitable record duplication. COPAC is accessible via a text interface and a WWW interface. Discusses each interface using example screens to illustrate the search process
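Record consolidation of the kind described usually starts from a normalised match key computed per contributed record. A hypothetical sketch (the field names and normalisation rule are invented for illustration, not COPAC's actual algorithm):

```python
import re
from collections import defaultdict

def match_key(record):
    """Crude match key: lower-cased title/author with punctuation stripped, plus year."""
    norm = lambda s: re.sub(r"[^a-z0-9 ]", "", s.lower()).strip()
    return (norm(record["title"]), norm(record["author"]), record["year"])

def consolidate(records):
    """Collapse duplicate contributed records, accumulating the holding libraries."""
    groups = defaultdict(list)
    for rec in records:
        groups[match_key(rec)].append(rec["library"])
    return dict(groups)

recs = [
    {"title": "Public Libraries.", "author": "Smith, J.", "year": 1996, "library": "Glasgow"},
    {"title": "public libraries",  "author": "Smith, J",  "year": 1996, "library": "Leeds"},
]
groups = consolidate(recs)   # both variants collapse to one consolidated record
```

A real union catalogue would match on richer evidence (ISBN, edition, pagination) and merge field-by-field, but the key-then-group shape is the same.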
  7. Simkin, E.: Professionalism (1997) 0.03
    0.034298543 = product of:
      0.13719417 = sum of:
        0.13719417 = weight(_text_:having in 4642) [ClassicSimilarity], result of:
          0.13719417 = score(doc=4642,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.39649835 = fieldWeight in 4642, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.046875 = fieldNorm(doc=4642)
      0.25 = coord(1/4)
    
    Abstract
    Contrasts the practice of indexing as a technician with that of the professional indexer. A technician knows and can apply some rules, but the professional can, if necessary, devise a new set of rules or otherwise design the appropriate solution to an indexing problem, having a knowledge of the theory on which such decisions are made. Discusses suitable training for a professional indexer, which would include study of the processes of knowledge and of the development of the technology available now or in the future to support these processes. Argues that the profession of indexing has a fundamental importance to human endeavour since indexers are experts in the organization of knowledge. Laments the fact that indexers were not ready to meet the challenges of the Internet when it arrived. Suggests ways in which the Australian Society of Indexers can help to raise the status of the profession
  8. Crawford, J.C.; Thom, L.C.; Powles, J.A.: ¬A survey of subject access to academic library catalogues in Great Britain (1993) 0.03
    0.034298543 = product of:
      0.13719417 = sum of:
        0.13719417 = weight(_text_:having in 6358) [ClassicSimilarity], result of:
          0.13719417 = score(doc=6358,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.39649835 = fieldWeight in 6358, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.046875 = fieldNorm(doc=6358)
      0.25 = coord(1/4)
    
    Abstract
    Reports results of a questionnaire survey of UK academic libraries to determine the level of use of online public access catalogues (OPACs) and the development of inhouse subject indexes. 75 respondents reported having commercial systems and 7 reported inhouse systems. Data includes: named systems in use and numbers of libraries using each system; percentages of bibliographic records in machine readable format; types of materials; and record formats (UKMARC, LCMARC etc.) Reports the most common access points for searching the OPACs (author, keyword), methods of generating terms to be used for subject searching, subject heading sources (LCSH, MeSH, PRECIS) and classification schemes (Dewey (DDC), UDC). Results show that all universities and polytechnics now have OPACs and only the smaller colleges do not. OPACs are moving towards comprehensive coverage of academic library stocks, with the MARC record the most popular format. The 3 main subject access strategies involve: LCSH, inhouse strategies, and strategies not based on controlled terminology. Draws heavily on the results of an earlier survey by Fran Slack (Vine 72(1988) Nov., S.8.12)
  9. Weiss, S.C.: ¬The seamless, Web-based library : a meta site for the 21st century (1999) 0.03
    0.034298543 = product of:
      0.13719417 = sum of:
        0.13719417 = weight(_text_:having in 542) [ClassicSimilarity], result of:
          0.13719417 = score(doc=542,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.39649835 = fieldWeight in 542, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.046875 = fieldNorm(doc=542)
      0.25 = coord(1/4)
    
    Abstract
    Taking a step beyond Meta search engines, which require Web site evaluation skills and a knowledge of how to construct effective search statements, we encounter the concept of a seamless, Web-based library. These are electronic libraries created by information professionals, Meta sites for the 21st Century. Here is a place where average people with average Internet skills can find significant Web sites arranged under a hierarchy of subject categories. Having observed client behavior in a university library setting for a quarter of a century, it is apparent that the extent to which information is used has always been determined by content applicable to user needs, an easy-to-understand design, and high visibility. These same elements have determined the extent to which Internet Quick Reference (IQR), a seamless, Web-based library at cc.usu.edu/-stewei/hot.htm, has been used
  10. Dixon, P.; Banwell, L.: School governors and effective decision making (1999) 0.03
    0.034298543 = product of:
      0.13719417 = sum of:
        0.13719417 = weight(_text_:having in 1284) [ClassicSimilarity], result of:
          0.13719417 = score(doc=1284,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.39649835 = fieldWeight in 1284, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.046875 = fieldNorm(doc=1284)
      0.25 = coord(1/4)
    
    Abstract
    GINN has been a two-year project investigating school Governors' INformation Needs, undertaking an in-depth assessment of information needed and used by school governors as seen from the individual governor's point of view. It investigated the nature of information flow within school governing bodies and sought to establish the role of information and nature of information needed in relation to the effective functioning of the school governing body. There are over 350,000 school governors in the state supported sector of education in England and Wales, members of governing bodies having corporate responsibility for overseeing the management of schools. Layder's research map with "self" at the heart of and part of "situation", "setting" and "context" provides a framework for the initial identification of the interrelated environmental factors, both at the macro and micro levels. (Layder, 1993)
  11. Cawkell, A.E.: Encyclopaedic dictionary of information technology and systems (1993) 0.03
    0.034298543 = product of:
      0.13719417 = sum of:
        0.13719417 = weight(_text_:having in 1762) [ClassicSimilarity], result of:
          0.13719417 = score(doc=1762,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.39649835 = fieldWeight in 1762, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.046875 = fieldNorm(doc=1762)
      0.25 = coord(1/4)
    
    Abstract
    The entries, in alphabetical order, cover people, organizations, and technology, including terms and their concepts as they had become known up to April 1993. The dictionary contains very short and very long entries, some of them repeated in different combinations, such as 'Artificial intelligence (AI)', which is combined in separate entries with - expert systems - history - inference - information retrieval applications - knowledge bases - natural language - shells and chaining - software and hardware trends - vision systems. There is no entry for 'index'; however, there are many for 'indexing' (7 columns for the concept as such) and combinations, such as Indexing - automatic, - coordinate indexing, - online systems, - system performance, exhaustivity and specificity, - system performance, recall and precision, - thesauri. Some 5,000 terms, including abbreviations, are related to their concepts and are explained and described
  12. Essers, J.; Schreinemakers, J.: ¬The conceptions of knowledge and information in knowledge management (1996) 0.03
    0.034298543 = product of:
      0.13719417 = sum of:
        0.13719417 = weight(_text_:having in 1909) [ClassicSimilarity], result of:
          0.13719417 = score(doc=1909,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.39649835 = fieldWeight in 1909, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.046875 = fieldNorm(doc=1909)
      0.25 = coord(1/4)
    
    Abstract
    The emergence of Knowledge Management (KM) over the last decade has triggered the question how or even whether this new management discipline can be distinguished from the established field of Information Management (IM). In this paper we critically examine this demarcation issue from two angles. First we will investigate to what extent the difference between IM and KM can be anchored in a conceptual distinction between their respective objects: information and knowledge. After having shown that this widely adopted strategy promises little success, we will shift our attention to an examination of the fundamental objectives or guiding principles behind both disciplines. Seen from this angle we argue that KM, in order to foster organizational learning, innovation and strategic flexibility, should adopt a postmodern epistemological perspective that is geared to the management of incommensurability and difference within and between organizations.
  13. Katz, W.A.: Introduction to reference work : Vol.1: Basic information sources; vol.2: Reference services and reference processes (1992) 0.03
    0.03251226 = product of:
      0.06502452 = sum of:
        0.007860276 = weight(_text_:und in 4364) [ClassicSimilarity], result of:
          0.007860276 = score(doc=4364,freq=2.0), product of:
            0.12830718 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.05785077 = queryNorm
            0.061261386 = fieldWeight in 4364, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.01953125 = fieldNorm(doc=4364)
        0.05716424 = weight(_text_:having in 4364) [ClassicSimilarity], result of:
          0.05716424 = score(doc=4364,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.16520765 = fieldWeight in 4364, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.01953125 = fieldNorm(doc=4364)
      0.5 = coord(2/4)
    
    Abstract
    A standard work reflecting the Anglo-American understanding of 'reference work', which has served as a model for much of the discussion in the German literature and in German libraries, but which to this day has found no counterpart in German library practice
    Content
    Volume 1 is divided into three parts. Part One (Chapters 1 and 2) constitutes an introduction to the reference process and automated reference services. Part Two, "Information: Control and Access," consists of Chapters 3 through 6 and covers an introduction to bibliographies, indexing, and abstracting services. Chapters 7 through 12 are in Part Three, "Sources of Information," which include encyclopedias, various ready reference sources, biographical sources, dictionaries, geographical sources, and government documents. It is as pointless for students to memorize details about specific reference sources as it is necessary for them to grasp the essential areas of agreement and difference among the various forms. To this end, every effort is made to compare rather than to detail. Only basic or foundation reference works are discussed in this volume. But readers may not find all basic titles included or annotated because: (1) There is no consensus on what constitutes "basic". (2) The objective of this text is to discuss various forms, and the titles used for that purpose are those that best illustrate those forms. (3) The annotations for a specific title are duplicated over and over again in Guide to Reference Books and Guide to Reference Materials, which list the numerous subject bibliographies. In both volumes, suggested readings are found in the footnotes and at the end of each chapter. When a publication is cited in a footnote, the reference is rarely duplicated in the "Suggested Reading." For the most part, these readings are limited to publications issued since 1987. In addition to providing readers with current thinking, these more recent citations have the added bonus of making it easier for the student to locate the readings. A number of the suggested reading items will be found in Reference and Information Sources, A Reader, 4th ed., published by Scarecrow Press, in 1991.
    It is beyond argument, of course, that all readings need not necessarily be current and that many older articles and books are as valuable today as they were when first published. Thanks to many teachers' having retained earlier editions of this text and the aforementioned Scarecrow title, it is possible to have a bibliography of previous readings. As has been done in all previous editions, the sixth edition notes prices for most of the major basic titles. This practice seems particularly useful today, since librarians must more and more be aware of budgetary constraints when selecting reference titles. CD-ROMs are listed where available. Prices are based on information either from the publisher of the original reference source or from the publisher of the CD-ROM disc. If a particular work is available online, the gross hourly rate as charged by DIALOG is given for its use. Both this rate and the book prices are current as of late 1990 and are useful in determining relative costs. Bibliographic data are based on publisher's catalogs, Books in Print, and examination of the titles. The information is applicable as of late 1990 and, like prices, is subject to change.
  14. Panyr, J.: Information retrieval techniques in rule-based expert systems (1991) 0.03
    0.02858212 = product of:
      0.11432848 = sum of:
        0.11432848 = weight(_text_:having in 3035) [ClassicSimilarity], result of:
          0.11432848 = score(doc=3035,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.3304153 = fieldWeight in 3035, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.0390625 = fieldNorm(doc=3035)
      0.25 = coord(1/4)
    
    Abstract
    In rule-based expert systems knowledge is represented in an IF-THEN form: IF <set of conditions> THEN <decision>. A limited subset of natural language - supplemented by specified relations and operators - is used to formulate the rules. Rule syntax is simple. This makes it easy to acquire knowledge through an expert and permits plausibility checks on the knowledge base without the expert having knowledge of the implementation language or details of the system. A number of steps are used to select suitable rules during the rule-matching process. It is noteworthy that rules are well-structured documents for an information retrieval system, particularly since the number of rules in a rule-based system remains manageable. In this paper it will be shown that this permits automatic processing of the rule set by methods of information retrieval (i.e. automatic indexing and automatic classification of rules, automatic thesaurus construction for the knowledge base). A knowledge base which is processed and structured in this fashion allows use of a complex application-specific search strategy and hence an efficient and effective realization of reasoning mechanisms
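The rule form and matching step described above can be sketched as a tiny forward-chaining loop. The rule content is an invented illustration, not taken from the paper:

```python
def forward_chain(facts, rules):
    """Fire IF-THEN rules until no rule adds a new decision to the fact set."""
    facts = set(facts)
    fired = True
    while fired:
        fired = False
        for conditions, decision in rules:
            if conditions <= facts and decision not in facts:
                facts.add(decision)
                fired = True
    return facts

# IF <set of conditions> THEN <decision>, as in the abstract; rules are hypothetical.
RULES = [
    ({"temperature high", "pressure rising"}, "open valve"),
    ({"open valve", "flow zero"}, "valve fault"),
]
derived = forward_chain({"temperature high", "pressure rising", "flow zero"}, RULES)
```

Each rule's condition set is itself a short, well-structured document, which is what makes the IR-style indexing and classification of rules described in the abstract possible.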
  15. Electronic publishing practice in the UK (1994) 0.03
    0.02858212 = product of:
      0.11432848 = sum of:
        0.11432848 = weight(_text_:having in 8027) [ClassicSimilarity], result of:
          0.11432848 = score(doc=8027,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.3304153 = fieldWeight in 8027, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.0390625 = fieldNorm(doc=8027)
      0.25 = coord(1/4)
    
    Abstract
    Report of a project commissioned by British Library Research and Development Department (BLRDD) from Electronic Publishing Services Ltd designed to provide factual input about electronic publishing in the UK for a working party convened by the British Library as part of a follow up to the Information 2000 exercise, completed in 1991. The working party will consider the possible impacts on the library community of the development of electronic publishing. For the purpose of this study, electronic publishing was defined as including: online services (including videotex); magnetic tape services; magnetic disk products; CD-ROM and other optical disc products; ROM cards; and electronic periodicals. The main conclusions were: that the dominant position of Reuters and other financial information services means that online information retrieval still accounts for the vast majority of electronic publishing revenues; that CD-ROM is experiencing high growth, but growth from a small base and coming later than predicted; that network publishing is still in the experimental stage and almost entirely funded from the public sector; that ROM cards, which provide the medium for hand held electronic reference books, are still present in the market and represent the only mass market channel; and that other electronic media (magnetic tape, magnetic disk, analogue videodisc) are not seen as having a significant part to play
  16. Day, J.M.; Edwards, C.; Walton, G.: IMPEL: a research project into the impact on people of electronic libraries : stage one - librarians (1995) 0.03
    0.02858212 = product of:
      0.11432848 = sum of:
        0.11432848 = weight(_text_:having in 1736) [ClassicSimilarity], result of:
          0.11432848 = score(doc=1736,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.3304153 = fieldWeight in 1736, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.0390625 = fieldNorm(doc=1736)
      0.25 = coord(1/4)
    
    Abstract
    The IMPEL project is investigating the impact which the convergence of computing and communications technology is having on academic libraries, as the traditional archival role moves towards one of giving access to information, increasingly in electronic form. It aims to complement the discussion of technological developments by looking at the social implications of the move towards the 'electronic' library, and a further shift in the balance of teaching and learning. The first stage is concentrating on the implications for library staff, highlighted by the recent publication of the Joint Funding Councils' Libraries Review Group - the Follett Report - and its emphasis on the need for adequate staff training and effective deployment if the benefits of convergence are to be realised. The paper will discuss the context of the research and report on a brief survey of 98 higher education libraries to identify the stage of development towards an electronically based service, from which six have been identified for in depth investigation. The emphasis on library staff reflects the interests of the joint partners in the research - an academic department involved in the education of library and information professionals, and a large, recently converged Information Services Department
  17. Adeniran, O.R.; Adigun, A.T.: Retro-conversion exercise in an online thesaurus environment : a post card-catalogue conversion experience at the International Institute of Tropical Agriculture in Ibadan (1996) 0.03
    0.02858212 = product of:
      0.11432848 = sum of:
        0.11432848 = weight(_text_:having in 5614) [ClassicSimilarity], result of:
          0.11432848 = score(doc=5614,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.3304153 = fieldWeight in 5614, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.0390625 = fieldNorm(doc=5614)
      0.25 = coord(1/4)
    
    Abstract
    Faced with the need to undertake the retrospective conversion of the library catalogue at the International Institute of Tropical Agriculture Library, Ibadan, Nigeria, 2 possible approaches to conversion were identified: use of a previously compiled accession list of materials available in the manual system; and scanning of all books and periodicals to create entries from source. Having chosen the first option, the next stage was to select the best way to proceed from several options. In the case of books and book chapters, the options were: use of MARC tapes; use of CD-ROM databases; downloading from online databases; and creation of original cataloguing and indexing entries. The first option was considered too cumbersome; the second was too expensive; the third was not viable with present telecommunications and INMARSAT satellite online access. In the case of periodical articles and conference proceedings, the options were: use of abstracting and indexing services on floppy discs; and locally generated indexing. The first option was not considered viable due to the scattering of entries over many CD-ROM databases; the second was ruled out because of lack of coverage and problems in copying records in ASCII format.
  18. Riggs, F.W.: Onomantics and terminology : pt.4: neologisms, neoterisms, meta-terms, phrases, and pleonasms (1997) 0.03
    0.02858212 = product of:
      0.11432848 = sum of:
        0.11432848 = weight(_text_:having in 535) [ClassicSimilarity], result of:
          0.11432848 = score(doc=535,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.3304153 = fieldWeight in 535, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.0390625 = fieldNorm(doc=535)
      0.25 = coord(1/4)
    
    Abstract
    Parts 1 to 3 of this series have examined the terminology of Terminology by contrast with the vocabulary of Onomantics and identified some of the differences and difficulties revealed by a close study of ISO 1087, the most important glossary for terminologists. Part 4, finally, offers a speculative explanation of these problems. My central hypothesis is that an aversion to neologisms - especially newly coined words - impedes the introduction and acceptance of new concepts. The pressure for standardization of terminology compounds this difficulty. There are 3 kinds of neologisms: 1. newly coined words (neoterisms), 2. phrases composed of familiar words (phrasal tags) and 3. familiar words for which new meanings have been stipulated (meta-terms). Neologisms in the form of phrases containing familiar words are often found in ISO 1087. Some perplexing ambiguities in ISO 1087 occur when new meanings are stipulated for familiar words, creating terminological metaphors ('meta-terms') that are often obscure. Such meta-terms abound in the terminology of Terminology. Increased willingness to accept well-formed new words (neoterisms) would greatly simplify the development of a more adequate terminology for Terminology. The use of pleonasms is recommended as a technique to overcome ambiguity, by linking familiar words having new meanings (meta-terms) to new words for the same concepts (neoterisms), and as a simple way to facilitate the introduction of such neoterisms.
  19. Borgman, C.L.; Walter, V.A.; Rosenberg, J.: ¬The Science Library Catalog project : comparison of children's searching behaviour in hypertext and a keyword search system (1991) 0.03
    0.02858212 = product of:
      0.11432848 = sum of:
        0.11432848 = weight(_text_:having in 3779) [ClassicSimilarity], result of:
          0.11432848 = score(doc=3779,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.3304153 = fieldWeight in 3779, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.0390625 = fieldNorm(doc=3779)
      0.25 = coord(1/4)
    
    Abstract
    Reports on a continuing project to study children's use of a graphically based, direct-manipulation interface for science materials. The Science Library Catalogue (SLC), a component of project SEED, has been implemented in the libraries of 21 elementary schools in Los Angeles and will soon be implemented in a public library. The interface employs a hierarchical structure drawn from the DDC and implemented in HyperCard on the Macintosh. The study of the 2nd version of the interface indicates that children are able to use the Science Library Catalogue unaided, with reasonable success in finding items. Search success on the same topics on a Boolean, command-driven system was equivalent, but Boolean searches were faster. However, the Boolean system was more sensitive to differences in age, with 12-year-olds having significantly better success rates than 10-year-olds, and to search topic, with one set of questions being much easier to search than the other. On average, children liked the 2 systems about the same; the Boolean system was more attractive to certain age and gender combinations, while the Science Library Catalogue was more consistently liked across groups. Results are compared to prior studies of the Science Library Catalogue and other online catalogues.
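    The two search styles compared in this study can be illustrated in a short sketch: hierarchical browsing down a subject tree (as in the Science Library Catalogue) versus Boolean AND matching against a keyword index. The catalogue data, category names, and titles below are invented for illustration and do not come from the study.

    ```python
    # Hypothetical illustration of hierarchical browsing vs Boolean keyword search.

    catalog = {
        "Science": {
            "Animals": {"Mammals": ["Whales of the World"],
                        "Insects": ["Butterfly Guide"]},
            "Earth": {"Weather": ["Storm Chasers"]},
        }
    }

    def browse(tree, path):
        """Follow a sequence of category choices down the hierarchy."""
        node = tree
        for step in path:
            node = node[step]
        return node

    def boolean_and(index, terms):
        """Return the titles indexed under every query term."""
        sets = [set(index.get(t, ())) for t in terms]
        return set.intersection(*sets) if sets else set()

    index = {"whales": ["Whales of the World"], "world": ["Whales of the World"]}

    print(browse(catalog, ["Science", "Animals", "Mammals"]))  # ['Whales of the World']
    print(boolean_and(index, ["whales", "world"]))             # {'Whales of the World'}
    ```

    The contrast mirrors the study's finding: browsing requires no query formulation but more navigation steps, while the Boolean route is faster for a child who can already name the right terms.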
  20. Kantor, P.; Kim, M.H.; Ibraev, U.; Atasoy, K.: Estimating the number of relevant documents in enormous collections (1999) 0.03
    0.02858212 = product of:
      0.11432848 = sum of:
        0.11432848 = weight(_text_:having in 690) [ClassicSimilarity], result of:
          0.11432848 = score(doc=690,freq=2.0), product of:
            0.34601447 = queryWeight, product of:
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.05785077 = queryNorm
            0.3304153 = fieldWeight in 690, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.981156 = idf(docFreq=304, maxDocs=44421)
              0.0390625 = fieldNorm(doc=690)
      0.25 = coord(1/4)
    
    Abstract
    In assessing information retrieval systems, it is important to know not only the precision of the retrieved set, but also to compare the number of retrieved relevant items to the total number of relevant items. For large collections, such as the TREC test collections, or the World Wide Web, it is not possible to enumerate the entire set of relevant documents. If the retrieved documents are evaluated, a variant of the statistical "capture-recapture" method can be used to estimate the total number of relevant documents, provided that the several retrieval systems used are sufficiently independent. We show that the underlying signal detection model supporting such an analysis can be extended in two ways. First, assuming that there are two distinct performance characteristics (corresponding to the chance of retrieving a relevant, and of retrieving a given non-relevant, document), we show that if three or more independent systems are available it is possible to estimate the number of relevant documents without actually having to decide whether each individual document is relevant. We report applications of this 3-system method to the TREC data, leading to the conclusion that the independence assumptions are not satisfied. We then extend the model to a multi-system, multi-problem model, and show that it is possible to include statistical dependencies of all orders in the model, and to determine the number of relevant documents for each of the problems in the set. Application to the TREC setting will be presented.
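    The core idea behind the capture-recapture estimate can be sketched with the classical two-sample (Lincoln-Petersen) case: if two sufficiently independent systems retrieve n1 and n2 relevant documents with m in common, the total relevant population is estimated as n1 * n2 / m. The paper's 3-system extension goes further (avoiding per-document relevance judgements), which this sketch does not attempt; the sample sets below are invented.

    ```python
    def lincoln_petersen(n1: int, n2: int, overlap: int) -> float:
        """Classical two-sample capture-recapture estimate of total
        population size: N ~ n1 * n2 / m, where m is the number of
        items 'captured' by both samples."""
        if overlap == 0:
            raise ValueError("estimator undefined with no overlap")
        return n1 * n2 / overlap

    # Relevant documents found by two retrieval systems on the same query
    # (document ids are invented for illustration).
    system_a = {1, 2, 3, 4, 5, 6}
    system_b = {4, 5, 6, 7, 8}

    estimate = lincoln_petersen(len(system_a), len(system_b),
                                len(system_a & system_b))
    print(round(estimate))  # 6 * 5 / 3 = 10
    ```

    The estimate is only as good as the independence assumption, which is precisely what the authors find violated on the TREC data.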

Types

  • a 3152
  • m 752
  • s 230
  • x 170
  • el 132
  • i 112
  • r 40
  • b 33
  • ? 29
  • l 17
  • n 17
  • p 13
  • d 10
  • h 9
  • u 8
  • fi 6
  • z 2
  • au 1
