Search (13346 results, page 1 of 668)

  1. Burkart, M.: PROTERM : Ein Softwarepaket für Aufbau, Pflege, Handling von Thesauri und anderen Wortgutsammlungen (1988) 0.22
    0.22497077 = product of:
      0.44994155 = sum of:
        0.050008293 = weight(_text_:und in 204) [ClassicSimilarity], result of:
          0.050008293 = score(doc=204,freq=2.0), product of:
            0.12754847 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.057508692 = queryNorm
            0.39207286 = fieldWeight in 204, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.125 = fieldNorm(doc=204)
        0.39993325 = weight(_text_:handling in 204) [ClassicSimilarity], result of:
          0.39993325 = score(doc=204,freq=2.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            1.108765 = fieldWeight in 204, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.125 = fieldNorm(doc=204)
      0.5 = coord(2/4)
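    The indented breakdown under each hit is a Lucene-style "explain" tree for the classic TF-IDF similarity: tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, each matching term contributes queryWeight * fieldWeight, and the sum is scaled by the coord factor. The following minimal Python sketch reproduces the 0.22497077 total for result 1 from exactly those quantities; the function name and data layout are illustrative, not part of any search engine API.

      import math

      def classic_tfidf_score(terms, coord_hits, coord_total):
          # terms: one (freq, idf, queryNorm, fieldNorm) tuple per matching query term
          total = 0.0
          for freq, idf, query_norm, field_norm in terms:
              tf = math.sqrt(freq)                   # tf(freq=2.0) = 1.4142135
              query_weight = idf * query_norm        # 2.217899 * 0.057508692 = 0.12754847
              field_weight = tf * idf * field_norm   # 1.4142135 * 2.217899 * 0.125 = 0.39207286
              total += query_weight * field_weight   # per-term contribution to the sum
          return total * (coord_hits / coord_total)  # coord(2/4) = 0.5

      # Result 1 (doc 204): terms "und" and "handling", both with freq=2.0, fieldNorm=0.125
      terms = [(2.0, 2.217899, 0.057508692, 0.125),
               (2.0, 6.272122, 0.057508692, 0.125)]
      print(classic_tfidf_score(terms, 2, 4))        # ~0.22497077

    The same arithmetic accounts for every tree below; only freq, idf, and fieldNorm change (e.g. result 6 scores tf(freq=12.0) = sqrt(12) = 3.4641016).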
    
  2. Jolley, J.L.: Information handling : Einführung in die Praxis der Datenverarbeitung (1974) 0.21
    0.20591184 = product of:
      0.4118237 = sum of:
        0.06188211 = weight(_text_:und in 3805) [ClassicSimilarity], result of:
          0.06188211 = score(doc=3805,freq=4.0), product of:
            0.12754847 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.057508692 = queryNorm
            0.48516542 = fieldWeight in 3805, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.109375 = fieldNorm(doc=3805)
        0.34994158 = weight(_text_:handling in 3805) [ClassicSimilarity], result of:
          0.34994158 = score(doc=3805,freq=2.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            0.97016937 = fieldWeight in 3805, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.109375 = fieldNorm(doc=3805)
      0.5 = coord(2/4)
    
    Abstract
    Using numerous examples and illustrations, the book presents the capabilities of modern information processing and classification systems
  3. Denkin, D.: ¬An information handling service that delivers state-of-the-art document processing (1993) 0.12
    0.12372303 = product of:
      0.49489212 = sum of:
        0.49489212 = weight(_text_:handling in 6273) [ClassicSimilarity], result of:
          0.49489212 = score(doc=6273,freq=4.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            1.3720267 = fieldWeight in 6273, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.109375 = fieldNorm(doc=6273)
      0.25 = coord(1/4)
    
    Abstract
    Describes the services of the 'Information handling Service Corporation (IHS)'
  4. Schulz, U.: "Wie der Schnabel gewachsen ist" : Über die Qualität von OPACs - Anforderungen, Realität, Perspektiven (1998) 0.12
    0.12163754 = product of:
      0.24327508 = sum of:
        0.043308455 = weight(_text_:und in 2559) [ClassicSimilarity], result of:
          0.043308455 = score(doc=2559,freq=6.0), product of:
            0.12754847 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.057508692 = queryNorm
            0.33954507 = fieldWeight in 2559, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0625 = fieldNorm(doc=2559)
        0.19996662 = weight(_text_:handling in 2559) [ClassicSimilarity], result of:
          0.19996662 = score(doc=2559,freq=2.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            0.5543825 = fieldWeight in 2559, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=2559)
      0.5 = coord(2/4)
    
    Abstract
    Whether adults or children visit the library, difficulties in handling OPACs are almost the rule and are much the same all over the world. Research and projects have long established what these difficulties stem from and how they could be remedied - so far, however, librarians have not been good advocates for their customers in this field
  5. Zhang, X.: Rough set theory based automatic text categorization (2005) 0.11
    0.11248539 = product of:
      0.22497077 = sum of:
        0.025004147 = weight(_text_:und in 3822) [ClassicSimilarity], result of:
          0.025004147 = score(doc=3822,freq=2.0), product of:
            0.12754847 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.057508692 = queryNorm
            0.19603643 = fieldWeight in 3822, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0625 = fieldNorm(doc=3822)
        0.19996662 = weight(_text_:handling in 3822) [ClassicSimilarity], result of:
          0.19996662 = score(doc=3822,freq=2.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            0.5543825 = fieldWeight in 3822, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=3822)
      0.5 = coord(2/4)
    
    Abstract
    The research report "Rough Set Theory Based Automatic Text Categorization and the Handling of Semantic Heterogeneity" by Xueying Zhang has been published in book form in English. In her work, Zhang developed a method based on rough set theory that establishes relationships between subject headings from different vocabularies. She was a staff member of the IZ from 2003 to 2005 and has been an Associate Professor at the Nanjing University of Science and Technology since October 2005.
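    Zhang's method itself is not reproduced in this record, but the rough-set machinery such methods build on is compact enough to sketch. In rough set theory a target concept is bracketed by a lower approximation (equivalence classes certainly inside it) and an upper approximation (classes that at least touch it); a generic Python illustration with invented toy data, not Zhang's algorithm:

      def rough_approximations(partition, target):
          # partition: equivalence classes (sets) induced by an indiscernibility relation
          lower, upper = set(), set()
          for cls in partition:
              if cls <= target:      # class entirely inside the concept: certain members
                  lower |= cls
              if cls & target:       # class overlaps the concept: possible members
                  upper |= cls
          return lower, upper

      # toy example: documents grouped by identical descriptor sets
      partition = [{1, 2}, {3}, {4, 5}]
      target = {1, 2, 3, 4}  # documents indexed with a given subject heading
      print(rough_approximations(partition, target))  # ({1, 2, 3}, {1, 2, 3, 4, 5})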
  6. Robinson, B.M.: Reference services : a model of question handling (1989) 0.11
    0.107147284 = product of:
      0.42858914 = sum of:
        0.42858914 = weight(_text_:handling in 2936) [ClassicSimilarity], result of:
          0.42858914 = score(doc=2936,freq=12.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            1.1882099 = fieldWeight in 2936, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0546875 = fieldNorm(doc=2936)
      0.25 = coord(1/4)
    
    Abstract
    Describes the conceptual framework and a vocabulary which can be used to discuss strategies and choices involved in question handling. A model of question handling is presented which provides for: developing strategies for handling questions; evaluating the appropriateness of the strategy; and relating levels of service to resource requirements. The model involves 5 phases: conducting a reference interview; formulating a question handling strategy; handling the question; reporting the result to the client; and evaluating the service delivered. It describes the reference librarian's interaction with a client and the types of decisions which a librarian makes when selecting and matching the level of resources to the desired level of service.
  7. Lackes, R.; Tillmanns, C.: Data Mining für die Unternehmenspraxis : Entscheidungshilfen und Fallstudien mit führenden Softwarelösungen (2006) 0.11
    0.10608599 = product of:
      0.21217199 = sum of:
        0.062197033 = weight(_text_:und in 2383) [ClassicSimilarity], result of:
          0.062197033 = score(doc=2383,freq=22.0), product of:
            0.12754847 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.057508692 = queryNorm
            0.48763448 = fieldWeight in 2383, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.046875 = fieldNorm(doc=2383)
        0.14997496 = weight(_text_:handling in 2383) [ClassicSimilarity], result of:
          0.14997496 = score(doc=2383,freq=2.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            0.41578686 = fieldWeight in 2383, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=2383)
      0.5 = coord(2/4)
    
    Abstract
    The book is aimed at practitioners in companies who deal with the analysis of large data sets. After a short theoretical part, four case studies from the customer relationship management of a mail-order company are worked through. Eight leading software solutions were used: Intelligent Miner from IBM, Enterprise Miner from SAS, Clementine from SPSS, Knowledge Studio from Angoss, Delta Miner from Bissantz, Business Miner from Business Object, and Data Engine from MIT. The case studies make the strengths and weaknesses of the individual solutions clear and demonstrate the methodologically correct approach to data mining. Both provide valuable decision support for selecting standard data mining software and for practical data analysis.
    Content
    Models, methods, and tools: Aims and structure of the study.- Fundamentals.- Planning and decision-making with data mining support.- Methods.- Functionality and handling of the software solutions. Case studies: Initial situation and data set in the mail-order business.- Customer segmentation.- Explaining regional marketing successes in new customer acquisition.- Forecasting customer lifetime value.- Selecting customers for a direct marketing campaign.- Which software solution for which decision?- Conclusions and market developments.
  8. Abbott, R.: ¬The world as information : overload and personal design (1999) 0.10
    0.10463875 = product of:
      0.2092775 = sum of:
        0.05930254 = weight(_text_:und in 6939) [ClassicSimilarity], result of:
          0.05930254 = score(doc=6939,freq=20.0), product of:
            0.12754847 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.057508692 = queryNorm
            0.4649412 = fieldWeight in 6939, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.046875 = fieldNorm(doc=6939)
        0.14997496 = weight(_text_:handling in 6939) [ClassicSimilarity], result of:
          0.14997496 = score(doc=6939,freq=2.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            0.41578686 = fieldWeight in 6939, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.046875 = fieldNorm(doc=6939)
      0.5 = coord(2/4)
    
    Abstract
    This book takes the broadest view of information, considering it as a phenomenon in its own right, rather than exploring the technology for handling it. It is very much concerned with the meaning of information - and what we as individuals do with it
    BK
    02.10 / Wissenschaft und Gesellschaft
    RVK
    AP 16100 Allgemeines / Medien- und Kommunikationswissenschaften, Kommunikationsdesign / Aussagefunktion und Aussagegestaltung / Unterrichtung (Information)
    MS 7850 Soziologie / Spezielle Soziologien / Soziologie der Massenkommunikation und öffentlichen Meinung / Allgemeine Theorie der gesellschaftlichen Kommunikation und ihrer Medien; Begriff der Öffentlichkeit; Meinungsbildung, public relations
  9. Lamport, L.: LaTeX: a document preparation system : 2nd ed. (1994) 0.10
    0.10295592 = product of:
      0.20591184 = sum of:
        0.030941054 = weight(_text_:und in 329) [ClassicSimilarity], result of:
          0.030941054 = score(doc=329,freq=4.0), product of:
            0.12754847 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.057508692 = queryNorm
            0.24258271 = fieldWeight in 329, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0546875 = fieldNorm(doc=329)
        0.17497079 = weight(_text_:handling in 329) [ClassicSimilarity], result of:
          0.17497079 = score(doc=329,freq=2.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            0.48508468 = fieldWeight in 329, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0546875 = fieldNorm(doc=329)
      0.5 = coord(2/4)
    
    Abstract
    This authoritative user's guide and reference manual for the LaTeX computer typesetting system has been revised to document features now available in the new standard software release - LaTeX2e. The new edition features additional styles and functions, improved font handling, and enhanced graphics capabilities.
    RVK
    ST 281 Informatik / Monographien / Software und -entwicklung / Einzelne Benutzerschnittstellen (alphabet.)
  10. Edmonds, E.: Expert systems and document handling (1987) 0.10
    0.09998331 = product of:
      0.39993325 = sum of:
        0.39993325 = weight(_text_:handling in 1088) [ClassicSimilarity], result of:
          0.39993325 = score(doc=1088,freq=2.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            1.108765 = fieldWeight in 1088, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.125 = fieldNorm(doc=1088)
      0.25 = coord(1/4)
    
  11. Swain, L.; Tallim, P.: X.400: the standard for message handling systems (1990) 0.10
    0.09998331 = product of:
      0.39993325 = sum of:
        0.39993325 = weight(_text_:handling in 7706) [ClassicSimilarity], result of:
          0.39993325 = score(doc=7706,freq=2.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            1.108765 = fieldWeight in 7706, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.125 = fieldNorm(doc=7706)
      0.25 = coord(1/4)
    
  12. Information handling in offices and archives (1993) 0.10
    0.09998331 = product of:
      0.39993325 = sum of:
        0.39993325 = weight(_text_:handling in 226) [ClassicSimilarity], result of:
          0.39993325 = score(doc=226,freq=2.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            1.108765 = fieldWeight in 226, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.125 = fieldNorm(doc=226)
      0.25 = coord(1/4)
    
  13. Eastman, C.M.: Overlaps in postings to thesaurus terms : a preliminary study (1988) 0.10
    0.09842471 = product of:
      0.19684942 = sum of:
        0.021878628 = weight(_text_:und in 3623) [ClassicSimilarity], result of:
          0.021878628 = score(doc=3623,freq=2.0), product of:
            0.12754847 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.057508692 = queryNorm
            0.17153187 = fieldWeight in 3623, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3623)
        0.17497079 = weight(_text_:handling in 3623) [ClassicSimilarity], result of:
          0.17497079 = score(doc=3623,freq=2.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            0.48508468 = fieldWeight in 3623, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3623)
      0.5 = coord(2/4)
    
    Abstract
    The patterns of overlap between terms which are closely related in a thesaurus are considered. The relationships considered are parent/child, in which one term is a broader term of the other, and sibling, in which the 2 terms share the same broader term. The patterns of overlap observed in the MeSH thesaurus with respect to selected MEDLINE postings are examined. The implications of the overlap patterns are discussed; in particular, the impact of the overlap patterns on the potential effectiveness of a proposed algorithm for handling negation is considered.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  14. Chen, H.; Chung, Y.-M.; Ramsey, M.; Yang, C.C.: ¬A smart itsy bitsy spider for the Web (1998) 0.10
    0.09624777 = product of:
      0.38499108 = sum of:
        0.38499108 = weight(_text_:jav in 1871) [ClassicSimilarity], result of:
          0.38499108 = score(doc=1871,freq=2.0), product of:
            0.6330741 = queryWeight, product of:
              11.008321 = idf(docFreq=1, maxDocs=44421)
              0.057508692 = queryNorm
            0.60812956 = fieldWeight in 1871, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              11.008321 = idf(docFreq=1, maxDocs=44421)
              0.0390625 = fieldNorm(doc=1871)
      0.25 = coord(1/4)
    
    Abstract
    As part of the ongoing Illinois Digital Library Initiative project, this research proposes an intelligent agent approach to Web searching. In this experiment, we developed 2 Web personal spiders based on best first search and genetic algorithm techniques, respectively. These personal spiders can dynamically take a user's selected starting homepages and search for the most closely related homepages in the Web, based on the links and keyword indexing. A graphical, dynamic, Java-based interface was developed and is available for Web access. A system architecture for implementing such an agent-spider is presented, followed by detailed discussions of benchmark testing and user evaluation results. In benchmark testing, although the genetic algorithm spider did not outperform the best first search spider, we found both results to be comparable and complementary. In user evaluation, the genetic algorithm spider obtained significantly higher recall value than that of the best first search spider. However, their precision values were not statistically different. The mutation process introduced in genetic algorithms allows users to find other potential relevant homepages that cannot be explored via a conventional local search process. In addition, we found the Java-based interface to be a necessary component for design of a truly interactive and dynamic Web agent
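    For readers unfamiliar with the "best first search" strategy mentioned above, the toy Python sketch below shows the core idea: a priority queue keeps the crawl frontier ordered by a relevance score (here, plain keyword overlap) so the most promising page is always expanded next. The link graph, page texts, and scoring function are invented stand-ins for real Web fetching and the authors' keyword indexing, not their implementation.

      import heapq

      def best_first_spider(links, texts, start_urls, query_terms, max_visits=10):
          # links: url -> outgoing urls; texts: url -> page text (stand-ins for fetching/parsing)
          def score(url):
              return len(set(texts.get(url, "").lower().split()) & query_terms)
          frontier = [(-score(u), u) for u in start_urls]   # negate scores: max-heap behaviour
          heapq.heapify(frontier)
          visited, ranked = set(), []
          while frontier and len(visited) < max_visits:
              neg_score, url = heapq.heappop(frontier)      # most promising page first
              if url in visited:
                  continue
              visited.add(url)
              ranked.append((url, -neg_score))
              for out in links.get(url, []):                # enqueue newly discovered links
                  if out not in visited:
                      heapq.heappush(frontier, (-score(out), out))
          return ranked

      links = {"a": ["b", "c"], "b": ["c"], "c": []}
      texts = {"a": "digital library", "b": "library agent spider", "c": "cooking"}
      print(best_first_spider(links, texts, ["a"], {"library", "spider"}))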
  15. Kliemt, A.: Vom VÖBB zum WorldCat : Der WWW-OPAC des VÖBB im funktionalen Vergleich mit anderen Web-OPACs (2002) 0.09
    0.09066261 = product of:
      0.18132523 = sum of:
        0.056346085 = weight(_text_:und in 2174) [ClassicSimilarity], result of:
          0.056346085 = score(doc=2174,freq=26.0), product of:
            0.12754847 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.057508692 = queryNorm
            0.44176215 = fieldWeight in 2174, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2174)
        0.12497914 = weight(_text_:handling in 2174) [ClassicSimilarity], result of:
          0.12497914 = score(doc=2174,freq=2.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            0.34648907 = fieldWeight in 2174, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2174)
      0.5 = coord(2/4)
    
    Abstract
    Until the middle of the 20th century, searching library catalogues was, at least in German libraries, generally a matter for the staff, and a truly successful search was usually reserved for specialists. Only they were able, often with the help of various auxiliary tools, to convert readers' title requests into corresponding catalogue queries according to the respective catalogue and library rules. When library customers themselves were gradually given access to these card catalogues, good advice for a successful search was usually hard to come by. Assistance and "good advice", often needed even for a simple title search such as "Schuld und Sühne" (Crime and Punishment), could be obtained by the uninitiated reader directly on site in the library. Electronic forms of library catalogues, called "OPACs" for short, opened up new search possibilities completely unknown to the old card and microfiche catalogues. However, they also raised new questions, especially concerning handling and search strategy. At least the user bold enough to ask could, and still can, get direct help on site in the library. This assistance is absent, however, for all library and union catalogues offered on the Internet as so-called "Web OPACs". These reach a much larger but anonymous circle of interested parties, from the absolute library layperson to the experienced library user. These active and potential users approach such Web OPACs with the most diverse requirements and questions. A Web OPAC must therefore be designed so that it can be used by laypersons and professionals alike while raising as few questions as possible in the first place, ideally providing a self-explanatory interface. Help texts must help compensate for the missing personal assistance; they must therefore be clear, understandable, and usable in a targeted way.
  16. Koenig, M.E.D.: ¬The information controllability explosion (1982) 0.09
    0.088373594 = product of:
      0.35349438 = sum of:
        0.35349438 = weight(_text_:handling in 601) [ClassicSimilarity], result of:
          0.35349438 = score(doc=601,freq=4.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            0.98001903 = fieldWeight in 601, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.078125 = fieldNorm(doc=601)
      0.25 = coord(1/4)
    
    Abstract
    Information handling technology is an explosive growth area. Librarians need not run faster just to keep up with the information explosion any more, but must now run faster to keep up with the information controllability explosion. If they don't, their place in the information-handling world will be usurped by others who do realise what a growth area it is
  17. Schwarz, C.: Content based text handling (1990) 0.09
    0.086588085 = product of:
      0.34635234 = sum of:
        0.34635234 = weight(_text_:handling in 5247) [ClassicSimilarity], result of:
          0.34635234 = score(doc=5247,freq=6.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            0.96021867 = fieldWeight in 5247, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=5247)
      0.25 = coord(1/4)
    
    Abstract
    Whereas up to now document analysis was mainly concerned with the handling of formal properties of documents (scanning, editing), AI (artificial intelligence) techniques in the field of Natural Language Processing have shown the possibility of "Content based text handling", i.e., a content analysis for textual documents. Research and development in this field at The Siemens Corporate Research Laboratories are described in this article.
  18. Fjällbrant, N.: EDUCATE: a user education program for information retrieval and handling (1995) 0.09
    0.086588085 = product of:
      0.34635234 = sum of:
        0.34635234 = weight(_text_:handling in 5875) [ClassicSimilarity], result of:
          0.34635234 = score(doc=5875,freq=6.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            0.96021867 = fieldWeight in 5875, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0625 = fieldNorm(doc=5875)
      0.25 = coord(1/4)
    
    Abstract
    Describes the EDUCATE (End User Courses in Information Access through Communication Technology) project for end user education in information access, retrieval and handling, a 3 year CEC Libraries Programme Project started in Feb 94. Examines the need for education and training in information retrieval and handling, presents the course design, and gives the goals for the project. Discusses the use of networks in connection with EDUCATE, and the tools and interfaces used. Describes the ways in which the program can be used for a variety of users
  19. Rader, H.B.: User education and information literacy for the next decade : an international perspective (1995) 0.08
    0.07576458 = product of:
      0.30305833 = sum of:
        0.30305833 = weight(_text_:handling in 5416) [ClassicSimilarity], result of:
          0.30305833 = score(doc=5416,freq=6.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            0.84019136 = fieldWeight in 5416, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0546875 = fieldNorm(doc=5416)
      0.25 = coord(1/4)
    
    Abstract
    In the information age, marked by global highways and instant information handling and sharing worldwide, all citizens must become knowledgeable about, and efficient in, handling information. People need training in how to organize, evaluate, and analyze the enormous array of information now available in both print and electronic formats. Information skills need to be taught and developed on all levels, from elementary schools through universities. Librarians worldwide are uniquely qualified through education, training, and experience to provide people with the necessary information-handling skills on all levels. Using available data regarding information literacy programs on the international level, Rader proposes a course of action for the next decade
  20. Lukasiewicz, T.: Uncertainty reasoning for the Semantic Web (2017) 0.08
    0.07576458 = product of:
      0.30305833 = sum of:
        0.30305833 = weight(_text_:handling in 4939) [ClassicSimilarity], result of:
          0.30305833 = score(doc=4939,freq=6.0), product of:
            0.36070153 = queryWeight, product of:
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.057508692 = queryNorm
            0.84019136 = fieldWeight in 4939, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.272122 = idf(docFreq=227, maxDocs=44421)
              0.0546875 = fieldNorm(doc=4939)
      0.25 = coord(1/4)
    
    Abstract
    The Semantic Web has attracted much attention, both from academia and industry. An important role in research towards the Semantic Web is played by formalisms and technologies for handling uncertainty and/or vagueness. In this paper, I first provide some motivating examples for handling uncertainty and/or vagueness in the Semantic Web. I then give an overview of some of my own formalisms for handling uncertainty and/or vagueness in the Semantic Web.

Types

  • a 9478
  • m 2233
  • el 1007
  • x 593
  • s 565
  • i 169
  • r 121
  • ? 66
  • n 55
  • b 47
  • l 23
  • p 21
  • h 17
  • d 15
  • u 14
  • fi 10
  • v 2
  • z 2
  • au 1
  • ms 1
