Search (13355 results, page 9 of 668)

  1. Lee, Y.-H.; Evens, M.W.: Natural language interface for an expert system (1998) 0.05
    0.054065447 = product of:
      0.21626179 = sum of:
        0.21626179 = weight(_text_:handle in 6108) [ClassicSimilarity], result of:
          0.21626179 = score(doc=6108,freq=2.0), product of:
            0.42740422 = queryWeight, product of:
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.06532823 = queryNorm
            0.5059889 = fieldWeight in 6108, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.0546875 = fieldNorm(doc=6108)
      0.25 = coord(1/4)
    
    Abstract
    Presents a complete analysis of the underlying principles of natural language interfaces from the screen manager to the parser / understander. The main focus is on the design and development of a subsystem for understanding natural language input in an expert system. Considers that fast response time and user friendliness are the most important considerations in the design. The screen manager provides an easy editing capability for users and the spelling correction system can detect most spelling errors and correct them automatically, quickly and effectively. The Lexical Functional Grammar (LFG) parser and the understander are designed to handle most types of simple sentences, fragments, and ellipses
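The score breakdown shown for each hit follows Lucene's ClassicSimilarity: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf · idf · fieldNorm, queryWeight = idf · queryNorm, with the product scaled by coord(1/4). A minimal sketch reproducing the numbers from the breakdown above (doc 6108), using only the constants printed in the explain output:

```python
import math

# Constants taken directly from the explain output for doc 6108.
doc_freq, max_docs = 173, 44421   # docFreq and maxDocs for the term "handle"
freq = 2.0                        # termFreq in the field
field_norm = 0.0546875            # fieldNorm(doc=6108)
query_norm = 0.06532823           # queryNorm

idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 6.5424123
tf = math.sqrt(freq)                               # 1.4142135
field_weight = tf * idf * field_norm               # 0.5059889
query_weight = idf * query_norm                    # 0.42740422
raw_score = query_weight * field_weight            # 0.21626179
final_score = raw_score * 0.25                     # coord(1/4) -> 0.054065447

print(final_score)
```

The same arithmetic accounts for every "handle" hit on this page scoring 0.054065447: each has freq=2.0 and fieldNorm=0.0546875, so the products are identical.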
  2. Albertsen, K.; Nuys, C. van: Paradigma: FRBR and digital documents (2004) 0.05
    0.054065447 = product of:
      0.21626179 = sum of:
        0.21626179 = weight(_text_:handle in 5182) [ClassicSimilarity], result of:
          0.21626179 = score(doc=5182,freq=2.0), product of:
            0.42740422 = queryWeight, product of:
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.06532823 = queryNorm
            0.5059889 = fieldWeight in 5182, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.0546875 = fieldNorm(doc=5182)
      0.25 = coord(1/4)
    
    Abstract
    This paper describes the Paradigma Project at the National Library of Norway and its work to ensure the legal deposit of all types of digital documents. The Paradigma project plans to implement extensions to IFLA's FRBR model for handling composite Group 1 entities at all abstraction levels. A new taxonomy is introduced: This is done by forming various relationships into component aggregates, and grouping these aggregates into various classes. This serves two main purposes: New applications may be introduced without requiring modifications to the model, and automated mechanisms may be designed to handle each class in a common way, largely unaffected by the details of the relationship semantics.
  3. Stine, D.: Suggested standards for cataloging textbooks (1991) 0.05
    0.054065447 = product of:
      0.21626179 = sum of:
        0.21626179 = weight(_text_:handle in 637) [ClassicSimilarity], result of:
          0.21626179 = score(doc=637,freq=2.0), product of:
            0.42740422 = queryWeight, product of:
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.06532823 = queryNorm
            0.5059889 = fieldWeight in 637, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.0546875 = fieldNorm(doc=637)
      0.25 = coord(1/4)
    
    Abstract
    To determine the feasibility of our library providing full cataloging for textbooks using OCLC, I conducted a study of records in the OCLC data base and performed a literature search on the topic. I found that a preponderance of duplicate OCLC records and a lack of uniformity in cataloging practices would make this a costly proposition. It would also make it difficult to train a paraprofessional to select catalog records and to handle the cataloging of these materials. This paper suggests standards to be considered by the appropriate ALA committees in order to alleviate the duplication of records and to make textbook cataloging easier.
  4. Cory, K.A.: ¬The imaging industry wants us! (1992) 0.05
    0.054065447 = product of:
      0.21626179 = sum of:
        0.21626179 = weight(_text_:handle in 660) [ClassicSimilarity], result of:
          0.21626179 = score(doc=660,freq=2.0), product of:
            0.42740422 = queryWeight, product of:
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.06532823 = queryNorm
            0.5059889 = fieldWeight in 660, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.0546875 = fieldNorm(doc=660)
      0.25 = coord(1/4)
    
    Abstract
    Paper-based manual filing systems are inadequate to handle the flood of information found in most commercial offices and government agencies. Examples are included to delineate the dimensions of the problem. In response, imaging technology, which converts information in paper format to computer-readable binary format, is creating a multitude of electronic databases. However, imaging vendors are minimizing the difficulties of database organization. The author, drawing on personal experience, recounts instances of inadequate database organization. Because classification and indexing principles are only imparted in schools of library and/or information science, the imaging industry is highly dependent upon expertise possessed by library science graduates. In order to take advantage of this new job market, recommendations for library science students and faculty are included.
  5. Mai, J.-E.: Classification in a social world : bias and trust (2010) 0.05
    0.054065447 = product of:
      0.21626179 = sum of:
        0.21626179 = weight(_text_:handle in 123) [ClassicSimilarity], result of:
          0.21626179 = score(doc=123,freq=2.0), product of:
            0.42740422 = queryWeight, product of:
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.06532823 = queryNorm
            0.5059889 = fieldWeight in 123, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.0546875 = fieldNorm(doc=123)
      0.25 = coord(1/4)
    
    Abstract
Purpose - The purpose of this paper is to establish pluralism as the basis for bibliographic classification theory and practice and examine the possibility of establishing trustworthy classifications. Design/methodology/approach - The paper examines several key notions in classification and extends previous frameworks by combining an explanation-based approach to classification with the concepts of cognitive authority and trust. Findings - The paper presents an understanding of classification that allows designers and editors to establish trust through the principle of transparency. It demonstrates that modern classification theory and practice are tied to users' activities and domains of knowledge and that trustworthy classification systems are in close dialogue with users to handle bias responsibly and establish trust. Originality/value - The paper establishes a foundation for exploring trust and authority for classification systems.
  6. Bedathur, S.; Narang, A.: Mind your language : effects of spoken query formulation on retrieval effectiveness (2013) 0.05
    0.054065447 = product of:
      0.21626179 = sum of:
        0.21626179 = weight(_text_:handle in 2150) [ClassicSimilarity], result of:
          0.21626179 = score(doc=2150,freq=2.0), product of:
            0.42740422 = queryWeight, product of:
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.06532823 = queryNorm
            0.5059889 = fieldWeight in 2150, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.0546875 = fieldNorm(doc=2150)
      0.25 = coord(1/4)
    
    Abstract
Voice search is becoming a popular mode for interacting with search engines. As a result, research has gone into building better voice transcription engines, interfaces, and search engines that better handle the inherent verbosity of queries. However, when one considers its use by non-native speakers of English, another aspect that becomes important is the formulation of the query by users. In this paper, we present the results of a preliminary study that we conducted with non-native English speakers who formulate queries for given retrieval tasks. Our results show that current search engines are sensitive in their rankings to query formulation, and thus highlight the need for developing more robust ranking methods.
  7. Denton, W.: FRBR and the history of cataloging (2007) 0.05
    0.054065447 = product of:
      0.21626179 = sum of:
        0.21626179 = weight(_text_:handle in 2677) [ClassicSimilarity], result of:
          0.21626179 = score(doc=2677,freq=2.0), product of:
            0.42740422 = queryWeight, product of:
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.06532823 = queryNorm
            0.5059889 = fieldWeight in 2677, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.0546875 = fieldNorm(doc=2677)
      0.25 = coord(1/4)
    
    Content
Cf.: http://yorkspace.library.yorku.ca/xmlui/handle/10315/1250. Cf. also: https://www.miskatonic.org/library/frbr.html (declared outdated by the author)
  8. Wolf, S.: Automating authority control processes (2020) 0.05
    0.054065447 = product of:
      0.21626179 = sum of:
        0.21626179 = weight(_text_:handle in 680) [ClassicSimilarity], result of:
          0.21626179 = score(doc=680,freq=2.0), product of:
            0.42740422 = queryWeight, product of:
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.06532823 = queryNorm
            0.5059889 = fieldWeight in 680, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.0546875 = fieldNorm(doc=680)
      0.25 = coord(1/4)
    
    Abstract
    Authority control is an important part of cataloging since it helps provide consistent access to names, titles, subjects, and genre/forms. There are a variety of methods for providing authority control, ranging from manual, time-consuming processes to automated processes. However, the automated processes often seem out of reach for small libraries when it comes to using a pricey vendor or expert cataloger. This paper introduces ideas on how to handle authority control using a variety of tools, both paid and free. The author describes how their library handles authority control; compares vendors and programs that can be used to provide varying levels of authority control; and demonstrates authority control using MarcEdit.
  9. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo : a Web-Scale interface for ontology archiving under consumer-oriented aspects (2020) 0.05
    0.054065447 = product of:
      0.21626179 = sum of:
        0.21626179 = weight(_text_:handle in 1053) [ClassicSimilarity], result of:
          0.21626179 = score(doc=1053,freq=2.0), product of:
            0.42740422 = queryWeight, product of:
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.06532823 = queryNorm
            0.5059889 = fieldWeight in 1053, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1053)
      0.25 = coord(1/4)
    
    Abstract
While thousands of ontologies exist on the web, a unified system for handling online ontologies - in particular with respect to discovery, versioning, access, quality-control, mappings - has not yet surfaced and users of ontologies struggle with many challenges. In this paper, we present an online ontology interface and augmented archive called DBpedia Archivo, that discovers, crawls, versions and archives ontologies on the DBpedia Databus. Based on this versioned crawl, different features, quality measures and, if possible, fixes are deployed to handle and stabilize the changes in the found ontologies at web-scale. A comparison to existing approaches and ontology repositories is given.
  10. Gibson, P.: Professionals' perfect Web world in sight : users want more information on the Web, and vendors attempt to provide (1998) 0.05
    0.053773496 = product of:
      0.21509399 = sum of:
        0.21509399 = weight(_text_:java in 2656) [ClassicSimilarity], result of:
          0.21509399 = score(doc=2656,freq=2.0), product of:
            0.4604012 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06532823 = queryNorm
            0.46718815 = fieldWeight in 2656, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=2656)
      0.25 = coord(1/4)
    
    Abstract
Many information professionals feel that the time is still far off when the WWW can offer the combined functionality and content of traditional online and CD-ROM databases, but there have been a number of recent Web developments to reflect on. Describes the testing and launch by Ovid of its Java client which, in effect, allows access to its databases on the Web with full search functionality, and the initiative of Euromonitor in providing Web access to its whole collection of consumer research reports and its entire database of business sources. Also reviews the service of a newcomer to the information scene, Information Quest (IQ), founded by Dawson Holdings, which has made an agreement with Infonautics to offer access to its Electric Library database, thus adding over 1,000 reference, consumer and business publications to its Web-based journal service.
  11. Nieuwenhuysen, P.; Vanouplines, P.: Document plus program hybrids on the Internet and their impact on information transfer (1998) 0.05
    0.053773496 = product of:
      0.21509399 = sum of:
        0.21509399 = weight(_text_:java in 2893) [ClassicSimilarity], result of:
          0.21509399 = score(doc=2893,freq=2.0), product of:
            0.4604012 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06532823 = queryNorm
            0.46718815 = fieldWeight in 2893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=2893)
      0.25 = coord(1/4)
    
    Abstract
Examines some of the advanced tools, techniques, methods and standards related to the Internet and WWW which consist of hybrids of documents and software, called 'document program hybrids'. Early Internet systems were based on having documents on one side and software on the other, neatly separated, apart from one another and without much interaction, so that the static document can also exist without computers and networks. Document program hybrids blur this classical distinction and all components are integrated, interwoven and exist in synergy with each other. Illustrates the techniques with particular reference to practical examples, including: data collections and dedicated software; advanced HTML features on the WWW, multimedia viewer and plug-in software for Internet and WWW browsers; VRML; interaction through a Web server with other servers and with instruments; adaptive hypertext provided by the server; 'webbots' or 'knowbots' or 'searchbots' or 'metasearch engines' or intelligent software agents; Sun's Java; Microsoft's ActiveX; program scripts for HTML and Web browsers; cookies; and Internet push technology with Webcasting channels.
  12. Mills, T.; Moody, K.; Rodden, K.: Providing world wide access to historical sources (1997) 0.05
    0.053773496 = product of:
      0.21509399 = sum of:
        0.21509399 = weight(_text_:java in 3697) [ClassicSimilarity], result of:
          0.21509399 = score(doc=3697,freq=2.0), product of:
            0.4604012 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06532823 = queryNorm
            0.46718815 = fieldWeight in 3697, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=3697)
      0.25 = coord(1/4)
    
    Abstract
A unique collection of historical material covering the lives and events of an English village between 1400 and 1750 has been made available via a WWW enabled information retrieval system. Since the expected readership of the documents ranges from school children to experienced researchers, providing this information in an easily accessible form has offered many challenges requiring tools to aid searching and browsing. The file structure of the document collection was replaced by a database, enabling query results to be presented on the fly. A Java interface displays each user's context in a form that allows for easy and intuitive relevance feedback.
  13. Maarek, Y.S.: WebCutter : a system for dynamic and tailorable site mapping (1997) 0.05
    0.053773496 = product of:
      0.21509399 = sum of:
        0.21509399 = weight(_text_:java in 3739) [ClassicSimilarity], result of:
          0.21509399 = score(doc=3739,freq=2.0), product of:
            0.4604012 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06532823 = queryNorm
            0.46718815 = fieldWeight in 3739, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=3739)
      0.25 = coord(1/4)
    
    Abstract
Presents an approach that integrates searching and browsing in a manner that improves both paradigms. When browsing is the primary task, it enables semantic content-based tailoring of Web maps in both the generation as well as the visualization phases. When search is the primary task, it enables contextualization of the results by augmenting them with the documents' neighbourhoods. This approach is embodied in WebCutter, a client-server system fully integrated with Web software. WebCutter consists of a map generator running off a standard Web server and a map visualization client implemented as a Java applet runnable from any standard Web browser and requiring no installation or external plug-in application. WebCutter is in beta stage and is in the process of being integrated into the Lotus Domino application product line.
  14. Pan, B.; Gay, G.; Saylor, J.; Hembrooke, H.: One digital library, two undergraduate classes, and four learning modules : uses of a digital library in classrooms (2006) 0.05
    0.053773496 = product of:
      0.21509399 = sum of:
        0.21509399 = weight(_text_:java in 907) [ClassicSimilarity], result of:
          0.21509399 = score(doc=907,freq=2.0), product of:
            0.4604012 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06532823 = queryNorm
            0.46718815 = fieldWeight in 907, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=907)
      0.25 = coord(1/4)
    
    Abstract
    The KMODDL (kinematic models for design digital library) is a digital library based on a historical collection of kinematic models made of steel and bronze. The digital library contains four types of learning modules including textual materials, QuickTime virtual reality movies, Java simulations, and stereolithographic files of the physical models. The authors report an evaluation study on the uses of the KMODDL in two undergraduate classes. This research reveals that the users in different classes encountered different usability problems, and reported quantitatively different subjective experiences. Further, the results indicate that depending on the subject area, the two user groups preferred different types of learning modules, resulting in different uses of the available materials and different learning outcomes. These findings are discussed in terms of their implications for future digital library design.
  15. Mongin, L.; Fu, Y.Y.; Mostafa, J.: Open Archives data Service prototype and automated subject indexing using D-Lib archive content as a testbed (2003) 0.05
    0.053773496 = product of:
      0.21509399 = sum of:
        0.21509399 = weight(_text_:java in 2167) [ClassicSimilarity], result of:
          0.21509399 = score(doc=2167,freq=2.0), product of:
            0.4604012 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06532823 = queryNorm
            0.46718815 = fieldWeight in 2167, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=2167)
      0.25 = coord(1/4)
    
    Abstract
The Indiana University School of Library and Information Science opened a new research laboratory in January 2003: the Indiana University School of Library and Information Science Information Processing Laboratory [IU IP Lab]. The purpose of the new laboratory is to facilitate collaboration between scientists in the department in the areas of information retrieval (IR) and information visualization (IV) research. The lab has several areas of focus. These include grid and cluster computing, and a standard Java-based software platform to support plug and play research datasets, a selection of standard IR modules and standard IV algorithms. Future development includes software to enable researchers to contribute datasets, IR algorithms, and visualization algorithms into the standard environment. We decided early on to use OAI-PMH as a resource discovery tool because it is consistent with our mission.
  16. Song, R.; Luo, Z.; Nie, J.-Y.; Yu, Y.; Hon, H.-W.: Identification of ambiguous queries in web search (2009) 0.05
    0.053773496 = product of:
      0.21509399 = sum of:
        0.21509399 = weight(_text_:java in 3441) [ClassicSimilarity], result of:
          0.21509399 = score(doc=3441,freq=2.0), product of:
            0.4604012 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06532823 = queryNorm
            0.46718815 = fieldWeight in 3441, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=3441)
      0.25 = coord(1/4)
    
    Abstract
    It is widely believed that many queries submitted to search engines are inherently ambiguous (e.g., java and apple). However, few studies have tried to classify queries based on ambiguity and to answer "what the proportion of ambiguous queries is". This paper deals with these issues. First, we clarify the definition of ambiguous queries by constructing the taxonomy of queries from being ambiguous to specific. Second, we ask human annotators to manually classify queries. From manually labeled results, we observe that query ambiguity is to some extent predictable. Third, we propose a supervised learning approach to automatically identify ambiguous queries. Experimental results show that we can correctly identify 87% of labeled queries with the approach. Finally, by using our approach, we estimate that about 16% of queries in a real search log are ambiguous.
  17. Croft, W.B.; Metzler, D.; Strohman, T.: Search engines : information retrieval in practice (2010) 0.05
    0.053773496 = product of:
      0.21509399 = sum of:
        0.21509399 = weight(_text_:java in 3605) [ClassicSimilarity], result of:
          0.21509399 = score(doc=3605,freq=2.0), product of:
            0.4604012 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06532823 = queryNorm
            0.46718815 = fieldWeight in 3605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=3605)
      0.25 = coord(1/4)
    
    Abstract
For introductory information retrieval courses at the undergraduate and graduate level in computer science, information science and computer engineering departments. Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice is designed to give undergraduate students the understanding and tools they need to evaluate, compare and modify search engines. Coverage of the underlying IR and mathematical models reinforces key concepts. The book's numerous programming exercises make extensive use of Galago, a Java-based open source search engine. SUPPLEMENTS / Extensive lecture slides (in PDF and PPT format) / Solutions to selected end of chapter problems (Instructors only) / Test collections for exercises / Galago search engine
  18. Tang, X.-B.; Wei Wei, G,-C.L.; Zhu, J.: ¬An inference model of medical insurance fraud detection : based on ontology and SWRL (2017) 0.05
    0.053773496 = product of:
      0.21509399 = sum of:
        0.21509399 = weight(_text_:java in 4615) [ClassicSimilarity], result of:
          0.21509399 = score(doc=4615,freq=2.0), product of:
            0.4604012 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06532823 = queryNorm
            0.46718815 = fieldWeight in 4615, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=4615)
      0.25 = coord(1/4)
    
    Abstract
    Medical insurance fraud is common in many countries' medical insurance systems and represents a serious threat to the insurance funds and the benefits of patients. In this paper, we present an inference model of medical insurance fraud detection, based on a medical detection domain ontology that incorporates the knowledge base provided by the Medical Terminology, NKIMed, and Chinese Library Classification systems. Through analyzing the behaviors of irregular and fraudulent medical services, we defined the scope of the medical domain ontology relevant to the task and built the ontology about medical sciences and medical service behaviors. The ontology then utilizes Semantic Web Rule Language (SWRL) and Java Expert System Shell (JESS) to detect medical irregularities and mine implicit knowledge. The system can be used to improve the management of medical insurance risks.
  19. Sokal, A.: Transgressing the boundaries : toward a transformative hermeneutics of quantum gravity (1996) 0.05
    0.05117109 = product of:
      0.10234218 = sum of:
        0.025105825 = weight(_text_:und in 3136) [ClassicSimilarity], result of:
          0.025105825 = score(doc=3136,freq=16.0), product of:
            0.14489143 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06532823 = queryNorm
            0.17327337 = fieldWeight in 3136, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.01953125 = fieldNorm(doc=3136)
        0.077236354 = weight(_text_:handle in 3136) [ClassicSimilarity], result of:
          0.077236354 = score(doc=3136,freq=2.0), product of:
            0.42740422 = queryWeight, product of:
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.06532823 = queryNorm
            0.18071032 = fieldWeight in 3136, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.01953125 = fieldNorm(doc=3136)
      0.5 = coord(2/4)
    
    Content
    In 1996, the American physicist Alan Sokal submitted a paper entitled Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity to Social Text, an American cultural-studies journal known for its postmodern orientation. The journal printed it, unchallenged, alongside other papers in a special issue. Shortly after publication, Sokal revealed in another journal, Lingua Franca, that the paper was a parody. He had assembled quotations from various postmodern thinkers, in the typical jargon of that school of thought, into a text whose nonsensical content, he charged, the editors of Social Text should have recognized as such had scholarly standards been observed. The incident triggered a public debate in academic circles and the press (the case even made the front page of the New York Times) over how to judge this episode in particular and the seriousness of postmodern philosophy in general. Sokal and representatives of the criticized circle continued the discussion in further journal articles and defended their positions. In 1997, Sokal and his Belgian colleague Jean Bricmont published a book on the matter entitled Impostures Intellectuelles (translated: Intellectual Impostures; German title: Eleganter Unsinn), in which he explains his theses and illustrates them with texts by prominent postmodern French philosophers (namely Jean Baudrillard, Gilles Deleuze/Félix Guattari, Luce Irigaray, Julia Kristeva, Jacques Lacan, Bruno Latour and Paul Virilio and, although not a postmodernist, as a historical example, Henri Bergson).
 In this book, Sokal and Bricmont - besides defending themselves against the alleged misuse of science - also gave a political motive for their initiative. They identified with the political left and argued that the growing spread of postmodern thought on the left weakened its capacity for effective social criticism. (http://de.wikipedia.org/wiki/Sokal-Aff%C3%A4re)
  20. Arms, W.Y.; Blanchi, C.; Overly, E.A.: ¬An architecture for information in digital libraries (1997) 0.05
    0.04682205 = product of:
      0.1872882 = sum of:
        0.1872882 = weight(_text_:handle in 2260) [ClassicSimilarity], result of:
          0.1872882 = score(doc=2260,freq=6.0), product of:
            0.42740422 = queryWeight, product of:
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.06532823 = queryNorm
            0.43819922 = fieldWeight in 2260, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.5424123 = idf(docFreq=173, maxDocs=44421)
              0.02734375 = fieldNorm(doc=2260)
      0.25 = coord(1/4)
    
    Abstract
    Flexible organization of information is one of the key design challenges in any digital library. For the past year, we have been working with members of the National Digital Library Project (NDLP) at the Library of Congress to build an experimental system to organize and store library collections. This is a report on the work. In particular, we describe how a few technical building blocks are used to organize the material in collections, such as the NDLP's, and how these methods fit into a general distributed computing framework. The technical building blocks are part of a framework that evolved as part of the Computer Science Technical Reports Project (CSTR). This framework is described in the paper, "A Framework for Distributed Digital Object Services", by Robert Kahn and Robert Wilensky (1995). The main building blocks are: "digital objects", which are used to manage digital material in a networked environment; "handles", which identify digital objects and other network resources; and "repositories", in which digital objects are stored. These concepts are amplified in "Key Concepts in the Architecture of the Digital Library", by William Y. Arms (1995). In summer 1995, after earlier experimental development, work began on the implementation of a full digital library system based on this framework. In addition to Kahn/Wilensky and Arms, several working papers further elaborate on the design concepts. A paper by Carl Lagoze and David Ely, "Implementation Issues in an Open Architectural Framework for Digital Object Services", delves into some of the repository concepts. The initial repository implementation was based on a paper by Carl Lagoze, Robert McGrath, Ed Overly and Nancy Yeager, "A Design for Inter-Operable Secure Object Stores (ISOS)". Work on the handle system, which began in 1992, is described in a series of papers that can be found on the Handle Home Page. 
The National Digital Library Program (NDLP) at the Library of Congress is a large scale project to convert historic collections to digital form and make them widely available over the Internet. The program is described in two articles by Caroline R. Arms, "Historical Collections for the National Digital Library". The NDLP itself draws on experience gained through the earlier American Memory Program. Based on this work, we have built a pilot system that demonstrates how digital objects can be used to organize complex materials, such as those found in the NDLP. The pilot was demonstrated to members of the library in July 1996. The pilot system includes the handle system for identifying digital objects, a pilot repository to store them, and two user interfaces: one designed for librarians to manage digital objects in the repository, the other for library patrons to access the materials stored in the repository. Materials from the NDLP's Coolidge Consumerism compilation have been deposited into the pilot repository. They include a variety of photographs and texts, converted to digital form. The pilot demonstrates the use of handles for identifying such material, the use of meta-objects for managing sets of digital objects, and the choice of metadata. We are now implementing an enhanced prototype system for completion in early 1997.

Types

  • a 9476
  • m 2245
  • el 1025
  • x 595
  • s 558
  • i 168
  • r 117
  • ? 66
  • n 55
  • b 47
  • l 23
  • p 23
  • h 17
  • d 15
  • u 14
  • fi 10
  • v 2
  • z 2
  • au 1
  • ms 1
