Search (171 results, page 1 of 9)

  • Filter: theme_ss:"Wissensrepräsentation"
  1. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.14
    0.13976367 = product of:
      0.27952734 = sum of:
        0.25433764 = weight(_text_:java in 1604) [ClassicSimilarity], result of:
          0.25433764 = score(doc=1604,freq=2.0), product of:
            0.46662933 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06621197 = queryNorm
            0.5450528 = fieldWeight in 1604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1604)
        0.025189707 = weight(_text_:und in 1604) [ClassicSimilarity], result of:
          0.025189707 = score(doc=1604,freq=2.0), product of:
            0.14685147 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06621197 = queryNorm
            0.17153187 = fieldWeight in 1604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1604)
      0.5 = coord(2/4)
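The indented tree above is Lucene "explain" output for the ClassicSimilarity ranking formula. As a reader's sanity check (not part of the catalog software), the arithmetic for this first hit can be reproduced directly; every constant below is copied from the tree:

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """One term's contribution: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                  # tf(freq=2.0) = 1.4142135
    query_weight = idf * query_norm       # e.g. 7.0475073 * 0.06621197
    field_weight = tf * idf * field_norm  # e.g. 1.4142135 * 7.0475073 * 0.0546875
    return query_weight * field_weight

QUERY_NORM = 0.06621197  # queryNorm from the tree

java = term_score(freq=2.0, idf=7.0475073, query_norm=QUERY_NORM, field_norm=0.0546875)
und = term_score(freq=2.0, idf=2.217899, query_norm=QUERY_NORM, field_norm=0.0546875)

# coord(2/4): 2 of the 4 query clauses matched this document.
score = (2 / 4) * (java + und)
print(round(score, 8))  # ~0.13976367, the document's displayed score
```

The same arithmetic, with coord(1/4) and different fieldNorm values, reproduces every other score tree on this page.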
    
    Abstract
    iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it may easily be adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby, the Java implementation of the Ruby programming language. To improve the user experience when editing content, iQvoc makes heavy use of the jQuery JavaScript library.
    Theme
    Conception and application of the thesaurus principle
  2. Nix, M.: Die praktische Einsetzbarkeit des CIDOC CRM in Informationssystemen im Bereich des Kulturerbes (2004) 0.11
    0.10882753 = product of:
      0.21765506 = sum of:
        0.18166976 = weight(_text_:java in 729) [ClassicSimilarity], result of:
          0.18166976 = score(doc=729,freq=2.0), product of:
            0.46662933 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06621197 = queryNorm
            0.38932347 = fieldWeight in 729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0390625 = fieldNorm(doc=729)
        0.0359853 = weight(_text_:und in 729) [ClassicSimilarity], result of:
          0.0359853 = score(doc=729,freq=8.0), product of:
            0.14685147 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06621197 = queryNorm
            0.24504554 = fieldWeight in 729, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0390625 = fieldNorm(doc=729)
      0.5 = coord(2/4)
    
    Abstract
    A practically unlimited amount of information is available to us via the World Wide Web. The problem that arises from this is coping with that amount and getting to the information that is needed at any given moment. The overwhelming supply forces professional users and laypeople alike to search, regardless of their demands on the desired information. To make this searching more efficient, one option is to develop more powerful search engines. Another is to structure data better, in order to get at the information it contains. Highly structured data can be processed by machines, so that part of the search work can be automated. The Semantic Web is the vision of an evolved World Wide Web in which data structured in this way is processed by so-called software agents. The progressive structuring of data by content is called semantization. The first part of the thesis sketches some important methods of structuring data by content, in order to clarify the position of ontologies within semantization. The third chapter presents the structure and purpose of the CIDOC Conceptual Reference Model (CRM), a domain ontology for the cultural heritage field. The practical part that follows discusses and implements various approaches to using the CRM. A proposal for implementing the model in XML is developed, one option serving data transport. In addition, the design of a class library in Java is presented, on which the processing and use of the model within an information system can build.
  3. Botana Varela, J.: Unscharfe Wissensrepräsentationen bei der Implementation des Semantic Web (2004) 0.10
    0.09959683 = product of:
      0.19919366 = sum of:
        0.1453358 = weight(_text_:java in 346) [ClassicSimilarity], result of:
          0.1453358 = score(doc=346,freq=2.0), product of:
            0.46662933 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06621197 = queryNorm
            0.31145877 = fieldWeight in 346, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=346)
        0.05385786 = weight(_text_:und in 346) [ClassicSimilarity], result of:
          0.05385786 = score(doc=346,freq=28.0), product of:
            0.14685147 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06621197 = queryNorm
            0.36675057 = fieldWeight in 346, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.03125 = fieldNorm(doc=346)
      0.5 = coord(2/4)
    
    Abstract
    This thesis presents an approach to implementing a knowledge representation with the properties sketched in section 1.1 and with the Semantic Web as its field of application. The thesis is essentially divided into two parts: an investigation part (chapters 2-5), which defines the terminology introduced in section 1.1 and gives a comprehensive overview of the underlying concepts, and an implementation part (chapter 6), in which a semantic search service is developed on the basis of the knowledge gained in the investigation part. Chapter 2 first explains the concept of semantic interpretation and, in this context, distinguishes mainly between data, information and knowledge. Chapter 3 considers knowledge representation from a cognitive perspective and describes the concept of fuzziness in this connection. Chapter 4 describes approaches to knowledge representation and retrieval from both a historical and a current point of view, and discusses the concept of fuzziness in this context. Chapter 5 explains the models currently used on the WWW and their limitations. It then describes, in the context of decision-making, the requirements the WWW places on an adequate knowledge representation, and uses the technologies of the Semantic Web to explain the representation paradigms that meet these requirements. Finally, the Topic Map paradigm is explained. In chapter 6, a prototype is developed on the basis of the findings of the investigation part. It essentially consists of software tools that support the automated, computer-assisted extraction of information, fuzzy modelling, and the retrieval of knowledge.
The tools are implemented in the Java programming language, and Topic Maps are used for the fuzzy knowledge representation. The implementation is presented step by step. Finally, the prototype is evaluated and an outlook on possible future extensions is given. Chapter 7 concludes with a synthesis.
  4. Rolland-Thomas, P.: Thesaural codes : an appraisal of their use in the Library of Congress Subject Headings (1993) 0.06
    0.060731784 = product of:
      0.12146357 = sum of:
        0.014394118 = weight(_text_:und in 674) [ClassicSimilarity], result of:
          0.014394118 = score(doc=674,freq=2.0), product of:
            0.14685147 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06621197 = queryNorm
            0.098018214 = fieldWeight in 674, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.03125 = fieldNorm(doc=674)
        0.10706945 = weight(_text_:heading in 674) [ClassicSimilarity], result of:
          0.10706945 = score(doc=674,freq=2.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.2673296 = fieldWeight in 674, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.03125 = fieldNorm(doc=674)
      0.5 = coord(2/4)
    
    Abstract
    LCSH has been known as such since 1975. It has always created headings to serve the LC collections rather than on a theoretical basis. It began replacing cross-reference codes with thesaural codes in 1986, in a mechanical fashion; it was in no way transformed into a thesaurus. Its encyclopedic coverage and pre-coordinate concepts make it substantially distinct, considering that thesauri usually map a restricted field of knowledge and use uniterms. The questions raised are whether the new symbols comply with thesaurus standards and whether they are true to one or to several models. Explanations and definitions from other lists of subject headings and thesauri, and from the literature of classification and subject indexing, provide some answers. For instance, see refers from a subject heading not used to another or others used; exceptionally it will lead from a specific term to a more general one. Some equate a see reference with the equivalence relationship; such relationships are pointed to by USE in LCSH. See also references are made from the broader subject to narrower parts of it, and also between associated subjects. They suggest lateral or vertical connexions as well as reciprocal relationships; they serve a coordination purpose for some, and lay down a methodical search itinerary for others. Since their inception in the 1950s, thesauri have been devised for indexing and retrieving information in the fields of science and technology, and eventually attended to a number of social sciences and humanities. Research derived from thesauri was voluminous, and numerous guidelines have been designed; they did not discriminate between the "hard" sciences and the social sciences. RT relationships are widely but diversely used in numerous controlled vocabularies. LCSH's aim is to achieve a list almost free of RT and SA references; it thus restricts relationships to BT/NT, USE and UF.
This raises the question of whether all fields of knowledge can "fit" in the Procrustean bed of BT/NT, i.e., genus/species relationships. Standard codes were devised, and it was soon realized that BT/NT, well suited to the genus/species couple, could not signal a whole-part relationship. In LCSH, BT and NT function as reciprocals; the whole-part relationship is taken into account by ISO and amply elaborated upon by authors, and the part-whole connexion is sometimes studied apart. The decision to replace cross-reference codes was an improvement: relations can now be distinguished, though the distinct needs of numerous fields of knowledge are not attended to. Topic inclusion, and topic-subtopic, could provide the missing link where genus/species or whole/part are inadequate. Distinct codes, BT/NT and whole/part, should be provided. Sorting relationships with mechanical means can only lead to confusion.
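The reciprocal reference pairs the abstract describes (USE/UF for equivalence, BT/NT for hierarchy) can be modelled in a few lines. This is an illustrative toy, not taken from the article; the sample headings are invented:

```python
# Reciprocal LCSH-style reference codes; RT (related term) is its own reciprocal.
RECIPROCAL = {"USE": "UF", "UF": "USE", "BT": "NT", "NT": "BT", "RT": "RT"}

def add_reference(refs, source, rel, target):
    """Record a reference and its reciprocal on the target heading."""
    refs.setdefault(source, []).append((rel, target))
    refs.setdefault(target, []).append((RECIPROCAL[rel], source))

refs = {}
# A whole-part relation forced into the BT/NT "Procrustean bed":
add_reference(refs, "Ships", "NT", "Hulls (Naval architecture)")
# An equivalence relation:
add_reference(refs, "Automobiles", "UF", "Cars")

print(refs["Hulls (Naval architecture)"])  # [('BT', 'Ships')]
print(refs["Cars"])                        # [('USE', 'Automobiles')]
```

Note that the single RECIPROCAL table is exactly the limitation the article criticizes: a distinct whole/part code pair would need its own entries alongside BT/NT.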
    Theme
    Conception and application of the thesaurus principle
  5. Tang, X.-B.; Wei Wei, G,-C.L.; Zhu, J.: An inference model of medical insurance fraud detection : based on ontology and SWRL (2017) 0.05
    0.054500923 = product of:
      0.21800369 = sum of:
        0.21800369 = weight(_text_:java in 4615) [ClassicSimilarity], result of:
          0.21800369 = score(doc=4615,freq=2.0), product of:
            0.46662933 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06621197 = queryNorm
            0.46718815 = fieldWeight in 4615, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=4615)
      0.25 = coord(1/4)
    
    Abstract
    Medical insurance fraud is common in many countries' medical insurance systems and represents a serious threat to the insurance funds and the benefits of patients. In this paper, we present an inference model of medical insurance fraud detection, based on a medical detection domain ontology that incorporates the knowledge base provided by the Medical Terminology, NKIMed, and Chinese Library Classification systems. Through analyzing the behaviors of irregular and fraudulent medical services, we defined the scope of the medical domain ontology relevant to the task and built the ontology about medical sciences and medical service behaviors. The ontology then utilizes Semantic Web Rule Language (SWRL) and Java Expert System Shell (JESS) to detect medical irregularities and mine implicit knowledge. The system can be used to improve the management of medical insurance risks.
  6. Miles, A.; Pérez-Agüera, J.R.: SKOS: Simple Knowledge Organisation for the Web (2006) 0.05
    0.046842884 = product of:
      0.18737154 = sum of:
        0.18737154 = weight(_text_:heading in 1504) [ClassicSimilarity], result of:
          0.18737154 = score(doc=1504,freq=2.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.4678268 = fieldWeight in 1504, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1504)
      0.25 = coord(1/4)
    
    Abstract
    This article introduces the Simple Knowledge Organisation System (SKOS), a Semantic Web language for representing controlled structured vocabularies, including thesauri, classification schemes, subject heading systems and taxonomies. SKOS provides a framework for publishing thesauri, classification schemes, and subject indexes on the Web, and for applying these systems to resource collections that are part of the Semantic Web. Semantic Web applications may harvest and merge SKOS data to integrate and enhance retrieval services across multiple collections (e.g. libraries). This article also describes some alternatives for integrating Semantic Web services based on the Resource Description Framework (RDF) and SKOS into a distributed enterprise architecture.
  7. Wu, D.; Shi, J.: Classical music recording ontology used in a library catalog (2016) 0.05
    0.04541744 = product of:
      0.18166976 = sum of:
        0.18166976 = weight(_text_:java in 4179) [ClassicSimilarity], result of:
          0.18166976 = score(doc=4179,freq=2.0), product of:
            0.46662933 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06621197 = queryNorm
            0.38932347 = fieldWeight in 4179, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0390625 = fieldNorm(doc=4179)
      0.25 = coord(1/4)
    
    Abstract
    In order to improve the organization of classical music information resources, we constructed a classical music recording ontology, on top of which we then designed an online classical music catalog. Our construction of the classical music recording ontology consisted of three steps: identifying the purpose, analyzing the ontology, and encoding the ontology. We identified the main classes and properties of the domain by investigating classical music recording resources and users' information needs. We implemented the ontology in the Web Ontology Language (OWL) using five steps: transforming the properties, encoding the transformed properties, defining ranges of the properties, constructing individuals, and standardizing the ontology. In constructing the online catalog, we first designed the structure and functions of the catalog based on investigations into users' information needs and information-seeking behaviors. Then we extracted classes and properties of the ontology using the Apache Jena application programming interface (API), and constructed a catalog in the Java environment. The catalog provides a hierarchical main page (built using the Functional Requirements for Bibliographic Records (FRBR) model), a classical music information network and integrated information service; this combination of features greatly eases the task of finding classical music recordings and more information about classical music.
  8. SKOS Simple Knowledge Organization System Reference : W3C Recommendation 18 August 2009 (2009) 0.04
    0.040151045 = product of:
      0.16060418 = sum of:
        0.16060418 = weight(_text_:heading in 688) [ClassicSimilarity], result of:
          0.16060418 = score(doc=688,freq=2.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.40099442 = fieldWeight in 688, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.046875 = fieldNorm(doc=688)
      0.25 = coord(1/4)
    
    Abstract
    This document defines the Simple Knowledge Organization System (SKOS), a common data model for sharing and linking knowledge organization systems via the Web. Many knowledge organization systems, such as thesauri, taxonomies, classification schemes and subject heading systems, share a similar structure, and are used in similar applications. SKOS captures much of this similarity and makes it explicit, to enable data and technology sharing across diverse applications. The SKOS data model provides a standard, low-cost migration path for porting existing knowledge organization systems to the Semantic Web. SKOS also provides a lightweight, intuitive language for developing and sharing new knowledge organization systems. It may be used on its own, or in combination with formal knowledge representation languages such as the Web Ontology Language (OWL). This document is the normative specification of the Simple Knowledge Organization System. It is intended for readers who are involved in the design and implementation of information systems, and who already have a good understanding of Semantic Web technology, especially RDF and OWL. For an informative guide to using SKOS, see the [SKOS-PRIMER].
  9. SKOS Core Guide (2005) 0.04
    0.040151045 = product of:
      0.16060418 = sum of:
        0.16060418 = weight(_text_:heading in 689) [ClassicSimilarity], result of:
          0.16060418 = score(doc=689,freq=2.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.40099442 = fieldWeight in 689, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.046875 = fieldNorm(doc=689)
      0.25 = coord(1/4)
    
    Abstract
    SKOS Core provides a model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, 'folksonomies', other types of controlled vocabulary, and also concept schemes embedded in glossaries and terminologies. The SKOS Core Vocabulary is an application of the Resource Description Framework (RDF) that can be used to express a concept scheme as an RDF graph. Using RDF allows data to be linked to and/or merged with other data, enabling data sources to be distributed across the web while still being meaningfully composed and integrated. This document is a guide to using the SKOS Core Vocabulary, for readers who already have a basic understanding of RDF concepts. This edition of the SKOS Core Guide [SKOS Core Guide] is a W3C Public Working Draft. It is the authoritative guide to recommended usage of the SKOS Core Vocabulary at the time of publication.
  10. SKOS Simple Knowledge Organization System Primer (2009) 0.04
    0.040151045 = product of:
      0.16060418 = sum of:
        0.16060418 = weight(_text_:heading in 795) [ClassicSimilarity], result of:
          0.16060418 = score(doc=795,freq=2.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.40099442 = fieldWeight in 795, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.046875 = fieldNorm(doc=795)
      0.25 = coord(1/4)
    
    Abstract
    SKOS (Simple Knowledge Organisation System) provides a model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, and other types of controlled vocabulary. As an application of the Resource Description Framework (RDF), SKOS allows concepts to be documented, linked and merged with other data, while still being composed, integrated and published on the World Wide Web. This document is an implementors' guide for those who would like to represent their concept scheme using SKOS. In basic SKOS, conceptual resources (concepts) can be identified using URIs, labelled with strings in one or more natural languages, documented with various types of notes, semantically related to each other in informal hierarchies and association networks, and aggregated into distinct concept schemes. In advanced SKOS, conceptual resources can be mapped to conceptual resources in other schemes and grouped into labelled or ordered collections. Concept labels can also be related to each other. Finally, the SKOS vocabulary itself can be extended to suit the needs of particular communities of practice.
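The basic-SKOS pattern the Primer describes — URI-identified concepts, language-tagged labels, informal hierarchies, and concept schemes — can be sketched without any RDF library. This is a hand-rolled illustration; all URIs and labels are invented, and a real application would use an RDF toolkit:

```python
SKOS = "http://www.w3.org/2004/02/skos/core#"
EX = "http://example.org/scheme/"  # hypothetical namespace for the toy scheme

triples = set()

def add_concept(uri, label, scheme, broader=None):
    """Assert the minimal SKOS triples for one concept."""
    triples.add((uri, SKOS + "prefLabel", (label, "en")))  # language-tagged label
    triples.add((uri, SKOS + "inScheme", scheme))
    if broader is not None:
        triples.add((uri, SKOS + "broader", broader))
        triples.add((broader, SKOS + "narrower", uri))     # reciprocal link

add_concept(EX + "mammals", "mammals", EX + "animals")
add_concept(EX + "cats", "cats", EX + "animals", broader=EX + "mammals")

print(len(triples))  # 6 triples in the toy scheme
```

Because every statement is a plain triple, SKOS data expressed this way can be merged with any other RDF graph, which is the Primer's central selling point.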
  11. Vlachidis, A.; Binding, C.; Tudhope, D.; May, K.: Excavating grey literature : a case study on the rich indexing of archaeological documents via natural language-processing techniques and knowledge-based resources (2010) 0.04
    0.03633395 = product of:
      0.1453358 = sum of:
        0.1453358 = weight(_text_:java in 935) [ClassicSimilarity], result of:
          0.1453358 = score(doc=935,freq=2.0), product of:
            0.46662933 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06621197 = queryNorm
            0.31145877 = fieldWeight in 935, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=935)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - This paper sets out to discuss the use of information extraction (IE), a natural language processing (NLP) technique, to assist "rich" semantic indexing of diverse archaeological text resources. The focus of the research is to direct a semantic-aware "rich" indexing of diverse natural language resources with properties capable of satisfying information retrieval from online publications and datasets associated with the Semantic Technologies for Archaeological Resources (STAR) project. Design/methodology/approach - The paper proposes use of the English Heritage extension (CRM-EH) of the standard core ontology in cultural heritage, CIDOC CRM, and exploitation of domain thesauri resources for driving and enhancing an Ontology-Oriented Information Extraction process. The process of semantic indexing is based on a rule-based Information Extraction technique, which is facilitated by the General Architecture for Text Engineering (GATE) toolkit and expressed by Java Annotation Pattern Engine (JAPE) rules. Findings - Initial results suggest that the combination of information extraction with knowledge resources and standard conceptual models is capable of supporting semantic-aware term indexing. Additional efforts are required for further exploitation of the technique and adoption of formal evaluation methods for assessing the performance of the method in measurable terms. Originality/value - The value of the paper lies in the semantic indexing of 535 unpublished online documents often referred to as "Grey Literature", from the Archaeological Data Service OASIS corpus (Online AccesS to the Index of archaeological investigationS), with respect to the CRM ontological concepts E49.Time Appellation and P19.Physical Object.
  12. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.03
    0.027250461 = product of:
      0.109001845 = sum of:
        0.109001845 = weight(_text_:java in 378) [ClassicSimilarity], result of:
          0.109001845 = score(doc=378,freq=2.0), product of:
            0.46662933 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06621197 = queryNorm
            0.23359407 = fieldWeight in 378, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0234375 = fieldNorm(doc=378)
      0.25 = coord(1/4)
    
    Content
    Long Papers * Suggestions for OWL 3, Pascal Hitzler. * BestMap: Context-Aware SKOS Vocabulary Mappings in OWL 2, Rinke Hoekstra. * Mechanisms for Importing Modules, Bijan Parsia, Ulrike Sattler and Thomas Schneider. * A Syntax for Rules in OWL 2, Birte Glimm, Matthew Horridge, Bijan Parsia and Peter Patel-Schneider. * PelletSpatial: A Hybrid RCC-8 and RDF/OWL Reasoning and Query Engine, Markus Stocker and Evren Sirin. * The OWL API: A Java API for Working with OWL 2 Ontologies, Matthew Horridge and Sean Bechhofer. * From Justifications to Proofs for Entailments in OWL, Matthew Horridge, Bijan Parsia and Ulrike Sattler. * A Solution for the Man-Man Problem in the Family History Knowledge Base, Dmitry Tsarkov, Ulrike Sattler and Robert Stevens. * Towards Integrity Constraints in OWL, Evren Sirin and Jiao Tao. * Processing OWL2 ontologies using Thea: An application of logic programming, Vangelis Vassiliadis, Jan Wielemaker and Chris Mungall. * Reasoning in Metamodeling Enabled Ontologies, Nophadol Jekjantuk, Gerd Gröner and Jeff Z. Pan.
  13. Gnoli, C.: Fundamentos ontológicos de la organización del conocimiento : la teoría de los niveles integrativos aplicada al orden de cita (2011) 0.03
    0.026767362 = product of:
      0.10706945 = sum of:
        0.10706945 = weight(_text_:heading in 3659) [ClassicSimilarity], result of:
          0.10706945 = score(doc=3659,freq=2.0), product of:
            0.40051475 = queryWeight, product of:
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.06621197 = queryNorm
            0.2673296 = fieldWeight in 3659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0489783 = idf(docFreq=284, maxDocs=44421)
              0.03125 = fieldNorm(doc=3659)
      0.25 = coord(1/4)
    
    Abstract
    The field of knowledge organization (KO) can be described as composed of the four distinct but connected layers of theory, systems, representation, and application. This paper focuses on the relations between KO theory and KO systems. It is acknowledged how the structure of KO systems is the product of a mixture of ontological, epistemological, and pragmatic factors. However, different systems give different priorities to each factor. A more ontologically oriented approach, though not offering quick solutions for any particular group of users, will produce systems of wide and long-lasting application, as they are based on general, shareable principles. I take the case of the ontological theory of integrative levels, which has been considered a useful source for general classifications for several decades, and is currently implemented in the Integrative Levels Classification system. The theory produces a sequence of main classes modelling a natural order between phenomena. This order also has interesting effects on other features of the system, like the citation order of concepts within compounds. As facet analytical theory has shown, it is useful for citation order to follow a principle of inversion, as compared to the order of the same concepts in the schedules. In the light of integrative levels theory, this principle also acquires an ontological meaning: phenomena of lower level should be cited first, as most often they act as specifications of higher-level ones. This ontological principle should be complemented by consideration of the epistemological treatment of phenomena: where a lower-level phenomenon is the main theme, it can be promoted to the leading position in the compound subject heading. The integration of these principles is believed to produce optimal results in the ordering of knowledge contents.
  14. Wildgen, W.: Semantischer Realismus und Antirealismus in der Sprachtheorie (1992) 0.02
    0.02413967 = product of:
      0.09655868 = sum of:
        0.09655868 = weight(_text_:und in 2139) [ClassicSimilarity], result of:
          0.09655868 = score(doc=2139,freq=10.0), product of:
            0.14685147 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06621197 = queryNorm
            0.6575262 = fieldWeight in 2139, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.09375 = fieldNorm(doc=2139)
      0.25 = coord(1/4)
    
    Series
    Philosophie und Geschichte der Wissenschaften; Bd.18
    Source
    Wirklichkeit und Wissen: Realismus, Antirealismus und Wirklichkeits-Konzeptionen in Philosophie und Wissenschaften. Hrsg.: H.J. Sandkühler
  15. Sandkühler, H.J.: Epistemologischer Realismus und die Wirklichkeit des Wissens : eine Verteidigung der Philosophie des Geistes gegen Naturalismus und Reduktionismus (1992) 0.02
    
    Series
    Philosophie und Geschichte der Wissenschaften; Bd.18
    Source
    Wirklichkeit und Wissen: Realismus, Antirealismus und Wirklichkeits-Konzeptionen in Philosophie und Wissenschaften. Hrsg.: H.J. Sandkühler
  16. Roth, G.; Schwegler, H.: Kognitive Referenz und Selbstreferentialität des Gehirns : ein Beitrag zur Klärung des Verhältnisses zwischen Erkenntnistheorie und Hirnforschung (1992) 0.02
    
    Series
    Philosophie und Geschichte der Wissenschaften; Bd.18
    Source
    Wirklichkeit und Wissen: Realismus, Antirealismus und Wirklichkeits-Konzeptionen in Philosophie und Wissenschaften. Hrsg.: H.J. Sandkühler
  17. Kutschera, F. von: Der erkenntnistheoretische Realismus (1992) 0.02
    
    Series
    Philosophie und Geschichte der Wissenschaften; Bd.18
    Source
    Wirklichkeit und Wissen: Realismus, Antirealismus und Wirklichkeits-Konzeptionen in Philosophie und Wissenschaften. Hrsg.: H.J. Sandkühler
  18. Franzen, W.: Idealismus statt Realismus? : Realismus plus Skeptizismus! (1992) 0.02
    
    Series
    Philosophie und Geschichte der Wissenschaften; Bd.18
    Source
    Wirklichkeit und Wissen: Realismus, Antirealismus und Wirklichkeits-Konzeptionen in Philosophie und Wissenschaften. Hrsg.: H.J. Sandkühler
  19. Baumer, C.; Reichenberger, K.: Business Semantics - Praxis und Perspektiven (2006) 0.02
    
    Abstract
    The article introduces semantic technologies and offers insight into different directions of development. In particular, Business Semantics are presented and distinguished from the Semantic Web. The strengths of Business Semantics are illustrated specifically through the practical examples of the Knowledge Portal and the "Knowledge Base" project of Wienerberger AG. In this way, the requirements - what do enterprise applications need today - and the capabilities of systems - what do Business Semantics offer - are made concrete and set against each other.
    Source
    Information - Wissenschaft und Praxis. 57(2006) H.6/7, S.359-366
  20. Kunze, C.: Lexikalisch-semantische Wortnetze in Sprachwissenschaft und Sprachtechnologie (2006) 0.02
    
    Abstract
    This contribution describes the structuring principles and application contexts of lexical-semantic wordnets, in particular the German wordnet GermaNet. Wordnets are currently especially popular electronic lexical resources that contain broad coverage of semantically structured data for various languages and language groups. Wordnets represent the most frequent and most important concepts of a language together with their elementary semantic relations. Central applications of wordnets include word sense disambiguation and information indexing. The article sketches the latest scenarios in which GermaNet is being used: semantic information indexing and the integration of general-language wordnets with terminological resources against the background of data conversion to OWL.
    Source
    Information - Wissenschaft und Praxis. 57(2006) H.6/7, S.309-314

Languages

  • d 119
  • e 46
  • pt 1
  • sp 1

Types

  • a 101
  • el 41
  • x 21
  • m 18
  • r 7
  • n 4
  • s 4
  • p 1
