-
Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010)
0.14
0.13631633 = product of:
0.27263266 = sum of:
0.24806426 = weight(_text_:java in 1604) [ClassicSimilarity], result of:
0.24806426 = score(doc=1604,freq=2.0), product of:
0.45511967 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06457882 = queryNorm
0.5450528 = fieldWeight in 1604, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0546875 = fieldNorm(doc=1604)
0.024568388 = weight(_text_:und in 1604) [ClassicSimilarity], result of:
0.024568388 = score(doc=1604,freq=2.0), product of:
0.14322929 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06457882 = queryNorm
0.17153187 = fieldWeight in 1604, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=1604)
0.5 = coord(2/4)
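The relevance values throughout this listing are Lucene ClassicSimilarity (tf-idf) explain trees: each matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(termFreq) x idf x fieldNorm, and the per-term contributions are summed and multiplied by the coordination factor coord (here 2 of the 4 query clauses match the record). Reconstructed from the numbers reported above, the first score works out as:

```latex
\begin{aligned}
\text{java: } & (7.0475073 \cdot 0.06457882)\cdot(\sqrt{2}\cdot 7.0475073 \cdot 0.0546875)
  = 0.45511967 \cdot 0.5450528 = 0.24806426\\
\text{und: }  & (2.217899 \cdot 0.06457882)\cdot(\sqrt{2}\cdot 2.217899 \cdot 0.0546875)
  = 0.14322929 \cdot 0.17153187 = 0.024568388\\
\text{total: } & \mathrm{coord}(2/4)\cdot(0.24806426 + 0.024568388) = 0.5 \cdot 0.27263266 = 0.13631633
\end{aligned}
```

The 0.14 at the head of the record is this value rounded to two decimals; every score block below follows the same pattern.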
- Abstract
- iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it can easily be adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby, the Java implementation of the Ruby programming language. To improve the user experience when editing content, iQvoc makes heavy use of the JavaScript library jQuery.
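For readers unfamiliar with SKOS-XL, the sketch below shows what such a vocabulary looks like to client code. It is not iQvoc code (iQvoc, as the abstract says, is a Ruby on Rails application); it is a minimal Java illustration that assumes Apache Jena on the classpath and a hypothetical local file vocabulary.ttl, and simply lists each concept together with the literal form of its skosxl:prefLabel.

```java
import org.apache.jena.rdf.model.*;

/** Minimal sketch (not iQvoc code): print SKOS-XL preferred labels from an RDF file. */
public class SkosXlLabels {
    static final String SKOSXL = "http://www.w3.org/2008/05/skos-xl#";

    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.read(args.length > 0 ? args[0] : "vocabulary.ttl"); // hypothetical input file

        Property prefLabel = model.createProperty(SKOSXL, "prefLabel");
        Property literalForm = model.createProperty(SKOSXL, "literalForm");

        // In SKOS-XL a concept points to a label *resource* via skosxl:prefLabel;
        // the label resource carries the actual text in skosxl:literalForm.
        StmtIterator it = model.listStatements(null, prefLabel, (RDFNode) null);
        while (it.hasNext()) {
            Statement s = it.next();
            Statement form = s.getResource().getProperty(literalForm);
            if (form != null) {
                System.out.println(s.getSubject() + " -> " + form.getString());
            }
        }
    }
}
```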
- Theme
- Konzeption und Anwendung des Prinzips Thesaurus
-
Groß, M.; Rusch, B.: Open Source Programm Mable+ zur Analyse von Katalogdaten veröffentlicht (2011)
0.12
0.12455056 = product of:
0.24910112 = sum of:
0.21262652 = weight(_text_:java in 1181) [ClassicSimilarity], result of:
0.21262652 = score(doc=1181,freq=2.0), product of:
0.45511967 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06457882 = queryNorm
0.46718815 = fieldWeight in 1181, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.046875 = fieldNorm(doc=1181)
0.0364746 = weight(_text_:und in 1181) [ClassicSimilarity], result of:
0.0364746 = score(doc=1181,freq=6.0), product of:
0.14322929 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06457882 = queryNorm
0.25465882 = fieldWeight in 1181, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.046875 = fieldNorm(doc=1181)
0.5 = coord(2/4)
- Abstract
- As one of the outcomes of the strategic alliance concluded between BVB and KOBV in 2007, Mable+, a Java-based open source tool for the automatic data and error analysis of library catalogues, was released on 12 September 2011. Based on the MAB data exchange format, Mable+ performs formal checks on catalogue data combined with a statistical evaluation of the distribution of fields. It requires a MAB dump of the catalogue in MAB2 tape format with the MAB2 character set; this data package is analysed within a few minutes. The result is a report with general statistics on the checked records (distribution of record types, number of MAB fields, etc.) as well as a list of the errors found. The software has already been used successfully in the migration of the catalogue data of all KOBV libraries into the B3Kat. General information and various instructions for using the program can be found on the project website http://mable.kobv.de/, and the software can be downloaded at http://mable.kobv.de/download.html. A follow-up concept for using and modifying the software is currently being developed.
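To make the kind of statistic mentioned above (the distribution of MAB fields) concrete, here is a deliberately simplified Java sketch. It is not Mable+ code, and it assumes a line-oriented dump in which every field line starts with a three-digit MAB tag; the real MAB2 tape format is more involved, so this only illustrates the counting idea.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;
import java.util.TreeMap;

/** Toy sketch (not Mable+): count how often each three-digit field tag occurs in a dump. */
public class FieldDistribution {
    public static void main(String[] args) throws IOException {
        String file = args.length > 0 ? args[0] : "catalog.mab"; // hypothetical dump file
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : Files.readAllLines(Paths.get(file))) {
            // Assumption: a field line begins with its numeric tag, e.g. a tag such as 331.
            if (line.length() >= 3 && line.substring(0, 3).chars().allMatch(Character::isDigit)) {
                counts.merge(line.substring(0, 3), 1, Integer::sum);
            }
        }
        counts.forEach((tag, n) -> System.out.println(tag + "\t" + n));
    }
}
```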
-
Jaeger, L.: ¬Die gefährlichen Ideologen von Silicon Valley : Technologische Allmachtsphantasien (2019)
0.12
0.121565916 = product of:
0.24313183 = sum of:
0.0364746 = weight(_text_:und in 740) [ClassicSimilarity], result of:
0.0364746 = score(doc=740,freq=6.0), product of:
0.14322929 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06457882 = queryNorm
0.25465882 = fieldWeight in 740, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.046875 = fieldNorm(doc=740)
0.20665723 = weight(_text_:hoffmann in 740) [ClassicSimilarity], result of:
0.20665723 = score(doc=740,freq=2.0), product of:
0.4486857 = queryWeight, product of:
6.9478774 = idf(docFreq=115, maxDocs=44421)
0.06457882 = queryNorm
0.4605835 = fieldWeight in 740, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.9478774 = idf(docFreq=115, maxDocs=44421)
0.046875 = fieldNorm(doc=740)
0.5 = coord(2/4)
- Abstract
- In the 1970s, biologists achieved a major breakthrough: the discovery of so-called restriction enzymes enabled them to carry out "gene transplantations". It was the birth of genetic engineering. Artificial genes produced specific proteins with which human diseases could be treated. With this form of "genetic engineering", the life sciences captured the imagination and the interest of entrepreneurs at a stroke. One pioneer of this development was the molecular biologist Herbert Boyer. In 1976 he met the manager and financial investor Robert Swanson to explain his results to him. Together they founded a company that was to turn Boyer's research findings into concrete medical products. South of San Francisco, where numerous new computer firms were springing up at the same time, the company Genentech was established. In 1982 Genentech brought the first genetically engineered drug, insulin, to market. Swanson and Boyer sold their company in 1990 for 2.1 billion US dollars to the Swiss pharmaceutical company Hoffmann-La Roche, making Boyer the first scientist billionaire in history.
-
Lanier, J.: Zehn Gründe, warum du deine Social Media Accounts sofort löschen musst (2018)
0.09
0.086080045 = product of:
0.17216009 = sum of:
0.034388583 = weight(_text_:und in 448) [ClassicSimilarity], result of:
0.034388583 = score(doc=448,freq=12.0), product of:
0.14322929 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06457882 = queryNorm
0.24009462 = fieldWeight in 448, product of:
3.4641016 = tf(freq=12.0), with freq of:
12.0 = termFreq=12.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.03125 = fieldNorm(doc=448)
0.1377715 = weight(_text_:hoffmann in 448) [ClassicSimilarity], result of:
0.1377715 = score(doc=448,freq=2.0), product of:
0.4486857 = queryWeight, product of:
6.9478774 = idf(docFreq=115, maxDocs=44421)
0.06457882 = queryNorm
0.30705568 = fieldWeight in 448, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.9478774 = idf(docFreq=115, maxDocs=44421)
0.03125 = fieldNorm(doc=448)
0.5 = coord(2/4)
- Abstract
- To read "Zehn Gründe...", a single reason suffices: Jaron Lanier. There is no getting around the most important voice warning against data misuse, social media stupefaction and the fatal free-of-charge mentality on the net these days. (Frank Schätzing) A book that everyone who moves about on the net must read. Jaron Lanier, tech guru and pioneering thinker of the internet, delivers ten compelling reasons why we have to be done with social media. Facebook, Google & Co. monitor us, manipulate our behaviour, make politics impossible and turn us into nasty, self-righteous people. Social media has become an omnipresent cage from which we cannot escape. Lanier has written a stirring book that conveys his insights as a Silicon Valley insider and prompts us to rethink our own behaviour on social networks. If we do not want to lose the battle with the madness of our time, only one option remains: let us delete all our accounts!
- Imprint
- Hamburg : Hoffmann und Campe
- Issue
- 2nd edition. Translated from the American English by Martin Bayer and Karsten Petersen.
-
Lorenzon, E.J.; Gracioso, L. de Souza; Silva, M.D.P. da; Tinelli, M.; Amaral, R.M.; Faria, L.I.L. de; Hoffmann, W.A.M.: Controlled vocabulary used in intelligence information system for shoes (2012)
0.06
0.06027503 = product of:
0.24110012 = sum of:
0.24110012 = weight(_text_:hoffmann in 1863) [ClassicSimilarity], result of:
0.24110012 = score(doc=1863,freq=2.0), product of:
0.4486857 = queryWeight, product of:
6.9478774 = idf(docFreq=115, maxDocs=44421)
0.06457882 = queryNorm
0.53734744 = fieldWeight in 1863, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.9478774 = idf(docFreq=115, maxDocs=44421)
0.0546875 = fieldNorm(doc=1863)
0.25 = coord(1/4)
-
Croft, W.B.; Metzler, D.; Strohman, T.: Search engines : information retrieval in practice (2010)
0.05
0.05315663 = product of:
0.21262652 = sum of:
0.21262652 = weight(_text_:java in 3605) [ClassicSimilarity], result of:
0.21262652 = score(doc=3605,freq=2.0), product of:
0.45511967 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06457882 = queryNorm
0.46718815 = fieldWeight in 3605, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.046875 = fieldNorm(doc=3605)
0.25 = coord(1/4)
- Abstract
- For introductory information retrieval courses at the undergraduate and graduate level in computer science, information science and computer engineering departments. Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice is designed to give undergraduate students the understanding and tools they need to evaluate, compare and modify search engines. Coverage of the underlying IR and mathematical models reinforces key concepts. The book's numerous programming exercises make extensive use of Galago, a Java-based open source search engine. SUPPLEMENTS / Extensive lecture slides (in PDF and PPT format) / Solutions to selected end-of-chapter problems (instructors only) / Test collections for exercises / Galago search engine
-
Tang, X.-B.; Wei Wei, G.-C.L.; Zhu, J.: ¬An inference model of medical insurance fraud detection : based on ontology and SWRL (2017)
0.05
0.05315663 = product of:
0.21262652 = sum of:
0.21262652 = weight(_text_:java in 4615) [ClassicSimilarity], result of:
0.21262652 = score(doc=4615,freq=2.0), product of:
0.45511967 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06457882 = queryNorm
0.46718815 = fieldWeight in 4615, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.046875 = fieldNorm(doc=4615)
0.25 = coord(1/4)
- Abstract
- Medical insurance fraud is common in many countries' medical insurance systems and represents a serious threat to the insurance funds and the benefits of patients. In this paper, we present an inference model of medical insurance fraud detection, based on a medical detection domain ontology that incorporates the knowledge base provided by the Medical Terminology, NKIMed, and Chinese Library Classification systems. Through analyzing the behaviors of irregular and fraudulent medical services, we defined the scope of the medical domain ontology relevant to the task and built an ontology of medical sciences and medical service behaviors. The model then uses the Semantic Web Rule Language (SWRL) and the Java Expert System Shell (JESS) to detect medical irregularities and mine implicit knowledge. The system can be used to improve the management of medical insurance risks.
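To illustrate the rule layer the abstract refers to, the block below shows a hypothetical SWRL rule embedded in a small Java program. The class and property names (Patient, hasPrescription, prescribesDrug, incompatibleWith, flaggedAsIrregular) are invented for demonstration and are not taken from the paper's ontology; in the described workflow such a rule would be attached to the OWL ontology and executed by the JESS engine.

```java
/**
 * Hypothetical SWRL rule of the kind described above (all names invented):
 * if a patient holds two prescriptions whose drugs are declared incompatible,
 * flag the patient as irregular.
 */
public class FraudRuleExample {
    static final String RULE =
        "Patient(?p) ^ hasPrescription(?p, ?r1) ^ hasPrescription(?p, ?r2) ^ " +
        "prescribesDrug(?r1, ?d1) ^ prescribesDrug(?r2, ?d2) ^ incompatibleWith(?d1, ?d2) " +
        "-> flaggedAsIrregular(?p, true)";

    public static void main(String[] args) {
        // The sketch stops at the rule text; wiring it into an OWL/SWRL API and JESS
        // is the part the paper describes and is omitted here.
        System.out.println(RULE);
    }
}
```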
-
Lutz, C.; Hoffmann, C.P.; Meckel, M.: Online serendipity : a contextual differentiation of antecedents and outcomes (2017)
0.05
0.051664308 = product of:
0.20665723 = sum of:
0.20665723 = weight(_text_:hoffmann in 4689) [ClassicSimilarity], result of:
0.20665723 = score(doc=4689,freq=2.0), product of:
0.4486857 = queryWeight, product of:
6.9478774 = idf(docFreq=115, maxDocs=44421)
0.06457882 = queryNorm
0.4605835 = fieldWeight in 4689, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.9478774 = idf(docFreq=115, maxDocs=44421)
0.046875 = fieldNorm(doc=4689)
0.25 = coord(1/4)
-
Categories, contexts and relations in knowledge organization : Proceedings of the Twelfth International ISKO Conference 6-9 August 2012, Mysore, India (2012)
0.05
0.04925805 = product of:
0.0985161 = sum of:
0.012408911 = weight(_text_:und in 1986) [ClassicSimilarity], result of:
0.012408911 = score(doc=1986,freq=4.0), product of:
0.14322929 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06457882 = queryNorm
0.086636685 = fieldWeight in 1986, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.01953125 = fieldNorm(doc=1986)
0.08610719 = weight(_text_:hoffmann in 1986) [ClassicSimilarity], result of:
0.08610719 = score(doc=1986,freq=2.0), product of:
0.4486857 = queryWeight, product of:
6.9478774 = idf(docFreq=115, maxDocs=44421)
0.06457882 = queryNorm
0.1919098 = fieldWeight in 1986, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.9478774 = idf(docFreq=115, maxDocs=44421)
0.01953125 = fieldNorm(doc=1986)
0.5 = coord(2/4)
- BK
- 02.14 Organisation von Wissenschaft und Kultur
- Classification
- 02.14 Organisation von Wissenschaft und Kultur
- Content
- KNOWLEDGE ORGANIZATION FOR ARCHIVES Renato Rocha Souza, Flávio Codeço Coelho and Suemi Higuchi. The CPDOC Semantic Portal: Applying Semantic and Knowledge Organization Systems to the Brazilian Contemporary History Domain - Natália Bolfarini Tognoli and José Augusto Chaves Guimarães. Challenges of Knowledge Representation in Contemporary Archival Science - Thiago Henrique Bragato Barros and João Batista Ernesto de Moraes. Archival Classification and Knowledge Organization: Theoretical Possibilities for the Archival Field - Pekka Henttonen. Diversity of Knowledge Organization in Records and Archives Management DESIGN AND DEVELOPMENT OF KNOWLEDGE ORGANIZATION TOOLS Leonard Will. The ISO 25964 Data Model for the Structure of an Information Retrieval Thesaurus - Wieslaw Babik. A Faceted Classification of Cartographic Materials: Problems of Construction and Use - Ming-Shu Yuan, Fan-Hua Nan and Gou-Chi Lee. Constructing Knowledge Classification Scheme in Industrial Technology via Domain Analysis: An Empirical Study - B.L. Vinod Kumar and Khaiser Nikam. Sanskrit-English Bilingual Thesaurus for Yogic Sciences: A Case Study of Problems and Issues with Terms of Non-Latin Origin - Emilena Josemary Lorenzon, Luciana de Souza Gracioso, Marco Donizete Paulino da Silva, Marcele Tinelli, Roniberto Morato Amaral, Leandro Innocentini Lopes de Faria and Wanda Aparecida Machado Hoffmann. Controlled Vocabulary for Intelligence Information System for Shoes
-
Wu, D.; Shi, J.: Classical music recording ontology used in a library catalog (2016)
0.04
0.044297192 = product of:
0.17718877 = sum of:
0.17718877 = weight(_text_:java in 4179) [ClassicSimilarity], result of:
0.17718877 = score(doc=4179,freq=2.0), product of:
0.45511967 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06457882 = queryNorm
0.38932347 = fieldWeight in 4179, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0390625 = fieldNorm(doc=4179)
0.25 = coord(1/4)
- Abstract
- In order to improve the organization of classical music information resources, we constructed a classical music recording ontology, on top of which we then designed an online classical music catalog. Our construction of the classical music recording ontology consisted of three steps: identifying the purpose, analyzing the ontology, and encoding the ontology. We identified the main classes and properties of the domain by investigating classical music recording resources and users' information needs. We implemented the ontology in the Web Ontology Language (OWL) using five steps: transforming the properties, encoding the transformed properties, defining ranges of the properties, constructing individuals, and standardizing the ontology. In constructing the online catalog, we first designed the structure and functions of the catalog based on investigations into users' information needs and information-seeking behaviors. Then we extracted classes and properties of the ontology using the Apache Jena application programming interface (API), and constructed a catalog in the Java environment. The catalog provides a hierarchical main page (built using the Functional Requirements for Bibliographic Records (FRBR) model), a classical music information network and integrated information service; this combination of features greatly eases the task of finding classical music recordings and more information about classical music.
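The extraction step mentioned above (reading classes and properties with the Apache Jena API) can be pictured with the following minimal sketch. It is not the authors' code; the ontology file name is an assumption, and the sketch only enumerates the named classes and properties of an OWL file.

```java
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.ontology.OntProperty;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.util.iterator.ExtendedIterator;

/** Minimal sketch (not the authors' code): list named classes and properties of an OWL ontology. */
public class OntologyDump {
    public static void main(String[] args) {
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        model.read(args.length > 0 ? args[0] : "recording.owl"); // hypothetical ontology file

        ExtendedIterator<OntClass> classes = model.listClasses();
        while (classes.hasNext()) {
            OntClass c = classes.next();
            if (c.getURI() != null) {              // skip anonymous class expressions
                System.out.println("Class:    " + c.getURI());
            }
        }
        ExtendedIterator<OntProperty> props = model.listAllOntProperties();
        while (props.hasNext()) {
            System.out.println("Property: " + props.next().getURI());
        }
    }
}
```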
-
Galgani, F.; Compton, P.; Hoffmann, A.: Summarization based on bi-directional citation analysis (2015)
0.04
0.043053593 = product of:
0.17221437 = sum of:
0.17221437 = weight(_text_:hoffmann in 3685) [ClassicSimilarity], result of:
0.17221437 = score(doc=3685,freq=2.0), product of:
0.4486857 = queryWeight, product of:
6.9478774 = idf(docFreq=115, maxDocs=44421)
0.06457882 = queryNorm
0.3838196 = fieldWeight in 3685, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.9478774 = idf(docFreq=115, maxDocs=44421)
0.0390625 = fieldNorm(doc=3685)
0.25 = coord(1/4)
-
Hoffmann, C.P.; Lutz, C.; Meckel, M.: ¬A relational altmetric? : network centrality on ResearchGate as an indicator of scientific impact (2016)
0.04
0.043053593 = product of:
0.17221437 = sum of:
0.17221437 = weight(_text_:hoffmann in 3843) [ClassicSimilarity], result of:
0.17221437 = score(doc=3843,freq=2.0), product of:
0.4486857 = queryWeight, product of:
6.9478774 = idf(docFreq=115, maxDocs=44421)
0.06457882 = queryNorm
0.3838196 = fieldWeight in 3843, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.9478774 = idf(docFreq=115, maxDocs=44421)
0.0390625 = fieldNorm(doc=3843)
0.25 = coord(1/4)
-
Hoffmann, A.L.: Beyond distributions and primary goods : assessing applications of Rawls in information science and technology literature since 1990 (2017)
0.04
0.043053593 = product of:
0.17221437 = sum of:
0.17221437 = weight(_text_:hoffmann in 4695) [ClassicSimilarity], result of:
0.17221437 = score(doc=4695,freq=2.0), product of:
0.4486857 = queryWeight, product of:
6.9478774 = idf(docFreq=115, maxDocs=44421)
0.06457882 = queryNorm
0.3838196 = fieldWeight in 4695, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.9478774 = idf(docFreq=115, maxDocs=44421)
0.0390625 = fieldNorm(doc=4695)
0.25 = coord(1/4)
-
Vlachidis, A.; Binding, C.; Tudhope, D.; May, K.: Excavating grey literature : a case study on the rich indexing of archaeological documents via natural language-processing techniques and knowledge-based resources (2010)
0.04
0.03543775 = product of:
0.141751 = sum of:
0.141751 = weight(_text_:java in 935) [ClassicSimilarity], result of:
0.141751 = score(doc=935,freq=2.0), product of:
0.45511967 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06457882 = queryNorm
0.31145877 = fieldWeight in 935, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.03125 = fieldNorm(doc=935)
0.25 = coord(1/4)
- Abstract
- Purpose - This paper sets out to discuss the use of information extraction (IE), a natural language processing (NLP) technique, to assist "rich" semantic indexing of diverse archaeological text resources. The focus of the research is to direct a semantic-aware "rich" indexing of diverse natural language resources with properties capable of satisfying information retrieval from online publications and datasets associated with the Semantic Technologies for Archaeological Resources (STAR) project. Design/methodology/approach - The paper proposes use of the English Heritage extension (CRM-EH) of the standard core ontology in cultural heritage, CIDOC CRM, and exploitation of domain thesauri resources for driving and enhancing an Ontology-Oriented Information Extraction process. The process of semantic indexing is based on a rule-based Information Extraction technique, which is facilitated by the General Architecture for Text Engineering (GATE) toolkit and expressed by Java Annotation Pattern Engine (JAPE) rules. Findings - Initial results suggest that the combination of information extraction with knowledge resources and standard conceptual models is capable of supporting semantic-aware term indexing. Additional efforts are required for further exploitation of the technique and adoption of formal evaluation methods for assessing the performance of the method in measurable terms. Originality/value - The value of the paper lies in the semantic indexing of 535 unpublished online documents often referred to as "Grey Literature", from the Archaeological Data Service OASIS corpus (Online AccesS to the Index of archaeological investigationS), with respect to the CRM ontological concepts E49.Time Appellation and P19.Physical Object.
-
Piros, A.: Automatic interpretation of complex UDC numbers : towards support for library systems (2015)
0.04
0.03543775 = product of:
0.141751 = sum of:
0.141751 = weight(_text_:java in 3301) [ClassicSimilarity], result of:
0.141751 = score(doc=3301,freq=2.0), product of:
0.45511967 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06457882 = queryNorm
0.31145877 = fieldWeight in 3301, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.03125 = fieldNorm(doc=3301)
0.25 = coord(1/4)
- Abstract
- Analytico-synthetic and faceted classifications, such as the Universal Decimal Classification (UDC), express the content of documents with complex, pre-combined classification codes. Without classification authority control that would help manage and access structured notations, the use of UDC codes in searching and browsing is limited. Existing UDC parsing solutions are usually created for a particular database system or a specific task and are not widely applicable. The approach described in this paper provides a solution by which the analysis and interpretation of UDC notations can be stored in an intermediate format (in this case, XML) by automatic means without any data or information loss. Due to its richness, the output file can be converted into different formats, such as standard mark-up and data exchange formats or simple lists of the recommended entry points of a UDC number. The program can also be used to create authority records containing complex UDC numbers which can be comprehensively analysed in order to be retrieved effectively. The Java program, as well as the corresponding schema definition it employs, is under continuous development. The current version of the interpreter software is now available online for testing purposes at the following website: http://interpreter-eto.rhcloud.com. The future plan is to implement conversion methods for standard formats and to create standard online interfaces in order to offer the software's features as a service. This would allow the algorithm to be employed in both existing and future library systems to analyse UDC numbers without any significant programming effort.
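As a rough illustration of what an XML intermediate form for a pre-combined notation might look like, the toy Java sketch below splits a UDC string at the '+' and ':' connector symbols and wraps the pieces in XML elements. It is emphatically not the interpreter described above: real UDC syntax (auxiliaries, ranges, intercalation, quoted time spans and so on) is far richer, which is exactly why the full parser is non-trivial; the example notation and element names are invented.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Toy sketch (not the interpreter described above): naive split of a combined UDC notation. */
public class UdcToXml {
    public static void main(String[] args) {
        String notation = args.length > 0 ? args[0] : "821.111-31:791.43"; // invented example
        StringBuilder xml = new StringBuilder("<udc>\n");
        Matcher m = Pattern.compile("[^+:]+").matcher(notation);  // pieces between '+' and ':'
        while (m.find()) {
            xml.append("  <component>").append(m.group()).append("</component>\n");
        }
        xml.append("</udc>");
        System.out.println(xml);
    }
}
```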
-
Kübler, H.-D.: Digitale Vernetzung (2018)
0.02
0.024568388 = product of:
0.09827355 = sum of:
0.09827355 = weight(_text_:und in 279) [ClassicSimilarity], result of:
0.09827355 = score(doc=279,freq=32.0), product of:
0.14322929 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06457882 = queryNorm
0.6861275 = fieldWeight in 279, product of:
5.656854 = tf(freq=32.0), with freq of:
32.0 = termFreq=32.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=279)
0.25 = coord(1/4)
- Abstract
- Networking and networks are found everywhere; they have many kinds of quality and materiality, serve diverse purposes and functions, and constitute different infrastructures, and not only of a communicative and social nature. With the development and spread of information technology, of global transport and switching systems and, finally, with ongoing digitization, the concept and the connectivity it denotes have become omnipresent and focused on digital networks, which find their most important and most consequential prototype in the internet, the network of networks. Its development is outlined in compact form. The fields of application that already exist and are available, as well as future ones (Industry 4.0, the Internet of Things), point to revolutionary upheavals in all segments of society which can hardly be steered or controlled any longer by national legislation and politics and which, besides undeniably many advantages and improvements, may also bring risks and disadvantages.
-
Rusch, G.: Sicherheit und Freiheit (2015)
0.02
0.022336038 = product of:
0.08934415 = sum of:
0.08934415 = weight(_text_:und in 3666) [ClassicSimilarity], result of:
0.08934415 = score(doc=3666,freq=36.0), product of:
0.14322929 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06457882 = queryNorm
0.62378407 = fieldWeight in 3666, product of:
6.0 = tf(freq=36.0), with freq of:
36.0 = termFreq=36.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.046875 = fieldNorm(doc=3666)
0.25 = coord(1/4)
- Abstract
- Here and today, the words freedom and security denote above all those political concepts that stake out the rhetorical frame of reference in the security-policy coordinate system of our Western democracies, both internally and externally. Legitimizing and agitational discourses, election campaign rhetoric and parliamentary debates, civil society and the political administration regularly and formulaically invoke notions of freedom and security for their respective purposes. The two concepts are often placed in an oppositional relationship: more (e.g. domestic) security then means less (e.g. personal) freedom, and vice versa. Or security becomes the precondition and requirement of freedom (e.g. in the "militant democracy"). In the process, the operational roots of these concepts in perception, behaviour and action slip far out of view. What initial and consolidated impressions, insights and experiences do we refer to, affectively and rationally, with these concepts? What does security feel like? What does behaviour or action look like as an expression of freedom? Can freedom be felt? What freedom is one capable of at all? How much security does life require? What operational evidence do perception and behaviour offer for the concepts of security and freedom, prior to all their ideological charges, historical interpretations and philosophical explications?
-
Dextre Clarke, S.G.: Teil 1 der Thesaurus-Norm ISO 25964 veröffentlicht (2012)
0.02
0.022145646 = product of:
0.08858258 = sum of:
0.08858258 = weight(_text_:und in 1176) [ClassicSimilarity], result of:
0.08858258 = score(doc=1176,freq=26.0), product of:
0.14322929 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06457882 = queryNorm
0.618467 = fieldWeight in 1176, product of:
5.0990195 = tf(freq=26.0), with freq of:
26.0 = termFreq=26.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=1176)
0.25 = coord(1/4)
- Abstract
- The new international thesaurus standard ISO 25964-1 replaces the standards ISO 2788 and ISO 5964. Its English title is "Information and documentation - Thesauri and interoperability with other vocabularies - Part 1: Thesauri for information retrieval". The standard covers monolingual and multilingual thesauri and takes into account the need for data exchange, networking and interoperability. Its contents include: the construction of monolingual and multilingual thesauri; the distinction between concept and term and their relationships; facet analysis and layout; the use of thesauri in computer-based and networked systems; the management and maintenance of thesauri; guidelines for thesaurus management software; a data model for monolingual and multilingual thesauri; and recommendations.
- Source
- Information - Wissenschaft und Praxis. 63(2012) H.2, S.122-123
- Theme
- Konzeption und Anwendung des Prinzips Thesaurus
-
Fachlicher und finanzieller Beistand : Normenausschuss Bibliotheks- und Dokumentationswesen gründet Förderkreis / Informationen auf dem Bibliothekartag (2011)
0.02
0.021058617 = product of:
0.08423447 = sum of:
0.08423447 = weight(_text_:und in 620) [ClassicSimilarity], result of:
0.08423447 = score(doc=620,freq=18.0), product of:
0.14322929 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06457882 = queryNorm
0.58810925 = fieldWeight in 620, product of:
4.2426405 = tf(freq=18.0), with freq of:
18.0 = termFreq=18.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0625 = fieldNorm(doc=620)
0.25 = coord(1/4)
- Abstract
- In 75 standards committees and commissions with 3,244 working committees, around 8,000 standardization projects are continuously being worked on at DIN. Every year, 2,500 standards, draft standards and pre-standards are completed and published. The standards committees are responsible for national, European and international standardization in their respective subject and knowledge areas and promote the adoption of the standards they develop. One of these committees is the Standards Committee on Library and Documentation Practice (Normenausschuss Bibliotheks- und Dokumentationswesen, NABD).
-
Tantner, A.: Suchen und Finden vor Google : eine Skizze (2011)
0.02
0.021058617 = product of:
0.08423447 = sum of:
0.08423447 = weight(_text_:und in 1188) [ClassicSimilarity], result of:
0.08423447 = score(doc=1188,freq=18.0), product of:
0.14322929 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.06457882 = queryNorm
0.58810925 = fieldWeight in 1188, product of:
4.2426405 = tf(freq=18.0), with freq of:
18.0 = termFreq=18.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0625 = fieldNorm(doc=1188)
0.25 = coord(1/4)
- Abstract
- There was a time before Google, a time of card indexes, encyclopaedias, address books and telephone directories. There were "human media" that can be regarded as search engines, such as servants, "Zubringerinnen" (errand women) and caretakers, and there were information bureaus ("Auskunftscomptoirs") and newspaper clipping services. This contribution aims to recall some of these institutions.
- Content
- Contents: 1. Introduction 2. Directories of books 3. Arrangement and indexing of knowledge 4. Data collections commissioned by the state and by private parties 5. Human information institutions 6. Institutions of information mediation 7. Address books and searching for persons 8. Conclusion
- Source
- Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 64(2011) H.1, S.42-69