-
Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998)
0.08
0.08306712 = product of:
0.33226848 = sum of:
0.33226848 = weight(_text_:java in 2673) [ClassicSimilarity], result of:
0.33226848 = score(doc=2673,freq=4.0), product of:
0.43105784 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.061164584 = queryNorm
0.7708211 = fieldWeight in 2673, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0546875 = fieldNorm(doc=2673)
0.25 = coord(1/4)
- Abstract
- The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib.
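The relevance figures in each entry of this list are Lucene ClassicSimilarity "explain" trees: every matching clause scores queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √freq × idf × fieldNorm, and the clause total is scaled by a coord factor for the fraction of query clauses that matched. As a sanity check, here is a minimal sketch (plain Java arithmetic, no Lucene dependency; the idf formula 1 + ln(maxDocs/(docFreq + 1)) is assumed to be ClassicSimilarity's default) that reproduces the 0.08306712 shown for the Jenkins entry above:

```java
public class ClassicSimilarityCheck {
    // idf as in Lucene's ClassicSimilarity: 1 + ln(maxDocs / (docFreq + 1))
    static double idf(int docFreq, int maxDocs) {
        return 1.0 + Math.log((double) maxDocs / (docFreq + 1));
    }

    public static void main(String[] args) {
        double queryNorm = 0.061164584;                // from the explain output
        double fieldNorm = 0.0546875;                  // length norm of the matched field
        double idfJava   = idf(104, 44421);            // ≈ 7.0475073
        double tf        = Math.sqrt(4.0);             // √termFreq, freq("java") = 4

        double queryWeight = idfJava * queryNorm;      // ≈ 0.43105784
        double fieldWeight = tf * idfJava * fieldNorm; // ≈ 0.7708211
        double coord       = 1.0 / 4.0;                // 1 of 4 query clauses matched

        System.out.println(coord * queryWeight * fieldWeight); // ≈ 0.08306712
    }
}
```

The same arithmetic with freq = 2.0, docFreq = 19 and fieldNorm = 0.046875 reproduces the 0.07682583 shown for the "herein" entries that follow.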
-
Fritch, J.W.; Cromwell, R.L.: Evaluating Internet resources : identity, affiliation, and cognitive authority in a networked world (2001)
0.08
0.07682583 = product of:
0.3073033 = sum of:
0.3073033 = weight(_text_:herein in 6749) [ClassicSimilarity], result of:
0.3073033 = score(doc=6749,freq=2.0), product of:
0.5324827 = queryWeight, product of:
8.705735 = idf(docFreq=19, maxDocs=44421)
0.061164584 = queryNorm
0.57711416 = fieldWeight in 6749, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
8.705735 = idf(docFreq=19, maxDocs=44421)
0.046875 = fieldNorm(doc=6749)
0.25 = coord(1/4)
- Abstract
- Many people fail to properly evaluate Internet information. This is often due to a lack of understanding of the issues surrounding evaluation and authority and, more specifically, a lack of understanding of the structure and modi operandi of the Internet and the Domain Name System. The fact that evaluation is not being properly performed on Internet information means both that questionable information is being used recklessly, without its authority being adequately assessed, and that good information is being disregarded because trust in it is lacking. Both scenarios may be resolved by ascribing proper amounts of cognitive authority to Internet information. Traditional measures of authority present in a print environment are lacking on the Internet and, even when occasionally present, are of questionable veracity. A formal model and evaluative criteria are herein suggested and explained to provide a means for accurately ascribing cognitive authority in a networked environment; the model is unique in its representation of overt and covert affiliations as a mechanism for ascribing proper authority to Internet information.
-
Joint, N.: URLs in the OPAC : comparative reflections on US vs UK practice (2007)
0.08
0.07682583 = product of:
0.3073033 = sum of:
0.3073033 = weight(_text_:herein in 1857) [ClassicSimilarity], result of:
0.3073033 = score(doc=1857,freq=2.0), product of:
0.5324827 = queryWeight, product of:
8.705735 = idf(docFreq=19, maxDocs=44421)
0.061164584 = queryNorm
0.57711416 = fieldWeight in 1857, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
8.705735 = idf(docFreq=19, maxDocs=44421)
0.046875 = fieldNorm(doc=1857)
0.25 = coord(1/4)
- Abstract
- Purpose - To examine whether placing URLs in library OPACs has been an effective way of enhancing the role of the catalogue for the contemporary library user. Design/methodology/approach - A brief review of the literature combined with an analysis of publicly available statistics on library use in the USA and the UK. Findings - Certain ways of placing URLs in the OPAC are loosely associated with a successful library environment, i.e. with constant or increasing levels of stock circulation and OPAC use, while other forms of hyper-linking OPAC records are loosely associated with declining levels of library use. Research limitations/implications - The loose association between different OPAC management practices and apparent statistical trends in library use could be investigated in greater depth by subsequent research, along the lines and methodology suggested herein. Practical implications - Firm suggestions are made on how to place and manage URLs in the online catalogue. Originality/value - This paper takes certain catalogue enhancement practices identified with the US library environment and investigates them in a UK, and specifically Scottish, context, to shed light on the original US ideas behind these practices.
-
Costas, R.; Leeuwen, T.N. van; Bordons, M.: ¬A bibliometric classificatory approach for the study and assessment of research performance at the individual level : the effects of age on productivity and impact (2010)
0.08
0.07682583 = product of:
0.3073033 = sum of:
0.3073033 = weight(_text_:herein in 687) [ClassicSimilarity], result of:
0.3073033 = score(doc=687,freq=2.0), product of:
0.5324827 = queryWeight, product of:
8.705735 = idf(docFreq=19, maxDocs=44421)
0.061164584 = queryNorm
0.57711416 = fieldWeight in 687, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
8.705735 = idf(docFreq=19, maxDocs=44421)
0.046875 = fieldNorm(doc=687)
0.25 = coord(1/4)
- Abstract
- The authors set forth a general methodology for conducting bibliometric analyses at the micro level. It combines several indicators grouped into three factors or dimensions, which characterize different aspects of scientific performance. Different profiles or classes of scientists are described according to their research performance in each dimension. A series of results based on the application of this methodology to the study of Spanish National Research Council scientists in three thematic areas is presented. Special emphasis is placed on the identification and description of top scientists from structural and bibliometric perspectives. The effects of age on the productivity and impact of the different classes of scientists are analyzed. The classificatory approach proposed herein may prove a useful tool in support of research assessment at the individual level and for exploring potential determinants of research success.
-
Anguiano Peña, G.; Naumis Peña, C.: Method for selecting specialized terms from a general language corpus (2015)
0.08
0.07682583 = product of:
0.3073033 = sum of:
0.3073033 = weight(_text_:herein in 3196) [ClassicSimilarity], result of:
0.3073033 = score(doc=3196,freq=2.0), product of:
0.5324827 = queryWeight, product of:
8.705735 = idf(docFreq=19, maxDocs=44421)
0.061164584 = queryNorm
0.57711416 = fieldWeight in 3196, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
8.705735 = idf(docFreq=19, maxDocs=44421)
0.046875 = fieldNorm(doc=3196)
0.25 = coord(1/4)
- Abstract
- Among the many aspects studied by library and information science are the linguistic phenomena associated with document content analysis, for purposes of both information organization and retrieval. To this end, terms used in scientific and technical language must be recovered and their domain and behavior studied. Through language, society controls the knowledge available to people. Document content analysis, in this case of scientific texts, facilitates gathering knowledge of lexical units and their major applications, and separating such specialized terms from the general language in order to create indexing languages. The model presented here, or other lexicographic resources with similar characteristics, may be useful in the near future in computer-assisted indexing or as corpus monitors for new text analyses or specialized corpora. Thus, using the techniques proposed herein for document content analysis of a lexicographically labeled general language corpus, components that enable the extraction of lexical units from specialized language may be obtained and characterized.
-
Juhne, J.; Jensen, A.T.; Gronbaek, K.: Ariadne: a Java-based guided tour system for the World Wide Web (1998)
0.07
0.071200386 = product of:
0.28480154 = sum of:
0.28480154 = weight(_text_:java in 4593) [ClassicSimilarity], result of:
0.28480154 = score(doc=4593,freq=4.0), product of:
0.43105784 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.061164584 = queryNorm
0.6607038 = fieldWeight in 4593, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.046875 = fieldNorm(doc=4593)
0.25 = coord(1/4)
- Abstract
- Presents a guided tour system for the WWW, called Ariadne, which implements the ideas of trails and guided tours originating from the hypertext field. Ariadne appears as a Java applet to the user and stores guided tours in a database format separate from the WWW documents included in the tour. Its main advantages are: an independent user interface which does not affect the layout of the documents that are part of the tour; branching tours where the user may follow alternative routes; composition of existing tours into aggregate tours; an overview map with an indication of which parts of a tour have been visited; and support for getting back on track. Ariadne is available as a research prototype, and it has been tested among a group of university students as well as casual users on the Internet.
-
Reed, D.: Essential HTML fast (1997)
0.07
0.067128375 = product of:
0.2685135 = sum of:
0.2685135 = weight(_text_:java in 6851) [ClassicSimilarity], result of:
0.2685135 = score(doc=6851,freq=2.0), product of:
0.43105784 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.061164584 = queryNorm
0.62291753 = fieldWeight in 6851, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0625 = fieldNorm(doc=6851)
0.25 = coord(1/4)
- Abstract
- This book provides a quick, concise guide to the issues surrounding the preparation of a well-designed, professional web site using HTML. Topics covered include: how to plan your web site effectively; effective use of hypertext, images, audio and video; layout techniques using tables and lists; and how to use style sheets, font sizes and plans for mathematical equation markup. Integration of CGI scripts, Java and ActiveX into your web site is also discussed.
-
Lord Wodehouse: ¬The Intranet : the quiet (r)evolution (1997)
0.07
0.067128375 = product of:
0.2685135 = sum of:
0.2685135 = weight(_text_:java in 171) [ClassicSimilarity], result of:
0.2685135 = score(doc=171,freq=2.0), product of:
0.43105784 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.061164584 = queryNorm
0.62291753 = fieldWeight in 171, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0625 = fieldNorm(doc=171)
0.25 = coord(1/4)
- Abstract
- Explains how the Intranet (in effect an Internet limited to the computer systems of a single organization) developed out of the Internet, and what its uses and advantages are. Focuses on the Intranet developed in the Glaxo Wellcome organization. Briefly discusses a number of technologies in development, e.g. Java, RealAudio, 3D and VRML, and summarizes the issues involved in the successful development of the Intranet, that is, bandwidth, searching tools, security, and legal issues.
-
Wang, J.; Reid, E.O.F.: Developing WWW information systems on the Internet (1996)
0.07
0.067128375 = product of:
0.2685135 = sum of:
0.2685135 = weight(_text_:java in 604) [ClassicSimilarity], result of:
0.2685135 = score(doc=604,freq=2.0), product of:
0.43105784 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.061164584 = queryNorm
0.62291753 = fieldWeight in 604, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0625 = fieldNorm(doc=604)
0.25 = coord(1/4)
- Abstract
- Gives an overview of Web information system development. Discusses some basic concepts and technologies such as HTML, HTML FORM, CGI and Java, which are associated with developing WWW information systems. Further discusses the design and implementation of Virtual Travel Mart, a Web based end user oriented travel information system. Finally, addresses some issues in developing WWW information systems
-
Ameritech releases Dynix WebPac on NT (1998)
0.07
0.067128375 = product of:
0.2685135 = sum of:
0.2685135 = weight(_text_:java in 2782) [ClassicSimilarity], result of:
0.2685135 = score(doc=2782,freq=2.0), product of:
0.43105784 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.061164584 = queryNorm
0.62291753 = fieldWeight in 2782, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0625 = fieldNorm(doc=2782)
0.25 = coord(1/4)
- Abstract
- Ameritech Library Services has released Dynix WebPac on NT, which provides access to a Dynix catalogue from any Java-compatible Web browser. Users can place holds, cancel and postpone holds, view and renew items on loan, and sort and limit search results from the Web. Describes some of the other features of Dynix WebPac.
-
OCLC completes SiteSearch 4.0 field test (1998)
0.07
0.067128375 = product of:
0.2685135 = sum of:
0.2685135 = weight(_text_:java in 3078) [ClassicSimilarity], result of:
0.2685135 = score(doc=3078,freq=2.0), product of:
0.43105784 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.061164584 = queryNorm
0.62291753 = fieldWeight in 3078, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0625 = fieldNorm(doc=3078)
0.25 = coord(1/4)
- Abstract
- OCLC has announced that 6 library systems have completed field tests of the OCLC SiteSearch 4.0 suite of software, paving the way for its release. Traces the beta site testing programme from its beginning in November 1997 and notes that the OCLC SiteServer components have been written in the Java programming language, which will increase libraries' ability to extend the functionality of the SiteSearch software to create new features specific to local needs.
-
Robinson, D.A.; Lester, C.R.; Hamilton, N.M.: Delivering computer assisted learning across the WWW (1998)
0.07
0.067128375 = product of:
0.2685135 = sum of:
0.2685135 = weight(_text_:java in 4618) [ClassicSimilarity], result of:
0.2685135 = score(doc=4618,freq=2.0), product of:
0.43105784 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.061164584 = queryNorm
0.62291753 = fieldWeight in 4618, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0625 = fieldNorm(doc=4618)
0.25 = coord(1/4)
- Abstract
- Demonstrates a new method of providing networked computer assisted learning to avoid the pitfalls of traditional methods. This was achieved using Web pages enhanced with Java applets, MPEG video clips and Dynamic HTML
-
Bates, C.: Web programming : building Internet applications (2000)
0.07
0.067128375 = product of:
0.2685135 = sum of:
0.2685135 = weight(_text_:java in 130) [ClassicSimilarity], result of:
0.2685135 = score(doc=130,freq=2.0), product of:
0.43105784 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.061164584 = queryNorm
0.62291753 = fieldWeight in 130, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0625 = fieldNorm(doc=130)
0.25 = coord(1/4)
- Object
- Java
-
Zschunke, P.: Richtig googeln : Ein neues Buch hilft, alle Möglichkeiten der populären Suchmaschine zu nutzen [Googling right: a new book helps users exploit all the possibilities of the popular search engine] (2003)
0.07
0.06688402 = product of:
0.13376804 = sum of:
0.100692555 = weight(_text_:java in 55) [ClassicSimilarity], result of:
0.100692555 = score(doc=55,freq=2.0), product of:
0.43105784 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.061164584 = queryNorm
0.23359407 = fieldWeight in 55, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0234375 = fieldNorm(doc=55)
0.033075485 = weight(_text_:und in 55) [ClassicSimilarity], result of:
0.033075485 = score(doc=55,freq=22.0), product of:
0.13565688 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.061164584 = queryNorm
0.24381724 = fieldWeight in 55, product of:
4.690416 = tf(freq=22.0), with freq of:
22.0 = termFreq=22.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0234375 = fieldNorm(doc=55)
0.5 = coord(2/4)
- Content
- "Fünf Jahre nach seiner Gründung ist Google zum Herz des weltweiten Computernetzes geworden. Mit seiner Konzentration aufs Wesentliche hat die Suchmaschine alle anderen Anbieter weit zurück gelassen. Aber Google kann viel mehr, als im Web nach Texten und Bildern zu suchen. Gesammelt und aufbereitet werden auch Beiträge in Diskussionsforen (Newsgroups), aktuelle Nachrichten und andere im Netz verfügbare Informationen. Wer sich beim "Googeln" darauf beschränkt, ein einziges Wort in das Suchformular einzutippen und dann die ersten von oft mehreren hunderttausend Treffern anzuschauen, nutzt nur einen winzigen Bruchteil der Möglichkeiten. Wie man Google bis zum letzten ausreizt, haben Tara Calishain und Rael Dornfest in einem bislang nur auf Englisch veröffentlichten Buch dargestellt (Tara Calishain/Rael Dornfest: Google Hacks", www.oreilly.de, 28 Euro. Die wichtigsten Praxistipps kosten als Google Pocket Guide 12 Euro). - Suchen mit bis zu zehn Wörtern - Ihre "100 Google Hacks" beginnen mit Google-Strategien wie der Kombination mehrerer Suchbegriffe und enden mit der Aufforderung zur eigenen Nutzung der Google API ("Application Programming Interface"). Diese Schnittstelle kann zur Entwicklung von eigenen Programmen eingesetzt werden,,die auf die Google-Datenbank mit ihren mehr als drei Milliarden Einträgen zugreifen. Ein bewussteres Suchen im Internet beginnt mit der Kombination mehrerer Suchbegriffe - bis zu zehn Wörter können in das Formularfeld eingetippt werden, welche Google mit dem lo-gischen Ausdruck "und" verknüpft. Diese Standardvorgabe kann mit einem dazwischen eingefügten "or" zu einer Oder-Verknüpfung geändert werden. Soll ein bestimmter Begriff nicht auftauchen, wird ein Minuszeichen davor gesetzt. Auf diese Weise können bei einer Suche etwa alle Treffer ausgefiltert werden, die vom Online-Buchhändler Amazon kommen. Weiter gehende Syntax-Anweisungen helfen ebenfalls dabei, die Suche gezielt einzugrenzen: Die vorangestellte Anweisung "intitle:" etwa (ohne Anführungszeichen einzugeben) beschränkt die Suche auf all diejenigen Web-Seiten, die den direkt danach folgenden Begriff in ihrem Titel aufführen. Die Computer von Google bewältigen täglich mehr als 200 Millionen Anfragen. Die Antworten kommen aus einer Datenbank, die mehr als drei Milliarden Einträge enthält und regelmäßig aktualisiert wird. Dazu Werden SoftwareRoboter eingesetzt, so genannte "Search-Bots", die sich die Hyperlinks auf Web-Seiten entlang hangeln und für jedes Web-Dokument einen Index zur Volltextsuche anlegen. Die Einnahmen des 1998 von Larry Page und Sergey Brin gegründeten Unternehmens stammen zumeist von Internet-Portalen, welche die GoogleSuchtechnik für ihre eigenen Dienste übernehmen. Eine zwei Einnahmequelle ist die Werbung von Unternehmen, die für eine optisch hervorgehobene Platzierung in den GoogleTrefferlisten zahlen. Das Unternehmen mit Sitz im kalifornischen Mountain View beschäftigt rund 800 Mitarbeiter. Der Name Google leitet sich ab von dem Kunstwort "Googol", mit dem der amerikanische Mathematiker Edward Kasner die unvorstellbar große Zahl 10 hoch 100 (eine 1 mit hundert Nullen) bezeichnet hat. Kommerzielle Internet-Anbieter sind sehr, daran interessiert, auf den vordersten Plätzen einer Google-Trefferliste zu erscheinen.
Since Google, unlike Yahoo or Lycos, never wanted to become an Internet portal designed to attract as many visits as possible, its database can also be searched outside the Google web site. For this there is, first of all, the "Google Toolbar" for Internet Explorer, which gives that browser its own bar for Google searching. Independent developers offer their own implementation of this tool for the Netscape/Mozilla browser. In addition, a Google search box can be placed on one's own web page - only four lines of HTML code are needed. Incidentally, a Google search can also be started entirely without a browser. To this end, the company released the API ("Application Programming Interface") in April of last year, so that it can be built into custom programs. For example, a Google search can be started with an e-mail: the search terms are entered in the subject line of an otherwise empty e-mail, which is sent to the address google@capeclear.com. Shortly afterwards, an automatic reply arrives with the first ten hits. Given the appropriate skills, Google queries can also be built into web services - programs that process data from the Internet. Suitable programming techniques include Perl, PHP, Python, or Java. Calishain and Dornfest even present a number of offbeat sites that use such programs for abstract poems or other works of art."
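The Zschunke entry above is the first multi-clause match in this list: two of the four query terms ("java" and "und") contribute, their clause scores are summed, and the sum is scaled by coord(2/4) = 0.5. A continuation of the earlier sketch, under the same assumptions, with the figures copied from the explain tree above:

```java
public class CoordCheck {
    public static void main(String[] args) {
        // each clause scores queryWeight * fieldWeight, per the explain output
        double javaClause = 0.43105784 * 0.23359407; // ≈ 0.100692555
        double undClause  = 0.13565688 * 0.24381724; // ≈ 0.033075485
        double coord      = 2.0 / 4.0;               // 2 of 4 query clauses matched

        System.out.println(coord * (javaClause + undClause)); // ≈ 0.06688402
    }
}
```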
-
Pisanski, J.; Zumer, M.: Mental models of the bibliographic universe : part 1: mental models of descriptions (2010)
0.06
0.06402152 = product of:
0.25608608 = sum of:
0.25608608 = weight(_text_:herein in 145) [ClassicSimilarity], result of:
0.25608608 = score(doc=145,freq=2.0), product of:
0.5324827 = queryWeight, product of:
8.705735 = idf(docFreq=19, maxDocs=44421)
0.061164584 = queryNorm
0.48092845 = fieldWeight in 145, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
8.705735 = idf(docFreq=19, maxDocs=44421)
0.0390625 = fieldNorm(doc=145)
0.25 = coord(1/4)
- Abstract
- Purpose - The paper aims to present the results of the first two tasks of a user study looking into mental models of the bibliographic universe and especially their comparison with the Functional Requirements for Bibliographic Records (FRBR) conceptual model, which has not yet been user tested. Design/methodology/approach - The paper employs a combination of techniques for eliciting mental models and consists of three tasks, two of which, card sorting and concept mapping, are presented herein. Its participants were 30 individuals residing in the general area of Ljubljana, Slovenia. Findings - Cumulative results of concept mapping show a strong resemblance to FRBR. Card sorts did not produce conclusive results. In both tasks, participants paid special attention to the original expression, indicating that a special place for it should be considered. Research limitations/implications - The study was performed using a relatively small sample of participants living in a geographically limited space, using relatively straightforward examples. Practical implications - Some solid evidence is provided for the adoption of FRBR as the conceptual basis for cataloguing. Originality/value - This is the first widely published user study of FRBR, applying novel methodological approaches in the field of Library and Information Science.
-
Cregan, A.: ¬An OWL DL construction for the ISO Topic Map Data Model (2005)
0.06
0.06402152 = product of:
0.25608608 = sum of:
0.25608608 = weight(_text_:herein in 718) [ClassicSimilarity], result of:
0.25608608 = score(doc=718,freq=2.0), product of:
0.5324827 = queryWeight, product of:
8.705735 = idf(docFreq=19, maxDocs=44421)
0.061164584 = queryNorm
0.48092845 = fieldWeight in 718, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
8.705735 = idf(docFreq=19, maxDocs=44421)
0.0390625 = fieldNorm(doc=718)
0.25 = coord(1/4)
- Abstract
- Both Topic Maps and the W3C Semantic Web technologies are meta-level semantic maps describing relationships between information resources. Previous attempts at interoperability between XTM Topic Maps and RDF have proved problematic. The ISO's drafting of an explicit Topic Map Data Model [TMDM 05], combined with the advent of the W3C's XML- and RDF-based, Description Logic-equivalent Web Ontology Language [OWLDL 04], now provides the means for the construction of an unambiguous semantic model to represent Topic Maps, in a form that is equivalent to a Description Logic representation. This paper describes the construction of the proposed TMDM ISO Topic Map Standard in OWL DL (Description Logic equivalent) form. The construction is claimed to exactly match the features of the proposed TMDM. The intention is that the Topic Map constructs described herein, once officially published on the world-wide web, may be used by Topic Map authors to construct their Topic Maps in OWL DL. The advantage of OWL DL Topic Map construction over XTM, the existing XML-based DTD standard, is that OWL DL allows many constraints to be explicitly stated. OWL DL's suite of tools, although currently still somewhat immature, will provide the means for both querying and enforcing constraints. This goes a long way towards fulfilling the requirements for a Topic Map Query Language (TMQL) and Constraint Language (TMCL), which the Topic Map community may choose to expend effort on extending. Additionally, OWL DL has a clearly defined formal semantics (Description Logic ref).
-
Marijuán, P.C.; Moral, R.; Navarro, J.: Scientomics : an emergent perspective in knowledge organization (2012)
0.06
0.06402152 = product of:
0.25608608 = sum of:
0.25608608 = weight(_text_:herein in 1141) [ClassicSimilarity], result of:
0.25608608 = score(doc=1141,freq=2.0), product of:
0.5324827 = queryWeight, product of:
8.705735 = idf(docFreq=19, maxDocs=44421)
0.061164584 = queryNorm
0.48092845 = fieldWeight in 1141, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
8.705735 = idf(docFreq=19, maxDocs=44421)
0.0390625 = fieldNorm(doc=1141)
0.25 = coord(1/4)
- Abstract
- In one of the most important conceptual changes of our times, biology has definitely abandoned its mechanistic hardcore and is advancing "fast and furious" along the informational dimension. Biology has really become an information science; and, as such, it is also inspiring new ways of thinking and new kinds of knowledge paradigms beyond those discussed during past decades. In this regard, a new "bioinformational" approach to the inter-multi-disciplinary relationships among the sciences will be proposed herein: scientomics. Biologically inspired, scientomics contemplates the multifarious interactions between scientific disciplines from the "knowledge recombination" vantage point. In their historical expansion, the sciences would have recapitulated upon collective cognitive dynamics already realized along the evolutionary expansion of living systems, mostly by means of domain recombination processes within cellular genomes, but also occurring neurally inside the "cerebral workspace" of human brains and advanced mammals. Scientomics, understood as a new research field in the domain of knowledge organization, would capture the ongoing processes of scientific expansion and recombination by means of genomic inspired software (like in the new field of culturomics). It would explain the peculiar interaction maps of the sciences (scientometrics) as well as the increasing complexity of research amidst scientific and technological cumulative achievements. Beyond the polarized classical positions of reductionism and holism, scientomics could also propose new conceptual tools for scientific integration and planning, and for research management.
-
Hook, P.A.: Using course-subject Co-occurrence (CSCO) to reveal the structure of an academic discipline : a framework to evaluate different inputs of a domain map (2017)
0.06
0.06402152 = product of:
0.25608608 = sum of:
0.25608608 = weight(_text_:herein in 4324) [ClassicSimilarity], result of:
0.25608608 = score(doc=4324,freq=2.0), product of:
0.5324827 = queryWeight, product of:
8.705735 = idf(docFreq=19, maxDocs=44421)
0.061164584 = queryNorm
0.48092845 = fieldWeight in 4324, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
8.705735 = idf(docFreq=19, maxDocs=44421)
0.0390625 = fieldNorm(doc=4324)
0.25 = coord(1/4)
- Abstract
- This article proposes, exemplifies, and validates the use of course-subject co-occurrence (CSCO) data to generate topic maps of an academic discipline. A CSCO event is when 2 course-subjects are taught in the same academic year by the same teacher. A total of 61,856 CSCO events were extracted from the 2010-11 directory of the American Association of Law Schools and used to visualize the structure of law school education in the United States. Different normalization, ordination (layout), and clustering algorithms were compared, and the best-performing algorithm of each type was used to generate the final map. Validation studies demonstrate that CSCO produces topic maps that are consistent with expert opinion and 4 other indicators of the topical similarity of law school course-subjects. This research is the first to use CSCO to produce a visualization of a domain. It is also the first to use an expanded, multi-part gold standard to evaluate the validity of domain maps and the intermediate steps in their creation. It is suggested that the framework used herein may be adopted for other studies that compare different inputs of a domain map in order to empirically derive the best maps as measured against extrinsic sources of topical similarity (gold standards).
-
Yan, B.; Luo, J.: Measuring technological distance for patent mapping (2017)
0.06
0.06402152 = product of:
0.25608608 = sum of:
0.25608608 = weight(_text_:herein in 4351) [ClassicSimilarity], result of:
0.25608608 = score(doc=4351,freq=2.0), product of:
0.5324827 = queryWeight, product of:
8.705735 = idf(docFreq=19, maxDocs=44421)
0.061164584 = queryNorm
0.48092845 = fieldWeight in 4351, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
8.705735 = idf(docFreq=19, maxDocs=44421)
0.0390625 = fieldNorm(doc=4351)
0.25 = coord(1/4)
- Abstract
- Recent works in the information science literature have presented cases of using patent databases and patent classification information to construct network maps of technology fields, which aim to aid in competitive intelligence analysis and innovation decision making. Constructing such a patent network requires a proper measure of the distance between different classes of patents in the patent classification systems. Despite the existence of various distance measures in the literature, it is unclear how to consistently assess and compare them, and which ones to select for constructing patent technology network maps. This ambiguity has limited the development and applications of such technology maps. Herein, we propose to compare alternative distance measures and identify the superior ones by analyzing the differences and similarities in the structural properties of resulting patent network maps. Using United States patent data from 1976 to 2006 and the International Patent Classification (IPC) system, we compare 12 representative distance measures, which quantify interfield knowledge base proximity, field-crossing diversification likelihood or frequency of innovation agents, and co-occurrences of patent classes in the same patents. Our comparative analyses suggest the patent technology network maps based on normalized coreference and inventor diversification likelihood measures are the best representatives.
-
St. Jean, B.: Factors motivating, demotivating, or impeding information seeking and use by people with type 2 diabetes : a call to work toward preventing, identifying, and addressing incognizance (2017)
0.06
0.06402152 = product of:
0.25608608 = sum of:
0.25608608 = weight(_text_:herein in 4423) [ClassicSimilarity], result of:
0.25608608 = score(doc=4423,freq=2.0), product of:
0.5324827 = queryWeight, product of:
8.705735 = idf(docFreq=19, maxDocs=44421)
0.061164584 = queryNorm
0.48092845 = fieldWeight in 4423, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
8.705735 = idf(docFreq=19, maxDocs=44421)
0.0390625 = fieldNorm(doc=4423)
0.25 = coord(1/4)
- Abstract
- Type 2 diabetes has grown increasingly prevalent over recent decades, now affecting nearly 400 million people worldwide; however, nearly half of these individuals have no idea they have it. Consumer health information behavior (CHIB), which encompasses people's health-related information needs as well as the ways in which they interact (or do not interact) with health-related information, plays an important role in people's ability to prevent, cope with, and successfully manage a serious chronic disease across time. In this mixed-method longitudinal study, the CHIB of 34 people with type 2 diabetes is explored with the goal of identifying the factors that motivate, demotivate, or impede their diabetes-related information seeking and use. The findings reveal that while these processes can be motivated by many different factors and can lead to important benefits, there are significant barriers (such as "incognizance," defined herein as having an information need that one is not aware of) that may demotivate or impede their information seeking and use. The implications of these findings are discussed, focusing on how we might work toward preventing, identifying, and addressing incognizance among this population, ensuring they have the information they need when it can be of the most use to them.