-
Warner, J.: What should we understand by information technology (and some hints at other issues)? (2000)
0.06
0.058558512 = product of:
  0.23423405 = sum of:
    0.23423405 = weight(_text_:hints in 840) [ClassicSimilarity], result of:
      0.23423405 = score(doc=840,freq=2.0), product of:
        0.5128635 = queryWeight, product of:
          8.267481 = idf(docFreq=30, maxDocs=44421)
          0.06203383 = queryNorm
        0.4567181 = fieldWeight in 840, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          8.267481 = idf(docFreq=30, maxDocs=44421)
          0.0390625 = fieldNorm(doc=840)
  0.25 = coord(1/4)
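The explain trees in this listing are Lucene-style ClassicSimilarity (tf-idf) breakdowns. As a minimal sketch of how the leaf values above recombine into the displayed score (my own recomputation, not Lucene source code; variable names mirror the explain labels), consider:

```java
// Sketch: recomputing the ClassicSimilarity explain tree above from its leaf values.
public class ClassicSimilaritySketch {
    public static void main(String[] args) {
        double freq = 2.0;                  // termFreq of "hints" in doc 840
        int docFreq = 30, maxDocs = 44421;  // collection statistics from the explain output
        double queryNorm = 0.06203383;      // query normalization factor (given)
        double fieldNorm = 0.0390625;       // stored length norm for the field
        double coord = 0.25;                // coord(1/4): 1 of 4 query terms matched

        double tf = Math.sqrt(freq);                                    // 1.4142135
        double idf = 1.0 + Math.log(maxDocs / (double) (docFreq + 1));  // 8.267481
        double queryWeight = idf * queryNorm;                           // 0.5128635
        double fieldWeight = tf * idf * fieldNorm;                      // 0.4567181
        double score = coord * (queryWeight * fieldWeight);             // 0.058558512

        System.out.printf("tf=%f idf=%f score=%f%n", tf, idf, score);
    }
}
```

The same arithmetic applies to every other explain tree below; only the matched term, freq, docFreq and fieldNorm change.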
-
Liu, J.S.; Lu, L.Y.Y.: ¬An integrated approach for main path analysis : development of the Hirsch index as an example (2012)
0.06
0.058558512 = product of:
  0.23423405 = sum of:
    0.23423405 = weight(_text_:hints in 1072) [ClassicSimilarity], result of:
      0.23423405 = score(doc=1072,freq=2.0), product of:
        0.5128635 = queryWeight, product of:
          8.267481 = idf(docFreq=30, maxDocs=44421)
          0.06203383 = queryNorm
        0.4567181 = fieldWeight in 1072, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          8.267481 = idf(docFreq=30, maxDocs=44421)
          0.0390625 = fieldNorm(doc=1072)
  0.25 = coord(1/4)
- Abstract
- This study enhances main path analysis by proposing several variants to the original approach. Main path analysis is a bibliometric method capable of tracing the most significant paths in a citation network and is commonly used to trace the development trajectory of a research field. We highlight several limitations of the original main path analysis and suggest new, complementary approaches to overcome these limitations. In contrast to the original local main path, the new approaches generate the global main path, the backward local main path, multiple main paths, and key-route main paths. Each of them is obtained via a perspective different from the original approach. By simultaneously conducting the new, complementary approaches, one uncovers the key development of the target discipline from a broader view. To demonstrate the value of these new approaches, we simultaneously apply them to a set of academic articles related to the Hirsch index. The results show that the integrated approach discovers several paths that are not captured by the original approach. Among these new approaches, the key-route approach is especially useful and hints at a divergence-convergence-divergence structure in the development of the Hirsch index.
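For readers unfamiliar with the technique, the sketch below illustrates the core step of local main path analysis on a toy citation network; the node names, the tiny graph and the greedy traversal are illustrative assumptions, not the authors' code. Edge weights are search path counts (SPC), and the local main path greedily follows the highest-weight edge; the paper's global and key-route variants search the same weighted network with different strategies.

```java
import java.util.*;

// Toy sketch of local main path extraction on a citation DAG (node names invented).
// Edge weight = Search Path Count: (#source-to-u paths) * (#v-to-sink paths) for edge u->v.
public class MainPathSketch {
    static Map<String, List<String>> out = new HashMap<>();  // u -> later papers citing u
    static Map<String, List<String>> in  = new HashMap<>();  // reverse direction

    public static void main(String[] args) {
        edge("A", "B"); edge("A", "C"); edge("B", "D"); edge("C", "D"); edge("D", "E");

        Set<String> nodes = new TreeSet<>();
        nodes.addAll(out.keySet()); nodes.addAll(in.keySet());

        Map<String, Long> fromSrc = new HashMap<>(), toSink = new HashMap<>();
        for (String n : nodes) paths(n, in,  fromSrc);  // #paths reaching n from any source
        for (String n : nodes) paths(n, out, toSink);   // #paths from n to any sink

        // Greedy local main path: start at the source with the strongest outgoing edge,
        // then always follow the highest-SPC edge until a sink is reached.
        String cur = nodes.stream().filter(n -> !in.containsKey(n))
                .max(Comparator.comparingLong(n -> bestSpc(n, fromSrc, toSink))).orElseThrow();
        List<String> mainPath = new ArrayList<>(List.of(cur));
        while (out.containsKey(cur)) {
            final String u = cur;
            cur = out.get(u).stream()
                    .max(Comparator.comparingLong(v -> fromSrc.get(u) * toSink.get(v))).orElseThrow();
            mainPath.add(cur);
        }
        System.out.println("local main path: " + mainPath);  // e.g. [A, B, D, E]
    }

    static void edge(String u, String v) {
        out.computeIfAbsent(u, k -> new ArrayList<>()).add(v);
        in.computeIfAbsent(v, k -> new ArrayList<>()).add(u);
    }

    // memoized count of paths from n back to nodes with no predecessors in 'pred'
    static long paths(String n, Map<String, List<String>> pred, Map<String, Long> memo) {
        if (memo.containsKey(n)) return memo.get(n);
        long total = pred.containsKey(n)
                ? pred.get(n).stream().mapToLong(p -> paths(p, pred, memo)).sum()
                : 1L;
        memo.put(n, total);
        return total;
    }

    static long bestSpc(String n, Map<String, Long> fromSrc, Map<String, Long> toSink) {
        return out.get(n).stream().mapToLong(v -> fromSrc.get(n) * toSink.get(v)).max().orElse(0L);
    }
}
```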
-
Vakkari, P.; Chang, Y.-W.; Järvelin, K.: Disciplinary contributions to research topics and methodology in Library and Information Science : leading to fragmentation? (2022)
0.06
0.058558512 = product of:
  0.23423405 = sum of:
    0.23423405 = weight(_text_:hints in 1768) [ClassicSimilarity], result of:
      0.23423405 = score(doc=1768,freq=2.0), product of:
        0.5128635 = queryWeight, product of:
          8.267481 = idf(docFreq=30, maxDocs=44421)
          0.06203383 = queryNorm
        0.4567181 = fieldWeight in 1768, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          8.267481 = idf(docFreq=30, maxDocs=44421)
          0.0390625 = fieldNorm(doc=1768)
  0.25 = coord(1/4)
- Abstract
- The study analyses contributions to Library and Information Science (LIS) by researchers representing various disciplines. How are such contributions associated with the choice of research topics and methodology? The study employs a quantitative content analysis of articles published in 31 scholarly LIS journals in 2015. Each article is seen as a contribution to LIS by the authors' disciplines, which are inferred from their affiliations. The unit of analysis is the article-discipline pair. Of the contribution instances, the share of LIS is one third. Computer Science contributes one fifth and Business and Economics one sixth. The latter disciplines dominate the contributions in information retrieval, information seeking, and scientific communication, indicating strong influences in LIS. Correspondence analysis reveals three clusters of research: one focusing on traditional LIS, with contributions from LIS and the Humanities and survey-type research; another on information retrieval, with contributions from Computer Science and experimental research; and a third on scientific communication, with contributions from the Natural Sciences and Medicine and citation-analytic research. The strong differentiation of scholarly contributions in LIS hints at the fragmentation of LIS as a discipline.
-
Gibson, P.: Professionals' perfect Web world in sight : users want more information on the Web, and vendors attempt to provide (1998)
0.05
0.05106178 = product of:
  0.20424712 = sum of:
    0.20424712 = weight(_text_:java in 2656) [ClassicSimilarity], result of:
      0.20424712 = score(doc=2656,freq=2.0), product of:
        0.43718386 = queryWeight, product of:
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.06203383 = queryNorm
        0.46718815 = fieldWeight in 2656, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.046875 = fieldNorm(doc=2656)
  0.25 = coord(1/4)
- Abstract
- Many information professionals feel that the time is still far off when the WWW can offer the combined functionality and content of traditional online and CD-ROM databases, but there have been a number of recent Web developments to reflect on. Describes the testing and launch by Ovid of its Java client which, in effect, allows access to its databases on the Web with full search functionality, and the initiative of Euromonitor in providing Web access to its whole collection of consumer research reports and its entire database of business sources. Also reviews the service of a newcomer to the information scene, Information Quest (IQ), founded by Dawson Holdings, which has made an agreement with Infonautics to offer access to its Electric Library database, thus adding over 1,000 reference, consumer and business publications to its Web-based journal service.
-
Nieuwenhuysen, P.; Vanouplines, P.: Document plus program hybrids on the Internet and their impact on information transfer (1998)
0.05
0.05106178 = product of:
  0.20424712 = sum of:
    0.20424712 = weight(_text_:java in 2893) [ClassicSimilarity], result of:
      0.20424712 = score(doc=2893,freq=2.0), product of:
        0.43718386 = queryWeight, product of:
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.06203383 = queryNorm
        0.46718815 = fieldWeight in 2893, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.046875 = fieldNorm(doc=2893)
  0.25 = coord(1/4)
- Abstract
- Examines some of the advanced tools, techniques, methods and standards related to the Internet and WWW which consist of hybrids of documents and software, called 'document program hybrids'. Early Internet systems were based on having documents on one side and software on the other, neatly separated and without much interaction, so that the static document could also exist without computers and networks. Document program hybrids blur this classical distinction: all components are integrated, interwoven and exist in synergy with each other. Illustrates the techniques with particular reference to practical examples, including: data collections and dedicated software; advanced HTML features on the WWW; multimedia viewers and plug-in software for Internet and WWW browsers; VRML; interaction through a Web server with other servers and with instruments; adaptive hypertext provided by the server; 'webbots', 'knowbots', 'searchbots', 'metasearch engines' and other intelligent software agents; Sun's Java; Microsoft's ActiveX; program scripts for HTML and Web browsers; cookies; and Internet push technology with Webcasting channels.
-
Mills, T.; Moody, K.; Rodden, K.: Providing world wide access to historical sources (1997)
0.05
0.05106178 = product of:
  0.20424712 = sum of:
    0.20424712 = weight(_text_:java in 3697) [ClassicSimilarity], result of:
      0.20424712 = score(doc=3697,freq=2.0), product of:
        0.43718386 = queryWeight, product of:
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.06203383 = queryNorm
        0.46718815 = fieldWeight in 3697, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.046875 = fieldNorm(doc=3697)
  0.25 = coord(1/4)
- Abstract
- A unique collection of historical material covering the lives and events of an English village between 1400 and 1750 has been made available via a WWW-enabled information retrieval system. Since the expected readership of the documents ranges from school children to experienced researchers, providing this information in an easily accessible form has offered many challenges, requiring tools to aid searching and browsing. The file structure of the document collection was replaced by a database, enabling query results to be presented on the fly. A Java interface displays each user's context in a form that allows for easy and intuitive relevance feedback.
-
Maarek, Y.S.: WebCutter : a system for dynamic and tailorable site mapping (1997)
0.05
0.05106178 = product of:
  0.20424712 = sum of:
    0.20424712 = weight(_text_:java in 3739) [ClassicSimilarity], result of:
      0.20424712 = score(doc=3739,freq=2.0), product of:
        0.43718386 = queryWeight, product of:
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.06203383 = queryNorm
        0.46718815 = fieldWeight in 3739, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.046875 = fieldNorm(doc=3739)
  0.25 = coord(1/4)
- Abstract
- Presents an approach that integrates searching and browsing in a manner that improves both paradigms. When browsing is the primary task, it enables semantic, content-based tailoring of Web maps in both the generation and the visualization phases. When search is the primary task, it enables contextualization of the results by augmenting them with the documents' neighbourhoods. This approach is embodied in WebCutter, a client-server system fully integrated with Web software. WebCutter consists of a map generator running off a standard Web server and a map visualization client implemented as a Java applet runnable from any standard Web browser and requiring no installation or external plug-in application. WebCutter is in beta stage and is in the process of being integrated into the Lotus Domino application product line.
-
Pan, B.; Gay, G.; Saylor, J.; Hembrooke, H.: One digital library, two undergraduate classes, and four learning modules : uses of a digital library in classrooms (2006)
0.05
0.05106178 = product of:
  0.20424712 = sum of:
    0.20424712 = weight(_text_:java in 907) [ClassicSimilarity], result of:
      0.20424712 = score(doc=907,freq=2.0), product of:
        0.43718386 = queryWeight, product of:
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.06203383 = queryNorm
        0.46718815 = fieldWeight in 907, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.046875 = fieldNorm(doc=907)
  0.25 = coord(1/4)
- Abstract
- The KMODDL (kinematic models for design digital library) is a digital library based on a historical collection of kinematic models made of steel and bronze. The digital library contains four types of learning modules including textual materials, QuickTime virtual reality movies, Java simulations, and stereolithographic files of the physical models. The authors report an evaluation study on the uses of the KMODDL in two undergraduate classes. This research reveals that the users in different classes encountered different usability problems, and reported quantitatively different subjective experiences. Further, the results indicate that depending on the subject area, the two user groups preferred different types of learning modules, resulting in different uses of the available materials and different learning outcomes. These findings are discussed in terms of their implications for future digital library design.
-
Mongin, L.; Fu, Y.Y.; Mostafa, J.: Open Archives data Service prototype and automated subject indexing using D-Lib archive content as a testbed (2003)
0.05
0.05106178 = product of:
  0.20424712 = sum of:
    0.20424712 = weight(_text_:java in 2167) [ClassicSimilarity], result of:
      0.20424712 = score(doc=2167,freq=2.0), product of:
        0.43718386 = queryWeight, product of:
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.06203383 = queryNorm
        0.46718815 = fieldWeight in 2167, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.046875 = fieldNorm(doc=2167)
  0.25 = coord(1/4)
- Abstract
- The Indiana University School of Library and Information Science opened a new research laboratory in January 2003: the Information Processing Laboratory (IU IP Lab). The purpose of the new laboratory is to facilitate collaboration between scientists in the department in the areas of information retrieval (IR) and information visualization (IV) research. The lab has several areas of focus, including grid and cluster computing and a standard Java-based software platform to support plug-and-play research datasets, a selection of standard IR modules, and standard IV algorithms. Future development includes software to enable researchers to contribute datasets, IR algorithms, and visualization algorithms to the standard environment. We decided early on to use OAI-PMH as a resource discovery tool because it is consistent with our mission.
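As an illustration of the OAI-PMH harvesting mentioned above, here is a minimal sketch assuming a generic OAI-PMH 2.0 endpoint; the base URL is a placeholder, not the IU IP Lab's actual service, and only the verb and metadataPrefix parameters are standard protocol elements.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal OAI-PMH harvesting sketch: issue a ListRecords request and print
// the start of the XML response.
public class OaiHarvestSketch {
    public static void main(String[] args) throws Exception {
        String base = "https://example.org/oai";   // hypothetical repository endpoint
        URI uri = URI.create(base + "?verb=ListRecords&metadataPrefix=oai_dc");

        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(HttpRequest.newBuilder(uri).GET().build(),
                      HttpResponse.BodyHandlers.ofString());

        // The response is an XML document containing <record> elements and possibly
        // a <resumptionToken> for fetching the next batch.
        System.out.println(resp.body().substring(0, Math.min(500, resp.body().length())));
    }
}
```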
-
Song, R.; Luo, Z.; Nie, J.-Y.; Yu, Y.; Hon, H.-W.: Identification of ambiguous queries in web search (2009)
0.05
0.05106178 = product of:
  0.20424712 = sum of:
    0.20424712 = weight(_text_:java in 3441) [ClassicSimilarity], result of:
      0.20424712 = score(doc=3441,freq=2.0), product of:
        0.43718386 = queryWeight, product of:
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.06203383 = queryNorm
        0.46718815 = fieldWeight in 3441, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.046875 = fieldNorm(doc=3441)
  0.25 = coord(1/4)
- Abstract
- It is widely believed that many queries submitted to search engines are inherently ambiguous (e.g., java and apple). However, few studies have tried to classify queries based on ambiguity or to answer the question of what proportion of queries is ambiguous. This paper deals with these issues. First, we clarify the definition of ambiguous queries by constructing a taxonomy of queries ranging from ambiguous to specific. Second, we ask human annotators to manually classify queries. From the manually labeled results, we observe that query ambiguity is to some extent predictable. Third, we propose a supervised learning approach to automatically identify ambiguous queries. Experimental results show that we can correctly identify 87% of labeled queries with this approach. Finally, by using our approach, we estimate that about 16% of queries in a real search log are ambiguous.
-
Croft, W.B.; Metzler, D.; Strohman, T.: Search engines : information retrieval in practice (2010)
0.05
0.05106178 = product of:
  0.20424712 = sum of:
    0.20424712 = weight(_text_:java in 3605) [ClassicSimilarity], result of:
      0.20424712 = score(doc=3605,freq=2.0), product of:
        0.43718386 = queryWeight, product of:
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.06203383 = queryNorm
        0.46718815 = fieldWeight in 3605, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.046875 = fieldNorm(doc=3605)
  0.25 = coord(1/4)
- Abstract
- For introductory information retrieval courses at the undergraduate and graduate level in computer science, information science and computer engineering departments. Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice is designed to give undergraduate students the understanding and tools they need to evaluate, compare and modify search engines. Coverage of the underlying IR and mathematical models reinforces key concepts. The book's numerous programming exercises make extensive use of Galago, a Java-based open source search engine. Supplements: extensive lecture slides (in PDF and PPT format); solutions to selected end-of-chapter problems (instructors only); test collections for exercises; the Galago search engine.
-
Tang, X.-B.; Wei Wei, G.-C.L.; Zhu, J.: ¬An inference model of medical insurance fraud detection : based on ontology and SWRL (2017)
0.05
0.05106178 = product of:
  0.20424712 = sum of:
    0.20424712 = weight(_text_:java in 4615) [ClassicSimilarity], result of:
      0.20424712 = score(doc=4615,freq=2.0), product of:
        0.43718386 = queryWeight, product of:
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.06203383 = queryNorm
        0.46718815 = fieldWeight in 4615, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.046875 = fieldNorm(doc=4615)
  0.25 = coord(1/4)
- Abstract
- Medical insurance fraud is common in many countries' medical insurance systems and represents a serious threat to the insurance funds and the benefits of patients. In this paper, we present an inference model of medical insurance fraud detection, based on a medical detection domain ontology that incorporates the knowledge base provided by the Medical Terminology, NKIMed, and Chinese Library Classification systems. Through analyzing the behaviors of irregular and fraudulent medical services, we defined the scope of the medical domain ontology relevant to the task and built the ontology about medical sciences and medical service behaviors. The system then utilizes the Semantic Web Rule Language (SWRL) and the Java Expert System Shell (JESS) to detect medical irregularities and mine implicit knowledge. The system can be used to improve the management of medical insurance risks.
-
Kerr, M.: Using the Internet for business information : practical tips and hints (2004)
0.05
0.04684681 = product of:
  0.18738724 = sum of:
    0.18738724 = weight(_text_:hints in 5505) [ClassicSimilarity], result of:
      0.18738724 = score(doc=5505,freq=2.0), product of:
        0.5128635 = queryWeight, product of:
          8.267481 = idf(docFreq=30, maxDocs=44421)
          0.06203383 = queryNorm
        0.36537448 = fieldWeight in 5505, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          8.267481 = idf(docFreq=30, maxDocs=44421)
          0.03125 = fieldNorm(doc=5505)
  0.25 = coord(1/4)
-
Cohen, D.J.: From Babel to knowledge : data mining large digital collections (2006)
0.05
0.04684681 = product of:
  0.18738724 = sum of:
    0.18738724 = weight(_text_:hints in 2178) [ClassicSimilarity], result of:
      0.18738724 = score(doc=2178,freq=2.0), product of:
        0.5128635 = queryWeight, product of:
          8.267481 = idf(docFreq=30, maxDocs=44421)
          0.06203383 = queryNorm
        0.36537448 = fieldWeight in 2178, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          8.267481 = idf(docFreq=30, maxDocs=44421)
          0.03125 = fieldNorm(doc=2178)
  0.25 = coord(1/4)
- Abstract
- In Jorge Luis Borges's curious short story The Library of Babel, the narrator describes an endless collection of books stored from floor to ceiling in a labyrinth of countless hexagonal rooms. The pages of the library's books seem to contain random sequences of letters and spaces; occasionally a few intelligible words emerge in the sea of paper and ink. Nevertheless, readers diligently, and exasperatingly, scan the shelves for coherent passages. The narrator himself has wandered numerous rooms in search of enlightenment, but with resignation he simply awaits his death and burial - which Borges explains (with signature dark humor) consists of being tossed unceremoniously over the library's banister. Borges's nightmare, of course, is a cursed vision of the research methods of disciplines such as literature, history, and philosophy, where the careful reading of books, one after the other, is supposed to lead inexorably to knowledge and understanding. Computer scientists would approach Borges's library far differently. Employing the information theory that forms the basis for search engines and other computerized techniques for assessing in one fell swoop large masses of documents, they would quickly realize the collection's incoherence through sampling and statistical methods - and wisely start looking for the library's exit. These computational methods, which allow us to find patterns, determine relationships, categorize documents, and extract information from massive corpuses, will form the basis for new tools for research in the humanities and other disciplines in the coming decade. For the past three years I have been experimenting with how to provide such end-user tools - that is, tools that harness the power of vast electronic collections while hiding much of their complicated technical plumbing. In particular, I have made extensive use of the application programming interfaces (APIs) the leading search engines provide for programmers to query their databases directly (from server to server without using their web interfaces). In addition, I have explored how one might extract information from large digital collections, from the well-curated lexicographic database WordNet to the democratic (and poorly curated) online reference work Wikipedia. While processing these digital corpuses is currently an imperfect science, even now useful tools can be created by combining various collections and methods for searching and analyzing them. And more importantly, these nascent services suggest a future in which information can be gleaned from, and sense can be made out of, even imperfect digital libraries of enormous scale. A brief examination of two approaches to data mining large digital collections hints at this future, while also providing some lessons about how to get there.
-
Chen, H.; Chung, Y.-M.; Ramsey, M.; Yang, C.C.: ¬A smart itsy bitsy spider for the Web (1998)
0.04
0.042551484 = product of:
  0.17020594 = sum of:
    0.17020594 = weight(_text_:java in 1871) [ClassicSimilarity], result of:
      0.17020594 = score(doc=1871,freq=2.0), product of:
        0.43718386 = queryWeight, product of:
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.06203383 = queryNorm
        0.38932347 = fieldWeight in 1871, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.0390625 = fieldNorm(doc=1871)
  0.25 = coord(1/4)
- Abstract
- As part of the ongoing Illinois Digital Library Initiative project, this research proposes an intelligent agent approach to Web searching. In this experiment, we developed 2 Web personal spiders based on best first search and genetic algorithm techniques, respectively. These personal spiders can dynamically take a user's selected starting homepages and search for the most closely related homepages in the Web, based on the links and keyword indexing. A graphical, dynamic, Java-based interface was developed and is available for Web access. A system architecture for implementing such an agent-spider is presented, followed by detailed discussions of benchmark testing and user evaluation results. In benchmark testing, although the genetic algorithm spider did not outperform the best first search spider, we found both results to be comparable and complementary. In user evaluation, the genetic algorithm spider obtained a significantly higher recall value than that of the best first search spider. However, their precision values were not statistically different. The mutation process introduced in genetic algorithms allows users to find other potentially relevant homepages that cannot be explored via a conventional local search process. In addition, we found the Java-based interface to be a necessary component for design of a truly interactive and dynamic Web agent.
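To make the best first search strategy concrete, here is a toy frontier sketch; the pages, links and keyword-overlap scoring are invented stand-ins, not the authors' spider. The idea is simply that the crawler always expands the page currently estimated to be most relevant.

```java
import java.util.*;

// Toy best-first search frontier (no real networking): a "web" of pages with
// keywords and links, explored in order of estimated relevance to the query.
public class BestFirstSpiderSketch {
    record Page(String url, Set<String> keywords, List<String> links) {}

    public static void main(String[] args) {
        Map<String, Page> web = Map.of(
            "start", new Page("start", Set.of("library"), List.of("a", "b")),
            "a", new Page("a", Set.of("library", "digital"), List.of("c")),
            "b", new Page("b", Set.of("sports"), List.of()),
            "c", new Page("c", Set.of("digital", "library", "agent"), List.of()));
        Set<String> query = Set.of("digital", "library", "agent");

        // frontier ordered by estimated relevance; best-first = always expand the top page
        PriorityQueue<String> frontier = new PriorityQueue<>(
            Comparator.comparingInt((String u) -> -score(web.get(u), query)));
        Set<String> visited = new HashSet<>();
        frontier.add("start");

        while (!frontier.isEmpty() && visited.size() < 10) {
            String url = frontier.poll();
            if (!visited.add(url)) continue;
            System.out.println(url + " score=" + score(web.get(url), query));
            for (String next : web.get(url).links())
                if (!visited.contains(next)) frontier.add(next);
        }
    }

    // crude relevance estimate: keyword overlap with the query
    static int score(Page p, Set<String> query) {
        Set<String> overlap = new HashSet<>(p.keywords());
        overlap.retainAll(query);
        return overlap.size();
    }
}
```

A genetic algorithm spider, by contrast, would maintain a population of candidate pages and apply crossover and mutation to escape the local neighbourhood that a purely greedy frontier explores.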
-
Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006)
0.04
0.042551484 = product of:
  0.17020594 = sum of:
    0.17020594 = weight(_text_:java in 272) [ClassicSimilarity], result of:
      0.17020594 = score(doc=272,freq=2.0), product of:
        0.43718386 = queryWeight, product of:
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.06203383 = queryNorm
        0.38932347 = fieldWeight in 272, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.0390625 = fieldNorm(doc=272)
  0.25 = coord(1/4)
- Abstract
- This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases. A research front is defined as an emergent and transient grouping of concepts and underlying research issues. The intellectual base of a research front is its citation and co-citation footprint in scientific literature - an evolving network of scientific publications cited by research-front concepts. Kleinberg's (2002) burst-detection algorithm is adapted to identify emergent research-front concepts. Freeman's (1979) betweenness centrality metric is used to highlight potential pivotal points of paradigm shift over time. Two complementary visualization views are designed and implemented: cluster views and time-zone views. The contributions of the approach are that (a) the nature of an intellectual base is algorithmically and temporally identified by emergent research-front terms, (b) the value of a co-citation cluster is explicitly interpreted in terms of research-front concepts, and (c) visually prominent and algorithmically detected pivotal points substantially reduce the complexity of a visualized network. The modeling and visualization process is implemented in CiteSpace II, a Java application, and applied to the analysis of two research fields: mass extinction (1981-2004) and terrorism (1990-2003). Prominent trends and pivotal points in visualized networks were verified in collaboration with domain experts, who are the authors of pivotal-point articles. Practical implications of the work are discussed. A number of challenges and opportunities for future studies are identified.
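As a pointer to the underlying measure (standard material, not taken from the article itself): Freeman's betweenness centrality, which CiteSpace uses to flag potential pivotal points, is usually written as

```latex
C_B(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}
```

where \sigma_{st} is the number of shortest paths between nodes s and t in the co-citation network and \sigma_{st}(v) is the number of those paths that pass through v; nodes with high values sit on many shortest paths between clusters and are therefore candidate pivots.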
-
Eddings, J.: How the Internet works (1994)
0.04
0.042551484 = product of:
  0.17020594 = sum of:
    0.17020594 = weight(_text_:java in 2514) [ClassicSimilarity], result of:
      0.17020594 = score(doc=2514,freq=2.0), product of:
        0.43718386 = queryWeight, product of:
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.06203383 = queryNorm
        0.38932347 = fieldWeight in 2514, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.0390625 = fieldNorm(doc=2514)
  0.25 = coord(1/4)
- Abstract
- How the Internet Works promises "an exciting visual journey down the highways and byways of the Internet," and it delivers. The book's high-quality graphics and simple, succinct text make it the ideal book for beginners; however it still has much to offer for Net vets. This book is jam-packed with cool ways to visualize how the Net works. The first section visually explores how TCP/IP, Winsock, and other Net connectivity mysteries work. This section also helps you understand how e-mail addresses and domains work, what file types mean, and how information travels across the Net. Part 2 unravels the Net's underlying architecture, including good information on how routers work and what is meant by client/server architecture. The third section covers your own connection to the Net through an Internet Service Provider (ISP), and how ISDN, cable modems, and Web TV work. Part 4 discusses e-mail, spam, newsgroups, Internet Relay Chat (IRC), and Net phone calls. In part 5, you'll find out how other Net tools, such as gopher, telnet, WAIS, and FTP, can enhance your Net experience. The sixth section takes on the World Wide Web, including everything from how HTML works to image maps and forms. Part 7 looks at other Web features such as push technology, Java, ActiveX, and CGI scripting, while part 8 deals with multimedia on the Net. Part 9 shows you what intranets are and covers groupware, and shopping and searching the Net. The book wraps up with part 10, a chapter on Net security that covers firewalls, viruses, cookies, and other Web tracking devices, plus cryptography and parental controls.
-
Wu, D.; Shi, J.: Classical music recording ontology used in a library catalog (2016)
0.04
0.042551484 = product of:
  0.17020594 = sum of:
    0.17020594 = weight(_text_:java in 4179) [ClassicSimilarity], result of:
      0.17020594 = score(doc=4179,freq=2.0), product of:
        0.43718386 = queryWeight, product of:
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.06203383 = queryNorm
        0.38932347 = fieldWeight in 4179, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.0390625 = fieldNorm(doc=4179)
  0.25 = coord(1/4)
- Abstract
- In order to improve the organization of classical music information resources, we constructed a classical music recording ontology, on top of which we then designed an online classical music catalog. Our construction of the classical music recording ontology consisted of three steps: identifying the purpose, analyzing the ontology, and encoding the ontology. We identified the main classes and properties of the domain by investigating classical music recording resources and users' information needs. We implemented the ontology in the Web Ontology Language (OWL) using five steps: transforming the properties, encoding the transformed properties, defining ranges of the properties, constructing individuals, and standardizing the ontology. In constructing the online catalog, we first designed the structure and functions of the catalog based on investigations into users' information needs and information-seeking behaviors. Then we extracted classes and properties of the ontology using the Apache Jena application programming interface (API), and constructed a catalog in the Java environment. The catalog provides a hierarchical main page (built using the Functional Requirements for Bibliographic Records (FRBR) model), a classical music information network and integrated information service; this combination of features greatly eases the task of finding classical music recordings and more information about classical music.
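A minimal sketch of the Jena-based extraction step described above, assuming a local OWL file; the file name and the printed output are placeholders, not the authors' catalog code.

```java
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.rdf.model.ModelFactory;

// Sketch: load an OWL ontology with Apache Jena and list its named classes with
// their declared properties, roughly the extraction step preceding catalog construction.
public class OntologyCatalogSketch {
    public static void main(String[] args) {
        OntModel model = ModelFactory.createOntologyModel();
        model.read("classical-music-recording.owl");   // hypothetical ontology file

        model.listNamedClasses().forEachRemaining((OntClass c) -> {
            System.out.println("Class: " + c.getLocalName());
            c.listDeclaredProperties(true).forEachRemaining(p ->
                System.out.println("  property: " + p.getLocalName()));
        });
    }
}
```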
-
Bourne, C.P.; Hahn, T.B.: ¬A history of online information services : 1963-1976 (2003)
0.04
0.040990956 = product of:
  0.16396382 = sum of:
    0.16396382 = weight(_text_:hints in 121) [ClassicSimilarity], result of:
      0.16396382 = score(doc=121,freq=2.0), product of:
        0.5128635 = queryWeight, product of:
          8.267481 = idf(docFreq=30, maxDocs=44421)
          0.06203383 = queryNorm
        0.31970266 = fieldWeight in 121, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          8.267481 = idf(docFreq=30, maxDocs=44421)
          0.02734375 = fieldNorm(doc=121)
  0.25 = coord(1/4)
- Footnote
- Overall, Bourne and Hahn's book is richly detailed and extensively documented. In the book's introduction, the authors provide a good overview of other online system histories, but they also write about a lack of archival and secondary sources in this area. This explains why it took the authors 15 years to gather information for this volume, most of it derived from technical reports, newsletters, and personal interviews. From a research standpoint, the authors have done an excellent job. However, while no one can take issue with the book's level of scholarship, the presentation of the research could have been more effective. The majority of the book is written in a straightforward, factual manner that is difficult to read as an historical narrative. Except for Chapter 10, there is very little writing in the book that engages the reader and captures the human side of the online information retrieval story. A quote from W. Boyd Rayward on the back of the book's dust cover calls the work "encyclopedic," and in many ways the book as it exists would have worked better as an encyclopedia. Even the book's layout, with double instead of single columns, hints at its reference-like qualities. To be fair, though, it is entirely possible that Bourne and Hahn may have wanted to create a book with a human interest angle, but the lack of documentation may have prevented them from creating such a work. In short, A History of Online Information Services, 1963-1976 does a commendable job of encapsulating the significant people, organizations, and events that helped shape early online information services. Given the problems Bourne and Hahn had in gathering historical evidence for their book, it makes one wonder about the implications for future historical work in the online field. One can only hope that organizations are archiving enough historical material to be able to write the post-1976 online story.
-
Noerr, P.: ¬The Digital Library Tool Kit (2001)
0.03
0.034041185 = product of:
  0.13616474 = sum of:
    0.13616474 = weight(_text_:java in 774) [ClassicSimilarity], result of:
      0.13616474 = score(doc=774,freq=2.0), product of:
        0.43718386 = queryWeight, product of:
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.06203383 = queryNorm
        0.31145877 = fieldWeight in 774, product of:
          1.4142135 = tf(freq=2.0), with freq of:
            2.0 = termFreq=2.0
          7.0475073 = idf(docFreq=104, maxDocs=44421)
          0.03125 = fieldNorm(doc=774)
  0.25 = coord(1/4)
- Footnote
- This Digital Library Tool Kit was sponsored by Sun Microsystems, Inc. to address some of the leading questions that academic institutions, public libraries, government agencies, and museums face in trying to develop, manage, and distribute digital content. The evolution of Java programming, digital object standards, Internet access, electronic commerce, and digital media management models is causing educators, CIOs, and librarians to rethink many of their traditional goals and modes of operation. New audiences, continuous access to collections, and enhanced services to user communities are enabled. As one of the leading technology providers to education and library communities, Sun is pleased to present this comprehensive introduction to digital libraries