-
Maarek, Y.S.: WebCutter : a system for dynamic and tailorable site mapping (1997)
0.05
0.05207001 = product of:
0.20828004 = sum of:
0.20828004 = weight(_text_:java in 3739) [ClassicSimilarity], result of:
0.20828004 = score(doc=3739,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.46718815 = fieldWeight in 3739, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.046875 = fieldNorm(doc=3739)
0.25 = coord(1/4)
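The explanation trees in this listing all follow Lucene's ClassicSimilarity (TF-IDF) scoring. As a check, the numbers in the tree above recombine as follows, a worked reconstruction of the standard formula with tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)):

$$
\begin{aligned}
\mathrm{idf} &= 1 + \ln\!\frac{44421}{104+1} \approx 7.0475073\\
\mathrm{queryWeight} &= \mathrm{idf}\cdot\mathrm{queryNorm} = 7.0475073 \times 0.06325871 \approx 0.4458162\\
\mathrm{fieldWeight} &= \sqrt{2}\cdot\mathrm{idf}\cdot\mathrm{fieldNorm} = 1.4142135 \times 7.0475073 \times 0.046875 \approx 0.46718815\\
\mathrm{score} &= \mathrm{coord}\cdot\mathrm{queryWeight}\cdot\mathrm{fieldWeight} = 0.25 \times 0.4458162 \times 0.46718815 \approx 0.05207001
\end{aligned}
$$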
- Abstract
- Presents an approach that integrates searching and browsing in a manner that improves both paradigms. When browsing is the primary task, it enables semantic, content-based tailoring of Web maps in both the generation and the visualization phases. When search is the primary task, it enables contextualization of the results by augmenting them with the documents' neighbourhoods. This approach is embodied in WebCutter, a client-server system fully integrated with Web software. WebCutter consists of a map generator running off a standard Web server and a map visualization client implemented as a Java applet runnable from any standard Web browser, requiring no installation or external plug-in application. WebCutter is in beta stage and is being integrated into the Lotus Domino application product line.
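As a sketch of the applet-style architecture described, with the endpoint name and data format invented rather than taken from WebCutter's actual code:

```java
import java.applet.Applet;
import java.awt.Graphics;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Hypothetical map-viewer applet: the map generator runs server-side;
// the applet only fetches and renders its output.
public class MapViewerApplet extends Applet {
    private String mapData = "";

    @Override
    public void init() {
        try {
            URL source = new URL(getCodeBase(), "cgi-bin/mapgen?site=example");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(source.openStream()))) {
                StringBuilder sb = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) sb.append(line).append('\n');
                mapData = sb.toString();
            }
        } catch (Exception e) {
            mapData = "map unavailable: " + e.getMessage();
        }
    }

    @Override
    public void paint(Graphics g) {
        // A real viewer would lay out nodes and links; this stub echoes
        // the first line of the generator's output.
        g.drawString(mapData.isEmpty() ? "loading..." : mapData.split("\n")[0], 10, 20);
    }
}
```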
-
Pan, B.; Gay, G.; Saylor, J.; Hembrooke, H.: One digital library, two undergraduate classes, and four learning modules : uses of a digital library in classrooms (2006)
0.05
0.05207001 = product of:
0.20828004 = sum of:
0.20828004 = weight(_text_:java in 907) [ClassicSimilarity], result of:
0.20828004 = score(doc=907,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.46718815 = fieldWeight in 907, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.046875 = fieldNorm(doc=907)
0.25 = coord(1/4)
- Abstract
- The KMODDL (kinematic models for design digital library) is a digital library based on a historical collection of kinematic models made of steel and bronze. The digital library contains four types of learning modules including textual materials, QuickTime virtual reality movies, Java simulations, and stereolithographic files of the physical models. The authors report an evaluation study on the uses of the KMODDL in two undergraduate classes. This research reveals that the users in different classes encountered different usability problems, and reported quantitatively different subjective experiences. Further, the results indicate that depending on the subject area, the two user groups preferred different types of learning modules, resulting in different uses of the available materials and different learning outcomes. These findings are discussed in terms of their implications for future digital library design.
-
Mongin, L.; Fu, Y.Y.; Mostafa, J.: Open Archives data Service prototype and automated subject indexing using D-Lib archive content as a testbed (2003)
0.05
0.05207001 = product of:
0.20828004 = sum of:
0.20828004 = weight(_text_:java in 2167) [ClassicSimilarity], result of:
0.20828004 = score(doc=2167,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.46718815 = fieldWeight in 2167, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.046875 = fieldNorm(doc=2167)
0.25 = coord(1/4)
- Abstract
- The Indiana University School of Library and Information Science opened a new research laboratory in January 2003: the Indiana University School of Library and Information Science Information Processing Laboratory (IU IP Lab). The purpose of the new laboratory is to facilitate collaboration between scientists in the department in the areas of information retrieval (IR) and information visualization (IV) research. The lab has several areas of focus, including grid and cluster computing and a standard Java-based software platform that supports plug-and-play research datasets, a selection of standard IR modules, and standard IV algorithms. Future development includes software to enable researchers to contribute datasets, IR algorithms, and visualization algorithms into the standard environment. We decided early on to use OAI-PMH as a resource discovery tool because it is consistent with our mission.
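Since OAI-PMH is plain HTTP plus XML, a harvester needs nothing beyond an HTTP client. A minimal sketch of a ListRecords request in Java; the base URL is a placeholder, not the lab's actual service:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal OAI-PMH harvest sketch: issue a ListRecords request for
// Dublin Core records and print the raw XML response.
public class OaiHarvestSketch {
    public static void main(String[] args) throws Exception {
        String baseUrl = "http://example.org/oai"; // placeholder endpoint
        URL request = new URL(baseUrl + "?verb=ListRecords&metadataPrefix=oai_dc");
        HttpURLConnection conn = (HttpURLConnection) request.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // a real harvester parses this XML
            }
        }
        // Responses larger than one page carry a <resumptionToken>;
        // a full harvester loops, re-requesting with that token.
    }
}
```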
-
Song, R.; Luo, Z.; Nie, J.-Y.; Yu, Y.; Hon, H.-W.: Identification of ambiguous queries in web search (2009)
0.05
0.05207001 = product of:
0.20828004 = sum of:
0.20828004 = weight(_text_:java in 3441) [ClassicSimilarity], result of:
0.20828004 = score(doc=3441,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.46718815 = fieldWeight in 3441, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.046875 = fieldNorm(doc=3441)
0.25 = coord(1/4)
- Abstract
- It is widely believed that many queries submitted to search engines are inherently ambiguous (e.g., java and apple). However, few studies have tried to classify queries based on ambiguity and to answer "what the proportion of ambiguous queries is". This paper deals with these issues. First, we clarify the definition of ambiguous queries by constructing the taxonomy of queries from being ambiguous to specific. Second, we ask human annotators to manually classify queries. From manually labeled results, we observe that query ambiguity is to some extent predictable. Third, we propose a supervised learning approach to automatically identify ambiguous queries. Experimental results show that we can correctly identify 87% of labeled queries with the approach. Finally, by using our approach, we estimate that about 16% of queries in a real search log are ambiguous.
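The abstract does not name the features or the learner used, so any concrete code is necessarily hypothetical. As a sketch of the general shape of such a supervised approach, a toy perceptron over invented query features:

```java
// Toy supervised classifier for query ambiguity: a perceptron over
// hypothetical features (result-click entropy, lexicon senses, query
// length). Features, data, and learner are invented for illustration;
// the paper's actual method is not specified in the abstract.
public class AmbiguityClassifierSketch {
    double[] w = new double[3]; // one weight per feature
    double bias = 0.0;

    int predict(double[] x) {
        double s = bias;
        for (int i = 0; i < w.length; i++) s += w[i] * x[i];
        return s >= 0 ? 1 : 0; // 1 = ambiguous, 0 = specific
    }

    void train(double[][] xs, int[] labels, int epochs) {
        for (int e = 0; e < epochs; e++)
            for (int i = 0; i < xs.length; i++) {
                int err = labels[i] - predict(xs[i]); // -1, 0, or +1
                for (int j = 0; j < w.length; j++) w[j] += err * xs[i][j];
                bias += err;
            }
    }

    public static void main(String[] args) {
        // Each row: [click entropy, lexicon senses, query length] (made up).
        double[][] xs = {{0.9, 3, 1}, {0.1, 1, 4}, {0.8, 2, 1}, {0.2, 1, 3}};
        int[] labels = {1, 0, 1, 0}; // human annotations
        AmbiguityClassifierSketch c = new AmbiguityClassifierSketch();
        c.train(xs, labels, 20);
        System.out.println(c.predict(new double[]{0.85, 2, 1})); // prints 1 (ambiguous)
    }
}
```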
-
Croft, W.B.; Metzler, D.; Strohman, T.: Search engines : information retrieval in practice (2010)
0.05
0.05207001 = product of:
0.20828004 = sum of:
0.20828004 = weight(_text_:java in 3605) [ClassicSimilarity], result of:
0.20828004 = score(doc=3605,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.46718815 = fieldWeight in 3605, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.046875 = fieldNorm(doc=3605)
0.25 = coord(1/4)
- Abstract
- For introductory information retrieval courses at the undergraduate and graduate level in computer science, information science and computer engineering departments. Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice is designed to give undergraduate students the understanding and tools they need to evaluate, compare and modify search engines. Coverage of the underlying IR and mathematical models reinforces key concepts. The book's numerous programming exercises make extensive use of Galago, a Java-based open source search engine. SUPPLEMENTS / Extensive lecture slides (in PDF and PPT format) / Solutions to selected end-of-chapter problems (instructors only) / Test collections for exercises / Galago search engine
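The scoring model such a course builds up to can be sketched in a few lines of plain Java; this is a from-scratch TF-IDF illustration, not Galago's actual API:

```java
import java.util.List;

// Toy TF-IDF scorer over an in-memory "collection", sketching the
// retrieval model the book develops.
public class TfIdfSketch {
    static long termFreq(String doc, String term) {
        long tf = 0;
        for (String token : doc.split("\\s+")) if (token.equals(term)) tf++;
        return tf;
    }

    public static void main(String[] args) {
        List<String> docs = List.of(
                "java search engine", "search engines in practice", "java java applet");
        String term = "java";
        long df = docs.stream().filter(d -> termFreq(d, term) > 0).count();
        double idf = Math.log((double) docs.size() / df); // simplest idf variant
        for (String d : docs) {
            double score = termFreq(d, term) * idf; // tf * idf
            System.out.printf("%.3f  %s%n", score, d);
        }
    }
}
```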
-
Tang, X.-B.; Wei Wei, G.-C.L.; Zhu, J.: ¬An inference model of medical insurance fraud detection : based on ontology and SWRL (2017)
0.05
0.05207001 = product of:
0.20828004 = sum of:
0.20828004 = weight(_text_:java in 4615) [ClassicSimilarity], result of:
0.20828004 = score(doc=4615,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.46718815 = fieldWeight in 4615, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.046875 = fieldNorm(doc=4615)
0.25 = coord(1/4)
- Abstract
- Medical insurance fraud is common in many countries' medical insurance systems and represents a serious threat to the insurance funds and the benefits of patients. In this paper, we present an inference model of medical insurance fraud detection, based on a medical detection domain ontology that incorporates the knowledge base provided by the Medical Terminology, NKIMed, and Chinese Library Classification systems. Through analyzing the behaviors of irregular and fraudulent medical services, we defined the scope of the medical domain ontology relevant to the task and built the ontology about medical sciences and medical service behaviors. The model then uses the Semantic Web Rule Language (SWRL) and the Java Expert System Shell (JESS) to detect medical irregularities and mine implicit knowledge. The system can be used to improve the management of medical insurance risks.
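A SWRL rule pairs an antecedent of ontology atoms with a consequent that the rule engine (here JESS) asserts. A hypothetical rule in SWRL's human-readable syntax, with class and property names invented for illustration rather than taken from the paper's ontology:

```
MedicalService(?s) ^ hasCost(?s, ?c) ^ prescribedFor(?s, ?d) ^
referenceCost(?d, ?r) ^ swrlb:greaterThan(?c, ?r)
  -> IrregularService(?s)
```

Rules of this shape let the engine mark a service whose billed cost exceeds the reference cost for its diagnosis, which is the kind of implicit irregularity the abstract describes mining.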
-
Cooper, L.Z.: Methodology for a project examining cognitive categories for library information in young children (2002)
0.05
0.05100803 = product of:
0.20403212 = sum of:
0.20403212 = weight(_text_:hands in 2258) [ClassicSimilarity], result of:
0.20403212 = score(doc=2258,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.42211097 = fieldWeight in 2258, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.0390625 = fieldNorm(doc=2258)
0.25 = coord(1/4)
- Abstract
- This article presents an overview of some of the methodology used in a project that examined children's understanding of library information and how those perspectives change in the first 5 years of formal schooling. Because our understanding of information is reflected in the manner in which we classify, or typify, that information, children were invited to shelve (i.e., classify) terms representative of library books and then to label those categories in order to view the library collection from a child's perspective. The resulting shelf categories help us to see library information from a child's perspective. Data collection using group dialog, visual imagery, narrative, cooperative learning techniques, and hands-on manipulatives is described for one session of a project in which children used induction to form concepts related to knowledge organization in a hypothetical library. Analysis for this session included use of hierarchical clustering and multidimensional scaling to examine and compare children's constructions for qualitative differences on several grade levels. Following the description of data collection methods and analysis, a discussion focuses on the reasons for using these particular methods of data collection with a child population.
-
Ruiter, J. de: Aspects of dealing with digital information : "mature" novices on the Internet (2002)
0.05
0.05100803 = product of:
0.20403212 = sum of:
0.20403212 = weight(_text_:hands in 1046) [ClassicSimilarity], result of:
0.20403212 = score(doc=1046,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.42211097 = fieldWeight in 1046, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.0390625 = fieldNorm(doc=1046)
0.25 = coord(1/4)
- Abstract
- This article seeks to address the following questions: Why do certain people, who are fully information literate with printed materials, become hesitant and even reluctant when it comes to finding something on the Internet? And why do we, information professionals, find it difficult to support them adequately? Mature users of digital information are often skeptical about the value of the Internet as a source for professional information. Over the years much has been achieved, but many prophecies of the experts on digitalization from the early days still have not been fulfilled. Mature users do possess all the skills needed to be digital-information literate, but they need to be assisted in specific areas where those skills are insufficient. They tend to blame themselves even if shortcomings in the accessibility of digital sources and computer errors obstruct their search. Operating hardware requires a dexterity that can only be acquired by experience. Instruction should be hands-on; demonstration is far less effective. Special attention should be given to reading and interpreting navigation information on the screen and to the search strategies the Internet requires. Use of imagination and trial-and-error methods are to be recommended in this respect.
-
Greene, A.: Managing subject guides with SQL Server and ASP.Net (2008)
0.05
0.05100803 = product of:
0.20403212 = sum of:
0.20403212 = weight(_text_:hands in 3601) [ClassicSimilarity], result of:
0.20403212 = score(doc=3601,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.42211097 = fieldWeight in 3601, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.0390625 = fieldNorm(doc=3601)
0.25 = coord(1/4)
- Abstract
- Purpose - The purpose of this paper is to report on the content management solution for 50 subject guides maintained by librarian subject specialists at the University of Nevada, Reno Libraries. Design/methodology/approach - The Web Development Librarian designed an SQL Server database to store subject guide content and wrote ASP.Net scripts to generate dynamic web pages. Subject specialists provided input throughout the process. Hands-on workshops were held the summer before the new guides were launched. Findings - The new method has successfully produced consistent but individually customized subject guides while greatly reducing maintenance time. Simple reports reveal the association between guides and licensed resources. Using the system to create course-specific guides would be a useful follow-up project. Skills learned in training workshops should be refreshed at regular intervals to boost confidence and introduce changes in the system. Practical implications - The advantages of centralizing content and separating it from presentation cannot be overstated. More consistency and less maintenance is just the beginning. Once accomplished, a library can incorporate Web 2.0 features into the application by repurposing the data or modifying the ASP.Net template. The now-organized data is clean and ready to migrate to web services or next-generation research guides when the time is right. Originality/value - This paper uniquely reports on an SQL Server, ASP.Net solution for managing subject guides. SQL Server includes data management features that increase application security and ASP.Net offers built-in functionality for manipulating and presenting data. Utmost attention was given to creating simple user interfaces that enable subject specialists to create complex web pages without coding HTML.
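A minimal sketch of the kind of schema this implies, with table and column names invented rather than taken from the Reno Libraries database:

```sql
-- Hypothetical centralized subject-guide schema (SQL Server dialect).
CREATE TABLE Guides (
    GuideID    INT IDENTITY PRIMARY KEY,
    Subject    NVARCHAR(100) NOT NULL,
    Specialist NVARCHAR(100) NOT NULL
);

CREATE TABLE GuideResources (
    GuideID    INT REFERENCES Guides(GuideID),
    ResourceID INT NOT NULL,   -- key into the licensed-resource catalog
    Note       NVARCHAR(400)
);

-- "Simple reports reveal the association between guides and licensed
-- resources": one GROUP BY answers which guides use which resources.
SELECT g.Subject, COUNT(r.ResourceID) AS LicensedResources
FROM Guides g LEFT JOIN GuideResources r ON g.GuideID = r.GuideID
GROUP BY g.Subject;
```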
-
State, E.N.; Åsmul, A.B.: Building cataloging capacity for libraries in South Sudan : a north-south-south collaboration (2013)
0.05
0.05100803 = product of:
0.20403212 = sum of:
0.20403212 = weight(_text_:hands in 2944) [ClassicSimilarity], result of:
0.20403212 = score(doc=2944,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.42211097 = fieldWeight in 2944, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.0390625 = fieldNorm(doc=2944)
0.25 = coord(1/4)
- Abstract
- In the developing world, a common trend is to have north-south collaborations constituting most of the continuous professional development (CPD) activities for librarians. The emerging drift highlighted in this article is a north-south-south initiative aimed at rebuilding the University of Juba Library through capacity-building of the library staff. This article will illuminate the process of equipping library staff with cataloging skills in the absence of previous library training. This endeavor is a result of the collaborative efforts of Makerere University Library (MakLib) in Uganda and the University of Bergen Library (UoBL) in Norway under the Juba Library Automation Project (JULAP). JULAP's main objective is to rebuild the University of Juba Library with the components of library automation and training of library staff. This article will concentrate on the practical training of the library staff in cataloging and the hands-on training on the Koha integrated library system to lay the groundwork for computerized library services for the University of Juba. This article will also highlight the challenges and lessons learned so far while articulating strategies for the future.
-
Aranyi, G.; Schaik, P. van: Testing a model of user-experience with news websites : how research questions evolve (2016)
0.05
0.05100803 = product of:
0.20403212 = sum of:
0.20403212 = weight(_text_:hands in 4009) [ClassicSimilarity], result of:
0.20403212 = score(doc=4009,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.42211097 = fieldWeight in 4009, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.0390625 = fieldNorm(doc=4009)
0.25 = coord(1/4)
- Abstract
- Although the Internet has become a major source for accessing news, there is little research regarding users' experience with news sites. We conducted an experiment to test a comprehensive model of user experience with news sites that was developed previously by means of an online survey. Level of adoption (novel or adopted site) was controlled with a between-subjects manipulation. We collected participants' answers to psychometric scales at 2 times: after presentation of 5 screenshots of a news site and directly after 10 minutes of hands-on experience with the site. The model was extended with the prediction of users' satisfaction with news sites as a high-level design goal. A psychometric measure of trust in news providers was developed and added to the model to better predict people's intention to use particular news sites. The model presented in this article represents a theoretically founded, empirically tested basis for evaluating news websites, and it holds theoretical relevance to user-experience research in general. Finally, the findings and the model are applied to provide practical guidance in design prioritization.
-
Müller, V.C.: Pancomputationalism: theory or metaphor? (2014)
0.05
0.05100803 = product of:
0.20403212 = sum of:
0.20403212 = weight(_text_:hands in 4411) [ClassicSimilarity], result of:
0.20403212 = score(doc=4411,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.42211097 = fieldWeight in 4411, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.0390625 = fieldNorm(doc=4411)
0.25 = coord(1/4)
- Abstract
- Prelude: Some Science Fiction on the Ultimate Answer and The Ultimate Question. Many many millions of years ago a race of hyperintelligent pan-dimensional beings (whose physical manifestation in their own pan-dimensional universe is not dissimilar to our own) got so fed up with the constant bickering about the meaning of life which used to interrupt their favourite pastime of Brockian Ultra Cricket (a curious game which involved suddenly hitting people for no readily apparent reason and then running away) that they decided to sit down and solve their problems once and for all. And to this end they built themselves a stupendous super computer ... 'O Deep Thought computer', Fook said, 'the task we have designed you to perform is this. We want you to tell us ...' he paused, 'the Answer!' 'The Answer?' said Deep Thought. 'The Answer to what?' 'Life!' urged Fook. 'The Universe!' said Lunkwill. 'Everything!' they said in chorus. (At this point the whole procedure is interrupted by two representatives of the 'Amalgamated Union of Philosophers, Sages, Luminaries and Other Thinking Persons' who demand to switch off the machine because it endangers their jobs. They demand 'rigidly defined areas of doubt and uncertainty!', and threaten: 'You'll have a national Philosopher's strike on your hands!' ...)
-
Rusho, Y.; Raban, R.R.: Hands on : information experiences as sources of value (2020)
0.05
0.05100803 = product of:
0.20403212 = sum of:
0.20403212 = weight(_text_:hands in 872) [ClassicSimilarity], result of:
0.20403212 = score(doc=872,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.42211097 = fieldWeight in 872, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.0390625 = fieldNorm(doc=872)
0.25 = coord(1/4)
-
Chen, H.; Chung, Y.-M.; Ramsey, M.; Yang, C.C.: ¬A smart itsy bitsy spider for the Web (1998)
0.04
0.04339168 = product of:
0.17356671 = sum of:
0.17356671 = weight(_text_:java in 1871) [ClassicSimilarity], result of:
0.17356671 = score(doc=1871,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.38932347 = fieldWeight in 1871, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0390625 = fieldNorm(doc=1871)
0.25 = coord(1/4)
- Abstract
- As part of the ongoing Illinois Digital Library Initiative project, this research proposes an intelligent agent approach to Web searching. In this experiment, we developed two Web personal spiders based on best-first search and genetic algorithm techniques, respectively. These personal spiders can dynamically take a user's selected starting homepages and search for the most closely related homepages in the Web, based on the links and keyword indexing. A graphical, dynamic, Java-based interface was developed and is available for Web access. A system architecture for implementing such an agent-spider is presented, followed by detailed discussions of benchmark testing and user evaluation results. In benchmark testing, although the genetic algorithm spider did not outperform the best-first search spider, we found both results to be comparable and complementary. In user evaluation, the genetic algorithm spider obtained significantly higher recall value than that of the best-first search spider. However, their precision values were not statistically different. The mutation process introduced in genetic algorithms allows users to find other potential relevant homepages that cannot be explored via a conventional local search process. In addition, we found the Java-based interface to be a necessary component for design of a truly interactive and dynamic Web agent.
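As a sketch of the best-first strategy (not the authors' code; fetching is stubbed and the relevance heuristic is invented), the frontier is simply a priority queue ordered by estimated relevance:

```java
import java.util.*;

// Best-first Web spider sketch: always expand the highest-scoring page
// next. Real code would download pages and extract links; here a toy
// link graph and a URL-based heuristic stand in for both.
public class BestFirstSpiderSketch {
    static double relevance(String url, Set<String> keywords) {
        double score = 0;
        for (String k : keywords) if (url.contains(k)) score += 1;
        return score;
    }

    public static void main(String[] args) {
        Set<String> keywords = Set.of("library", "digital");
        Map<String, List<String>> linkGraph = Map.of( // toy Web
                "http://a.example/", List.of("http://b.example/library", "http://c.example/misc"),
                "http://b.example/library", List.of("http://d.example/digital-library"),
                "http://c.example/misc", List.of(),
                "http://d.example/digital-library", List.of());

        PriorityQueue<String> frontier = new PriorityQueue<>(
                Comparator.comparingDouble((String u) -> -relevance(u, keywords)));
        Set<String> visited = new HashSet<>();
        frontier.add("http://a.example/");
        while (!frontier.isEmpty()) {
            String url = frontier.poll();
            if (!visited.add(url)) continue; // skip already-crawled pages
            System.out.printf("%.0f  %s%n", relevance(url, keywords), url);
            frontier.addAll(linkGraph.getOrDefault(url, List.of()));
        }
    }
}
```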
-
Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006)
0.04
0.04339168 = product of:
0.17356671 = sum of:
0.17356671 = weight(_text_:java in 272) [ClassicSimilarity], result of:
0.17356671 = score(doc=272,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.38932347 = fieldWeight in 272, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0390625 = fieldNorm(doc=272)
0.25 = coord(1/4)
- Abstract
- This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases. A research front is defined as an emergent and transient grouping of concepts and underlying research issues. The intellectual base of a research front is its citation and co-citation footprint in scientific literature - an evolving network of scientific publications cited by research-front concepts. Kleinberg's (2002) burst-detection algorithm is adapted to identify emergent research-front concepts. Freeman's (1979) betweenness centrality metric is used to highlight potential pivotal points of paradigm shift over time. Two complementary visualization views are designed and implemented: cluster views and time-zone views. The contributions of the approach are that (a) the nature of an intellectual base is algorithmically and temporally identified by emergent research-front terms, (b) the value of a co-citation cluster is explicitly interpreted in terms of research-front concepts, and (c) visually prominent and algorithmically detected pivotal points substantially reduce the complexity of a visualized network. The modeling and visualization process is implemented in CiteSpace II, a Java application, and applied to the analysis of two research fields: mass extinction (1981-2004) and terrorism (1990-2003). Prominent trends and pivotal points in visualized networks were verified in collaboration with domain experts, who are the authors of pivotal-point articles. Practical implications of the work are discussed. A number of challenges and opportunities for future studies are identified.
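Freeman's betweenness centrality, which CiteSpace II uses to flag potential pivotal points, can be computed with Brandes' algorithm. A compact sketch over a toy unweighted network (an illustration of the metric, not CiteSpace's implementation):

```java
import java.util.*;

// Betweenness centrality for an undirected, unweighted graph
// (Brandes' algorithm). The adjacency list below is toy data.
public class BetweennessSketch {
    public static double[] betweenness(List<List<Integer>> adj) {
        int n = adj.size();
        double[] cb = new double[n];
        for (int s = 0; s < n; s++) {
            Deque<Integer> stack = new ArrayDeque<>();
            List<List<Integer>> pred = new ArrayList<>();
            for (int i = 0; i < n; i++) pred.add(new ArrayList<>());
            double[] sigma = new double[n]; sigma[s] = 1;   // path counts
            int[] dist = new int[n]; Arrays.fill(dist, -1); dist[s] = 0;
            Queue<Integer> queue = new ArrayDeque<>(List.of(s));
            while (!queue.isEmpty()) {                      // BFS from s
                int v = queue.poll(); stack.push(v);
                for (int w : adj.get(v)) {
                    if (dist[w] < 0) { dist[w] = dist[v] + 1; queue.add(w); }
                    if (dist[w] == dist[v] + 1) { sigma[w] += sigma[v]; pred.get(w).add(v); }
                }
            }
            double[] delta = new double[n];
            while (!stack.isEmpty()) {                      // back-propagation
                int w = stack.pop();
                for (int v : pred.get(w))
                    delta[v] += sigma[v] / sigma[w] * (1 + delta[w]);
                if (w != s) cb[w] += delta[w];
            }
        }
        for (int i = 0; i < n; i++) cb[i] /= 2;             // undirected graph
        return cb;
    }

    public static void main(String[] args) {
        // Path graph 0-1-2: node 1 lies on the only shortest path 0..2.
        List<List<Integer>> adj = List.of(List.of(1), List.of(0, 2), List.of(1));
        System.out.println(Arrays.toString(betweenness(adj))); // [0.0, 1.0, 0.0]
    }
}
```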
-
Eddings, J.: How the Internet works (1994)
0.04
0.04339168 = product of:
0.17356671 = sum of:
0.17356671 = weight(_text_:java in 2514) [ClassicSimilarity], result of:
0.17356671 = score(doc=2514,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.38932347 = fieldWeight in 2514, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0390625 = fieldNorm(doc=2514)
0.25 = coord(1/4)
- Abstract
- How the Internet Works promises "an exciting visual journey down the highways and byways of the Internet," and it delivers. The book's high-quality graphics and simple, succinct text make it the ideal book for beginners; however, it still has much to offer for Net vets. This book is jam-packed with cool ways to visualize how the Net works. The first section visually explores how TCP/IP, Winsock, and other Net connectivity mysteries work. This section also helps you understand how e-mail addresses and domains work, what file types mean, and how information travels across the Net. Part 2 unravels the Net's underlying architecture, including good information on how routers work and what is meant by client/server architecture. The third section covers your own connection to the Net through an Internet Service Provider (ISP), and how ISDN, cable modems, and Web TV work. Part 4 discusses e-mail, spam, newsgroups, Internet Relay Chat (IRC), and Net phone calls. In part 5, you'll find out how other Net tools, such as gopher, telnet, WAIS, and FTP, can enhance your Net experience. The sixth section takes on the World Wide Web, including everything from how HTML works to image maps and forms. Part 7 looks at other Web features such as push technology, Java, ActiveX, and CGI scripting, while part 8 deals with multimedia on the Net. Part 9 shows you what intranets are and covers groupware, and shopping and searching the Net. The book wraps up with part 10, a chapter on Net security that covers firewalls, viruses, cookies, and other Web tracking devices, plus cryptography and parental controls.
-
Wu, D.; Shi, J.: Classical music recording ontology used in a library catalog (2016)
0.04
0.04339168 = product of:
0.17356671 = sum of:
0.17356671 = weight(_text_:java in 4179) [ClassicSimilarity], result of:
0.17356671 = score(doc=4179,freq=2.0), product of:
0.4458162 = queryWeight, product of:
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.06325871 = queryNorm
0.38932347 = fieldWeight in 4179, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.0475073 = idf(docFreq=104, maxDocs=44421)
0.0390625 = fieldNorm(doc=4179)
0.25 = coord(1/4)
- Abstract
- In order to improve the organization of classical music information resources, we constructed a classical music recording ontology, on top of which we then designed an online classical music catalog. Our construction of the classical music recording ontology consisted of three steps: identifying the purpose, analyzing the ontology, and encoding the ontology. We identified the main classes and properties of the domain by investigating classical music recording resources and users' information needs. We implemented the ontology in the Web Ontology Language (OWL) using five steps: transforming the properties, encoding the transformed properties, defining ranges of the properties, constructing individuals, and standardizing the ontology. In constructing the online catalog, we first designed the structure and functions of the catalog based on investigations into users' information needs and information-seeking behaviors. Then we extracted classes and properties of the ontology using the Apache Jena application programming interface (API), and constructed a catalog in the Java environment. The catalog provides a hierarchical main page (built using the Functional Requirements for Bibliographic Records (FRBR) model), a classical music information network and integrated information service; this combination of features greatly eases the task of finding classical music recordings and more information about classical music.
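The class-extraction step the authors describe can be sketched with the Jena ontology API; the file name and output handling here are placeholders, not the catalog's actual code:

```java
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.util.iterator.ExtendedIterator;

// Sketch of loading an OWL ontology with Apache Jena and listing its
// classes; the real catalog also reads properties and individuals.
public class OntologyExtractSketch {
    public static void main(String[] args) {
        OntModel model = ModelFactory.createOntologyModel();
        model.read("classical-music-recording.owl"); // placeholder path
        ExtendedIterator<OntClass> classes = model.listClasses();
        while (classes.hasNext()) {
            OntClass c = classes.next();
            if (c.getLocalName() != null) System.out.println(c.getLocalName());
        }
    }
}
```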
-
Fogg, B.J.: Persuasive technology : using computers to change what we think and do (2003)
0.04
0.040806424 = product of:
0.1632257 = sum of:
0.1632257 = weight(_text_:hands in 2877) [ClassicSimilarity], result of:
0.1632257 = score(doc=2877,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.33768877 = fieldWeight in 2877, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.03125 = fieldNorm(doc=2877)
0.25 = coord(1/4)
- Footnote
- Rez. in: JASIS 54(2003) no.12, S.1168-1170 (A.D. Petrou): "Computers as persuasive technology, or Captology, is the topic of the ten chapters in B.J. Fogg's book. As the author states, the main focus of Captology is not on computer mediated communications (CMC), but rather on human computer interaction (HCI). Furthermore, according to the author, "captology focuses on the design, research, and analysis of interactive computing products created for the purpose of changing people's attitudes or behaviors. It describes the areas where technology and persuasion overlap" (p. 5). Each of the book's chapters presents theories, arguments, and examples to convince readers of the large and growing part that computing products play in persuading people to change their behaviors for the better in a variety of areas. Currently, some of the areas for which B.J. Fogg considers computing products as persuasive or influential in motivating individuals to change their behaviors include quitting smoking, practicing safer sex, eating healthier, staying in shape, improving study habits, and helping doctors develop richer empathy for the pain experienced by their patients. In the wrong hands, however, B.J. Fogg warns, the computer's power to persuade can be enlisted to support unethical social ends and to serve corporate interests that deliver no real benefits to consumers. While Captology's concerns about the ethical side of computing products as persuasive tools are summarized in a chapter on ethics, they are also incorporated as short reminders throughout the book's ten chapters. A strength of the book, however, is that the author does not take it for granted that readers will agree with him on the persuasive power of computers. In addition to the technical and social theories he articulates, B.J. Fogg presents empirical evidence from his own research and also provides many examples of computing products designed to persuade people to change their behaviors. Computers can be designed to be highly interactive and to include many modalities for persuasion to match different situations and human personalities, such as submissive or dominant. Furthermore, computers may allow for anonymity in use and can be ubiquitous. ... Yet, there is no denying the effectiveness of the arguments and empirical data put forth by B.J. Fogg about Captology's power to explain how a merging of technology with techniques of persuasion can help change human behavior for the better. The widespread influence of computing products and a need to ethically manage such influence over human behavior should command our attention as users and researchers and most importantly as designers and producers of computing products."
-
Pettee, J.: ¬The subject approach to books and the development of the dictionary catalog (1985)
0.04
0.040806424 = product of:
0.1632257 = sum of:
0.1632257 = weight(_text_:hands in 4624) [ClassicSimilarity], result of:
0.1632257 = score(doc=4624,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.33768877 = fieldWeight in 4624, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.03125 = fieldNorm(doc=4624)
0.25 = coord(1/4)
- Abstract
- Julia Pettee's contribution to classification theory came about as part of her work on subject headings. Pettee (1872-1967) was for many years librarian of the Union Theological Seminary in New York and was best known for the classification system she developed for the seminary and as the author of the book Subject Headings. She was one of the first to call attention to the fact that there was a classification system in subject headings. It was, as she put it, "completely concealed when scattered through the alphabetical sequence" (p. 98). On the other hand, she recognized that an index entry was a pointing device and existed to show users specific terms. Index terms, unlike subject headings, could be manipulated, inverted, repeated, and stated in as many words as might be desired. The subject heading, she reiterated, had in it "some idea of classification," but was designed to pull together like material and, unlike the index term, would have limited capability for supplying access by way of synonyms, catchwords, or other associative forms. It is interesting that she also thought of the subject heading in context as forming a three-dimensional system. Logically this is the case whenever one attempts to reach beyond the conventional hierarchy as described on a plane surface, and, in fact, thought out as if the classification were on a plane surface. Pettee described this dimension variously as names "reaching up and over the surface ... hands clasp[ing] in the air" from an individual term (pp. 99-100). Or, in another context, as the mapping of "the many third-dimensional criss-crossing relationships of subject headings." (p. 103) Investigations following Pettee's insight have shown the nature and the degree of the classification latent in subject headings and also in the cross-references of all indexing systems using cross-references of the associative type ("see also" or equivalent terminology). More importantly, study of this type of connection has revealed jumps in logic and meaning caused by homographs or homonyms and resulting in false connections in classification. Standardized rules for making thesauri have prevented some of the more glaring non sequiturs, but much more still needs to be done. The whole area of "related terms", for example, needs to be brought under control, especially in terms of classification mapping.
-
Pettee, J.: Public libraries and libraries as purveyors of information (1985)
0.04
0.040806424 = product of:
0.1632257 = sum of:
0.1632257 = weight(_text_:hands in 4630) [ClassicSimilarity], result of:
0.1632257 = score(doc=4630,freq=2.0), product of:
0.48336133 = queryWeight, product of:
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.06325871 = queryNorm
0.33768877 = fieldWeight in 4630, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.6410246 = idf(docFreq=57, maxDocs=44421)
0.03125 = fieldNorm(doc=4630)
0.25 = coord(1/4)
- Abstract
- Julia Pettee's contribution to classification theory came about as part of her work on subject headings. Pettee (1872-1967) was for many years librarian of the Union Theological Seminary in New York and was best known for the classification system she developed for the seminary and as the author of the book Subject Headings. She was one of the first to call attention to the fact that there was a classification system in subject headings. It was, as she put it, "completely concealed when scattered through the alphabetical sequence" (p. 98). On the other hand, she recognized that an index entry was a pointing device and existed to show users specific terms. Index terms, unlike subject headings, could be manipulated, inverted, repeated, and stated in as many words as might be desired. The subject heading, she reiterated, had in it "some idea of classification," but was designed to pull together like material and, unlike the index term, would have limited capability for supplying access by way of synonyms, catchwords, or other associative forms. It is interesting that she also thought of the subject heading in context as forming a three-dimensional system. Logically this is the case whenever one attempts to reach beyond the conventional hierarchy as described on a plane surface, and, in fact, thought out as if the classification were on a plane surface. Pettee described this dimension variously as names "reaching up and over the surface ... hands clasp[ing] in the air" from an individual term (pp. 99-100). Or, in another context, as the mapping of "the many third-dimensional criss-crossing relationships of subject headings." (p. 103) Investigations following Pettee's insight have shown the nature and the degree of the classification latent in subject headings and also in the cross-references of all indexing systems using cross-references of the associative type ("see also" or equivalent terminology). More importantly, study of this type of connection has revealed jumps in logic and meaning caused by homographs or homonyms and resulting in false connections in classification. Standardized rules for making thesauri have prevented some of the more glaring non sequiturs, but much more still needs to be done. The whole area of "related terms", for example, needs to be brought under control, especially in terms of classification mapping.