Search (1229 results, page 4 of 62)

  • Filter: language_ss:"e"
  1. Tang, X.-B.; Wei Wei, G.-C.L.; Zhu, J.: ¬An inference model of medical insurance fraud detection : based on ontology and SWRL (2017) 0.05
    0.05083697 = product of:
      0.20334788 = sum of:
        0.20334788 = weight(_text_:java in 4615) [ClassicSimilarity], result of:
          0.20334788 = score(doc=4615,freq=2.0), product of:
            0.43525907 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061760712 = queryNorm
            0.46718815 = fieldWeight in 4615, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=4615)
      0.25 = coord(1/4)
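    The indented breakdowns attached to each hit are Lucene "explain" traces labelled ClassicSimilarity. They are consistent with Lucene's classic TF-IDF scoring, in which each matching term contributes a query weight times a field weight, scaled by a coordination factor; written out (per-term boosts omitted):

      \mathrm{score}(q,d) \;=\; \mathrm{coord}(q,d)\cdot\sum_{t\in q}
          \underbrace{\mathrm{idf}(t)\,\mathrm{queryNorm}}_{\text{queryWeight}}\cdot
          \underbrace{\sqrt{\mathrm{freq}(t,d)}\,\mathrm{idf}(t)\,\mathrm{fieldNorm}(d)}_{\text{fieldWeight}},
      \qquad
      \mathrm{idf}(t) \;=\; 1+\ln\frac{\mathrm{maxDocs}}{\mathrm{docFreq}(t)+1}

    Plugging in the figures above: idf = 1 + ln(44421/105) ≈ 7.0475, fieldWeight = sqrt(2) · 7.0475 · 0.046875 ≈ 0.4672, and score ≈ 0.25 · (7.0475 · 0.06176) · 0.4672 ≈ 0.0508, which reproduces the displayed value.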
    
    Abstract
    Medical insurance fraud is common in many countries' medical insurance systems and represents a serious threat to the insurance funds and the benefits of patients. In this paper, we present an inference model of medical insurance fraud detection, based on a medical detection domain ontology that incorporates the knowledge base provided by the Medical Terminology, NKIMed, and Chinese Library Classification systems. Through analyzing the behaviors of irregular and fraudulent medical services, we defined the scope of the medical domain ontology relevant to the task and built the ontology about medical sciences and medical service behaviors. The ontology then utilizes Semantic Web Rule Language (SWRL) and Java Expert System Shell (JESS) to detect medical irregularities and mine implicit knowledge. The system can be used to improve the management of medical insurance risks.
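    As a rough illustration of the kind of rule the abstract describes (SWRL conditions over ontology classes, executed by a rule engine such as JESS), the sketch below hard-codes one hypothetical check in plain Java. The rule, the class names, and the incompatibility table are invented for illustration; they are not taken from the authors' ontology or rule base.

      // Hypothetical SWRL-style rule, re-expressed over toy records:
      //   Claim(?c) ^ hasDiagnosis(?c, ?d) ^ hasPrescribedDrug(?c, ?m) ^ notIndicatedFor(?m, ?d)
      //       -> SuspiciousClaim(?c)
      import java.util.*;

      public class FraudRuleSketch {
          record Claim(String id, String diagnosis, String prescribedDrug) {}

          // Toy "notIndicatedFor" relation: drug -> diagnoses it is not indicated for.
          static final Map<String, Set<String>> NOT_INDICATED_FOR = Map.of(
              "drugX", Set.of("common cold"),
              "drugY", Set.of("sprained ankle"));

          static boolean isSuspicious(Claim c) {
              return NOT_INDICATED_FOR.getOrDefault(c.prescribedDrug(), Set.of())
                                      .contains(c.diagnosis());
          }

          public static void main(String[] args) {
              List<Claim> claims = List.of(
                  new Claim("c1", "common cold", "drugX"),
                  new Claim("c2", "pneumonia", "drugX"));
              for (Claim c : claims)
                  System.out.println(c.id() + (isSuspicious(c) ? " -> flag for review" : " -> ok"));
          }
      }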
  2. Paulos, J.A.: I think, therefore I laugh (1985) 0.05
    0.048212312 = product of:
      0.19284925 = sum of:
        0.19284925 = weight(_text_:heard in 6746) [ClassicSimilarity], result of:
          0.19284925 = score(doc=6746,freq=2.0), product of:
            0.51913774 = queryWeight, product of:
              8.405631 = idf(docFreq=26, maxDocs=44421)
              0.061760712 = queryNorm
            0.37147993 = fieldWeight in 6746, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.405631 = idf(docFreq=26, maxDocs=44421)
              0.03125 = fieldNorm(doc=6746)
      0.25 = coord(1/4)
    
    Abstract
    Have you heard about the irritable husband who, seeking to improve his marriage, was told by a doctor to take a calming 10-mile walk each evening, and call back after a month? "Things are fine, I'm very relaxed," the man reported the next month, "but I'm 300 miles from home." Assumptions, suppositions, and theories are necessary to do science or to "do" life but, as this story points out, they can be misleading when made unthinkingly (as they often are). In I Think, Therefore I Laugh, John Paulos makes use of a great variety of jokes and stories in providing a profound and witty account of some of the most basic riddles of modern analytic philosophy. The Austrian philosopher Ludwig Wittgenstein once remarked that "a good and serious philosophical work could be written that would consist entirely of jokes" - one understands the philosophical point only if one gets the joke. Paulos contends that humor and philosophy resonate at an even deeper level (both evince a strong penchant for debunking, for example), and proves his contention through parables, puzzles, and paradoxes dealing with topics ranging from scientific induction to the distinction between intentional and causal explanations. He engages Groucho Marx and Bertrand Russell in a spirited dialogue that flows from their mutual concern with self-reference and their tendency toward skepticism and anarchist feelings. And he links Wittgenstein himself with Lewis Carroll, both having been preoccupied with nonsense, logical confusion, and language puzzles. To enjoy I Think, Therefore I Laugh requires no advanced knowledge of philosophy, but a hearty appreciation for wit and humor is a must. This informal but brilliant book is not only a lucid introduction to some of philosophy's most perplexing problems, but a thoroughly amusing and entertaining show as well. The dialogue between Groucho Marx and Bertrand Russell is itself worth the price of admission.
  3. Chen, H.; Chung, Y.-M.; Ramsey, M.; Yang, C.C.: ¬A smart itsy bitsy spider for the Web (1998) 0.04
    0.042364143 = product of:
      0.16945657 = sum of:
        0.16945657 = weight(_text_:java in 1871) [ClassicSimilarity], result of:
          0.16945657 = score(doc=1871,freq=2.0), product of:
            0.43525907 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061760712 = queryNorm
            0.38932347 = fieldWeight in 1871, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0390625 = fieldNorm(doc=1871)
      0.25 = coord(1/4)
    
    Abstract
    As part of the ongoing Illinois Digital Library Initiative project, this research proposes an intelligent agent approach to Web searching. In this experiment, we developed two Web personal spiders based on best first search and genetic algorithm techniques, respectively. These personal spiders can dynamically take a user's selected starting homepages and search for the most closely related homepages on the Web, based on the links and keyword indexing. A graphical, dynamic, Java-based interface was developed and is available for Web access. A system architecture for implementing such an agent-spider is presented, followed by detailed discussions of benchmark testing and user evaluation results. In benchmark testing, although the genetic algorithm spider did not outperform the best first search spider, we found both results to be comparable and complementary. In user evaluation, the genetic algorithm spider obtained significantly higher recall value than that of the best first search spider. However, their precision values were not statistically different. The mutation process introduced in genetic algorithms allows users to find other potentially relevant homepages that cannot be explored via a conventional local search process. In addition, we found the Java-based interface to be a necessary component for design of a truly interactive and dynamic Web agent.
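    A minimal sketch of the best-first-search crawling strategy described above, assuming a toy in-memory "web"; the pages, links, and keyword-overlap scoring are stand-ins for illustration, not the authors' spider:

      import java.util.*;

      public class BestFirstSpiderSketch {
          // Toy web: page -> outgoing links, and page -> text (placeholders).
          static final Map<String, List<String>> LINKS = Map.of(
              "start", List.of("a", "b"),
              "a", List.of("c"),
              "b", List.of("c", "d"),
              "c", List.of(),
              "d", List.of());
          static final Map<String, String> TEXT = Map.of(
              "start", "digital library agents",
              "a", "genetic algorithms for search agents",
              "b", "cooking recipes",
              "c", "library indexing and keyword search",
              "d", "sports news");

          // Score a page by how many query keywords it contains.
          static int score(String page, Set<String> query) {
              int s = 0;
              for (String w : TEXT.get(page).split("\\s+")) if (query.contains(w)) s++;
              return s;
          }

          public static void main(String[] args) {
              Set<String> query = Set.of("library", "agents", "search");
              // Frontier ordered by score: always expand the most promising page next.
              PriorityQueue<String> frontier = new PriorityQueue<>(
                  Comparator.comparingInt((String p) -> score(p, query)).reversed());
              Set<String> visited = new HashSet<>();
              frontier.add("start");
              while (!frontier.isEmpty()) {
                  String page = frontier.poll();
                  if (!visited.add(page)) continue;
                  System.out.println("visit " + page + " (score " + score(page, query) + ")");
                  for (String next : LINKS.get(page)) if (!visited.contains(next)) frontier.add(next);
              }
          }
      }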
  4. Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006) 0.04
    0.042364143 = product of:
      0.16945657 = sum of:
        0.16945657 = weight(_text_:java in 272) [ClassicSimilarity], result of:
          0.16945657 = score(doc=272,freq=2.0), product of:
            0.43525907 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061760712 = queryNorm
            0.38932347 = fieldWeight in 272, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0390625 = fieldNorm(doc=272)
      0.25 = coord(1/4)
    
    Abstract
    This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases. A research front is defined as an emergent and transient grouping of concepts and underlying research issues. The intellectual base of a research front is its citation and co-citation footprint in scientific literature - an evolving network of scientific publications cited by research-front concepts. Kleinberg's (2002) burst-detection algorithm is adapted to identify emergent research-front concepts. Freeman's (1979) betweenness centrality metric is used to highlight potential pivotal points of paradigm shift over time. Two complementary visualization views are designed and implemented: cluster views and time-zone views. The contributions of the approach are that (a) the nature of an intellectual base is algorithmically and temporally identified by emergent research-front terms, (b) the value of a co-citation cluster is explicitly interpreted in terms of research-front concepts, and (c) visually prominent and algorithmically detected pivotal points substantially reduce the complexity of a visualized network. The modeling and visualization process is implemented in CiteSpace II, a Java application, and applied to the analysis of two research fields: mass extinction (1981-2004) and terrorism (1990-2003). Prominent trends and pivotal points in visualized networks were verified in collaboration with domain experts, who are the authors of pivotal-point articles. Practical implications of the work are discussed. A number of challenges and opportunities for future studies are identified.
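    As a pointer to the network measure mentioned above, the sketch below is a compact Java rendering of Brandes' algorithm for betweenness centrality on a small unweighted, undirected network; it is the textbook formulation, not CiteSpace's own code.

      import java.util.*;

      public class BetweennessSketch {
          // Brandes' algorithm; graph.get(v) lists the neighbours of node v (0..n-1).
          static double[] betweenness(List<List<Integer>> graph) {
              int n = graph.size();
              double[] cb = new double[n];
              for (int s = 0; s < n; s++) {
                  Deque<Integer> order = new ArrayDeque<>();   // visit order, popped farthest-first
                  List<List<Integer>> pred = new ArrayList<>();
                  for (int i = 0; i < n; i++) pred.add(new ArrayList<>());
                  double[] sigma = new double[n];              // number of shortest paths from s
                  int[] dist = new int[n];
                  Arrays.fill(dist, -1);
                  sigma[s] = 1.0;
                  dist[s] = 0;
                  Queue<Integer> queue = new ArrayDeque<>(List.of(s));
                  while (!queue.isEmpty()) {                   // breadth-first search from s
                      int v = queue.poll();
                      order.push(v);
                      for (int w : graph.get(v)) {
                          if (dist[w] < 0) { dist[w] = dist[v] + 1; queue.add(w); }
                          if (dist[w] == dist[v] + 1) { sigma[w] += sigma[v]; pred.get(w).add(v); }
                      }
                  }
                  double[] delta = new double[n];              // dependency accumulation
                  while (!order.isEmpty()) {
                      int w = order.pop();
                      for (int v : pred.get(w)) delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w]);
                      if (w != s) cb[w] += delta[w];
                  }
              }
              for (int i = 0; i < n; i++) cb[i] /= 2.0;        // undirected: each pair counted twice
              return cb;
          }

          public static void main(String[] args) {
              // Toy path network 0-1-2-3: the two middle nodes are the pivotal ones.
              List<List<Integer>> g = List.of(List.of(1), List.of(0, 2), List.of(1, 3), List.of(2));
              System.out.println(Arrays.toString(betweenness(g)));
          }
      }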
  5. Eddings, J.: How the Internet works (1994) 0.04
    0.042364143 = product of:
      0.16945657 = sum of:
        0.16945657 = weight(_text_:java in 2514) [ClassicSimilarity], result of:
          0.16945657 = score(doc=2514,freq=2.0), product of:
            0.43525907 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061760712 = queryNorm
            0.38932347 = fieldWeight in 2514, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2514)
      0.25 = coord(1/4)
    
    Abstract
    How the Internet Works promises "an exciting visual journey down the highways and byways of the Internet," and it delivers. The book's high-quality graphics and simple, succinct text make it the ideal book for beginners; however, it still has much to offer Net vets. This book is jam-packed with cool ways to visualize how the Net works. The first section visually explores how TCP/IP, Winsock, and other Net connectivity mysteries work. This section also helps you understand how e-mail addresses and domains work, what file types mean, and how information travels across the Net. Part 2 unravels the Net's underlying architecture, including good information on how routers work and what is meant by client/server architecture. The third section covers your own connection to the Net through an Internet Service Provider (ISP), and how ISDN, cable modems, and Web TV work. Part 4 discusses e-mail, spam, newsgroups, Internet Relay Chat (IRC), and Net phone calls. In part 5, you'll find out how other Net tools, such as gopher, telnet, WAIS, and FTP, can enhance your Net experience. The sixth section takes on the World Wide Web, including everything from how HTML works to image maps and forms. Part 7 looks at other Web features such as push technology, Java, ActiveX, and CGI scripting, while part 8 deals with multimedia on the Net. Part 9 shows you what intranets are and covers groupware, shopping, and searching the Net. The book wraps up with part 10, a chapter on Net security that covers firewalls, viruses, cookies, and other Web tracking devices, plus cryptography and parental controls.
  6. Wu, D.; Shi, J.: Classical music recording ontology used in a library catalog (2016) 0.04
    0.042364143 = product of:
      0.16945657 = sum of:
        0.16945657 = weight(_text_:java in 4179) [ClassicSimilarity], result of:
          0.16945657 = score(doc=4179,freq=2.0), product of:
            0.43525907 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061760712 = queryNorm
            0.38932347 = fieldWeight in 4179, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0390625 = fieldNorm(doc=4179)
      0.25 = coord(1/4)
    
    Abstract
    In order to improve the organization of classical music information resources, we constructed a classical music recording ontology, on top of which we then designed an online classical music catalog. Our construction of the classical music recording ontology consisted of three steps: identifying the purpose, analyzing the ontology, and encoding the ontology. We identified the main classes and properties of the domain by investigating classical music recording resources and users' information needs. We implemented the ontology in the Web Ontology Language (OWL) using five steps: transforming the properties, encoding the transformed properties, defining ranges of the properties, constructing individuals, and standardizing the ontology. In constructing the online catalog, we first designed the structure and functions of the catalog based on investigations into users' information needs and information-seeking behaviors. Then we extracted classes and properties of the ontology using the Apache Jena application programming interface (API), and constructed a catalog in the Java environment. The catalog provides a hierarchical main page (built using the Functional Requirements for Bibliographic Records (FRBR) model), a classical music information network and integrated information service; this combination of features greatly eases the task of finding classical music recordings and more information about classical music.
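    A minimal sketch of the Apache Jena calls referred to above for pulling classes and properties out of an OWL file; the file name and printed labels are placeholders, not the catalog's actual code:

      import org.apache.jena.ontology.OntClass;
      import org.apache.jena.ontology.OntModel;
      import org.apache.jena.ontology.OntProperty;
      import org.apache.jena.rdf.model.ModelFactory;

      public class OntologyDumpSketch {
          public static void main(String[] args) {
              // Load the recording ontology (path is a placeholder).
              OntModel model = ModelFactory.createOntologyModel();
              model.read("file:classical-music-recording.owl");

              // List the named classes declared in the ontology.
              for (OntClass c : model.listClasses().toList())
                  if (c.getLocalName() != null) System.out.println("class: " + c.getLocalName());

              // List the properties that relate them.
              for (OntProperty p : model.listAllOntProperties().toList())
                  System.out.println("property: " + p.getLocalName());
          }
      }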
  7. Hill, J.S.: ¬The elephant in the catalog : cataloging animals you can't see or touch (1996) 0.04
    0.042185772 = product of:
      0.16874309 = sum of:
        0.16874309 = weight(_text_:heard in 606) [ClassicSimilarity], result of:
          0.16874309 = score(doc=606,freq=2.0), product of:
            0.51913774 = queryWeight, product of:
              8.405631 = idf(docFreq=26, maxDocs=44421)
              0.061760712 = queryNorm
            0.32504493 = fieldWeight in 606, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.405631 = idf(docFreq=26, maxDocs=44421)
              0.02734375 = fieldNorm(doc=606)
      0.25 = coord(1/4)
    
    Content
    We have all heard the story of the three blind men who were put next to an elephant and asked to describe it. Each of them touched a different part of the beast, and because none of them could examine the entire creature, their resulting description was neither accurate nor useful. Constructing a catalog has always been a bit like describing elephants blind, and rather than getting easier as standardization and new technologies are widely implemented, the emergence of new types of information resources is making the job more difficult. Remotely-accessible electronic information resources are among the newest of cataloging's elephants. Not only is it difficult to see - or touch - the entire animal, but the creature may move or change during or after the description process. The beast is also unwieldy, and the person doing the description may have no control or ownership of it. The temptation is great to say that it is not our business to describe either this particular beast or any other animal that we don't own, and to walk away. Unfortunately, remotely-accessible electronic information resources are increasing in number and importance, and access to information about materials over which the local library has no control is becoming both easier and more common. Library users more and more expect to have access to these resources, so the option of leaving them undescribed and thus excluding them from the catalog is becoming indefensible. In coming to grips with the problem of describing these exotic beasts, it may be helpful to recall how we have dealt with similar challenges in the past, and to remember that the practices, rules, policies, and principles that surround and define the activity of cataloging have always reflected the current concept of what constitutes a library catalog, and that that concept inevitably reflects both the history and role of libraries and available technology. Until relatively recently the primary roles of a catalog were widely recognized to be providing inventory control for a particular collection and serving as a finding aid to that collection only, but in practice, even the most elaborate catalogs never fulfilled even these roles entirely. Whole categories of materials, such as maps, photographs, newspapers, pamphlets, and rare books, were excluded, or at best were described in separate catalogs or finding aids. Information about the contents of individual objects, such as chapters, contributions, and journal articles, was also rarely included in the catalog. A small number of major parts of some works were described through analytic cataloging, and contents of other items were sometimes listed in notes in cataloging records when those parts were considered separable and potentially important in their own right, but because entries were generally not made for items included in contents notes, the lists were primarily useful to those who had already found the main record. Description of the internal contents of information resources was left to reference works such as indexes and bibliographies. Far from being viewed as a flaw or insufficiency in the catalog, this need to use outside finding aids was accepted as the way things were.
  8. Siess, J.A.: ¬The visible librarian : asserting your value with marketing and advocacy (2003) 0.04
    0.036159232 = product of:
      0.14463693 = sum of:
        0.14463693 = weight(_text_:heard in 4098) [ClassicSimilarity], result of:
          0.14463693 = score(doc=4098,freq=2.0), product of:
            0.51913774 = queryWeight, product of:
              8.405631 = idf(docFreq=26, maxDocs=44421)
              0.061760712 = queryNorm
            0.27860993 = fieldWeight in 4098, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.405631 = idf(docFreq=26, maxDocs=44421)
              0.0234375 = fieldNorm(doc=4098)
      0.25 = coord(1/4)
    
    Footnote
    Review in: JASIST 55(2004) no.14, p.1304 (L.A. Ennis): "Written by Judith A. Siess, president of Information Bridges International, Inc., and a recognized expert in one-person librarianship, The Visible Librarian: Asserting Your Value with Marketing and Advocacy is a concise and easy-to-read work on the art of self-promotion. As Siess explains in her introduction, "libraries are no longer a given" (p. xi). Librarians must learn to market themselves and their services to the people who make decisions and practice proactive advocacy to survive. In The Visible Librarian Siess applies proven and practical marketing, customer service, and public relations strategies to libraries and librarians. The Visible Librarian is divided into five chapters. The first chapter, "The Primacy of Customer Service and Other Basics," takes a close look at what it means to provide good customer service. Drawing from a variety of resources, Siess provides the reader with a basic overview of customer service theory and then demonstrates how to put the theory into practice in libraries of all kinds. Siess also stresses the importance of thinking of library users as customers. Further, one of the most compelling points Siess makes in this chapter is that libraries now must compete with other information providers for customers. Libraries are no longer the only place for people to find information and, as Siess argues, good customer service is what will keep people coming back to the library. This is an excellent introductory chapter for this work. Chapter 2, "Doing the Groundwork: Marketing," is a lively discussion on the role energetic and positive marketing can play in promoting libraries and their services. Siess begins by stressing that marketing is vital to all libraries and that librarians must be the ones to do the marketing. The bulk of this chapter focuses on the "Six Ps" of marketing: "the right product at the right price in the right place, promoted in the right way to the right people at the right point in time" (p. 20). Along with the discussion of the six Ps, Siess uses Ranganathan's model to provide the reader with some examples of creative marketing. This chapter also includes a sample customer satisfaction survey and a small section addressing marketing in specialized settings such as corporate, academic, and hospital libraries. One of the best discussions is in chapter three, "Publicity: The Tangibles." Siess broadly defines publicity as "anything written or said, seen or heard about your business that communicates the who, what, why, when, and where ..." (p. 52). Siess begins by providing an outline explaining the different sections of a public relations plan. The chapter then covers publicity basics and provides the reader with a number of tips for conducting publicity, such as keeping things simple and proofreading copy multiple times. Siess closes with examples of forms of publicity such as brochures, newsletters, business cards, and more. One example given by the author is how she uses her e-mail signature file to publicize her book. Overall, this chapter especially is a practical and useful guide for all types of libraries and librarians."
  9. Lee, F.R.: ¬The library, unbound and everywhere (2004) 0.04
    0.036159232 = product of:
      0.14463693 = sum of:
        0.14463693 = weight(_text_:heard in 4099) [ClassicSimilarity], result of:
          0.14463693 = score(doc=4099,freq=2.0), product of:
            0.51913774 = queryWeight, product of:
              8.405631 = idf(docFreq=26, maxDocs=44421)
              0.061760712 = queryNorm
            0.27860993 = fieldWeight in 4099, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.405631 = idf(docFreq=26, maxDocs=44421)
              0.0234375 = fieldNorm(doc=4099)
      0.25 = coord(1/4)
    
    Content
    "When Randall C. Jimerson, the president of the Society of American Archivists, heard of Google's plan to convert certain holdings at Oxford University and at some of the leading research libraries in the United States into digital files, searchable over the Web, he asked, "What are they thinking?" Mr. Jimerson had worries. Who would select the material? How would it be organized and identified to avoid mountains of excerpts taken out of context? Would Google users eventually forgo the experience of holding a book or looking at a historicaldocument? But in recent interviews, many scholars and librarians applauded the announcement by Google, the operator of the world's most popular Internet search service, to digitize some of the collections at Oxford, the University of Michigan, Stanford University, Harvard and the New York Public Library. The plan, in the words of Paul Duguid, information specialist at the University of California at Berkeley, will "blast wide open" the walls around the libraries of world-class institutions.
  10. How classifications work : problems and challenges in an electronic age (1998) 0.04
    0.036159232 = product of:
      0.14463693 = sum of:
        0.14463693 = weight(_text_:heard in 974) [ClassicSimilarity], result of:
          0.14463693 = score(doc=974,freq=2.0), product of:
            0.51913774 = queryWeight, product of:
              8.405631 = idf(docFreq=26, maxDocs=44421)
              0.061760712 = queryNorm
            0.27860993 = fieldWeight in 974, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.405631 = idf(docFreq=26, maxDocs=44421)
              0.0234375 = fieldNorm(doc=974)
      0.25 = coord(1/4)
    
    Abstract
    There are no a priori solutions here: each scheme must be taken in its own context of use. Classifications that work in the real world must meet both challenges simultaneously. For example, in studying the history of the International Classification of Diseases, we noted that the designers of this global classification system must constantly make practical tradeoffs between the two challenges. In order to do justice to the range of subtle vernacular terms used by medical personnel around the world, a huge unwieldy list would have to be developed. In order for physicians and other users to actually employ the system, a much shorter key to filling out forms is the only possible alternative. As the Internet, Web, and various digital libraries burst their boundaries and appear on desktops and in homes, the tension between these two challenges deepens. What do we understand about the interplay between vernacular classifications and the more formal structures underlying search engines, online catalogs, and other electronic guides? For groups of users that may be both global and unknown, what is the meaning of joining the two aspects of classification? What is usability in the context of both the Web and the intimate desktop? The combination of the cultural and the formal in turn produces a third challenge-a moral and ethical one. For large-scale systems, whose voices will be heard and whose silenced? Whose culture will become the taken-for-granted and whose the exotic other? Where makers and users of classification systems do not address these questions, silent inequities prevail. The articles in this collection each address this set of issues from a variety of angles.
  11. Dushay, N.: Visualizing bibliographic metadata : a virtual (book) spine viewer (2004) 0.04
    0.036159232 = product of:
      0.14463693 = sum of:
        0.14463693 = weight(_text_:heard in 2197) [ClassicSimilarity], result of:
          0.14463693 = score(doc=2197,freq=2.0), product of:
            0.51913774 = queryWeight, product of:
              8.405631 = idf(docFreq=26, maxDocs=44421)
              0.061760712 = queryNorm
            0.27860993 = fieldWeight in 2197, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.405631 = idf(docFreq=26, maxDocs=44421)
              0.0234375 = fieldNorm(doc=2197)
      0.25 = coord(1/4)
    
    Abstract
    User interfaces for digital information discovery often require users to click around and read a lot of text in order to find the text they want to read - a process that is often frustrating and tedious. This is exacerbated because of the limited amount of text that can be displayed on a computer screen. To improve the user experience of computer-mediated information discovery, information visualization techniques are applied to the digital library context, while retaining traditional information organization concepts. In this article, the "virtual (book) spine" and the virtual spine viewer are introduced. The virtual spine viewer is an application which allows users to visually explore large information spaces or collections while also allowing users to home in on individual resources of interest. The virtual spine viewer introduced here is an alpha prototype, presented to promote discussion and further work. Information discovery changed radically with the introduction of computerized library access catalogs, the World Wide Web and its search engines, and online bookstores. Yet few instances of these technologies provide a user experience analogous to walking among well-organized, well-stocked bookshelves - which many people find useful as well as pleasurable. To put it another way, many of us have heard or voiced complaints about the paucity of "online browsing" - but what does this really mean? In traditional information spaces such as libraries, often we can move freely among the books and other resources. When we walk among organized, labeled bookshelves, we get a sense of the information space - we take in clues, perhaps unconsciously, as to the scope of the collection, the currency of resources, the frequency of their use, etc. We also enjoy unexpected discoveries such as finding an interesting resource because library staff deliberately located it near similar resources, or because it was mis-shelved, or because we saw it on a bookshelf on the way to the water fountain.
  12. Bertolucci, K.: Happiness is taxonomy : four structures for Snoopy - libraries' method of categorizing and classification (2003) 0.04
    0.036159232 = product of:
      0.14463693 = sum of:
        0.14463693 = weight(_text_:heard in 2212) [ClassicSimilarity], result of:
          0.14463693 = score(doc=2212,freq=2.0), product of:
            0.51913774 = queryWeight, product of:
              8.405631 = idf(docFreq=26, maxDocs=44421)
              0.061760712 = queryNorm
            0.27860993 = fieldWeight in 2212, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.405631 = idf(docFreq=26, maxDocs=44421)
              0.0234375 = fieldNorm(doc=2212)
      0.25 = coord(1/4)
    
    Abstract
    Many of you first heard the word "taxonomy" in junior high science class when you studied Linnaeus and biologic nomenclature. The word originated with the Greek word taxis, meaning "to arrange," and is related to similar arrangement words like taxidermy. The other "tax" word comes from a Latin verb taxare, meaning "to collect money," and is linked to such collecting devices as taxicabs. In the 18th century, Linnaeus arranged all known living things into a hierarchy. Figure 1 shows where dogs fit into the Animalia hierarchy, as identified in the Integrated Taxonomic Information System (ITIS, www.itis.usda.gov). It's a straight drill down from the Animal Kingdom to the species Canis familiaris. For domesticated animals, biology taxonomists rely on categories from animal breeding associations. So I added two facets from the American Kennel Club, "Hounds" and "Beagles," leading us directly to that most articulate and philosophical dog, Snoopy. Linnaeus's straightforward structure continues to serve life scientists after two centuries of development. The whole Animalia taxonomy offers valuable information about the natural relationships of animals. It shows exactly where an organism sits in the vast complexity of life. Snoopy's extended family of coyotes and wolves lives one step above in the genus Canis. Foxes are added at the next step in the family Canidae. Because the Linnaean taxonomy must be scientifically accurate, it must also be flexible. If a new scientific discovery changes our knowledge of life, that change is reflected by taxonomic revision. However, one important grouping remains the same: In 1758, Linnaeus placed humans and apes together in the Primate order, 73 years before Charles Darwin sailed to the Galapagos on the HMS Beagle.
  13. Noerr, P.: ¬The Digital Library Tool Kit (2001) 0.03
    0.033891313 = product of:
      0.13556525 = sum of:
        0.13556525 = weight(_text_:java in 774) [ClassicSimilarity], result of:
          0.13556525 = score(doc=774,freq=2.0), product of:
            0.43525907 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061760712 = queryNorm
            0.31145877 = fieldWeight in 774, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=774)
      0.25 = coord(1/4)
    
    Footnote
    This Digital Library Tool Kit was sponsored by Sun Microsystems, Inc. to address some of the leading questions that academic institutions, public libraries, government agencies, and museums face in trying to develop, manage, and distribute digital content. The evolution of Java programming, digital object standards, Internet access, electronic commerce, and digital media management models is causing educators, CIOs, and librarians to rethink many of their traditional goals and modes of operation. New audiences, continuous access to collections, and enhanced services to user communities are enabled. As one of the leading technology providers to education and library communities, Sun is pleased to present this comprehensive introduction to digital libraries
  14. Herrero-Solana, V.; Moya Anegón, F. de: Graphical Table of Contents (GTOC) for library collections : the application of UDC codes for the subject maps (2003) 0.03
    0.033891313 = product of:
      0.13556525 = sum of:
        0.13556525 = weight(_text_:java in 3758) [ClassicSimilarity], result of:
          0.13556525 = score(doc=3758,freq=2.0), product of:
            0.43525907 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061760712 = queryNorm
            0.31145877 = fieldWeight in 3758, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=3758)
      0.25 = coord(1/4)
    
    Abstract
    The representation of information contents by graphical maps is a widespread, ongoing research topic. In this paper we introduce the application of UDC codes for the development of subject maps. We use the following graphic representation methodologies: 1) Multidimensional scaling (MDS), 2) Cluster analysis, 3) Neural networks (Self-Organizing Map - SOM). Finally, we assess the viability of applying each kind of map. 1. Introduction Advanced techniques for Information Retrieval (IR) currently make up one of the most active areas for research in the field of library and information science. New models representing document content are replacing the classic systems in which the search terms supplied by the user were compared against the indexing terms existing in the inverted files of a database. One of the topics most often studied in recent years is bibliographic browsing, a good complement to querying strategies. Since the 80's, many authors have treated this topic. For example, Ellis establishes that browsing is based on three different types of tasks: identification, familiarization and differentiation (Ellis, 1989). On the other hand, Cove indicates three different browsing types: searching browsing, general purpose browsing and serendipity browsing (Cove, 1988). Marcia Bates presents six different types (Bates, 1989), although the classification of Bawden is the one that really interests us: 1) similarity comparison, 2) structure driven, 3) global vision (Bawden, 1993). Global vision browsing implies the use of graphic representations, which we will call map displays, that allow the user to get a global idea of the nature and structure of the information in the database. In the 90's, several authors worked on this line of research, developing different types of maps. One of the most active was Xia Lin, who introduced the concept of the Graphical Table of Contents (GTOC), comparing the maps to true tables of contents based on graphic representations (Lin, 1996). Lin applies the SOM algorithm to his own personal bibliography, analyzed as a function of the words in the title and abstract fields, and represented in a two-dimensional map (Lin, 1997). Later on, Lin applied this type of map to create website GTOCs through a Java application.
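    To make the third mapping technique concrete, here is a heavily simplified self-organizing-map training loop in Java; the random data, tiny grid, and plain Euclidean distance only illustrate the SOM update rule, not the authors' UDC-based subject maps:

      import java.util.Random;

      public class SomSketch {
          public static void main(String[] args) {
              int gridW = 5, gridH = 5, dim = 8;                  // 5x5 map of 8-dimensional weights
              double[][][] w = new double[gridW][gridH][dim];
              Random rnd = new Random(42);
              for (double[][] row : w) for (double[] cell : row)
                  for (int k = 0; k < dim; k++) cell[k] = rnd.nextDouble();

              // Toy inputs (in the paper these would be document/UDC feature vectors).
              double[][] data = new double[100][dim];
              for (double[] x : data) for (int k = 0; k < dim; k++) x[k] = rnd.nextDouble();

              for (int t = 0; t < 1000; t++) {
                  double alpha = 0.5 * (1.0 - t / 1000.0);        // decaying learning rate
                  double radius = 2.0 * (1.0 - t / 1000.0) + 0.5; // decaying neighbourhood radius
                  double[] x = data[t % data.length];

                  // Find the best-matching unit (smallest squared Euclidean distance).
                  int bi = 0, bj = 0;
                  double best = Double.MAX_VALUE;
                  for (int i = 0; i < gridW; i++) for (int j = 0; j < gridH; j++) {
                      double d = 0;
                      for (int k = 0; k < dim; k++) d += (x[k] - w[i][j][k]) * (x[k] - w[i][j][k]);
                      if (d < best) { best = d; bi = i; bj = j; }
                  }
                  // Pull the winner and its neighbours toward the input.
                  for (int i = 0; i < gridW; i++) for (int j = 0; j < gridH; j++) {
                      double dist2 = (i - bi) * (i - bi) + (j - bj) * (j - bj);
                      double h = Math.exp(-dist2 / (2 * radius * radius));
                      for (int k = 0; k < dim; k++) w[i][j][k] += alpha * h * (x[k] - w[i][j][k]);
                  }
              }
              System.out.println("trained a " + gridW + "x" + gridH + " map");
          }
      }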
  15. Vlachidis, A.; Binding, C.; Tudhope, D.; May, K.: Excavating grey literature : a case study on the rich indexing of archaeological documents via natural language-processing techniques and knowledge-based resources (2010) 0.03
    0.033891313 = product of:
      0.13556525 = sum of:
        0.13556525 = weight(_text_:java in 935) [ClassicSimilarity], result of:
          0.13556525 = score(doc=935,freq=2.0), product of:
            0.43525907 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061760712 = queryNorm
            0.31145877 = fieldWeight in 935, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=935)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - This paper sets out to discuss the use of information extraction (IE), a natural language-processing (NLP) technique to assist "rich" semantic indexing of diverse archaeological text resources. The focus of the research is to direct a semantic-aware "rich" indexing of diverse natural language resources with properties capable of satisfying information retrieval from online publications and datasets associated with the Semantic Technologies for Archaeological Resources (STAR) project. Design/methodology/approach - The paper proposes use of the English Heritage extension (CRM-EH) of the standard core ontology in cultural heritage, CIDOC CRM, and exploitation of domain thesauri resources for driving and enhancing an Ontology-Oriented Information Extraction process. The process of semantic indexing is based on a rule-based Information Extraction technique, which is facilitated by the General Architecture of Text Engineering (GATE) toolkit and expressed by Java Annotation Pattern Engine (JAPE) rules. Findings - Initial results suggest that the combination of information extraction with knowledge resources and standard conceptual models is capable of supporting semantic-aware term indexing. Additional efforts are required for further exploitation of the technique and adoption of formal evaluation methods for assessing the performance of the method in measurable terms. Originality/value - The value of the paper lies in the semantic indexing of 535 unpublished online documents often referred to as "Grey Literature", from the Archaeological Data Service OASIS corpus (Online AccesS to the Index of archaeological investigationS), with respect to the CRM ontological concepts E49.Time Appellation and P19.Physical Object.
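    The JAPE rules mentioned above are specific to GATE, but the general pattern they express - a thesaurus or gazetteer lookup followed by a pattern-driven annotation - can be sketched outside GATE. The tiny Java example below uses an invented two-entry gazetteer and plain regular expressions; it only illustrates that style of rule-based extraction and is not the project's CRM-EH rules.

      import java.util.*;
      import java.util.regex.*;

      public class ExtractionSketch {
          public static void main(String[] args) {
              // Invented gazetteer: surface form -> concept label (placeholders, not CRM-EH).
              Map<String, String> gazetteer = Map.of(
                  "roman", "TimeAppellation",
                  "hearth", "PhysicalObject");

              String text = "A Roman hearth was recorded in the north trench.";

              // Annotate every gazetteer term found in the text, with its character offset.
              for (Map.Entry<String, String> e : gazetteer.entrySet()) {
                  Matcher m = Pattern.compile("\\b" + e.getKey() + "\\b", Pattern.CASE_INSENSITIVE)
                                     .matcher(text);
                  while (m.find())
                      System.out.printf("%-8s -> %s (offset %d)%n", m.group(), e.getValue(), m.start());
              }
          }
      }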
  16. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.03
    0.033891313 = product of:
      0.13556525 = sum of:
        0.13556525 = weight(_text_:java in 709) [ClassicSimilarity], result of:
          0.13556525 = score(doc=709,freq=2.0), product of:
            0.43525907 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061760712 = queryNorm
            0.31145877 = fieldWeight in 709, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=709)
      0.25 = coord(1/4)
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
  17. Piros, A.: Automatic interpretation of complex UDC numbers : towards support for library systems (2015) 0.03
    0.033891313 = product of:
      0.13556525 = sum of:
        0.13556525 = weight(_text_:java in 3301) [ClassicSimilarity], result of:
          0.13556525 = score(doc=3301,freq=2.0), product of:
            0.43525907 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061760712 = queryNorm
            0.31145877 = fieldWeight in 3301, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.03125 = fieldNorm(doc=3301)
      0.25 = coord(1/4)
    
    Abstract
    Analytico-synthetic and faceted classifications, such as the Universal Decimal Classification (UDC), express the content of documents with complex, pre-combined classification codes. Without classification authority control that would help manage and access structured notations, the use of UDC codes in searching and browsing is limited. Existing UDC parsing solutions are usually created for a particular database system or a specific task and are not widely applicable. The approach described in this paper provides a solution by which the analysis and interpretation of UDC notations can be stored in an intermediate format (in this case, XML) by automatic means without any data or information loss. Due to its richness, the output file can be converted into different formats, such as standard mark-up and data exchange formats or simple lists of the recommended entry points of a UDC number. The program can also be used to create authority records containing complex UDC numbers which can be comprehensively analysed in order to be retrieved effectively. The Java program, as well as the corresponding schema definition it employs, is under continuous development. The current version of the interpreter software is now available online for testing purposes at the following web site: http://interpreter-eto.rhcloud.com. The future plan is to implement conversion methods for standard formats and to create standard online interfaces in order to make it possible to use the features of the software as a service. This would allow the algorithm to be employed in both existing and future library systems to analyse UDC numbers without any significant programming effort.
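    As a toy illustration of what "interpreting" a pre-combined UDC number can involve, the sketch below splits a notation at colon relations and pulls parenthesised place auxiliaries into XML-like elements; the sample number and the two rules are deliberately simplistic and are not the parsing logic of the software described above.

      import java.util.regex.*;

      public class UdcSplitSketch {
          public static void main(String[] args) {
              String udc = "821.111(73)-31:791.43";              // sample notation, illustrative only

              System.out.println("<udc notation=\"" + udc + "\">");
              for (String part : udc.split(":")) {               // ':' joins related concepts
                  String main = part.replaceAll("\\([0-9]+\\)", "");
                  Matcher place = Pattern.compile("\\(([0-9]+)\\)").matcher(part);
                  System.out.println("  <component main=\"" + main + "\">");
                  while (place.find())
                      System.out.println("    <place-auxiliary>" + place.group(1) + "</place-auxiliary>");
                  System.out.println("  </component>");
              }
              System.out.println("</udc>");
          }
      }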
  18. Rosenfeld, L.; Morville, P.: Information architecture for the World Wide Web : designing large-scale Web sites (1998) 0.03
    0.029654898 = product of:
      0.11861959 = sum of:
        0.11861959 = weight(_text_:java in 1493) [ClassicSimilarity], result of:
          0.11861959 = score(doc=1493,freq=2.0), product of:
            0.43525907 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061760712 = queryNorm
            0.2725264 = fieldWeight in 1493, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.02734375 = fieldNorm(doc=1493)
      0.25 = coord(1/4)
    
    Abstract
    Some web sites "work" and some don't. Good web site consultants know that you can't just jump in and start writing HTML, the same way you can't build a house by just pouring a foundation and putting up some walls. You need to know who will be using the site, and what they'll be using it for. You need some idea of what you'd like to draw their attention to during their visit. Overall, you need a strong, cohesive vision for the site that makes it both distinctive and usable. Information Architecture for the World Wide Web is about applying the principles of architecture and library science to web site design. Each web site is like a public building, available for tourists and regulars alike to breeze through at their leisure. The job of the architect is to set up the framework for the site to make it comfortable and inviting for people to visit, relax in, and perhaps even return to someday. Most books on web development concentrate either on the aesthetics or the mechanics of the site. This book is about the framework that holds the two together. With this book, you learn how to design web sites and intranets that support growth, management, and ease of use. Special attention is given to the process behind architecting a large, complex site, and to web site hierarchy design and organization. Information Architecture for the World Wide Web is for webmasters, designers, and anyone else involved in building a web site. It's for novice web designers who, from the start, want to avoid the traps that result in poorly designed sites. It's for experienced web designers who have already created sites but realize that something "is missing" from their sites and want to improve them. It's for programmers and administrators who are comfortable with HTML, CGI, and Java but want to understand how to organize their web pages into a cohesive site. The authors are two of the principals of Argus Associates, a web consulting firm. At Argus, they have created information architectures for web sites and intranets of some of the largest companies in the United States, including Chrysler Corporation, Barron's, and Dow Chemical.
  19. Tennant, R.: Library catalogs : the wrong solution (2003) 0.03
    0.025418485 = product of:
      0.10167394 = sum of:
        0.10167394 = weight(_text_:java in 2558) [ClassicSimilarity], result of:
          0.10167394 = score(doc=2558,freq=2.0), product of:
            0.43525907 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061760712 = queryNorm
            0.23359407 = fieldWeight in 2558, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0234375 = fieldNorm(doc=2558)
      0.25 = coord(1/4)
    
    Content
    - User interface hostility - Recently I used the library catalogs of two public libraries, new products from two major library vendors. A link on one catalog said "Knowledge Portal," whatever that was supposed to mean. Clicking on it brought you to two choices: Z39.50 Bibliographic Sites and the World Wide Web. No public library user will have the faintest clue what Z39.50 is. The other catalog launched a Java applet that before long froze my web browser so badly I was forced to shut the program down. Pick a popular book and pretend you are a library patron. Choose three to five libraries at random from the lib-web-cats site (pick catalogs that are not using your system) and attempt to find your book. Try as much as possible to see the system through the eyes of your patrons - a teenager, a retiree, or an older faculty member. You may not always like what you see. Now go back to your own system and try the same thing. - What should the public see? - Our users deserve an information system that helps them find all different kinds of resources - books, articles, web pages, working papers in institutional repositories - and gives them the tools to focus in on what they want. This is not, and should not be, the library catalog. It must communicate with the catalog, but it will also need to interface with other information systems, such as vendor databases and web search engines. What will such a tool look like? We are seeing the beginnings of such a tool in the current offerings of cross-database search tools from a few vendors (see "Cross-Database Search," LJ 10/15/01, p. 29ff). We are in the early stages of developing the kind of robust, user-friendly tool that will be required before we can pull our catalogs from public view. Meanwhile, we can begin by making what we have easier to understand and use."
  20. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.03
    0.025418485 = product of:
      0.10167394 = sum of:
        0.10167394 = weight(_text_:java in 378) [ClassicSimilarity], result of:
          0.10167394 = score(doc=378,freq=2.0), product of:
            0.43525907 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061760712 = queryNorm
            0.23359407 = fieldWeight in 378, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0234375 = fieldNorm(doc=378)
      0.25 = coord(1/4)
    
    Content
    Long Papers * Suggestions for OWL 3, Pascal Hitzler. * BestMap: Context-Aware SKOS Vocabulary Mappings in OWL 2, Rinke Hoekstra. * Mechanisms for Importing Modules, Bijan Parsia, Ulrike Sattler and Thomas Schneider. * A Syntax for Rules in OWL 2, Birte Glimm, Matthew Horridge, Bijan Parsia and Peter Patel-Schneider. * PelletSpatial: A Hybrid RCC-8 and RDF/OWL Reasoning and Query Engine, Markus Stocker and Evren Sirin. * The OWL API: A Java API for Working with OWL 2 Ontologies, Matthew Horridge and Sean Bechhofer. * From Justifications to Proofs for Entailments in OWL, Matthew Horridge, Bijan Parsia and Ulrike Sattler. * A Solution for the Man-Man Problem in the Family History Knowledge Base, Dmitry Tsarkov, Ulrike Sattler and Robert Stevens. * Towards Integrity Constraints in OWL, Evren Sirin and Jiao Tao. * Processing OWL2 ontologies using Thea: An application of logic programming, Vangelis Vassiliadis, Jan Wielemaker and Chris Mungall. * Reasoning in Metamodeling Enabled Ontologies, Nophadol Jekjantuk, Gerd Gröner and Jeff Z. Pan.
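    One of the long papers listed above describes the OWL API. As a hedged illustration of the kind of usage it covers, loading an ontology and listing its classes looks roughly like this (the file name is a placeholder):

      import java.io.File;
      import org.semanticweb.owlapi.apibinding.OWLManager;
      import org.semanticweb.owlapi.model.OWLClass;
      import org.semanticweb.owlapi.model.OWLOntology;
      import org.semanticweb.owlapi.model.OWLOntologyManager;

      public class OwlApiSketch {
          public static void main(String[] args) throws Exception {
              OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
              // Load a local OWL 2 document (placeholder file name).
              OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File("family-history.owl"));
              System.out.println("Loaded: " + ontology.getOntologyID());
              // Walk the named classes in the ontology's signature.
              for (OWLClass cls : ontology.getClassesInSignature())
                  System.out.println("class: " + cls.getIRI());
          }
      }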

Languages

  • d 32
  • m 3
  • nl 1

Types

  • a 790
  • m 311
  • el 106
  • s 94
  • i 21
  • n 17
  • x 12
  • r 10
  • b 7
  • ? 1
  • v 1
