Search (1236 results, page 3 of 62)

  • Active filter: language_ss:"e"
  1. Renehan, E.J.: Science on the Web : a connoisseur's guide to over 500 of the best, most useful, and most fun science Websites (1996) 0.06
    0.05980172 = product of:
      0.23920688 = sum of:
        0.23920688 = weight(_text_:java in 1211) [ClassicSimilarity], result of:
          0.23920688 = score(doc=1211,freq=2.0), product of:
            0.43886918 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.062272966 = queryNorm
            0.5450528 = fieldWeight in 1211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1211)
      0.25 = coord(1/4)
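The explain tree above multiplies out Lucene's ClassicSimilarity factors: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), the query-side weight (idf × queryNorm), the document-side fieldWeight (tf × idf × fieldNorm), and the coord factor. A minimal sketch (not the actual Lucene API) that recomputes the 0.05980172 score from the listed inputs:

```python
import math

def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm, coord):
    """Recompute a single-term Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                             # tf(freq=2.0) = 1.4142135
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq=104, maxDocs=44421) = 7.0475073
    query_weight = idf * query_norm                  # 0.43886918
    field_weight = tf * idf * field_norm             # 0.5450528 = fieldWeight
    return coord * query_weight * field_weight       # 0.25 * 0.23920688 = 0.05980172

# values taken from the explain output for doc 1211
score = classic_similarity(2.0, 104, 44421, 0.062272966, 0.0546875, 0.25)
```

The same formula reproduces the other single-term entries below, e.g. idf(docFreq=34, maxDocs=44421) = 8.146119 for the "hardly" hits.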
    
    Abstract
    Written by the author of the best-selling 1001 really cool Web sites, this fun and informative book enables readers to take full advantage of the Web. More than a mere directory, it identifies and describes the best sites, guiding surfers to such innovations as VRML 3-D and Java. Aside from downloads of Web browsers, Renehan points the way to free compilers and interpreters as well as free online access to major scientific journals.
  2. Friedrich, M.; Schimkat, R.-D.; Küchlin, W.: Information retrieval in distributed environments based on context-aware, proactive documents (2002) 0.06
    0.05980172 = product of:
      0.23920688 = sum of:
        0.23920688 = weight(_text_:java in 4608) [ClassicSimilarity], result of:
          0.23920688 = score(doc=4608,freq=2.0), product of:
            0.43886918 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.062272966 = queryNorm
            0.5450528 = fieldWeight in 4608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=4608)
      0.25 = coord(1/4)
    
    Abstract
    In this position paper we propose a document-centric middleware component called Living Documents to support context-aware information retrieval in distributed communities. A Living Document acts as a micro server for a document which contains computational services, a semi-structured knowledge repository to uniformly store and access context-related information, and finally the document's digital content. Our initial prototype of Living Documents is based on the concept of mobile agents and implemented in Java and XML.
  3. Hancock, B.; Giarlo, M.J.: Moving to XML : Latin texts XML conversion project at the Center for Electronic Texts in the Humanities (2001) 0.06
    0.05980172 = product of:
      0.23920688 = sum of:
        0.23920688 = weight(_text_:java in 5801) [ClassicSimilarity], result of:
          0.23920688 = score(doc=5801,freq=2.0), product of:
            0.43886918 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.062272966 = queryNorm
            0.5450528 = fieldWeight in 5801, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=5801)
      0.25 = coord(1/4)
    
    Abstract
    The delivery of documents on the Web has moved beyond the restrictions of the traditional Web markup language, HTML. HTML's static tags cannot deal with the variety of data formats now beginning to be exchanged between various entities, whether corporate or institutional. XML solves many of the problems by allowing arbitrary tags, which describe the content for a particular audience or group. At the Center for Electronic Texts in the Humanities the Latin texts of Lector Longinquus are being transformed to XML in readiness for the expected new standard. To allow existing browsers to render these texts, a Java program is used to transform the XML to HTML on the fly.
  4. Calishain, T.; Dornfest, R.: Google hacks : 100 industrial-strength tips and tools (2003) 0.06
    0.05963777 = product of:
      0.11927554 = sum of:
        0.08543104 = weight(_text_:java in 134) [ClassicSimilarity], result of:
          0.08543104 = score(doc=134,freq=2.0), product of:
            0.43886918 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.062272966 = queryNorm
            0.19466174 = fieldWeight in 134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.01953125 = fieldNorm(doc=134)
        0.0338445 = weight(_text_:und in 134) [ClassicSimilarity], result of:
          0.0338445 = score(doc=134,freq=32.0), product of:
            0.13811515 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.062272966 = queryNorm
            0.24504554 = fieldWeight in 134, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.01953125 = fieldNorm(doc=134)
      0.5 = coord(2/4)
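For this entry, two of the four query terms match ("java" and "und"), so the per-term weights are summed and scaled by coord(2/4), the fraction of query terms present in the document. A hedged sketch reusing the factors from the explain trees (not an actual Lucene call):

```python
import math

def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """weight(_text_:term) = queryWeight * fieldWeight for one matching term."""
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    return (idf * query_norm) * (tf * idf * field_norm)

# doc 134: "java" (freq=2) and "und" (freq=32) match, out of 4 query terms
java = term_weight(2.0, 104, 44421, 0.062272966, 0.01953125)    # 0.08543104
und = term_weight(32.0, 13141, 44421, 0.062272966, 0.01953125)  # 0.0338445
coord = 2 / 4                        # overlap / maxOverlap
score = coord * (java + und)         # 0.5 * 0.11927554 = 0.05963777
```

Note how the lower fieldNorm (0.01953125, a longer field) pulls the "java" weight well below the 0.2392 seen in the shorter-field entries above.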
    
    Footnote
    Rez. in: nfd - Information Wissenschaft und Praxis 54(2003) H.4, S.253 (D. Lewandowski): ""Google Hacks" is the most comprehensive work to date aimed exclusively at the advanced Google user. Accordingly, this book dispenses with the usual beginners' tips that tend to make search engine books and other guides to Internet research uninteresting for the professional user. In Tara Calishain, an author has been found who has published her own search engine newsletter (www.researchbuzz.com) for nearly five years and has written or co-written several books on the subject of online research. Rael Dornfest is responsible for the programming examples in the book. The first chapter ("Searching Google") offers an introduction to advanced search options and the specifics of the search engine under discussion. The author's approach to research becomes clear here: the best method is to narrow down the number of hits oneself until a manageable set remains that can actually be inspected. To this end, the field-specific search options in Google are explained, tips are given for special searches (for journal archives, technical definitions, etc.), and special functions of the Google Toolbar are described. It is a pleasant surprise that even the experienced Google user still learns something new from this chapter. Its only shortcoming is the failure to look beyond Google itself: it is possible, for example, to restrict a date search in Google more precisely than through the selection field provided in the advanced search, but the solution shown is extremely cumbersome and of only limited use in everyday research. What is missing here is a note that other search engines offer far more convenient means of restriction.
Admittedly, the work under review is a book exclusively about Google, yet even so a pointer to its weaknesses would have been helpful here. In later chapters, alternative search engines are in fact mentioned as solutions to individual problems. The second chapter is devoted to the data collections Google offers alongside classic Web search. These are the directory entries, newsgroups, images, the news search, and the (in this country) less well-known areas Catalogs (search in printed mail-order catalogues), Froogle (a shopping search engine launched this year), and Google Labs (where new functions developed by Google are released for public testing). Having devoted the first two chapters in detail to Google's own offerings, from chapter three onwards the book deals with the possibilities of using Google's data collections for one's own purposes through programming. On the one hand, programs already available on the Web are presented; on the other, the book contains many listings with explanations for programming one's own applications. The interface between the user and the Google database is the Google API ("Application Programming Interface"), which allows registered users to send up to 1,000 queries per day to Google via their own search interface. The results are returned in a form that can be processed by machine. Moreover, the database can be queried in more extensive ways than through the Google search form. Since Google, unlike other search engines, forbids machine querying of its database in its terms of use, the API is the only way to build one's own applications on a Google basis. A separate chapter describes how the API can be used with various programming languages such as PHP, Java, Python, etc.
The examples in the book, however, are all written in Perl, so it seems sensible to work in this language for one's own first experiments as well.
    The sixth chapter contains 26 applications of the Google API, some developed by the book's authors themselves, some put on the Net by other authors. Among the applications highlighted as particularly useful are the TouchGraph Google Browser for visualising results and an application that allows a Google search with proximity operators. It is striking that the more interesting of these applications were not programmed by the book's authors, who confined themselves to simpler applications such as counting hits by top-level domain. Nevertheless, these applications too are for the most part useful. A further chapter presents pranks and games realised with the Google API. Their usefulness is of course questionable, but for completeness' sake they may belong in the book. More interesting, in turn, is the final chapter: "The Webmaster Side of Google". Here site operators are told how Google works, how best to word and place advertisements, which rules to observe if one wants one's pages to rank in Google, and finally also how to remove pages from the Google index again. These remarks are kept very brief and are therefore no substitute for works that deal thoroughly with search engine marketing. Unlike some other books on the subject, however, they are thoroughly serious and promise no miracles regarding the placement of one's own pages in the Google index. "Google Hacks" can also be recommended to those who do not wish to concern themselves with programming via the API. As the most extensive collection to date of tips and techniques for a more targeted use of Google, it is suitable for every advanced Google user.
Some of the hacks may simply have been included so that the total of 100 is reached. Other tips, by contrast, clearly extend the possibilities of research. In this respect the book also helps to compensate a little for Google's query language, which is unfortunately inadequate for professional needs." - Bergische Landeszeitung Nr.207 vom 6.9.2003, S.RAS04A/1 (Rundschau am Sonntag: Netzwelt) by P. Zschunke: Richtig googeln (see there)
  5. Jager, K. de: Obsolescence and stress : a study of the use of books on open shelves at a university library (1994) 0.06
    0.05707108 = product of:
      0.22828431 = sum of:
        0.22828431 = weight(_text_:hardly in 7654) [ClassicSimilarity], result of:
          0.22828431 = score(doc=7654,freq=2.0), product of:
            0.507283 = queryWeight, product of:
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.062272966 = queryNorm
            0.45001376 = fieldWeight in 7654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.0390625 = fieldNorm(doc=7654)
      0.25 = coord(1/4)
    
    Abstract
    Reports the results of a study at the main library of Cape Town University, to investigate complaints about ageing book stock and declining resources and observations that many books were hardly circulating. The study aimed to establish the proportion of the books in the library which were actively circulating and whether the accepted phenomenon of decline in use with age, or obsolescence, would be supported in an environment where a reduction in the purchase of new books was evident. Two separate investigations were conducted: a diachronous study of accession dates, classification numbers and date labels of the open shelf collection; and a synchronous study of books on loan during the period of investigation. The resulting database consisted of 2654 and 1023 records respectively. Evidence suggests that older books do not exhibit the expected characteristics of obsolescence and, while a certain measure of decline of use with age was demonstrated, such decline may be reversed in times of decreasing resources or increasing demands on existing resources. Suggests that the library could develop an informed weeding policy that will enable it to remove from the shelves those materials that have remained unused or little used for 25 years or more.
  6. Weggeman, M.: Knowledge management : the modus operandi for a learning organization (1996) 0.06
    0.05707108 = product of:
      0.22828431 = sum of:
        0.22828431 = weight(_text_:hardly in 1912) [ClassicSimilarity], result of:
          0.22828431 = score(doc=1912,freq=2.0), product of:
            0.507283 = queryWeight, product of:
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.062272966 = queryNorm
            0.45001376 = fieldWeight in 1912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.0390625 = fieldNorm(doc=1912)
      0.25 = coord(1/4)
    
    Abstract
    It has been suggested that the labour production factor is being replaced by the knowledge production factor in the West and in Japan. Knowledge is a person's capacity to carry out a particular task well. Knowledge capacity is thought to be composed of information, experiences, skills and attitude. The product of that capacity can be a combination of deterministic, stochastic or heuristic assertions, causal associations, intuition, predictions and decisions which are relevant to the task at hand. Learning is considered to be the production process by which knowledge is generated. Corresponding management problems arise because the competitive resource knowledge is not owned by the corporation, for it is captured in the heads of autonomous professionals and is therefore hardly controllable in the way traditional production factors such as raw materials, capital and labour are controlled. Knowledge management - i.e. increasing the yield of learning processes in the knowledge value chain - is thus important in organizations in which the collection of knowledge workers has a dominant position. Such organizations are referred to as knowledge-intensive organizations. Some tools intended to improve the mastering of the intangible asset knowledge in those organizations are presented.
  7. Chowdhury, G.G.: Digital libraries and reference services : present and future (2002) 0.06
    0.05707108 = product of:
      0.22828431 = sum of:
        0.22828431 = weight(_text_:hardly in 5466) [ClassicSimilarity], result of:
          0.22828431 = score(doc=5466,freq=2.0), product of:
            0.507283 = queryWeight, product of:
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.062272966 = queryNorm
            0.45001376 = fieldWeight in 5466, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.0390625 = fieldNorm(doc=5466)
      0.25 = coord(1/4)
    
    Abstract
    Reference services have taken a central place in library and information services. They are also regarded as personalised services since in most cases a personal discussion takes place between a user and a reference librarian. Based on this, the librarian points to the sources that are considered to be most appropriate to meet the specific information need(s) of the user. Since the Web and digital libraries are meant for providing direct access to information sources and services without the intervention of human intermediaries, the pertinent question that appears is whether we need reference services in digital libraries, and, if so, how best to offer such services. Current digital libraries focus more on access to, and retrieval of, digital information, and hardly lay emphasis on the service aspects. This may have been caused by the narrower definitions of digital libraries formulated by digital library researchers. This paper looks at the current state of research in personalised information services in digital libraries. It first analyses some representative definitions of digital libraries in order to establish the need for personalised services. It then provides a brief overview of the various online reference and information services currently available on the Web. The paper also briefly reviews digital library research that specifically focuses on the personalisation of digital libraries and the provision of digital reference and information services. Finally, the paper proposes some new areas of research that may be undertaken to improve the provision of personalised information services in digital libraries.
  8. Voorbij, H.: Title keywords and subject descriptors : a comparison of subject search entries of books in the humanities and social sciences (1998) 0.06
    0.05707108 = product of:
      0.22828431 = sum of:
        0.22828431 = weight(_text_:hardly in 5721) [ClassicSimilarity], result of:
          0.22828431 = score(doc=5721,freq=2.0), product of:
            0.507283 = queryWeight, product of:
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.062272966 = queryNorm
            0.45001376 = fieldWeight in 5721, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.0390625 = fieldNorm(doc=5721)
      0.25 = coord(1/4)
    
    Abstract
    In order to compare the value of subject descriptors and title keywords as entries to subject searches, two studies were carried out. Both studies concentrated on monographs in the humanities and social sciences, held by the online public access catalogue of the National Library of the Netherlands. In the first study, a comparison was made by subject librarians between the subject descriptors and the title keywords of 475 records. They could express their opinion on a scale from 1 (descriptor is exactly or almost the same as word in title) to 7 (descriptor does not appear in title at all). It was concluded that 37 per cent of the records are considerably enhanced by a subject descriptor, and 49 per cent slightly or considerably enhanced. In the second study, subject librarians performed subject searches using title keywords and subject descriptors on the same topic. The relative recall amounted to 48 per cent and 86 per cent respectively. Failure analysis revealed the reasons why so many records that were found by subject descriptors were not found by title keywords. First, although completely meaningless titles hardly ever appear, the title of a publication does not always offer sufficient clues for title keyword searching. In those cases, descriptors may enhance the record of a publication. A second and even more important task of subject descriptors is controlling the vocabulary. Many relevant titles cannot be retrieved by title keyword searching because of the wide diversity of ways of expressing a topic. Descriptors take away the burden of vocabulary control from the user.
  9. Markey, K.: Twenty-five years of end-user searching : part 1: research findings (2007) 0.06
    0.05707108 = product of:
      0.22828431 = sum of:
        0.22828431 = weight(_text_:hardly in 163) [ClassicSimilarity], result of:
          0.22828431 = score(doc=163,freq=2.0), product of:
            0.507283 = queryWeight, product of:
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.062272966 = queryNorm
            0.45001376 = fieldWeight in 163, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.0390625 = fieldNorm(doc=163)
      0.25 = coord(1/4)
    
    Abstract
    This is the first part of a two-part article that reviews 25 years of published research findings on end-user searching in online information retrieval (IR) systems. In Part 1 (Markey, 2007), the author seeks to answer the following questions: What characterizes the queries that end users submit to online IR systems? What search features do people use? What features would enable them to improve on the retrievals they have in hand? What features are hardly ever used? What do end users do in response to the system's retrievals? Are end users satisfied with their online searches? Summarizing searches of online IR systems by the search features people use everyday makes information retrieval appear to be a very simplistic one-stop event. In Part 2, the author examines current models of the information retrieval process, demonstrating that information retrieval is much more complex and involves changes in cognition, feelings, and/or events during the information seeking process. She poses a host of new research questions that will further our understanding about end-user searching of online IR systems.
  10. Trentin, G.: Graphic tools for knowledge representation and informal problem-based learning in professional online communities (2007) 0.06
    0.05707108 = product of:
      0.22828431 = sum of:
        0.22828431 = weight(_text_:hardly in 2463) [ClassicSimilarity], result of:
          0.22828431 = score(doc=2463,freq=2.0), product of:
            0.507283 = queryWeight, product of:
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.062272966 = queryNorm
            0.45001376 = fieldWeight in 2463, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2463)
      0.25 = coord(1/4)
    
    Abstract
    The use of graphical representations is very common in information technology and engineering. Although these same tools could be applied effectively in other areas, they are not used because they are hardly known or are completely unheard of. This article aims to discuss the results of the experimentation carried out on graphical approaches to knowledge representation during research, analysis and problem-solving in the health care sector. The experimentation was carried out on conceptual mapping and Petri Nets, developed collaboratively online with the aid of the CMapTool and WoPeD graphic applications. Two distinct professional communities have been involved in the research, both pertaining to the Local Health Units in Tuscany. One community is made up of head physicians and health care managers whilst the other is formed by technical staff from the Department of Nutrition and Food Hygiene. It emerged from the experimentation that concept maps are considered more effective in analyzing the knowledge domain related to the problem to be faced (a description of what it is). On the other hand, Petri Nets are more effective in studying and formalizing its possible solutions (a description of what to do). For the same reason, those involved in the experimentation have proposed the complementary rather than alternative use of the two knowledge representation methods as a support for professional problem-solving.
  11. Bornmann, L.; Daniel, H.D.: What do citation counts measure? : a review of studies on citing behavior (2008) 0.06
    0.05707108 = product of:
      0.22828431 = sum of:
        0.22828431 = weight(_text_:hardly in 2729) [ClassicSimilarity], result of:
          0.22828431 = score(doc=2729,freq=2.0), product of:
            0.507283 = queryWeight, product of:
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.062272966 = queryNorm
            0.45001376 = fieldWeight in 2729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2729)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this paper is to present a narrative review of studies on the citing behavior of scientists, covering mainly research published in the last 15 years. Based on the results of these studies, the paper seeks to answer the question of the extent to which scientists are motivated to cite a publication not only to acknowledge intellectual and cognitive influences of scientific peers, but also for other, possibly non-scientific, reasons. Design/methodology/approach - The review covers research published from the early 1960s up to mid-2005 (approximately 30 studies on citing behavior, reporting results in about 40 publications). Findings - The general tendency of the results of the empirical studies makes it clear that citing behavior is not motivated solely by the wish to acknowledge intellectual and cognitive influences of colleague scientists, since the individual studies reveal also other, in part non-scientific, factors that play a part in the decision to cite. However, the results of the studies must also be deemed scarcely reliable: the studies vary widely in design, and their results can hardly be replicated. Many of the studies have methodological weaknesses. Furthermore, there is evidence that the different motivations of citers are "not so different or 'randomly given' to such an extent that the phenomenon of citation would lose its role as a reliable measure of impact". Originality/value - Given the increasing importance of evaluative bibliometrics in the world of scholarship, the question "What do citation counts measure?" is a particularly relevant and topical issue.
  12. Himma, K.E.: Foundational issues in information ethics (2007) 0.06
    0.05707108 = product of:
      0.22828431 = sum of:
        0.22828431 = weight(_text_:hardly in 3591) [ClassicSimilarity], result of:
          0.22828431 = score(doc=3591,freq=2.0), product of:
            0.507283 = queryWeight, product of:
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.062272966 = queryNorm
            0.45001376 = fieldWeight in 3591, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.0390625 = fieldNorm(doc=3591)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - Information ethics, as is well known, has emerged as an independent area of ethical and philosophical inquiry. There are a number of academic journals that are devoted entirely to the numerous ethical issues that arise in connection with the new information communication technologies; these issues include a host of intellectual property, information privacy, and security issues of concern to librarians and other information professionals. In addition, there are a number of major international conferences devoted to information ethics every year. It would hardly be overstating the matter to say that information ethics is as "hot" an area of theoretical inquiry as medical ethics. The purpose of this paper is to provide an overview on these and related issues. Design/methodology/approach - The paper presents a review of relevant information ethics literature together with the author's assessment of the arguments. Findings - There are issues that are more abstract and basic than the substantive issues with which most information ethics theorizing is concerned. These issues are thought to be "foundational" in the sense that we cannot fully succeed in giving an analysis of the concrete problems of information ethics (e.g. are legal intellectual property rights justifiably protected?) until these issues are adequately addressed. Originality/value - The paper offers a needed survey of foundational issues in information ethics.
  13. Vivanco, L.; Bartolomé, B.; San Martín, M.; Martínez, A.: Bibliometric analysis of the use of the term preembryo in scientific literature (2011) 0.06
    0.05707108 = product of:
      0.22828431 = sum of:
        0.22828431 = weight(_text_:hardly in 454) [ClassicSimilarity], result of:
          0.22828431 = score(doc=454,freq=2.0), product of:
            0.507283 = queryWeight, product of:
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.062272966 = queryNorm
            0.45001376 = fieldWeight in 454, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.0390625 = fieldNorm(doc=454)
      0.25 = coord(1/4)
    
    Abstract
    Our objective was to determine the prevalence of the term preembryo in the scientific literature using a bibliometric study in the Web of Science database. We retrieved data from the Web of Science from 1986 to 2005, covering a range of 20 years since the term was first published. Searches for the terms embryo, blastocyst, preimplantation embryo, and preembryo were performed. Then, Boolean operators were applied to measure associations between terms. Finally, statistical assessments were made to compare the use of each term in the scientific literature, and in specific areas where preembryo is most used. From a total of 93,019 records, 90,888 corresponded to embryo; 8,366 to blastocyst; 2,397 to preimplantation embryo; and 172 to preembryo. The use frequency for preembryo was 2:1000. The term preembryo showed a lower cumulative impact factor (343) in comparison with the others (25,448; 5,530; and 546; respectively) in the highest scored journal category. We conclude that the term preembryo is not used in the scientific community, probably because it is confusing or inadequate. The authors suggest that its use in the scientific literature should be avoided in future publications. The bibliometric analysis confirms this statement. While preembryo is hardly ever used, terms such as preimplantation embryo and blastocyst have gained wide acceptance in publications from the same areas of study.
  14. Breton, P.: ¬The culture of the Internet and the Internet as cult : social fears and religious fantasies (2011) 0.06
    0.05707108 = product of:
      0.22828431 = sum of:
        0.22828431 = weight(_text_:hardly in 950) [ClassicSimilarity], result of:
          0.22828431 = score(doc=950,freq=2.0), product of:
            0.507283 = queryWeight, product of:
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.062272966 = queryNorm
            0.45001376 = fieldWeight in 950, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.0390625 = fieldNorm(doc=950)
      0.25 = coord(1/4)
    
    Abstract
    In this book, French author Philippe Breton looks at the Internet and the culture surrounding it through the lens of its cultural background. Central to his insightful analysis of "the Internet as cult" are Teilhard de Chardin and the New Age, but he also looks at the fears, passions and pathologies of Alan Turing and Norbert Wiener, the imagined worlds of Isaac Asimov, William Gibson, J.G. Ballard and Timothy Leary, the prognostications and confessions of Bill Gates, Nicholas Negroponte and Bill Joy, and the philosophies of Saint-Simon, McLuhan and Pierre Lévy. Dreams of a transparent and unmediated world, a world in which neither time nor space is relevant, a world without violence, without law, without a distinction between the public and the private, Breton contrasts with the reality of propaganda, computer viruses and surveillance, the world in which "sociality in the sense of mutuality disappears in favor of interactivity," where "experience with another and with the world in general is replaced by brief reactionary relations that hardly engage us at all."
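    The score explanation accompanying each entry is Lucene ClassicSimilarity "explain" output (tf-idf with length norms and a coordination factor). As a minimal sketch, the block for entry 14 (term "hardly", doc 950) can be reproduced from the displayed constants; all figures below are taken directly from that explain output:

```python
import math

# Constants from the explain output of entry 14 (term "hardly", doc 950)
freq, doc_freq, max_docs = 2.0, 34, 44421
query_norm = 0.062272966        # normalizes query weights across terms
field_norm = 0.0390625          # stored length norm for this field

tf = math.sqrt(freq)                                  # 1.4142135 = sqrt(termFreq)
idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))     # ~8.146119
query_weight = idf * query_norm                       # ~0.507283
field_weight = tf * idf * field_norm                  # ~0.45001376
score = query_weight * field_weight                   # ~0.22828431
final = score * 0.25                                  # coord(1/4): 1 of 4 query clauses matched
print(final)                                          # ~0.05707108, the entry's score
```

    The same arithmetic, with different fieldNorm and idf constants, yields the scores of every other entry on this page.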
  15. Falchi, F.; Lucchese, C.; Orlando, S.; Perego, R.; Rabitti, F.: Similarity caching in large-scale image retrieval (2012) 0.06
    0.05707108 = product of:
      0.22828431 = sum of:
        0.22828431 = weight(_text_:hardly in 3729) [ClassicSimilarity], result of:
          0.22828431 = score(doc=3729,freq=2.0), product of:
            0.507283 = queryWeight, product of:
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.062272966 = queryNorm
            0.45001376 = fieldWeight in 3729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.0390625 = fieldNorm(doc=3729)
      0.25 = coord(1/4)
    
    Abstract
    Feature-rich data, such as audio-video recordings, digital images, and results of scientific experiments, nowadays constitute the largest fraction of the massive data sets produced daily in the e-society. Content-based similarity search systems working on such data collections are rapidly growing in importance. Unfortunately, similarity search is in general very expensive and hardly scalable. In this paper we study the case of content-based image retrieval (CBIR) systems, and focus on the problem of increasing the throughput of a large-scale CBIR system that indexes a very large collection of digital images. By analyzing the query log of a real CBIR system available on the Web, we characterize the behavior of users who experience a novel search paradigm, where content-based similarity queries and text-based ones can easily be interleaved. We show that locality and self-similarity are present even in the stream of queries submitted to such a CBIR system. According to these results, we propose an effective way to exploit this locality by means of a similarity caching system, which stores recently/frequently submitted queries and their associated results. Unlike a traditional cache, the proposed cache can manage not only exact hits but also approximate ones, which are solved by similarity with respect to the result sets of past queries present in the cache. We evaluate the proposed solution extensively by using the real query stream recorded in the log and a collection of 100 million digital photographs. The high hit ratios and small average approximation errors obtained demonstrate the effectiveness of the approach.
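    The caching scheme this abstract describes, serving not only exact hits but approximate ones resolved by similarity to cached queries, can be sketched minimally as below. This is an illustration assuming Euclidean feature vectors and LRU eviction; the class and parameter names are invented here, not taken from the paper:

```python
from collections import OrderedDict

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class SimilarityCache:
    """LRU cache with approximate hits: a query close enough to a cached
    query (distance <= threshold) reuses that query's result set."""
    def __init__(self, capacity=100, threshold=0.1):
        self.capacity = capacity
        self.threshold = threshold
        self.store = OrderedDict()   # query vector (tuple) -> result list

    def get(self, query):
        q = tuple(query)
        if q in self.store:                 # exact hit
            self.store.move_to_end(q)
            return self.store[q]
        # approximate hit: nearest cached query within the threshold
        best = min(self.store, key=lambda k: euclidean(k, q), default=None)
        if best is not None and euclidean(best, q) <= self.threshold:
            self.store.move_to_end(best)
            return self.store[best]
        return None                         # miss: caller runs the real search

    def put(self, query, results):
        self.store[tuple(query)] = results
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
```

    A miss (None) signals the caller to run the full similarity search against the index and put the fresh result set back into the cache.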
  16. Rajasurya, S.; Muralidharan, T.; Devi, S.; Swamynathan, S.: Semantic information retrieval using ontology in university domain (2012) 0.06
    0.05707108 = product of:
      0.22828431 = sum of:
        0.22828431 = weight(_text_:hardly in 3861) [ClassicSimilarity], result of:
          0.22828431 = score(doc=3861,freq=2.0), product of:
            0.507283 = queryWeight, product of:
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.062272966 = queryNorm
            0.45001376 = fieldWeight in 3861, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.0390625 = fieldNorm(doc=3861)
      0.25 = coord(1/4)
    
    Abstract
    Today's conventional search engines hardly provide content relevant to the user's search query, because the context and semantics of the user's request are not analyzed to the full extent. Hence the need for semantic web search (SWS), an emerging area of web search that combines Natural Language Processing and Artificial Intelligence. The objective of the work done here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as a knowledge base for the information retrieval process. It is not a mere keyword search: it operates one layer above what Google or any other search engine retrieves by analyzing just the keywords. Here the query is analyzed both syntactically and semantically. The developed system retrieves web results more relevant to the user query through keyword expansion. The results obtained are accurate enough to satisfy the request made by the user, and the level of accuracy is enhanced since the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. The Google results are re-ranked and optimized to provide the relevant links; for ranking, an algorithm is applied that fetches more relevant results for the user query.
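    The keyword-expansion step described above, enriching the query with ontology-related terms before retrieval, can be sketched as follows. The toy ontology and the function name are illustrative assumptions, not part of SIEU:

```python
# Toy ontology fragment for a university domain:
# term -> semantically related terms (synonyms, hyponyms)
ONTOLOGY = {
    "professor": ["faculty", "lecturer"],
    "course": ["subject", "module"],
    "exam": ["assessment", "test"],
}

def expand_query(query):
    """Expand each query keyword with its ontology neighbours,
    preserving order and dropping duplicates."""
    expanded = []
    for term in query.lower().split():
        for t in [term] + ONTOLOGY.get(term, []):
            if t not in expanded:
                expanded.append(t)
    return expanded
```

    The expanded term list is then submitted to the underlying keyword engine, and its results re-ranked against the original query.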
  17. Greenstein-Messica, A.; Rokach, L.; Shabtai, A.: Personal-discount sensitivity prediction for mobile coupon conversion optimization (2017) 0.06
    0.05707108 = product of:
      0.22828431 = sum of:
        0.22828431 = weight(_text_:hardly in 4751) [ClassicSimilarity], result of:
          0.22828431 = score(doc=4751,freq=2.0), product of:
            0.507283 = queryWeight, product of:
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.062272966 = queryNorm
            0.45001376 = fieldWeight in 4751, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.0390625 = fieldNorm(doc=4751)
      0.25 = coord(1/4)
    
    Abstract
    The high adoption of smart mobile devices among consumers provides an opportunity for e-commerce retailers to increase their sales by recommending real-time, personalized coupons to consumers that take into account their specific contextual situation. Although context-aware recommender systems (CARS) have been widely analyzed, personalized pricing or discount optimization in recommender systems to improve recommendations' accuracy and commercial KPIs has hardly been researched. This article studies how to model user-item personalized discount sensitivity and incorporate it into a real-time contextual recommender system in such a way that it can be integrated into a commercial service. We propose a novel approach for modeling context-aware user-item personalized discount sensitivity in a sparse data scenario and present a new CARS algorithm that combines coclustering and random forest classification (CBRF) to incorporate the personalized discount sensitivity. We conducted an experimental study with real consumers and mobile discount coupons to evaluate our solution. We compared the CBRF algorithm to the widely used context-aware matrix factorization (CAMF) algorithm. The experimental results suggest that incorporating personalized discount sensitivity significantly improves the consumption prediction accuracy and that the suggested CBRF algorithm provides better prediction results for this use case.
  19. Bilal, D.; Gwizdka, J.: Children's query types and reformulations in Google search (2018) 0.06
    0.05707108 = product of:
      0.22828431 = sum of:
        0.22828431 = weight(_text_:hardly in 47) [ClassicSimilarity], result of:
          0.22828431 = score(doc=47,freq=2.0), product of:
            0.507283 = queryWeight, product of:
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.062272966 = queryNorm
            0.45001376 = fieldWeight in 47, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.0390625 = fieldNorm(doc=47)
      0.25 = coord(1/4)
    
    Abstract
    We investigated the searching behaviors of twenty-four children in grades 6, 7, and 8 (ages 11-13) in finding information on three types of search tasks in Google. Children conducted 72 search sessions and issued 150 queries. Children's phrase- and question-like queries combined were much more prevalent than keyword queries (70% vs. 30%, respectively). Fifty-two percent of the queries were reformulations (33 sessions). We classified children's query reformulation types into five classes based on the taxonomy by Liu et al. (2010). We found that most query reformulations were by Substitution and Specialization, and that children hardly repeated queries. We categorized children's queries by task facets and examined the way they expressed these facets in their query formulations and reformulations. The oldest children tended to target the general topic of search tasks in their queries most frequently, whereas younger children more often expressed one of the two facets. We assessed children's achieved task outcomes using the search task outcomes measure we developed. Children were mostly more successful on the fact-finding and fully self-generated task and partially successful on the research-oriented task. Query type, reformulation type, achieved task outcomes, and expression of task facets varied by task type and grade level. There was no significant effect of query length in words or of the number of queries issued on search task outcomes. The study findings have implications for human intervention, digital literacy, and search task literacy, as well as for system intervention to support children's query formulation and reformulation during interaction with Google.
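    The reformulation classes mentioned above (Substitution, Specialization, etc., after Liu et al., 2010) can be approximated with simple token-set relations between consecutive queries. The sketch below is an illustrative simplification, not the study's actual coding scheme:

```python
def classify_reformulation(prev, curr):
    """Classify a consecutive query pair by the relation of their
    token sets (simplified approximation of the taxonomy classes)."""
    a, b = set(prev.lower().split()), set(curr.lower().split())
    if a == b:
        return "repeat"
    if a < b:
        return "specialization"   # terms were added
    if b < a:
        return "generalization"   # terms were removed
    if a & b:
        return "substitution"     # some terms were swapped
    return "new"                  # no overlap: a fresh query
```

    For example, "dogs" followed by "small dogs" classifies as specialization, while "small dogs" followed by "big dogs" classifies as substitution.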
  20. Zhao, Y.C.; Peng, X.; Liu, Z.; Song, S.; Hansen, P.: Factors that affect asker's pay intention in trilateral payment-based social Q&A platforms : from a benefit and cost perspective (2020) 0.06
    0.05707108 = product of:
      0.22828431 = sum of:
        0.22828431 = weight(_text_:hardly in 812) [ClassicSimilarity], result of:
          0.22828431 = score(doc=812,freq=2.0), product of:
            0.507283 = queryWeight, product of:
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.062272966 = queryNorm
            0.45001376 = fieldWeight in 812, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.146119 = idf(docFreq=34, maxDocs=44421)
              0.0390625 = fieldNorm(doc=812)
      0.25 = coord(1/4)
    
    Abstract
    More and more social Q&A platforms are launching a new business model to monetize online knowledge. This monetizing process introduces a more complicated cost-benefit tradeoff to users, especially regarding askers' concerns. Much of the previous research was conducted in the context of free-based Q&A platforms, which hardly explains the triggers that motivate askers' pay intention. Based on the theories of social exchange and social capital, this study aims to identify and examine the antecedents of askers' pay intention from the perspective of benefit and cost. We empirically test our predictions based on survey data collected from 322 actual askers on a well-known trilateral payment-based social Q&A platform in China. The results of partial least squares (PLS) analysis indicate that besides noneconomic benefits including self-enhancement, social support, and entertainment, financial factors such as cost and benefit have significant influences on the perceived value of using trilateral payment-based Q&A platforms. More importantly, we further identify that the effect of financial benefit is moderated by perceived reciprocity belief, and the effect of perceived value is moderated by perceived trust in answerers. Our findings contribute to the previous literature by proposing a theoretical model that explains askers' behavioral intention, and offer practical implications for payment-based Q&A service providers and participants.

Languages

  • d 32
  • m 3
  • nl 1

Types

  • a 796
  • m 313
  • el 103
  • s 93
  • i 21
  • n 17
  • x 12
  • r 10
  • b 7
  • ? 1
  • v 1
