Search (1260 results, page 4 of 63)

  • Filter: language_ss:"e"
  1. Calishain, T.; Dornfest, R.: Google hacks : 100 industrial-strength tips and tools (2003) 0.06
    0.061196208 = product of:
      0.122392416 = sum of:
        0.0876635 = weight(_text_:java in 134) [ClassicSimilarity], result of:
          0.0876635 = score(doc=134,freq=2.0), product of:
            0.45033762 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06390027 = queryNorm
            0.19466174 = fieldWeight in 134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.01953125 = fieldNorm(doc=134)
        0.03472892 = weight(_text_:und in 134) [ClassicSimilarity], result of:
          0.03472892 = score(doc=134,freq=32.0), product of:
            0.14172435 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06390027 = queryNorm
            0.24504554 = fieldWeight in 134, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.01953125 = fieldNorm(doc=134)
      0.5 = coord(2/4)
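
    The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) formula: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm with tf = √freq; the per-term contributions are summed and scaled by the coordination factor. A minimal Java sketch reproducing the arithmetic for this first result (constants copied from the tree; the helper names are illustrative, not Lucene's actual API):

      public class ClassicScoreSketch {
          // tf = sqrt(term frequency), as in the "tf(freq=2.0)" lines above
          static double tf(double freq) { return Math.sqrt(freq); }

          // One term's contribution: queryWeight * fieldWeight
          static double termScore(double idf, double queryNorm, double freq, double fieldNorm) {
              double queryWeight = idf * queryNorm;            // e.g. 7.0475073 * 0.06390027 = 0.45033762
              double fieldWeight = tf(freq) * idf * fieldNorm; // e.g. 1.4142135 * 7.0475073 * 0.01953125
              return queryWeight * fieldWeight;
          }

          public static void main(String[] args) {
              double queryNorm = 0.06390027;
              double wJava = termScore(7.0475073, queryNorm, 2.0, 0.01953125);  // 0.0876635
              double wUnd  = termScore(2.217899, queryNorm, 32.0, 0.01953125);  // 0.03472892
              double coord = 2.0 / 4.0;                        // 2 of the 4 query terms matched
              System.out.printf("score = %.9f%n", (wJava + wUnd) * coord);     // ~0.061196208
          }
      }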
    
    Footnote
    Review in: nfd - Information Wissenschaft und Praxis 54(2003) no.4, p.253 (D. Lewandowski): "With "Google Hacks" we have the most comprehensive work to date aimed exclusively at the advanced Google user. Accordingly, the book spares the reader the usual beginners' tips that tend to make search engine books and other guides to Internet research uninteresting for the professional user. In Tara Calishain it has found an author who has been publishing her own search engine newsletter (www.researchbuzz.com) for nearly five years and has written or co-written several books on online research. Rael Dornfest is responsible for the programming examples in the book. The first chapter ("Searching Google") gives an insight into advanced search options and the specifics of the search engine under discussion. The author's approach to searching becomes clear here: the best method, she argues, is to narrow down the number of hits oneself until a manageable set remains that can actually be inspected. To this end, Google's field-specific search options are explained, tips for special searches (for journal archives, technical definitions, etc.) are given, and special functions of the Google Toolbar are described. It is a pleasant surprise that even the experienced Google user will still learn something new. The chapter's only shortcoming is its failure to look beyond Google: it is possible, for example, to restrict a date search more precisely than through the selection field offered in Google's advanced search, but the solution shown is decidedly cumbersome and of limited use in everyday research. What is missing here is a note that other search engines offer far more convenient ways of narrowing a search. This is, of course, a book exclusively about Google; nevertheless, a pointer to its weaknesses would have been helpful, and later chapters do mention alternative search engines for solving particular problems. The second chapter is devoted to the data collections Google offers beyond classic web search: the directory entries, newsgroups, images, news search, and the (in Germany) less familiar areas Catalogs (a search of printed mail-order catalogues), Froogle (a shopping search engine launched this year), and Google Labs (where new functions developed by Google are released for public testing). After the first two chapters have dealt at length with Google's own offerings, from chapter three onwards the book turns to the ways Google's data can be put to one's own uses through programming. It presents programs already available on the Web and also contains many annotated listings for programming one's own applications. The interface between the user and the Google database is the Google API ("Application Programming Interface"), which allows registered users to send up to 1,000 queries a day to Google through a search interface of their own. The results are returned in a machine-processable form, and the database can be queried more extensively than through the regular Google search form.
    Since Google, unlike other search engines, prohibits automated querying of its database in its terms of service, the API is the only way to build one's own applications on a Google basis. A separate chapter describes how to use the API from various programming languages such as PHP, Java, Python, and so on. The examples in the book, however, are all written in Perl, so it seems sensible to start one's own experiments in that language as well.
    The sixth chapter contains 26 applications of the Google API, some developed by the book's authors themselves and some put online by other authors. Among the applications singled out as particularly useful are the TouchGraph Google Browser for visualizing results and an application that allows Google searches with proximity operators. It is striking that the more interesting of these applications were not written by the book's authors, who confined themselves to simpler ones, such as counting hits by top-level domain; nonetheless, these applications too are largely useful. A further chapter presents pranks and games built with the Google API. Their usefulness is questionable, of course, but for completeness' sake they may belong in the book. More interesting again is the final chapter, "The Webmaster Side of Google". It explains to site operators how Google works, how best to word and place advertisements, which rules to observe if one wants one's pages placed well in Google, and, finally, how to remove pages from the Google index again. These remarks are kept very brief and are therefore no substitute for works that deal in depth with search engine marketing; unlike some other books on the subject, however, they are thoroughly serious and promise no miracles regarding the placement of one's own pages in the Google index. "Google Hacks" can also be recommended to those who have no wish to program against the API. As the most extensive collection to date of tips and techniques for a more targeted use of Google, it is suitable for every advanced Google user. Some of the hacks may have been included simply to reach the round number of 100, but others clearly extend the possibilities of research. In this respect the book also helps to compensate a little for Google's query language, which is unfortunately inadequate for professional needs." - Bergische Landeszeitung Nr.207, 6.9.2003, S.RAS04A/1 (Rundschau am Sonntag: Netzwelt), P. Zschunke: Richtig googeln (see there)
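
    The review mentions counting hits by top-level domain as one of the book's simpler API applications. A minimal Java sketch of that counting step, assuming the result URLs have already been fetched through the API (the URLs and class name are placeholders, not code from the book, whose own listings are in Perl):

      import java.net.URI;
      import java.util.Map;
      import java.util.TreeMap;

      public class TldCounter {
          public static void main(String[] args) {
              // Placeholder result URLs; in practice these would come from API responses.
              String[] resultUrls = {
                  "http://www.oreilly.com/catalog/googlehks/",
                  "http://www.researchbuzz.com/news/",
                  "http://www.example.ac.uk/library/"
              };
              Map<String, Integer> counts = new TreeMap<>();
              for (String url : resultUrls) {
                  String host = URI.create(url).getHost();                 // e.g. www.oreilly.com
                  String tld = host.substring(host.lastIndexOf('.') + 1);  // e.g. "com", "uk"
                  counts.merge(tld, 1, Integer::sum);
              }
              counts.forEach((tld, n) -> System.out.println("." + tld + ": " + n));
          }
      }
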
  2. Hammond, T.; Hannay, T.; Lund, B.; Scott, J.: Social bookmarking tools (I) : a general review (2005) 0.06
    0.057129018 = product of:
      0.22851607 = sum of:
        0.22851607 = weight(_text_:hyperlinks in 2188) [ClassicSimilarity], result of:
          0.22851607 = score(doc=2188,freq=6.0), product of:
            0.46692044 = queryWeight, product of:
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.06390027 = queryNorm
            0.48941118 = fieldWeight in 2188, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.02734375 = fieldNorm(doc=2188)
      0.25 = coord(1/4)
    
    Abstract
    Because, to paraphrase a pop music lyric from a certain rock and roll band of yesterday, "the Web is old, the Web is new, the Web is all, the Web is you", it seems like we might have to face up to some of these stark realities. With the introduction of new social software applications such as blogs, wikis, newsfeeds, social networks, and bookmarking tools (the subject of this paper), the claim that Shelley Powers makes in a Burningbird blog entry seems apposite: "This is the user's web now, which means it's my web and I can make the rules." Reinvention is revolution - it brings us always back to beginnings. We are here going to remind you of hyperlinks in all their glory, sell you on the idea of bookmarking hyperlinks, point you at other folks who are doing the same, and tell you why this is a good thing. Just as long as those hyperlinks (or let's call them plain old links) are managed, tagged, commented upon, and published onto the Web, they represent a user's own personal library placed on public record, which - when aggregated with other personal libraries - allows for rich, social networking opportunities. Why spill any ink (digital or not) in rewriting what someone else has already written about instead of just pointing at the original story and adding the merest of titles, descriptions and tags for future reference? More importantly, why not make these personal 'link playlists' available to oneself and to others from whatever browser or computer one happens to be using at the time? This paper reviews some current initiatives, as of early 2005, in providing public link management applications on the Web - utilities that are often referred to under the general moniker of 'social bookmarking tools'. There are a couple of things going on here: 1) server-side software aimed specifically at managing links with, crucially, a strong, social networking flavour, and 2) an unabashedly open and unstructured approach to tagging, or user classification, of those links.
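
    A sketch (mine, not the paper's) of the data model the abstract describes: a bookmark couples a link with a title, its owner, and tags, and aggregating many users' bookmarks by tag yields the shared, social view:

      import java.util.List;
      import java.util.Map;
      import java.util.stream.Collectors;

      public class SocialBookmarks {
          // A published bookmark: user, link, title, and free-form tags.
          record Bookmark(String user, String url, String title, List<String> tags) {}

          public static void main(String[] args) {
              List<Bookmark> all = List.of(
                  new Bookmark("alice", "http://example.org/paper", "A paper", List.of("ir", "tagging")),
                  new Bookmark("bob",   "http://example.org/paper", "A paper", List.of("tagging"))
              );
              // Aggregate personal libraries: which users bookmarked something under each tag.
              Map<String, List<String>> byTag = all.stream()
                  .flatMap(b -> b.tags().stream().map(t -> Map.entry(t, b.user())))
                  .collect(Collectors.groupingBy(Map.Entry::getKey,
                           Collectors.mapping(Map.Entry::getValue, Collectors.toList())));
              System.out.println(byTag); // {ir=[alice], tagging=[alice, bob]}
          }
      }
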
  3. Webber, S.: Search engines and news services : developments on the Internet (1998) 0.06
    0.056543067 = product of:
      0.22617227 = sum of:
        0.22617227 = weight(_text_:hyperlinks in 6103) [ClassicSimilarity], result of:
          0.22617227 = score(doc=6103,freq=2.0), product of:
            0.46692044 = queryWeight, product of:
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.06390027 = queryNorm
            0.48439145 = fieldWeight in 6103, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.046875 = fieldNorm(doc=6103)
      0.25 = coord(1/4)
    
    Abstract
    Focuses on some issues relating to Internet search engines (such as Alta Vista, HotBot and Yahoo!) and their use in news information Web sites on the Internet, some of the ways in which search engine providers are trying to improve searching performance, and some of the choices facing information providers. Reviews ways in which search engine providers are responding to the challenge of improving searching, including: adding a selective, browsable database as an alternative; including only home pages (producing fewer hits) and browsability; adding company information; adjusting the weightings on their relevance rankings; building up searches; and allowing Boolean logic and field searching. Also examines the options facing providers of news information on the Internet, particularly primary sources such as newspapers, news agencies and television companies. Discusses issues such as: whether or not to charge; the types of hyperlinks to provide; whether or not to partner and become a portal; the desirability of electronic mail alerts; and the acceptability of news aggregation.
  4. Poworoznek, E.L.: Linking of errata : current practices in online physical sciences journals (2003) 0.06
    0.056543067 = product of:
      0.22617227 = sum of:
        0.22617227 = weight(_text_:hyperlinks in 2874) [ClassicSimilarity], result of:
          0.22617227 = score(doc=2874,freq=2.0), product of:
            0.46692044 = queryWeight, product of:
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.06390027 = queryNorm
            0.48439145 = fieldWeight in 2874, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.046875 = fieldNorm(doc=2874)
      0.25 = coord(1/4)
    
    Abstract
    Reader awareness of article corrections can be of critical importance in the physical and biomedical sciences. Comparison of errata and corrigenda in online versions of high-impact physical sciences journals across titles and publishers yielded surprising variability. Of 43 online journals surveyed, 17 had no links between original articles and later corrections. When present, hyperlinks between articles and errata showed patterns in presentation style, but lacked consistency. Variability in the presentation, linking, and availability of online errata indicates that practices are not evenly developed across the field. Comparison of finding tools showed excellent coverage of errata by Science Citation Index, lack of indexing in INSPEC, and lack of retrieval with SciFinder Scholar. The development of standards for the linking of original articles to errata is recommended.
  5. Zhang, J.; Nguyen, T.: WebStar: a visualization model for hyperlink structures (2005) 0.06
    0.056543067 = product of:
      0.22617227 = sum of:
        0.22617227 = weight(_text_:hyperlinks in 2056) [ClassicSimilarity], result of:
          0.22617227 = score(doc=2056,freq=2.0), product of:
            0.46692044 = queryWeight, product of:
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.06390027 = queryNorm
            0.48439145 = fieldWeight in 2056, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.046875 = fieldNorm(doc=2056)
      0.25 = coord(1/4)
    
    Abstract
    The authors introduce an information visualization model, WebStar, for hyperlink-based information systems. Hyperlinks within a hyperlink-based document can be visualized in a two-dimensional visual space. All links are projected within a display sphere in the visual space. The relationship between a specified central document and its hyperlinked documents is visually presented in the visual space. In addition, users are able to define a group of subjects and to observe relevance between each subject and all hyperlinked documents via movement of that subject around the display sphere center. WebStar allows users to dynamically change an interest center during navigation. A retrieval mechanism is developed to control retrieved results in the visual space. Impact of movement of a subject on the visual document distribution is analyzed. An ambiguity problem caused by projection is discussed. Potential applications of this visualization model in information retrieval are included. Future research directions on the topic are addressed.
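
    An illustrative sketch of the general idea only, not the paper's actual projection: each hyperlinked document is placed in a 2D "display sphere", with the angle spreading documents apart and the radius shrinking as relevance to the chosen subject grows, so relevant documents cluster near the center:

      public class DisplaySphereSketch {
          public static void main(String[] args) {
              String[] docs = {"d1", "d2", "d3", "d4"};
              double[] relevance = {0.9, 0.4, 0.7, 0.1};       // hypothetical scores in [0,1]
              for (int i = 0; i < docs.length; i++) {
                  double angle = 2 * Math.PI * i / docs.length; // spread documents evenly
                  double radius = 1.0 - relevance[i];           // high relevance -> near the center
                  double x = radius * Math.cos(angle);
                  double y = radius * Math.sin(angle);
                  System.out.printf("%s -> (%.3f, %.3f)%n", docs[i], x, y);
              }
          }
      }
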
  6. Kipp, M.E.I.: Searching with tags : do tags help users find things? (2008) 0.06
    0.056543067 = product of:
      0.22617227 = sum of:
        0.22617227 = weight(_text_:hyperlinks in 3278) [ClassicSimilarity], result of:
          0.22617227 = score(doc=3278,freq=2.0), product of:
            0.46692044 = queryWeight, product of:
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.06390027 = queryNorm
            0.48439145 = fieldWeight in 3278, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.046875 = fieldNorm(doc=3278)
      0.25 = coord(1/4)
    
    Content
    This study examines the question of whether tags can be useful in the process of information retrieval. Participants were asked to search a social bookmarking tool specialising in academic articles (CiteULike) and an online journal database (PubMed) in order to determine whether users found tags useful in their search process. The actions of each participant were captured using screen capture software, and each was asked to describe their search process. The preliminary study showed that users did indeed make use of tags in their search process, as a guide to searching and as hyperlinks to potentially useful articles. However, users also made use of controlled vocabularies in the journal database.
  7. Thelwall, M.: Webometrics (2009) 0.06
    0.056543067 = product of:
      0.22617227 = sum of:
        0.22617227 = weight(_text_:hyperlinks in 893) [ClassicSimilarity], result of:
          0.22617227 = score(doc=893,freq=2.0), product of:
            0.46692044 = queryWeight, product of:
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.06390027 = queryNorm
            0.48439145 = fieldWeight in 893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.046875 = fieldNorm(doc=893)
      0.25 = coord(1/4)
    
    Abstract
    Webometrics is an information science field concerned with measuring aspects of the World Wide Web (WWW) for a variety of information science research goals. It came into existence about five years after the Web was formed and has since grown to become a significant aspect of information science, at least in terms of published research. Although some webometrics research has focused on the structure or evolution of the Web itself or the performance of commercial search engines, most has used data from the Web to shed light on information provision or online communication in various contexts. Most prominently, techniques have been developed to track, map, and assess Web-based informal scholarly communication, for example, in terms of the hyperlinks between academic Web sites or the online impact of digital repositories. In addition, a range of nonacademic issues and groups of Web users have also been analyzed.
  8. Campbell, D.G.: Farradane's relational indexing and its relationship to hyperlinking in Alzheimer's information (2012) 0.06
    0.056543067 = product of:
      0.22617227 = sum of:
        0.22617227 = weight(_text_:hyperlinks in 1847) [ClassicSimilarity], result of:
          0.22617227 = score(doc=1847,freq=2.0), product of:
            0.46692044 = queryWeight, product of:
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.06390027 = queryNorm
            0.48439145 = fieldWeight in 1847, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.046875 = fieldNorm(doc=1847)
      0.25 = coord(1/4)
    
    Abstract
    In an ongoing investigation of the relationship between Jason Farradane's relational indexing principles and concept combination in Web-based information on Alzheimer's Disease, the hyperlinks of three consumer health information websites are examined to see how well the linking relationships map to Farradane's relational operators, as well as to the linking attributes in HTML 5. The links were found to be largely bibliographic in nature, and as such mapped well onto HTML 5. Farradane's operators were less effective at capturing the individual links; nonetheless, the two dimensions of his relational matrix (association and discrimination) reveal a crucial underlying strategy of the emotionally charged mediation between complex information and users who are consulting it under severe stress.
  9. Zhang, L.: Linking information through function (2014) 0.06
    0.056543067 = product of:
      0.22617227 = sum of:
        0.22617227 = weight(_text_:hyperlinks in 2526) [ClassicSimilarity], result of:
          0.22617227 = score(doc=2526,freq=2.0), product of:
            0.46692044 = queryWeight, product of:
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.06390027 = queryNorm
            0.48439145 = fieldWeight in 2526, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.046875 = fieldNorm(doc=2526)
      0.25 = coord(1/4)
    
    Abstract
    How information resources can be meaningfully related has been addressed in contexts from bibliographic entries to hyperlinks and, more recently, linked data. The genre structure and relationships among genre structure constituents shed new light on organizing information by purpose or function. This study examines the relationships among a set of functional units previously constructed in a taxonomy, each of which is a chunk of information embedded in a document and is distinct in terms of its communicative function. Through a card-sort study, relationships among functional units were identified with regard to their occurrence and function. The findings suggest that a group of functional units can be identified, collocated, and navigated by particular relationships. Understanding how functional units are related to each other is significant in linking information pieces in documents to support finding, aggregating, and navigating information in a distributed information environment.
  10. Gibson, P.: Professionals' perfect Web world in sight : users want more information on the Web, and vendors attempt to provide (1998) 0.05
    0.0525981 = product of:
      0.2103924 = sum of:
        0.2103924 = weight(_text_:java in 2656) [ClassicSimilarity], result of:
          0.2103924 = score(doc=2656,freq=2.0), product of:
            0.45033762 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06390027 = queryNorm
            0.46718815 = fieldWeight in 2656, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=2656)
      0.25 = coord(1/4)
    
    Abstract
    Many information professionals feel that the time is still far off when the WWW can offer the combined functionality and content of traditional online and CD-ROM databases, but there have been a number of recent Web developments to reflect on. Describes the testing and launch by Ovid of its Java client which, in effect, allows access to its databases on the Web with full search functionality, and the initiative of Euromonitor in providing Web access to its whole collection of consumer research reports and its entire database of business sources. Also reviews the service of a newcomer to the information scene, Information Quest (IQ), founded by Dawson Holdings, which has made an agreement with Infonautics to offer access to its Electric Library database, thus adding over 1,000 reference, consumer and business publications to its Web-based journal service.
  11. Nieuwenhuysen, P.; Vanouplines, P.: Document plus program hybrids on the Internet and their impact on information transfer (1998) 0.05
    0.0525981 = product of:
      0.2103924 = sum of:
        0.2103924 = weight(_text_:java in 2893) [ClassicSimilarity], result of:
          0.2103924 = score(doc=2893,freq=2.0), product of:
            0.45033762 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06390027 = queryNorm
            0.46718815 = fieldWeight in 2893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=2893)
      0.25 = coord(1/4)
    
    Abstract
    Examines some of the advanced tools, techniques, methods and standards related to the Internet and WWW which consist of hybrids of documents and software, called 'document program hybrids'. Early Internet systems were based on having documents on one side and software on the other, neatly separated, apart from one another and without much interaction, so that the static document could also exist without computers and networks. Document program hybrids blur this classical distinction, and all components are integrated, interwoven and exist in synergy with each other. Illustrates the techniques with particular reference to practical examples, including: data collections and dedicated software; advanced HTML features on the WWW; multimedia viewers and plug-in software for Internet and WWW browsers; VRML; interaction through a Web server with other servers and with instruments; adaptive hypertext provided by the server; 'webbots' or 'knowbots' or 'searchbots' or 'metasearch engines' or intelligent software agents; Sun's Java; Microsoft's ActiveX; program scripts for HTML and Web browsers; cookies; and Internet push technology with Webcasting channels.
  12. Mills, T.; Moody, K.; Rodden, K.: Providing world wide access to historical sources (1997) 0.05
    0.0525981 = product of:
      0.2103924 = sum of:
        0.2103924 = weight(_text_:java in 3697) [ClassicSimilarity], result of:
          0.2103924 = score(doc=3697,freq=2.0), product of:
            0.45033762 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06390027 = queryNorm
            0.46718815 = fieldWeight in 3697, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=3697)
      0.25 = coord(1/4)
    
    Abstract
    A unique collection of historical material covering the lives and events of an English village between 1400 and 1750 has been made available via a WWW-enabled information retrieval system. Since the expected readership of the documents ranges from school children to experienced researchers, providing this information in an easily accessible form has offered many challenges requiring tools to aid searching and browsing. The file structure of the document collection was replaced by a database, enabling query results to be presented on the fly. A Java interface displays each user's context in a form that allows for easy and intuitive relevance feedback.
  13. Maarek, Y.S.: WebCutter : a system for dynamic and tailorable site mapping (1997) 0.05
    0.0525981 = product of:
      0.2103924 = sum of:
        0.2103924 = weight(_text_:java in 3739) [ClassicSimilarity], result of:
          0.2103924 = score(doc=3739,freq=2.0), product of:
            0.45033762 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06390027 = queryNorm
            0.46718815 = fieldWeight in 3739, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=3739)
      0.25 = coord(1/4)
    
    Abstract
    Presents an approach that integrates searching and browsing in a manner that improves both paradigms. When browsing is the primary task, it enables semantic content-based tailoring of Web maps in both the generation and visualization phases. When search is the primary task, it enables contextualization of the results by augmenting them with the documents' neighbourhoods. This approach is embodied in WebCutter, a client-server system fully integrated with Web software. WebCutter consists of a map generator running off a standard Web server and a map visualization client implemented as a Java applet runnable from any standard Web browser, requiring no installation or external plug-in application. WebCutter is in beta stage and is in the process of being integrated into the Lotus Domino application product line.
  14. Pan, B.; Gay, G.; Saylor, J.; Hembrooke, H.: One digital library, two undergraduate classes, and four learning modules : uses of a digital library in classrooms (2006) 0.05
    0.0525981 = product of:
      0.2103924 = sum of:
        0.2103924 = weight(_text_:java in 907) [ClassicSimilarity], result of:
          0.2103924 = score(doc=907,freq=2.0), product of:
            0.45033762 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06390027 = queryNorm
            0.46718815 = fieldWeight in 907, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=907)
      0.25 = coord(1/4)
    
    Abstract
    The KMODDL (kinematic models for design digital library) is a digital library based on a historical collection of kinematic models made of steel and bronze. The digital library contains four types of learning modules including textual materials, QuickTime virtual reality movies, Java simulations, and stereolithographic files of the physical models. The authors report an evaluation study on the uses of the KMODDL in two undergraduate classes. This research reveals that the users in different classes encountered different usability problems, and reported quantitatively different subjective experiences. Further, the results indicate that depending on the subject area, the two user groups preferred different types of learning modules, resulting in different uses of the available materials and different learning outcomes. These findings are discussed in terms of their implications for future digital library design.
  15. Mongin, L.; Fu, Y.Y.; Mostafa, J.: Open Archives data Service prototype and automated subject indexing using D-Lib archive content as a testbed (2003) 0.05
    0.0525981 = product of:
      0.2103924 = sum of:
        0.2103924 = weight(_text_:java in 2167) [ClassicSimilarity], result of:
          0.2103924 = score(doc=2167,freq=2.0), product of:
            0.45033762 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06390027 = queryNorm
            0.46718815 = fieldWeight in 2167, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=2167)
      0.25 = coord(1/4)
    
    Abstract
    The Indiana University School of Library and Information Science opened a new research laboratory in January 2003: the Indiana University School of Library and Information Science Information Processing Laboratory [IU IP Lab]. The purpose of the new laboratory is to facilitate collaboration between scientists in the department in the areas of information retrieval (IR) and information visualization (IV) research. The lab has several areas of focus. These include grid and cluster computing and a standard Java-based software platform to support plug-and-play research datasets, a selection of standard IR modules, and standard IV algorithms. Future development includes software to enable researchers to contribute datasets, IR algorithms, and visualization algorithms into the standard environment. We decided early on to use OAI-PMH as a resource discovery tool because it is consistent with our mission.
  16. Song, R.; Luo, Z.; Nie, J.-Y.; Yu, Y.; Hon, H.-W.: Identification of ambiguous queries in web search (2009) 0.05
    0.0525981 = product of:
      0.2103924 = sum of:
        0.2103924 = weight(_text_:java in 3441) [ClassicSimilarity], result of:
          0.2103924 = score(doc=3441,freq=2.0), product of:
            0.45033762 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06390027 = queryNorm
            0.46718815 = fieldWeight in 3441, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=3441)
      0.25 = coord(1/4)
    
    Abstract
    It is widely believed that many queries submitted to search engines are inherently ambiguous (e.g., java and apple). However, few studies have tried to classify queries based on ambiguity and to answer the question of what proportion of queries is ambiguous. This paper deals with these issues. First, we clarify the definition of ambiguous queries by constructing a taxonomy of queries ranging from ambiguous to specific. Second, we ask human annotators to manually classify queries. From the manually labeled results, we observe that query ambiguity is to some extent predictable. Third, we propose a supervised learning approach to automatically identify ambiguous queries. Experimental results show that we can correctly identify 87% of labeled queries with the approach. Finally, by using our approach, we estimate that about 16% of queries in a real search log are ambiguous.
  17. Croft, W.B.; Metzler, D.; Strohman, T.: Search engines : information retrieval in practice (2010) 0.05
    0.0525981 = product of:
      0.2103924 = sum of:
        0.2103924 = weight(_text_:java in 3605) [ClassicSimilarity], result of:
          0.2103924 = score(doc=3605,freq=2.0), product of:
            0.45033762 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06390027 = queryNorm
            0.46718815 = fieldWeight in 3605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=3605)
      0.25 = coord(1/4)
    
    Abstract
    For introductory information retrieval courses at the undergraduate and graduate level in computer science, information science and computer engineering departments. Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice is designed to give undergraduate students the understanding and tools they need to evaluate, compare and modify search engines. Coverage of the underlying IR and mathematical models reinforces key concepts. The book's numerous programming exercises make extensive use of Galago, a Java-based open source search engine. SUPPLEMENTS: extensive lecture slides (in PDF and PPT format); solutions to selected end-of-chapter problems (instructors only); test collections for exercises; Galago search engine.
  18. Tang, X.-B.; Wei, W.; Liu, G.-C.; Zhu, J.: An inference model of medical insurance fraud detection : based on ontology and SWRL (2017) 0.05
    0.0525981 = product of:
      0.2103924 = sum of:
        0.2103924 = weight(_text_:java in 4615) [ClassicSimilarity], result of:
          0.2103924 = score(doc=4615,freq=2.0), product of:
            0.45033762 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.06390027 = queryNorm
            0.46718815 = fieldWeight in 4615, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=4615)
      0.25 = coord(1/4)
    
    Abstract
    Medical insurance fraud is common in many countries' medical insurance systems and represents a serious threat to the insurance funds and the benefits of patients. In this paper, we present an inference model of medical insurance fraud detection, based on a medical detection domain ontology that incorporates the knowledge base provided by the Medical Terminology, NKIMed, and Chinese Library Classification systems. Through analyzing the behaviors of irregular and fraudulent medical services, we defined the scope of the medical domain ontology relevant to the task and built the ontology about medical sciences and medical service behaviors. The ontology then utilizes Semantic Web Rule Language (SWRL) and Java Expert System Shell (JESS) to detect medical irregularities and mine implicit knowledge. The system can be used to improve the management of medical insurance risks.
  19. Portable document formats (1996) 0.05
    0.04711922 = product of:
      0.18847688 = sum of:
        0.18847688 = weight(_text_:hyperlinks in 4810) [ClassicSimilarity], result of:
          0.18847688 = score(doc=4810,freq=2.0), product of:
            0.46692044 = queryWeight, product of:
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.06390027 = queryNorm
            0.40365952 = fieldWeight in 4810, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.0390625 = fieldNorm(doc=4810)
      0.25 = coord(1/4)
    
    Abstract
    Reports on the continued success of 3 electronic publishing software packages (Adobe Acrobat, Envoy, and Tumbleweed) which preserve original printed layouts and typography. Compares some of the main features of the systems and reports the reasons, given by the business information database provider MAID Systems, for choosing Acrobat. Concludes that: all 3 systems have improved in ways which matter in a publishing context; with the very latest developments from Tumbleweed, Envoy appears to have caught up with Acrobat; Common Ground shares the same advantages as the other 3 but has an admittedly small installed base that makes it less attractive. All systems now offer an attractive alternative for disseminating page images, and users can search the text, follow hyperlinks and add their own bookmarks and annotations. The rise of the WWW is a double-edged sword for these systems. As they integrate with browsers such as Netscape, the potential reach of the systems increases, yet the capability of the WWW to free users from the printed-page paradigm poses the question of whether such systems are really needed; and, finally, in the printed context, page layout can help to convey a clear and positive message, yet the computer screen is less suited to browsing or catching the eye, and the adherence to the printed appearance when displaying data could be more of a restriction than an advantage.
  20. Thelwall, M.: Extracting macroscopic information from Web links (2001) 0.05
    0.04711922 = product of:
      0.18847688 = sum of:
        0.18847688 = weight(_text_:hyperlinks in 851) [ClassicSimilarity], result of:
          0.18847688 = score(doc=851,freq=2.0), product of:
            0.46692044 = queryWeight, product of:
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.06390027 = queryNorm
            0.40365952 = fieldWeight in 851, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.3070183 = idf(docFreq=80, maxDocs=44421)
              0.0390625 = fieldNorm(doc=851)
      0.25 = coord(1/4)
    
    Abstract
    Much has been written about the potential and pitfalls of macroscopic Web-based link analysis, yet there have been no studies that have provided clear statistical evidence that any of the proposed calculations can produce results over large areas of the Web that correlate with phenomena external to the Internet. This article attempts to provide such evidence through an evaluation of Ingwersen's (1998) proposed external Web Impact Factor (WIF) for the original use of the Web: the interlinking of academic research. In particular, it studies the case of the relationship between academic hyperlinks and research activity for universities in Britain, a country chosen for its variety of institutions and the existence of an official government rating exercise for research. After reviewing the numerous reasons why link counts may be unreliable, it demonstrates that four different WIFs do, in fact, correlate with the conventional academic research measures. The WIF delivering the greatest correlation with research rankings was the ratio of Web pages with links pointing at research-based pages to faculty numbers. The scarcity of links to electronic academic papers in the data set suggests that, in contrast to citation analysis, this WIF is measuring the reputations of universities and their scholars, rather than the quality of their publications
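
    The best-performing WIF in the abstract can be restated as a formula (notation mine, not the article's):

      \mathrm{WIF}(u) = \frac{\bigl|\{\, p : p \text{ contains a link to a research-based page of university } u \,\}\bigr|}{\text{faculty}(u)}

    that is, the number of Web pages with at least one link pointing at the university's research-based pages, divided by its faculty headcount.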

Languages

  • d 32
  • m 3
  • nl 1

Types

  • a 821
  • m 311
  • el 107
  • s 92
  • i 21
  • n 17
  • x 13
  • r 10
  • b 7
  • ? 1
  • v 1
