Search (1226 results, page 2 of 62)

  • Filter: language_ss:"e"
  1. Alavi, M.; Tiwana, A.: Knowledge integration in virtual teams : the potential role of KMS (2002) 0.09
    0.08795027 = product of:
      0.35180107 = sum of:
        0.35180107 = weight(_text_:harnessing in 1980) [ClassicSimilarity], result of:
          0.35180107 = score(doc=1980,freq=2.0), product of:
            0.52828646 = queryWeight, product of:
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.061354287 = queryNorm
            0.6659286 = fieldWeight in 1980, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1980)
      0.25 = coord(1/4)
    
    Abstract
    Virtual teams are becoming a preferred mechanism for harnessing, integrating, and applying knowledge that is distributed across organizations and in pockets of collaborative networks. In this article we recognize that knowledge application, among the three phases of knowledge management, has received little research attention. Paradoxically, this phase contributes most to value creation. Extending communication theory, we identify four challenges to knowledge integration in virtual team environments: constraints on transactive memory, insufficient mutual understanding, failure in sharing and retaining contextual knowledge, and inflexibility of organizational ties. We then propose knowledge management system (KMS) approaches to meet these challenges. Finally, we identify promising avenues for future research in this area.
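The relevance breakdowns shown with each entry follow Lucene's ClassicSimilarity (TF-IDF) explain format. The following is a minimal sketch that reproduces entry 1's score from the values in its explain tree; the function names are illustrative, not Lucene's API, and only the formulas follow Lucene's documented ClassicSimilarity.

```python
import math

def tf(freq):
    # Term-frequency factor: square root of the raw term frequency.
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # Inverse document frequency: ln(maxDocs / (docFreq + 1)) + 1.
    return math.log(max_docs / (doc_freq + 1)) + 1

def score(freq, doc_freq, max_docs, field_norm, query_norm, coord):
    # queryWeight * fieldWeight * coord, as in the explain trees above.
    query_weight = idf(doc_freq, max_docs) * query_norm
    field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight * coord

# Entry 1: "harnessing" in doc 1980 (freq=2, docFreq=21, maxDocs=44421).
s = score(freq=2.0, doc_freq=21, max_docs=44421,
          field_norm=0.0546875, query_norm=0.061354287, coord=0.25)
# s matches the listed 0.08795027 to about seven decimal places.
```

The same formulas reproduce the intermediate values in the tree, e.g. idf(21, 44421) ≈ 8.610425 and tf(2.0) ≈ 1.4142135.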
  2. Hawk, J.: OCLC SiteSearch (1998) 0.08
    0.08332475 = product of:
      0.333299 = sum of:
        0.333299 = weight(_text_:java in 3079) [ClassicSimilarity], result of:
          0.333299 = score(doc=3079,freq=4.0), product of:
            0.43239477 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061354287 = queryNorm
            0.7708211 = fieldWeight in 3079, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3079)
      0.25 = coord(1/4)
    
    Abstract
    Feature on OCLC's SiteSearch suite of software, first introduced in 1992, and how it is helping over 250 libraries integrate and manage their electronic library collections. Describes the new features of version 4.0, released in Apr 1997, which include a new interface, Java-based architecture, and an online documentation and training site. Gives an account of how Java is helping the Georgia Library Learning Online (GALILEO) project to keep pace on the WWW; the use of SiteSearch by libraries to customize their interface to electronic resources; and gives details of Project Athena (Assessing Technological Horizons to Educate the Nashville Area), which is using OCLC SiteSearch to allow area library users to search the holdings of public and university libraries simultaneously.
  3. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.08
    0.08332475 = product of:
      0.333299 = sum of:
        0.333299 = weight(_text_:java in 2673) [ClassicSimilarity], result of:
          0.333299 = score(doc=2673,freq=4.0), product of:
            0.43239477 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061354287 = queryNorm
            0.7708211 = fieldWeight in 2673, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0546875 = fieldNorm(doc=2673)
      0.25 = coord(1/4)
    
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib.
  4. Conrad, J.G.; Schriber, C.P.: Managing déjà vu : collection building for the identification of nonidentical duplicate documents (2006) 0.08
    0.07538594 = product of:
      0.30154377 = sum of:
        0.30154377 = weight(_text_:harnessing in 59) [ClassicSimilarity], result of:
          0.30154377 = score(doc=59,freq=2.0), product of:
            0.52828646 = queryWeight, product of:
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.061354287 = queryNorm
            0.57079595 = fieldWeight in 59, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.046875 = fieldNorm(doc=59)
      0.25 = coord(1/4)
    
    Abstract
    As online document collections continue to expand, both on the Web and in proprietary environments, the need for duplicate detection becomes more critical. Few users wish to retrieve search results consisting of sets of duplicate documents, whether identical duplicates or close variants. The goal of this work is to facilitate (a) investigations into the phenomenon of near duplicates and (b) algorithmic approaches to minimizing its deleterious effect on search results. Harnessing the expertise of both client-users and professional searchers, we establish principled methods to generate a test collection for identifying and handling nonidentical duplicate documents. We subsequently examine a flexible method of characterizing and comparing documents to permit the identification of near duplicates. This method has produced promising results following an extensive evaluation using a production-based test collection created by domain experts.
  5. Franklin, R.A.: Re-inventing subject access for the semantic web (2003) 0.08
    0.07538594 = product of:
      0.30154377 = sum of:
        0.30154377 = weight(_text_:harnessing in 3556) [ClassicSimilarity], result of:
          0.30154377 = score(doc=3556,freq=2.0), product of:
            0.52828646 = queryWeight, product of:
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.061354287 = queryNorm
            0.57079595 = fieldWeight in 3556, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.046875 = fieldNorm(doc=3556)
      0.25 = coord(1/4)
    
    Abstract
    First generation scholarly research on the Web lacked a firm system of authority control. Second generation Web research is beginning to model subject access with library science principles of bibliographic control and cataloguing. Harnessing the Web and organising the intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of relating concepts. Web design features such as this are adding value to discovery and filtering out data that lack authority. The system design allows for scalability and extensibility, two technical features that are integral to future development of the digital library and resource discovery.
  6. Tan, B.; Pan, S.L.; Zuo, M.: Harnessing collective IT resources for sustainability : insights from the green leadership strategy of China mobile (2015) 0.08
    0.07538594 = product of:
      0.30154377 = sum of:
        0.30154377 = weight(_text_:harnessing in 2731) [ClassicSimilarity], result of:
          0.30154377 = score(doc=2731,freq=2.0), product of:
            0.52828646 = queryWeight, product of:
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.061354287 = queryNorm
            0.57079595 = fieldWeight in 2731, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.046875 = fieldNorm(doc=2731)
      0.25 = coord(1/4)
    
  7. Goslin, K.; Hofmann, M.: ¬A Wikipedia powered state-based approach to automatic search query enhancement (2018) 0.08
    0.07538594 = product of:
      0.30154377 = sum of:
        0.30154377 = weight(_text_:harnessing in 83) [ClassicSimilarity], result of:
          0.30154377 = score(doc=83,freq=2.0), product of:
            0.52828646 = queryWeight, product of:
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.061354287 = queryNorm
            0.57079595 = fieldWeight in 83, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.046875 = fieldNorm(doc=83)
      0.25 = coord(1/4)
    
    Abstract
    This paper describes the development and testing of a novel Automatic Search Query Enhancement (ASQE) algorithm, the Wikipedia N Sub-state Algorithm (WNSSA), which utilises Wikipedia as the sole data source for prior knowledge. This algorithm is built upon the concept of iterative states and sub-states, harnessing the power of Wikipedia's data set and link information to identify and utilise reoccurring terms to aid term selection and weighting during enhancement. This algorithm is designed to prevent query drift by making callbacks to the user's original search intent by persisting the original query between internal states with additional selected enhancement terms. The developed algorithm has been shown to improve both short and long queries by providing a better understanding of the query and available data. The proposed algorithm was compared against five existing ASQE algorithms that utilise Wikipedia as the sole data source, showing an average Mean Average Precision (MAP) improvement of 0.273 over the tested existing ASQE algorithms.
  8. Juhne, J.; Jensen, A.T.; Gronbaek, K.: Ariadne: a Java-based guided tour system for the World Wide Web (1998) 0.07
    0.07142121 = product of:
      0.28568485 = sum of:
        0.28568485 = weight(_text_:java in 4593) [ClassicSimilarity], result of:
          0.28568485 = score(doc=4593,freq=4.0), product of:
            0.43239477 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061354287 = queryNorm
            0.6607038 = fieldWeight in 4593, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.046875 = fieldNorm(doc=4593)
      0.25 = coord(1/4)
    
    Abstract
    Presents a guided tour system for the WWW, called Ariadne, which implements the ideas of trails and guided tours originating from the hypertext field. Ariadne appears as a Java applet to the user and it stores guided tours in a database format separated from the WWW documents included in the tour. Its main advantages are: an independent user interface which does not affect the layout of the documents being part of the tour; branching tours where the user may follow alternative routes; composition of existing tours into aggregate tours; an overview map with indication of which parts of a tour have been visited; and support for getting back on track. Ariadne is available as a research prototype, and it has been tested among a group of university students as well as casual users on the Internet.
  9. Reed, D.: Essential HTML fast (1997) 0.07
    0.067336574 = product of:
      0.2693463 = sum of:
        0.2693463 = weight(_text_:java in 6851) [ClassicSimilarity], result of:
          0.2693463 = score(doc=6851,freq=2.0), product of:
            0.43239477 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061354287 = queryNorm
            0.62291753 = fieldWeight in 6851, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=6851)
      0.25 = coord(1/4)
    
    Abstract
    This book provides a quick, concise guide to the issues surrounding the preparation of a well-designed, professional web site using HTML. Topics covered include: how to plan your web site effectively; effective use of hypertext, images, audio and video; layout techniques using tables and lists; how to use style sheets, font sizes and plans for mathematical equation mark-up. Integration of CGI scripts, Java and ActiveX into your web site is also discussed.
  10. Lord Wodehouse: ¬The Intranet : the quiet (r)evolution (1997) 0.07
    0.067336574 = product of:
      0.2693463 = sum of:
        0.2693463 = weight(_text_:java in 171) [ClassicSimilarity], result of:
          0.2693463 = score(doc=171,freq=2.0), product of:
            0.43239477 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061354287 = queryNorm
            0.62291753 = fieldWeight in 171, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=171)
      0.25 = coord(1/4)
    
    Abstract
    Explains how the Intranet (in effect an Internet limited to the computer systems of a single organization) developed out of the Internet, and what its uses and advantages are. Focuses on the Intranet developed in the Glaxo Wellcome organization. Briefly discusses a number of technologies in development, e.g. Java, RealAudio, 3D and VRML, and summarizes the issues involved in the successful development of the Intranet, that is, bandwidth, searching tools, security, and legal issues.
  11. Wang, J.; Reid, E.O.F.: Developing WWW information systems on the Internet (1996) 0.07
    0.067336574 = product of:
      0.2693463 = sum of:
        0.2693463 = weight(_text_:java in 604) [ClassicSimilarity], result of:
          0.2693463 = score(doc=604,freq=2.0), product of:
            0.43239477 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061354287 = queryNorm
            0.62291753 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=604)
      0.25 = coord(1/4)
    
    Abstract
    Gives an overview of Web information system development. Discusses some basic concepts and technologies such as HTML, HTML FORM, CGI and Java, which are associated with developing WWW information systems. Further discusses the design and implementation of Virtual Travel Mart, a Web based end user oriented travel information system. Finally, addresses some issues in developing WWW information systems
  12. Ameritech releases Dynix WebPac on NT (1998) 0.07
    0.067336574 = product of:
      0.2693463 = sum of:
        0.2693463 = weight(_text_:java in 2782) [ClassicSimilarity], result of:
          0.2693463 = score(doc=2782,freq=2.0), product of:
            0.43239477 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061354287 = queryNorm
            0.62291753 = fieldWeight in 2782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=2782)
      0.25 = coord(1/4)
    
    Abstract
    Ameritech Library Services has released Dynix WebPac on NT, which provides access to a Dynix catalogue from any Java compatible Web browser. Users can place holds, cancel and postpone holds, view and renew items on loan and sort and limit search results from the Web. Describes some of the other features of Dynix WebPac
  13. OCLC completes SiteSearch 4.0 field test (1998) 0.07
    0.067336574 = product of:
      0.2693463 = sum of:
        0.2693463 = weight(_text_:java in 3078) [ClassicSimilarity], result of:
          0.2693463 = score(doc=3078,freq=2.0), product of:
            0.43239477 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061354287 = queryNorm
            0.62291753 = fieldWeight in 3078, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=3078)
      0.25 = coord(1/4)
    
    Abstract
    OCLC has announced that 6 library systems have completed field tests of the OCLC SiteSearch 4.0 suite of software, paving the way for its release. Traces the beta site testing programme from its beginning in November 1997 and notes that OCLC SiteServer components have been written in the Java programming language, which will increase libraries' ability to extend the functionality of the SiteSearch software to create new features specific to local needs.
  14. Robinson, D.A.; Lester, C.R.; Hamilton, N.M.: Delivering computer assisted learning across the WWW (1998) 0.07
    0.067336574 = product of:
      0.2693463 = sum of:
        0.2693463 = weight(_text_:java in 4618) [ClassicSimilarity], result of:
          0.2693463 = score(doc=4618,freq=2.0), product of:
            0.43239477 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061354287 = queryNorm
            0.62291753 = fieldWeight in 4618, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=4618)
      0.25 = coord(1/4)
    
    Abstract
    Demonstrates a new method of providing networked computer assisted learning to avoid the pitfalls of traditional methods. This was achieved using Web pages enhanced with Java applets, MPEG video clips and Dynamic HTML
  15. Bates, C.: Web programming : building Internet applications (2000) 0.07
    0.067336574 = product of:
      0.2693463 = sum of:
        0.2693463 = weight(_text_:java in 130) [ClassicSimilarity], result of:
          0.2693463 = score(doc=130,freq=2.0), product of:
            0.43239477 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061354287 = queryNorm
            0.62291753 = fieldWeight in 130, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0625 = fieldNorm(doc=130)
      0.25 = coord(1/4)
    
    Object
    Java
  16. Zschunke, P.: Richtig googeln : Ein neues Buch hilft, alle Möglichkeiten der populären Suchmaschine zu nutzen (2003) 0.07
    0.067091465 = product of:
      0.13418293 = sum of:
        0.101004854 = weight(_text_:java in 55) [ClassicSimilarity], result of:
          0.101004854 = score(doc=55,freq=2.0), product of:
            0.43239477 = queryWeight, product of:
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.061354287 = queryNorm
            0.23359407 = fieldWeight in 55, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.0475073 = idf(docFreq=104, maxDocs=44421)
              0.0234375 = fieldNorm(doc=55)
        0.03317807 = weight(_text_:und in 55) [ClassicSimilarity], result of:
          0.03317807 = score(doc=55,freq=22.0), product of:
            0.13607761 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.061354287 = queryNorm
            0.24381724 = fieldWeight in 55, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0234375 = fieldNorm(doc=55)
      0.5 = coord(2/4)
    
    Content
    "Five years after its founding, Google has become the heart of the worldwide computer network. With its focus on the essentials, the search engine has left all other providers far behind. But Google can do much more than search the Web for texts and images. It also collects and processes postings in discussion forums (newsgroups), current news, and other information available on the net. Anyone who, when "googling", merely types a single word into the search form and then looks at the first of often several hundred thousand hits is using only a tiny fraction of the possibilities. How to push Google to its limits is described by Tara Calishain and Rael Dornfest in a book so far published only in English (Tara Calishain/Rael Dornfest: "Google Hacks", www.oreilly.de, 28 Euro; the most important practical tips are available as the Google Pocket Guide for 12 Euro). - Searching with up to ten words - Their "100 Google Hacks" begin with Google strategies such as combining several search terms and end with an invitation to make one's own use of the Google API ("Application Programming Interface"). This interface can be used to develop one's own programs that access the Google database with its more than three billion entries. More deliberate searching on the Internet starts with combining several search terms: up to ten words can be typed into the form field, which Google joins with the logical operator "and". This default can be changed to an OR combination by inserting "or" between the terms. If a particular term should not appear, a minus sign is placed in front of it. In this way, for example, all hits coming from the online bookseller Amazon can be filtered out of a search. 
Further syntax instructions likewise help to narrow a search: the prefixed instruction "intitle:", for example (entered without the quotation marks), restricts the search to those web pages that carry the term immediately following it in their title. Google's computers handle more than 200 million queries every day. The answers come from a database that contains more than three billion entries and is updated regularly. For this, software robots are deployed, so-called "search bots", which work their way along the hyperlinks on web pages and build a full-text index for each web document. The revenue of the company, founded in 1998 by Larry Page and Sergey Brin, comes mostly from Internet portals that adopt the Google search technology for their own services. A second source of revenue is advertising from companies that pay for a visually highlighted placement in Google's hit lists. The company, headquartered in Mountain View, California, employs around 800 people. The name Google derives from the coined word "googol", with which the American mathematician Edward Kasner denoted the unimaginably large number 10 to the power of 100 (a 1 followed by a hundred zeros). Commercial Internet providers are therefore very keen to appear in the top positions of a Google hit list.
    Since Google, unlike Yahoo or Lycos, never wanted to become an Internet portal designed to attract as many visits as possible, its database can also be searched outside the Google website. First there is the "Google Toolbar" for Internet Explorer, which gives that browser its own bar for Google searches. Independent developers offer their own implementation of this tool for the Netscape/Mozilla browser. A Google search box can also be placed on one's own web page; only four lines of HTML code are needed. A Google search can even be started without any browser at all: in April of last year the company released the API ("Application Programming Interface"), which can be built into one's own programs. A Google search can, for example, be started with an e-mail: the search terms are entered in the subject line of an otherwise empty e-mail, which is sent to the address google@capeclear.com. Shortly afterwards an automatic reply mail arrives with the first ten hits. Given the appropriate knowledge, Google queries can also be built into web services, that is, programs that process data from the Internet. Suitable programming techniques include Perl, PHP, Python, or Java. Calishain and Dornfest even present a number of offbeat sites that use such programs for abstract poems or other works of art."
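Entry 16 is scored differently from the single-term matches above it: two of the four query terms ("java" and "und") match doc 55, so the per-term weights are summed and scaled by coord(2/4). A small sketch using the weights copied from that explain tree:

```python
# Per-term weights taken from entry 16's explain tree.
java_weight = 0.101004854
und_weight = 0.03317807

# coord rewards documents matching more of the query's terms.
coord = 2 / 4  # 2 of 4 query terms matched

final = (java_weight + und_weight) * coord
# final matches the listed 0.067091465 to about eight decimal places.
```

This is why entry 16 uses coord(2/4) = 0.5 where the single-term entries use coord(1/4) = 0.25.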
  17. ¬The World Wide Web and Databases : International Workshop WebDB'98, Valencia, Spain, March 27-28, 1998, Selected papers (1999) 0.06
    0.06282162 = product of:
      0.25128648 = sum of:
        0.25128648 = weight(_text_:harnessing in 4959) [ClassicSimilarity], result of:
          0.25128648 = score(doc=4959,freq=2.0), product of:
            0.52828646 = queryWeight, product of:
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.061354287 = queryNorm
            0.47566327 = fieldWeight in 4959, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.0390625 = fieldNorm(doc=4959)
      0.25 = coord(1/4)
    
    Content
    Contains the contributions: SHIM, J. et al.: A unified algorithm for cache replacement and consistency in Web proxy servers; BILLARD, D.: Transactional services for the Internet; CONNOR, R. et al.: On the unification of persistent programming and the World Wide Web; GOLDMAN, R. & J. WIDOM: Interactive query and search in semistructured databases; KONOPNICKI, D. & O. SHMUELI: Bringing database functionality to the WWW; BIDOIT, N. & M. YKHLEF: Fixpoint calculus for querying semistructured data; SINDONI, G.: Incremental maintenance of hypertext views; SIMÉON, J. & S. CLUET: Using YAT to build a Web server; FALQUET, G. et al.: Languages and tools to specify hypertext views on databases; BEERI, C. et al.: WebSuite: a tool suite for harnessing Web data; BRIN, S.: Extracting patterns and relations from the World Wide Web; SPILIOPOULOU, M. & L.C. FAULSTICH: WUM: a tool for Web utilization analysis; SHIVAKUMAR, N. & H. GARCIA-MOLINA: Finding near-replicas of documents on the Web
  18. Wisser, K.M.; O'Brien Roper, J.: Maximizing metadata : exploring the EAD-MARC relationship (2003) 0.06
    0.06282162 = product of:
      0.25128648 = sum of:
        0.25128648 = weight(_text_:harnessing in 279) [ClassicSimilarity], result of:
          0.25128648 = score(doc=279,freq=2.0), product of:
            0.52828646 = queryWeight, product of:
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.061354287 = queryNorm
            0.47566327 = fieldWeight in 279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.0390625 = fieldNorm(doc=279)
      0.25 = coord(1/4)
    
    Abstract
    Encoded Archival Description (EAD) has provided a new way to approach manuscript and archival collection representation. A review of previous representational practices and problems highlights the benefits of using EAD. This new approach should be considered a partner rather than an adversary in the access providing process. Technological capabilities now allow for multiple metadata schemas to be employed in the creation of the finding aid. Crosswalks allow for MARC records to be generated from the detailed encoding of an EAD finding aid. In the process of creating these crosswalks and detailed encoding, EAD has generated more changes in traditional processes and procedures than originally imagined. The North Carolina State University (NCSU) Libraries sought to test the process of crosswalking EAD to MARC, investigating how this process used technology as well as changed physical procedures. By creating a complex and in-depth EAD template for finding aids, with accompanying related encoding analogs embedded within the element structure, MARC records were generated that required minor editing and revision for inclusion in the NCSU Libraries OPAC. The creation of this bridge between EAD and MARC has stimulated theoretical discussions about the role of collaboration, technology, and expertise in the ongoing struggle to maximize access to our collections. While this study is only a first attempt at harnessing this potential, a presentation of the tensions, struggles, and successes provides illumination to some of the larger issues facing special collections today.
  19. Herman, E.: End-users in academia : meeting the information needs of university researchers in an electronic age: Part 2 Innovative information-accessing opportunities and the researcher: user acceptance of IT-based information resources in academia (2001) 0.06
    0.06282162 = product of:
      0.25128648 = sum of:
        0.25128648 = weight(_text_:harnessing in 824) [ClassicSimilarity], result of:
          0.25128648 = score(doc=824,freq=2.0), product of:
            0.52828646 = queryWeight, product of:
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.061354287 = queryNorm
            0.47566327 = fieldWeight in 824, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.0390625 = fieldNorm(doc=824)
      0.25 = coord(1/4)
    
    Abstract
    This paper is the second part of a two-part paper, which examines the transition to the electronic information era in academia. Seeks to establish from the published literature to what extent university researchers have accepted, and adapted to, the changes wrought in information activity by seemingly endless technological developments. Within the wider context of the impact of the changing information environment on each of the three clearly discernible components of academic research (the creation of knowledge and standards, the preservation of information, and the communication of knowledge and information to others), disciplinary-rooted differences in the conduct of research and their influence on information needs are identified, and the resulting inter- and intra-individual variations in researchers' information seeking behaviour are explored. Reviewing a large number of studies investigating the integration of electronic media into academic work, an attempt is made to paint the picture of academics' progressively harnessing the new technologies to scholarly information gathering endeavours, with the expressed hope of affording some insight into the directions and basic trends characterising the information activity of university faculty in an increasingly electronic environment.
  20. Herman, E.: End-users in academia : meeting the information needs of university researchers in an electronic age (2001) 0.06
    0.06282162 = product of:
      0.25128648 = sum of:
        0.25128648 = weight(_text_:harnessing in 825) [ClassicSimilarity], result of:
          0.25128648 = score(doc=825,freq=2.0), product of:
            0.52828646 = queryWeight, product of:
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.061354287 = queryNorm
            0.47566327 = fieldWeight in 825, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.610425 = idf(docFreq=21, maxDocs=44421)
              0.0390625 = fieldNorm(doc=825)
      0.25 = coord(1/4)
    
    Abstract
    This paper is the first part of a two-part paper, which examines the transition to the electronic information era in academia. Seeks to establish from the published literature to what extent university researchers have accepted, and adapted to, the changes wrought in information activity by seemingly endless technological developments. Within the wider context of the impact of the changing information environment on each of the three clearly discernible components of academic research (the creation of knowledge and standards, the preservation of information, and the communication of knowledge and information to others), disciplinary-rooted differences in the conduct of research and their influence on information needs are identified, and the resulting inter- and intra- individual variations in researchers' information seeking behaviour are explored. Reviewing a large number of studies investigating the integration of electronic media into academic work, an attempt is made to paint the picture of academics' progressively harnessing the new technologies to scholarly information gathering endeavours, with the expressed hope of affording some insight into the directions and basic trends characterizing the information activity of university faculty in an increasingly electronic environment.

Languages

  • d 32
  • m 3
  • nl 1

Types

  • a 787
  • m 312
  • el 102
  • s 94
  • i 21
  • n 17
  • x 12
  • r 10
  • b 7
  • ? 1
  • v 1
