Search (1453 results, page 1 of 73)

  • theme_ss:"Internet"
  1. Selected papers of the Annual Conference of the Internet Society : 5th Joint European Networking Conference (1994) 0.11
    0.10646419 = product of:
      0.42585677 = sum of:
        0.42585677 = weight(_text_:joint in 1829) [ClassicSimilarity], result of:
          0.42585677 = score(doc=1829,freq=4.0), product of:
            0.43664446 = queryWeight, product of:
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.06995397 = queryNorm
            0.9752941 = fieldWeight in 1829, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.078125 = fieldNorm(doc=1829)
      0.25 = coord(1/4)
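The indented tree above (and each one below) is Lucene ClassicSimilarity "explain" output: the final score is the product of queryWeight, fieldWeight, and the coordination factor, where fieldWeight = tf · idf · fieldNorm. A minimal sketch recomputing entry 1's score from the constants printed in the tree (the formula is standard Lucene; nothing here is specific to this database):

```python
import math

# Constants copied from the explain tree for doc 1829, query term "joint".
doc_freq, max_docs = 234, 44421
query_norm = 0.06995397
field_norm = 0.078125
freq = 4.0
coord = 0.25          # coord(1/4): one of four query clauses matched

idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ClassicSimilarity idf
tf = math.sqrt(freq)                             # ClassicSimilarity tf
query_weight = idf * query_norm
field_weight = tf * idf * field_norm
score = query_weight * field_weight * coord      # ~0.10646419, as reported
print(score)
```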
    
    Abstract
    Issue devoted to selected papers of the Annual Conference of the Internet Society / 5th Joint European Networking Conference, held 13-17 June 1994, Prague
  2. Langer, U.: ZDF-Nachrichten aus dem Datennetz : Zur Funkausstellung startet 'heute.online' - Joint-venture mit MSNBC on the Internet (1997) 0.11
    0.10539417 = product of:
      0.42157668 = sum of:
        0.42157668 = weight(_text_:joint in 362) [ClassicSimilarity], result of:
          0.42157668 = score(doc=362,freq=2.0), product of:
            0.43664446 = queryWeight, product of:
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.06995397 = queryNorm
            0.9654919 = fieldWeight in 362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.109375 = fieldNorm(doc=362)
      0.25 = coord(1/4)
    
  3. Second ACM/IEEE-CS Joint Conference on Digital Libraries : JCDL 2002 ; July 14 - 18, 2002, Portland, Oregon: Proceedings (2002) 0.09
    0.090337865 = product of:
      0.36135146 = sum of:
        0.36135146 = weight(_text_:joint in 5051) [ClassicSimilarity], result of:
          0.36135146 = score(doc=5051,freq=2.0), product of:
            0.43664446 = queryWeight, product of:
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.06995397 = queryNorm
            0.8275645 = fieldWeight in 5051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.09375 = fieldNorm(doc=5051)
      0.25 = coord(1/4)
    
  4. Joint, N.: ¬The Web 2.0 challenge to libraries (2009) 0.08
    0.08478631 = product of:
      0.16957262 = sum of:
        0.1505631 = weight(_text_:joint in 3959) [ClassicSimilarity], result of:
          0.1505631 = score(doc=3959,freq=2.0), product of:
            0.43664446 = queryWeight, product of:
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.06995397 = queryNorm
            0.34481853 = fieldWeight in 3959, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.0390625 = fieldNorm(doc=3959)
        0.019009512 = weight(_text_:und in 3959) [ClassicSimilarity], result of:
          0.019009512 = score(doc=3959,freq=2.0), product of:
            0.15515085 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06995397 = queryNorm
            0.12252277 = fieldWeight in 3959, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0390625 = fieldNorm(doc=3959)
      0.5 = coord(2/4)
    
    Location
    Trinidad und Tobago
  5. Kesselman, M.: Beyond Bitnet : telnetting to the United Kingdom (1993) 0.08
    0.07528155 = product of:
      0.3011262 = sum of:
        0.3011262 = weight(_text_:joint in 5544) [ClassicSimilarity], result of:
          0.3011262 = score(doc=5544,freq=2.0), product of:
            0.43664446 = queryWeight, product of:
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.06995397 = queryNorm
            0.68963706 = fieldWeight in 5544, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.078125 = fieldNorm(doc=5544)
      0.25 = coord(1/4)
    
    Abstract
    Describes the reasons why US librarians might wish to access the Joint Academic Network (JANET), how to telnet to the UK, JANET discussion groups and bulletin boards, and how to access UK and European online services through JANET. Briefly discusses the Bath Information and Data Service
  6. Networked information in an international context (1996) 0.08
    0.07528155 = product of:
      0.3011262 = sum of:
        0.3011262 = weight(_text_:joint in 5576) [ClassicSimilarity], result of:
          0.3011262 = score(doc=5576,freq=2.0), product of:
            0.43664446 = queryWeight, product of:
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.06995397 = queryNorm
            0.68963706 = fieldWeight in 5576, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.078125 = fieldNorm(doc=5576)
      0.25 = coord(1/4)
    
    Abstract
    A conference organized by UK Office of Library and Information Networking (UKOLN) in association with the British Library, the Coalition for Networked Information (CNI), and the Joint Information Systems Committee of the Higher Education Funding Councils of England, Scotland, Wales and Northern Ireland (JISC) held 9-10 Feb 96, Heathrow, UK
  7. Gray, J.: Accessing electronic resources via the library catalogue at Monash University Library (1998) 0.06
    0.06022524 = product of:
      0.24090096 = sum of:
        0.24090096 = weight(_text_:joint in 4719) [ClassicSimilarity], result of:
          0.24090096 = score(doc=4719,freq=2.0), product of:
            0.43664446 = queryWeight, product of:
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.06995397 = queryNorm
            0.55170965 = fieldWeight in 4719, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.0625 = fieldNorm(doc=4719)
      0.25 = coord(1/4)
    
    Footnote
    Adapted version of a presentation to the Joint Australian Library and Information Catalogues and UCRLS Meeting, Melbourne, Victoria, Australia, 16 Jul 1998
  8. Koch, T.: Experiments with automatic classification of WAIS databases and indexing of WWW : some results from the Nordic WAIS/WWW project (1994) 0.05
    0.052697085 = product of:
      0.21078834 = sum of:
        0.21078834 = weight(_text_:joint in 7208) [ClassicSimilarity], result of:
          0.21078834 = score(doc=7208,freq=2.0), product of:
            0.43664446 = queryWeight, product of:
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.06995397 = queryNorm
            0.48274595 = fieldWeight in 7208, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.0546875 = fieldNorm(doc=7208)
      0.25 = coord(1/4)
    
    Abstract
    The Nordic WAIS/WWW project sponsored by NORDINFO is a joint project between Lund University Library and the National Technological Library of Denmark. It aims to improve the existing networked information discovery and retrieval tools Wide Area Information System (WAIS) and World Wide Web (WWW), and to move towards unifying WWW and WAIS. Details current results focusing on the WAIS side of the project. Describes research into automatic indexing and classification of WAIS sources, development of an orientation tool for WAIS, and development of a WAIS index of WWW resources
  9. Bishop, A.P.: ¬A pilot study of the Blacksburg Electronic village (1994) 0.05
    0.052697085 = product of:
      0.21078834 = sum of:
        0.21078834 = weight(_text_:joint in 3095) [ClassicSimilarity], result of:
          0.21078834 = score(doc=3095,freq=2.0), product of:
            0.43664446 = queryWeight, product of:
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.06995397 = queryNorm
            0.48274595 = fieldWeight in 3095, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3095)
      0.25 = coord(1/4)
    
    Abstract
    Describes a pilot study performed in the summer of 1993 to help develop instruments appropriate for a full-scale assessment of the Blacksburg Electronic Village. The Blacksburg Electronic Village is a joint effort of Virginia Polytechnic Institute and State University, C&P Bell, and the town of Blacksburg, VA. It represents an attempt to 'wire the community' with high-speed network connections in order to attract and provide new kinds of electronic information and communication services to town residents
  10. Mowat, I.R.M.: ¬A national union catalogue : the ? edition (1996) 0.05
    0.052697085 = product of:
      0.21078834 = sum of:
        0.21078834 = weight(_text_:joint in 296) [ClassicSimilarity], result of:
          0.21078834 = score(doc=296,freq=2.0), product of:
            0.43664446 = queryWeight, product of:
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.06995397 = queryNorm
            0.48274595 = fieldWeight in 296, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.0546875 = fieldNorm(doc=296)
      0.25 = coord(1/4)
    
    Abstract
    Reports briefly on the development, by the Consortium of University Research Libraries (CURL), of the CURL OPAC, or COPAC. COPAC is seen as the partial realization of the aims of earlier projects, such as the UK Libraries Database System (UKLDS). Although COPAC was not designed as a union catalogue, it was a natural next step for CURL to obtain funding from the Joint Information Systems Committee (JISC), following the Follett Report, to use the database to create a union catalogue. The work is being undertaken at Manchester University, which has held the CURL database since its creation, and the first version was launched on 30 Apr 96
  11. Lahary, D.: ¬Le jeu de puzzle de l'acces aux catalogues : World Wide Web et/ou Z39.50 (1997) 0.05
    0.052697085 = product of:
      0.21078834 = sum of:
        0.21078834 = weight(_text_:joint in 1925) [ClassicSimilarity], result of:
          0.21078834 = score(doc=1925,freq=2.0), product of:
            0.43664446 = queryWeight, product of:
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.06995397 = queryNorm
            0.48274595 = fieldWeight in 1925, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1925)
      0.25 = coord(1/4)
    
    Abstract
    To access a remote catalogue, the searcher can use the Z39.50 standard as the interface, which requires appropriate software, or, for databases accessible on the WWW, a common gateway interface. The multibase-access advantage of Z39.50 can also be obtained through a Web navigator by inserting a Web server / Z39.50 client software connector: this can be located on the search site, in an intermediary position, or on the database site, which determines the range of databases that can be searched. Z39.50 also offers interesting possibilities for joint and local cataloguing: multibase searching can equally be realised on intranets
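The connector arrangement described in this abstract can be sketched roughly as follows; the function and target names are invented for illustration, and a stub stands in for real Z39.50 client software:

```python
def z3950_search(target, query):
    # Stub standing in for a real Z39.50 client: a production connector
    # would open an association to `target`, send a searchRequest, and
    # return the matching records.
    canned = {("libcat", "union catalogue"): ["Record 1", "Record 2"]}
    return canned.get((target, query), [])

def web_gateway(target, query):
    # The Web server / Z39.50 client connector: one HTTP form query in,
    # one Z39.50 search out, results rendered as an HTML list for the
    # Web navigator.
    hits = z3950_search(target, query)
    items = "".join(f"<li>{hit}</li>" for hit in hits)
    return f"<ul>{items}</ul>"
```

Where the connector sits (search site, intermediary, or database site) determines how many targets one gateway can reach, exactly as the abstract notes.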
  12. Jünger, G.: ¬Ein neues Universum (2003) 0.05
    0.0502304 = product of:
      0.1004608 = sum of:
        0.06022524 = weight(_text_:joint in 2553) [ClassicSimilarity], result of:
          0.06022524 = score(doc=2553,freq=2.0), product of:
            0.43664446 = queryWeight, product of:
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.06995397 = queryNorm
            0.13792741 = fieldWeight in 2553, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.015625 = fieldNorm(doc=2553)
        0.040235553 = weight(_text_:und in 2553) [ClassicSimilarity], result of:
          0.040235553 = score(doc=2553,freq=56.0), product of:
            0.15515085 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.06995397 = queryNorm
            0.25933182 = fieldWeight in 2553, product of:
              7.483315 = tf(freq=56.0), with freq of:
                56.0 = termFreq=56.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.015625 = fieldNorm(doc=2553)
      0.5 = coord(2/4)
    
    Content
    A constant lesson of the sociology and history of technology is that truly new concepts ahead of their time fail to prevail in the end. Success goes instead to mediocre imitations of the original idea which, extended with peripheral functions and decoration, then present themselves as great innovations. Computer science in particular supplies examples in abundance of second-best solutions that everyone knows to be mere crutches. The pairing of the programming languages Smalltalk and C++ belongs here, but so does the World Wide Web as we know it today, which falls far short of concepts for a universal, global information system that were developed long before Tim Berners-Lee defined the hypertext protocol. The question of this technical prehistory and its missed opportunities is by no means of purely academic interest. The system called "Xanadu", which for the first time sought to radically democratize the world's knowledge in digital form, can serve very well as a foil for discussing the future development of the WWW. The desire to amass as much knowledge as possible is undoubtedly ancient. It drove the builders of the Library of Alexandria, the copying and commenting monks of the Middle Ages, and the encyclopedists of eighteenth-century France. By the twentieth century at the latest, the sheer quantity of the knowable could no longer be mastered in this way. Beyond the physical filing of documents, new organizing principles had to be found to open up the mountain of material and connect its parts with one another in a usable way. Only then could a scientist still catch up, in a reasonable time, with the current state of knowledge in a field.
In the watershed year 1945, Vannevar Bush, a scientific adviser to Roosevelt during the Second World War, drafted a first answer to the question of such an organizing principle. He called his system "Memex" (Memory Extender). Knowledge was to be archived in the form of microfilms, and the individual components so produced were to be linked with one another in such a way that references could be looked up immediately. Technically the system failed; it could hardly be realized with microfilm. But the thought had been formulated that large bodies of knowledge need not be arranged in separate documents and predominantly linearly (page 2 follows page 1). They can be joined into something new through internal links between individual pages. The aircraft engineer Douglas Engelbart read of Bush's idea as early as the 1940s. To him belongs the credit of having transferred it to the new technology of digital computers. A session of the Fall Joint Computer Conference in 1968 demonstrated in practice his realization of the Memex concept, called "NLS" (oN-Line System), and for many participants it was the spark for experiments of their own in this field. NLS was a huge journal of individual memos and reports from a predecessor project, which allowed the participating scientists to jump via addressed references directly to a neighbouring document - a network of nodes and edges that lacked only a suitable name for its new property:
    - Hypertext - Not only the name "hypertext" for such a network but also decisive impulses toward the concrete shape of a network connected by links came, from 1965 on, from Ted Nelson. His knowledge network, bound up with the name "Xanadu", still provides the yardstick against which the WWW must prove itself. Nelson also tried to make his concept a commercial success. For a time he could count on strong financial engagement from the CAD company Autodesk, which however withdrew after success failed to materialize. Today the software's source code is freely available, and the website xanadu.net reports on the activities of today's small Xanadu community. Nelson himself presents his project as a closed system of documents, access to which is acquired much as access to a provider or to pay TV. In this system of networked computers, documents are stored in binary form, regardless of whether a given document contains images, music, text or anything else. They decompose into tiny but identifiable components, so that each part of a document carries a unique ID and can be attributed to a particular author. When a reader reads a part of a document in Xanadu, a credit is automatically generated for the account of the document's originator. As in the existing Web, individual words, images or other media contents are anchors for links to other document components, which can be called up by mouse click. Unlike the Web, however, the path does not lead in one direction only. Keyword A does not merely point to X; X also identifies all the documents from which X is pointed to. It is thus always possible to trace everywhere a document is being used, and so to check whether a link offered as evidence is cited rightly or wrongly.
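The two-way linking described here is Xanadu's key structural difference from the one-way Web; a minimal sketch of such a link registry (the class and document names are illustrative, not Nelson's design):

```python
class LinkRegistry:
    """Two-way links: registering src -> dst also records the
    back-reference, so any document can list every document that
    points at it."""

    def __init__(self):
        self.outgoing = {}   # doc -> set of link targets
        self.incoming = {}   # doc -> set of documents linking here

    def link(self, src, dst):
        self.outgoing.setdefault(src, set()).add(dst)
        self.incoming.setdefault(dst, set()).add(src)

    def cited_by(self, doc):
        # Unlike the one-way Web, the system can always answer
        # "who points at this document?"
        return self.incoming.get(doc, set())

registry = LinkRegistry()
registry.link("A", "X")
registry.link("B", "X")
```

With `cited_by("X")` returning both referrers, checking whether a citation is used rightly or wrongly becomes a lookup rather than a crawl.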
    - Credits for authors - And it goes further: in principle all Xanadu participants are guaranteed the ability to quote existing document components through so-called transclusions. A rights-management scheme for read access is, after all, already built in. It is thus possible at any time for someone to compile a new anthology on a topic that seems interesting, without permissions having to be obtained first. And unlike the WWW, Xanadu is an instrument for authors: comments can be attached to any existing page. To keep the overview, these are displayed differently from a source citation ("typed links"). Changes made to a node, say when a comment is added, can be shown to other readers by the system, so that lively debates and permanent online conferences can be conducted in this way. Without administrators having to step in to regulate, the Xanadu network thus mirrors the interdependence of the real world. In the age of narrow specialists, knowledge is represented in a form that emphasizes how things are interwoven and makes controversies visible. The system writes its own history, since version control, combined with direct document comparison, makes the different editorial stages of a document node traceable.
    - Research debates - The advantages of this system over the Web are obvious: all author's rights are settled in principle and in favour of the actual authors, even in the case of fiction, which as a rule is only consumed. Beyond that, scholarly work would profit from the possibility of commenting on texts or of incorporating existing arguments into one's own account by quotation. Research debates are possible at any time and at any point - and they become accessible through the ability to subscribe to change notifications for particular nodes. This makes it possible for an author to answer a rejoinder promptly. Not only single nodes but whole networks of nodes can be worked on in this way. One can adopt an existing account approvingly yet add the two or three extra points on which one's own opinion departs from the node standard. Finally, a system like Xanadu, with version control and guaranteed storage, would rid the world of a permanent problem of the existing Web, its dead links, and with its built-in document comparison would create a maximum of clarity.
    - Technical hurdles - There remains the question of why Xanadu, with its vision of making the world's knowledge reachable one mouse-click away - Nelson even speaks of a docuverse - has so far been so unsuccessful. To be taken seriously first are the technical demands Xanadu makes. They begin with editing software that preserves the author identifiers of existing and quoted document nodes. That this software would in the end be less technology-heavy than today's HTML editors, as Nelson assumes, may be doubted. Added to this are demands on computer systems and administration: since Xanadu must guarantee consistent document management and, for emergencies, hold documents redundantly on several machines at once, the technical and management challenges for a system of tens of billions of documents would be considerable. Other reasons are of a more fundamental kind: the readiness to pay for content, even in small amounts, must be rated rather low among most Internet users. The failure of many providers of paid content on the existing Web can hardly be interpreted otherwise. Possibly there is also a latent reluctance to entrust a central, globally operating organization - however honourable its aims - with the task of storing the world's knowledge. Here, in an irony of history, the military's computer network apparently has the edge: for the sake of a system that functions even in a catastrophe, it relies on machines that are always replaceable and has taken chaos into its calculations. It is therefore foreseeable that Xanadu in the form sketched here will no longer prevail, and with that a chance for a new Internet architecture has probably been squandered as well.
What cannot be urged loudly enough as a wish for the further development of the existing Web, however, is that the visions and concrete possibilities of Xanadu be given greater weight. Perhaps the aversions to centrally controlled systems will at least be good for this: that ".Net", the software giant Microsoft's favourite project, is turned into an open system.
  13. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing application for organizing and accessing internet resources (2003) 0.05
    0.048148174 = product of:
      0.1925927 = sum of:
        0.1925927 = weight(_text_:headings in 4966) [ClassicSimilarity], result of:
          0.1925927 = score(doc=4966,freq=14.0), product of:
            0.33944473 = queryWeight, product of:
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.06995397 = queryNorm
            0.5673757 = fieldWeight in 4966, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.03125 = fieldNorm(doc=4966)
      0.25 = coord(1/4)
    
    Abstract
    Organizing and providing access to the resources on the Internet has been a problem area in spite of the availability of sophisticated search engines and other software tools. There have been several attempts to organize the resources on the WWW. Some of them have tried to use traditional library classification schemes such as the Library of Congress Classification, the Dewey Decimal Classification and others. However there is a need to assign proper subject headings to them and present them in a logical or hierarchical sequence to cater to the need for browsing. This paper attempts to describe an experimental system designed to organize and provide access to web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. A prototype software system has been designed to create a database of records specifying web documents according to the Dublin Core and input a faceted subject heading according to DSIS. Synonymous terms are added to the standard terms in the heading using appropriate symbols. Once the data are entered along with a description and URL of the web document, the record is stored in the system. More than one faceted subject heading can be assigned to a record depending on the content of the original document. The system stores the surrogates and keeps the faceted subject headings separately after establishing a link. Search is carried out on index entries derived from the faceted subject heading using the chain indexing technique. If a single term is input, the system searches for its presence in the faceted subject headings and displays the subject headings in a sorted sequence reflecting an organizing sequence.
If the number of retrieved headings is too large (running to more than a page), the user has the option of entering another search term to be searched in combination. The system searches the subject headings already retrieved, looking for those containing the second term. The retrieved faceted subject headings can be displayed and browsed. When the relevant subject heading is selected, the system displays the records with their URLs. Using the URL, the original document on the web can be accessed. The prototype system, developed under the Windows NT environment using ASP and a web server, is under rigorous testing. The database and index management routines need further development.
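The retrieval steps described in this abstract - chain-procedure index entries plus two-stage narrowing - can be sketched as follows (the headings and helper names are illustrative, not the prototype's actual data):

```python
def chain_entries(chain):
    # Ranganathan's chain procedure: each link yields one index entry,
    # qualified by all broader links above it, narrowest entry first.
    return [". ".join(reversed(chain[:i + 1]))
            for i in range(len(chain) - 1, -1, -1)]

def search(headings, term, refine=None):
    # First term: find faceted headings containing it, sorted for
    # display; an optional second term narrows the retrieved set,
    # as in the prototype.
    hits = sorted(h for h in headings if term in h)
    if refine is not None:
        hits = [h for h in hits if refine in h]
    return hits

# Illustrative faceted headings (broadest facet first).
headings = [
    "Internet. Resources. Organizing",
    "Internet. Resources. Indexing",
    "Libraries. Catalogues. Networking",
]
```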
  14. Quick queries (1996) 0.05
    0.045495745 = product of:
      0.18198298 = sum of:
        0.18198298 = weight(_text_:headings in 4735) [ClassicSimilarity], result of:
          0.18198298 = score(doc=4735,freq=2.0), product of:
            0.33944473 = queryWeight, product of:
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.06995397 = queryNorm
            0.53611964 = fieldWeight in 4735, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.078125 = fieldNorm(doc=4735)
      0.25 = coord(1/4)
    
    Abstract
    Provides a list of 19 WWW and gopher sites from which answers to ready reference queries may be obtained. These are arranged under the following headings: ready made collections; date and time; weights and measures; flag wavers; foreign currency; state by state; the elements; and case and tense
  15. Auer, N.J.: Bibliography on evaluating Internet resources (1998) 0.05
    0.045495745 = product of:
      0.18198298 = sum of:
        0.18198298 = weight(_text_:headings in 4528) [ClassicSimilarity], result of:
          0.18198298 = score(doc=4528,freq=2.0), product of:
            0.33944473 = queryWeight, product of:
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.06995397 = queryNorm
            0.53611964 = fieldWeight in 4528, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.078125 = fieldNorm(doc=4528)
      0.25 = coord(1/4)
    
    Abstract
    Presents a bibliography on evaluating Internet resources in which titles are arranged under the following headings: Internet resources, print resources, and useful listservs
  16. Polat, H.; Du, W.: Privacy-preserving top-N recommendation on distributed data (2008) 0.05
    0.045168933 = product of:
      0.18067573 = sum of:
        0.18067573 = weight(_text_:joint in 2864) [ClassicSimilarity], result of:
          0.18067573 = score(doc=2864,freq=2.0), product of:
            0.43664446 = queryWeight, product of:
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.06995397 = queryNorm
            0.41378224 = fieldWeight in 2864, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.046875 = fieldNorm(doc=2864)
      0.25 = coord(1/4)
    
    Abstract
    Traditional collaborative filtering (CF) systems perform filtering tasks on existing databases; however, data collected for recommendation purposes may be split between different online vendors. To generate better predictions, offer richer recommendation services, enhance mutual advantages, and overcome problems caused by inadequate data and/or sparseness, e-companies want to integrate their data. Due to privacy, legal, and financial reasons, however, they do not want to disclose their data to each other. Providing privacy measures is vital to accomplishing distributed data-based top-N recommendation (TN) while preserving data holders' privacy. In this article, the authors present schemes for binary ratings-based TN on distributed data (horizontally or vertically partitioned), and provide accurate referrals without greatly exposing data owners' privacy. Our schemes make it possible for online vendors, even competing companies, to collaborate and conduct TN with privacy, using the joint data while introducing reasonable overhead costs.
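The article's actual schemes are not reproduced here; as a hedged illustration of the general approach - each vendor randomly perturbs its binary ratings before pooling, and the aggregate is de-biased afterwards - under invented names and a toy vertical partition:

```python
import random

def perturb(ratings, flip_prob, rng):
    # Each vendor flips every binary rating with probability flip_prob
    # before sharing, so no individual true rating is disclosed with
    # certainty (randomized response).
    return [[r ^ (rng.random() < flip_prob) for r in row] for row in ratings]

def debias_item_counts(rows, flip_prob):
    # Observed count c relates to the true count t per item column by
    # E[c] = (1 - p) * t + p * (n - t); invert to estimate t.
    n = len(rows)
    return [(sum(col) - flip_prob * n) / (1 - 2 * flip_prob)
            for col in zip(*rows)]

def top_n(vendor_a, vendor_b, flip_prob, n, rng):
    # Vertical partition: same users, disjoint item columns. Perturbed
    # rows are concatenated and items ranked by de-biased popularity.
    pooled = [ra + rb for ra, rb in zip(perturb(vendor_a, flip_prob, rng),
                                        perturb(vendor_b, flip_prob, rng))]
    est = debias_item_counts(pooled, flip_prob)
    return sorted(range(len(est)), key=lambda i: -est[i])[:n]
```

Larger flip probabilities give stronger privacy but noisier estimates; the trade-off between privacy and referral accuracy is exactly what such schemes have to balance.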
  17. Liu, Y.; Du, F.; Sun, J.; Silva, T.; Jiang, Y.; Zhu, T.: Identifying social roles using heterogeneous features in online social networks (2019) 0.05
    0.045168933 = product of:
      0.18067573 = sum of:
        0.18067573 = weight(_text_:joint in 293) [ClassicSimilarity], result of:
          0.18067573 = score(doc=293,freq=2.0), product of:
            0.43664446 = queryWeight, product of:
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.06995397 = queryNorm
            0.41378224 = fieldWeight in 293, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2418823 = idf(docFreq=234, maxDocs=44421)
              0.046875 = fieldNorm(doc=293)
      0.25 = coord(1/4)
    
    Abstract
    Role analysis is important when exploring social media and knowledge-sharing platforms to design marketing strategies. However, current methods in role analysis have overlooked content generated by users (e.g., posts) in social media and hence focus more on user behavior analysis. The user-generated content is very important for characterizing users. In this paper, we propose a novel method that integrates both user behavior and the content posted by users to identify roles in online social networks. The proposed method models a role as a joint distribution of a Gaussian distribution and a multinomial distribution, which represent user behavioral features and content features respectively. The method can also determine the number of roles automatically. The experimental results show that the proposed method identifies various roles more effectively and yields more insight into their characteristics.
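A sketch of the modelling idea only (the paper's estimation procedure is not reproduced, and the roles and parameters below are invented): each role pairs independent Gaussians over behavioural features with a multinomial over content words, and a user is assigned the role with the highest joint log-likelihood.

```python
import math

def role_loglik(behavior, word_counts, role):
    # Joint model: behavioural features ~ independent Gaussians,
    # content word counts ~ multinomial over the vocabulary.
    ll = 0.0
    for x, (mu, sigma) in zip(behavior, role["gauss"]):
        ll += (-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2))
    for c, theta in zip(word_counts, role["theta"]):
        ll += c * math.log(theta)
    return ll

def assign_role(behavior, word_counts, roles):
    # Pick the role under which the user's features are most likely.
    return max(roles, key=lambda r: role_loglik(behavior, word_counts, roles[r]))

# Invented example roles: posts/day (Gaussian) and a 2-word vocabulary.
roles = {
    "lurker": {"gauss": [(0.5, 0.5)], "theta": [0.5, 0.5]},
    "expert": {"gauss": [(8.0, 2.0)], "theta": [0.9, 0.1]},
}
```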
  18. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing based system for organizing and accessing Internet resources (2002) 0.04
    
    Abstract
    Organizing and providing access to the resources on the Internet has been a problem area in spite of the availability of sophisticated search engines and other software tools. There have been several attempts to organize the resources on the World Wide Web. Some of them have tried to use traditional library classification schemes such as the Library of Congress Classification, the Dewey Decimal Classification and others. However, there is a need to assign proper subject headings to them and present them in a logical or hierarchical sequence to cater to the need for browsing. This paper describes an experimental system designed to organize and provide access to web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from Bhattacharyya's POPSI (Postulate based Permuted Subject Indexing), and the facet analysis and chain indexing system of Ranganathan. A prototype software system has been designed to create a database of records specifying web documents according to the Dublin Core and to input a faceted subject heading according to DSIS. Synonymous terms are added to the standard terms in the heading using appropriate symbols. Once the data are entered along with a description and the URL of the web document, the record is stored in the system. More than one faceted subject heading can be assigned to a record depending on the content of the original document. The system stores the surrogates and keeps the faceted subject headings separately after establishing a link. The search is carried out on index entries derived from the faceted subject headings using the chain indexing technique. If a single term is input, the system searches for its presence in the faceted subject headings and displays the matching headings in a sorted sequence reflecting an organizing order.
If the number of retrieved headings is too large (running into more than a page), the user has the option of entering another search term to be searched in combination. The system then looks for headings already retrieved that contain the second term. The retrieved faceted subject headings can be displayed and browsed. When the relevant subject heading is selected, the system displays the records with their URLs, through which the original document on the web can be accessed. The prototype system, developed in a Windows NT environment using ASP and a web server, is under rigorous testing. The database and index management routines need further development.
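The chain indexing technique used for deriving index entries can be illustrated with a minimal sketch: a faceted heading, read as a chain of terms from general to specific, yields one index entry per link, with each lead term qualified by its broader context in reverse. The sample chain is invented for illustration and is not taken from the system's actual data.

```python
def chain_index_entries(chain):
    # Ranganathan-style chain procedure: for each link of the chain
    # (general -> specific), emit an entry led by that term and qualified
    # by its broader context, most specific entry first.
    entries = []
    for i in range(len(chain) - 1, -1, -1):
        lead, context = chain[i], chain[:i]
        entries.append(", ".join([lead] + list(reversed(context))))
    return entries
```

A single-term search then only has to match the lead position of these derived entries, which is what produces the sorted, browsable sequence of headings the abstract describes.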
  19. Lee, M.C.; Fung, C.-K.: ¬A public-key based authentication and key establishment protocol coupled with a client puzzle (2003) 0.04
    
    Abstract
    Network Denial-of-Service (DoS) attacks, which exhaust server resources and network bandwidth, can cause the target servers to be unable to provide proper services to legitimate users and, in some cases, render the target systems inoperable and/or the target networks inaccessible. DoS attacks have now become a serious and common security threat to the Internet community. Public Key Infrastructure (PKI) has long been incorporated in various authentication protocols to facilitate verifying the identities of the communicating parties. The use of PKI has, however, an inherent problem, as it involves expensive computational operations such as modular exponentiation. An improper deployment of the public-key operations in a protocol could create an opportunity for DoS attackers to exhaust the server's resources. This paper presents a public-key based authentication and key establishment protocol coupled with a sophisticated client puzzle, which together provide a versatile solution to possible DoS attacks and various other common attacks during an authentication process. Besides authentication, the protocol also supports the joint establishment of a session key by both the client and the server, which protects the session communications after the mutual authentication. The proposed protocol has been validated using a formal logic theory and has been shown, through security analysis, to resist not only DoS attacks but also various other common attacks.
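A common form of client puzzle is a hash pre-image puzzle of tunable difficulty: the client must burn CPU before the server performs any expensive public-key operation, while verification costs the server a single hash. This generic construction is a sketch of the idea, not necessarily the specific puzzle used in the paper's protocol.

```python
import hashlib
import itertools
import os

def make_puzzle(difficulty_bits):
    # Server issues a fresh nonce; the client must find x such that
    # SHA-256(nonce || x) begins with `difficulty_bits` zero bits.
    return os.urandom(16), difficulty_bits

def leading_zero_bits(digest):
    # Count leading zero bits of a byte string.
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(nonce, difficulty_bits):
    # Client-side brute force: expected cost ~2^difficulty_bits hashes,
    # throttling attackers before the server commits real resources.
    for x in itertools.count():
        candidate = x.to_bytes(8, "big")
        if leading_zero_bits(hashlib.sha256(nonce + candidate).digest()) >= difficulty_bits:
            return candidate

def verify(nonce, solution, difficulty_bits):
    # Server-side check is a single cheap hash.
    return leading_zero_bits(hashlib.sha256(nonce + solution).digest()) >= difficulty_bits
```

The server can raise `difficulty_bits` under load, making the client's work harder while its own verification cost stays constant.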
  20. Wood, D.J.: Peer review and the Web : the implications of electronic peer review for biomedical authors, referees and learned society publishers (1998) 0.04
    
    Abstract
    The Internet provides researchers with exciting new opportunities for finding information and communicating with each other. However, the process of peer review is something of a Cinderella in all this. Peer review in biomedical disciplines is still largely carried out using hard copy and the postal system, even if the authors' text files are used for the production of the paper or electronic journal. This article introduces one of the Electronic Libraries (eLib) projects, funded by the Joint Information Systems Committee (JISC). The project, Electronic Submission and Peer Review (ESPERE), is examining the cultural and technical problems of implementing an electronic peer review process for biomedical academics and learned society publishers. The paper describes preliminary work in discovering the issues involved: interviews with 7 learned society publishers, analysis of a questionnaire sent to 200 editorial board members, and a focus group of 5 biomedical academics. Academics and learned society publishers were enthusiastic about electronic peer review and the possibilities it offers for a less costly, more streamlined and more effective process. Use of the Internet makes collaborative and interactive refereeing a practical option and allows academics from countries all over the world to take part.

Languages

  • d 1329
  • e 107
  • m 14
  • f 1

Types

  • a 1144
  • m 206
  • s 62
  • el 55
  • x 32
  • r 6
  • i 4
  • b 3
  • h 2
  • ? 1
  • l 1
