Search (1441 results, page 1 of 73)

  • theme_ss:"Internet"
  1. Kaiser, R.: Literarische Spaziergänge im Internet : Bücher und Bibliotheken online (1996) 0.20
    0.19796073 = product of:
      0.39592147 = sum of:
        0.31811783 = weight(_text_:james in 6617) [ClassicSimilarity], result of:
          0.31811783 = score(doc=6617,freq=2.0), product of:
            0.4933813 = queryWeight, product of:
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.067635134 = queryNorm
            0.64477074 = fieldWeight in 6617, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.0625 = fieldNorm(doc=6617)
        0.07780365 = weight(_text_:und in 6617) [ClassicSimilarity], result of:
          0.07780365 = score(doc=6617,freq=14.0), product of:
            0.1500079 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.067635134 = queryNorm
            0.51866364 = fieldWeight in 6617, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0625 = fieldNorm(doc=6617)
      0.5 = coord(2/4)
    
    Abstract
    After a light-hearted introduction to the Internet, the book makes 15 suggestions for making headway on the Net and points out 'paths, places and clearings' that can provide information about libraries and literature. There are pointers and addresses on the lives and works of authors, including individual authors such as William Faulkner, James Joyce and Stephen King, as well as on literary content, poetry, literary history, literary journals, reference works, library catalogues, online text archives, prize winners and other lists. An index of persons and subjects completes this enjoyably written volume
  2. James-Catalano, C.: Cyberlibrarian (1995) 0.10
    0.099411815 = product of:
      0.39764726 = sum of:
        0.39764726 = weight(_text_:james in 1911) [ClassicSimilarity], result of:
          0.39764726 = score(doc=1911,freq=2.0), product of:
            0.4933813 = queryWeight, product of:
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.067635134 = queryNorm
            0.8059634 = fieldWeight in 1911, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.078125 = fieldNorm(doc=1911)
      0.25 = coord(1/4)
    
  3. James, J.W.; Rosenfeld, L.B.: Networked information retrieval and organization : issues and questions (1996) 0.10
    0.099411815 = product of:
      0.39764726 = sum of:
        0.39764726 = weight(_text_:james in 6661) [ClassicSimilarity], result of:
          0.39764726 = score(doc=6661,freq=2.0), product of:
            0.4933813 = queryWeight, product of:
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.067635134 = queryNorm
            0.8059634 = fieldWeight in 6661, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.078125 = fieldNorm(doc=6661)
      0.25 = coord(1/4)
    
  4. ¬The Internet searcher's handbook : locating information, people and software (1996) 0.08
    0.07952946 = product of:
      0.31811783 = sum of:
        0.31811783 = weight(_text_:james in 3935) [ClassicSimilarity], result of:
          0.31811783 = score(doc=3935,freq=2.0), product of:
            0.4933813 = queryWeight, product of:
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.067635134 = queryNorm
            0.64477074 = fieldWeight in 3935, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.0625 = fieldNorm(doc=3935)
      0.25 = coord(1/4)
    
    Editor
    Morville, P., L. Rosenfeld u. J. James
  5. Lischka, K.: Archiv statt Deponie : Die US-Congressbibliothek soll das digitale Kulturerbe sichern - das dürfte teuer und schwierig werden (2003) 0.07
    0.06973438 = product of:
      0.13946876 = sum of:
        0.099411815 = weight(_text_:james in 2418) [ClassicSimilarity], result of:
          0.099411815 = score(doc=2418,freq=2.0), product of:
            0.4933813 = queryWeight, product of:
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.067635134 = queryNorm
            0.20149085 = fieldWeight in 2418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.01953125 = fieldNorm(doc=2418)
        0.040056936 = weight(_text_:und in 2418) [ClassicSimilarity], result of:
          0.040056936 = score(doc=2418,freq=38.0), product of:
            0.1500079 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.067635134 = queryNorm
            0.26703218 = fieldWeight in 2418, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.01953125 = fieldNorm(doc=2418)
      0.5 = coord(2/4)
    
    Abstract
    Even if, judged by today's standards, it were not valuable, online material would still have to be archived at least in part, because the significance of sources changes over time. Today's generations of researchers would give a great deal for a look at those short films that, at the beginning of the last century, were carelessly discarded as entertainment material after a few screenings. It is already foreseeable that web pages from 1998 will be able to tell communication researchers a great deal about the acceleration and overheating of the media business. How quickly did commercial online media follow up on Matt Drudge's scandal reporting about Bill Clinton's affair compared with the printed press? What role did public news forums play in this? Historians may one day find far less of the early e-mail traffic of governments and large companies than of the dispatches that once were kept so secret.
    Content
    "In 1986, almost a million British schoolchildren, civil servants and journalists collected information about their country. They compiled 250,000 descriptions of places, 50,000 photographs, 25,000 maps and a never-quantified amount of text. The broadcaster BBC wanted to create a document of everyday British life of that period for posterity. The mountain of data, stored on a videodisc, cost about 2.5 million pounds. The disc was considered indestructible, and so it has remained to this day. Except that 16 years later, in 2002, there was no longer any computer that could read the material, because the corresponding BBC Micro had quickly disappeared as an overpriced flop. Humanity's digitally held cultural heritage could suffer a similar fate; at least that is what the majority of US members of Congress believe. They have granted the Library of Congress 20 million dollars to develop a system for collecting and cataloguing digital information. A further 75 million could come from the state in the next few years, provided sponsors contribute as much money. If the sponsors hold back, the state will be just as tight-fisted with the "National Digital Information Infrastructure and Preservation Program" (NDIIPP). German librarians view the project with mixed feelings. "We look enviously at what is possible in the USA with sponsors. On the other hand, one cannot base the preservation of the national cultural heritage on the assumption that sponsors will still be found for it in 50 years' time," explains Katrin Ansorge, spokeswoman of Die Deutsche Bibliothek (DDB). The DDB does not yet have a legal mandate to collect digital documents that do not exist on physical media such as CD-ROMs, but Ansorge is confident "that the federal government will amend the law within this legislative period". Until then, more material is likely to be lost than in the fire of the Library of Alexandria. According to a study by the Library of Congress, half of the material available on the Internet in 1998 had disappeared again by 1999. "Much of it was important, unique material that cannot be brought back but will one day be urgently sought," says the head of the Library of Congress, James H. Billington. Franziska Nori, scientific head of the Digitalcraft department at the Frankfurt Museum für Angewandte Kunst (MAK), sums up the contradiction at work in the Internet as a medium: "It is short-lived, but it has profoundly changed every area of our society." The MAK makes a small contribution to archiving digital craftsmanship with its web design collection: selected sites by designers, agencies, online magazines and offerings for children are archived on Digitalcraft's servers. The DDB also collects certain documents such as online dissertations, habilitation theses and scholarly journals that exist only online. Above all, these comparatively small projects show one thing: archiving digital documents is expensive, because the problems are more complex and, above all, newer than the acid decay gnawing at paper and the systematic cataloguing that printed works require. The problems begin with collecting. The US initiative "Internet Archive", supported by private foundations, stores, for example, 400 gigabytes of new material every day; printed out, that would be more than 20 shelf-kilometres of books. To buy additional storage space, the "Internet Archive" spends about 40,000 euros every month.
    Maintaining and, above all, cataloguing the existing holdings is far more expensive, and yet the "Internet Archive" captures a large cross-section of the web in full only every two months. Password-protected pages are left out, as are communication in chat rooms and e-mails.
    Given these volumes of data, it seems that libraries will have to select far more rigorously when collecting digital documents, much more drastically than today, where the basic idea still prevails that later generations should be allowed to judge the value of the sources for themselves. According to Katrin Ansorge, the DDB is considering separate collection procedures: "on the one hand for documents that have gone through an established publication process, for instance at a publisher, and on the other for the large remainder that could be harvested with search robots". In collecting, libraries will face the same difficulties that the "Internet Archive" is already struggling with: rights holders protect their material, and passwords are the lesser problem. A legal deposit requirement, as exists for printed material, could help there. More difficult are file formats that already prevent documents from being read out or transferred too often; some publishers even set an expiry date. Such protection is hard to crack, and a revised copyright law could even make the attempt a punishable offence. But file formats without such protection mechanisms also become a problem, because documents are not merely to be collected on dumps but above all to be accessible in archives. The looming danger: the software and hardware for reading certain formats will have disappeared within a few years, and the documents will then be as valuable as text written in invisible ink without the knowledge of how to make it visible. Here digital archives have three options. The first is migration: old software is reprogrammed for every new generation of computers. That is laborious, and above all information is lost while new information is added; it is as if a painting were copied by hand every five years. What would Rembrandt's Night Watch look like today? Another option is emulation, in which special programs imitate old hardware. Software would then not have to be rewritten, because it believes itself to be running in a familiar, because emulated, environment. The drawback: every few years a new emulation is needed so that the old emulators can be used with new hardware. A vicious circle that is convenient in the short term and dangerous in the long term, says David Bearman, president of the Canadian consulting firm "Archives and Museum Informatics": "It gives managers and governments around the world an excuse to put off decisions that have to be taken now." A third option would be to store all files in a second version that can be read on a so-called Universal Virtual Computer. This exists as a description on a few sheets of paper; it is simple and comprises the technical fundamentals of a computer, such as main memory, central processor and the like, which have remained unchanged so far and can certainly be reproduced in the future. The Koninklijke Bibliotheek of the Netherlands is considering this option. It has commissioned IBM to develop a repository system for digital documents; a medium-term programme is already running, and the long-term preservation, resistant to changes in hardware and software, is to build on the UVC concept. A prototype shows that it works in principle: a PDF document was converted into the format for a UVC and read back without loss of information. There is still hope for the digital cultural heritage. Even the material collected by the BBC in 1986 was finally read out by researchers at the end of last year, after more than half a year's work.
    However, they do not yet know how to archive it for eternity, or at least for the next 16 years."
  6. James, J.: Digital preparedness versus the digital divide : a confusion of means and ends (2008) 0.07
    0.069588274 = product of:
      0.2783531 = sum of:
        0.2783531 = weight(_text_:james in 2616) [ClassicSimilarity], result of:
          0.2783531 = score(doc=2616,freq=2.0), product of:
            0.4933813 = queryWeight, product of:
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.067635134 = queryNorm
            0.5641744 = fieldWeight in 2616, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.0546875 = fieldNorm(doc=2616)
      0.25 = coord(1/4)
    
  7. James, J.: Re-estimating the difficulty of closing the digital divide (2008) 0.06
    0.05964709 = product of:
      0.23858836 = sum of:
        0.23858836 = weight(_text_:james in 3379) [ClassicSimilarity], result of:
          0.23858836 = score(doc=3379,freq=2.0), product of:
            0.4933813 = queryWeight, product of:
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.067635134 = queryNorm
            0.48357806 = fieldWeight in 3379, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.046875 = fieldNorm(doc=3379)
      0.25 = coord(1/4)
    
  8. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing application for organizing and accessing internet resources (2003) 0.05
    0.046552155 = product of:
      0.18620862 = sum of:
        0.18620862 = weight(_text_:headings in 4966) [ClassicSimilarity], result of:
          0.18620862 = score(doc=4966,freq=14.0), product of:
            0.3281928 = queryWeight, product of:
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.067635134 = queryNorm
            0.5673757 = fieldWeight in 4966, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.03125 = fieldNorm(doc=4966)
      0.25 = coord(1/4)
    
    Abstract
    Organizing and providing access to the resources on the Internet has been a problem area in spite of the availability of sophisticated search engines and other software tools. There have been several attempts to organize the resources on the WWW. Some of them have tried to use traditional library classification schemes such as the Library of Congress Classification, the Dewey Decimal Classification and others. However, there is a need to assign proper subject headings to them and present them in a logical or hierarchical sequence to cater to the need for browsing. This paper attempts to describe an experimental system designed to organize and provide access to web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. A prototype software system has been designed to create a database of records specifying web documents according to the Dublin Core and to input a faceted subject heading according to DSIS. Synonymous terms are added to the standard terms in the heading using appropriate symbols. Once the data are entered along with a description and the URL of the web document, the record is stored in the system. More than one faceted subject heading can be assigned to a record depending on the content of the original document. The system stores the surrogates and keeps the faceted subject headings separately after establishing a link. Search is carried out on index entries derived from the faceted subject heading using the chain indexing technique. If a single term is input, the system searches for its presence in the faceted subject headings and displays the subject headings in a sorted sequence reflecting an organizing sequence. If the number of retrieved headings is too large (running into more than a page), the user has the option of entering another search term to be searched in combination. The system searches the subject headings already retrieved and looks for those containing the second term. The retrieved faceted subject headings can be displayed and browsed. When the relevant subject heading is selected, the system displays the records with their URLs. Using the URL, the original document on the web can be accessed. The prototype system, developed in a Windows NT environment using ASP and a web server, is under rigorous testing. The database and index management routines need further development. (A small illustrative sketch of this heading lookup appears after the result list below.)
  9. Quick queries (1996) 0.04
    0.04398765 = product of:
      0.1759506 = sum of:
        0.1759506 = weight(_text_:headings in 4735) [ClassicSimilarity], result of:
          0.1759506 = score(doc=4735,freq=2.0), product of:
            0.3281928 = queryWeight, product of:
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.067635134 = queryNorm
            0.53611964 = fieldWeight in 4735, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.078125 = fieldNorm(doc=4735)
      0.25 = coord(1/4)
    
    Abstract
    Provides a list of 19 WWW and gopher sites from which answers to ready reference queries may be obtained. These are arranged under the following headings: ready made collections; date and time; weights and measures; flag wavers; foreign currency; state by state; the elements; and case and tense
  10. Auer, N.J.: Bibliography on evaluating Internet resources (1998) 0.04
    0.04398765 = product of:
      0.1759506 = sum of:
        0.1759506 = weight(_text_:headings in 4528) [ClassicSimilarity], result of:
          0.1759506 = score(doc=4528,freq=2.0), product of:
            0.3281928 = queryWeight, product of:
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.067635134 = queryNorm
            0.53611964 = fieldWeight in 4528, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.078125 = fieldNorm(doc=4528)
      0.25 = coord(1/4)
    
    Abstract
    Presents a bibliography on evaluating Internet resources in which titles are arranged under the following headings: Internet resources, print resources, and useful listservs
  11. Feigenbaum, L.; Herman, I.; Hongsermeier, T.; Neumann, E.; Stephens, S.: ¬The Semantic Web in action (2007) 0.04
    0.03976473 = product of:
      0.15905891 = sum of:
        0.15905891 = weight(_text_:james in 4000) [ClassicSimilarity], result of:
          0.15905891 = score(doc=4000,freq=2.0), product of:
            0.4933813 = queryWeight, product of:
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.067635134 = queryNorm
            0.32238537 = fieldWeight in 4000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.03125 = fieldNorm(doc=4000)
      0.25 = coord(1/4)
    
    Abstract
    Six years ago in this magazine, Tim Berners-Lee, James Hendler and Ora Lassila unveiled a nascent vision of the Semantic Web: a highly interconnected network of data that could be easily accessed and understood by any desktop or handheld machine. They painted a future of intelligent software agents that would head out on the World Wide Web and automatically book flights and hotels for our trips, update our medical records and give us a single, customized answer to a particular question without our having to search for information or pore through results. They also presented the young technologies that would make this vision come true: a common language for representing data that could be understood by all kinds of software agents; ontologies--sets of statements--that translate information from disparate databases into common terms; and rules that allow software agents to reason about the information described in those terms. The data format, ontologies and reasoning software would operate like one big application on the World Wide Web, analyzing all the raw data stored in online databases as well as all the data about the text, images, video and communications the Web contained. Like the Web itself, the Semantic Web would grow in a grassroots fashion, only this time aided by working groups within the World Wide Web Consortium, which helps to advance the global medium. Since then skeptics have said the Semantic Web would be too difficult for people to understand or exploit. Not so. The enabling technologies have come of age. A vibrant community of early adopters has agreed on standards that have steadily made the Semantic Web practical to use. Large companies have major projects under way that will greatly improve the efficiencies of in-house operations and of scientific research. Other firms are using the Semantic Web to enhance business-to-business interactions and to build the hidden data-processing structures, or back ends, behind new consumer services. And like an iceberg, the tip of this large body of work is emerging in direct consumer applications, too.
  12. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing based system for organizing and accessing Internet resources (2002) 0.04
    0.037711553 = product of:
      0.15084621 = sum of:
        0.15084621 = weight(_text_:headings in 1097) [ClassicSimilarity], result of:
          0.15084621 = score(doc=1097,freq=12.0), product of:
            0.3281928 = queryWeight, product of:
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.067635134 = queryNorm
            0.45962682 = fieldWeight in 1097, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.02734375 = fieldNorm(doc=1097)
      0.25 = coord(1/4)
    
    Abstract
    Organizing and providing access to the resources on the Internet has been a problem area in spite of the availability of sophisticated search engines and other software tools. There have been several attempts to organize the resources on the World Wide Web. Some of them have tried to use traditional library classification schemes such as the Library of Congress Classification, the Dewey Decimal Classification and others. However, there is a need to assign proper subject headings to them and present them in a logical or hierarchical sequence to cater to the need for browsing. This paper attempts to describe an experimental system designed to organize and provide access to web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. A prototype software system has been designed to create a database of records specifying web documents according to the Dublin Core and to input a faceted subject heading according to DSIS. Synonymous terms are added to the standard terms in the heading using appropriate symbols. Once the data are entered along with a description and the URL of the web document, the record is stored in the system. More than one faceted subject heading can be assigned to a record depending on the content of the original document. The system stores the surrogates and keeps the faceted subject headings separately after establishing a link. The search is carried out on index entries derived from the faceted subject heading using the chain indexing technique. If a single term is input, the system searches for its presence in the faceted subject headings and displays the subject headings in a sorted sequence reflecting an organizing sequence. If the number of retrieved headings is too large (running into more than a page), the user has the option of entering another search term to be searched in combination. The system searches the subject headings already retrieved and looks for those containing the second term. The retrieved faceted subject headings can be displayed and browsed. When the relevant subject heading is selected, the system displays the records with their URLs. Using the URL, the original document on the web can be accessed. The prototype system, developed in a Windows NT environment using ASP and a web server, is under rigorous testing. The database and index management routines need further development.
  13. Reference sources on the Internet : off the shelf and onto the Web (1997) 0.04
    0.03519012 = product of:
      0.14076048 = sum of:
        0.14076048 = weight(_text_:headings in 1616) [ClassicSimilarity], result of:
          0.14076048 = score(doc=1616,freq=2.0), product of:
            0.3281928 = queryWeight, product of:
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.067635134 = queryNorm
            0.4288957 = fieldWeight in 1616, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.0625 = fieldNorm(doc=1616)
      0.25 = coord(1/4)
    
    Abstract
    Issue devoted to reference sources on the Internet. Provides reference librarians with a core list of resources in a variety of subject areas available on the Internet. Articles are grouped under the following headings: general; business and social sciences; humanities; leisure studies; sciences; and a feature column on government information sources
  14. El-Sherbini, M.: Selected cataloging tools on the Internet (2003) 0.04
    0.03519012 = product of:
      0.14076048 = sum of:
        0.14076048 = weight(_text_:headings in 2997) [ClassicSimilarity], result of:
          0.14076048 = score(doc=2997,freq=2.0), product of:
            0.3281928 = queryWeight, product of:
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.067635134 = queryNorm
            0.4288957 = fieldWeight in 2997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.0625 = fieldNorm(doc=2997)
      0.25 = coord(1/4)
    
    Abstract
    This bibliography contains selected cataloging tools on the Internet. It is divided into seven sections as follows: authority management and subject headings tools; cataloging tools by type of materials; dictionaries, encyclopedias, and place names; listservs and workshops; software and vendors; technical service professional organizations; and journals and newsletters. Resources are arranged in alphabetical order under each topic. Selected cataloging tools are annotated. There is some overlap since a given web site can cover many tools.
  15. Weinberg, B.H.: Complexity in indexing systems abandonment and failure : implications for organizing the Internet (1996) 0.03
    0.030791355 = product of:
      0.12316542 = sum of:
        0.12316542 = weight(_text_:headings in 6187) [ClassicSimilarity], result of:
          0.12316542 = score(doc=6187,freq=2.0), product of:
            0.3281928 = queryWeight, product of:
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.067635134 = queryNorm
            0.37528375 = fieldWeight in 6187, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.0546875 = fieldNorm(doc=6187)
      0.25 = coord(1/4)
    
    Abstract
    The past 100 years have seen the development of numerous systems for the structured representation of knowledge and information, including hierarchical classification systems with sophisticated features for the representation of term relationships. Discusses reasons for the lack of widespread adoption of these systems, particularly in the USA. The suggested structure for indexing the Internet or other large electronic collections of documents is based on that of book indexes: specific headings with coined modifications
  16. Shafer, K.: Scorpion Project explores using Dewey to organize the Web (1996) 0.03
    0.030791355 = product of:
      0.12316542 = sum of:
        0.12316542 = weight(_text_:headings in 6818) [ClassicSimilarity], result of:
          0.12316542 = score(doc=6818,freq=2.0), product of:
            0.3281928 = queryWeight, product of:
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.067635134 = queryNorm
            0.37528375 = fieldWeight in 6818, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.0546875 = fieldNorm(doc=6818)
      0.25 = coord(1/4)
    
    Abstract
    As the amount of accessible information on the WWW increases, so will the cost of accessing it, even if search services remain free, due to the increasing amount of time users will have to spend to find needed items. Considers what the seemingly unorganized Web and the organized world of libraries can offer each other. The OCLC Scorpion Project is attempting to combine indexing and cataloguing, specifically focusing on building tools for automatic subject recognition using the techniques of library science and information retrieval. If subject headings or concept domains can be automatically assigned to electronic items, improved filtering tools for searching can be produced
  17. Beall, J.: Cataloging World Wide Web sites consisting mainly of links (1997) 0.03
    0.030791355 = product of:
      0.12316542 = sum of:
        0.12316542 = weight(_text_:headings in 4408) [ClassicSimilarity], result of:
          0.12316542 = score(doc=4408,freq=2.0), product of:
            0.3281928 = queryWeight, product of:
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.067635134 = queryNorm
            0.37528375 = fieldWeight in 4408, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.0546875 = fieldNorm(doc=4408)
      0.25 = coord(1/4)
    
    Abstract
    WWW sites, consisting mainly of links to other Internet resources, have begun to proliferate and these sites are valuable to library users and researchers because they bring together in a single Web site links to a comprehensive array of information resources. Because libraries may elect to include bibliographic records for these sites in their online catalogues, cataloguers should be aware of some of the main aspects of cataloguing this new type of resource. Concludes that cataloguers should be aware of the main types and different characteristics of these Web sites, how to describe them in bibliographic records and how to assign appropriate subject headings for them
  18. Russell, B.M.; Spillane, J.L.: Using the Web for name authority work (2001) 0.03
    0.030791355 = product of:
      0.12316542 = sum of:
        0.12316542 = weight(_text_:headings in 292) [ClassicSimilarity], result of:
          0.12316542 = score(doc=292,freq=2.0), product of:
            0.3281928 = queryWeight, product of:
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.067635134 = queryNorm
            0.37528375 = fieldWeight in 292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.0546875 = fieldNorm(doc=292)
      0.25 = coord(1/4)
    
    Abstract
    While many catalogers are using the Web to find the information they need to perform authority work quickly and accurately, the full potential of the Web to assist catalogers in name authority work has yet to be realized. The ever-growing nature of the Web means that available information for creating personal name, corporate name, and other types of headings will increase. In this article, we examine ways in which simple and effective Web searching can save catalogers time and money in the process of authority work. In addition, questions involving evaluating authority information found on the Web are explored.
  19. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.03
    0.029823545 = product of:
      0.11929418 = sum of:
        0.11929418 = weight(_text_:james in 378) [ClassicSimilarity], result of:
          0.11929418 = score(doc=378,freq=2.0), product of:
            0.4933813 = queryWeight, product of:
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.067635134 = queryNorm
            0.24178903 = fieldWeight in 378, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2947483 = idf(docFreq=81, maxDocs=44421)
              0.0234375 = fieldNorm(doc=378)
      0.25 = coord(1/4)
    
    Content
    Demo/Position Papers * Conjunctive Query Answering in Distributed Ontology Systems for Ontologies with Large OWL ABoxes, Xueying Chen and Michel Dumontier. * Node-Link and Containment Methods in Ontology Visualization, Julia Dmitrieva and Fons J. Verbeek. * A JC3IEDM OWL-DL Ontology, Steven Wartik. * Semantically Enabled Temporal Reasoning in a Virtual Observatory, Patrick West, Eric Rozell, Stephan Zednik, Peter Fox and Deborah L. McGuinness. * Developing an Ontology from the Application Up, James Malone, Tomasz Adamusiak, Ele Holloway, Misha Kapushesky and Helen Parkinson.
  20. Long, C.E.: ¬The Internet's value to catalogers : results of a survey (1997) 0.03
    0.02639259 = product of:
      0.10557036 = sum of:
        0.10557036 = weight(_text_:headings in 494) [ClassicSimilarity], result of:
          0.10557036 = score(doc=494,freq=2.0), product of:
            0.3281928 = queryWeight, product of:
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.067635134 = queryNorm
            0.32167178 = fieldWeight in 494, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8524013 = idf(docFreq=942, maxDocs=44421)
              0.046875 = fieldNorm(doc=494)
      0.25 = coord(1/4)
    
    Abstract
    Reports results of a questionnaire survey of cataloguers, conducted over the AUTOCAT Internet discussion group, to determine those areas of cataloguing for which the Internet is a valuable tool and those areas for which it is not as useful. Respondents indicated 4 areas in which cataloguers use the Internet: searching the OPACs of other libraries, communicating with colleagues, accessing online cataloguing documentation and publications, and authority work. Cataloguers who accessed other libraries' OPACs did so for the following reasons: assigning call numbers and subject headings; finding full cataloguing copy from other libraries; enriching their local catalogue with notes present in records in other libraries; finding copy for foreign language items that cannot be read by library staff; and resolving difficult problems when important parts of the item are missing or are in disarray. Some cataloguers also related processes for which they have found the Internet not to be efficient
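
The relevance value printed after each hit above appears to be Lucene "explain" output for the classic TF-IDF similarity (ClassicSimilarity): for each matching term, tf = sqrt(termFreq), queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm are multiplied, the per-term scores are summed, and the sum is scaled by the coordination factor coord(matching clauses / total clauses). As a rough check of that arithmetic, here is a minimal sketch in plain Python that reproduces the figures of hit 1; the numeric constants are copied from its explain block, and the variable names are ours, not Lucene's:

```python
import math

# Constants copied from the explain block of hit 1 (term "james" in doc 6617).
QUERY_NORM = 0.067635134   # queryNorm, shared by all query terms
IDF        = 7.2947483     # idf(docFreq=81, maxDocs=44421)
FREQ       = 2.0           # termFreq of "james" in the matched field
FIELD_NORM = 0.0625        # fieldNorm(doc=6617), length normalization

tf           = math.sqrt(FREQ)              # 1.4142135
query_weight = IDF * QUERY_NORM             # ~0.4933813
field_weight = tf * IDF * FIELD_NORM        # ~0.64477074
james_score  = query_weight * field_weight  # ~0.31811783 = weight(_text_:james ...)

# Hit 1 also matches "und" (0.07780365); only 2 of the 4 query clauses match,
# so the summed clause scores are scaled by coord(2/4) = 0.5.
und_score = 0.07780365
total     = (james_score + und_score) * (2 / 4)  # ~0.19796073, the value shown for hit 1

print(round(james_score, 8), round(total, 8))
```

The only quantity that differs between the "james" hits is fieldNorm, Lucene's length normalization, which is why shorter records (fieldNorm 0.078125 in hits 2 and 3) score higher for the same term frequency than long ones (0.01953125 in hit 5).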

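
Hits 8 and 12 describe the same prototype, in which faceted subject headings built according to DSIS are searched chain-index style: a first term retrieves every heading containing it, in sorted (organizing) order; an optional second term narrows that set when it grows too large; the selected heading then leads to the linked surrogate records and their URLs. The sketch below illustrates only that lookup flow; the data, names and structures are invented for illustration and are not taken from the prototype described in those papers:

```python
# Hypothetical in-memory stand-in for the prototype's store of faceted subject
# headings and the surrogate records (title, URL) linked to each heading.
HEADINGS = {
    "Internet. Resources. Organization. Faceted indexing": ["rec-101"],
    "Internet. Resources. Retrieval. Search engines": ["rec-102", "rec-103"],
    "Libraries. Catalogues. Subject headings": ["rec-104"],
}
RECORDS = {
    "rec-101": ("Faceted indexing application for Internet resources", "http://example.org/101"),
    "rec-102": ("Quick queries", "http://example.org/102"),
    "rec-103": ("Reference sources on the Internet", "http://example.org/103"),
    "rec-104": ("Selected cataloging tools", "http://example.org/104"),
}

def search_headings(term, within=None):
    """Return the headings containing the term, in sorted (organizing) order.
    If 'within' is given, only that earlier result set is searched again."""
    pool = HEADINGS.keys() if within is None else within
    return sorted(h for h in pool if term.lower() in h.lower())

# A first term retrieves all matching headings in sorted order ...
hits = search_headings("Internet")
# ... and a second term narrows the retrieved set when it is too large.
hits = search_headings("faceted", within=hits)

# Selecting a heading displays the linked records with their URLs.
for heading in hits:
    for rec_id in HEADINGS[heading]:
        title, url = RECORDS[rec_id]
        print(f"{heading} -> {title} ({url})")
```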
Languages

  • d 1329
  • e 97
  • m 14

Types

  • a 1134
  • m 204
  • s 62
  • el 56
  • x 32
  • r 5
  • i 4
  • b 3
  • h 2
  • ? 1
  • l 1
