Search (1393 results, page 5 of 70)

  • Filter: language_ss:"e"
  1. Powell, A.: ¬An idiot's guide to the Dublin Core (1997) 0.04
    0.041889492 = product of:
      0.16755797 = sum of:
        0.16755797 = weight(_text_:html in 1939) [ClassicSimilarity], result of:
          0.16755797 = score(doc=1939,freq=2.0), product of:
            0.29461905 = queryWeight, product of:
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.057234988 = queryNorm
            0.56872755 = fieldWeight in 1939, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.078125 = fieldNorm(doc=1939)
      0.25 = coord(1/4)
    
    Abstract
    The Dublin Core metadata element set is a simple set of elements intended for use in describing Internet based resources. Gives an overview of the Dublin Core elements and shows by example how to embed them into HTML Web pages
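As a minimal sketch of the embedding the article describes: Dublin Core elements can be placed in an HTML page's head as meta tags. The `DC.`-prefixed element names follow the Dublin Core convention; the helper function and the sample record values below are invented for illustration.

```python
# Sketch: rendering Dublin Core elements as HTML <meta> tags for
# embedding in a page's <head>. Element names follow the DC convention;
# the record values here are illustrative only.

def dc_meta_tags(record: dict) -> str:
    """Render a dict of Dublin Core elements as HTML <meta> tags."""
    return "\n".join(
        f'<meta name="DC.{element}" content="{value}">'
        for element, value in record.items()
    )

record = {
    "title": "An idiot's guide to the Dublin Core",
    "creator": "Powell, A.",
    "date": "1997",
    "format": "text/html",
}

print(dc_meta_tags(record))
```

The resulting tags can be pasted into any HTML `<head>` without affecting how the page renders, which is what made this embedding approach attractive for Web resources.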
  2. Bradley, P.; Smith, A.: World Wide Web : how to design and construct home pages (1995) 0.04
    0.041889492 = product of:
      0.16755797 = sum of:
        0.16755797 = weight(_text_:html in 2298) [ClassicSimilarity], result of:
          0.16755797 = score(doc=2298,freq=2.0), product of:
            0.29461905 = queryWeight, product of:
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.057234988 = queryNorm
            0.56872755 = fieldWeight in 2298, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.078125 = fieldNorm(doc=2298)
      0.25 = coord(1/4)
    
    Object
    HTML
  3. Bell, H.K.: History of societies of indexing : part VII: 1992-95 (2000) 0.04
    0.041889492 = product of:
      0.16755797 = sum of:
        0.16755797 = weight(_text_:html in 1113) [ClassicSimilarity], result of:
          0.16755797 = score(doc=1113,freq=2.0), product of:
            0.29461905 = queryWeight, product of:
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.057234988 = queryNorm
            0.56872755 = fieldWeight in 1113, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.078125 = fieldNorm(doc=1113)
      0.25 = coord(1/4)
    
    Footnote
    Cf.: http://www.aidanbell.com/html/hkbell/History7.htm.
  4. Wall, C.E.; Cole, T.W.; Kazmer, M.M.: HyperText MARCup : a conceptualization for encoding, de-constructing, searching, retrieving, and using traditional knowledge tools (1995) 0.04
    0.041468486 = product of:
      0.16587394 = sum of:
        0.16587394 = weight(_text_:html in 4254) [ClassicSimilarity], result of:
          0.16587394 = score(doc=4254,freq=4.0), product of:
            0.29461905 = queryWeight, product of:
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.057234988 = queryNorm
            0.5630116 = fieldWeight in 4254, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.0546875 = fieldNorm(doc=4254)
      0.25 = coord(1/4)
    
    Abstract
    Pierian Press and the University of Illinois have been experimenting with directly parsing classified, analytical bibliographies into an electronic structure using the respective strengths of both HTML and MARC. This structure, which is explained and illustrated in this article, mitigates the weaknesses of each standard by drawing on the strengths of the other. The resulting electronic knowledge constructs can be mounted on local library systems and function as dynamic maps onto a specified subset of resources on those systems. Linkages can be added and/or removed to customize each construct to local holdings and/or needs
    Object
    HTML
  5. Boeri, R.J.; Hensel, M.: Corporate online/CD-ROM publishing : the design and tactical issues (1996) 0.04
    0.041468486 = product of:
      0.16587394 = sum of:
        0.16587394 = weight(_text_:html in 4621) [ClassicSimilarity], result of:
          0.16587394 = score(doc=4621,freq=4.0), product of:
            0.29461905 = queryWeight, product of:
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.057234988 = queryNorm
            0.5630116 = fieldWeight in 4621, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.0546875 = fieldNorm(doc=4621)
      0.25 = coord(1/4)
    
    Abstract
    Although existing document imaging software effectively serves small business needs when it comes to publishing documents from multiple sources in various formats on CD-ROM and the WWW, the same cannot be said when it comes to large scale corporate publishing. Sets out the requirements of corporate in-house document publishing, which typically include: avoiding hand crafting documents for different media; having the flexibility not to be hostage to changing word processors, vendor alliances, operating systems, or output media; reducing exception handling as volumes of published documents increase; and incorporating support for upcoming changes in HTML, if WWW publishing is planned. Focuses on the importance of SGML and DTDs in this process
    Object
    HTML
  6. Fisher, Y.: Spinning the Web : a guide to serving information on the World Wide Web (1996) 0.04
    0.041468486 = product of:
      0.16587394 = sum of:
        0.16587394 = weight(_text_:html in 6014) [ClassicSimilarity], result of:
          0.16587394 = score(doc=6014,freq=4.0), product of:
            0.29461905 = queryWeight, product of:
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.057234988 = queryNorm
            0.5630116 = fieldWeight in 6014, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.0546875 = fieldNorm(doc=6014)
      0.25 = coord(1/4)
    
    Abstract
    Most books on the Internet describe it from the user's end. This one, however, is unique in its focus on serving information on the WWW. It presents everything from the basics to advanced techniques and will thus prove invaluable to site administrators and developers. The author - an expert developer and researcher at UCSD - covers such topics as HTML 3.0, serving documents, interfaces, WWW utilities and browsers such as Netscape. Fisher also includes an introduction to programming with Java and JavaScript, as well as the complete VRML 1.0 specification
    Object
    HTML
  7. Ossenbruggen, J.v.; Eliens, A.; Schönhage, B.: Web applications and SGML (1996) 0.04
    0.041468486 = product of:
      0.16587394 = sum of:
        0.16587394 = weight(_text_:html in 155) [ClassicSimilarity], result of:
          0.16587394 = score(doc=155,freq=4.0), product of:
            0.29461905 = queryWeight, product of:
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.057234988 = queryNorm
            0.5630116 = fieldWeight in 155, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.0546875 = fieldNorm(doc=155)
      0.25 = coord(1/4)
    
    Abstract
    Advocates the use of SGML technology for the creation, dissemination and display of WWW documents. Presents a software architecture that allows for defining the operational interpretation of arbitrary document types by means of style sheets written in a scripting language. This approach has been motivated by the desire to extend the functionality of the WWW with support for multimedia and active documents. Provides a brief introduction to SGML and illustrates how the approach outlined accommodates extensions of HTML as well as SGML documents containing multimedia data such as video and audio. Briefly sketches the software components used and discusses some topics for further research
    Object
    HTML
  8. Hirsch, C.C.: InterBRAIN : topographical atlas of the anatomy of the human CNS (1998) 0.04
    0.041468486 = product of:
      0.16587394 = sum of:
        0.16587394 = weight(_text_:html in 1822) [ClassicSimilarity], result of:
          0.16587394 = score(doc=1822,freq=4.0), product of:
            0.29461905 = queryWeight, product of:
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.057234988 = queryNorm
            0.5630116 = fieldWeight in 1822, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1822)
      0.25 = coord(1/4)
    
    Abstract
    The intricate 3D structure of the CNS lends itself to multimedia presentation, and is depicted here by way of dynamic 3D models that can be freely rotated, and in over 200 illustrations taken from the successful book "The Human Central Nervous System" by R. Nieuwenhuys et al, allowing the user to explore all aspects of this complex and fascinating subject. All this fully hyperlinked with over 2000 specialist terms. Optimal exam revision is guaranteed with the self-study option. For further information please contact: http://www.brainmedia.de/html/frames/pr/pr<BL>5/pr<BL>5<BL>02.html
  9. Hartman, J.H.; Proebsting, T.A.; Sundaram, R.: Index-based hyperlinks (1997) 0.04
    0.041468486 = product of:
      0.16587394 = sum of:
        0.16587394 = weight(_text_:html in 3723) [ClassicSimilarity], result of:
          0.16587394 = score(doc=3723,freq=4.0), product of:
            0.29461905 = queryWeight, product of:
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.057234988 = queryNorm
            0.5630116 = fieldWeight in 3723, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3723)
      0.25 = coord(1/4)
    
    Abstract
    Proposes a new mechanism for implicitly specifying hyperlinks in HTML documents using indices. Indices maintain key/attribute bindings over all or part of a document, and are used by browsers to create hyperlinks dynamically. Indices may also include bindings of other indices, in a hierarchical fashion. Indices are both simpler and more general than the current HTML hyperlink mechanisms. Develops a prototype browser that uses index-based hyperlinks
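The article's prototype is a modified browser, not code shown here; purely as an illustration of the idea, an index can be sketched as a mapping from keys to link targets that a renderer consults to create hyperlinks dynamically, instead of hard-coding anchors in the document. The index entries and function below are invented.

```python
# Hypothetical sketch of index-based hyperlinks: an index binds keys to
# attributes (here, URLs), and a renderer wraps each indexed key it finds
# in the text in an <a> element at display time.
import re

index = {  # key -> URL binding, as one flat index
    "HTML": "https://www.w3.org/MarkUp/",
    "hyperlink": "https://example.org/glossary/hyperlink",
}

def linkify(text: str, index: dict) -> str:
    """Wrap every indexed key occurring in the text in an <a> element."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        url = index.get(word)
        return f'<a href="{url}">{word}</a>' if url else word
    return re.sub(r"\w+", repl, text)

print(linkify("HTML documents gain a hyperlink per indexed key.", index))
```

The hierarchical case the abstract mentions would correspond to index values that are themselves indices, resolved recursively; that extension is omitted here for brevity.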
  10. Hancock, B.; Giarlo, M.J.: Moving to XML : Latin texts XML conversion project at the Center for Electronic Texts in the Humanities (2001) 0.04
    0.041468486 = product of:
      0.16587394 = sum of:
        0.16587394 = weight(_text_:html in 5801) [ClassicSimilarity], result of:
          0.16587394 = score(doc=5801,freq=4.0), product of:
            0.29461905 = queryWeight, product of:
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.057234988 = queryNorm
            0.5630116 = fieldWeight in 5801, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.0546875 = fieldNorm(doc=5801)
      0.25 = coord(1/4)
    
    Abstract
    The delivery of documents on the Web has moved beyond the restrictions of the traditional Web markup language, HTML. HTML's static tags cannot deal with the variety of data formats now beginning to be exchanged between various entities, whether corporate or institutional. XML solves many of the problems by allowing arbitrary tags, which describe the content for a particular audience or group. At the Center for Electronic Texts in the Humanities the Latin texts of Lector Longinquus are being transformed to XML in readiness for the expected new standard. To allow existing browsers to render these texts, a Java program is used to transform the XML to HTML on the fly.
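The Center's converter is a Java program; as a hedged analogue only, the same on-the-fly XML-to-HTML step can be sketched with Python's standard library. The element names and Latin sample below are invented, not taken from the Lector Longinquus texts.

```python
# Sketch (not the Center's actual Java converter): transform a small XML
# text fragment into HTML so that browsers without XML support can
# render it.
import xml.etree.ElementTree as ET

latin_xml = """<text>
  <line n="1">Arma virumque cano</line>
  <line n="2">Troiae qui primus ab oris</line>
</text>"""

def xml_to_html(source: str) -> str:
    """Map each <line> element onto an HTML paragraph."""
    root = ET.fromstring(source)
    paragraphs = [
        f'<p id="line-{line.get("n")}">{line.text}</p>'
        for line in root.findall("line")
    ]
    return "<html><body>\n" + "\n".join(paragraphs) + "\n</body></html>"

print(xml_to_html(latin_xml))
```

The point of the pattern is that the XML remains the single source: the arbitrary, descriptive tags survive for future audiences while the HTML is generated per request.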
  11. Zschunke, P.: Richtig googeln : Ein neues Buch hilft, alle Möglichkeiten der populären Suchmaschine zu nutzen (2003) 0.04
    0.04060895 = product of:
      0.0812179 = sum of:
        0.030950509 = weight(_text_:und in 55) [ClassicSimilarity], result of:
          0.030950509 = score(doc=55,freq=22.0), product of:
            0.12694143 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.057234988 = queryNorm
            0.24381724 = fieldWeight in 55, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0234375 = fieldNorm(doc=55)
        0.05026739 = weight(_text_:html in 55) [ClassicSimilarity], result of:
          0.05026739 = score(doc=55,freq=2.0), product of:
            0.29461905 = queryWeight, product of:
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.057234988 = queryNorm
            0.17061827 = fieldWeight in 55, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.0234375 = fieldNorm(doc=55)
      0.5 = coord(2/4)
    
    Content
    "Five years after its founding, Google has become the heart of the worldwide computer network. With its concentration on the essentials, the search engine has left all other providers far behind. But Google can do much more than search the Web for texts and images. It also collects and processes contributions in discussion forums (newsgroups), current news, and other information available on the Net. Anyone who limits their "googling" to typing a single word into the search form and then looking at the first of often several hundred thousand hits uses only a tiny fraction of the possibilities. How to push Google to its limits is what Tara Calishain and Rael Dornfest describe in a book so far published only in English (Tara Calishain/Rael Dornfest: "Google Hacks", www.oreilly.de, 28 Euro; the most important practical tips are available as the Google Pocket Guide for 12 Euro). - Searching with up to ten words - Their "100 Google Hacks" begin with Google strategies such as combining several search terms and end with an invitation to make one's own use of the Google API ("Application Programming Interface"). This interface can be used to develop programs of one's own that access the Google database with its more than three billion entries. More deliberate searching on the Internet begins with combining several search terms - up to ten words can be typed into the form field, which Google joins with the logical operator "and". This default can be changed to a disjunction by inserting "or" between terms. If a particular term should not appear, a minus sign is placed in front of it. In this way, for example, all hits coming from the online bookseller Amazon can be filtered out of a search.
Further syntax instructions also help to narrow a search down: the prefixed instruction "intitle:", for example (entered without quotation marks), restricts the search to those Web pages that carry the term immediately following it in their title. Google's computers handle more than 200 million queries a day. The answers come from a database that contains more than three billion entries and is updated regularly. For this purpose software robots are used, so-called "search bots", which work their way along the hyperlinks on Web pages and build a full-text index for every Web document. The revenues of the company, founded in 1998 by Larry Page and Sergey Brin, come mostly from Internet portals that adopt the Google search technology for their own services. A second source of income is advertising from companies that pay for a visually highlighted placement in Google's hit lists. The company, based in Mountain View, California, employs around 800 people. The name Google derives from the coined word "googol", with which the American mathematician Edward Kasner designated the unimaginably large number 10 to the power of 100 (a 1 followed by a hundred zeros). Commercial Internet providers are very interested in appearing in the top positions of a Google hit list.
    Since Google, unlike Yahoo or Lycos, never wanted to become an Internet portal designed to attract as many visits as possible, its database can also be searched outside the Google Web site. For this there is, first, the "Google Toolbar" for Internet Explorer, which gives that browser its own bar for Google searches. Independent developers offer their own implementation of this tool for the Netscape/Mozilla browser. In addition, a Google search box can be placed on one's own Web page - only four lines of HTML code are needed for this. A Google search can even be started entirely without a browser. For this the company released its API ("Application Programming Interface") in April of last year, which can be built into programs of one's own. For example, a Google search can be started with an e-mail: the search terms are entered in the subject line of an otherwise empty e-mail sent to the address google@capeclear.com. Shortly afterwards an automatic reply arrives with the first ten hits. Given the necessary skills, Google queries can also be built into Web services - programs that process data from the Internet. Suitable programming techniques include Perl, PHP, Python, and Java. Calishain and Dornfest even present a number of offbeat sites that use such programs for abstract poems and other works of art."
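The search operators covered in the review above (AND by default, "or" for a disjunction, a leading minus to exclude, prefixes such as "intitle:") can be combined into a single query string. A small sketch, with invented search terms; this builds the query text only, not an actual Google request.

```python
# Sketch of composing a Google query string from the operators the text
# describes: terms are AND-combined by default, "OR" switches to a
# disjunction, a leading "-" excludes a term, and "intitle:" restricts
# matches to page titles. At most ten terms, as the text notes.

def build_query(terms, exclude=(), intitle=None, use_or=False):
    parts = list(terms[:10])             # Google accepted up to ten words
    if use_or:
        parts = [" OR ".join(parts)]     # disjunction instead of AND
    parts += [f"-{t}" for t in exclude]  # minus sign filters a term out
    if intitle:
        parts.append(f"intitle:{intitle}")
    return " ".join(parts)

print(build_query(["metadata", "standards"],
                  exclude=["amazon"], intitle="dublin"))
```

The resulting string could be typed into the search form or sent through any of the access routes the review mentions (toolbar, embedded search box, the API, or the e-mail gateway).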
  12. XML in libraries (2002) 0.04
    0.036622237 = product of:
      0.073244475 = sum of:
        0.006221286 = weight(_text_:und in 4100) [ClassicSimilarity], result of:
          0.006221286 = score(doc=4100,freq=2.0), product of:
            0.12694143 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.057234988 = queryNorm
            0.049009107 = fieldWeight in 4100, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.015625 = fieldNorm(doc=4100)
        0.06702319 = weight(_text_:html in 4100) [ClassicSimilarity], result of:
          0.06702319 = score(doc=4100,freq=8.0), product of:
            0.29461905 = queryWeight, product of:
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.057234988 = queryNorm
            0.22749102 = fieldWeight in 4100, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.015625 = fieldNorm(doc=4100)
      0.5 = coord(2/4)
    
    Content
    Collective review covering: (1) The ABCs of XML: The Librarian's Guide to the eXtensible Markup Language. Norman Desmarais. Houston, TX: New Technology Press, 2000. 206 pp. $28.00. (ISBN: 0-9675942-0-0) and (2) Learning XML. Erik T. Ray. Sebastopol, CA: O'Reilly & Associates, 2003. 400 pp. $34.95. (ISBN: 0-596-00420-6)
    Footnote
    Review in: JASIST 55(2004) no.14, pp.1304-1305 (Z. Holbrooks): "The eXtensible Markup Language (XML) and its family of enabling technologies (XPath, XPointer, XLink, XSLT, et al.) were the new "new thing" only a couple of years ago. Happily, XML is now a W3C standard, and its enabling technologies are rapidly proliferating and maturing. Together, they are changing the way data is handled on the Web, how legacy data is accessed and leveraged in corporate archives, and offering the Semantic Web community a powerful toolset. Library and information professionals need a basic understanding of what XML is, and what its impacts will be on the library community as content vendors and publishers convert to the new standards. Norman Desmarais aims to provide librarians with an overview of XML and some potential library applications. The ABCs of XML contains the useful basic information that most general XML works cover. It is addressed to librarians, as evidenced by the occasional reference to periodical vendors, MARC, and OPACs. However, librarians without SGML, HTML, database, or programming experience may find the work daunting. The snippets of code - most incomplete and unaccompanied by screenshots to illustrate the result of the code's execution - obscure more often than they enlighten. A single code sample (p. 91, a book purchase order) is immediately recognizable and sensible. There are no figures, illustrations, or screenshots. Subsection headings are used conservatively. Readers are confronted with page after page of unbroken technical text, and occasionally oddly formatted text (in some of the code samples). The author concentrates on commercial products and projects. Library and agency initiatives - for example, the National Institutes of Health HL-7 and the U.S. Department of Education's GEM project - are notable for their absence. 
The Library of Congress USMARC to SGML effort is discussed in chapter 1, which covers the relationship of XML to its parent SGML, the XML processor, and data type definitions, using MARC as its illustrative example. Chapter 3 addresses the stylesheet options for XML, including DSSSL, CSS, and XSL. The Document Style Semantics and Specification Language (DSSSL) was created for use with SGML, and pruned into DSSSL-Lite and further (DSSSL-online). Cascading Style Sheets (CSS) were created for use with HTML. Extensible Style Language (XSL) is a further revision (and extension) of DSSSL-online specifically for use with XML. Discussion of aural stylesheets and Synchronized Multimedia Integration Language (SMIL) rounds out the chapter.
    Tennant's collection covers a variety of well- and lesser-known XML-based pilot and prototype projects undertaken by libraries around the world. Some of the projects included are: Stanford's XMLMARC conversion, Oregon State's use of XML in interlibrary loaning, e-books (California Digital Library) and electronic scholarly publishing (University of Michigan), the Washington Research Library Consortium's XML-based Web Services, and using TEI Lite to support indexing (Halton Hills Public Library). Of the 13 projects presented, nine are sited in academe, three are state library endeavors, and one is an American public library initiative. The projects are gathered into sections grouped by seven library applications: the use of XML in library catalog records, interlibrary loan, cataloging and indexing, collection building, databases, data migration, and systems interoperability. Each project is introduced with a few paragraphs of background information. The project reports-averaging about 13 pages each-include project goals and justification, project description, challenges and lessons learned (successes and failures), future plans, implications of the work, contact information for individual(s) responsible for the project, and relevant Web links and resources. The clear strengths of this collection are in the details and the consistency of presentation. The concise project write-ups flow well and encourage interested readers to follow-up via personal contacts and URLs. The sole weakness is the price. XML in Libraries will excite and inspire institutions and organizations with technically adept staff resources and visionary leaders. Erik Ray has written a how-to book. Unlike most, Learning XML is not aimed at the professional programming community. The intended audience is readers familiar with a structured markup (HTML, TEX, etc.) and Web concepts (hypertext links, data representation). 
In the first six chapters, Ray introduces XML's main concepts and tools for writing, viewing, testing, and transforming XML (chapter 1), describes basic syntax (chapter 2), discusses linking with XLink and XPointer (chapter 3), introduces Cascading Style Sheets for use with XML (chapter 4), explains document type definitions (DTDs) and schemas (chapter 5), and covers XSLT stylesheets and XPath (chapter 6). Chapter 7 introduces Unicode, internationalization and language support, including CSS and XSLT encoding. Chapter 8 is an overview of writing software for processing XML, and includes the Perl code for an XML syntax checker. This work is written very accessibly for nonprogrammers. Writers, designers, and students just starting to acquire Web technology skills will find Ray's style approachable. Concepts are introduced in a logical flow, and explained clearly. Code samples (130+), illustrations and screen shots (50+), and numerous tables are distributed throughout the text. Ray uses a modified DocBook DTD and a checkbook example throughout, introducing concepts in early chapters and adding new concepts to them. Readers become familiar with the code and its evolution through repeated exposure. The code for converting the "barebones DocBook" DTD (10 pages of code) to HTML via XSLT stylesheet occupies 19 pages. Both code examples allow the learner to see an accumulation of snippets incorporated into a sensible whole. While experienced programmers might not need this type of support, nonprogrammers certainly do. Using the checkbook example is an inspired choice: Most of us are familiar with personal checking, even if few of us would build an XML application for it. Learning XML is an excellent textbook. I've used it for several years as a recommended text for adult continuing education courses and workshops."
  13. Nieuwenhuysen, P.; Vanouplines, P.: Document plus program hybrids on the Internet and their impact on information transfer (1998) 0.04
    0.035544414 = product of:
      0.14217766 = sum of:
        0.14217766 = weight(_text_:html in 2893) [ClassicSimilarity], result of:
          0.14217766 = score(doc=2893,freq=4.0), product of:
            0.29461905 = queryWeight, product of:
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.057234988 = queryNorm
            0.48258135 = fieldWeight in 2893, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.046875 = fieldNorm(doc=2893)
      0.25 = coord(1/4)
    
    Abstract
    Examines some of the advanced tools, techniques, methods and standards related to the Internet and WWW which consist of hybrids of documents and software, called 'document program hybrids'. Early Internet systems were based on having documents on one side and software on the other, neatly separated, apart from one another and without much interaction, so that the static document can also exist without computers and networks. Document program hybrids blur this classical distinction: all components are integrated, interwoven and exist in synergy with each other. Illustrates the techniques with particular reference to practical examples, including: data collections and dedicated software; advanced HTML features on the WWW; multimedia viewer and plug-in software for Internet and WWW browsers; VRML; interaction through a Web server with other servers and with instruments; adaptive hypertext provided by the server; 'webbots' or 'knowbots' or 'searchbots' or 'metasearch engines' or intelligent software agents; Sun's Java; Microsoft's ActiveX; program scripts for HTML and Web browsers; cookies; and Internet push technology with Webcasting channels
  14. Lawrence, S.; Giles, C.L.: Accessibility and distribution of information on the Web (1999) 0.04
    0.035544414 = product of:
      0.14217766 = sum of:
        0.14217766 = weight(_text_:html in 5952) [ClassicSimilarity], result of:
          0.14217766 = score(doc=5952,freq=4.0), product of:
            0.29461905 = queryWeight, product of:
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.057234988 = queryNorm
            0.48258135 = fieldWeight in 5952, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.046875 = fieldNorm(doc=5952)
      0.25 = coord(1/4)
    
    Abstract
    Search engine coverage relative to the estimated size of the publicly indexable web has decreased substantially since December 97, with no engine indexing more than about 16% of the estimated size of the publicly indexable web. (Note that many queries can be satisfied with a relatively small database). Search engines are typically more likely to index sites that have more links to them (more 'popular' sites). They are also typically more likely to index US sites than non-US sites (AltaVista is an exception), and more likely to index commercial sites than educational sites. Indexing of new or modified pages by just one of the major search engines can take months. 83% of sites contain commercial content and 6% contain scientific or educational content. Only 1.5% of sites contain pornographic content. The publicly indexable web contains an estimated 800 million pages as of February 1999, encompassing about 15 terabytes of information or about 6 terabytes of text after removing HTML tags, comments, and extra whitespace. The simple HTML "keywords" and "description" metatags are only used on the homepages of 34% of sites. Only 0.3% of sites use the Dublin Core metadata standard.
  15. Peek, R.: Web page design standards : Part 1: CSS (Cascading Style Sheets) is the cornerstone of standards to come (1998) 0.04
    0.035544414 = product of:
      0.14217766 = sum of:
        0.14217766 = weight(_text_:html in 6104) [ClassicSimilarity], result of:
          0.14217766 = score(doc=6104,freq=4.0), product of:
            0.29461905 = queryWeight, product of:
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.057234988 = queryNorm
            0.48258135 = fieldWeight in 6104, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.1475344 = idf(docFreq=701, maxDocs=44421)
              0.046875 = fieldNorm(doc=6104)
      0.25 = coord(1/4)
    
    Abstract
    The World Wide Web Consortium (W3C) has worked well in the rapid establishment of HTML standards but it has been criticized for not assuring compliance and in June 1998 a new organization, the Web Standards Project (WSP) was formed. Membership is free to individuals and at present consists of Web designers and W3C members. Describes the stages in the implementation of standards and focuses on Cascading Style Sheets (CSS). A style sheet is essentially a template that can be used to create a consistent appearance across documents. 'Cascading' means that a single page can use multiple style sheets. Explains how style sheets can replace HTML tags, using the example of fonts, and why CSS is a greater attraction to designers than to Web users. Outlines the current state of the CSS standard and predicts that Web users will be adopting it sooner or later
  16. Ervin, J.R.: Dynamic delivery of information via the World Wide Web (2000) 0.04
    Abstract
    Among the most ballyhooed interactive uses of the Web, database access has, until recently, been a cross-platform, multi-language, multi-interface endeavor not suited to the faint of heart. Fortunately, Microsoft's ever-increasing domination of the software industry has led to the consolidation of many tools in one application. Beginning with Internet Information Server 2 (IIS 2), Microsoft brought together in one service all the tools necessary to deliver an existing database over the Web. This paper presents a case study of converting a Web resource (News and Newspapers Online, a comprehensive directory of online newspapers from around the world that offer free access to current, full-text content) from static HTML files to a database (using MS Access 97), mounting the database on a Web server (using IIS 4), building the user interface (using HTML), and dynamically delivering the requested information (using Active Server Pages and Active Data Objects).
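The pipeline the paper describes, database rows rendered to HTML on request (there via Access, IIS, and Active Server Pages), can be sketched with the standard library; sqlite3 stands in for MS Access here, and the table and fields are invented for illustration:

```python
import sqlite3

# In-memory stand-in for the database behind a directory like
# News and Newspapers Online (schema and rows are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE papers (title TEXT, country TEXT, url TEXT)")
conn.executemany(
    "INSERT INTO papers VALUES (?, ?, ?)",
    [("The Example Times", "UK", "http://example.org/times"),
     ("Daily Example", "US", "http://example.org/daily")],
)

def render_page(country):
    """Dynamically build an HTML listing for one country, ASP-style."""
    rows = conn.execute(
        "SELECT title, url FROM papers WHERE country = ?", (country,)
    ).fetchall()
    items = "".join(f'<li><a href="{url}">{title}</a></li>'
                    for title, url in rows)
    return f"<html><body><ul>{items}</ul></body></html>"

page = render_page("UK")
print(page)
```

The point of the conversion described above is exactly this: the HTML no longer exists as static files but is generated per request from the query result.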
  17. Davis, P.M.; Price, J.S.: eJournal interface can influence usage statistics : Implications for libraries, publishers, and Project COUNTER (2006) 0.04
    Abstract
    The design of a publisher's electronic interface can have a measurable effect on electronic journal usage statistics. A study of journal usage from six COUNTER-compliant publishers at 32 research institutions in the United States, the United Kingdom, and Sweden indicates that the ratio of PDF to HTML views is not consistent across publisher interfaces, even after controlling for differences in publisher content. The number of full-text downloads may be artificially inflated when publishers require users to view HTML versions before accessing PDF versions or when linking mechanisms, such as CrossRef, direct users to the full text rather than the abstract of each article. These results suggest that usage reports from COUNTER-compliant publishers are not directly comparable in their current form. One solution may be to modify publisher numbers with adjustment factors deemed to be representative of the benefit or disadvantage due to its interface. Standardization of some interface and linking protocols may obviate these differences and allow for more accurate cross-publisher comparisons.
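One way to read the authors' suggestion of interface 'adjustment factors' is as a simple rescaling of reported full-text downloads before cross-publisher comparison. The counts and factors below are invented, not from the study:

```python
# Hypothetical COUNTER-style full-text download counts per publisher.
reported = {"PublisherA": 12000, "PublisherB": 9000}

# Invented adjustment factors: PublisherA forces an HTML view before
# each PDF, inflating its counts, so its factor sits below 1.0.
adjustment = {"PublisherA": 0.75, "PublisherB": 1.0}

adjusted = {p: reported[p] * adjustment[p] for p in reported}
print(adjusted)  # {'PublisherA': 9000.0, 'PublisherB': 9000.0}
```

After rescaling, the two publishers' usage figures become directly comparable, which is the adjustment the abstract proposes pending standardization of interface and linking protocols.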
  18. Méndez, E.; López, L.M.; Siches, A.; Bravo, A.G.: DCMF: DC & Microformats, a good marriage (2008) 0.04
    Abstract
    This report introduces the Dublin Core Microformats (DCMF) project, a new way to use the DC element set within X/HTML. The DC microformats encode explicit semantic expressions in an X/HTML webpage, by using a specific list of terms for values of the attributes "rev" and "rel" for <a> and <link> elements, and "class" and "id" of other elements. Microformats can be easily processed by user agents and software, enabling a high level of interoperability. These characteristics are crucial for the growing number of social applications allowing users to participate in the Web 2.0 environment as information creators and consumers. This report reviews the origins of microformats; illustrates the coding of DC microformats using the Dublin Core Metadata Gen tool, and a Firefox extension for extraction and visualization; and discusses the benefits of creating Web services utilizing DC microformats.
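As a rough sketch of the encoding described above (the fragment and the chosen rel/class values are illustrative; consult the DCMF project for the actual term list), Dublin Core semantics carried in ordinary X/HTML attributes can be pulled out with the standard library's parser:

```python
from html.parser import HTMLParser

class DCMicroformatScanner(HTMLParser):
    """Collects DC.* terms expressed through rev, rel, class and id."""
    def __init__(self):
        super().__init__()
        self.terms = []

    def handle_starttag(self, tag, attrs):
        d = dict(attrs)
        for attr in ("rel", "rev", "class", "id"):
            for token in d.get(attr, "").split():
                if token.lower().startswith("dc."):
                    self.terms.append((attr, token))

# Hypothetical X/HTML fragment using DC-flavoured attribute values:
fragment = """
<span class="DC.creator">Mendez, E.</span>
<a rel="DC.source" href="http://dublincore.org/">Dublin Core</a>
"""

scanner = DCMicroformatScanner()
scanner.feed(fragment)
print(scanner.terms)  # [('class', 'DC.creator'), ('rel', 'DC.source')]
```

Because the terms ride on attributes every user agent already parses, this is the "easily processed" interoperability the report emphasizes.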
  19. What is Schema.org? (2011) 0.04
    Abstract
    This site provides a collection of schemas, i.e., HTML tags, that webmasters can use to mark up their pages in ways recognized by major search providers. Search engines including Bing, Google and Yahoo! rely on this markup to improve the display of search results, making it easier for people to find the right web pages. Many sites are generated from structured data, which is often stored in databases. When this data is formatted into HTML, it becomes very difficult to recover the original structured data. Many applications, especially search engines, can benefit greatly from direct access to this structured data. On-page markup enables search engines to understand the information on web pages and provide richer search results, making it easier for users to find relevant information on the web. Markup can also enable new tools and applications that make use of the structure. A shared markup vocabulary makes it easier for webmasters to decide on a markup schema and get the maximum benefit for their efforts. So, in the spirit of sitemaps.org, Bing, Google and Yahoo! have come together to provide a shared collection of schemas that webmasters can use.
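Schema.org markup of the kind described above embeds item types and properties directly in HTML attributes. This minimal sketch adapts the microdata pattern from schema.org's introductory examples; the extractor itself is an illustration, not a production microdata parser:

```python
from html.parser import HTMLParser

class ItempropScanner(HTMLParser):
    """Collects (itemprop, text) pairs from microdata-annotated HTML."""
    def __init__(self):
        super().__init__()
        self.props = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        self._current = dict(attrs).get("itemprop")

    def handle_data(self, data):
        if self._current and data.strip():
            self.props[self._current] = data.strip()
            self._current = None

markup = """
<div itemscope itemtype="http://schema.org/Movie">
  <h1 itemprop="name">Avatar</h1>
  <span itemprop="genre">Science fiction</span>
</div>
"""

scanner = ItempropScanner()
scanner.feed(markup)
print(scanner.props)  # {'name': 'Avatar', 'genre': 'Science fiction'}
```

This is the round trip the abstract argues for: the structured data lost when a database is flattened into HTML becomes recoverable again because the markup names each value.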
  20. Campbell, D.G.: Farradane's relational indexing and its relationship to hyperlinking in Alzheimer's information (2012) 0.04
    Abstract
    In an ongoing investigation of the relationship between Jason Farradane's relational indexing principles and concept combination in Web-based information on Alzheimer's Disease, the hyperlinks of three consumer health information websites are examined to see how well the linking relationships map to Farradane's relational operators, as well as to the linking attributes in HTML 5. The links were found to be largely bibliographic in nature, and as such mapped well onto HTML 5. Farradane's operators were less effective at capturing the individual links; nonetheless, the two dimensions of his relational matrix, association and discrimination, reveal a crucial underlying strategy of the emotionally charged mediation between complex information and users who are consulting it under severe stress.
