Search (42 results, page 1 of 3)

  • theme_ss:"Automatisches Abstracting"
  1. Marcu, D.: Automatic abstracting and summarization (2009) 0.02
    0.019154195 = product of:
      0.07661678 = sum of:
        0.07661678 = weight(_text_:have in 735) [ClassicSimilarity], result of:
          0.07661678 = score(doc=735,freq=4.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.34487724 = fieldWeight in 735, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.0546875 = fieldNorm(doc=735)
      0.25 = coord(1/4)
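
Each hit carries a Lucene "explain" tree like the one above. For readers unfamiliar with the notation, the numbers follow ClassicSimilarity's standard TF-IDF formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))). The snippet below is an illustrative cross-check of the first tree, not part of the original record:

```python
import math

# Figures taken from the score breakdown of result 1 (term "have", doc 735).
freq       = 4.0        # termFreq: occurrences of "have" in the field
doc_freq   = 5157       # docFreq: documents containing "have"
max_docs   = 44421      # maxDocs: documents in the index
query_norm = 0.07045517
field_norm = 0.0546875  # length normalization stored for doc 735
coord      = 1 / 4      # one of four query terms matched

tf  = math.sqrt(freq)                          # 2.0
idf = 1 + math.log(max_docs / (doc_freq + 1))  # 3.1531634
query_weight = idf * query_norm                # 0.22215667
field_weight = tf * idf * field_norm           # 0.34487724
print(query_weight * field_weight * coord)     # ~0.019154195, as shown above
```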
    
    Abstract
    After lying dormant for a few decades, the field of automated text summarization has experienced a tremendous resurgence of interest. Recently, many new algorithms and techniques have been proposed for identifying important information in single documents and document collections, and for mapping this information into grammatical, cohesive, and coherent abstracts. Since 1997, annual workshops, conferences, and large-scale comparative evaluations have provided a rich environment for exchanging ideas between researchers in Asia, Europe, and North America. This entry reviews the main developments in the field and provides a guiding map to those interested in understanding the strengths and weaknesses of an increasingly ubiquitous technology.
  2. Kuhlen, R.: Abstracts, abstracting : intellektuelle und maschinelle Verfahren (1990) 0.02
    0.018953284 = product of:
      0.07581314 = sum of:
        0.07581314 = weight(_text_:und in 2332) [ClassicSimilarity], result of:
          0.07581314 = score(doc=2332,freq=4.0), product of:
            0.15626246 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.07045517 = queryNorm
            0.48516542 = fieldWeight in 2332, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.109375 = fieldNorm(doc=2332)
      0.25 = coord(1/4)
    
    Source
    Grundlagen der praktischen Information und Dokumentation. 3. Aufl. Hrsg.: M. Buder u.a. Bd.1
  3. Yang, C.C.; Wang, F.L.: Hierarchical summarization of large documents (2008) 0.02
    0.016756432 = product of:
      0.06702573 = sum of:
        0.06702573 = weight(_text_:have in 2719) [ClassicSimilarity], result of:
          0.06702573 = score(doc=2719,freq=6.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.30170476 = fieldWeight in 2719, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2719)
      0.25 = coord(1/4)
    
    Abstract
    Many automatic text summarization models have been developed over the last few decades. Research in information science has shown that human abstractors select sentences for summaries based on the hierarchical structure of documents; existing automatic summarization models, however, ignore this behavior and treat the document as a flat sequence of sentences when extracting a summary. In general, a document exhibits a well-defined hierarchical structure that can be described in terms of fractals - mathematical objects with a high degree of self-similarity. In this article, we introduce the fractal summarization model based on fractal theory. The important information is captured from the source document by exploring its hierarchical structure and salient features. A condensed version of the document that is informatively close to the source is produced iteratively using the contractive transformation of fractal theory. The fractal summarization model is the first attempt to apply fractal theory to document summarization; it significantly improves the diversity of the summary's information coverage and its precision. User evaluations have been conducted, and the results indicate that fractal summarization is promising and outperforms current summarization techniques that do not consider the hierarchical structure of documents.
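
The quota-allocation idea behind hierarchical summarization can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration of propagating a sentence quota down a document tree in proportion to branch salience; the published model's fractal dimension and iterative contractive transformation are not reproduced here:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A document fragment: a section with children, or a leaf with sentences."""
    weight: float                            # salience of this branch
    children: list["Node"] = field(default_factory=list)
    sentences: list[tuple[float, str]] = field(default_factory=list)  # (salience, text)

def extract(node: Node, quota: int) -> list[str]:
    """Spend the sentence quota recursively, in proportion to branch weight."""
    if not node.children:                    # leaf: keep the most salient sentences
        return [s for _, s in sorted(node.sentences, reverse=True)[:quota]]
    total = sum(c.weight for c in node.children) or 1.0
    picked: list[str] = []
    for child in node.children:
        # Note: rounding may over- or under-spend the quota slightly.
        picked += extract(child, round(quota * child.weight / total))
    return picked
```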
  4. Lam, W.; Chan, K.; Radev, D.; Saggion, H.; Teufel, S.: Context-based generic cross-lingual retrieval of documents and automated summaries (2005) 0.02
    0.016417881 = product of:
      0.065671526 = sum of:
        0.065671526 = weight(_text_:have in 2965) [ClassicSimilarity], result of:
          0.065671526 = score(doc=2965,freq=4.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.29560906 = fieldWeight in 2965, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.046875 = fieldNorm(doc=2965)
      0.25 = coord(1/4)
    
    Abstract
    We develop a context-based generic cross-lingual retrieval model that can deal with different language pairs. Our model considers contexts in the query translation process. Contexts in the query as well as in the documents, based on co-occurrence statistics from different granularities of passages, are exploited. We also investigate cross-lingual retrieval of automatic generic summaries. We have implemented our model for two different cross-lingual settings, namely, retrieving Chinese documents from English queries as well as retrieving English documents from Chinese queries. Extensive experiments have been conducted on a large-scale parallel corpus, enabling studies of retrieval performance for the two cross-lingual settings on both full-length documents and automated summaries.
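
Co-occurrence-based disambiguation of query translations is a standard technique in dictionary-based cross-lingual retrieval. As a rough illustration only (the function and data structures below are hypothetical, not from the paper), one can pick, for each query term, the candidate translation that co-occurs most strongly with the other terms' candidates:

```python
def pick_translations(query_terms: list[str],
                      lexicon: dict[str, list[str]],
                      cooc: dict[frozenset, float]) -> dict[str, str]:
    """For each source term, keep the candidate translation whose summed
    co-occurrence with the other terms' candidates is highest."""
    chosen = {}
    for term in query_terms:
        def context_score(cand: str) -> float:
            return sum(cooc.get(frozenset((cand, other)), 0.0)
                       for t2 in query_terms if t2 != term
                       for other in lexicon[t2])
        chosen[term] = max(lexicon[term], key=context_score)
    return chosen

# Toy English-to-German example: context resolves the ambiguity of "bank".
lexicon = {"bank": ["Ufer", "Bank"], "river": ["Fluss"]}
cooc = {frozenset(("Ufer", "Fluss")): 8.0, frozenset(("Bank", "Fluss")): 1.0}
print(pick_translations(["bank", "river"], lexicon, cooc))  # picks 'Ufer'
```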
  5. Abdi, A.; Idris, N.; Alguliev, R.M.; Aliguliyev, R.M.: Automatic summarization assessment through a combination of semantic and syntactic information for intelligent educational systems (2015) 0.02
    0.016417881 = product of:
      0.065671526 = sum of:
        0.065671526 = weight(_text_:have in 3681) [ClassicSimilarity], result of:
          0.065671526 = score(doc=3681,freq=4.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.29560906 = fieldWeight in 3681, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.046875 = fieldNorm(doc=3681)
      0.25 = coord(1/4)
    
    Abstract
    Summary writing is the process of creating a short version of a source text, and it can serve as a measure of understanding. As grading students' summaries is very time-consuming, computer-assisted assessment can help teachers grade more effectively. Several techniques, such as BLEU, ROUGE, N-gram co-occurrence, Latent Semantic Analysis (LSA), LSA_Ngram, and LSA_ERB, have been proposed to support the automatic assessment of students' summaries. Since these techniques are better suited to long texts, their performance on short summaries is not satisfactory. This paper proposes a specialized method that works well for assessing short summaries: it integrates the semantic relations between words with their syntactic composition, and as a result achieves higher accuracy than the current techniques. Experiments have shown that it is preferable to the existing approaches, and a summary evaluation system based on the proposed method has also been developed.
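
Among the metrics named above, ROUGE-N is the simplest to state: the clipped recall of reference n-grams found in the candidate summary. A minimal textbook implementation, included here only to make the metric concrete (not code from the paper):

```python
from collections import Counter

def ngrams(tokens: list[str], n: int) -> Counter:
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate: str, reference: str, n: int = 2) -> float:
    """ROUGE-N: clipped recall of reference n-grams in the candidate."""
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum(min(count, ref[g]) for g, count in cand.items() if g in ref)
    return overlap / max(sum(ref.values()), 1)

print(rouge_n("the cat sat on the mat", "the cat lay on the mat"))  # 0.6
```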
  6. Ruda, S.: Abstracting: eine Auswahlbibliographie (1992) 0.02
    0.016414026 = product of:
      0.0656561 = sum of:
        0.0656561 = weight(_text_:und in 6671) [ClassicSimilarity], result of:
          0.0656561 = score(doc=6671,freq=12.0), product of:
            0.15626246 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.07045517 = queryNorm
            0.42016557 = fieldWeight in 6671, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0546875 = fieldNorm(doc=6671)
      0.25 = coord(1/4)
    
    Abstract
    This selective bibliography is divided into nine subject areas. The first section contains literature that treats abstracts and abstracting methods in general and surveys the state of research. The next section covers papers that describe the historical development of abstracting. The third part lists the abstracting guidelines of various institutions. Lexical, syntactic, and semantic text condensation methods are the topic of the works presented in section 4. Text structures of abstracts are considered in section 5, and the works of the following subject area deal with the problem of writing abstracts. The seventh section lists so-called 'machine' and machine-assisted abstracting methods. Subsequently, 'machine' and machine-assisted abstracting procedures, abstracts in comparison with their primary texts, and abstracts in general are evaluated. Bibliographies conclude the volume.
  7. Kuhlen, R.: Abstracts, abstracting : intellektuelle und maschinelle Verfahren (1997) 0.02
    0.016245672 = product of:
      0.06498269 = sum of:
        0.06498269 = weight(_text_:und in 869) [ClassicSimilarity], result of:
          0.06498269 = score(doc=869,freq=4.0), product of:
            0.15626246 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.07045517 = queryNorm
            0.41585606 = fieldWeight in 869, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.09375 = fieldNorm(doc=869)
      0.25 = coord(1/4)
    
    Source
    Grundlagen der praktischen Information und Dokumentation: ein Handbuch zur Einführung in die fachliche Informationsarbeit. 4. Aufl. Hrsg.: M. Buder u.a
  8. Over, P.; Dang, H.; Harman, D.: DUC in context (2007) 0.02
    0.015478929 = product of:
      0.061915714 = sum of:
        0.061915714 = weight(_text_:have in 1934) [ClassicSimilarity], result of:
          0.061915714 = score(doc=1934,freq=2.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.2787029 = fieldWeight in 1934, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.0625 = fieldNorm(doc=1934)
      0.25 = coord(1/4)
    
    Abstract
    Recent years have seen increased interest in text summarization with emphasis on evaluation of prototype systems. Many factors can affect the design of such evaluations, requiring choices among competing alternatives. This paper examines several major themes running through three evaluations: SUMMAC, NTCIR, and DUC, with a concentration on DUC. The themes are extrinsic and intrinsic evaluation, evaluation procedures and methods, generic versus focused summaries, single- and multi-document summaries, length and compression issues, extracts versus abstracts, and issues with genre.
  9. Endres-Niggemeyer, B.: Referierregeln und Referate : Abstracting als regelgesteuerter Textverarbeitungsprozeß (1985) 0.01
    0.014983889 = product of:
      0.059935555 = sum of:
        0.059935555 = weight(_text_:und in 6670) [ClassicSimilarity], result of:
          0.059935555 = score(doc=6670,freq=10.0), product of:
            0.15626246 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.07045517 = queryNorm
            0.38355696 = fieldWeight in 6670, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0546875 = fieldNorm(doc=6670)
      0.25 = coord(1/4)
    
    Abstract
    Abstracting rules govern abstracting processes. Content-related prescriptions from three sets of abstracting rules were compared with the corresponding abstracts. The result was unsatisfactory: abstracting rules are partly inconsistent, their specifications are not always appropriate to the subject matter, and they are often unsuitable as practical instructions. Abstracting emerges as an underdetermined thinking and text-processing activity with considerable need for clarification and design. The rules contain too little knowledge about the matters they regulate, and they often prescribe content structures that are too simple and too remote from the material. Ideas for more differentiated abstract structures are developed; they take greater account of how the structure of an abstract depends on the text structure of the original document. Clarifying the abstracting process up to a shared definition of its goal is important for the further development of both intellectual and automatic abstracting.
  10. Kuhlen, R.: In Richtung Summarizing für Diskurse in K3 (2006) 0.01
    0.014983889 = product of:
      0.059935555 = sum of:
        0.059935555 = weight(_text_:und in 67) [ClassicSimilarity], result of:
          0.059935555 = score(doc=67,freq=10.0), product of:
            0.15626246 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.07045517 = queryNorm
            0.38355696 = fieldWeight in 67, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0546875 = fieldNorm(doc=67)
      0.25 = coord(1/4)
    
    Abstract
    The need for summarizing services is demonstrated, in specialist information settings but also in communicative environments (discourses). Summarizing is placed in the context of previous (including automatic) abstracting/extracting, and the current state of research is presented, above all with regard to multi-document summarizing. Summarizing is an important function in discussions in electronic forums that are becoming increasingly complex and extensive; this is shown using the example of the e-learning system K3. The rudimentary summarizing functions of K3 and of the associated K3VIS system are described, and a framework is laid out for a more elaborate, template-oriented form of summarizing that makes use of K3's rich markup functions (roles, discourse types, content types, etc.).
    Source
    Information und Sprache: Beiträge zu Informationswissenschaft, Computerlinguistik, Bibliothekswesen und verwandten Fächern. Festschrift für Harald H. Zimmermann. Herausgegeben von Ilse Harms, Heinz-Dirk Luckhardt und Hans W. Giessen
  11. Endres-Niggemeyer, B.; Jauris-Heipke, S.; Pinsky, S.M.; Ulbricht, U.: Wissen gewinnen durch Wissen : Ontologiebasierte Informationsextraktion (2006) 0.01
    0.0143592805 = product of:
      0.057437122 = sum of:
        0.057437122 = weight(_text_:und in 16) [ClassicSimilarity], result of:
          0.057437122 = score(doc=16,freq=18.0), product of:
            0.15626246 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.07045517 = queryNorm
            0.36756828 = fieldWeight in 16, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0390625 = fieldNorm(doc=16)
      0.25 = coord(1/4)
    
    Abstract
    The ontology-based information extraction reported on here is part of a system for automatic summarization that models the approach of competent humans, on the assumption that people can more readily adopt a system's results if these have been produced with procedures they themselves use. The first application area is bone marrow transplantation (BMT). At the core of the system Summit-BMT (Summarize It in Bone Marrow Transplantation) is an ontology of the domain. It is realized as a MySQL database and supplies human users and system components with knowledge. Summit-BMT supports query formulation with an empirically grounded scenario interface. The retrieval results are pre-selected by text-passage retrieval and then submitted to cognitively grounded agents, which use their knowledge base/ontology to check more precisely whether the propositions of the user query are matched. The relevant text clips from the source document are entered into the scenario form and presented with a link to their occurrence in the original. This article centers on the ontology and its use for knowledge-based information extraction. The ontology database keeps different types of knowledge available in a form that lets them be combined easily: concepts, propositions and their syntactic-semantic schemata, unifiers, paraphrases, and definitions of query scenarios. The system agents, which execute summarization strategies adapted from humans, draw on these. Shortcomings in other processing steps lead to losses, but the quality of the results ultimately stands or falls with the quality of the ontology. First tests of the extraction performance are strikingly positive.
    Source
    Information - Wissenschaft und Praxis. 57(2006) H.6/7, S.301-308
  12. Oh, H.; Nam, S.; Zhu, Y.: Structured abstract summarization of scientific articles : summarization using full-text section information (2023) 0.01
    0.013681569 = product of:
      0.054726277 = sum of:
        0.054726277 = weight(_text_:have in 1890) [ClassicSimilarity], result of:
          0.054726277 = score(doc=1890,freq=4.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.2463409 = fieldWeight in 1890, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.0390625 = fieldNorm(doc=1890)
      0.25 = coord(1/4)
    
    Abstract
    The automatic summarization of scientific articles differs from other text genres because of the structured format and longer text length. Previous approaches have focused on tackling the lengthy nature of scientific articles, aiming to improve the computational efficiency of summarizing long text using a flat, unstructured abstract. However, the structured format of scientific articles and characteristics of each section have not been fully explored, despite their importance. The lack of a sufficient investigation and discussion of various characteristics for each section and their influence on summarization results has hindered the practical use of automatic summarization for scientific articles. To provide a balanced abstract proportionally emphasizing each section of a scientific article, the community introduced the structured abstract, an abstract with distinct, labeled sections. Using this information, in this study, we aim to understand tasks ranging from data preparation to model evaluation from diverse viewpoints. Specifically, we provide a preprocessed large-scale dataset and propose a summarization method applying the introduction, methods, results, and discussion (IMRaD) format reflecting the characteristics of each section. We also discuss the objective benchmarks and perspectives of state-of-the-art algorithms and present the challenges and research directions in this area.
  13. Ercan, G.; Cicekli, I.: Using lexical chains for keyword extraction (2007) 0.01
    0.013544062 = product of:
      0.05417625 = sum of:
        0.05417625 = weight(_text_:have in 1951) [ClassicSimilarity], result of:
          0.05417625 = score(doc=1951,freq=2.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.24386504 = fieldWeight in 1951, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1951)
      0.25 = coord(1/4)
    
    Abstract
    Keywords can be considered condensed versions of documents and short forms of their summaries. In this paper, the problem of automatically extracting keywords from documents is treated as a supervised learning task. A lexical chain holds a set of semantically related words of a text, and it can be said that a lexical chain represents the semantic content of a portion of the text. Although lexical chains have been used extensively in text summarization, their use for the keyword extraction problem has not been fully investigated. In this paper, a keyword extraction technique that uses lexical chains is described, and encouraging results are obtained.
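
As a toy illustration of the chaining idea only (not the authors' algorithm), semantically related words can be grouped greedily with WordNet. The sketch assumes NLTK with the WordNet corpus installed:

```python
from nltk.corpus import wordnet as wn  # assumes the WordNet data is downloaded

def related(w1: str, w2: str) -> bool:
    """Crude relatedness test: a shared synset, or a direct hypernym link."""
    s1, s2 = set(wn.synsets(w1)), set(wn.synsets(w2))
    if s1 & s2:
        return True
    h1 = {h for s in s1 for h in s.hypernyms()}
    h2 = {h for s in s2 for h in s.hypernyms()}
    return bool(s1 & h2) or bool(s2 & h1)

def build_chains(words: list[str]) -> list[list[str]]:
    """Greedily attach each word to the first chain holding a related word."""
    chains: list[list[str]] = []
    for w in words:
        for chain in chains:
            if any(related(w, c) for c in chain):
                chain.append(w)
                break
        else:
            chains.append([w])
    return chains

# 'car' and 'automobile' share a synset, so they land in the same chain;
# longer or denser chains then serve as candidate keyword sources.
print(build_chains(["car", "automobile", "abstract", "summary"]))
```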
  14. Lee, J.-H.; Park, S.; Ahn, C.-M.; Kim, D.: Automatic generic document summarization based on non-negative matrix factorization (2009) 0.01
    0.013544062 = product of:
      0.05417625 = sum of:
        0.05417625 = weight(_text_:have in 3448) [ClassicSimilarity], result of:
          0.05417625 = score(doc=3448,freq=2.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.24386504 = fieldWeight in 3448, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.0546875 = fieldNorm(doc=3448)
      0.25 = coord(1/4)
    
    Abstract
    In existing unsupervised methods, Latent Semantic Analysis (LSA) is used for sentence selection. However, the obtained results are less meaningful, because singular vectors are used as the bases for sentence selection from given documents, and singular vector components can have negative values. We propose a new unsupervised method using Non-negative Matrix Factorization (NMF) to select sentences for automatic generic document summarization. The proposed method uses non-negative constraints, which are more similar to the human cognition process. As a result, the method selects more meaningful sentences for generic document summarization than those selected using LSA.
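
A minimal sketch of the general approach, assuming scikit-learn; the relevance measure here (row sums of the sentence-topic matrix) is a simplification for illustration, not necessarily the authors' exact weighting:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

def nmf_summary(sentences: list[str], topics: int = 2, n_pick: int = 2) -> list[str]:
    """Score each sentence by its accumulated (non-negative) topic weight."""
    X = TfidfVectorizer().fit_transform(sentences)   # sentences x terms, non-negative
    W = NMF(n_components=topics, random_state=0).fit_transform(X)  # sentences x topics
    relevance = W.sum(axis=1)
    top = np.argsort(relevance)[::-1][:n_pick]
    return [sentences[i] for i in sorted(top)]       # keep document order

sentences = [
    "Automatic summarization produces a condensed version of a document.",
    "Non-negative matrix factorization decomposes a matrix into additive topics.",
    "Sentences with high topic weight are selected for the generic summary.",
    "The weather was pleasant yesterday.",
]
print(nmf_summary(sentences))
```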
  15. Hahn, U.: Automatisches Abstracting (2013) 0.01
    0.013538062 = product of:
      0.054152247 = sum of:
        0.054152247 = weight(_text_:und in 1721) [ClassicSimilarity], result of:
          0.054152247 = score(doc=1721,freq=4.0), product of:
            0.15626246 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.07045517 = queryNorm
            0.34654674 = fieldWeight in 1721, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.078125 = fieldNorm(doc=1721)
      0.25 = coord(1/4)
    
    Source
    Grundlagen der praktischen Information und Dokumentation. Handbuch zur Einführung in die Informationswissenschaft und -praxis. 6., völlig neu gefaßte Ausgabe. Hrsg. von R. Kuhlen, W. Semar u. D. Strauch. Begründet von Klaus Laisiepen, Ernst Lutterbeck, Karl-Heinrich Meyer-Uhlenried
  16. Yusuff, A.: Automatisches Indexing and Abstracting : Grundlagen und Beispiele (2002) 0.01
    0.013401995 = product of:
      0.05360798 = sum of:
        0.05360798 = weight(_text_:und in 2577) [ClassicSimilarity], result of:
          0.05360798 = score(doc=2577,freq=2.0), product of:
            0.15626246 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.07045517 = queryNorm
            0.34306374 = fieldWeight in 2577, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.109375 = fieldNorm(doc=2577)
      0.25 = coord(1/4)
    
  17. Endres-Niggemeyer, B.: Bessere Information durch Zusammenfassen aus dem WWW (1999) 0.01
    0.013264537 = product of:
      0.053058147 = sum of:
        0.053058147 = weight(_text_:und in 496) [ClassicSimilarity], result of:
          0.053058147 = score(doc=496,freq=6.0), product of:
            0.15626246 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.07045517 = queryNorm
            0.33954507 = fieldWeight in 496, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0625 = fieldNorm(doc=496)
      0.25 = coord(1/4)
    
    Abstract
    Using the example of bone marrow transplantation, a medical specialty, the following shows how a large part of users' knowledge-acquisition effort can be taken over by summarizing search results from the Web with respect to their query. In time-critical situations, which are everyday occurrences in diagnosis and therapy, this makes the uptake of new knowledge possible. An overview of the state of the art in text summarization and ontology development is followed by a system sketch in which information search on the WWW is supplemented by a cognitively grounded summarization system. For this, a domain ontology is proposed that organizes and represents the required knowledge.
  18. Moens, M.-F.; Uyttendaele, C.; Dumotier, J.: Abstracting of legal cases : the potential of clustering based on the selection of representative objects (1999) 0.01
    0.011609196 = product of:
      0.046436783 = sum of:
        0.046436783 = weight(_text_:have in 3944) [ClassicSimilarity], result of:
          0.046436783 = score(doc=3944,freq=2.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.20902719 = fieldWeight in 3944, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.046875 = fieldNorm(doc=3944)
      0.25 = coord(1/4)
    
    Abstract
    The SALOMON project automatically summarizes Belgian criminal cases in order to improve access to the large number of existing and future court decisions. SALOMON extracts text units from the case text to form a case summary. Such a summary facilitates the rapid determination of the relevance of the case and may be employed in text search. An important part of the research concerns the development of techniques for the automatic recognition of representative text paragraphs (or sentences) in texts of unrestricted domains. These techniques are employed to eliminate redundant material in the case texts and to identify informative text paragraphs that are relevant to include in the case summary. An evaluation on a test set of 700 criminal cases demonstrates that the algorithms have application potential for automatic indexing, abstracting, and text linkage.
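
As a rough illustration of clustering with representative objects (a sketch of the general technique, assuming scikit-learn, not the SALOMON implementation), one can cluster tf-idf vectors of text units and keep the unit nearest each cluster centre:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def representatives(units: list[str], k: int = 3) -> list[str]:
    """Cluster text units and return, per cluster, the unit nearest the centre."""
    X = TfidfVectorizer().fit_transform(units)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    picks = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = km.transform(X[members])[:, c]   # distances of members to centre c
        picks.append(units[members[np.argmin(dists)]])
    return picks
```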
  19. Craven, T.C.: Abstracts produced using computer assistance (2000) 0.01
    0.011609196 = product of:
      0.046436783 = sum of:
        0.046436783 = weight(_text_:have in 5809) [ClassicSimilarity], result of:
          0.046436783 = score(doc=5809,freq=2.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.20902719 = fieldWeight in 5809, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.046875 = fieldNorm(doc=5809)
      0.25 = coord(1/4)
    
    Abstract
    Experimental subjects wrote abstracts using a simplified version of the TEXNET abstracting assistance software. In addition to the full text, subjects were presented with either keywords or phrases extracted automatically. The resulting abstracts, and the times taken, were recorded automatically; some additional information was gathered by oral questionnaire. Selected abstracts produced were evaluated on various criteria by independent raters. Results showed considerable variation among subjects, but 37% found the keywords or phrases 'quite' or 'very' useful in writing their abstracts. Statistical analysis failed to support several hypothesized relations: phrases were not viewed as significantly more helpful than keywords; and abstracting experience did not correlate with originality of wording, approximation of the author abstract, or greater conciseness. Requiring further study are some unanticipated strong correlations including the following: Windows experience and writing an abstract like the author's; experience reading abstracts and thinking one had written a good abstract; gender and abstract length; gender and use of words and phrases from the original text. Results have also suggested possible modifications to the TEXNET software
  20. Sparck Jones, K.: Automatic summarising : the state of the art (2007) 0.01
    0.011609196 = product of:
      0.046436783 = sum of:
        0.046436783 = weight(_text_:have in 1932) [ClassicSimilarity], result of:
          0.046436783 = score(doc=1932,freq=2.0), product of:
            0.22215667 = queryWeight, product of:
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.07045517 = queryNorm
            0.20902719 = fieldWeight in 1932, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1531634 = idf(docFreq=5157, maxDocs=44421)
              0.046875 = fieldNorm(doc=1932)
      0.25 = coord(1/4)
    
    Abstract
    This paper reviews research on automatic summarising in the last decade. This work has grown, stimulated by technology and by evaluation programmes. The paper uses several frameworks to organise the review, for summarising itself, for the factors affecting summarising, for systems, and for evaluation. The review examines the evaluation strategies applied to summarising, the issues they raise, and the major programmes. It considers the input, purpose and output factors investigated in recent summarising research, and discusses the classes of strategy, extractive and non-extractive, that have been explored, illustrating the range of systems built. The conclusions drawn are that automatic summarisation has made valuable progress, with useful applications, better evaluation, and more task understanding. But summarising systems are still poorly motivated in relation to the factors affecting them, and evaluation needs taking much further to engage with the purposes summaries are intended to serve and the contexts in which they are used.

Languages

  • e (English) 26
  • d (German) 16

Types

  • a 38
  • el 2
  • r 2
  • m 1
  • x 1