-
Hartley, J.; Betts, L.: ¬The effects of spacing and titles on judgments of the effectiveness of structured abstracts (2007)
0.05
0.048477694 = product of:
0.19391078 = sum of:
0.19391078 = weight(_text_:judge in 2325) [ClassicSimilarity], result of:
0.19391078 = score(doc=2325,freq=2.0), product of:
0.45402667 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.058726728 = queryNorm
0.42709115 = fieldWeight in 2325, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0390625 = fieldNorm(doc=2325)
0.25 = coord(1/4)
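The explain trees reproduced throughout this listing all follow the same Lucene ClassicSimilarity arithmetic: score = coord x queryWeight x fieldWeight, with queryWeight = idf x queryNorm, fieldWeight = sqrt(tf) x idf x fieldNorm, and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal sketch reproducing the numbers of the tree above (queryNorm is taken as given, since it depends on the full query and cannot be recomputed from a single tree):

```python
import math

# Values copied from the explain tree above.
doc_freq, max_docs = 52, 44421
query_norm = 0.058726728   # supplied by the query; not recomputable here
field_norm = 0.0390625     # index-time length normalization of the field
freq = 2.0                 # occurrences of "judge" in the matched field
coord = 0.25               # 1 of 4 query clauses matched

idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # ~7.731176
query_weight = idf * query_norm                    # ~0.45402667
field_weight = math.sqrt(freq) * idf * field_norm  # ~0.42709115
score = coord * query_weight * field_weight        # ~0.048477694
print(round(score, 9))
```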
- Abstract
- Previous research assessing the effectiveness of structured abstracts has been limited in two respects. First, when comparing structured abstracts with traditional ones, investigators usually have rewritten the original abstracts, and thus confounded changes in the layout with changes in both the wording and the content of the text. Second, investigators have not always included the title of the article together with the abstract when asking participants to judge the quality of the abstracts, yet titles alert readers to the meaning of the materials that follow. The aim of this research was to redress these limitations. Three studies were carried out. Four versions of each of four abstracts were prepared. These versions consisted of structured/traditional abstracts matched in content, with and without titles. In Study 1, 64 undergraduates each rated one of these abstracts on six separate rating scales. In Study 2, 225 academics and research workers rated the abstracts electronically, and in Study 3, 252 information scientists did likewise. In Studies 1 and 3, the respondents rated the structured abstracts significantly more favorably than they did the traditional ones, but the presence or absence of titles had no effect on their judgments. In Study 2, no main effects were observed for structure or for titles. The layout of the text, together with the subheadings, contributed to the higher ratings of effectiveness for structured abstracts, but the presence or absence of titles had no clear effects in these experimental studies. It is likely that this spatial organization, together with the greater amount of information normally provided in structured abstracts, explains why structured abstracts are generally judged to be superior to traditional ones.
-
Xu, Y.; Yin, H.: Novelty and topicality in interactive information retrieval (2008)
0.05
0.048477694 = product of:
0.19391078 = sum of:
0.19391078 = weight(_text_:judge in 2355) [ClassicSimilarity], result of:
0.19391078 = score(doc=2355,freq=2.0), product of:
0.45402667 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.058726728 = queryNorm
0.42709115 = fieldWeight in 2355, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0390625 = fieldNorm(doc=2355)
0.25 = coord(1/4)
- Abstract
- The information science research community is characterized by a paradigm split, with a system-centered cluster working on information retrieval (IR) algorithms and a user-centered cluster working on user behavior. The two clusters rarely leverage each other's insights and strengths. One major suggestion from user-centered studies is to treat the relevance judgment of documents as a subjective, multidimensional, and dynamic concept rather than as objective and based on topicality alone. This study explores the possibility of enhancing users' topicality-based relevance judgment with subjective novelty judgment in interactive IR. A set of systems is developed which differ in the way the novelty judgment is incorporated. In particular, this study compares systems which assume that users' novelty judgment is directed to a certain subtopic area with those which assume that users' novelty judgment is undirected. It also compares systems which assume that users judge a document on topicality first and then on novelty, in a stepwise, noncompensatory fashion, with those which assume that users consider topicality and novelty simultaneously and as compensatory to each other. The user study shows that systems assuming directed novelty in general achieve higher relevance precision, but systems assuming a stepwise judgment process and those assuming a compensatory judgment process are not significantly different.
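The two judgment models contrasted in the abstract can be made concrete with a small ranking sketch. This illustrates only the noncompensatory-versus-compensatory logic, not the authors' actual systems; the score fields, threshold, and weight are hypothetical:

```python
def stepwise_rank(docs, topic_threshold=0.5):
    # Noncompensatory: topicality acts as a hard filter first,
    # then novelty alone orders the surviving documents.
    passed = [d for d in docs if d["topicality"] >= topic_threshold]
    return sorted(passed, key=lambda d: d["novelty"], reverse=True)

def compensatory_rank(docs, alpha=0.5):
    # Compensatory: a weighted sum lets high novelty offset low topicality.
    return sorted(docs, key=lambda d: alpha * d["topicality"]
                  + (1 - alpha) * d["novelty"], reverse=True)
```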
-
Hartley, J.; Betts, L.: Revising and polishing a structured abstract : is it worth the time and effort? (2008)
0.05
0.048477694 = product of:
0.19391078 = sum of:
0.19391078 = weight(_text_:judge in 3362) [ClassicSimilarity], result of:
0.19391078 = score(doc=3362,freq=2.0), product of:
0.45402667 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.058726728 = queryNorm
0.42709115 = fieldWeight in 3362, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0390625 = fieldNorm(doc=3362)
0.25 = coord(1/4)
- Abstract
- Many writers of structured abstracts spend a good deal of time revising and polishing their texts - but is it worth it? Do readers notice the difference? In this paper we report three studies of readers using rating scales to judge (electronically) the clarity of an original and a revised abstract, both as a whole and in their constituent parts. In Study 1, with approximately 250 academics and research workers, we found some significant differences in favor of the revised abstract, but in Study 2, with approximately 210 information scientists, we found no significant effects. Pooling the data from Studies 1 and 2 in Study 3, however, led to significant differences at a higher probability level between perceptions of the original and revised abstract as a whole, and between the same components as found in Study 1. These results indicate that the revised abstract as a whole, as well as certain specific components of it, was judged significantly clearer than the original. In short, these experiments show that readers can and do perceive differences between original and revised texts - sometimes - and that such revision is therefore worth the time and effort.
-
Lee, K.C.; Lee, N.; Li, H.: ¬A particle swarm optimization-driven cognitive map approach to analyzing information systems project risk (2009)
0.05
0.048477694 = product of:
0.19391078 = sum of:
0.19391078 = weight(_text_:judge in 3855) [ClassicSimilarity], result of:
0.19391078 = score(doc=3855,freq=2.0), product of:
0.45402667 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.058726728 = queryNorm
0.42709115 = fieldWeight in 3855, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0390625 = fieldNorm(doc=3855)
0.25 = coord(1/4)
- Abstract
- Project risks encompass both internal and external factors that are interrelated, influencing one another in a causal way. Identifying those factors and their causal relationships is very important for reducing project risk. In the past, most IT companies have evaluated project risk by roughly measuring the related factors while ignoring the important fact that there are complicated causal relationships among them. There is a strong need for more effective mechanisms to systematically judge all factors related to project risk and identify the causal relationships among those factors. To accomplish this research objective, our study adopts a cognitive map (CM)-based mechanism called MACOM (Multi-Agents COgnitive Map), in which the CM is represented by a set of agents, each embedded with basic intelligence to determine its causal relationships with other agents. CM has proven especially useful in solving unstructured problems with many variables and causal relationships; however, simply applying CM to project risk management is not enough, because most causal relationships are hard to identify and measure exactly. To overcome this problem, we have borrowed the multi-agent metaphor, in which project risk is explained through the interaction of the agents. Such an approach offers a new computational capability for resolving complicated decision problems. Using the MACOM framework, we show that IS project risk management can be addressed systematically and intelligently, giving IS project managers robust decision support.
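The abstract gives no formal definition of MACOM, but a cognitive map is conventionally a weighted directed graph over which activation propagates until it settles. A minimal sketch of that propagation step, with a hypothetical three-factor risk map (the factor names and weights are invented for illustration):

```python
import numpy as np

def cm_step(state, W):
    # One propagation step over a cognitive map: each factor's new
    # activation is a squashed sum of the causal influences on it.
    return 1.0 / (1.0 + np.exp(-(W @ state)))  # sigmoid squashing

# Hypothetical factors: [scope creep, staff turnover, schedule risk].
# W[i, j] is the causal weight of factor j on factor i.
W = np.array([[0.0, 0.4, 0.0],
              [0.0, 0.0, 0.2],
              [0.7, 0.5, 0.0]])
state = np.array([0.8, 0.3, 0.1])   # initial risk-factor activations
for _ in range(10):                 # iterate toward a fixed point
    state = cm_step(state, W)
print(state)                        # settled activation of each factor
```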
-
Adler, R.; Ewing, J.; Taylor, P.: Citation statistics : A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS) (2008)
0.04
0.04113469 = product of:
0.16453876 = sum of:
0.16453876 = weight(_text_:judge in 3417) [ClassicSimilarity], result of:
0.16453876 = score(doc=3417,freq=4.0), product of:
0.45402667 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.058726728 = queryNorm
0.36239886 = fieldWeight in 3417, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.0234375 = fieldNorm(doc=3417)
0.25 = coord(1/4)
- Abstract
- Using citation data to assess research ultimately means using citation-based statistics to rank things: journals, papers, people, programs, and disciplines. The statistical tools used to rank these things are often misunderstood and misused. - For journals, the impact factor is most often used for ranking. This is a simple average derived from the distribution of citations for a collection of articles in the journal. The average captures only a small amount of information about that distribution, and it is a rather crude statistic. In addition, there are many confounding factors when judging journals by citations, and any comparison of journals requires caution when using impact factors. Using the impact factor alone to judge a journal is like using weight alone to judge a person's health. - For papers, instead of relying on the actual count of citations to compare individual papers, people frequently substitute the impact factor of the journals in which the papers appear. They believe that higher impact factors must mean higher citation counts. But this is often not the case! This is a pervasive misuse of statistics that needs to be challenged whenever and wherever it occurs. - For individual scientists, complete citation records can be difficult to compare. As a consequence, there have been attempts to find simple statistics that capture the full complexity of a scientist's citation record with a single number. The most notable of these is the h-index, which seems to be gaining in popularity. But even a casual inspection of the h-index and its variants shows that these are naive attempts to understand complicated citation records. While they capture a small amount of information about the distribution of a scientist's citations, they lose crucial information that is essential for the assessment of research.
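Both statistics criticized in the report are easy to state exactly, which makes the criticism concrete: the impact factor is a plain mean over a highly skewed citation distribution, and the h-index compresses a whole citation record into one integer. A sketch of both definitions (the sample counts are invented):

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    # A simple average: citations received this year by items published
    # in the previous two years, divided by the number of those items.
    return citations_to_prev_two_years / citable_items_prev_two_years

def h_index(citation_counts):
    # Largest h such that at least h papers have at least h citations each.
    counts = sorted(citation_counts, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

print(impact_factor(230, 115))        # 2.0
print(h_index([10, 8, 5, 4, 3, 0]))   # 4
```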
-
Kim, W.; Wilbur, W.J.: Corpus-based statistical screening for content-bearing terms (2001)
0.04
0.038782157 = product of:
0.15512863 = sum of:
0.15512863 = weight(_text_:judge in 188) [ClassicSimilarity], result of:
0.15512863 = score(doc=188,freq=2.0), product of:
0.45402667 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.058726728 = queryNorm
0.34167293 = fieldWeight in 188, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.03125 = fieldNorm(doc=188)
0.25 = coord(1/4)
- Abstract
- Kim and Wilbur present three techniques for the algorithmic identification in text of content-bearing terms and phrases intended for human use as entry points or hyperlinks. Using a set of 1,075 terms from MEDLINE evaluated on a zero-to-four scale, from stop word to definite content word, they evaluate the ranked lists of their three methods based on their placement of content words in the top ranks. Data consist of the natural language elements of 304,057 MEDLINE records from 1996 and 173,252 Wall Street Journal records from the TIPSTER collection. Phrases are extracted by breaking at punctuation marks and stop words, normalized by lowercasing, replacement of nonalphanumerics with spaces, and the reduction of multiple spaces. In the "strength of context" approach, each document is a vector of binary values for each word or word pair. The words or word pairs are removed from all documents, and the Robertson-Sparck Jones relevance weight for each term is computed; negative weights are replaced with zero, those below a randomness threshold are ignored, and the remainder are summed for each document to yield a document score; the term is then assigned the average score of the documents in which it occurs. The average of these word scores is assigned to the original phrase. The "frequency clumping" approach defines a random phrase as one whose distribution among documents is Poisson in character. A p-value, the probability that a phrase's frequency of occurrence would be equal to or less than Poisson expectations, is computed, and a score assigned which is the negative log of that value. In the "database comparison" approach, if a phrase occurring in a document allows prediction that the document is in MEDLINE rather than in the Wall Street Journal, it is considered to be content bearing for MEDLINE. The score is computed by dividing the number of occurrences of the term in MEDLINE by occurrences in the Journal, and taking the product of all these values. The one hundred top- and bottom-ranked phrases that occurred in at least 500 documents were collected for each method; the union set had 476 phrases. A second selection was made of two-word phrases, each occurring in only three documents, with a union of 599 phrases. A judge then rated the two sets of terms for subject specificity on a 0-to-4 scale. Precision was the average subject specificity of the first r ranks, recall the fraction of the subject-specific phrases in the first r ranks, and eleven-point average precision was used as a summary measure. All three methods move content-bearing terms forward in the lists, as does using the sum of the logs of the three methods' scores.
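Of the three screening methods, the "frequency clumping" score is the most self-contained. Under one reading of the description above (the abstract does not spell out how the Poisson expectation is estimated, so that parameter is taken as given), it can be sketched as:

```python
import math

def poisson_cdf(k, lam):
    # P(X <= k) for X ~ Poisson(lam)
    return sum(math.exp(-lam) * lam ** i / math.factorial(i)
               for i in range(int(k) + 1))

def clumping_score(doc_freq, expected_doc_freq):
    # A content-bearing phrase "clumps": it occurs in fewer documents
    # than a random Poisson scattering of its occurrences would predict.
    # The score is the negative log of that tail probability.
    p = poisson_cdf(doc_freq, expected_doc_freq)
    return -math.log(max(p, 1e-300))

print(clumping_score(doc_freq=12, expected_doc_freq=40.0))  # strong clumping
```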
-
Spero, S.: LCSH is to thesaurus as doorbell is to mammal : visualizing structural problems in the Library of Congress Subject Headings (2008)
0.04
0.038782157 = product of:
0.15512863 = sum of:
0.15512863 = weight(_text_:judge in 3659) [ClassicSimilarity], result of:
0.15512863 = score(doc=3659,freq=2.0), product of:
0.45402667 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.058726728 = queryNorm
0.34167293 = fieldWeight in 3659, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.03125 = fieldNorm(doc=3659)
0.25 = coord(1/4)
- Abstract
- The Library of Congress Subject Headings (LCSH) has been developed over the course of more than a century, predating the semantic web by some time. Until 1986, the only concept-to-concept relationship available was an undifferentiated "See Also" reference, which was used for both associative (RT) and hierarchical (BT/NT) connections. In that year, in preparation for the first release of the headings in machine-readable MARC Authorities form, an attempt was made to automatically convert these "See Also" links into the standardized thesaural relations. Unfortunately, the rule used to determine the type of reference to generate relied on the presence of symmetric links to detect associatively related terms; "See Also" references that were present in only one of the related terms were assumed to be hierarchical. This left the process vulnerable to inconsistent use of references in the pre-conversion data, with a marked bias towards promoting relationships to hierarchical status. The Library of Congress was aware that the results of the conversion contained many inconsistencies, and intended to validate and correct the results over the course of time. Unfortunately, twenty years later, less than 40% of the converted records have been evaluated. The converted records, being the earliest encountered during the Library's cataloging activities, represent the most basic concepts within LCSH; errors in the syndetic structure of these records affect far more subordinate concepts than errors nearer the periphery. Worse, a policy of patterning new headings after pre-existing ones leads to structural errors arising from the conversion process being replicated in these newer headings, perpetuating and exacerbating the errors. As LCSH prepares for its second great conversion, from MARC to SKOS, it is critical to address these structural problems. As part of the work on converting the headings into SKOS, I have experimented with different visualizations of the tangled web of broader terms embedded in LCSH. This poster illustrates several of these renderings, shows how they can help users judge which relationships might not be correct, and shows exactly how Doorbells and Mammals are related.
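The flawed 1986 conversion rule is simple enough to state in a few lines. A minimal sketch of the heuristic as described above (the link data is hypothetical, and the direction assigned to the "hierarchical" case is left as the conversion left it, unverified):

```python
def convert_see_also(refs):
    # refs: set of directed (from_term, to_term) "See Also" links.
    # Symmetric pairs were converted to associative (RT) relations;
    # one-way links were assumed to be hierarchical (BT) - the bias
    # that promoted many merely associative links to hierarchy.
    related, broader = set(), set()
    for a, b in refs:
        if (b, a) in refs:
            related.add(frozenset((a, b)))   # RT
        else:
            broader.add((a, b))              # assumed BT, often wrongly
    return related, broader

refs = {("Doorbells", "Bells"), ("Bells", "Doorbells"),  # symmetric -> RT
        ("Doorbells", "Signaling devices")}              # one-way -> "BT"
print(convert_see_also(refs))
```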
-
White, H.D.: Relevance in theory (2009)
0.04
0.038782157 = product of:
0.15512863 = sum of:
0.15512863 = weight(_text_:judge in 859) [ClassicSimilarity], result of:
0.15512863 = score(doc=859,freq=2.0), product of:
0.45402667 = queryWeight, product of:
7.731176 = idf(docFreq=52, maxDocs=44421)
0.058726728 = queryNorm
0.34167293 = fieldWeight in 859, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
7.731176 = idf(docFreq=52, maxDocs=44421)
0.03125 = fieldNorm(doc=859)
0.25 = coord(1/4)
- Abstract
- Relevance is the central concept in information science because of its salience in designing and evaluating literature-based answering systems. It is salient when users seek information through human intermediaries, such as reference librarians, but becomes even more so when systems are automated and users must navigate them on their own. Designers of classic precomputer systems of the nineteenth and twentieth centuries appear to have been no less concerned with relevance than the information scientists of today. The concept has, however, proved difficult to define and operationalize. A common belief is that it is a relation between a user's request for information and the documents the system retrieves in response. Documents might be considered retrieval-worthy because they: 1) constitute evidence for or against a claim; 2) answer a question; or 3) simply match the request in topic. In practice, literature-based answering makes use of term-matching technology, and most evaluation of relevance has involved topical match as the primary criterion for acceptability. The standard table for evaluating the relation of retrieved documents to a request has only the values "relevant" and "not relevant," yet many analysts hold that relevance admits of degrees. Moreover, many analysts hold that users decide relevance on more dimensions than topical match. Who then can validly judge relevance? Is it only the person who put the request and who can evaluate a document on multiple dimensions? Or can surrogate judges perform this function on the basis of topicality? Such questions arise in a longstanding debate on whether relevance is objective or subjective. One proposal has been to reframe the debate in terms of relevance theory (imported from linguistic pragmatics), which makes relevance increase with a document's valuable cognitive effects and decrease with the effort needed to process it. This notion allows degree of topical match to contribute to relevance but allows other considerations to contribute as well. Since both cognitive effects and processing effort will differ across users, they can be taken as subjective, but users' decisions can also be objectively evaluated if the logic behind them is made explicit. Relevance seems problematical because the considerations that lead people to accept documents in literature searches, or to use them later in contexts such as citation, are seldom fully revealed. Once they are revealed, relevance may be seen as not only multidimensional and dynamic, but also understandable.
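The relevance-theoretic proposal mentioned above reduces to a simple qualitative relation: relevance rises with positive cognitive effects and falls with the processing effort needed to derive them. One common gloss, not a formula from the chapter, treats it as a ratio:

```python
def relevance(cognitive_effects, processing_effort):
    # Sperber & Wilson-style gloss: more effects -> more relevant,
    # more effort needed to extract them -> less relevant.
    return cognitive_effects / processing_effort

# A document with fewer effects can still win if it is easier to process.
print(relevance(6.0, 2.0))   # 3.0
print(relevance(8.0, 4.0))   # 2.0
```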
-
Emerging frameworks and methods : Proceedings of the Fourth International Conference on the Conceptions of Library and Information Science (CoLIS4), Seattle, WA, July 21 - 25, 2002 (2002)
0.03
0.02865915 = product of:
0.1146366 = sum of:
0.1146366 = weight(_text_:harmon in 1055) [ClassicSimilarity], result of:
0.1146366 = score(doc=1055,freq=2.0), product of:
0.55196565 = queryWeight, product of:
9.398883 = idf(docFreq=9, maxDocs=44421)
0.058726728 = queryNorm
0.20768793 = fieldWeight in 1055, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
9.398883 = idf(docFreq=9, maxDocs=44421)
0.015625 = fieldNorm(doc=1055)
0.25 = coord(1/4)
- Content
- LIS research and evaluation methodologies fell under the same scrutiny and systematization, particularly in the presentations employing multiple and mixed methodologies. Jaana Kekäläinen's and Kalervo Järvelin's proposal for a framework of laboratory information retrieval evaluation measures, applied along with analyses of information seeking and work task contexts, employed just such a mix. Marcia Bates pulled together Bradford's Law of Scattering of decreasingly relevant information sources and three information searching techniques (browsing, directed searching, and following links) to pose the question: what are the optimum searching techniques for the different regions of information concentration? Jesper Schneider and Pia Borlund applied bibliometric methods (document co-citation, bibliographic coupling, and co-word analysis) to augment manual thesaurus construction and maintenance. Fredrik Åström examined document keyword co-occurrence measurement compared to, and then combined with, bibliometric co-citation analysis to map LIS concept spaces. Ian Ruthven, Mounia Lalmas, and Keith van Rijsbergen compared system-supplied query expansion terms with interactive user query expansion, incorporating both partial relevance assessment feedback (how relevant a document is) and ostensive relevance feedback (measuring when a document is assessed as relevant over time). Scheduled in the midst of the presentations were two stimulating panel and audience discussions. The first panel, chaired by Glynn Harmon, explored the current re-positioning of many library and information science schools, which are renaming themselves to eliminate the "library" word and emphasize the "information" word (as in "School of Information," "Information School," and schools of "Information Studies"). Panelists Marcia Bates, Harry Bruce, Toni Carbo, Keith Belton, and Andrew Dillon presented the reasons for name changes in their own information programs, which include curricular change and expansion beyond a "stereotypical" library focus, broader contemporary theoretical approaches to information, new clientele and markets for information services and professionals, new media formats and delivery models, and new interdisciplinary student and faculty recruitment from crossover fields. Sometimes criticized for over-broadness and ambiguity, and feared by library practitioners who were trained in more traditional library schools, renaming schools both results from and occasions a renewed examination of the definitions and boundaries of the field as a whole and of the educational and research missions of individual schools.
-
Sprache - Kognition - Kultur : Sprache zwischen mentaler Struktur und kultureller Prägung. Vorträge der Jahrestagung 2007 des Instituts für Deutsche Sprache (2008)
0.02
0.021647282 = product of:
0.08658913 = sum of:
0.08658913 = weight(_text_:und in 1143) [ClassicSimilarity], result of:
0.08658913 = score(doc=1143,freq=92.0), product of:
0.13024996 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.058726728 = queryNorm
0.66479194 = fieldWeight in 1143, product of:
9.591663 = tf(freq=92.0), with freq of:
92.0 = termFreq=92.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.03125 = fieldNorm(doc=1143)
0.25 = coord(1/4)
- Abstract
- This yearbook of the Institut für Deutsche Sprache is dedicated to the Year of the Humanities and examines, from an interdisciplinary perspective, the interplay of the cultural turn and the linguistic turn. The contributions from linguistics, cultural and cognitive science, and literary and historical studies aim to recall the cultural-studies traditions of linguistics and, at the same time, to document how linguistics connects to modern research directions in cultural studies: hermeneutics, rhetoric and lexicography, cognitive theory, and discourse analysis are discussed from a linguistic perspective. The contributions also illustrate the consequences of the linguistic turn in the neighboring disciplines, using literary studies and historiography as examples. Overall, the volume presents the spectrum of foundations, theories, and methods of a culturally oriented linguistics, together with application-oriented examples.
- BK
- 17.10 Sprache in Beziehung zu anderen Bereichen der Wissenschaft und Kultur
18.00 Einzelne Sprachen und Literaturen allgemein
- RVK
- ES 360: Kultur- und Sozialwissenschaften / Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Spezialbereiche der allgemeinen Sprachwissenschaft
ER 300: Kongressberichte, Sammelwerke (verschiedener Autoren) / Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Allgemeine Sprachwissenschaft
ER 940: Sprechen und Denken, Kompetenz und Performanz, Pragmatik / Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Allgemeine Sprachwissenschaft
ES 110: Sprache und Kultur / Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Spezialbereiche der allgemeinen Sprachwissenschaft
-
Schüling, H.: ¬Die Mechanisierung und Automation der erkennenden Akte und Operationen (2005)
0.02
0.021632569 = product of:
0.086530276 = sum of:
0.086530276 = weight(_text_:und in 5221) [ClassicSimilarity], result of:
0.086530276 = score(doc=5221,freq=30.0), product of:
0.13024996 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.058726728 = queryNorm
0.66434014 = fieldWeight in 5221, product of:
5.477226 = tf(freq=30.0), with freq of:
30.0 = termFreq=30.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=5221)
0.25 = coord(1/4)
- Abstract
- This volume 8 investigates the mechanization and automation of the principal acts and operations of cognition, in their genesis and in their epistemological significance. The study draws on specialized treatments in the history of technology, on reports in science and technology journalism, and on brochures of automation manufacturers, together with inspection of the devices and machines themselves. The enormous body of material is organized by the groups of individual cognitive acts and operations: the perceptive, storing, and inventive acts, and the linguistic, mathematical, and knowledge-based deductive operations. For each act and operation the historical development is sketched and the principal automata are presented through illustrative examples. The synthesis yields an overview of one of the most transformative movements in the most recent phase of the evolution of cognition.
- Footnote
- Volume 8 of: System und Evolution des menschlichen Erkennens: Ein Handbuch der evolutionären Erkenntnistheorie
- Series
- Philosophische Texte und Studien; Bd 46,8
-
Schmitz, K.-D.: Wörterbuch, Thesaurus, Terminologie, Ontologie : Was tragen Terminologiewissenschaft und Informationswissenschaft zur Wissensordnung bei? (2006)
0.02
0.021632569 = product of:
0.086530276 = sum of:
0.086530276 = weight(_text_:und in 75) [ClassicSimilarity], result of:
0.086530276 = score(doc=75,freq=30.0), product of:
0.13024996 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.058726728 = queryNorm
0.66434014 = fieldWeight in 75, product of:
5.477226 = tf(freq=30.0), with freq of:
30.0 = termFreq=30.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=75)
0.25 = coord(1/4)
- Abstract
- Technical writing, specialized translation, and terminology work require methods and tools for managing and using (technical) specialized vocabulary; the field of information and documentation develops and uses systems that make information and knowledge manageable, accessible, and retrievable. In practice, the collections of mostly specialized-language information compiled and used in these application areas are often referred to indiscriminately as glossaries, dictionaries, lexicons, vocabularies, nomenclatures, thesauri, terminologies, or ontologies. This contribution sets out the differences and commonalities among these types of ordered knowledge collections, also addressing the specific methods and paradigms of terminology science and of information science.
- Source
- Information und Sprache: Beiträge zu Informationswissenschaft, Computerlinguistik, Bibliothekswesen und verwandten Fächern. Festschrift für Harald H. Zimmermann. Herausgegeben von Ilse Harms, Heinz-Dirk Luckhardt und Hans W. Giessen
-
Zillmann, H.: OSIRIS und eLib : Information Retrieval und Search Engines in Full-text Databases (2001)
0.02
0.020186191 = product of:
0.080744766 = sum of:
0.080744766 = weight(_text_:und in 6937) [ClassicSimilarity], result of:
0.080744766 = score(doc=6937,freq=20.0), product of:
0.13024996 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.058726728 = queryNorm
0.6199216 = fieldWeight in 6937, product of:
4.472136 = tf(freq=20.0), with freq of:
20.0 = termFreq=20.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0625 = fieldNorm(doc=6937)
0.25 = coord(1/4)
- Abstract
- OSIRIS and ELIB are projects at the University of Osnabrück funded by the Deutsche Forschungsgemeinschaft (DFG) and the Niedersächsische Ministerium für Wissenschaft und Kultur (MWK). They deal with intuitive, natural-language retrieval systems and with questions of indexing large full-text databases using this technique. The development work has shown that inherently costly and complex procedures for the syntactic-semantic analysis and weighting of textual phrases can be embedded in relational databases for mass data, and these can now be used in production.
- Source
- Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 54(2001) H.1, S.55-62
-
Müller, M.: ¬Das Fremde und die Medien : interkulturelle Vergleiche der Darstellung von Ethnizität im öffentlich-rechtlichen Fernsehen und deren Rezeption in den Metropolen Hamburg und Sydney (2004)
0.02
0.020186191 = product of:
0.080744766 = sum of:
0.080744766 = weight(_text_:und in 4717) [ClassicSimilarity], result of:
0.080744766 = score(doc=4717,freq=20.0), product of:
0.13024996 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.058726728 = queryNorm
0.6199216 = fieldWeight in 4717, product of:
4.472136 = tf(freq=20.0), with freq of:
20.0 = termFreq=20.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0625 = fieldNorm(doc=4717)
0.25 = coord(1/4)
- Abstract
- After a historical outline of public-service broadcasting in Germany and Australia, the study examines the program structures and guidelines that relate to the ethnic diversity of the two countries, as well as the multicultural society of both countries and the role of the media in integration. Selected formats and their programs are analyzed for their multicultural content, and the results of a reception study conducted in Sydney and Hamburg are compared.
- Imprint
- Hamburg : Hochschule für Angewandte Wissenschaften, FB Bibliothek und Information
-
Weisel, L.; Vogts, I.; Bürk, K.: Mittler zwischen Content und Markt : Die neue Rolle des FIZ Karlsruhe (2000)
0.02
0.020138824 = product of:
0.0805553 = sum of:
0.0805553 = weight(_text_:und in 6437) [ClassicSimilarity], result of:
0.0805553 = score(doc=6437,freq=26.0), product of:
0.13024996 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.058726728 = queryNorm
0.618467 = fieldWeight in 6437, product of:
5.0990195 = tf(freq=26.0), with freq of:
26.0 = termFreq=26.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=6437)
0.25 = coord(1/4)
- Abstract
- As an international hub for specialized information, the Fachinformationszentrum (FIZ) Karlsruhe has for decades been a reliable and professional service partner for information seekers in science and technology. New web-based services and products give professional information brokers, as well as occasional onliners or Internet pedestrians, efficient and inexpensive access to metadata and to scientific and technical data and facts. Electronic full texts via hyperlink and complete document delivery are offered as well. Continued development and flexible adaptation of the information systems also allow links to the local and regional networks of research institutions and universities. New services and billing procedures offer universities particularly favorable conditions through academic programs and subscription-based flat rates for selected databases. Beyond this, FIZ Karlsruhe is a competent cooperation partner in the development and operation of information systems.
- Source
- nfd Information - Wissenschaft und Praxis. 51(2000) H.7, S.397-406
-
Hiller, H.; Füssel, S. (Bearb.): Wörterbuch des Buches (2002)
0.02
0.01973968 = product of:
0.07895872 = sum of:
0.07895872 = weight(_text_:und in 215) [ClassicSimilarity], result of:
0.07895872 = score(doc=215,freq=34.0), product of:
0.13024996 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.058726728 = queryNorm
0.60620916 = fieldWeight in 215, product of:
5.8309517 = tf(freq=34.0), with freq of:
34.0 = termFreq=34.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.046875 = fieldNorm(doc=215)
0.25 = coord(1/4)
- Abstract
- The "Hiller/Füssel" is the proven reference work on books and publishing, paper and printing, binding and restoration, editorial offices and libraries, the Internet and media corporations - for every student, trainee, practitioner, and book lover. The sixth edition, fundamentally revised by the specialists of the Mainz Institut für Buchwissenschaft, is compact, reliable, and up to date. The latest developments and trends in the book market and in book studies are now also covered comprehensively: globalization and market concentration, electronic publishing and print on demand, online bookselling, fixed book prices, authors' contract law, and much more.
- RVK
- AN 17000 [Allgemeines # Buch- und Bibliothekswesen, Informationswissenschaft # Buchwesen # Nachschlagewerke, Allgemeine Darstellungen # Fachwörterbücher einsprachig]
-
Mönnich, M.: Elektronisches Publizieren von Hochschulschriften : Formate und Datenbanken (2000)
0.02
0.01934876 = product of:
0.07739504 = sum of:
0.07739504 = weight(_text_:und in 5709) [ClassicSimilarity], result of:
0.07739504 = score(doc=5709,freq=6.0), product of:
0.13024996 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.058726728 = queryNorm
0.5942039 = fieldWeight in 5709, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.109375 = fieldNorm(doc=5709)
0.25 = coord(1/4)
- Series
- Zeitschrift für Bibliothekswesen und Bibliographie: Sonderh.80
- Source
- Wissenschaft online: Elektronisches Publizieren in Bibliothek und Hochschule. Hrsg. B. Tröger
-
Birkenbihl, V.F.: KaGa und Mehrfachdenken : Gehirntraining mit Birkenbihl (2002)
0.02
0.01934876 = product of:
0.07739504 = sum of:
0.07739504 = weight(_text_:und in 2073) [ClassicSimilarity], result of:
0.07739504 = score(doc=2073,freq=6.0), product of:
0.13024996 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.058726728 = queryNorm
0.5942039 = fieldWeight in 2073, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.109375 = fieldNorm(doc=2073)
0.25 = coord(1/4)
- Source
- Gehirn und Geist: Das Magazin für Hirnforschung und Psychologie. 2002, H.2, S.90-92
-
Birkenbihl, V.F.: Abruf und Erinnerung : Gehirntraining mit Birkenbihl (2002)
0.02
0.01934876 = product of:
0.07739504 = sum of:
0.07739504 = weight(_text_:und in 2074) [ClassicSimilarity], result of:
0.07739504 = score(doc=2074,freq=6.0), product of:
0.13024996 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.058726728 = queryNorm
0.5942039 = fieldWeight in 2074, product of:
2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.109375 = fieldNorm(doc=2074)
0.25 = coord(1/4)
- Source
- Gehirn und Geist: Das Magazin für Hirnforschung und Psychologie. 2002, H.3, S.92-94
-
Gabrys-Deutscher, E.; Tobschall, E.: Zielgruppenspezifische Aufbereitung von Informationen als Angebot der Virtuellen Fachbibliotheken Technik und Physik (2004)
0.02
0.01934876 = product of:
0.07739504 = sum of:
0.07739504 = weight(_text_:und in 3310) [ClassicSimilarity], result of:
0.07739504 = score(doc=3310,freq=24.0), product of:
0.13024996 = queryWeight, product of:
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.058726728 = queryNorm
0.5942039 = fieldWeight in 3310, product of:
4.8989797 = tf(freq=24.0), with freq of:
24.0 = termFreq=24.0
2.217899 = idf(docFreq=13141, maxDocs=44421)
0.0546875 = fieldNorm(doc=3310)
0.25 = coord(1/4)
- Abstract
- Attention to the information needs and information habits of their respective target groups is an essential characteristic of the services of the Virtuelle Fachbibliotheken Technik und Physik (Virtual Libraries of Technology and Physics): requirements that engineers and physicists place on an information service are implemented in these virtual libraries in order to meet the claim of offering integrated access to subject-relevant information and services. Subject-specific habits and conventions must be observed not only in selecting the information sources provided, but especially in processing (e.g., subject indexing) and presenting the contents and services of the virtual libraries. Cooperation, e.g. with subject specialists and with information suppliers, is essential for providing a comprehensive, high-quality service.
- Source
- Information - Wissenschaft und Praxis. 55(2004) H.2, S.81-88