Search (1201 results, page 3 of 61)

  • language_ss:"e"
  1. MacFarlane, A.; Al-Wabil, A.; Marshall, C.R.; Albrair, A.; Jones, S.A.; Zaphiris, P.: ¬The effect of dyslexia on information retrieval : a pilot study (2010) 0.05
    0.045945287 = product of:
      0.18378115 = sum of:
        0.18378115 = weight(_text_:judge in 611) [ClassicSimilarity], result of:
          0.18378115 = score(doc=611,freq=2.0), product of:
            0.43030894 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.055658925 = queryNorm
            0.42709115 = fieldWeight in 611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.0390625 = fieldNorm(doc=611)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this paper is to resolve a gap in the knowledge of how people with dyslexia interact with information retrieval (IR) systems, specifically an understanding of their information-searching behaviour. Design/methodology/approach - The dyslexia cognitive profile is used to design a logging system, recording the difference between two sets of participants: dyslexic and control users. A standard Okapi interface is used - together with two standard TREC topics - in order to record the information-searching behaviour of these users. Findings - Using the log data, the differences in information-searching behaviour of control and dyslexic users, i.e. in the way the two groups interact with Okapi, are established, and it is also established that the qualitative information collected (such as experience) may not be able to account for these differences. Evidence from query variables was unable to distinguish between the groups, but differences on topic for the same variables were recorded. Users who viewed more documents tended to judge more documents as relevant, whether grouped by user group or by topic. Session data indicated that there may be an important difference in the number of search iterations between the user groups, with little apparent effect from the topic on this variable. Originality/value - This is the first study of the effect of dyslexia on information search behaviour, and it provides some evidence to take the field forward.
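    The relevance figures shown for each entry are Lucene ClassicSimilarity (TF-IDF) explain output. A minimal sketch that reproduces the 0.045945287 score for this entry from the values above; the idf recomputation assumes Lucene's classic formula idf = 1 + ln(maxDocs / (docFreq + 1)), everything else is copied from the explain block:
      import math

      # Values taken from the explain output for doc 611 above.
      doc_freq, max_docs = 52, 44421
      idf = 1 + math.log(max_docs / (doc_freq + 1))        # ≈ 7.731176
      query_norm = 0.055658925
      freq = 2.0                                           # termFreq of "judge" in this document
      field_norm = 0.0390625                               # stored length norm for the field
      coord = 0.25                                         # 1 of 4 query clauses matched

      query_weight = idf * query_norm                      # ≈ 0.43030894 (queryWeight)
      field_weight = math.sqrt(freq) * idf * field_norm    # tf = sqrt(freq); ≈ 0.42709115 (fieldWeight)
      print(coord * query_weight * field_weight)           # ≈ 0.045945287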
  2. Nunes, S.; Ribeiro, C.; David, G.: Term weighting based on document revision history (2011) 0.05
    0.045945287 = product of:
      0.18378115 = sum of:
        0.18378115 = weight(_text_:judge in 946) [ClassicSimilarity], result of:
          0.18378115 = score(doc=946,freq=2.0), product of:
            0.43030894 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.055658925 = queryNorm
            0.42709115 = fieldWeight in 946, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.0390625 = fieldNorm(doc=946)
      0.25 = coord(1/4)
    
    Abstract
    In real-world information retrieval systems, the underlying document collection is rarely stable or definitive. This work is focused on the study of signals extracted from the content of documents at different points in time for the purpose of weighting individual terms in a document. The basic idea behind our proposals is that terms that have existed for a longer time in a document should have a greater weight. We propose 4 term weighting functions that use each document's history to estimate a current term score. To evaluate this thesis, we conduct 3 independent experiments using a collection of documents sampled from Wikipedia. In the first experiment, we use data from Wikipedia to judge each set of terms. In a second experiment, we use an external collection of tags from a popular social bookmarking service as a gold standard. In the third experiment, we crowdsource user judgments to collect feedback on term preference. Across all experiments results consistently support our thesis. We show that temporally aware measures, specifically the proposed revision term frequency and revision term frequency span, outperform a term-weighting measure based on raw term frequency alone.
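    The abstract does not spell out the proposed weighting functions, but the underlying idea - terms that have survived many revisions should outweigh recent additions - can be sketched roughly as follows (a hypothetical illustration, not the authors' exact revision term frequency definition):
      from collections import Counter

      def revision_term_weight(revisions):
          """Count a term once per revision it appears in, so long-lived terms
          accumulate more weight; `revisions` is the revision history, oldest
          first, each a list of tokens."""
          weights = Counter()
          for tokens in revisions:
              weights.update(set(tokens))               # presence per revision, not raw frequency
          total_revisions = len(revisions) or 1
          return {term: count / total_revisions for term, count in weights.items()}

      history = [
          "term weighting for retrieval".split(),
          "term weighting based on revision history for retrieval".split(),
          "term weighting based on document revision history".split(),
      ]
      print(revision_term_weight(history)["weighting"])   # 1.0 - present in every revision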
  3. Berendsen, R.; Rijke, M. de; Balog, K.; Bogers, T.; Bosch, A. van den: On the assessment of expertise profiles (2013) 0.05
    0.045945287 = product of:
      0.18378115 = sum of:
        0.18378115 = weight(_text_:judge in 2089) [ClassicSimilarity], result of:
          0.18378115 = score(doc=2089,freq=2.0), product of:
            0.43030894 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.055658925 = queryNorm
            0.42709115 = fieldWeight in 2089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2089)
      0.25 = coord(1/4)
    
    Abstract
    Expertise retrieval has attracted significant interest in the field of information retrieval. Expert finding has been studied extensively, with less attention going to the complementary task of expert profiling, that is, automatically identifying topics about which a person is knowledgeable. We describe a test collection for expert profiling in which expert users have self-selected their knowledge areas. Motivated by the sparseness of this set of knowledge areas, we report on an assessment experiment in which academic experts judge a profile that has been automatically generated by state-of-the-art expert-profiling algorithms; optionally, experts can indicate a level of expertise for relevant areas. Experts may also give feedback on the quality of the system-generated knowledge areas. We report on a content analysis of these comments and gain insights into what aspects of profiles matter to experts. We provide an error analysis of the system-generated profiles, identifying factors that help explain why certain experts may be harder to profile than others. We also analyze the impact of using self-selected versus judged system-generated knowledge areas as ground truth on the evaluation of expert-profiling systems; they rank systems somewhat differently but detect about the same number of pairwise significant differences, even though the judged system-generated assessments are sparser.
  4. White, H.; Willis, C.; Greenberg, J.: HIVEing : the effect of a semantic web technology on inter-indexer consistency (2014) 0.05
    0.045945287 = product of:
      0.18378115 = sum of:
        0.18378115 = weight(_text_:judge in 2781) [ClassicSimilarity], result of:
          0.18378115 = score(doc=2781,freq=2.0), product of:
            0.43030894 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.055658925 = queryNorm
            0.42709115 = fieldWeight in 2781, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.0390625 = fieldNorm(doc=2781)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this paper is to examine the effect of the Helping Interdisciplinary Vocabulary Engineering (HIVE) system on the inter-indexer consistency of information professionals when assigning keywords to a scientific abstract. This study examined, first, the inter-indexer consistency of potential HIVE users; second, the impact HIVE had on consistency; and third, challenges associated with using HIVE. Design/methodology/approach - A within-subjects quasi-experimental research design was used for this study. Data were collected using a task-scenario-based questionnaire. Analysis was performed on consistency results using Hooper's and Rolling's inter-indexer consistency measures. A series of t-tests was used to judge the significance of differences between consistency measure results. Findings - Results suggest that HIVE improves inter-indexer consistency. Working with HIVE increased consistency rates by 22 percent (Rolling's) and 25 percent (Hooper's) when selecting relevant terms from all vocabularies. A statistically significant difference exists between the assignment of free-text keywords and machine-aided keywords. Issues with homographs, disambiguation, vocabulary choice, and document structure were all identified as potential challenges. Research limitations/implications - Research limitations for this study lie in the small number of vocabularies used. Future research will include implementing HIVE into the Dryad Repository and studying its application in a repository system. Originality/value - This paper showcases several features used in the HIVE system. By using traditional consistency measures to evaluate a semantic web technology, this paper emphasizes the link between traditional indexing and next-generation machine-aided indexing (MAI) tools.
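    Hooper's and Rolling's inter-indexer consistency measures mentioned above are commonly defined as Jaccard- and Dice-style overlaps between the keyword sets assigned by two indexers; a minimal sketch (the example term sets are invented):
      def hooper(a, b):
          """Hooper's measure: shared terms over all distinct terms used by either indexer."""
          common = len(a & b)
          return common / (common + len(a - b) + len(b - a)) if (a | b) else 0.0

      def rolling(a, b):
          """Rolling's measure: counts the agreed terms twice (Dice-style overlap)."""
          common = len(a & b)
          return 2 * common / (len(a) + len(b)) if (a or b) else 0.0

      indexer_1 = {"dyslexia", "information retrieval", "usability"}
      indexer_2 = {"dyslexia", "information retrieval", "search behaviour"}
      print(hooper(indexer_1, indexer_2))    # 0.5
      print(rolling(indexer_1, indexer_2))   # ≈ 0.667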
  5. Zhao, Y.W.; Chi, C.-H.; Heuvel, W.J. van den: Imperfect referees : reducing the impact of multiple biases in peer review (2015) 0.05
    0.045945287 = product of:
      0.18378115 = sum of:
        0.18378115 = weight(_text_:judge in 3271) [ClassicSimilarity], result of:
          0.18378115 = score(doc=3271,freq=2.0), product of:
            0.43030894 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.055658925 = queryNorm
            0.42709115 = fieldWeight in 3271, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.0390625 = fieldNorm(doc=3271)
      0.25 = coord(1/4)
    
    Abstract
    Bias in peer review entails systematic prejudice that prevents accurate and objective assessment of scientific studies. The disparity between referees' opinions on the same paper typically makes it difficult to judge the paper's quality. This article presents a comprehensive study of peer review biases with regard to 2 aspects of referees: the static profiles (factual authority and self-reported confidence) and the dynamic behavioral context (the temporal ordering of reviews by a single reviewer), exploiting anonymized, real-world review reports of 2 different international conferences in information systems / computer science. Our work extends conventional bias research by considering multiple biases occurring simultaneously. Our findings show that the referees' static profiles are more dominant in peer review bias when compared to their dynamic behavioral context. Of the static profiles, self-reported confidence improved both conference fitness and impact-based bias reductions, while factual authority could only contribute to conference fitness-based bias reduction. Our results also clearly show that the reliability of referees' judgments varies along their static profiles and is contingent on the temporal interval between 2 consecutive reviews.
  6. Zhang, Y.; Trace, C.B.: ¬The quality of health and wellness self-tracking data : a consumer perspective (2022) 0.05
    0.045945287 = product of:
      0.18378115 = sum of:
        0.18378115 = weight(_text_:judge in 1460) [ClassicSimilarity], result of:
          0.18378115 = score(doc=1460,freq=2.0), product of:
            0.43030894 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.055658925 = queryNorm
            0.42709115 = fieldWeight in 1460, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.0390625 = fieldNorm(doc=1460)
      0.25 = coord(1/4)
    
    Abstract
    Information quality (IQ) is key to users' satisfaction with information systems. Understanding what IQ means to users can effectively inform system improvement. Existing inquiries into self-tracking data quality primarily focus on accuracy. Interviewing 20 consumers who had self-tracked health indicators for at least 6 months, we identified eight dimensions that consumers apply to evaluate self-tracking data quality: value-added, accuracy, completeness, accessibility, ease of understanding, trustworthiness, aesthetics, and invasiveness. These dimensions fell into four categories (intrinsic, contextual, representational, and accessibility), suggesting that consumers judge self-tracking data quality not only based on the data's inherent quality but also considering tasks at hand, the clarity of data representation, and data accessibility. We also found that consumers' self-tracking data quality judgments are shaped primarily by their goals or motivations, subjective experience with tracked activities, mental models of how systems work, self-tracking tools' reputation, cost, and design, and domain knowledge and intuition, but less by more objective criteria such as scientific research results, validated devices, or consultation with experts. Future studies should develop and validate a scale for measuring consumers' perceptions of self-tracking data quality and commit efforts to develop technologies and training materials to enhance consumers' ability to evaluate data quality.
  7. Tang, M.-C.; Liao, I.-H.: Preference diversity and openness to novelty : scales construction from the perspective of movie recommendation (2022) 0.05
    0.045945287 = product of:
      0.18378115 = sum of:
        0.18378115 = weight(_text_:judge in 1649) [ClassicSimilarity], result of:
          0.18378115 = score(doc=1649,freq=2.0), product of:
            0.43030894 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.055658925 = queryNorm
            0.42709115 = fieldWeight in 1649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.0390625 = fieldNorm(doc=1649)
      0.25 = coord(1/4)
    
    Abstract
    In response to calls for recommender systems to balance accuracy and alternative measures such as diversity and novelty, we propose that recommendation strategies should be applied adaptively according to users' preference traits. Psychological scales for "preference diversity" and "openness to novelty" were developed to measure users' willingness to accept diverse and novel recommendations, respectively. To validate the scales empirically, a user study was conducted in which 293 regular moviegoers were asked to judge a set of 220 movies representing both mainstream and "long-tail" appeals. The judgment task involved indicating and rating movies they had seen, heard of but not seen, and not known previously. Correlation analyses were then conducted between the participants' preference diversity and openness to novelty scores and the diversity and novelty of their past movie-viewing profile and of movies they had not seen before but had shown interest in. Preference diversity scores were shown to be significantly related to the diversity of the movies they had seen. Higher preference diversity scores were also associated with greater diversity in favored unknown movies. Similarly, participants who scored high on the openness to novelty scale had viewed more little-known movies and were generally interested in less popular movies as well as movies that differed from those they had seen before. Implications of these psychological traits for recommendation strategies are also discussed.
  8. Adler, R.; Ewing, J.; Taylor, P.: Citation statistics : A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS) (2008) 0.04
    0.038985867 = product of:
      0.15594347 = sum of:
        0.15594347 = weight(_text_:judge in 3417) [ClassicSimilarity], result of:
          0.15594347 = score(doc=3417,freq=4.0), product of:
            0.43030894 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.055658925 = queryNorm
            0.36239886 = fieldWeight in 3417, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.0234375 = fieldNorm(doc=3417)
      0.25 = coord(1/4)
    
    Abstract
    Using citation data to assess research ultimately means using citation-based statistics to rank things: journals, papers, people, programs, and disciplines. The statistical tools used to rank these things are often misunderstood and misused. - For journals, the impact factor is most often used for ranking. This is a simple average derived from the distribution of citations for a collection of articles in the journal. The average captures only a small amount of information about that distribution, and it is a rather crude statistic. In addition, there are many confounding factors when judging journals by citations, and any comparison of journals requires caution when using impact factors. Using the impact factor alone to judge a journal is like using weight alone to judge a person's health. - For papers, instead of relying on the actual count of citations to compare individual papers, people frequently substitute the impact factor of the journals in which the papers appear. They believe that higher impact factors must mean higher citation counts. But this is often not the case! This is a pervasive misuse of statistics that needs to be challenged whenever and wherever it occurs. - For individual scientists, complete citation records can be difficult to compare. As a consequence, there have been attempts to find simple statistics that capture the full complexity of a scientist's citation record with a single number. The most notable of these is the h-index, which seems to be gaining in popularity. But even a casual inspection of the h-index and its variants shows that these are naive attempts to understand complicated citation records. While they capture a small amount of information about the distribution of a scientist's citations, they lose crucial information that is essential for the assessment of research.
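    For reference, the two statistics the report criticizes are simple to compute; a minimal sketch (the citation counts are invented for illustration):
      def h_index(citations):
          """Largest h such that h papers have at least h citations each."""
          h = 0
          for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
              if cites >= rank:
                  h = rank
              else:
                  break
          return h

      def impact_factor(citations_this_year, citable_items_prev_two_years):
          """Journal impact factor: citations received this year to items published in
          the previous two years, divided by the number of citable items in those years."""
          return citations_this_year / citable_items_prev_two_years

      print(h_index([25, 8, 5, 3, 3, 1, 0]))   # 3
      print(impact_factor(210, 120))           # 1.75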
  9. Kim, W.; Wilbur, W.J.: Corpus-based statistical screening for content-bearing terms (2001) 0.04
    0.03675623 = product of:
      0.14702491 = sum of:
        0.14702491 = weight(_text_:judge in 188) [ClassicSimilarity], result of:
          0.14702491 = score(doc=188,freq=2.0), product of:
            0.43030894 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.055658925 = queryNorm
            0.34167293 = fieldWeight in 188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.03125 = fieldNorm(doc=188)
      0.25 = coord(1/4)
    
    Abstract
    Kim and Wilbur present three techniques for the algorithmic identification in text of content-bearing terms and phrases intended for human use as entry points or hyperlinks. Using a set of 1,075 terms from MEDLINE evaluated on a zero-to-four scale, from stop word to definite content word, they evaluate the ranked lists of their three methods based on their placement of content words in the top ranks. Data consist of the natural language elements of 304,057 MEDLINE records from 1996, and 173,252 Wall Street Journal records from the TIPSTER collection. Phrases are extracted by breaking at punctuation marks and stop words, normalized by lower-casing, replacement of nonalphanumerics with spaces, and the reduction of multiple spaces. In the "strength of context" approach each document is a vector of binary values for each word or word pair. The words or word pairs are removed from all documents, and the Robertson-Sparck Jones relevance weight for each term is computed; negative weights are replaced with zero, those below a randomness threshold are ignored, and the remainder are summed for each document to yield a score for the document; finally, the term is assigned the average document score over the documents in which it occurred. The average of these word scores is assigned to the original phrase. The "frequency clumping" approach defines a random phrase as one whose distribution among documents is Poisson in character. A p-value, the probability that a phrase's frequency of occurrence would be equal to, or less than, Poisson expectations, is computed, and a score is assigned which is the negative log of that value. In the "database comparison" approach, if a phrase occurring in a document allows prediction that the document is in MEDLINE rather than in the Wall Street Journal, it is considered to be content-bearing for MEDLINE. The score is computed by dividing the number of occurrences of the term in MEDLINE by its occurrences in the Journal, and taking the product of all these values. The one hundred top- and bottom-ranked phrases that occurred in at least 500 documents were collected for each method. The union set had 476 phrases. A second selection was made of two-word phrases each occurring in only three documents, with a union of 599 phrases. A judge then ranked the two sets of terms as to subject specificity on a 0 to 4 scale. Precision was the average subject specificity of the first r ranks, recall the fraction of the subject-specific phrases in the first r ranks, and eleven-point average precision was used as a summary measure. The three methods all move content-bearing terms forward in the lists, as does the use of the sum of the logs of the three methods' scores.
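    Of the three methods, the "database comparison" score is the most direct to sketch: for each word of a phrase, take the ratio of its occurrences in the target collection to its occurrences in the reference collection, and multiply the ratios. A rough illustration (the add-one smoothing and the counts are assumptions, not taken from the paper):
      def database_comparison_score(phrase, target_counts, reference_counts):
          """Multiply, over the words of the phrase, the ratio of target-collection
          occurrences to reference-collection occurrences; large values suggest the
          phrase is content-bearing for the target collection."""
          score = 1.0
          for word in phrase.split():
              score *= (target_counts.get(word, 0) + 1) / (reference_counts.get(word, 0) + 1)
          return score

      medline_counts = {"gene": 5400, "expression": 3100, "stock": 40}
      wsj_counts = {"gene": 35, "expression": 210, "stock": 4800}
      print(database_comparison_score("gene expression", medline_counts, wsj_counts))   # large: MEDLINE-specific
      print(database_comparison_score("stock market", medline_counts, wsj_counts))      # small: not MEDLINE-specific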
  10. Spero, S.: LCSH is to thesaurus as doorbell is to mammal : visualizing structural problems in the Library of Congress Subject Headings (2008) 0.04
    0.03675623 = product of:
      0.14702491 = sum of:
        0.14702491 = weight(_text_:judge in 3659) [ClassicSimilarity], result of:
          0.14702491 = score(doc=3659,freq=2.0), product of:
            0.43030894 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.055658925 = queryNorm
            0.34167293 = fieldWeight in 3659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.03125 = fieldNorm(doc=3659)
      0.25 = coord(1/4)
    
    Abstract
    The Library of Congress Subject Headings (LCSH) has been developed over the course of more than a century, predating the semantic web by some time. Until 1986, the only concept-to-concept relationship available was an undifferentiated "See Also" reference, which was used for both associative (RT) and hierarchical (BT/NT) connections. In that year, in preparation for the first release of the headings in machine-readable MARC Authorities form, an attempt was made to automatically convert these "See Also" links into the standardized thesaural relations. Unfortunately, the rule used to determine the type of reference to generate relied on the presence of symmetric links to detect associatively related terms; "See Also" references that were only present in one of the related terms were assumed to be hierarchical. This left the process vulnerable to inconsistent use of references in the pre-conversion data, with a marked bias towards promoting relationships to hierarchical status. The Library of Congress was aware that the results of the conversion contained many inconsistencies, and intended to validate and correct the results over the course of time. Unfortunately, twenty years later, less than 40% of the converted records have been evaluated. The converted records, being the earliest encountered during the Library's cataloging activities, represent the most basic concepts within LCSH; errors in the syndetic structure for these records affect far more subordinate concepts than those nearer the periphery. Worse, a policy of patterning new headings after pre-existing ones leads to structural errors arising from the conversion process being replicated in these newer headings, perpetuating and exacerbating the errors. As the LCSH prepares for its second great conversion, from MARC to SKOS, it is critical to address these structural problems. As part of the work on converting the headings into SKOS, I have experimented with different visualizations of the tangled web of broader terms embedded in LCSH. This poster illustrates several of these renderings, shows how they can help users to judge which relationships might not be correct, and shows exactly how Doorbells and Mammals are related.
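    The 1986 conversion rule described above is easy to state in code: a "See Also" reference present in both directions becomes an associative (RT) link, while a one-way reference is assumed to be hierarchical (BT/NT). A minimal sketch (the heading pairs are invented for illustration):
      def classify_see_also(references):
          """Classify (source, target) 'See Also' pairs: symmetric pairs become RT,
          one-way pairs are assumed hierarchical, mirroring the conversion rule and
          its bias towards hierarchy when references were used inconsistently."""
          return {
              (source, target): "RT" if (target, source) in references else "BT/NT"
              for source, target in references
          }

      refs = {("Bells", "Doorbells"), ("Doorbells", "Bells"), ("Bells", "Church bells")}
      print(classify_see_also(refs))   # symmetric pair -> 'RT'; one-way pair -> 'BT/NT'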
  11. White, H.D.: Relevance in theory (2009) 0.04
    0.03675623 = product of:
      0.14702491 = sum of:
        0.14702491 = weight(_text_:judge in 859) [ClassicSimilarity], result of:
          0.14702491 = score(doc=859,freq=2.0), product of:
            0.43030894 = queryWeight, product of:
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.055658925 = queryNorm
            0.34167293 = fieldWeight in 859, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.731176 = idf(docFreq=52, maxDocs=44421)
              0.03125 = fieldNorm(doc=859)
      0.25 = coord(1/4)
    
    Abstract
    Relevance is the central concept in information science because of its salience in designing and evaluating literature-based answering systems. It is salient when users seek information through human intermediaries, such as reference librarians, but becomes even more so when systems are automated and users must navigate them on their own. Designers of classic precomputer systems of the nineteenth and twentieth centuries appear to have been no less concerned with relevance than the information scientists of today. The concept has, however, proved difficult to define and operationalize. A common belief is that it is a relation between a user's request for information and the documents the system retrieves in response. Documents might be considered retrieval-worthy because they: 1) constitute evidence for or against a claim; 2) answer a question; or 3) simply match the request in topic. In practice, literature-based answering makes use of term-matching technology, and most evaluation of relevance has involved topical match as the primary criterion for acceptability. The standard table for evaluating the relation of retrieved documents to a request has only the values "relevant" and "not relevant," yet many analysts hold that relevance admits of degrees. Moreover, many analysts hold that users decide relevance on more dimensions than topical match. Who then can validly judge relevance? Is it only the person who put the request and who can evaluate a document on multiple dimensions? Or can surrogate judges perform this function on the basis of topicality? Such questions arise in a longstanding debate on whether relevance is objective or subjective. One proposal has been to reframe the debate in terms of relevance theory (imported from linguistic pragmatics), which makes relevance increase with a document's valuable cognitive effects and decrease with the effort needed to process it. This notion allows degree of topical match to contribute to relevance but allows other considerations to contribute as well. Since both cognitive effects and processing effort will differ across users, they can be taken as subjective, but users' decisions can also be objectively evaluated if the logic behind them is made explicit. Relevance seems problematical because the considerations that lead people to accept documents in literature searches, or to use them later in contexts such as citation, are seldom fully revealed. Once they are revealed, relevance may be seen as not only multidimensional and dynamic, but also understandable.
  12. Chafe, W.L.: Meaning and the structure of language (1980) 0.02
    0.021174902 = product of:
      0.08469961 = sum of:
        0.08469961 = weight(_text_:und in 1220) [ClassicSimilarity], result of:
          0.08469961 = score(doc=1220,freq=32.0), product of:
            0.123445876 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.055658925 = queryNorm
            0.6861275 = fieldWeight in 1220, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0546875 = fieldNorm(doc=1220)
      0.25 = coord(1/4)
    
    Classification
    ET 400 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Einzelgebiete der Sprachwissenschaft, Sprachbeschreibung / Semantik und Lexikologie / Allgemeines
    ET 430 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Einzelgebiete der Sprachwissenschaft, Sprachbeschreibung / Semantik und Lexikologie / Synchrone Semantik / Allgemeines (Gesamtdarstellungen)
    RVK
    ET 400 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Einzelgebiete der Sprachwissenschaft, Sprachbeschreibung / Semantik und Lexikologie / Allgemeines
    ET 430 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Einzelgebiete der Sprachwissenschaft, Sprachbeschreibung / Semantik und Lexikologie / Synchrone Semantik / Allgemeines (Gesamtdarstellungen)
  13. Boßmeyer, C.: UNIMARC und MAB : Strukturunterschiede und Kompatibilitätsfragen (1995) 0.02
    0.02095772 = product of:
      0.08383088 = sum of:
        0.08383088 = weight(_text_:und in 2436) [ClassicSimilarity], result of:
          0.08383088 = score(doc=2436,freq=6.0), product of:
            0.123445876 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.055658925 = queryNorm
            0.67909014 = fieldWeight in 2436, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.125 = fieldNorm(doc=2436)
      0.25 = coord(1/4)
    
    Source
    Zeitschrift für Bibliothekswesen und Bibliographie. 42(1995) H.5, S.465-480
  14. SimTown : baue deine eigene Stadt (1995) 0.02
    0.018524181 = product of:
      0.074096724 = sum of:
        0.074096724 = weight(_text_:und in 5546) [ClassicSimilarity], result of:
          0.074096724 = score(doc=5546,freq=12.0), product of:
            0.123445876 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.055658925 = queryNorm
            0.60023654 = fieldWeight in 5546, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.078125 = fieldNorm(doc=5546)
      0.25 = coord(1/4)
    
    Abstract
    SimTown was developed to introduce children to the most important concepts of economics (supply and demand), ecology (raw materials, pollution, and recycling), and urban planning (the balance between housing, jobs, and recreational areas) in a simple and entertaining way.
    Issue
    PC CD-ROM Windows. Ages 8 and up.
  15. Atzbach, R.: ¬Der Rechtschreibtrainer : Rechtschreibübungen und -spiele für die 5. bis 9. Klasse (1996) 0.02
    0.018338004 = product of:
      0.07335202 = sum of:
        0.07335202 = weight(_text_:und in 5647) [ClassicSimilarity], result of:
          0.07335202 = score(doc=5647,freq=6.0), product of:
            0.123445876 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.055658925 = queryNorm
            0.5942039 = fieldWeight in 5647, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.109375 = fieldNorm(doc=5647)
      0.25 = coord(1/4)
    
    Abstract
    Old and new spelling rules
    Issue
    MS-DOS and Windows.
  16. Geiß, D.: Gewerbliche Schutzrechte : Rationelle Nutzung ihrer Informations- und Rechtsfunktion in Wirtschaft und Wissenschaft Bericht über das 29.Kolloquium der Technischen Universität Ilmenau über Patentinformation und gewerblichen Rechtsschutz (2007) 0.02
    0.018149916 = product of:
      0.072599664 = sum of:
        0.072599664 = weight(_text_:und in 1629) [ClassicSimilarity], result of:
          0.072599664 = score(doc=1629,freq=8.0), product of:
            0.123445876 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.055658925 = queryNorm
            0.58810925 = fieldWeight in 1629, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.09375 = fieldNorm(doc=1629)
      0.25 = coord(1/4)
    
    Source
    Information - Wissenschaft und Praxis. 58(2007) H.6/7, S.376-379
  17. Engel, P.: Teleosemantics: realistic or anti-realistic? : Votum (1992) 0.02
    0.018149916 = product of:
      0.072599664 = sum of:
        0.072599664 = weight(_text_:und in 609) [ClassicSimilarity], result of:
          0.072599664 = score(doc=609,freq=8.0), product of:
            0.123445876 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.055658925 = queryNorm
            0.58810925 = fieldWeight in 609, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.09375 = fieldNorm(doc=609)
      0.25 = coord(1/4)
    
    Series
    Philosophie und Geschichte der Wissenschaften; Bd.18
    Source
    Wirklichkeit und Wissen: Realismus, Antirealismus und Wirklichkeits-Konzeptionen in Philosophie und Wissenschaften. Hrsg.: H.J. Sandkühler
  18. Pires, C.M.; Guédon, J.-C.; Blatecky, A.: Scientific data infrastructures : transforming science, education, and society (2013) 0.02
    0.017557302 = product of:
      0.07022921 = sum of:
        0.07022921 = weight(_text_:und in 2843) [ClassicSimilarity], result of:
          0.07022921 = score(doc=2843,freq=22.0), product of:
            0.123445876 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.055658925 = queryNorm
            0.5689069 = fieldWeight in 2843, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0546875 = fieldNorm(doc=2843)
      0.25 = coord(1/4)
    
    Abstract
    Data is everywhere - it arises in practically all scientific, governmental, societal, and economic activities. Data is generated by surveys, mobile and embedded systems, sensors, observation systems, scientific instruments, publications, experiments, simulations, evaluations, and analyses. Citizens, scientists, researchers, and teachers communicate by exchanging data, software, publications, reports, simulations, and visualizations. In addition, the growing use of visual communication for entertainment and interpersonal relationships, as well as the rapid rise of social networks, produces huge volumes of data. Data from observatories, experiments, and environmental monitoring, as well as from genetic research and healthcare, grow every two years by an order of magnitude that far exceeds Moore's law - and no end is in sight. Scientific publications, in turn, form the data basis for further scientific work and publications.
    Source
    Zeitschrift für Bibliothekswesen und Bibliographie. 60(2013) H.6, S.325-331
  19. OCLC PICA übernimmt die Sisis Informationssysteme (2005) 0.02
    0.017327784 = product of:
      0.069311135 = sum of:
        0.069311135 = weight(_text_:und in 5212) [ClassicSimilarity], result of:
          0.069311135 = score(doc=5212,freq=42.0), product of:
            0.123445876 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.055658925 = queryNorm
            0.56146985 = fieldWeight in 5212, product of:
              6.4807405 = tf(freq=42.0), with freq of:
                42.0 = termFreq=42.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0390625 = fieldNorm(doc=5212)
      0.25 = coord(1/4)
    
    Abstract
    With the aim of further expanding its position as one of the leading vendors of library systems, OCLC PICA B.V. in Leiden (NL) is acquiring Sisis Informationssysteme GmbH in Oberhaching. The two companies complement each other excellently, both technologically and in the service area. Thanks to the resulting synergies, the new, strengthened organization will be able to offer its products and services even faster and more economically in the future.
    Content
    "Der stetige Wandel macht auch vor Bibliotheken nicht Halt. Immer wichtiger werden neue Geschäftsprozesse und die optimale Vernetzung der unterschiedlichen Arbeitsbereiche. Das Behaupten der Spitzenposition in diesem Markt erfordert ständige Investitionen und Ausbau der Ressourcen. Mit der Obernahme der Sisis Informationssysteme GmbH und den dort vorhandenen Kenntnissen und Fähigkeiten wurde ein effizienter Weg gefunden, die gegenwärtige Marktposition auszubauen und die Produktqualität weiter zu verbessern. Die Sisis Informationssysteme ist ein im Markt bekannter und erfolgreicher Anbieter von Bibliothekssystemen und Portallösungen mit Kunden in Deutschland, der Schweiz und den Niederlanden. Wie OCLC PICA suchte auch das Sisis Management nach Lösungen, um weiterhin in Produkte und Marktentwicklungen zu investieren und die erreichte Marktposition und Produktqualität auszubauen. Der erfolgte Zusammenschluss bietet hierfür die besten Voraussetzungen. Künftig werden OCLC PICA und Sisis ihre Technologien, Fähigkeiten und Methoden zum Vorteil ihrer Kunden gemeinsam nutzen und aufeinander abstimmen und einen besseren und vor allem kundennäheren Service anbieten können. Durch die Verstärkung des Entwicklungsbereichs kann der Ausbau der vorhandenen Produkte fachlich und funktional vorangetrieben werden. Die Kunden werden von der wechselseitigen Nutzung innovativer Komponenten und dem erweiterten Produktportfolio nur profitieren."
    Footnote
    Cf.: www.oclcpica.org and www.sisis.de
  20. Mult IK media : eine multimediale Präsentation des Fachbereichs Informations- und Kommunikationswesen der Fachhochschule Hannover (1997) 0.02
    0.017111907 = product of:
      0.06844763 = sum of:
        0.06844763 = weight(_text_:und in 204) [ClassicSimilarity], result of:
          0.06844763 = score(doc=204,freq=16.0), product of:
            0.123445876 = queryWeight, product of:
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.055658925 = queryNorm
            0.5544748 = fieldWeight in 204, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.217899 = idf(docFreq=13141, maxDocs=44421)
              0.0625 = fieldNorm(doc=204)
      0.25 = coord(1/4)
    
    Abstract
    This CD-ROM contains a multimedia presentation of the Fachbereich Informations- und Kommunikationswesen (Department of Information and Communication Studies) of the Fachhochschule Hannover, covering the following topics: (1) the professional profile of information specialists, their fields of work and activities; (2) the history of the department, its founding, student numbers, etc.; (3) an introduction to the department's degree programmes, covering career profiles, admission requirements, the organization of studies, and internship placements; (4) the department's facilities and capacities; (5) selected diploma theses and project work; (6) the department's activities in cooperation with partner universities, e.g. international programmes and projects and student summer seminars; (7) the department's presence on the World Wide Web.
    Imprint
    Hannover : FH, Fb Informations- und Kommunikationswesen

Languages

  • d 32
  • m 3
  • nl 1

Types

  • a 777
  • m 300
  • el 97
  • s 91
  • i 21
  • n 17
  • x 12
  • r 11
  • b 7
  • ? 1
  • p 1
  • v 1