-
Stamatatos, E.: Author identification : using text sampling to handle the class imbalance problem (2008)
0.05
0.05461435 = product of:
0.2184574 = sum of:
0.2184574 = weight(_text_:handle in 3063) [ClassicSimilarity], result of:
0.2184574 = score(doc=3063,freq=4.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.511126 = fieldWeight in 3063, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0390625 = fieldNorm(doc=3063)
0.25 = coord(1/4)
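The explain tree above is plain Lucene ClassicSimilarity arithmetic and can be re-derived step by step. A minimal sketch recomputing this first "handle" hit from the reported constants; the formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)) are ClassicSimilarity's defaults, and queryNorm and fieldNorm are taken as given from the tree:

```python
import math

# Recompute the first "handle" explain tree from its reported constants.
doc_freq, max_docs = 173, 44421
idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ≈ 6.5424123

query_norm = 0.06532823                # queryNorm, as reported
query_weight = idf * query_norm        # ≈ 0.42740422

freq = 4.0
tf = math.sqrt(freq)                   # ClassicSimilarity tf = sqrt(freq) = 2.0
field_norm = 0.0390625                 # length norm, as reported
field_weight = tf * idf * field_norm   # ≈ 0.511126 (fieldWeight)

raw_score = query_weight * field_weight   # ≈ 0.2184574
final_score = raw_score * 0.25            # coord(1/4) → ≈ 0.05461435
```

The same recipe reproduces every tree in this listing; only freq, fieldNorm and the document id change between entries.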
- Abstract
- Authorship analysis of electronic texts assists digital forensics and anti-terror investigation. Author identification can be seen as a single-label multi-class text categorization problem. Very often there are extremely few training texts for at least some of the candidate authors, or there is significant variation in text length among the available training texts of the candidate authors. Moreover, in this task there is usually no similarity between the distribution of training and test texts over the classes; that is, a basic assumption of inductive learning does not apply. In this paper, we present methods to handle imbalanced multi-class textual datasets. The main idea is to segment the training texts into text samples according to the size of the class, thus producing a fairer classification model. Hence, minority classes can be segmented into many short samples and majority classes into fewer, longer samples. We explore text sampling methods in order to construct a training set according to a desirable distribution over the classes. Essentially, by text sampling we provide new synthetic data that artificially increase the training size of a class. Based on two text corpora in two languages, namely newswire stories in English and newspaper reportage in Arabic, we present a series of authorship identification experiments on various multi-class imbalanced cases that reveal the properties of the presented methods.
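The segmentation idea in the abstract above (many short samples for minority classes, fewer and longer ones for majority classes, so every class contributes a comparable number of training samples) can be sketched as follows. The function name and the equal-count policy are illustrative assumptions, not the paper's exact algorithm:

```python
def resample_by_class(texts_by_author, samples_per_class=10):
    """Split each author's concatenated training text into a fixed number
    of equal-length chunks: authors with little text yield short samples,
    prolific authors yield long ones, but every class ends up with the
    same number of training samples.  (Hypothetical sketch.)"""
    resampled = {}
    for author, texts in texts_by_author.items():
        full = " ".join(texts)
        size = max(1, len(full) // samples_per_class)
        resampled[author] = [
            full[i * size:(i + 1) * size]
            for i in range(samples_per_class)
        ]
    return resampled
```

For example, an author with 100 characters of training text would contribute ten 10-character samples, while one with 1,000 characters would contribute ten 100-character samples.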
-
Nguyen, T.T.; Quan, T.T.; Phan, T.T.: Sentiment search : an emerging trend on social media monitoring systems (2014)
0.05
0.05461435 = product of:
0.2184574 = sum of:
0.2184574 = weight(_text_:handle in 2625) [ClassicSimilarity], result of:
0.2184574 = score(doc=2625,freq=4.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.511126 = fieldWeight in 2625, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0390625 = fieldNorm(doc=2625)
0.25 = coord(1/4)
- Abstract
- Purpose - The purpose of this paper is to discuss sentiment search, which not only retrieves data related to submitted keywords but also identifies the sentiment opinion implied in the retrieved data and the subject targeted by this opinion. Design/methodology/approach - The authors propose a retrieval framework known as Cross-Domain Sentiment Search (CSS), which combines the use of domain ontologies with specific linguistic rules to handle sentiment terms in textual data. The CSS framework also supports incrementally enriching domain ontologies when applied in new domains. Findings - The authors found that domain ontologies are extremely helpful when CSS is applied in specific domains. At the same time, the embedded linguistic rules give CSS better performance compared with data mining techniques. Research limitations/implications - The approach has initially been applied in a real social monitoring system of a professional IT company. It has thus proved able to handle real data acquired from social media channels such as electronic newspapers or social networks. Originality/value - The authors have placed aspect-based sentiment analysis in the context of semantic search and introduced the CSS framework for the whole sentiment search process. Formal definitions of Sentiment Ontology and aspect-based sentiment analysis are also presented. This distinguishes the work from other related works.
-
Pooja, K.M.; Mondal, S.; Chandra, J.: ¬A graph combination with edge pruning-based approach for author name disambiguation (2020)
0.05
0.05461435 = product of:
0.2184574 = sum of:
0.2184574 = weight(_text_:handle in 1060) [ClassicSimilarity], result of:
0.2184574 = score(doc=1060,freq=4.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.511126 = fieldWeight in 1060, product of:
2.0 = tf(freq=4.0), with freq of:
4.0 = termFreq=4.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0390625 = fieldNorm(doc=1060)
0.25 = coord(1/4)
- Abstract
- Author name disambiguation (AND) is a challenging problem due to several issues, such as missing key identifiers, the same name corresponding to multiple authors, and inconsistent representation. Several techniques have been proposed, but maintaining consistent accuracy levels across all data sets is still a major challenge. We identify two major issues associated with the AND problem. First, the namesake problem, in which two or more authors with the same name publish in a similar domain. Second, the diverse topic problem, in which one author publishes in diverse topical domains with different sets of coauthors. In this work, we initially propose a method named ATGEP for AND that addresses the namesake issue. We evaluate the performance of ATGEP using various ambiguous name references collected from the Arnetminer Citation (AC) and Web of Science (WoS) data sets. We show empirically that the two aforementioned problems are crucial to the AND problem and are difficult to handle using state-of-the-art techniques. To handle the diverse topic issue, we extend ATGEP to a new variant named ATGEP-web that considers external web information about the authors. Experiments show that, with enough information available from external web sources, ATGEP-web can significantly improve the results further compared with ATGEP.
-
Lau, B.: Developing auxiliary search facilities in the framework of a free-text-search, UNIX-based OPAC : a report on a current project at Roskilde University Library, Denmark (1991)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 2644) [ClassicSimilarity], result of:
0.21626179 = score(doc=2644,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 2644, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=2644)
0.25 = coord(1/4)
- Abstract
- Describes the RUBIKON on-line catalogue (OPAC) at Roskilde University Library, Denmark, and the auxiliary search tool, proposed by the library, designed to give better search facilities and handle 'no hit' situations. The tool consists of a search command which expands a search by consulting various types of lexical reference records and performs a series of Boolean OR searches on items from the list. Proposes 3 types of lexical reference records: name records; terminological help records; and classification schedule records
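The expansion mechanism the abstract describes (consult lexical reference records, then OR together the items found) can be sketched as below; the function name and the representation of reference records as lists of related terms are illustrative assumptions, not RUBIKON's actual implementation:

```python
def expand_search(term, reference_records):
    """Expand a query term by consulting lexical reference records
    (name records, terminological help records, classification schedule
    records, here modelled simply as lists of related terms) and
    combining all variants found into one Boolean OR search."""
    variants = {term}
    for record in reference_records:
        if term in record:
            variants.update(record)
    return " OR ".join(sorted(variants))
```

A lookup that matches one record would thus turn a 'no hit' single-term search into a broader OR search over the record's items.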
-
Boss, R.W.: CD-LANs (1992)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 3745) [ClassicSimilarity], result of:
0.21626179 = score(doc=3745,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 3745, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=3745)
0.25 = coord(1/4)
- Abstract
- Discusses the problems of using a CD-ROM drive with a network. Many CD-LANs being installed are commercially available products, but there are a number of vendors offering software or hardware components that can be used to design and implement CD-LANs. One reason in-house development of CD-LANs has not been more common is that most LANs don't handle CD-ROMs very well. Describes the operation of a CD-LAN, considering topologies, operating systems, low-cost LAN software products, jukebox versus server, high-density PCs, multi-user PCs, gateway access, caching, CD-LAN performance and CD-ROM servers for large CD-LANs. Assesses turnkey systems from Meridian Data, Online Computer Products Inc., EBSCO and SilverPlatter.
-
Bjorner, S.N.; Pensyl, M.E.: Connecting to the future at MIT : the effects of ISDN on remote online searching (1992)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 4281) [ClassicSimilarity], result of:
0.21626179 = score(doc=4281,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 4281, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=4281)
0.25 = coord(1/4)
- Abstract
- ISDN is an emerging telecommunications system that simultaneously supports formerly disparate media: voice, data, images, video or fax over a single broadband network. Through multitasking, an online searcher can conduct a phone conversation while downloading or retrieving information over the same line. Discusses the development of ISDN and the involvement of MIT. Covers the use of online searching of publicly available databases external to the MIT campus such as DIALOG Information Services, STN, Orbit, Nexis, BRS and Dow Jones News Retrieval. ISDN will enable librarians to download patent diagrams, newspaper pictures, journal charts and graphics. It has the potential to develop into wider-band networks with the capacity to handle bulk data, high-fidelity audio, high-resolution images, moving pictures and hypertext. Makes recommendations for implementing ISDN.
-
Del Bigio, G.: ¬The CDS/ISIS software : recent developments and results (1991)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 5039) [ClassicSimilarity], result of:
0.21626179 = score(doc=5039,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 5039, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=5039)
0.25 = coord(1/4)
- Abstract
- CDS/ISIS is a menu-driven generalized information storage and retrieval system designed specifically for the computerized management of structured non-numerical data bases. The unique characteristic of CDS/ISIS is that it is specifically designed to handle fields (and consequently records) of varying length, thus allowing, on the one hand, an optimum utilization of disk storage and, on the other hand, a complete freedom in defining the maximum length of each field. Although some features of CDS/ISIS require some knowledge of and experience with computerized information systems, once an application has been designed the system may be used by persons having little or no prior computer experience.
-
Bourne, R.: Standards: who needs them? (1994)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 630) [ClassicSimilarity], result of:
0.21626179 = score(doc=630,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 630, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=630)
0.25 = coord(1/4)
- Abstract
- Discusses the role of the British Standards Institution (BSI) in formulating bibliographic standards. Outlines how library standards have developed and why librarians need to standardize. The democratic aspect of BSI's work has been adversely affected by its 1990 decision to move its IT-related activities to a new body, Delivery Information Solutions to Customers through International Standards (Disc), which operates on the basis of standards being formulated only by those prepared to pay a separate subscription. Questions whether BSI is the most appropriate body to serve the interests of the library and information services community. Proposes an alternative standards umbrella that would be better informed and more representative on LIS matters, and canvasses opinion as to whether the UK Library Association could handle standards work on behalf of BSI.
-
Wells, K.L.: Cataloging standards in the '90s : infinite possibilities vs. financial realities (1994)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 1109) [ClassicSimilarity], result of:
0.21626179 = score(doc=1109,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 1109, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=1109)
0.25 = coord(1/4)
- Abstract
- Financial crises in many libraries in the USA are leaving catalogue managers faced with growing backlogs and fewer people to handle them. Considers the use of minimal level cataloging (MLC) to alleviate the problem. AACR2 allows for considerable variation in the amount of detail cataloguers can provide. Discusses the creation of a local MLC standard which could allow varying levels of description according to type of material. The financial realities of the 1990s mean that the ideal of creating a perfect catalogue record for each title must be balanced against the desirability of having a bibliographic record, albeit an imperfect one, available to patrons in the near future.
-
Kemp, A. de: Electronic information : solving old or creating new problems? (1994)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 1279) [ClassicSimilarity], result of:
0.21626179 = score(doc=1279,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 1279, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=1279)
0.25 = coord(1/4)
- Abstract
- Electronic publishing seems to be the future for efficient and fast information dissemination. Describes a variety of new projects, products and services. In addition, concentrates on the development of information systems: relational, object-oriented and hybrid databases that will have a major impact on the way we handle internal and external information in our organisations. Springer-Verlag carried out an extensive international survey on the future use of information, both external, such as information from publishers, and internal, such as technical documents. New systems like Right-Pages and integrated information and document management systems like DocMan will be the next generation for information handling, dissemination and retrieval.
-
Lavin, M.R.: Improving the quality of business reference service (1995)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 1954) [ClassicSimilarity], result of:
0.21626179 = score(doc=1954,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 1954, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=1954)
0.25 = coord(1/4)
- Abstract
- Business librarianship is affected by a combination of forces. Among them are the nature of business as a discipline, the characteristics of business publications, and the needs and expectations of business patrons. Business reference queries are almost always complex. To handle them well, the librarian must spend considerable time with each patron. Bibliographic expertise and subject knowledge are also required. Ways to improve the quality of business reference service include a willingness to help patrons devise appropriate search strategies, assisting them in understanding and evaluating search results, investing in self-education, developing service-oriented reference policies, implementing flexible reference desk schedules, and establishing formal staff training programs
-
Liang, T.-Y.: ¬The basic entity model : a theoretical model of information processing, decision making and information systems (1996)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 5476) [ClassicSimilarity], result of:
0.21626179 = score(doc=5476,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 5476, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=5476)
0.25 = coord(1/4)
- Abstract
- The basic entity model aims to provide information processing with a better theoretical foundation. Human information processing systems are perceived as physical symbol systems. The 4 basic entities that these systems handle are: data, information, knowledge and wisdom. The postulates fundamental to the model are the laws of boundary, interaction, and constructed information systems. The transformations of the basic entities that take place in the model create an information space that contains a set of information states in a particular knowledge domain. The space serves as a platform for decision making. Uses the model to analyze the structure of constructed information systems mathematically. Adopts the ontological, deep structure approach.
-
Gödert, W.: Information as a cognitive construction : a communication-theoretic model and consequences for information systems (1996)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 6100) [ClassicSimilarity], result of:
0.21626179 = score(doc=6100,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 6100, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=6100)
0.25 = coord(1/4)
- Abstract
- In this paper a model is presented for understanding the concept of information and how the processes of externalization and perception of information by human beings can be understood. This model differs from the standard information-theoretic model. It combines the understanding of cognitive information processing as an act of information generation from sense impressions with communication-theoretic considerations. This approach can be of value for any system that is regarded as a knowledge system with an in-built ordering structure. As an application, some consequences are drawn for the design of information systems that claim to handle information itself (e.g. multimedia information systems) instead of giving references to bibliographic entities.
-
Slater, R.: Authority control in a multilingual OPAC : MultiLIS at Laurentian (1991)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 470) [ClassicSimilarity], result of:
0.21626179 = score(doc=470,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 470, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=470)
0.25 = coord(1/4)
- Abstract
- There is an increasing awareness of the need for authority systems able to handle a wide variety of thesauri. The MultiLIS system at Laurentian University, a bilingual institution in Northern Ontario, has an authority control module that satisfies many of the requirements for the maintenance of catalog access points in more than one language. The major features of the MultiLIS authority module and its current use in a bilingual setting, as well as its potential in a multilingual or multithesaurus environment, are described. A brief evaluation and critique of the authority module is also presented, principally in terms of its success in meeting the criteria for a multithesaurus management system.
-
Zwadlo, J.: We don't need a philosophy of library and information science : we're confused enough already (1997)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 827) [ClassicSimilarity], result of:
0.21626179 = score(doc=827,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 827, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=827)
0.25 = coord(1/4)
- Abstract
- Presents the thesis that there is no philosophy of library and information science and that the profession does not need one. Argues that instead a way must be found to manage a confusion, a 'fused together' mass of many contradictory ideas, in order to do useful things and to be helpful to library users. This search amounts to a philosophical discussion about why librarians and information scientists do not need a philosophy. Shows how to handle this kind of contradiction and shows that for librarians and information scientists a 'con-fusion' of ideas is worth seeking rather than resolving.
-
Ledesma, L.D.: ¬A computational approach to George Boole's discovery of mathematical logic (1997)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 1463) [ClassicSimilarity], result of:
0.21626179 = score(doc=1463,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 1463, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=1463)
0.25 = coord(1/4)
- Abstract
- Reports a computational model of George Boole's discovery of logic as part of mathematics. Studies the different historical factors that influenced this theory, and produces a computational representation of Boole's logic before it was mathematized, together with a production system, BOOLE2, that rediscovers logic as a science that behaves exactly as a branch of mathematics, and that thus validates the historical explanation to some extent. The system's discovery methods are found to be general enough to handle 3 other cases: 2 versions of a geometry due to a contemporary of Boole, and a small subset of the differential calculus.
-
Lee, Y.-H.; Evens, M.W.: Natural language interface for an expert system (1998)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 6108) [ClassicSimilarity], result of:
0.21626179 = score(doc=6108,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 6108, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=6108)
0.25 = coord(1/4)
- Abstract
- Presents a complete analysis of the underlying principles of natural language interfaces, from the screen manager to the parser / understander. The main focus is on the design and development of a subsystem for understanding natural language input in an expert system. Fast response time and user friendliness are regarded as the most important design considerations. The screen manager provides an easy editing capability for users, and the spelling correction system can detect most spelling errors and correct them automatically, quickly and effectively. The Lexical Functional Grammar (LFG) parser and the understander are designed to handle most types of simple sentences, fragments, and ellipses.
-
Albertsen, K.; Nuys, C. van: Paradigma: FRBR and digital documents (2004)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 5182) [ClassicSimilarity], result of:
0.21626179 = score(doc=5182,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 5182, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=5182)
0.25 = coord(1/4)
- Abstract
- This paper describes the Paradigma Project at the National Library of Norway and its work to ensure the legal deposit of all types of digital documents. The Paradigma project plans to implement extensions to IFLA's FRBR model for handling composite Group 1 entities at all abstraction levels. A new taxonomy is introduced: This is done by forming various relationships into component aggregates, and grouping these aggregates into various classes. This serves two main purposes: New applications may be introduced without requiring modifications to the model, and automated mechanisms may be designed to handle each class in a common way, largely unaffected by the details of the relationship semantics.
-
Stine, D.: Suggested standards for cataloging textbooks (1991)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 637) [ClassicSimilarity], result of:
0.21626179 = score(doc=637,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 637, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=637)
0.25 = coord(1/4)
- Abstract
- To determine the feasibility of our library providing full cataloging for textbooks using OCLC, I conducted a study of records in the OCLC data base and performed a literature search on the topic. I found that a preponderance of duplicate OCLC records and a lack of uniformity in cataloging practices would make this a costly proposition. It would also make it difficult to train a paraprofessional to select catalog records and to handle the cataloging of these materials. This paper suggests standards to be considered by the appropriate ALA committees in order to alleviate the duplication of records and to make textbook cataloging easier.
-
Cory, K.A.: ¬The imaging industry wants us! (1992)
0.05
0.054065447 = product of:
0.21626179 = sum of:
0.21626179 = weight(_text_:handle in 660) [ClassicSimilarity], result of:
0.21626179 = score(doc=660,freq=2.0), product of:
0.42740422 = queryWeight, product of:
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.06532823 = queryNorm
0.5059889 = fieldWeight in 660, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
6.5424123 = idf(docFreq=173, maxDocs=44421)
0.0546875 = fieldNorm(doc=660)
0.25 = coord(1/4)
- Abstract
- Paper-based manual filing systems are inadequate to handle the flood of information found in most commercial offices and government agencies. Examples are included to delineate the dimensions of the problem. In response, imaging technology, which converts information in paper format to computer-readable binary format, is creating a multitude of electronic databases. However, imaging vendors are minimizing the difficulties of database organization. The author, drawing on personal experience, recounts instances of inadequate database organization. Because classification and indexing principles are only imparted in schools of library and/or information science, the imaging industry is highly dependent upon expertise possessed by library science graduates. In order to take advantage of this new job market, recommendations for library science students and faculty are included.