-
Greene, A.: Managing subject guides with SQL Server and ASP.Net (2008)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 3601) [ClassicSimilarity], result of:
0.083778985 = score(doc=3601,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 3601, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=3601)
0.25 = coord(1/4)
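The identical scoring blocks throughout this list are Lucene ClassicSimilarity "explain" output; the final score is plain TF-IDF arithmetic over the single matching query term ("html"), scaled by the coordination factor. A minimal Python sketch that recomputes the score from nothing but the factors printed above for doc 3601:

```python
import math

# Factors copied from the explain block above (term "html" in doc 3601).
idf = 5.1475344           # idf(docFreq=701, maxDocs=44421)
query_norm = 0.057234988  # queryNorm
tf = math.sqrt(2.0)       # tf(freq=2.0) = sqrt(termFreq) = 1.4142135
field_norm = 0.0390625    # fieldNorm(doc=3601)
coord = 0.25              # coord(1/4): one of four query terms matched

query_weight = idf * query_norm           # 0.29461905
field_weight = tf * idf * field_norm      # 0.28436378
term_score = query_weight * field_weight  # 0.083778985 (the single summand above)
final_score = term_score * coord          # 0.020944746, displayed rounded as 0.02

print(final_score)
```

The same arithmetic, with only the doc number changing, produces the 0.02 shown for every record below.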
- Abstract
- Purpose - The purpose of this paper is to report on the content management solution for 50 subject guides maintained by librarian subject specialists at the University of Nevada, Reno Libraries. Design/methodology/approach - The Web Development Librarian designed an SQL Server database to store subject guide content and wrote ASP.Net scripts to generate dynamic web pages. Subject specialists provided input throughout the process. Hands-on workshops were held the summer before the new guides were launched. Findings - The new method has successfully produced consistent but individually customized subject guides while greatly reducing maintenance time. Simple reports reveal the association between guides and licensed resources. Using the system to create course-specific guides would be a useful follow-up project. Skills learned in training workshops should be refreshed at regular intervals to boost confidence and introduce changes in the system. Practical implications - The advantages of centralizing content and separating it from presentation cannot be overstated. More consistency and less maintenance are just the beginning. Once accomplished, a library can incorporate Web 2.0 features into the application by repurposing the data or modifying the ASP.Net template. The now-organized data is clean and ready to migrate to web services or next-generation research guides when the time is right. Originality/value - This paper uniquely reports on an SQL Server, ASP.Net solution for managing subject guides. SQL Server includes data management features that increase application security and ASP.Net offers built-in functionality for manipulating and presenting data. Utmost attention was given to creating simple user interfaces that enable subject specialists to create complex web pages without coding HTML.
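The pattern the abstract describes - guide content centralized in a relational database, pages generated from a template - can be sketched in a few lines. The snippet below is a hypothetical Python/SQLite stand-in for the authors' SQL Server/ASP.Net stack; the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Hypothetical schema: one row per guide, one row per linked resource.
    CREATE TABLE guides (id INTEGER PRIMARY KEY, subject TEXT, librarian TEXT);
    CREATE TABLE resources (guide_id INTEGER, title TEXT, url TEXT);
    INSERT INTO guides VALUES (1, 'Geology', 'A. Greene');
    INSERT INTO resources VALUES (1, 'GeoRef', 'https://example.org/georef');
""")

def render_guide(guide_id):
    """Separate content from presentation: the template changes, the stored data does not."""
    subject, librarian = conn.execute(
        "SELECT subject, librarian FROM guides WHERE id = ?", (guide_id,)).fetchone()
    rows = conn.execute(
        "SELECT title, url FROM resources WHERE guide_id = ?", (guide_id,)).fetchall()
    links = "".join(f'<li><a href="{url}">{title}</a></li>' for title, url in rows)
    return f"<h1>{subject}</h1><p>Librarian: {librarian}</p><ul>{links}</ul>"

print(render_guide(1))
```

The reports on guide-to-resource associations mentioned under Findings then reduce to a single join across the two tables.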
-
Köhler, J.; Philippi, S.; Specht, M.; Rüegg, A.: Ontology based text indexing and querying for the semantic web (2006)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 267) [ClassicSimilarity], result of:
0.083778985 = score(doc=267,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 267, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=267)
0.25 = coord(1/4)
- Abstract
- This publication shows how the gap between the HTML-based internet and the RDF-based vision of the semantic web might be bridged, by linking words in texts to concepts of ontologies. Most current search engines use indexes that are built at the syntactical level and return hits based on simple string comparisons. However, the indexes do not contain synonyms, cannot differentiate between homonyms ('mouse' as a pointing device vs. 'mouse' as an animal), and users receive different search results when they use different conjugation forms of the same word. In this publication, we present a system that uses ontologies and Natural Language Processing techniques to index texts, and thus supports word sense disambiguation and the retrieval of texts that contain equivalent words, by indexing them to concepts of ontologies. For this purpose, we developed fully automated methods for mapping equivalent concepts of imported RDF ontologies (for this prototype WordNet, SUMO and OpenCyc). These methods thus allow the seamless integration of domain-specific ontologies for concept-based information retrieval in different domains. To demonstrate the practical workability of this approach, a set of web pages that contain synonyms and homonyms was indexed and can be queried via a search-engine-like query front end. However, the ontology-based indexing approach can also be used for other data mining applications such as text clustering, relation mining and searching free-text fields in biological databases. The ontology alignment methods and some of the text mining principles described in this publication are now incorporated into the ONDEX system http://ondex.sourceforge.net/.
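The core move described here - index ontology concepts rather than surface strings, so synonyms and inflected forms resolve to the same entry - can be illustrated with a toy concept index. The mini-lexicon and concept identifiers below are invented for illustration (not taken from WordNet, SUMO, or OpenCyc), and the NLP-based sense disambiguation the paper relies on is omitted:

```python
# Toy lexicon mapping surface forms to ontology concept identifiers (all invented).
LEXICON = {
    "mouse":  ["concept:ComputerMouse", "concept:Rodent"],  # homonym: two candidate concepts
    "mice":   ["concept:Rodent"],                            # inflected form, same concept
    "rodent": ["concept:Rodent"],
}

def index_document(doc_id, text, index):
    """Concept-based indexing: each candidate concept of a token points back to the document."""
    for token in text.lower().split():
        for concept in LEXICON.get(token, []):
            index.setdefault(concept, set()).add(doc_id)

index = {}
index_document("d1", "the mouse ate the cheese", index)
index_document("d2", "mice were seen in the barn", index)
print(sorted(index["concept:Rodent"]))  # ['d1', 'd2'] - both hits, despite different word forms
```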
-
Bates, M.J.: Defining the information disciplines in encyclopedia development (2007)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 387) [ClassicSimilarity], result of:
0.083778985 = score(doc=387,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 387, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=387)
0.25 = coord(1/4)
- Footnote
- Cf.: http://informationr.net/ir/12-4/colis/colis29.html.
-
Sarinder, K.K.S.; Lim, L.H.S.; Merican, A.F.; Dimyati, K.: Biodiversity information retrieval across networked data sets (2010)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 938) [ClassicSimilarity], result of:
0.083778985 = score(doc=938,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 938, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=938)
0.25 = coord(1/4)
- Abstract
- Purpose - Biodiversity resources are inevitably digital and stored in a wide variety of formats by researchers or stakeholders. In Malaysia, although digitizing biodiversity data has long been stressed, the interoperability of the biodiversity data is still an issue that requires attention. This is because, when data are shared, the question of copyright arises, creating a setback among researchers wanting to promote or share data through online presentations. To solve this, the aim is to present an approach to integrating data through wrapping of datasets stored in relational databases located on networked platforms. Design/methodology/approach - The approach uses tools such as XML, PHP, ASP and HTML to integrate distributed databases in heterogeneous formats. Five current database integration systems were reviewed, and all of them share common attributes: they are query-oriented, use a mediator-based approach, and integrate a structured data model. These common attributes were also adopted in the proposed solution. Distributed Generic Information Retrieval (DiGIR) was used as a model in designing the proposed solution. Findings - A new database integration system was developed, which is simple and user-friendly and shares the common attributes found in current integration systems.
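The mediator-based pattern common to the systems reviewed - a query-oriented layer that wraps each networked source and returns results in one structured model - can be sketched as follows. The source contents, field mappings, and record layout are all hypothetical, not taken from the paper:

```python
# Hypothetical wrappers: each source keeps its native schema; the mediator maps both
# onto one shared record layout, in the spirit of DiGIR-style federated queries.
SOURCE_A = [{"species": "Rattus rattus", "loc": "Selangor"}]
SOURCE_B = [{"taxon_name": "Rattus rattus", "locality": "Penang"}]

def wrap_source_a(records):
    return [{"scientific_name": r["species"], "locality": r["loc"], "source": "A"} for r in records]

def wrap_source_b(records):
    return [{"scientific_name": r["taxon_name"], "locality": r["locality"], "source": "B"} for r in records]

def mediator_query(name):
    """Fan the query out to every wrapped source and merge the structured results."""
    merged = wrap_source_a(SOURCE_A) + wrap_source_b(SOURCE_B)
    return [r for r in merged if r["scientific_name"] == name]

print(mediator_query("Rattus rattus"))  # one result list spanning both databases
```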
-
Harth, A.; Hogan, A.; Umbrich, J.; Kinsella, S.; Polleres, A.; Decker, S.: Searching and browsing linked data with SWSE* (2012)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 1410) [ClassicSimilarity], result of:
0.083778985 = score(doc=1410,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 1410, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=1410)
0.25 = coord(1/4)
- Abstract
- Web search engines such as Google, Yahoo!, MSN/Bing, and Ask are far from the consummate Web search solution: they do not typically produce direct answers to queries but instead typically recommend a selection of related documents from the Web. We note that in more recent years, search engines have begun to provide direct answers to prose queries matching certain common templates (for example, "population of china" or "12 euro in dollars"), but again, such functionality is limited to a small subset of popular user queries. Furthermore, search engines now provide individual and focused search interfaces over images, videos, locations, news articles, books, research papers, blogs, and real-time social media; although these tools are inarguably powerful, they are limited to their respective domains. In the general case, search engines are not suitable for complex information gathering tasks requiring aggregation from multiple indexed documents: for such tasks, users must manually aggregate tidbits of pertinent information from various pages. In effect, such limitations are predicated on the lack of machine-interpretable structure in HTML documents, whose markup is often limited to generic tags mainly concerned with document rendering and linking. Most of the real content is contained in prose text, which is inherently difficult for machines to interpret.
-
Ioannou, E.; Nejdl, W.; Niederée, C.; Velegrakis, Y.: Embracing uncertainty in entity linking (2012)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 1433) [ClassicSimilarity], result of:
0.083778985 = score(doc=1433,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 1433, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=1433)
0.25 = coord(1/4)
- Abstract
- The modern Web has grown from a publishing place of well-structured data and HTML pages for companies and experienced users into a vivid publishing and data exchange community in which everyone can participate, both as a data consumer and as a data producer. Unavoidably, the data available on the Web has become highly heterogeneous, ranging from highly structured and semistructured to highly unstructured user-generated content, reflecting different perspectives and structuring principles. The full potential of such data can only be realized by combining information from multiple sources. For instance, the knowledge that is typically embedded in monolithic applications can be outsourced and thus used also in other applications. Numerous systems nowadays are already actively utilizing existing content from various sources such as WordNet or Wikipedia. Some well-known examples of such systems include DBpedia, Freebase, Spock, and DBLife. A major challenge in combining and querying information from multiple heterogeneous sources is entity linkage, i.e., the ability to detect whether two pieces of information correspond to the same real-world object. This chapter introduces a novel approach for addressing the entity linkage problem for heterogeneous, uncertain, and volatile data.
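A minimal illustration of the entity-linkage decision the chapter addresses, kept probabilistic rather than a hard yes/no - the string-similarity measure, the averaging, and the example records are placeholders, not the chapter's actual model:

```python
from difflib import SequenceMatcher

def attribute_similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def linkage_probability(record_a, record_b):
    """Average attribute similarity, kept as a probability instead of a hard match/no-match."""
    keys = record_a.keys() & record_b.keys()
    scores = [attribute_similarity(record_a[k], record_b[k]) for k in keys]
    return sum(scores) / len(scores) if scores else 0.0

a = {"name": "J. R. R. Tolkien", "born": "1892"}
b = {"name": "John Ronald Reuel Tolkien", "born": "1892"}
p = linkage_probability(a, b)
print(f"link probability: {p:.2f}")  # downstream queries can weight results by p instead of merging blindly
```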
-
Blanco, L.; Bronzi, M.; Crescenzi, V.; Merialdo, P.; Papotti, P.: Flint: from Web pages to probabilistic semantic data (2012)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 1437) [ClassicSimilarity], result of:
0.083778985 = score(doc=1437,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 1437, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=1437)
0.25 = coord(1/4)
- Abstract
- The Web is a surprisingly extensive source of information: it offers a huge number of sites containing data about a disparate range of topics. Although Web pages are built for human consumption, not for automatic processing of the data, we observe that an increasing number of Web sites deliver pages containing structured information about recognizable concepts, relevant to specific application domains, such as movies, finance, sport, products, etc. The development of scalable techniques to discover, extract, and integrate data from fairly structured large corpora available on the Web is a challenging issue, because to face the Web scale, these activities should be accomplished automatically by domain-independent techniques. To cope with the complexity and the heterogeneity of Web data, state-of-the-art approaches focus on information organized according to specific patterns that frequently occur on the Web. Meaningful examples are WebTables, which focuses on data published in HTML tables, and information extraction systems, such as TextRunner, which exploits lexical-syntactic patterns. As noted by Cafarella et al., even if only a small fraction of the Web is organized according to these patterns, due to the Web scale the amount of data involved is impressive. In this chapter, we focus on methods and techniques to wring value out of the data delivered by large data-intensive Web sites.
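A small illustration of the simplest pattern mentioned above, WebTables-style extraction of records from an HTML table - the page snippet is invented, and real systems add domain-independent discovery and schema matching on top of this step:

```python
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collect the cell text of each <tr> into one row - the minimal WebTables-style step."""
    def __init__(self):
        super().__init__()
        self.rows, self.row, self.in_cell = [], [], False
    def handle_starttag(self, tag, attrs):
        if tag in ("td", "th"):
            self.in_cell = True
    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self.in_cell = False
        elif tag == "tr" and self.row:
            self.rows.append(self.row)
            self.row = []
    def handle_data(self, data):
        if self.in_cell:
            self.row.append(data.strip())

page = "<table><tr><th>Title</th><th>Year</th></tr><tr><td>Vertigo</td><td>1958</td></tr></table>"
parser = TableExtractor()
parser.feed(page)
header, *records = parser.rows
print([dict(zip(header, r)) for r in records])  # [{'Title': 'Vertigo', 'Year': '1958'}]
```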
-
Zarrad, R.; Doggaz, N.; Zagrouba, E.: Wikipedia HTML structure analysis for ontology construction (2018)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 302) [ClassicSimilarity], result of:
0.083778985 = score(doc=302,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 302, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=302)
0.25 = coord(1/4)
-
Rajagopal, P.; Ravana, S.D.; Koh, Y.S.; Balakrishnan, V.: Evaluating the effectiveness of information retrieval systems using effort-based relevance judgment (2019)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 287) [ClassicSimilarity], result of:
0.083778985 = score(doc=287,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 287, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=287)
0.25 = coord(1/4)
- Abstract
- Purpose - Effort, in addition to relevance, is a major factor in the satisfaction and utility of a document to the actual user. The purpose of this paper is to propose a method of generating relevance judgments that incorporate effort without human judges' involvement. The study then determines the variation in system rankings due to low-effort relevance judgments when evaluating retrieval systems at different depths of evaluation. Design/methodology/approach - Effort-based relevance judgments are generated using a proposed boxplot approach for simple document features, HTML features and readability features. The boxplot approach is a simple yet repeatable way of classifying documents' effort while ensuring that outlier scores do not skew the grading of the entire set of documents. Findings - Evaluating retrieval systems using low-effort relevance judgments has a stronger influence at shallow depths of evaluation than at deeper depths. It is shown that the difference in system rankings is due to low-effort documents and not to the number of relevant documents. Originality/value - Hence, it is crucial to evaluate retrieval systems at shallow depths using low-effort relevance judgments.
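The boxplot grading step described under Design/methodology/approach can be sketched roughly as below; the per-document effort scores and the "within the upper whisker = low effort" cut are illustrative assumptions, not the paper's exact procedure:

```python
import statistics

def boxplot_fences(values):
    """Quartiles and 1.5*IQR whisker fences; scores beyond the fences are treated as outliers."""
    q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Illustrative effort scores, e.g. combined from length, HTML and readability features.
effort = {"d1": 2.1, "d2": 2.4, "d3": 2.6, "d4": 3.0, "d5": 9.5}

low_fence, high_fence = boxplot_fences(list(effort.values()))
grades = {doc: ("low effort" if score <= high_fence else "high effort (outlier)")
          for doc, score in effort.items()}
print(grades)  # d5's extreme score is flagged as an outlier instead of stretching the scale for the rest
```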
-
Savolainen, R.: Cognitive authority as an instance of informational and expert power (2022)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 304) [ClassicSimilarity], result of:
0.083778985 = score(doc=304,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 304, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=304)
0.25 = coord(1/4)
- Content
- Cf.: https://www.degruyter.com/document/doi/10.1515/libri-2020-0128/html.
-
Farney, T.: Using Google Tag Manager to share code : designing shareable tags (2019)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 443) [ClassicSimilarity], result of:
0.083778985 = score(doc=443,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 443, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=443)
0.25 = coord(1/4)
- Abstract
- Sharing code between libraries is not a new phenomenon, and neither is Google Tag Manager (GTM). GTM launched in 2012 as a JavaScript and HTML manager with the intent of easing the implementation of different analytics trackers and marketing scripts on a website. However, it can be used to load other code onto a website using its tag system. Exporting and importing tags is a simple process that facilitates code sharing without requiring a high degree of coding experience. The entire process involves creating the script tag in GTM, exporting the GTM content into a sharable export file for someone else to import into their library's GTM container, and finally publishing that imported file to push the code to the website it was designed for. This case study provides an example of designing and sharing a GTM container loaded with advanced Google Analytics configurations, such as event tracking and custom dimensions, for other libraries using the Summon discovery service. It also discusses processes for designing GTM tags for export and best practices for importing and testing GTM content created by other libraries, and concludes by evaluating the pros and cons of encouraging GTM use.
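A sharable GTM export is just a JSON file. The sketch below reads one and lists the tags it would add before publishing, assuming the usual containerVersion/tag layout of GTM exports; the field names and the pared-down sample are assumptions to be checked against an actual export file:

```python
import json

# A pared-down stand-in for an exported container file (real exports carry many more fields).
export_text = """
{
  "containerVersion": {
    "tag": [
      {"name": "Summon - event tracking", "type": "html"},
      {"name": "GA - custom dimensions", "type": "html"}
    ]
  }
}
"""

def list_shared_tags(export_json):
    """Show what an imported container would add before it is published to the live site."""
    container = json.loads(export_json).get("containerVersion", {})
    return [(tag.get("name"), tag.get("type")) for tag in container.get("tag", [])]

for name, tag_type in list_shared_tags(export_text):
    print(f"{tag_type:6} {name}")
```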
-
Bosancic, B.: Information, data, and knowledge in the cognitive system of the observer (2020)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 972) [ClassicSimilarity], result of:
0.083778985 = score(doc=972,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 972, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=972)
0.25 = coord(1/4)
- Content
- Cf.: https://www.emerald.com/insight/content/doi/10.1108/JD-09-2019-0184/full/html.
-
González-Teruel, A.; Pérez-Pulido, M.: ¬The diffusion and influence of theoretical models of information behaviour : the case of Savolainen's ELIS model (2020)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 974) [ClassicSimilarity], result of:
0.083778985 = score(doc=974,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 974, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=974)
0.25 = coord(1/4)
- Content
- Cf.: https://www.emerald.com/insight/content/doi/10.1108/JD-10-2019-0197/full/html.
-
Haggar, E.: Fighting fake news : exploring George Orwell's relationship to information literacy (2020)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 978) [ClassicSimilarity], result of:
0.083778985 = score(doc=978,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 978, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=978)
0.25 = coord(1/4)
- Content
- Cf.: https://www.emerald.com/insight/content/doi/10.1108/JD-11-2019-0223/full/html.
-
Lor, P.; Wiles, B.; Britz, J.: Re-thinking information ethics : truth, conspiracy theories, and librarians in the COVID-19 era (2021)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 1405) [ClassicSimilarity], result of:
0.083778985 = score(doc=1405,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 1405, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=1405)
0.25 = coord(1/4)
- Content
- Cf.: https://www.degruyter.com/document/doi/10.1515/libri-2020-0158/html.
-
Wu, Z.; Lu, C.; Zhao, Y.; Xie, J.; Zou, D.; Su, X.: ¬The protection of user preference privacy in personalized information retrieval : challenges and overviews (2021)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 1521) [ClassicSimilarity], result of:
0.083778985 = score(doc=1521,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 1521, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=1521)
0.25 = coord(1/4)
- Content
- Cf.: https://www.degruyter.com/document/doi/10.1515/libri-2019-0140/html.
-
Kyprianos, K.; Efthymiou, F.; Kouis, D.: Students' perceptions on cataloging course (2022)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 1624) [ClassicSimilarity], result of:
0.083778985 = score(doc=1624,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 1624, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=1624)
0.25 = coord(1/4)
- Content
- Cf.: https://www.degruyter.com/document/doi/10.1515/libri-2021-0054/html. Cf. also: https://doi.org/10.1515/libri-2021-0054.
-
Wang, L.; Qiu, J.: Domain analytic paradigm : a quarter century exploration of fundamental ideas in information science (2022)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 1680) [ClassicSimilarity], result of:
0.083778985 = score(doc=1680,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 1680, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=1680)
0.25 = coord(1/4)
- Content
- Cf.: https://www.emerald.com/insight/content/doi/10.1108/JD-12-2020-0219/full/html.
-
Beck, T.S.: Image manipulation in scholarly publications : are there ways to an automated solution? (2022)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 1681) [ClassicSimilarity], result of:
0.083778985 = score(doc=1681,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 1681, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=1681)
0.25 = coord(1/4)
- Content
- Cf.: https://www.emerald.com/insight/content/doi/10.1108/JD-06-2021-0113/full/html.
-
Smith, A.O.; Hemsley, J.: Memetics as informational difference : offering an information-centric conception of memes (2022)
0.02
0.020944746 = product of:
0.083778985 = sum of:
0.083778985 = weight(_text_:html in 1683) [ClassicSimilarity], result of:
0.083778985 = score(doc=1683,freq=2.0), product of:
0.29461905 = queryWeight, product of:
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.057234988 = queryNorm
0.28436378 = fieldWeight in 1683, product of:
1.4142135 = tf(freq=2.0), with freq of:
2.0 = termFreq=2.0
5.1475344 = idf(docFreq=701, maxDocs=44421)
0.0390625 = fieldNorm(doc=1683)
0.25 = coord(1/4)
- Content
- Cf.: https://www.emerald.com/insight/content/doi/10.1108/JD-07-2021-0140/full/html.