Similar Documents
20 similar documents found (search time: 203 ms)
1.
This paper presents methods for optimizing SQL Server 2005 from five aspects: creating indexes, using data integrity constraints, using views to format retrieved data, executing parameterized stored procedures to retrieve data dynamically, and defining triggers to execute SQL statements automatically.
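Several of the techniques above can be sketched together. The snippet below is a minimal illustration using SQLite rather than SQL Server 2005 (SQLite has no stored procedures, so that aspect is omitted), with a hypothetical orders schema:

```python
import sqlite3

# Hypothetical schema for illustration; SQLite stands in for SQL Server here.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer TEXT NOT NULL,          -- integrity constraint
    amount REAL CHECK (amount >= 0), -- integrity constraint
    updated_at TEXT
);
CREATE INDEX idx_orders_customer ON orders(customer);  -- index for fast lookups
CREATE VIEW big_orders AS                              -- view formats retrieved data
    SELECT customer, amount FROM orders WHERE amount > 100;
CREATE TRIGGER trg_orders_touch AFTER UPDATE OF amount ON orders
BEGIN                                                  -- trigger runs SQL automatically
    UPDATE orders SET updated_at = datetime('now') WHERE id = NEW.id;
END;
""")
cur.execute("INSERT INTO orders (customer, amount) VALUES (?, ?)", ("alice", 150.0))
conn.commit()
print(cur.execute("SELECT customer FROM big_orders").fetchall())  # [('alice',)]
```

Updating a row's amount then fires the trigger, which stamps `updated_at` without any application code.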

2.
By intelligently parsing a dynamically constructed SQL statement, assigning random values to its parameters, and executing the statement against the database, one can obtain the statement's input parameters, their types, and the columns its query returns. Generating code from this metadata can accelerate the development of SQL-driven enterprise applications such as information management systems and raise development efficiency.
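A minimal sketch of the idea, assuming a hypothetical table `t`: executing the dynamically built SQL once with a placeholder parameter value lets the driver report the query's result columns, which can then drive code generation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")

sql = "SELECT id, name FROM t WHERE id = ?"   # dynamically constructed SQL
probe_value = 0                               # random/placeholder parameter value
cur = conn.execute(sql, (probe_value,))       # execute once to obtain metadata
columns = [d[0] for d in cur.description]     # returned columns drive codegen
print(columns)  # ['id', 'name']
```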

3.
Optimizing SQL Query Strategy and Statement Writing   Cited by: 1 (self-citations: 0, others: 1)
林丽贞 《科教文汇》2012, (15): 79-80
Database query optimization is the key to achieving good execution performance while simplifying administration. An SQL query is an ordered operation: which statements are used, and in what order, directly determines query speed, and query speed in turn affects how widely a database can be deployed and applied. This paper discusses both optimization strategy and statement writing, and proposes optimization methods together with their statement-level implementations.
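One common statement-level optimization of this kind is rewriting a non-sargable predicate into a sargable one. The sketch below (hypothetical logs table, SQLite standing in for the database) shows the rewritten query picking up an index that the original form cannot use:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE logs (day TEXT, msg TEXT);
CREATE INDEX idx_logs_day ON logs(day);
""")

# Non-sargable: wrapping the indexed column in a function forces a full scan.
slow = "SELECT msg FROM logs WHERE substr(day, 1, 7) = '2012-05'"
# Sargable rewrite: a range predicate on the bare column can use idx_logs_day.
fast = "SELECT msg FROM logs WHERE day >= '2012-05-01' AND day < '2012-06-01'"

plan = conn.execute("EXPLAIN QUERY PLAN " + fast).fetchall()
print(plan)  # the plan text mentions idx_logs_day
```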

4.
The Oracle C++ Call Interface (OCCI)   Cited by: 1 (self-citations: 0, others: 1)
OCCI is a new high-performance API for calling Oracle from Internet applications. It makes it easy to connect to a database, execute SQL statements, insert and update values in database tables, fetch query results, execute stored procedures in the database, and access the metadata of database schema objects.

5.
With the growth of B/S-model (browser/server) application development, more and more programmers write applications in this model. Because the barrier to entry in this field is low, programmers' skills and experience vary widely, and a large share of them fail to validate user input when writing code, leaving applications with security holes. An attacker can submit a fragment of database query code and, from the results the program returns, obtain the data he wants; this is the attack known as SQL Injection. This paper explains what SQL injection is, surveys currently popular injection techniques, and summarizes measures for preventing and remedying injection attacks.
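The standard remedy for the flaw described above is parameterized queries. A minimal sketch with a hypothetical users table shows how string concatenation leaks every row, while a bound parameter treats the payload as plain data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the WHERE clause.
unsafe = "SELECT * FROM users WHERE name = '%s'" % malicious
leaked = conn.execute(unsafe).fetchall()   # returns every row in the table

# Safe: a bound parameter is treated as data, never as SQL.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
print(len(leaked), len(safe))  # 1 0
```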

6.
刘冲, 张玮炜 《今日科苑》2009, (22): 131
Queries are among Access's tools for processing and analyzing data. According to how they operate on the data source and the results they produce, they are classified as select queries, parameter queries, crosstab queries, action queries, and SQL-specific queries. Using a case study, this paper describes in detail how the find-duplicates query is applied.

7.
Oracle is a relational database management system suited to large, mid-range, and micro computers, and it uses SQL as its database language. This paper discusses optimizing SQL queries in Oracle from two angles: reducing the time spent parsing Oracle SQL expressions, and optimizing the three phases in which an SQL statement executes in an Oracle database, thereby improving overall Oracle database performance.

8.
The database system is the core of a management information system. Judging from most deployed systems, queries account for the largest share of all database operations, and the SELECT statement on which they are based is the most expensive statement in SQL. Because SQL is a result-oriented rather than procedure-oriented query language, large relational databases that support SQL generally employ a cost-based optimizer to supply the best execution strategy for ad hoc queries.
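A minimal illustration of a cost-based optimizer at work (SQLite's planner standing in for a large RDBMS, with a hypothetical emp table): the same SELECT switches from a full scan to an index search once a cheaper access path exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, dept TEXT)")

query = "SELECT id FROM emp WHERE dept = 'sales'"
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

conn.execute("CREATE INDEX idx_emp_dept ON emp(dept)")
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(str(before))  # full table scan ("SCAN ...")
print(str(after))   # index search ("... USING INDEX idx_emp_dept")
```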

9.
This paper introduces Oracle's application programming interface OCI and analyzes how SQL executes. To address the poor efficiency of conventional data storage methods under massive data volumes and high user concurrency, it defines a compact data structure and creates stored procedures that compress the SQL information. Comparing experimental data before and after optimization shows that the method significantly improves Oracle's data storage efficiency.

10.
潘慧 《人天科学研究》2011, 10(4): 136-137
SQL injection attacks are a major database-security problem: through them, attackers gain illegal access to databases. With the spread of B/S-model systems, more and more applications are written this way, but programmers' experience varies and user input is often not properly validated, so there is no effective standard for judging the safety of user-supplied data, and applications carry many latent security flaws. In practice, an attacker submits a fragment of database query code; the program's execution strategy determines what is returned, and from those results the attacker obtains the data he wants. This is SQL Injection. Analyzing database access policies offers a way to address the SQL injection problem.

11.
Networked information retrieval aims at the interoperability of heterogeneous information retrieval (IR) systems. In this paper, we show how differences concerning search operators and database schemas can be handled by applying data abstraction concepts in combination with uncertain inference. Different data types with vague predicates are required to allow for queries referring to arbitrary attributes of documents. Physical data independence separates search operators from access paths, thus solving text search problems related to noun phrases, compound words and proper nouns. Projection and inheritance on attributes support the creation of unified views on a set of IR databases. Uncertain inference allows for query processing even on incompatible database schemas.

12.
Classical test theory offers theoretically derived reliability measures such as Cronbach's alpha, which can be applied to measure the reliability of a set of Information Retrieval test results. The theory also supports item analysis, which identifies queries that are hampering the test's reliability, and which may be candidates for refinement or removal. A generalization of Classical Test Theory, called Generalizability Theory, provides an even richer set of tools. It allows us to estimate the reliability of a test as a function of the number of queries, assessors (relevance judges), and other aspects of the test's design. One novel aspect of Generalizability Theory is that it allows this estimation of reliability even before the test collection exists, based purely on the numbers of queries and assessors that it will contain. These calculations can help test designers in advance, by allowing them to compare the reliability of test designs with various numbers of queries and relevance assessors, and to spend their limited budgets on a design that maximizes reliability. Empirical analysis shows that in cases for which our data is representative, having more queries is more helpful for reliability than having more assessors. It also suggests that reliability may be improved with a per-document performance measure, as opposed to a document-set based performance measure, where appropriate. The theory also clarifies the implicit debate in IR literature regarding the nature of error in relevance judgments.
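Cronbach's alpha itself is straightforward to compute with the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of totals). The sketch below applies it to a small hypothetical matrix of per-query effectiveness scores (rows are queries, i.e. items; columns are systems):

```python
from statistics import pvariance

# Hypothetical score matrix: rows are queries (items), columns are systems.
scores = [
    [0.8, 0.6, 0.7],   # query 1 scores for three systems
    [0.7, 0.6, 0.5],   # query 2
    [0.9, 0.7, 0.8],   # query 3
]

k = len(scores)                                    # number of items (queries)
item_vars = sum(pvariance(row) for row in scores)  # sum of per-item variances
totals = [sum(col) for col in zip(*scores)]        # per-system total score
alpha = (k / (k - 1)) * (1 - item_vars / pvariance(totals))
print(round(alpha, 3))  # 0.857
```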

13.
The relevance feedback process uses information obtained from a user about a set of initially retrieved documents to improve subsequent search formulations and retrieval performance. In extended Boolean models, relevance feedback implies not only that new query terms must be identified and re-weighted, but also that the terms must be properly connected with Boolean AND/OR operators. Salton et al. proposed a relevance feedback method, called the DNF (disjunctive normal form) method, for a well-established extended Boolean model. However, this method focuses mainly on generating Boolean queries and does not address the re-weighting of query terms; it also has some problems in generating reformulated Boolean queries. In this study, we investigate the problems of the DNF method and propose a relevance feedback method using hierarchical clustering techniques to solve them. We also propose a neural network model in which the term weights used in extended Boolean queries can be adjusted by the users' relevance feedback.

14.
The dynamic nature and size of the Internet can make it difficult to find relevant information. Most users express their information need via short queries to search engines, and they often have to sift through the search results, ranked by relevance criteria set by the search engines, making the process of relevance judgement time-consuming. In this paper, we describe a novel representation technique which uses the Web structure together with summarisation techniques to better represent knowledge in actual Web documents. We name the proposed technique Semantic Virtual Document (SVD). We discuss how SVDs can be used together with a suitable clustering algorithm to achieve automatic content-based categorization of similar Web documents. The auto-categorization facility, together with a tree-like Graphical User Interface (GUI) for post-retrieval document browsing, enhances the relevance judgement process for Internet users. Furthermore, we introduce a cluster-biased automatic query expansion technique to overcome the ambiguity of the short queries users typically give. We outline our experimental design for evaluating the effectiveness of the proposed SVD representation and present a prototype called iSEARCH (Intelligent SEarch And Review of Cluster Hierarchy) for Web content mining. Our results confirm, quantify and extend previous research using Web structure and summarisation techniques, introducing novel techniques for knowledge representation to enhance Web content mining.

15.
This paper presents methods of building information queries in SDI systems on the basis of the user's publications. In most cases the users of an SDI system are scientists whose work is marked by publications resulting from their research, and it was found that these publications may constitute input data for building information queries. The possible compatibility between a user's information queries and his publications was examined by determining the similarity between the set of keywords indexed from the information query and the set of keywords indexed from the user's publications. Two methods of constructing information queries, using the logical operators AND, OR, NOT and a set of weighted keywords, are described.
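The keyword-set similarity step can be sketched with, for example, the Jaccard coefficient (the abstract does not name a specific measure, and the keyword sets below are hypothetical):

```python
def jaccard(a, b):
    """Similarity between two keyword sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical keyword sets for an SDI user's query and one of his publications.
query_kw = {"information", "retrieval", "sdi", "profile"}
pub_kw = {"information", "retrieval", "indexing"}
print(jaccard(query_kw, pub_kw))  # 0.4
```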

16.
Search engines are the gateway for users to retrieve information from the Web. There is a crucial need for tools that allow effective analysis of search engine queries, to provide a greater understanding of Web users' information-seeking behavior. The objective of this study is to develop an effective strategy for selecting samples from large-scale data sets. Millions of queries are submitted to Web search engines daily, and new sampling techniques are required to bring these databases to a manageable size while preserving the statistically representative characteristics of the entire data set. This paper reports results from a study using data logs from the Excite Web search engine. We use Poisson sampling to develop a sampling strategy, and show that sample sets selected by Poisson sampling effectively represent the statistical characteristics of the entire dataset. In addition, the paper discusses the use of Poisson sampling in continuous monitoring of stochastic processes, such as Web site dynamics.
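Poisson sampling here means each log record is included independently with some probability, so the sample size itself is random. A minimal sketch over a hypothetical query log:

```python
import random

random.seed(42)

# Hypothetical query log; each record is sampled independently with
# probability p, which is the defining property of Poisson sampling
# (so the realized sample size varies from run to run).
log = [f"query-{i}" for i in range(100_000)]
p = 0.01
sample = [q for q in log if random.random() < p]

print(len(sample))  # close to p * len(log) = 1000 on average
```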

17.
Recreational queries from users searching for places to go and things to do or see are very common in web and mobile search. Users specify constraints for what they are looking for, like suitability for kids, romantic ambiance, or budget. Queries like “restaurants in New York City” are currently served by static local results or the thumbnail carousel. More complex queries like “things to do in San Francisco with kids” or “romantic places to eat in Seattle” require the user to click on every element of the search engine result page and read articles from Yelp, TripAdvisor, or WikiTravel to satisfy their needs. Location data, which is an essential part of web search, is even more prevalent with location-based social networks and offers new opportunities for satisfying information-seeking scenarios. In this paper, we address the problem of recreational queries in information retrieval and propose a solution that combines search query logs with LBSN data to match user needs to possible options. At the core of our solution is a framework that combines social, geographical, and temporal information in a relevance model centered on semantic annotations of Points of Interest, with the goal of addressing these recreational queries. A central part of the framework is a taxonomy derived from behavioral data that drives the modeling and user experience. We also describe in detail the complexity of assessing and evaluating Point of Interest data, a topic usually not covered in related work, and propose task design alternatives that work well. We demonstrate the feasibility and scalability of our methods using a data set of 1B check-ins and a large sample of real-world queries. Finally, we describe the integration of our techniques in a commercial search engine.

18.
The negation operator, in the various forms in which it appears in Information Retrieval queries, is investigated. The applications include negated terms in Boolean queries, more specifically in the presence of metrical constraints, but also negated characters used in the definition of extended keywords by means of regular expressions. Exact definitions are suggested and their usefulness is shown on several examples. Finally, some implementation issues are discussed, in particular the order in which the terms of long queries, with or without negated keywords, should be processed, and efficient heuristics for choosing a good order are suggested.
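Over an inverted index, a negated term can be evaluated as set difference. The sketch below is an assumed toy example, not the paper's algorithm:

```python
# Toy inverted index: term -> set of document ids (hypothetical data).
index = {
    "database": {1, 2, 4},
    "sql": {2, 3, 4},
}
all_docs = {1, 2, 3, 4, 5}

# Query: database AND NOT sql -> difference of the two postings sets.
result = index["database"] - index["sql"]
print(sorted(result))  # [1]

# Query: NOT sql alone -> complement relative to the whole collection.
print(sorted(all_docs - index["sql"]))  # [1, 5]
```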

19.
20.
Nowadays, data scientists can manipulate and extract complex information from time series data, given the diversity of tools at their disposal. However, the plethora of tools that target data exploration and pattern search may require an extensive amount of time to develop methods that correspond to the data scientist's reasoning in solving their queries. Developing new methods tightly related to the reasoning and visual analysis of time series data is of great relevance for reducing the complexity and improving the productivity of pattern and query search tasks. In this work, we propose a novel tool capable of exploring time series data for pattern and query search tasks in a set of three symbolic steps: Pre-Processing, Symbolic Connotation, and Search. The framework is called SSTS (Symbolic Search in Time Series) and uses regular expression queries to search for the desired patterns in a symbolic representation of the signal. By adopting a set of symbolic methods, this approach aims to increase expressiveness in solving standard pattern and query tasks, enabling the creation of queries more closely related to the reasoning and visual analysis of the signal. We demonstrate the tool's effectiveness by presenting nine examples with several types of queries on time series. The SSTS queries were compared with standard code developed in Python in terms of cognitive effort, vocabulary required, code length, volume, interpretation, and difficulty metrics based on the Halstead complexity measures. The results demonstrate that this methodology is a valid approach and delivers a new abstraction layer for data analysis of time series.
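The symbolize-then-regex idea can be sketched in a few lines; the thresholds and symbols below are assumptions for illustration, not SSTS's actual alphabet:

```python
import re

# Hypothetical signal; symbolize each sample by amplitude, then search
# patterns in the symbolic string with a regular expression.
signal = [0.1, 0.2, 0.9, 0.8, 0.7, 0.2, 0.1, 0.9]

def symbolize(x):
    return "h" if x > 0.5 else "l"   # 'h' = high sample, 'l' = low sample

symbols = "".join(symbolize(x) for x in signal)
# Query: a run of at least two consecutive high samples.
matches = [(m.start(), m.group()) for m in re.finditer(r"h{2,}", symbols)]
print(symbols, matches)  # llhhhllh [(2, 'hhh')]
```

The regex query replaces hand-written loop code for the same pattern search, which is the abstraction-layer claim the abstract makes.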
