Similar Documents
20 similar documents found (search time: 46 ms)
1.
This paper describes an intelligent spelling error correction system for use in a word-processing environment. The system employs a dictionary of 93,769 words and, provided the intended word is in the dictionary, identifies 80 to 90% of spelling and typing errors.
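The dictionary-lookup approach described above can be sketched with a minimal edit-distance corrector. This is an illustrative sketch only: the tiny word list and the nearest-word ranking rule are assumptions, not the system in the abstract.

```python
# Minimal sketch of dictionary-based spelling correction: an unknown token is
# replaced by the dictionary word at the smallest Levenshtein distance.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution / match
        prev = cur
    return prev[-1]

def correct(word: str, dictionary: set) -> str:
    """Return `word` itself if known, else the closest dictionary entry."""
    if word in dictionary:
        return word
    return min(dictionary, key=lambda w: edit_distance(word, w))

dictionary = {"spelling", "correction", "dictionary", "system"}
print(correct("spellng", dictionary))  # -> "spelling"
```

As in the abstract, correction only succeeds when the intended word is actually present in the dictionary.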

2.
At present, the vast majority of automatic word segmentation systems are dictionary-based, so the completeness of the dictionary is the foundation of, and key to, segmentation performance; yet a truly complete dictionary has proved very hard to achieve. This paper introduces mechanical (dictionary-based) segmentation and dictionary-free segmentation, combines the strengths of the two approaches, and proposes the concept of an intelligent dictionary with a self-learning capability to compensate for the inherent incompleteness of segmentation dictionaries.

3.
郑阳  莫建文 《大众科技》2012,14(4):20-23
In scientific and technical literature, out-of-vocabulary words and other specialized terms vary widely and are hard to recognize during Chinese word segmentation, which lowers segmentation accuracy on domain-specific text. This paper therefore presents a Chinese word segmentation method based on specialized-term extraction. Drawing on a large domain-specific corpus, it uses mutual information and statistical methods to extract out-of-vocabulary words and other specialized terms, builds a domain-term dictionary from them, and combines it with a general-purpose dictionary for maximum-matching segmentation. Experiments show that the method extracts the relevant specialized terms fairly accurately and thereby improves segmentation precision, giving it practical application value.

4.
This work adopts a dictionary-based forward incremental maximum matching algorithm, with the segmentation dictionary stored in an improved two-level hash table combined with dynamic arrays. Without increasing the space complexity or maintenance cost of existing typical dictionary mechanisms, it improves the speed and efficiency of Chinese word segmentation to a certain extent.
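Forward maximum matching can be sketched as follows. This is a minimal illustration: a plain set stands in for the paper's two-level hash table with dynamic arrays, and the toy dictionary is invented.

```python
# Forward maximum matching: at each position, try the longest dictionary
# entry first; unknown single characters are emitted as-is.

def forward_max_match(text: str, dictionary: set, max_len: int = 4) -> list:
    tokens, i = [], 0
    while i < len(text):
        for L in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + L]
            if L == 1 or piece in dictionary:  # length-1 fallback always fires
                tokens.append(piece)
                i += L
                break
    return tokens

dictionary = {"中文", "分词", "中文分词", "算法"}
print(forward_max_match("中文分词算法", dictionary))  # -> ['中文分词', '算法']
```

The longest-first scan is why the compound entry "中文分词" wins over its two shorter parts.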

5.
Treating a large collection of Chinese-English parallel patent texts as a parallel corpus, this paper proposes a method for automatically extracting a Chinese-English dictionary. A seed bilingual dictionary is first built from the external semantic resource Wikipedia; candidate Chinese-English word pairs are then obtained by computing pointwise mutual information, and a threshold is applied to select the pairs used to supplement the seed dictionary. Experimental results show that phrase-chunking the English documents improves the overall performance of automatic extraction; on the other hand, although sentence alignment raises the precision of the extracted pairs, it hurts recall. A bilingual patent dictionary built with this method can play a positive role in constructing multilingual technical knowledge graphs.
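The pointwise mutual information (PMI) step can be sketched over aligned sentence pairs. The three-sentence corpus below is invented purely for illustration; the real method also uses a Wikipedia seed dictionary and a selection threshold.

```python
import math
from collections import Counter

# Score candidate Chinese/English word pairs by PMI over aligned sentences:
# PMI(z, e) = log( p(z, e) / (p(z) * p(e)) ), with sentence-level counts.

def pmi_scores(pairs):
    """pairs: list of (zh_words, en_words) aligned sentence pairs."""
    zh_count, en_count, co_count = Counter(), Counter(), Counter()
    n = len(pairs)
    for zh_words, en_words in pairs:
        zh_set, en_set = set(zh_words), set(en_words)
        zh_count.update(zh_set)
        en_count.update(en_set)
        co_count.update((z, e) for z in zh_set for e in en_set)
    return {(z, e): math.log((c / n) / ((zh_count[z] / n) * (en_count[e] / n)))
            for (z, e), c in co_count.items()}

corpus = [(["词典", "抽取"], ["dictionary", "extraction"]),
          (["词典", "构建"], ["dictionary", "construction"]),
          (["抽取", "方法"], ["extraction", "method"])]
scores = pmi_scores(corpus)
# the true translation pair ("词典", "dictionary") outscores mismatched pairs
```

Pairs whose PMI exceeds a chosen threshold would then be added to the seed dictionary.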

6.
In Korean text, foreign words, which are mostly transliterations of English words, are frequently used. Foreign words are usually very important index terms in Korean information retrieval, since most of them are technical terms or names, so accurate foreign word extraction is crucial for high retrieval performance. However, accurate foreign word extraction is not easy, because it inevitably involves word segmentation and most foreign words are unknown. In this paper, we present an effective foreign word recognition and extraction method. In order to extract foreign words accurately, we developed an effective word segmentation method that handles unknown foreign words. Our segmentation method exploits both unknown-word information acquired through automatic dictionary compilation and foreign word recognition information. Unlike previously proposed methods, our HMM-based foreign word recognition method does not require a large set of labeled examples for model training.

7.
Work performed under the SPElling Error Detection COrrection Project (SPEEDCOP), supported by the National Science Foundation (NSF) at Chemical Abstracts Service (CAS) to devise effective automatic methods of detecting and correcting misspellings in scholarly and scientific text, is described. The investigation was applied to 50,000 word/misspelling pairs collected from six datasets: Chemical Industry Notes (CIN), Biological Abstracts (BA), Chemical Abstracts (CA), American Chemical Society primary journal keyboarding (ACS), Information Science Abstracts (ISA), and Distributed On-Line Editing (DOLE), a CAS internal dataset especially suited to spelling error studies. The purpose of this study was to determine the utility of trigram analysis in the automatic detection and/or correction of misspellings. Computer programs were developed to collect data on trigram distribution in each dataset and to explore the potential of trigram analysis for detecting spelling errors, verifying correctly spelled words, locating the error site within a misspelling, and distinguishing between the basic kinds of spelling errors. The results of the trigram analysis were largely independent of the dataset to which it was applied, but trigram compositions varied with the dataset. The trigram analysis technique developed located the error site within a misspelling accurately, but did not distinguish effectively between different error types or between valid words and misspellings. However, methods for increasing its accuracy are suggested.
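The error-site idea can be sketched as follows: trigrams of a misspelling that never occur in a reference vocabulary point at the likely error position. The toy vocabulary is illustrative; it is not the SPEEDCOP data, and real trigram analysis also uses positional frequency statistics.

```python
# Flag trigram positions in a word that are unseen in a reference vocabulary;
# a cluster of unseen trigrams marks the likely error site.

def trigrams(word: str):
    padded = f"  {word} "  # pad so edge letters also form trigrams
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

def build_trigram_set(vocabulary):
    seen = set()
    for w in vocabulary:
        seen.update(trigrams(w))
    return seen

def error_sites(word: str, known: set) -> list:
    """Indices of trigrams in `word` never observed in the vocabulary."""
    return [i for i, t in enumerate(trigrams(word)) if t not in known]

vocab = ["chemical", "abstracts", "service", "spelling"]
known = build_trigram_set(vocab)
print(error_sites("chemxcal", known))  # unseen trigrams cluster around the 'x'
```

Note the limitation reported in the abstract: this locates the error site but cannot by itself separate valid words from misspellings.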

8.
Automatic Recognition and Mining of Chinese Synonyms for Information Retrieval (total citations: 3)
To improve the efficiency of automatic synonym mining, this paper proposes methods for automatically recognizing and mining synonyms from dictionary definitions, using a hyperlink-analysis algorithm and a pattern-matching algorithm to extract synonyms from two different angles. The first part treats the defining/defined relation between words as a link relation. For a given word, all words linked to it are assembled into a word graph in which each node represents a related word and each arc represents the defining/defined relation. Hyperlink analysis combined with the PageRank algorithm is used to compute each word's PageRank value, which is taken as a measure of semantic similarity between words; a candidate synonym set is then generated for each word, and the best synonyms are recommended through a set of screening principles and methods. The second part analyzes how dictionary definitions are phrased, summarizes the patterns in which synonyms appear in definitions, and then uses pattern matching to recognize and mine synonyms. In addition, pattern matching was also tested for mining synonyms from Web pages and journal papers. The test results show that pattern matching and hyperlink analysis are feasible and practical for automatically recognizing and mining synonyms.
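The PageRank step can be sketched on a tiny definition-link graph, where an edge u -> v means "u's dictionary definition mentions v". The three-word graph and scores below are invented for illustration; the paper's screening rules are not reproduced.

```python
# Plain power-iteration PageRank over a directed word graph; high-ranking
# neighbours of a word become its candidate synonyms.

def pagerank(links, damping=0.85, iters=50):
    nodes = set(links) | {v for vs in links.values() for v in vs}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for u, outs in links.items():
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
        # redistribute the rank of dangling nodes uniformly
        dangling = sum(rank[n] for n in nodes if not links.get(n))
        for n in nodes:
            new[n] += damping * dangling / len(nodes)
        rank = new
    return rank

links = {"happy": ["glad", "joyful"], "glad": ["happy"], "joyful": ["happy"]}
rank = pagerank(links)
# "happy" is cited by both other definitions, so it ranks highest
```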

9.
Word sense disambiguation (WSD) is meant to assign the most appropriate sense to a polysemous word according to its context. We present a method for automatic WSD using only two resources: a raw text corpus and a machine-readable dictionary (MRD). The system learns a similarity matrix between word pairs from the unlabeled corpus and uses vector representations of the MRD's sense definitions, which are derived from the similarity matrix. In order to disambiguate all occurrences of polysemous words in a sentence, the system separately constructs an acyclic weighted digraph (AWD) for every occurrence of a polysemous word. The AWD is structured by considering the senses of the context words that occur with the target word in the sentence. After building the AWD for each polysemous word, the optimal path of the AWD is found with the Viterbi algorithm, and the sense on that path is assigned to the target word. In experiments, our system achieves 76.4% accuracy on semantically ambiguous Korean words.
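The optimal-path search can be sketched as Viterbi-style dynamic programming over an acyclic weighted graph of candidate senses. The graph, sense labels, and weights below are invented for illustration; the real weights come from the learned similarity matrix.

```python
# Best path through an acyclic weighted digraph (higher weight = better),
# computed by forward dynamic programming with backpointers, Viterbi-style.

def best_path(graph, start, end):
    """graph: {node: [(next_node, weight), ...]}, keys in topological order."""
    score = {start: 0.0}
    back = {}
    for u in graph:
        if u not in score:
            continue
        for v, w in graph[u]:
            if score[u] + w > score.get(v, float("-inf")):
                score[v] = score[u] + w
                back[v] = u
    path, node = [end], end
    while node != start:           # follow backpointers to recover the path
        node = back[node]
        path.append(node)
    return path[::-1], score[end]

graph = {"S": [("bank#1", 0.2), ("bank#2", 0.7)],
         "bank#1": [("E", 0.5)], "bank#2": [("E", 0.6)],
         "E": []}
path, score = best_path(graph, "S", "E")
print(path)  # -> ['S', 'bank#2', 'E']
```

The sense node on the winning path ("bank#2" here) is the one assigned to the target word.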

11.
12.
[Purpose/Significance] Extracting accurate topic words from massive amounts of microblog data provides a valuable reference for public-opinion analysis by governments and enterprises. [Method/Process] After analyzing the characteristics and shortcomings of traditional microblog topic-word extraction methods, this paper proposes a method based on semantic concepts and word co-occurrence. The method expands each microblog post from a short text into a longer one with a text-expansion strategy, extends the words in the text to semantic concepts with the help of a semantic dictionary, assigns word weights according to the structural characteristics of microblog text, and then extracts topic words by additionally considering word co-occurrence. [Results/Conclusions] Experiments show that the proposed algorithm outperforms traditional methods and effectively improves the performance of microblog topic-word extraction. [Innovation/Limitations] Combining semantic concepts with word co-occurrence for microblog topic-word extraction is a new exploration; a limitation is that the segmentation step may split some Internet neologisms incorrectly, slightly affecting extraction accuracy.
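The co-occurrence component can be sketched as follows: candidate words that co-occur with many other candidates across posts score higher. The toy posts are invented; the real method also applies text expansion, concept extension, and structural weights before this step.

```python
from collections import Counter
from itertools import combinations

# Score candidate topic words by summed pairwise co-occurrence across posts.

def cooccurrence_scores(posts):
    co = Counter()
    for words in posts:
        for a, b in combinations(sorted(set(words)), 2):
            co[(a, b)] += 1
    score = Counter()
    for (a, b), c in co.items():   # each pair count credits both members
        score[a] += c
        score[b] += c
    return score

posts = [["物价", "上涨", "民生"], ["物价", "上涨"], ["物价", "天气"]]
scores = cooccurrence_scores(posts)
# "物价" co-occurs with the most candidates, so it tops the ranking
```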

13.
In a preceding experiment in text-searching retrieval for cancer questions, search words were humanly selected with the aid of a medical dictionary and cancer textbooks. Recall results were (1) using only stems of question words (humanly stemmed): 20%; (2) adding dictionary search words: 29%; (3) adding also textbook search words: 70%. For the experiment reported here, computer procedures for using the medical dictionary to select search words were developed. Recall results were (1) for question stems (computer stemmed): 19%; (2) adding search words computer-selected from the dictionary: 24%. Thus the computer procedures, compared to human use of the dictionary, were 50% successful. Human and computer false retrieval rates were almost equal. Some hypotheses about computer selection of search words from textbooks are also described.

14.
[Purpose/Significance] To address the main problems and weak links in constructing patent technology-effect matrices, this paper proposes a construction method based on SAO (subject-action-object) structures and word vectors. [Method/Process] A Python program extracts SAO structures from patent abstracts and identifies technology terms and effect terms in them; combining a domain dictionary with a patent-domain corpus, Word2Vec and WordNet are used to compute semantic similarity between terms; a network-based topic-clustering algorithm labels topics automatically; and a technology-effect matrix is built from co-occurrence relations among SAO structures. [Results/Conclusions] The method achieves automatic construction of technology-effect matrices based on SAO structures and word vectors, improves the soundness of the derived technology-effect topics and the accuracy of patent classification labeling, and offers a new approach to automating technology-effect matrix construction.
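The similarity primitive underlying the Word2Vec step is cosine similarity between word vectors, sketched below. The three-dimensional vectors and the two effect terms are toy values standing in for real Word2Vec output, and the WordNet component is not reproduced.

```python
import math

# Cosine similarity between word vectors: near-synonymous effect terms point
# in nearly the same direction, unrelated terms do not.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

vectors = {"降噪": [0.9, 0.1, 0.2],    # "noise reduction"
           "减噪": [0.85, 0.15, 0.25],  # near-synonym of the above
           "成本": [0.1, 0.9, 0.3]}     # "cost", unrelated effect
print(cosine(vectors["降噪"], vectors["减噪"]))  # close to 1.0
```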

15.
The fundamental idea of the work reported here is to extract index phrases from texts with the help of a single-word concept dictionary and a thesaurus containing relations among concepts. The work is based on the fact that, within every phrase, the single words the phrase is composed of are related in a certain well-defined manner, the type of relation holding between concepts depending only on the concepts themselves. Relations can therefore be stored in a semantic network. The algorithm described extracts single-word concepts from texts and combines them into phrases using the semantic relations between these concepts that are stored in the network. The results obtained show that phrase extraction from texts by this semantic method is possible and offers many advantages over other (purely syntactic or statistical) methods in the precision and completeness of the text's meaning representation. The results also show, however, that some syntactic and morphological "filtering" should be included for efficiency reasons.

16.
To address the loss of semantics in the vector space model, a semantic dictionary (HowNet) is applied to the text classification process to improve classification accuracy. For polysemy in Chinese text, an improved word-level semantic similarity measure is proposed: word sense disambiguation selects the appropriate sense before similarity is computed, words whose similarity exceeds a threshold are clustered, and the text feature vector is thereby reduced in dimensionality. A semantics-based text classification algorithm is given and analyzed experimentally. The results show that the algorithm effectively improves Chinese text classification.
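The threshold-clustering step for dimensionality reduction can be sketched greedily. The hard-coded similarity table stands in for the HowNet-based measure described above, and the single-pass clustering rule is an assumption for illustration.

```python
# Greedy single-pass clustering: each word joins the first cluster whose
# representative word is similar enough, otherwise it starts a new cluster.
# Merged clusters then collapse to one feature dimension.

def cluster_by_threshold(words, sim, threshold=0.8):
    clusters = []
    for w in words:
        for cluster in clusters:
            if sim(cluster[0], w) >= threshold:
                cluster.append(w)
                break
        else:
            clusters.append([w])
    return clusters

toy_sim = {("电脑", "计算机"): 0.95, ("苹果", "香蕉"): 0.3}
def sim(a, b):
    return toy_sim.get((a, b), toy_sim.get((b, a), 0.0))

print(cluster_by_threshold(["电脑", "计算机", "苹果", "香蕉"], sim))
# -> [['电脑', '计算机'], ['苹果'], ['香蕉']]
```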

17.
李旭晖  周怡 《情报科学》2022,40(3):99-108
[Purpose/Significance] The essence of keyword extraction is to find the words that express a document's core semantic content, so analyzing senses rather than surface words better matches the real need. Building on the TextRank word-graph model, this paper analyzes at the sense level and proposes a keyword extraction method based on semantic clustering. [Method/Process] First, word vectors trained with HowNet sememe information are clustered so that words with similar meanings are grouped together, giving each word a semantic category. Then node scores are computed using the window co-occurrence frequency of the words' semantic categories as the transition probability between words. Finally, TF-IDF values are combined with the node scores by weighted summation to correct the extraction results. [Results/Conclusions] Overall, the proposed method improves extraction quality, outperforming the TextRank algorithm by 12.66%, 13.77%, and 13.16% in precision (P), recall (R), and F-measure, respectively. [Innovation/Limitations] The novelty lies in replacing words with senses and analyzing the relatedness network at the semantic level, and in introducing HowNet-sememe-informed word vectors to keyword extraction for the first time. The limitation is that the method depends on HowNet information and therefore applies only to Chinese text.

18.
In contrast with their monolingual counterparts, little attention has been paid to the effects that misspelled queries have on the performance of Cross-Language Information Retrieval (CLIR) systems. The present work makes a first attempt to fill this gap by extending our previous work on monolingual retrieval to study the impact that the progressive addition of misspellings to input queries has, this time, on the output of CLIR systems. Two approaches for dealing with this problem are analyzed. The first is automatic spelling correction, for which we consider two algorithms: one for the correction of isolated words and one for correction based on the linguistic context of the misspelled word. The second is the use of character n-grams both as index terms and as translation units, seeking to take advantage of their inherent robustness and language independence. All these approaches have been tested on a Spanish-to-English CLIR system, that is, Spanish queries on English documents. Real, user-generated spelling errors have been used under a methodology that allows us to study the effectiveness of the different approaches and their behavior when confronted with different error rates. The results show the great sensitivity of classic word-based approaches to misspelled queries, although spelling correction techniques can mitigate the negative effects. The use of character n-grams, on the other hand, provides great robustness against misspellings.
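The robustness of character n-grams can be illustrated with a set-overlap measure: a misspelled query still shares most of its n-grams with the intended word. The Dice coefficient used here is one common choice, assumed for illustration rather than taken from the paper.

```python
# Character n-grams as index terms: overlap between n-gram sets survives
# misspellings far better than exact word matching does.

def char_ngrams(text: str, n: int = 3) -> set:
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def overlap(a: str, b: str, n: int = 3) -> float:
    """Dice coefficient over character n-gram sets."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return 2 * len(ga & gb) / (len(ga) + len(gb))

# a transposition error still leaves a strong n-gram overlap
print(overlap("information", "informatoin"))
```

An exact-match index would score this misspelled query zero, while the shared trigrams keep the n-gram score high.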

19.
In the Internet environment, the volume of news grows at a massive rate, making intelligent classification and knowledge extraction urgent. This work therefore studies how, starting from an existing keyword dictionary, to discover new words and add the extracted out-of-vocabulary words back into the original lexicon, thereby building a keyword dictionary of appropriate size, full coverage, and easy maintenance. Using a large-scale news corpus as the experimental resource, it adopts a new-word recognition method that segments text with an N-gram algorithm and filters out non-name candidates using a keyword extraction dictionary, a stop-word dictionary, and similar resources. Evaluation of the experimental results shows the method to be simple and practicable.
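The N-gram discovery step can be sketched as follows: frequent character n-grams that are absent from the existing dictionary become new-word candidates. The corpus, dictionary, and frequency threshold are toy values, and the stop-word filtering stage is omitted.

```python
from collections import Counter

# Candidate new words = frequent character n-grams not already in the lexicon.

def discover_new_words(texts, dictionary, n=2, min_freq=2):
    counts = Counter()
    for text in texts:
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    return [g for g, c in counts.most_common()
            if c >= min_freq and g not in dictionary]

texts = ["区块链技术发展", "区块链应用", "技术应用"]
dictionary = {"技术", "应用", "发展"}
print(discover_new_words(texts, dictionary))  # fragments of the new word "区块链"
```

In practice overlapping frequent fragments (here "区块" and "块链") are merged and filtered before being added to the lexicon.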

20.
Arabic is a morphologically rich language that presents significant challenges to many natural language processing applications, because a word often conveys complex meanings decomposable into several morphemes (i.e. prefix, stem, suffix). By segmenting words into morphemes, we can improve the extraction of English/Arabic translation pairs from parallel texts. This paper describes two algorithms, and their combination, for automatically extracting an English/Arabic bilingual dictionary from parallel texts in the Internet Archive, using an Arabic light stemmer as a preprocessing step. Before applying the Arabic light stemmer, total system precision and recall were 88.6% and 81.5%, respectively; after applying it to the Arabic documents, precision and recall rose to 91.6% and 82.6%, respectively.
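Light stemming strips a small list of common affixes without full morphological analysis. The sketch below uses romanized tokens and invented affix lists purely to show the mechanism; it is not a real Arabic light stemmer.

```python
# "Light stemming": remove one known prefix and one known suffix, keeping a
# minimum stem length so short tokens are left untouched.

PREFIXES = ("al", "wa", "bi")   # illustrative, romanized
SUFFIXES = ("at", "un", "in")   # illustrative, romanized

def light_stem(token: str, min_stem: int = 3) -> str:
    for p in PREFIXES:
        if token.startswith(p) and len(token) - len(p) >= min_stem:
            token = token[len(p):]
            break
    for s in SUFFIXES:
        if token.endswith(s) and len(token) - len(s) >= min_stem:
            token = token[:-len(s)]
            break
    return token

print(light_stem("alkitabun"))  # -> "kitab"
```

Conflating surface variants onto one stem is what lifts the reported precision and recall of translation-pair extraction.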
