Similar articles
18 similar articles found (search time: 154 ms)
1.
[Purpose/Significance] To help job seekers with an information science background understand the market's specific demands for information science talent, and to guide educators in designing the discipline's curriculum and talent-training goals. [Method/Process] Announcements for information-science-related positions were collected from major domestic recruitment websites to build a recruitment corpus; based on the CRF machine-learning model and the Bi-LSTM-CRF, BERT, and BERT-Bi-LSTM-CRF deep-learning models, five classes of recruitment entities were extracted from the corpus and analyzed. [Result/Conclusion] In comparative experiments on automatic entity extraction over an existing corpus of 2,000 annotated job announcements, the best-performing CRF model achieved an overall F-score of 85.07%, reaching 91.67% on the "major requirement" entity; the BERT model scored even higher on that entity type, at 92.10%. The CRF model was then used to extract entities from all 5,287 qualifying announcements, build a social network of recruitment entities, and mine implicit knowledge through informetric and social network analysis.
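The taggers compared above all emit BIO label sequences over tokens. A minimal sketch of decoding such output into typed entity spans (the tag names and the example are hypothetical, not the paper's actual five-class schema):

```python
def bio_to_entities(tokens, tags):
    """Collect (entity_text, entity_type) spans from a BIO-tagged
    token sequence, as produced by a CRF/BiLSTM-CRF-style tagger."""
    entities, cur_toks, cur_type = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):               # a new entity begins
            if cur_toks:
                entities.append(("".join(cur_toks), cur_type))
            cur_toks, cur_type = [tok], tag[2:]
        elif tag.startswith("I-") and cur_toks and tag[2:] == cur_type:
            cur_toks.append(tok)               # the current entity continues
        else:                                  # "O" or an inconsistent tag
            if cur_toks:
                entities.append(("".join(cur_toks), cur_type))
            cur_toks, cur_type = [], None
    if cur_toks:                               # flush a trailing entity
        entities.append(("".join(cur_toks), cur_type))
    return entities

# Hypothetical tagger output for a job-posting snippet
print(bio_to_entities(["情", "报", "学", "硕", "士"],
                      ["B-MAJOR", "I-MAJOR", "I-MAJOR", "B-DEGREE", "I-DEGREE"]))
# → [('情报学', 'MAJOR'), ('硕士', 'DEGREE')]
```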

2.
[Purpose/Significance] Literature in every discipline is growing rapidly, and there is an urgent need for a problem-solving-oriented approach that extracts research problems, solutions, and the solving relations between them from large volumes of scientific literature, as a basis for studying the evolution of domain knowledge. [Method/Process] The paper proposes a low-cost, practically applicable scheme for domain entity-relation extraction: drawing on the idea of word-embedding analogy, relation classification is achieved using only a small number of entity-relation pairs extracted from domain knowledge resources as a baseline. [Result/Conclusion] On a dataset in the artificial intelligence domain, an ensemble model based on the word-embedding-analogy scheme achieved F1 scores of 82.33, 81.49, and 74.81 for extracting solving relations, problem hierarchy relations, and method hierarchy relations, respectively. The ensemble model was then applied to the full dataset to extract entity relations, and the problem-solving-oriented evolution of AI domain knowledge was presented at the macro, meso, and micro levels.
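A sketch of the analogy idea with toy 2-d vectors and invented relation labels (not the paper's embeddings or label set): a candidate entity pair is assigned the relation whose seed embedding offset its own offset most resembles.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def classify_relation(e1, e2, seed_offsets):
    """Label the (e1, e2) pair with the relation whose seed offset
    (derived from a few known entity pairs) is closest by cosine."""
    offset = [b - a for a, b in zip(e1, e2)]
    return max(seed_offsets, key=lambda rel: cosine(offset, seed_offsets[rel]))

# Hypothetical seed offsets for two relation types
seeds = {"solves": [1.0, 0.0], "is_subproblem_of": [0.0, 1.0]}
print(classify_relation([0.2, 0.5], [1.9, 0.6], seeds))  # → solves
```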

3.
[Purpose/Significance] Online medical community Q&A texts are highly complex and weakly structured. Combining two deep-learning models, a convolutional neural network (CNN) and a bidirectional long short-term memory network (BiLSTM), with a conditional random field (CRF), this paper proposes and validates an entity recognition method suited to online medical Q&A text. [Method/Process] After the Q&A texts are cleaned and BIO-annotated, character-level features are extracted with CNN and BiLSTM separately; the two feature sets are fused and fed into a CRF to train an entity prediction model, and the Q&A texts are then run through the trained model to obtain the final entity recognition results. [Result/Conclusion] On the selected breast-cancer medical community Q&A dataset, the proposed method outperforms the other models, with precision of 92.3%, recall of 89.3%, and an F-score of 90.8%.
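The reported figures are internally consistent: the F-score is the harmonic mean of precision and recall.

```python
def f_score(precision, recall):
    """F1: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f_score(92.3, 89.3), 1))  # → 90.8
```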

4.
丁晟春, 方振, 王楠. 《现代情报》, 2009, 40(3): 103-110
[Purpose/Significance] To address the scattered, disordered, and fragmented nature of the multi-source, heterogeneous enterprise data found on public web platforms, a Bi-LSTM-CRF deep-learning model is proposed for named entity recognition in the business domain. [Method/Process] The method recognizes three types of named entities: full company names, abbreviated company names, and person names. [Result/Conclusion] Experiments show an average F-score of 90.85% across the three entity types, verifying the effectiveness of the proposed method and demonstrating that it improves named entity recognition in the business domain.

5.
[Purpose/Significance] To promote the use, sharing, and standardized management of health-related user-generated content (UGC), a knowledge annotation method aligned with users' cognition is proposed. [Method/Process] First, cognitive concepts are extracted from health UGC and grouped into four categories: cognitive emotion, cognitive need, cognitive style, and cognitive content. Second, the relations among concepts within each category are analyzed to build individual cognitive schemas, which are filtered and feature-fused into a group cognitive schema. Third, the group schema determines the entities and relations to annotate, from which a knowledge annotation framework for health UGC is designed. Finally, the framework is described semantically in OWL. [Result/Conclusion] Compared with folksonomy-style annotation, the proposed method achieves better recall, precision, and F1, and better satisfies users' needs for acquiring health knowledge.

6.
李枫林, 柯佳. 《情报科学》, 2018, 36(3): 169-176
[Purpose/Significance] Extracting structured entities and their relations from large amounts of unstructured text is foundational work for optimizing search engines, building knowledge graphs, and developing intelligent question-answering systems. [Method/Process] This paper surveys how different neural network models implement entity-relation extraction under deep-learning frameworks, compares the strengths and weaknesses of each model, discusses combining distant supervision and attention mechanisms to further improve relation extraction performance, and finally points out the shortcomings of deep-learning models and future directions. [Result/Conclusion] Experimental findings show that convolutional neural networks excel at capturing local key information within a sentence; recurrent neural networks excel at capturing sentence context and can reflect higher-order relations among multiple entities in a sentence; and recursive neural networks suit relation extraction in short texts. Entity-relation extraction would perform better still if models could incorporate prior knowledge of natural language.

7.
Application of Ontology-Based Cross-Language Information Retrieval in Digital Libraries (cited by: 2; self-citations: 0; citations by others: 2)
鲍丽倩, 张自然. 《现代情报》, 2011, 31(7): 169-172
This paper first introduces cross-language information retrieval (CLIR) and related techniques, noting the shortcomings of current CLIR technology; it then discusses the limitations of traditional CLIR techniques in digital library applications, which motivates ontology-based cross-language techniques. It goes on to propose an ontology-based CLIR system for digital libraries, detailing the system's workflow with emphasis on constructing the digital library's cross-language domain ontology. Because ontologies provide a sound concept hierarchy and support logical reasoning, semantic expansion of the source and target languages improves the retrieval efficiency of the digital library's cross-language system.

8.
[Purpose/Significance] Automatic learning and extraction of domain concepts is the first fundamental task in automated domain-ontology construction. [Method/Process] Taking unsupervised learning and end-to-end recognition as the theoretical and technical basis, the paper first compares mainstream word-embedding models and proposes an automatic generation method for domain-text feature word embeddings based on Word2Vec and its Skip-Gram architecture; it then builds a BLSTM-CRF concept extraction model with a self-attention mechanism that takes IOB-annotated text as input; finally, it conducts an experimental study and evaluation in the resources-and-environment domain. [Result/Conclusion] The model can automatically extract domain concepts and shows a degree of robustness in recognizing new domain concepts and terms. [Limitations] Model accuracy has not yet peaked and needs further optimization.
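A minimal illustration of how Skip-Gram training examples are formed (toy tokens, not the paper's domain corpus): each word predicts the words in a symmetric context window around it.

```python
def skipgram_pairs(tokens, window=2):
    """Enumerate (center, context) training pairs as in the
    Skip-Gram objective: each word predicts its neighbours."""
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:                       # a word is not its own context
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(["资源", "环境", "科学"], window=1))
# → [('资源', '环境'), ('环境', '资源'), ('环境', '科学'), ('科学', '环境')]
```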

9.
鲍玉来, 耿雪来, 飞龙. 《现代情报》, 2019, 39(8): 132-136
[Purpose/Significance] Extracting knowledge elements from unstructured corpora is a key step toward building knowledge graphs. This paper explores a relation extraction method for the tourism domain using the convolutional neural network (CNN), a deep-learning model. [Method/Process] Relevant data were crawled from professional tourism websites to build a corpus; part of the corpus was manually annotated to serve as training and test sets; word segmentation, vectorization, and the CNN model were implemented in Python for relation extraction experiments. [Result/Conclusion] The results show that CNN-based relation extraction on unstructured tourism text achieves satisfactory performance (precision 0.77, recall 0.76, F1 0.76). After optimization through manual proofreading, the extracted results can underpin tourism knowledge graph construction, domain-ontology construction, and related work.

10.
刘春丽, 陈爽. 《现代情报》, 2023, (12): 143-163
[Purpose/Significance] Mining, exploiting, and evaluating the knowledge entities in scientific literature matters greatly for knowledge discovery, knowledge-network construction, and exploring latent links between pieces of knowledge. With the development and application of machine learning, deep learning, and large language models, knowledge entity extraction has changed dramatically since the earliest manual-annotation-based techniques; in recent years, scholars have also explored the evaluation of knowledge entities in scientific literature and made considerable progress. [Method/Process] Based on a survey of the relevant literature, the paper reviews and compares the strengths and weaknesses of manual annotation, rule-based methods, traditional machine learning, deep learning, and large language models for knowledge entity extraction, and lists relevant datasets, software and tools, and specialized conferences. It then organizes the latest progress in knowledge entity evaluation along five lines: mention frequency; altmetrics and their influencing factors; entity co-occurrence networks and entity diffusion/citation networks; entity-based peer review; and entity-based paper novelty and clinical translation progress. [Result/Conclusion] In view of current problems, the paper suggests that in a concrete extraction task the choice of method should weigh multiple factors before selecting one or more models; and that entity evaluation should emphasize diversified, reliable, valid, systematic, and standardized indicators, with empirical analysis of the factors influencing them and of the correlational and causal relations among them, to build knowledge-entity-based paper evaluation...

11.
We study the selection of transfer languages for different Natural Language Processing tasks, specifically sentiment analysis, named entity recognition and dependency parsing. In order to select an optimal transfer language, we propose to utilize different linguistic similarity metrics to measure the distance between languages and make the choice of transfer language based on this information instead of relying on intuition. We demonstrate that linguistic similarity correlates with cross-lingual transfer performance for all of the proposed tasks. We also show that there is a statistically significant difference in choosing the optimal language as the transfer source instead of English. This allows us to select a more suitable transfer language which can be used to better leverage knowledge from high-resource languages in order to improve the performance of language applications lacking data. For the study, we used datasets from eight different languages from three language families.
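One simple family of linguistic similarity metrics compares languages by their typological feature vectors. A sketch with invented binary feature vectors (the feature values below are illustrative, not real typological data):

```python
def typological_similarity(f1, f2):
    """Share of matching binary typological features; a higher score
    suggests a linguistically closer, hence better, transfer source."""
    matches = sum(a == b for a, b in zip(f1, f2))
    return matches / len(f1)

# Hypothetical feature vectors (e.g. word order, case marking, ...)
finnish  = [1, 1, 0, 1, 0]
estonian = [1, 1, 0, 1, 1]
english  = [1, 0, 1, 0, 0]
print(typological_similarity(finnish, estonian))  # → 0.8
print(typological_similarity(finnish, english))   # → 0.4
```

Under these made-up features, Estonian would be preferred over English as a transfer source for Finnish, which is the kind of non-English choice the study finds significant.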

12.
Technical terms and proper names constitute a major problem in dictionary-based cross-language information retrieval (CLIR). However, technical terms and proper names in different languages often share the same Latin or Greek origin, being thus spelling variants of each other. In this paper we present a novel two-step fuzzy translation technique for cross-lingual spelling variants. In the first step, transformation rules are applied to source words to render them more similar to their target language equivalents. The rules are generated automatically using translation dictionaries as source data. In the second step, the intermediate forms obtained in the first step are translated into a target language using fuzzy matching. The effectiveness of the technique was evaluated empirically using five source languages and English as a target language. The two-step technique performed better, in some cases considerably better, than fuzzy matching alone. Even the first step on its own showed promising results.
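A sketch of the two-step idea with invented transformation rules and a toy lexicon (the paper learns its rules automatically from translation dictionaries), using Python's `difflib` for the fuzzy-matching step:

```python
import difflib

# Illustrative source-to-English spelling rules, not the learned ones
RULES = [("f", "ph"), ("ia", "y"), ("k", "c")]

def transform(word):
    """Step 1: apply rules to push a source word toward the spelling
    of its target-language variant."""
    for src, tgt in RULES:
        word = word.replace(src, tgt)
    return word

def fuzzy_translate(word, target_lexicon):
    """Step 2: fuzzy-match the intermediate form against the
    target-language lexicon."""
    inter = transform(word)
    hits = difflib.get_close_matches(inter, target_lexicon, n=1, cutoff=0.6)
    return hits[0] if hits else None

lexicon = ["philosophy", "physics", "theory"]
print(fuzzy_translate("filosofia", lexicon))  # → philosophy
```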

13.
[Purpose/Significance] Classifying semantic relations between entities is one of the key tasks of information extraction; converting unstructured text into structured knowledge is foundational work for constructing domain ontologies and knowledge graphs and for developing question-answering and information retrieval systems. [Method/Process] This paper traces the development of entity semantic relation classification in detail, reviews and summarizes the latest domestic and international research of the past five years from the perspectives of techniques and application domains, and points out the shortcomings of existing research and future directions. [Result/Conclusion] Popular deep-learning methods discard the laborious feature engineering of traditional shallow machine-learning methods and learn text features automatically; experiments show that incorporating lexical and syntactic features into neural network models and introducing attention mechanisms effectively improves relation classification performance.

14.
In this paper, we propose a new learning method for extracting bilingual word pairs from parallel corpora in various languages. In cross-language information retrieval, the system must deal with various languages. Therefore, automatic extraction of bilingual word pairs from parallel corpora with various languages is important. However, previous works based on statistical methods are insufficient because of the sparse data problem. Our learning method automatically acquires rules, which are effective to solve the sparse data problem, only from parallel corpora without any prior preparation of a bilingual resource (e.g., a bilingual dictionary, a machine translation system). We call this learning method Inductive Chain Learning (ICL). Moreover, the system using ICL can extract bilingual word pairs even from bilingual sentence pairs for which the grammatical structures of the source language differ from the grammatical structures of the target language because the acquired rules have the information to cope with the different word orders of source language and target language in local parts of bilingual sentence pairs. Evaluation experiments demonstrated that the recalls of systems based on several statistical approaches were improved through the use of ICL.

15.
We study the selection of transfer languages for automatic abusive language detection. Instead of preparing a dataset for every language, we demonstrate the effectiveness of cross-lingual transfer learning for zero-shot abusive language detection. This way we can use existing data from higher-resource languages to build better detection systems for low-resource languages. Our datasets are from seven different languages from three language families. We measure the distance between the languages using several language similarity measures, especially by quantifying the World Atlas of Language Structures. We show that there is a correlation between linguistic similarity and classifier performance. This discovery allows us to choose an optimal transfer language for zero-shot abusive language detection.

16.
Text categorization pertains to the automatic learning of a text categorization model from a training set of preclassified documents on the basis of their contents and the subsequent assignment of unclassified documents to appropriate categories. Most existing text categorization techniques deal with monolingual documents (i.e., written in the same language) during the learning of the text categorization model and category assignment (or prediction) for unclassified documents. However, with the globalization of business environments and advances in Internet technology, an organization or individual may generate and organize into categories documents in one language and subsequently archive documents in different languages into existing categories, which necessitate cross-lingual text categorization (CLTC). Specifically, cross-lingual text categorization deals with learning a text categorization model from a set of training documents written in one language (e.g., L1) and then classifying new documents in a different language (e.g., L2). Motivated by the significance of this demand, this study aims to design a CLTC technique with two different category assignment methods, namely, individual- and cluster-based. Using monolingual text categorization as a performance reference, our empirical evaluation results demonstrate the cross-lingual capability of the proposed CLTC technique. Moreover, the classification accuracy achieved by the cluster-based category assignment method is statistically significantly higher than that attained by the individual-based method.

17.
王仁武, 孟现茹, 孔琦. 《现代情报》, 2018, 38(10): 57-64
[Purpose/Significance] The study uses a deep-learning GRU recurrent neural network combined with a conditional random field (CRF) to label annotated Chinese text sequences and extract entity-attribute pairs from online review text. [Method/Process] First, following a designed text-sequence annotation scheme, the segmented review corpus is annotated with named entities and their attributes, yielding word sequences, part-of-speech sequences, and label sequences; the word and part-of-speech sequences are then converted into distributed word-vector representations and fed into the GRU recurrent network; finally, the output layer uses a CRF, whose output labels are the entities or attributes. [Result/Conclusion] Experimental results show that the method reduces entity-attribute extraction to named-entity labeling, using the deep-learning GRU to capture the contextual semantics of the input and the CRF to capture the dependencies among output labels, giving it a substantial practical advantage over traditional rule-based or generic machine-learning methods.

18.
Probabilistic topic models are unsupervised generative models which model document content as a two-step generation process, that is, documents are observed as mixtures of latent concepts or topics, while topics are probability distributions over vocabulary words. Recently, a significant research effort has been invested into transferring the probabilistic topic modeling concept from monolingual to multilingual settings. Novel topic models have been designed to work with parallel and comparable texts. We define multilingual probabilistic topic modeling (MuPTM) and present the first full overview of the current research, methodology, advantages and limitations in MuPTM. As a representative example, we choose a natural extension of the omnipresent LDA model to multilingual settings called bilingual LDA (BiLDA). We provide a thorough overview of this representative multilingual model from its high-level modeling assumptions down to its mathematical foundations. We demonstrate how to use the data representation by means of output sets of (i) per-topic word distributions and (ii) per-document topic distributions coming from a multilingual probabilistic topic model in various real-life cross-lingual tasks involving different languages, without any external language pair dependent translation resource: (1) cross-lingual event-centered news clustering, (2) cross-lingual document classification, (3) cross-lingual semantic similarity, and (4) cross-lingual information retrieval. We also briefly review several other applications present in the relevant literature, and introduce and illustrate two related modeling concepts: topic smoothing and topic pruning. In summary, this article encompasses the current research in multilingual probabilistic topic modeling. By presenting a series of potential applications, we reveal the importance of the language-independent and language pair independent data representations by means of MuPTM. 
We provide clear directions for future research in the field by providing a systematic overview of how to link and transfer aspect knowledge across corpora written in different languages via the shared space of latent cross-lingual topics, that is, how to effectively employ learned per-topic word distributions and per-document topic distributions of any multilingual probabilistic topic model in various cross-lingual applications.
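An illustration of why the per-document topic distributions are language independent (toy three-topic mixtures, not model output): documents in different languages can be compared directly through their topic distributions, for instance via Jensen-Shannon divergence.

```python
import math

def js_similarity(p, q):
    """1 minus the Jensen-Shannon divergence (base-2) between two
    per-document topic distributions; 1.0 means identical mixtures."""
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]   # midpoint distribution
    return 1 - (kl(p, m) + kl(q, m)) / 2

# Hypothetical topic mixtures for an English and a Spanish document
doc_en = [0.7, 0.2, 0.1]
doc_es = [0.6, 0.3, 0.1]
print(js_similarity(doc_en, doc_es))
```

No translation resource is involved at comparison time: only the shared latent topic space, which is the point the survey emphasizes.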
