Purpose: To describe the results of showing farmer learning videos through different types of volunteers.
Design/Methodology/Approach: Semi-structured interviews with volunteers from different occupational groups in Bangladesh, and a phone survey with 227 respondents.
Findings: Each occupational group acted differently. Shopkeepers, tillage service providers, and agricultural input and machine dealers reached fairly small audiences. Tea stall owners had large, male audiences. Non-governmental organisations and community-based organisations reached more women. The cable TV (dish-line) operators showed the videos on local TV, but some were reluctant to do so again. The Union Information Service Centres showed the videos and reached women viewers. Half of the official government extension agents surveyed also showed the videos publicly.
Practical Implications: This video featured maize, wheat and rice seeding machinery. Because the machinery is complex and requires hands-on training, this first video aimed to expose tillage and sowing service providers and farmers to the machinery, without trying to teach them how to use it. But some farmers were so interested that they watched the video many times to learn more about the equipment. Before farmers and service providers decide to buy machinery for direct seeding, they still want to see and learn from demonstration plantings, to examine first-hand how the crop behaves when planted with the new equipment.
Originality/Value: Video can be an effective way of sharing high-quality information with a large audience, if properly distributed.
Recently, question series have become one focus of research in question answering. These series are comprised of individual factoid, list, and “other” questions organized around a central topic, and represent abstractions of user–system dialogs. Existing evaluation methodologies have yet to catch up with this richer task model, as they fail to take into account contextual dependencies and different user behaviors. This paper presents a novel simulation-based methodology for evaluating answers to question series that addresses some of these shortcomings. Using this methodology, we examine two different behavior models: a “QA-styled” user and an “IR-styled” user. Results suggest that an off-the-shelf document retrieval system is competitive with state-of-the-art QA systems in this task. Advantages and limitations of evaluations based on user simulations are also discussed.
Question categorization, which suggests one of a set of predefined categories for a user’s question according to the question’s topic or content, is a useful technique in user-interactive question answering systems. In this paper, we propose an automatic method for question categorization in a user-interactive question answering system. This method includes four steps: feature space construction, topic-wise word identification and weighting, semantic mapping, and similarity calculation. We first construct the feature space from all accumulated questions and calculate the feature vector of each predefined category, which contains certain accumulated questions. When a new question is posted, the semantic pattern of the question is used to identify and weight the important words of the question. After that, the question is semantically mapped into the constructed feature space to enrich its representation. Finally, the similarity between the question and each category is calculated based on their feature vectors. The category with the highest similarity is assigned to the question. The experimental results show that our proposed method achieves good categorization precision and outperforms the traditional categorization methods on the selected test questions.
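The final step described above, assigning the category whose feature vector is most similar to the new question's vector, can be illustrated with a minimal sketch. This is not the paper's implementation: it uses simple bag-of-words counts and cosine similarity, omits the semantic mapping and weighting steps, and all function names and example data are illustrative assumptions.

```python
# Minimal sketch of similarity-based question categorization.
# Assumptions: bag-of-words vectors over a fixed vocabulary and cosine
# similarity; the paper's semantic mapping and word weighting are omitted.
import math
from collections import Counter

def vectorize(text, vocab):
    """Map a question to a bag-of-words count vector over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    """Cosine similarity between two vectors; 0.0 if either is all zeros."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def categorize(question, categories, vocab):
    """Assign the category whose accumulated-question vector is most
    similar to the new question's vector."""
    qv = vectorize(question, vocab)
    return max(categories, key=lambda c: cosine(qv, categories[c]))

# Illustrative example: two categories built from accumulated questions.
vocab = ["install", "error", "price", "refund", "crash"]
categories = {
    "technical": vectorize("install error crash install", vocab),
    "billing": vectorize("price refund price", vocab),
}
print(categorize("why does the install crash", categories, vocab))  # → technical
```

In practice the category vectors would be built from all accumulated questions per category, and the question vector would first be enriched by the semantic mapping step before the similarity comparison.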