1.
Hormonal imbalance, inflammation and alterations in synaptic plasticity are reported to play a crucial role in the pathogenesis of schizophrenia. The objective of the study was to assess serum levels of brain-derived neurotrophic factor (BDNF) and their association with interleukin-23 (IL-23), testosterone and disease severity in schizophrenia. Forty cases and 40 controls were included in the study. Serum levels of BDNF, IL-23 and testosterone were estimated in all subjects. Disease severity was assessed using the Positive and Negative Syndrome Scale (PANSS). The study was conducted at a tertiary care hospital in South India. Results were compared between the two groups using the Mann–Whitney U test. Spearman correlation analysis was used to assess the association between biochemical parameters and PANSS scores. IL-23 and testosterone were significantly increased and BDNF was significantly reduced in schizophrenia cases compared with controls. BDNF was negatively correlated with IL-23 (r = −0.400, p = 0.011), the positive symptom subscale (r = −0.393, p = 0.012), the general psychopathology subscale (r = −0.407, p = 0.009) and the total symptom score (r = −0.404, p = 0.010). There was no significant association of IL-23 or testosterone with disease severity in schizophrenia cases. In summary, BDNF was reduced in schizophrenia cases and negatively associated with IL-23 and disease severity scores.
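The statistical workflow described above (Mann–Whitney U for group comparison, Spearman correlation for association with severity scores) can be sketched as follows. This is a minimal illustration using synthetic data, not the study's dataset; the distributions and effect sizes are assumptions chosen only to make the example run.

```python
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(0)

# Hypothetical serum BDNF levels (ng/mL) for 40 cases and 40 controls;
# cases are simulated with a lower mean, mirroring the reported direction.
bdnf_cases = rng.normal(20, 4, 40)
bdnf_controls = rng.normal(28, 4, 40)

# Compare the two groups with a two-sided Mann–Whitney U test.
u_stat, p_value = mannwhitneyu(bdnf_cases, bdnf_controls,
                               alternative="two-sided")

# Hypothetical PANSS total scores constructed to correlate negatively
# with BDNF, as in the reported findings.
panss_total = 110 - 2.0 * bdnf_cases + rng.normal(0, 5, 40)

# Spearman correlation between BDNF and disease severity.
rho, p_corr = spearmanr(bdnf_cases, panss_total)

print(f"Mann-Whitney p = {p_value:.3g}, Spearman rho = {rho:.3f}")
```

With data simulated this way, the U test detects the group difference and the Spearman coefficient comes out negative, matching the direction of the study's reported associations.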
2.
With the widespread use of learning analytics (LA), ethical concerns about fairness have been raised. Research shows that LA models may be biased against students of certain demographic subgroups. Although fairness has gained significant attention in the broader machine learning (ML) community over the last decade, it is only recently that attention has been paid to fairness in LA. Furthermore, which unfairness mitigation algorithm or metric to use in a particular context remains largely unknown. On this premise, we performed a comparative evaluation of selected unfairness mitigation algorithms regarded in the fair ML community as having shown promising results. Using 3-year program dropout data from an Australian university, we evaluated how these algorithms contribute to ethical LA by testing hypotheses across fairness and performance metrics. Interestingly, our results show that data bias does not always result in predictive bias. Perhaps unsurprisingly, our test of the fairness-utility tradeoff shows that ensuring fairness does not always lead to a drop in utility; indeed, under specific circumstances it can enhance utility. Our findings may, to some extent, guide fairness algorithm and metric selection for a given context.

Practitioner notes

What is already known about this topic
  • LA is increasingly being used to generate actionable insights about students and drive student success.
  • LA models have been found to make discriminatory decisions against certain student demographic subgroups—therefore, raising ethical concerns.
  • Research on fairness in education is nascent. Only a few works have examined fairness in LA and followed up by ensuring fair LA models.
What this paper adds
  • A juxtaposition of unfairness mitigation algorithms across the entire LA pipeline showing how they compare and how each of them contributes to fair LA.
  • Ensuring ethical LA does not always lead to a dip in performance; sometimes it actually improves performance.
  • Fairness work in LA has focused on some form of outcome equality; however, equality of outcome may be possible only when the playing field is levelled.
Implications for practice and/or policy
  • Based on the desired notion of fairness and on which segment of the LA pipeline is accessible, a fairness-minded decision maker may be able to decide which algorithm to use in order to achieve their ethical goals.
  • LA practitioners can carefully aim for more ethical LA models without trading significant utility by selecting algorithms that find the right balance between the two objectives.
  • Fairness-enhancing technologies should be used cautiously as guides, not final decision makers. Human domain experts must be kept in the loop to move fair LA beyond equality toward equitable LA.
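To make the fairness-utility tradeoff discussed above concrete, here is a minimal sketch of evaluating a model's predictions on both a fairness metric (demographic parity difference, i.e. the gap in positive-prediction rates between two subgroups) and a utility metric (accuracy). The group labels, predictions and function names are hypothetical illustrations, not the paper's actual algorithms or data.

```python
def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rate between two subgroups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gr in zip(y_pred, groups) if gr == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()
    return abs(a - b)

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical dropout predictions for students in two demographic
# subgroups "A" and "B" (1 = predicted to drop out).
groups = ["A"] * 5 + ["B"] * 5
y_true = [1, 0, 1, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 0, 0, 0, 0]

dpd = demographic_parity_difference(y_pred, groups)
acc = accuracy(y_true, y_pred)
print(f"demographic parity difference = {dpd:.2f}, accuracy = {acc:.2f}")
```

An unfairness mitigation algorithm would aim to shrink the parity gap; the paper's point is that, depending on the context, doing so need not reduce (and may even improve) the accuracy side of this pair of metrics.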