SemEval-2015 Task 3: Answer Selection in Community Question Answering

Abstract: Community Question Answering (cQA) opens new and interesting research directions for the traditional Question Answering (QA) field, e.g., the exploitation of the interaction between users and of the structure of related posts. In this context, we organized SemEval-2015 Task 3 on Answer Selection in cQA, which included two subtasks: (a) classifying answers as good, bad, or potentially relevant with respect to the question, and (b) answering a YES/NO question with yes, no, or unsure, based on the list of all answers. We set subtask A for both Arabic and English, on two relatively different cQA domains: the Qatar Living website for English, and a Quran-related website for Arabic. We used crowdsourcing on Amazon Mechanical Turk to label a large English training dataset, which we released to the research community. Thirteen teams participated in the challenge with a total of 61 submissions: 24 primary and 37 contrastive. The best systems achieved an official score (macro-averaged F1) of 57.19 and 63.7 for the English subtasks A and B, and 78.55 for the Arabic subtask A.

© 2015 Association for Computational Linguistics. Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). This item is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.

We describe the work carried out by the DCU team on the Semantic Textual Similarity task at SemEval-2015. We learn a regression model to predict a semantic similarity score between a sentence pair. Our system exploits distributional semantics in combination with tried-and-tested features from previous tasks. Our team submitted three runs for each of the five English test sets. For two of the test sets, belief and headlines, our best system ranked second and fourth out of the 73 submitted systems, and our best submission averaged over all test sets ranked 26th out of the 73 systems.
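To make the STS setup above concrete, here is a minimal sketch of a regression model that scores a sentence pair. It assumes scikit-learn is available; the two features (token Jaccard overlap and a length ratio) and the toy training pairs are illustrative stand-ins, not the paper's actual distributional-semantics feature set.

```python
import numpy as np
from sklearn.linear_model import Ridge

def featurize(s1, s2):
    """Hypothetical featurizer: token Jaccard overlap plus a length ratio."""
    t1, t2 = set(s1.lower().split()), set(s2.lower().split())
    jaccard = len(t1 & t2) / len(t1 | t2) if t1 | t2 else 0.0
    length_ratio = min(len(t1), len(t2)) / max(len(t1), len(t2)) if t1 and t2 else 0.0
    return [jaccard, length_ratio]

# Toy sentence pairs with made-up gold scores on the STS 0-5 similarity scale.
pairs = [
    ("a man is playing a guitar", "a man plays the guitar", 4.8),
    ("a dog runs in the park", "the stock market fell today", 0.2),
    ("two kids are cooking", "children are preparing food", 4.0),
]
X = np.array([featurize(a, b) for a, b, _ in pairs])
y = np.array([score for _, _, score in pairs])

# Fit a regression model mapping pair features to a similarity score.
model = Ridge(alpha=1.0).fit(X, y)
print(model.predict([featurize("a boy plays guitar", "a man is playing a guitar")]))
```

The regression framing means any feature that correlates with human similarity judgments can simply be appended to the feature vector, which is what makes mixing distributional features with surface-overlap features straightforward.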
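The official Task 3 score quoted above is macro-averaged F1: F1 is computed per class and the per-class scores are averaged with equal weight, so frequent and rare classes count the same. A minimal sketch of the computation, using the answer classes named in the abstract (the gold/predicted label lists are hypothetical):

```python
def macro_f1(gold, pred, labels=("Good", "Bad", "Potential")):
    """Macro-averaged F1: per-class F1 scores averaged with equal weight."""
    f1s = []
    for label in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
        fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
        fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return 100 * sum(f1s) / len(f1s)  # reported on a 0-100 scale, as in the task

# Toy example with hypothetical labels:
gold = ["Good", "Bad", "Potential", "Good", "Bad"]
pred = ["Good", "Bad", "Good", "Good", "Potential"]
print(round(macro_f1(gold, pred), 2))
```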