Data Science Asked on March 16, 2021
I’m currently working on a multiple-choice question answering system. Each training example consists of a question, 4 options and the correct answer, and I need to predict the correct answer among the 4 options. Sometimes there is also a supporting paragraph. For example:
1. Which among the following is measured using a Vernier Caliper?
[A] Dimensions
[B] Time
[C] Sound
[D] Temperature
Answer: A [Dimensions]
Chapter text: [Book chapter related to dimensions, time, sound and temperature]
How can I feed this input to a deep learning model?
I thought of two approaches:
… and the correct answer as a one-hot encoding => [1, 0, 0, 0]
Generating a fixed-size word embedding for each text:
- Chapter text = [1,1024]
- Text = [1,1024]
- option_a = [1,1024]
- option_b = [1,1024]
- option_c = [1,1024]
- option_d = [1,1024]
final_input = concat([Chapter text, Text, option_a, option_b, option_c, option_d]) => [1, 6144]
and the correct answer as a one-hot encoding => [1, 0, 0, 0]
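For concreteness, a minimal sketch of this second approach in PyTorch, assuming an external sentence encoder already produces the [1, 1024] vectors; the ConcatMCQA class name, the MLP sizes and the random placeholder embeddings are illustrative assumptions:

```python
import torch
import torch.nn as nn

EMB_DIM = 1024  # assumed size of each pre-computed text embedding

class ConcatMCQA(nn.Module):
    """Classifies the concatenated [chapter, question, 4 options] vector into 4 classes."""
    def __init__(self, emb_dim=EMB_DIM, num_options=4):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(6 * emb_dim, 512),   # chapter + question + 4 options = 6 * 1024 = 6144
            nn.ReLU(),
            nn.Linear(512, num_options),
        )

    def forward(self, chapter, question, options):
        # chapter, question: [batch, emb_dim]; options: list of 4 tensors of shape [batch, emb_dim]
        x = torch.cat([chapter, question] + options, dim=-1)  # [batch, 6144]
        return self.classifier(x)                             # logits over the 4 options

# Random vectors stand in for real sentence embeddings here.
model = ConcatMCQA()
chapter = torch.randn(1, EMB_DIM)
question = torch.randn(1, EMB_DIM)
options = [torch.randn(1, EMB_DIM) for _ in range(4)]
logits = model(chapter, question, options)    # [1, 4]
target = torch.tensor([0])                    # index form of the one-hot label [1, 0, 0, 0]
loss = nn.CrossEntropyLoss()(logits, target)
```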
Is this a good representation for understanding and reasoning over text for the MCQA task?
Most of the current question answering architectures share the same basic idea: they are typically based on the Bidirectional Attention Flow (BiDAF) model, although it was designed for a slightly different task, and the pre-trained word embeddings and RNNs in that model are nowadays usually replaced with BERT-like models. In 2018, there was a question answering competition at SemEval where many interesting ideas on this problem were presented.
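As an illustration of the BERT-like replacement mentioned above, here is one possible way to feed the example question to a multiple-choice head. This is a sketch using the Hugging Face transformers library with bert-base-uncased, which is an assumption for the example, not a specific recommendation from this answer:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased")

chapter = "Book chapter related to dimensions, time, sound and temperature."
question = "Which among the following is measured using a Vernier Caliper?"
options = ["Dimensions", "Time", "Sound", "Temperature"]

# One (chapter, question + option) pair per option; the batch dimension is added
# afterwards, so the tensors end up with shape [1, 4, seq_len].
first = [chapter] * len(options)
second = [f"{question} {opt}" for opt in options]
enc = tokenizer(first, second, padding=True, truncation=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

labels = torch.tensor([0])                 # "Dimensions" is the correct option
outputs = model(**inputs, labels=labels)
print(outputs.loss, outputs.logits)        # logits: [1, 4] scores over the options
```

The important difference from the concatenation approach in the question is that all four (chapter, question + option) pairs go through the same encoder and attend over the full text, and the four scores compete through a softmax, rather than being classified from independently pooled sentence vectors.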
Answered by Jindřich on March 16, 2021