
Closed-Domain Question Answering that doesn't answer questions

Data Science Asked by Anirban Saha on September 3, 2021

I've been exploring closed-domain question answering implementations trained on the SQuAD 2.0 dataset. Ideally, such a model should not answer questions that the context corpus doesn't contain answers to. But when implementing these models using the Haystack repo or the FARM repo, I find that they always return an answer, even when they shouldn't. Is there any implementation available that refrains from answering when it can't find a suitable answer?

References:

  1. https://colab.research.google.com/github/deepset-ai/haystack/blob/update-tutorials/tutorials/Tutorial3_Basic_QA_Pipeline_without_Elasticsearch.ipynb#scrollTo=KS4nTwxIbRb6

  2. https://github.com/deepset-ai/FARM

  3. https://github.com/deepset-ai/haystack

  4. https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2

  5. https://colab.research.google.com/drive/1UrKlHlf68hD3wwQDTctx2cQs6FMUxLLH?usp=sharing#scrollTo=J4jxYsxaG77O
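For context on the behaviour being asked about, a common library-agnostic workaround is to threshold the reader's confidence score and abstain when no candidate clears it. A minimal sketch (the prediction dicts only loosely mimic an extractive reader's output, and the threshold value is an illustrative assumption that would need tuning on a validation set):

```python
# Library-agnostic sketch: abstain from answering when the reader's best
# candidate score falls below a confidence threshold. The prediction format
# and the threshold value are assumptions for illustration, not any
# particular library's actual API.

NO_ANSWER_THRESHOLD = 0.5  # assumption: tune on a validation set


def best_answer_or_none(predictions, threshold=NO_ANSWER_THRESHOLD):
    """Return the highest-scoring answer, or None if nothing is confident enough."""
    if not predictions:
        return None
    top = max(predictions, key=lambda p: p["score"])
    return top["answer"] if top["score"] >= threshold else None


# A confident prediction passes; a low-confidence one is rejected.
confident = [{"answer": "Paris", "score": 0.93}, {"answer": "Lyon", "score": 0.12}]
unsure = [{"answer": "42", "score": 0.08}]

print(best_answer_or_none(confident))  # Paris
print(best_answer_or_none(unsure))     # None
```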

One Answer

The question answering model in simpletransformers accepts training data with an is_impossible flag. During prediction, it does not generate answers for every question, since the model learns which questions should be answered.

You can find details below:

https://pypi.org/project/simpletransformers/
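A minimal sketch of the SQuAD 2.0-style training data that simpletransformers' QuestionAnsweringModel consumes, with the is_impossible flag marking a question the context cannot answer. The contexts and questions here are made up for illustration, and the model calls are left commented since they download weights; consult the library's docs for exact arguments.

```python
# Sketch of the SQuAD 2.0-style training format used by simpletransformers'
# QuestionAnsweringModel. Setting is_impossible=True with an empty answers
# list is what teaches the model to abstain on unanswerable questions.
# The example context and questions are invented for illustration.

train_data = [
    {
        "context": "Haystack is an open-source framework for building search systems.",
        "qas": [
            {
                "id": "q1",
                "question": "What is Haystack?",
                "is_impossible": False,
                "answers": [
                    {
                        "text": "an open-source framework for building search systems",
                        "answer_start": 12,  # character offset of the answer in context
                    }
                ],
            },
            {
                # Unanswerable from this context, so the gold label is "no answer".
                "id": "q2",
                "question": "Who founded Haystack?",
                "is_impossible": True,
                "answers": [],
            },
        ],
    }
]

# Training/prediction (commented out: downloads model weights and is only
# practical on a GPU; see the simpletransformers docs for exact arguments):
# from simpletransformers.question_answering import QuestionAnsweringModel
# model = QuestionAnsweringModel("bert", "bert-base-cased")
# model.train_model(train_data)
# answers, probabilities = model.predict(to_predict)
```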

Answered by Ilker Kurtulus on September 3, 2021
