Challenges - Tasks 10a, 10b, Synergy10, DisTEMIST - Year 10

This year the BioASQ challenge will comprise the following tasks. Participants may choose to participate in any or all of the tasks and their subtasks.

BioASQ Task Synergy: Biomedical Semantic QA For COVID-19
Task Synergy will use benchmark datasets of test biomedical questions for COVID-19, in English. The participants will have to respond to each test question with relevant articles (in English, from designated article repositories), relevant snippets (from the relevant articles), exact answers (e.g., named entities in the case of factoid questions) and 'ideal' answers (English paragraph-sized summaries). No special training questions are available for this task; instead, expert feedback will be provided incrementally, based on participant responses in each round. Using this feedback, the participants can improve their systems and provide better answers for persisting and/or new questions. Meanwhile, the participants may also train their systems using the training datasets from previous versions of task Synergy, as well as of task 10b, both available in the BioASQ Participants Area. All the questions are constructed and assessed by biomedical experts from around Europe. Participation in the task can be partial, i.e. participants may enter the task in any of the rounds.
The Synergy Task will run in four rounds, starting with approximately 100 questions for COVID-19 in the first round on December 6, 2021. The questions will persist in later rounds until fully answered. In addition, new versions of the questions or entirely new questions may also be added in later rounds. Separate winners will be announced for each round. Participation in the task can be partial; for example, it is acceptable to participate in only some of the rounds, to return only relevant articles (or only article snippets), or to return only exact answers (or only 'ideal' answers). System responses will be manually assessed and feedback on the responses will be provided at the end of each round.

BioASQ Task 10a: Large-scale online biomedical semantic indexing
This task will be based on the standard process followed by PubMed to index journal abstracts. The participants will be asked to classify new PubMed documents, written in English, as they become available online, before PubMed curators annotate (in effect, classify) them manually. The classes will come from the MeSH hierarchy; they will be the subject headings that are currently used to manually index the abstracts, excluding those that are already provided by the authors of each article. As new manual annotations become available, they will be used to evaluate the classification performance of participating systems (which classify articles before they are manually annotated), using standard IR measures (e.g., precision, recall, accuracy), as well as their hierarchical variants. The participants will be able to train their classifiers using the whole history of manually annotated abstracts.
Task 10a will run for three consecutive periods (batches) of 5 weeks each. The first batch will start on February 7, 2022. Separate winners will be announced for each batch. Participation in the task can be partial, i.e. in some of the batches.
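
As an illustration of the standard flat measures mentioned above, the minimal Python sketch below computes micro-averaged precision, recall and F1 for multi-label MeSH indexing. It is not the official evaluation code of the challenge, and the PMIDs and MeSH descriptor identifiers used are placeholders.

```python
# Illustrative sketch (not the official BioASQ evaluation code): micro-averaged
# precision, recall and F1 over multi-label MeSH annotations.

def micro_prf(gold: dict, predicted: dict):
    """gold, predicted: mapping from PMID to a set of MeSH heading identifiers."""
    tp = fp = fn = 0
    for pmid, gold_labels in gold.items():
        pred_labels = predicted.get(pmid, set())
        tp += len(gold_labels & pred_labels)   # labels correctly assigned
        fp += len(pred_labels - gold_labels)   # labels assigned but not in gold
        fn += len(gold_labels - pred_labels)   # gold labels that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


# Hypothetical gold and predicted annotations for two abstracts.
gold = {"12345678": {"D003920", "D007333"}, "23456789": {"D009369"}}
pred = {"12345678": {"D003920"}, "23456789": {"D009369", "D003920"}}
print(micro_prf(gold, pred))  # -> (0.666..., 0.666..., 0.666...)
```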
 

BioASQ Task 10b: Biomedical Semantic QA (involves IR, QA, summarization)
Task 10b will use benchmark datasets containing training and test biomedical questions, in English, along with gold standard (reference) answers. The participants will have to respond to each test question with relevant concepts (from designated terminologies and ontologies), relevant articles (in English, from designated article repositories), relevant snippets (from the relevant articles), relevant RDF triples (from designated ontologies), exact answers (e.g., named entities in the case of factoid questions) and 'ideal' answers (English paragraph-sized summaries). More than 4,200 training questions (that were used as dry-run or test questions in previous years) are already available, along with their gold standard answers (relevant concepts, articles, snippets, exact answers, summaries). About 500 new test questions will be used this year. All the questions are constructed by biomedical experts from around Europe.
The test dataset of Task 10b will be released in batches, each containing approximately 100 questions. The first batch will start on March 09, 2022. Separate winners will be announced for each batch. Participation in the task can be partial; for example, it is acceptable to participate in only some of the batches, to return only relevant articles (and no concepts, triples, article snippets), or to return only exact answers (or only 'ideal' answers). System responses will be evaluated both automatically and manually.
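
For concreteness, the sketch below shows, as a Python dictionary, the kinds of elements a participating system returns for a single test question. The field names and values are illustrative assumptions only; the exact submission format is specified in the official BioASQ guidelines.

```python
# Illustrative sketch of a system response for one Task 10b test question.
# Field names and values are hypothetical; the official BioASQ guidelines
# define the actual submission format.
example_response = {
    "id": "question_id",                 # identifier of the test question
    "documents": [                       # relevant articles from the designated repository
        "http://www.ncbi.nlm.nih.gov/pubmed/XXXXXXXX",
    ],
    "snippets": [                        # relevant snippets taken from those articles
        {
            "document": "http://www.ncbi.nlm.nih.gov/pubmed/XXXXXXXX",
            "text": "a short passage supporting the answer",
        },
    ],
    "concepts": ["concept URI from a designated terminology or ontology"],
    "triples": [                         # relevant RDF triples from designated ontologies
        {"s": "subject", "p": "predicate", "o": "object"},
    ],
    "exact_answer": ["a named entity"],  # e.g., for a factoid question
    "ideal_answer": "A paragraph-sized summary answering the question in English.",
}
```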

BioASQ Task DisTEMIST: Disease Text Mining and Indexing

The novel DisTEMIST task will focus on the recognition and indexing of diseases in medical documents, by posing subtasks on (1) indexing medical documents with controlled terminologies, (2) automatic detection and indexing of textual evidence, i.e. disease entity mentions in text, and (3) normalization of these disease mentions to terminologies.

 

The BioASQ DisTEMIST track will rely primarily on 1,000 clinical case report publications in Spanish (SciELO full-text articles) for indexing diseases with concept identifiers from SNOMED-CT, MeSH and ICD10-CM. A large silver-standard collection of additional case reports and medical abstracts will also be provided. The evaluation of systems for this task will use flat evaluation measures following the task 10a track (mainly micro-averaged F-measure, MiF).
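
To make the subtasks concrete, the sketch below shows a hypothetical DisTEMIST-style annotation: a disease mention detected in a Spanish clinical case text with character offsets (entity recognition) and its normalization to a terminology concept. The field names, the example sentence and the code shown are illustrative assumptions, not the official annotation schema.

```python
# Hypothetical DisTEMIST-style annotation: a detected disease mention with
# character offsets and its normalization to a terminology concept.
# Field names, the example sentence and the code are illustrative only.
annotation = {
    "document_id": "caso_clinico_0001",
    "text": "Paciente con diabetes mellitus tipo 2 de larga evolución.",
    "mention": "diabetes mellitus tipo 2",  # disease mention detected in the text
    "start_offset": 13,                     # character offsets of the mention
    "end_offset": 37,
    "code": "44054006",                     # assumed SNOMED CT concept for type 2 diabetes mellitus
}

# The mention string can be recovered from the offsets:
assert annotation["text"][annotation["start_offset"]:annotation["end_offset"]] == annotation["mention"]
```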

 

More information on this task will be available soon.
 
