BioASQ organizes challenges on biomedical semantic indexing and question answering (QA). The challenges include tasks relevant to hierarchical text classification, machine learning, information retrieval, QA from texts and structured data, multi-document summarization, and many other areas.

Monetary and other prizes are awarded to the best performing systems.


The Challenge

The BioASQ challenge comprises the following two tasks.

BioASQ Task on Large-scale online biomedical semantic indexing

This task is based on the standard process that PubMed follows to index journal abstracts. Participants are asked to classify new English-language PubMed documents as they become available online, before PubMed curators annotate (in effect, classify) them manually. The classes come from the MeSH hierarchy: they are the subject headings currently used to index the abstracts manually, excluding those already provided by the authors of each article. As new manual annotations become available, they are used to evaluate the classification performance of participating systems (which classify articles before they are manually annotated), using standard IR measures (e.g., precision, recall, accuracy) as well as hierarchical variants of them. Participants can train their classifiers on the whole history of manually annotated abstracts.
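The difference between flat and hierarchical measures can be sketched as follows. This is a minimal illustration, not BioASQ's actual evaluation code: the two-level parent map is a hypothetical stand-in for the real MeSH hierarchy, and the hierarchical variant shown here simply expands each label set with its ancestors before computing set-based precision and recall.

```python
# Flat vs. hierarchical precision/recall for multi-label MeSH-style
# indexing. The parent map is a made-up two-heading hierarchy.

def ancestors(label, parent):
    """All ancestors of `label` in the hierarchy, excluding the label itself."""
    out = set()
    while (label := parent.get(label)) is not None:
        out.add(label)
    return out

def expand(labels, parent):
    """Augment a label set with all ancestors of each label."""
    full = set(labels)
    for lab in labels:
        full |= ancestors(lab, parent)
    return full

def precision_recall(pred, gold):
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    return p, r

# Hypothetical hierarchy: "Lung Neoplasms" is a child of "Neoplasms".
parent = {"Lung Neoplasms": "Neoplasms", "Neoplasms": None}

gold = {"Lung Neoplasms"}
pred = {"Neoplasms"}  # near miss: the system predicted the parent heading

flat = precision_recall(pred, gold)            # (0.0, 0.0): no exact match
hier = precision_recall(expand(pred, parent),
                        expand(gold, parent))  # (1.0, 0.5): partial credit
```

The point of the hierarchical variant is visible in the example: a system that predicts the parent of the correct heading gets zero credit under flat measures but partial credit once ancestors are taken into account.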

BioASQ Task on Biomedical Semantic QA (involves IR, QA, summarization and more)

This task uses benchmark datasets containing development and test questions, in English, along with gold-standard (reference) answers. The benchmark datasets contain at least 500 questions and are constructed by a team of biomedical experts from around Europe. Participants have to respond with relevant concepts (from designated terminologies and ontologies), relevant articles (in English, from designated article repositories), relevant snippets (from the relevant articles), relevant RDF triples (from designated ontologies), exact answers (e.g., named entities in the case of factoid questions), and 'ideal' answers (paragraph-sized summaries), with the exact and 'ideal' answers both in English.
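For factoid questions, exact answers are commonly scored with rank-based measures such as strict accuracy (is the top-ranked candidate correct?) and mean reciprocal rank. The sketch below illustrates these two measures under that assumption; the questions, candidate lists, and gold answers are made up for illustration.

```python
# Strict accuracy and mean reciprocal rank (MRR) over ranked candidate
# answers to factoid questions. All data below is hypothetical.

def first_correct_rank(candidates, gold_answers):
    """1-based rank of the first candidate matching any gold answer, or None."""
    gold = {g.lower() for g in gold_answers}
    for rank, cand in enumerate(candidates, start=1):
        if cand.lower() in gold:
            return rank
    return None

def strict_accuracy(system, gold):
    """Fraction of questions whose top-ranked candidate is correct."""
    hits = sum(1 for cands, ans in zip(system, gold)
               if first_correct_rank(cands, ans) == 1)
    return hits / len(gold)

def mean_reciprocal_rank(system, gold):
    """Average of 1/rank of the first correct candidate (0 if none)."""
    total = 0.0
    for cands, ans in zip(system, gold):
        rank = first_correct_rank(cands, ans)
        if rank is not None:
            total += 1.0 / rank
    return total / len(gold)

# Two toy factoid questions with ranked candidate answers.
system = [["BRCA1", "TP53"], ["insulin receptor", "insulin"]]
gold   = [["BRCA1"], ["insulin"]]

acc = strict_accuracy(system, gold)       # 0.5: only question 1 is right at rank 1
mrr = mean_reciprocal_rank(system, gold)  # (1/1 + 1/2) / 2 = 0.75
```

MRR rewards systems that rank the correct answer high even when it is not first, which is why it is often reported alongside strict accuracy.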
