
FDA Challenge to parse product labels to identify adverse reactions


(Patrick Ryan) #1

@tbergvall just pointed me to this ongoing challenge: https://sites.mitre.org/adeeval/

Per the challenge, “OSE is interested in a tool that would enable pharmacovigilance safety evaluators to automate the identification of labeled AEs which could facilitate triage, review and processing of safety case reports.” Basically, they want to know what is ‘known’ on the product labels, so that they can only flag ‘unknown’ signals that arise in their signal detection and clinical adjudication phase. I know this is a shared interest at Uppsala Monitoring Centre, and of interest to various safety organizations within the pharmaceutical industry.

Given the work by @jon_duke on SPLICER, @rkboyce on LAERTES, @ericaVoss on the common evidence model, @Chunhua_Weng’s lab in parsing CT.gov, as well as the expertise within the NLP workgroup (@HuaXu, @noemie, @nigam), this seems like a great opportunity for the OHDSI community to showcase its expertise and collaborate as a group to make an important contribution.

They’ve made a reference dataset available for training your model, which you can download once you register on their site. It appears submissions will be due at the end of January, based on a test set that will be released at a later time.

If anyone is interested in collaborating, we can use this forum thread to keep the conversation going.


(Hamed Abedtash) #2

@Patrick_Ryan Great opportunity! Would like to join the effort.


(Jon Duke) #3

Thanks for the heads up @Patrick_Ryan! We are on the case. @abedtash_hamed I will reach out to you. If anyone else is interested in participating, let me know!

Jon


(Erica Voss) #4

I’m interested.

@schuemie and I had been working with Rave Harpaz on a similar competition that occurred earlier; Rave presented his work at OHDSI late last year.

We (including @PaolaSaroufim & @rkboyce) are currently comparing his results to what we have in CEM from SPLICER to see if we can learn anything from that (i.e., what are the differences). Maybe that work can be informative for this effort as well?


(Susant Mallick, Amazon Digital Evangelist, HCLS) #5

Hi Patrick, I am interested. We are doing similar things at Amazon using SageMaker, but had training-data challenges… would love to support…


(Alex) #6

This sounds interesting. Yes, I am interested. It is a great opportunity.


(Patrick Ryan) #7

Team: Just pinging this thread to see if anybody in the community took the action to compete in this FDA SPL Challenge? Submissions are due in 3 days.

After today’s great talk by @JamesSWiggins on Amazon Comprehend Medical NLP parsing, some wondered how well that API would perform at the label parsing task…

Certainly our community could benefit from a drug-condition dataset that was reliably extracted from product labels, and I believe we have the collective skills in the community to do this work.


(Rkboyce) #8

Patrick, I am not participating in this specific challenge, but I did contribute a validation set to the related DDI extraction challenge that was run last year. Otherwise, I am currently working in the related space: an NLM-funded postdoc and I are working on predication-based semantic indexing over all SPLs. See this paper that applied predication-based semantic indexing methods to MEDLINE for ADE detection. The output of our work will be contributed to the OHDSI PV Investigation Workgroup and the broader community.

As for using AWS Comprehend for SPL parsing, I suspect that its performance is going to depend on the sections it runs over. It is difficult to find the kind of detail an NLP researcher wants about Comprehend, but it seems that it was trained exclusively on clinical notes, which have considerably different content than most sections of labeling (also, SPLs have tons of tables - how would Comprehend handle those?). Still, it would be interesting to see what performance results someone acquires.

I would also like to see how different the performance would be compared with the same task using UMLS SemRep + MetaMap. The latter is free (there is a cost to use Comprehend) and has the advantage of having been trained on a larger range of material that might be more closely related to the language of SPLs. Also, it maps directly to terminologies.


(Lucie Gattepaille) #9

Hi @Patrick_Ryan, UMC research has made an attempt at this. Nothing fancy though: some enhanced dictionary lookups, a bi-directional LSTM, and a search engine indexed on MedDRA. Will keep you posted, but from the look of it, we might suffer a drop in recall compared to the performance we observed on the validation set (F1 around 0.7, depending on the evaluation metric chosen).
Still, we are curious how such a rather simple system will perform.
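For readers unfamiliar with the dictionary-lookup component mentioned above, a bare-bones version can be sketched as follows. This is a minimal illustration, not UMC's actual system: the term set, function name, and matching logic are all hypothetical, and a real pipeline would match multi-word MedDRA terms, synonyms, and inflections rather than single lowercase tokens.

```python
import re

# Hypothetical stand-in for a MedDRA term dictionary (real systems load
# tens of thousands of preferred terms and lower-level terms).
MEDDRA_TERMS = {"nausea", "headache", "dizziness", "rash", "vomiting"}

def find_labeled_aes(label_text):
    """Return the set of dictionary terms found in a product-label section."""
    tokens = re.findall(r"[a-z]+", label_text.lower())
    # Plain unigram lookup; an 'enhanced' lookup would also handle
    # multi-word terms, abbreviations, and morphological variants.
    return {tok for tok in tokens if tok in MEDDRA_TERMS}

section = ("ADVERSE REACTIONS: The most common adverse reactions "
           "were nausea, headache, and rash.")
print(sorted(find_labeled_aes(section)))  # ['headache', 'nausea', 'rash']
```

A lookup like this tends to have high precision but poor recall on label text, which is why it is usually paired with a learned tagger (such as the bi-directional LSTM mentioned above).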

