BioBART

BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model [ACL-BioNLP 2022] Paper

Tsinghua University & International Digital Economy Academy.

Model Checkpoints

BioBART

  • Base Version (6 + 6 Layers): GanjinZero/biobart-base or IDEA-CCNL/Yuyuan-Bart-139M (same model)
  • Large Version (12 + 12 Layers): GanjinZero/biobart-large or IDEA-CCNL/Yuyuan-Bart-400M (same model)

P.S. Yuyuan is a character in the novel Fengshenbang. Chinese Introduction / English Introduction

Usage (with the Hugging Face transformers library):

from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('GanjinZero/biobart-base')
# model = AutoModel.from_pretrained('GanjinZero/biobart-large')
tok = AutoTokenizer.from_pretrained('GanjinZero/biobart-base')
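
Note that AutoModel loads the encoder-decoder without a language-modeling head. For generation-style use, a minimal sketch (the input sentence and decoding settings below are only illustrative, not from the paper) would instead load the checkpoint with a sequence-to-sequence head:

from transformers import AutoTokenizer, BartForConditionalGeneration

# Load BioBART with the conditional-generation (LM) head.
tok = AutoTokenizer.from_pretrained('GanjinZero/biobart-base')
model = BartForConditionalGeneration.from_pretrained('GanjinZero/biobart-base')

# Illustrative input; in practice BioBART is fine-tuned on the downstream task first.
text = "Myocardial infarction is commonly known as a heart attack."
inputs = tok(text, return_tensors='pt')
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tok.decode(outputs[0], skip_special_tokens=True))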

BioBART-v2

BioBART-v2, a new generative language model with domain-adaptive pre-training on a biomedical corpus, is released. Compared to BioBART, the main differences are a cross-domain vocabulary of 85,401 tokens and longer pre-training.

The detailed implementation and experimental results on biomedical downstream tasks are here.

  • Base Version (6 + 6 Layers): GanjinZero/biobart-v2-base
  • Large Version (12 + 12 Layers): GanjinZero/biobart-v2-large

Usage (with the Hugging Face transformers library):

from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('GanjinZero/biobart-v2-base')
# model = AutoModel.from_pretrained('GanjinZero/biobart-v2-large')
tok = AutoTokenizer.from_pretrained('GanjinZero/biobart-v2-base')
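
To see the vocabulary change described above, a quick sketch compares the two tokenizers (the exact counts printed may depend on the tokenizer version):

from transformers import AutoTokenizer

tok_v1 = AutoTokenizer.from_pretrained('GanjinZero/biobart-base')
tok_v2 = AutoTokenizer.from_pretrained('GanjinZero/biobart-v2-base')

# BioBART keeps BART's original vocabulary; BioBART-v2 uses the larger
# cross-domain vocabulary (85,401 tokens per the description above).
print(len(tok_v1), len(tok_v2))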

Citation

@inproceedings{yuan-etal-2022-biobart,
    title = "{B}io{BART}: Pretraining and Evaluation of A Biomedical Generative Language Model",
    author = "Yuan, Hongyi  and
      Yuan, Zheng  and
      Gan, Ruyi  and
      Zhang, Jiaxing  and
      Xie, Yutao  and
      Yu, Sheng",
    booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.bionlp-1.9",
    pages = "97--109",
    abstract = "Pretrained language models have served as important backbones for natural language processing. Recently, in-domain pretraining has been shown to benefit various domain-specific downstream tasks. In the biomedical domain, natural language generation (NLG) tasks are of critical importance, while understudied. Approaching natural language understanding (NLU) tasks as NLG achieves satisfying performance in the general domain through constrained language generation or language prompting. We emphasize the lack of in-domain generative language models and the unsystematic generative downstream benchmarks in the biomedical domain, hindering the development of the research community. In this work, we introduce the generative language model BioBART that adapts BART to the biomedical domain. We collate various biomedical language generation tasks including dialogue, summarization, entity linking, and named entity recognition. BioBART pretrained on PubMed abstracts has enhanced performance compared to BART and set strong baselines on several tasks. Furthermore, we conduct ablation studies on the pretraining tasks for BioBART and find that sentence permutation has negative effects on downstream tasks.",
}