TOMG-Bench: Evaluating LLMs on Text-based Open Molecule Generation

1Hong Kong Polytechnic University, 2Shanghai Jiao Tong University, 3Shanghai AI Lab
Under Review

*Indicates Equal Contribution

Abstract

In this paper, we propose the Text-based Open Molecule Generation Benchmark (TOMG-Bench), the first benchmark to evaluate the open-domain molecule generation capability of LLMs. TOMG-Bench comprises three major tasks: molecule editing (MolEdit), molecule optimization (MolOpt), and customized molecule generation (MolCustom). Each task is further divided into three subtasks, each containing 5,000 test samples. Given the inherent complexity of open molecule generation, we also developed an automated evaluation system that measures both the quality and the accuracy of the generated molecules. Our comprehensive benchmarking of 25 LLMs reveals the current limitations of, and potential areas for improvement in, text-guided molecule discovery. Furthermore, with the assistance of OpenMolIns, a specialized instruction tuning dataset proposed to address the challenges raised by TOMG-Bench, Llama3.1-8B outperforms all open-source general LLMs and even surpasses GPT-3.5-turbo by 46.5% on TOMG-Bench.

Leaderboard

Rank Model #Parameters A̅cc (%) wA̅cc (%)
(A̅cc: average accuracy across the nine subtasks; wA̅cc: weighted average accuracy. Models are ranked by wA̅cc.)
1 Claude-3.5 N/A 51.10 35.92
2 Gemini-1.5-pro N/A 52.25 34.80
3 GPT-4-turbo N/A 50.74 34.23
4 GPT-4o N/A 49.08 32.29
5 Claude-3 N/A 46.14 30.47
6 OpenMolIns-large (Llama-3.1-8B) 8B 43.10 27.22
7 OpenMolIns-xlarge (Galactica-125M) 125M 44.48 25.73
8 Llama3-70B-Instruct (Int4) 70B 38.54 23.93
9 OpenMolIns-large (Galactica-125M) 125M 39.28 23.42
10 OpenMolIns-medium (Galactica-125M) 125M 34.54 19.89
11 GPT-3.5-turbo N/A 28.93 18.58
12 OpenMolIns-small (Galactica-125M) 125M 24.17 15.18
13 Llama3.1-8B-Instruct 8B 26.26 14.09
14 Llama3-8B-Instruct 8B 26.40 13.75
15 chatglm-9B 9B 18.50 13.13(7)
16 OpenMolIns-light (Galactica-125M) 125M 20.95 13.13(6)
17 OpenMolIns-large (Llama3.2-1B) 1B 14.11 8.10
18 yi-1.5-9B 9B 14.10 7.32
19 Mistral-7B-Instruct-v0.2 7B 11.17 4.81
20 BioT5-base 250M 24.19 4.21
21 MolT5-large 780M 23.11 2.89
22 Llama-3.2-1B-Instruct 1B 3.95 1.99
23 MolT5-base 250M 11.11 1.30(0)
24 MolT5-small 80M 11.55 1.29(9)
25 Qwen2-7B-Instruct 7B 0.18 0.15
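The A̅cc column averages accuracy over the nine subtasks, while wA̅cc additionally discounts accuracy by the quality of the generated molecules. The exact quality measures are defined in the paper; the sketch below only illustrates the aggregation, with made-up per-subtask accuracies and a placeholder quality score (all values here are illustrative assumptions, not results from the leaderboard):

```python
from statistics import mean

# Hypothetical per-subtask accuracies for one model (fractions, not %).
# Subtask names follow the three tasks described in the abstract.
subtask_acc = {
    "MolEdit/1": 0.52, "MolEdit/2": 0.48, "MolEdit/3": 0.41,
    "MolOpt/1": 0.55, "MolOpt/2": 0.50, "MolOpt/3": 0.47,
    "MolCustom/1": 0.30, "MolCustom/2": 0.28, "MolCustom/3": 0.33,
}

# Placeholder quality score in [0, 1] for each subtask; the paper's
# evaluation system supplies the real per-molecule quality measures.
subtask_quality = {name: 0.65 for name in subtask_acc}

# A̅cc: plain mean of subtask accuracies.
avg_acc = mean(subtask_acc.values())

# wA̅cc: accuracy discounted by quality (an assumed combination rule).
w_acc = mean(a * subtask_quality[n] for n, a in subtask_acc.items())

print(f"Acc  = {avg_acc:.2%}")
print(f"wAcc = {w_acc:.2%}")
```

Because quality is at most 1, wA̅cc can never exceed A̅cc, which matches the two columns in the table above.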

BibTeX

@misc{li2024tomgbenchevaluatingllmstextbased,
      title={TOMG-Bench: Evaluating LLMs on Text-based Open Molecule Generation},
      author={Jiatong Li and Junxian Li and Yunqing Liu and Dongzhan Zhou and Qing Li},
      year={2024},
      eprint={2412.14642},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.14642},
}