Introduction
I am a Ph.D. candidate at the Department of Computing (COMP), The Hong Kong Polytechnic University, funded by the HKPFS. Before joining PolyU, I received my Master's degree in Information Technology (with Distinction) from the University of Melbourne, where I was supervised by Dr. Lea Frermann. In 2021, I received my Bachelor's degree in Information Security from Shanghai Jiao Tong University. I am a self-motivated researcher with a strong passion for scientific work. My current research interests lie in Natural Language Processing, Drug Discovery, and Recommender Systems. I have published several papers in top-tier conferences and journals, such as ACL, IJCAI, and IEEE TKDE. I served as a Program Chair of AAAI 2024 and AAAI 2025, and I am also a reviewer for several top-tier conferences and journals, such as NeurIPS and ICLR. I am always open to new opportunities and collaborations. If you are interested in my research or have any questions, please feel free to contact me.
Research Interest
- Natural Language Processing
- Drug Discovery
- Large Language Models
News
- Our paper, “TOMG-Bench: Evaluating LLMs on Text-based Open Molecule Generation”, has been released on arXiv. Paper Link. Please visit our benchmark homepage. Code is available here. You can also download our datasets via Huggingface Link.
- Our paper, “MolReFlect: Towards In-Context Fine-grained Alignments between Molecules and Texts”, has been released on arXiv. Paper Link
- Our paper, “Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective”, has been accepted by IEEE TKDE. Paper Link
- Our paper, “Large Language Models are In-Context Molecule Learners”, has been released on arXiv. Paper Link. We have also released the model weights via Huggingface Link.
- Our paper, “Recommender Systems in the Era of Large Language Models (LLMs)”, has been accepted by IEEE TKDE! More
Publications
- Jiatong Li, Junxian Li, Yunqing Liu, Dongzhan Zhou, and Qing Li. (2024). TOMG-Bench: Evaluating LLMs on Text-based Open Molecule Generation. arXiv preprint arXiv:2412.14642.
- Di Zhang, Jingdi Lei, Junxian Li, Xunzhi Wang, Yujie Liu, Zonglin Yang, Jiatong Li, Weida Wang, Suorong Yang, Jianbo Wu, Peng Ye, Wanli Ouyang, Dongzhan Zhou. (2024). Critic-V: VLM Critics Help Catch VLM Errors in Multimodal Reasoning. arXiv preprint arXiv:2411.18203.
- Jiatong Li, Yunqing Liu, Wei Liu, Jingdi Lei, Di Zhang, Wenqi Fan, Dongzhan Zhou, Yuqiang Li, and Qing Li. (2024). MolReFlect: Towards In-Context Fine-grained Alignments between Molecules and Texts. arXiv preprint arXiv:2411.14721.
- Di Zhang, Jianbo Wu, Jingdi Lei, Tong Che, Jiatong Li, Tong Xie, Xiaoshui Huang, Shufei Zhang, Marco Pavone, Yuqiang Li, Wanli Ouyang, Dongzhan Zhou. (2024). Llama-berry: Pairwise optimization for o1-like olympiad-level mathematical reasoning. arXiv preprint arXiv:2410.02884.
- Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li. (2024). Chemllm: A chemical large language model. arXiv preprint arXiv:2402.06852.
- Jiatong Li, Wei Liu, Zhihao Ding, Wenqi Fan, Yuqiang Li, Qing Li. (2024). Large Language Models are In-Context Molecule Learners. arXiv preprint arXiv:2403.04197.
- Lin Wang, Wenqi Fan, Jiatong Li, Yao Ma, and Qing Li. (2023). Fast graph condensation with structure-based neural tangent kernel. arXiv preprint arXiv:2310.11046. (Accepted by WWW 24)
- Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Jiliang Tang, and Qing Li. (2023). Recommender Systems in the Era of Large Language Models (LLMs). arXiv preprint arXiv:2307.02046. (Accepted by IEEE TKDE)
- Jiatong Li, Yunqing Liu, Wenqi Fan, Xiao-yong Wei, Hui Liu, Jiliang Tang, Qing Li. (2023). Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective. arXiv preprint arXiv:2306.06615. (Accepted by IEEE TKDE)
- Lea Frermann, Jiatong Li, Shima Khanehzar, Gosia Mikolajczak. (2023). Conflicts, Villains, Resolutions: Towards models of Narrative Media Framing. ACL 2023. (Oral Presentation)
- Wenqi Fan, Chengyi Liu, Yunqing Liu, Jiatong Li, Hang Li, Hui Liu, Jiliang Tang, Qing Li. (2023). Generative Diffusion Models on Graphs: Methods and Applications. IJCAI 2023.
- Qinghua Mao, Jiatong Li, and Kui Meng. (2022). Improving Chinese Named Entity Recognition by Search Engine Augmentation. arXiv preprint arXiv:2210.12662.
- Jiatong Li, Bin He, and Fei Mi. (2022). Exploring Effective Information Utilization in Multi-Turn Topic-Driven Conversations. arXiv preprint arXiv:2209.00250.
- Jiatong Li, Kui Meng. (2021). MFE-NER: Multi-feature Fusion Embedding for Chinese Named Entity Recognition. arXiv preprint arXiv:2109.07877.
- Chaowang Zhao, Jian Yang*, Jiatong Li. (2021). Generation of Hospital Emergency Department Layouts Based on Generative Adversarial Networks. Journal of Building Engineering, 43, 102539.
- Chaowang Zhao, Jian Yang*, Wuyue Xiong, Jiatong Li. (2021). Two Generative Design Methods of Hospital Operating Department Layouts Based on Healthcare Systematic Layout Planning and Generative Adversarial Network. Journal of Shanghai Jiaotong University (Science), 26, 103-115.
Scholarships
- Hong Kong PhD Fellowship Scheme
- Melbourne Graduate Grant
Awards
- Second Prize, Aecore Cup Digital Twin Application Competition 2021
- Finalist Award, Mathematical Contest in Modelling (MCM), 2020
Contact
Feel free to contact me via email at jiatong.li AT connect.polyu.hk