Published in Journal of Shanghai Jiaotong University (Science), 2021
This paper proposes two intelligent design processes based on healthcare systematic layout planning (HSLP) and generative adversarial networks (GANs), which aim to solve the problem of generating the functional plane layouts of the operating departments (ODs) of general hospitals.
Recommended citation: Zhao, C., Yang, J., Xiong, W., & Li, J. (2021). Two generative design methods of hospital operating department layouts based on healthcare systematic layout planning and generative adversarial network. Journal of Shanghai Jiaotong University (Science), 26, 103-115. https://link.springer.com/article/10.1007/s12204-021-2265-9
Published in arXiv preprint, 2021
In this paper, we propose a new method, Multi-Feature Fusion Embedding for Chinese Named Entity Recognition (MFE-NER), to strengthen the language patterns of Chinese and handle the character substitution problem in Chinese Named Entity Recognition.
Recommended citation: Li, J., & Meng, K. (2021). MFE-NER: multi-feature fusion embedding for Chinese named entity recognition. arXiv preprint arXiv:2109.07877. https://arxiv.org/abs/2109.07877
Published in Journal of Building Engineering, 2021
This paper takes the functional layout of the emergency departments (EDs) of general hospitals as its research object, combines hierarchical design concepts, and proposes an intelligent functional layout generation method for EDs. It aims to explore the application of intelligent algorithms in architectural design and to build an intelligent design method that solves the ED layout generation problem.
Recommended citation: Zhao, C. W., Yang, J., & Li, J. (2021). Generation of hospital emergency department layouts based on generative adversarial networks. Journal of Building Engineering, 43, 102539. https://www.sciencedirect.com/science/article/abs/pii/S235271022100396X
Published in arXiv preprint, 2022
In order to expand the information that Pretrained Language Models can utilize, we encode topic and dialogue history information using certain prompts with multiple channels of Fusion-in-Decoder (FiD) and explore the influence of three different channel settings.
Recommended citation: Li, J., He, B., & Mi, F. (2022). Exploring Effective Information Utilization in Multi-Turn Topic-Driven Conversations. arXiv preprint arXiv:2209.00250. https://arxiv.org/abs/2209.00250
Published in arXiv preprint, 2022
We propose a neural-based approach that performs semantic augmentation for Chinese NER using external knowledge from search engines.
Recommended citation: Mao, Q., Li, J., & Meng, K. (2022). Improving Chinese Named Entity Recognition by Search Engine Augmentation. arXiv preprint arXiv:2210.12662. https://arxiv.org/abs/2210.12662
Published in IJCAI 2023, 2023
In this paper, we first provide a comprehensive overview of generative diffusion models on graphs. In particular, we review representative algorithms for three variants of graph diffusion models, i.e., Score Matching with Langevin Dynamics (SMLD), Denoising Diffusion Probabilistic Model (DDPM), and Score-based Generative Model (SGM). Then, we summarize the major applications of generative diffusion models on graphs, with a specific focus on molecule and protein modeling. Finally, we discuss promising directions in generative diffusion models on graph-structured data.
Recommended citation: Wenqi Fan, Chengyi Liu, Yunqing Liu, Jiatong Li, Hang Li, Hui Liu, Jiliang Tang, Qing Li. (2023). Generative Diffusion Models on Graphs: Methods and Applications. IJCAI 2023.
Published in ACL 2023, 2023
Narrative Framing
Recommended citation: Lea Frermann, Jiatong Li, Shima Khanehzar, Gosia Mikolajczak. (2023). Conflicts, Villains, Resolutions: Towards models of Narrative Media Framing. ACL 2023. https://aclanthology.org/2023.acl-long.486/
Published in arXiv preprint, 2023
Molecule Discovery, Large Language Models
Recommended citation: Li, J., Liu, Y.*, Fan, W., Wei, X., Liu, H., Tang, J., Li, Q. (2023) Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective. arXiv preprint arXiv:2306.06615. https://arxiv.org/abs/2306.06615
Published in arXiv preprint, 2023
Recommender Systems, Large Language Models
Recommended citation: Fan, W., Zhao, Z., Li, J., Liu, Y., Mei, X., Wang, Y., Tang, J., & Li, Q. (2023). Recommender Systems in the Era of Large Language Models (LLMs). arXiv preprint arXiv:2307.02046. https://arxiv.org/abs/2307.02046
Published in WWW 2024, 2024
Graph Condensation
Recommended citation: Lin Wang, Wenqi Fan, Jiatong Li, Yao Ma, and Qing Li. (2023). Fast graph condensation with structure-based neural tangent kernel. arXiv preprint arXiv:2310.11046. (Accepted by WWW 24) https://arxiv.org/abs/2310.11046
Published in arXiv preprint, 2024
Molecule Discovery, Large Language Models
Recommended citation: Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li. (2024). ChemLLM: A chemical large language model. arXiv preprint arXiv:2402.06852. https://arxiv.org/pdf/2402.06852
Published in arXiv preprint, 2024
Molecule Discovery, Large Language Models
Recommended citation: Jiatong Li, Wei Liu, Zhihao Ding, Wenqi Fan, Yuqiang Li, Qing Li. (2024). Large Language Models are In-Context Molecule Learners. arXiv preprint arXiv:2403.04197. https://arxiv.org/pdf/2403.04197.pdf
Published in arXiv preprint, 2024
Large Language Models, Reasoning
Recommended citation: Di Zhang, Jianbo Wu, Jingdi Lei, Tong Che, Jiatong Li, Tong Xie, Xiaoshui Huang, Shufei Zhang, Marco Pavone, Yuqiang Li, Wanli Ouyang, Dongzhan Zhou. (2024). LLaMA-Berry: Pairwise optimization for o1-like olympiad-level mathematical reasoning. arXiv preprint arXiv:2410.02884. https://arxiv.org/pdf/2410.02884
Published in arXiv preprint, 2024
Molecule Discovery, Large Language Models
Recommended citation: Jiatong Li, Yunqing Liu, Wei Liu, Jingdi Le, Di Zhang, Wenqi Fan, Dongzhan Zhou, Yuqiang Li, and Qing Li. (2024). MolReFlect: Towards In-Context Fine-grained Alignments between Molecules and Texts. arXiv preprint arXiv:2411.14721 https://arxiv.org/pdf/2411.14721
Published in arXiv preprint, 2024
Vision Large Language Models, Reasoning
Recommended citation: Di Zhang, Jingdi Lei, Junxian Li, Xunzhi Wang, Yujie Liu, Zonglin Yang, Jiatong Li, Weida Wang, Suorong Yang, Jianbo Wu, Peng Ye, Wanli Ouyang, Dongzhan Zhou. (2024). Critic-V: VLM Critics Help Catch VLM Errors in Multimodal Reasoning. arXiv preprint arXiv:2411.18203. https://arxiv.org/pdf/2411.18203
Published in arXiv preprint, 2024
Molecule Discovery, Large Language Models
Recommended citation: Jiatong Li, Junxian Li, Yunqing Liu, Dongzhan Zhou, and Qing Li. (2024). TOMG-Bench: Evaluating LLMs on Text-based Open Molecule Generation. arXiv preprint arXiv:2412.14642. https://arxiv.org/pdf/2412.14642
Published:
Wenqi Fan, Jiatong Li, Zihuai Zhao, Yunqing Liu, Yiqi Wang
Teaching Assistant, The Hong Kong Polytechnic University, Department of Computing, 2023
I worked as a teaching assistant for COMP2411_20231_A DATABASE SYSTEMS in the Fall 2023 semester.
Teaching Assistant, The Hong Kong Polytechnic University, Department of Computing, 2024
I worked as a teaching assistant for COMP6703_20241_A ADVANCED TOPICS IN DATA ANALYTICS in the Fall 2024 semester. I was glad to work with Prof. Hongxia Yang to deliver the course content and help students with their assignments and projects.