Research
I do research in Machine Learning. My recent work focuses on Large Language Models, Graph Representation Learning, and Trustworthy ML. Selected publications are listed below.
OpenTab: Advancing Large Language Models as Open-domain Table Reasoners
Kezhi Kong, Jiani Zhang, Zhengyuan Shen, Balasubramaniam Srinivasan, Chuan Lei, Christos Faloutsos, Huzefa Rangwala, George Karypis.
ICLR, 2024

On the Reliability of Watermarks for Large Language Models
John Kirchenbauer*, Jonas Geiping*, Yuxin Wen, Manli Shu, Khalid Saifullah, Kezhi Kong, Kasun Fernando, Aniruddha Saha, Micah Goldblum, Tom Goldstein.
ICLR, 2024

GOAT: A Global Transformer on Large-scale Graphs
Kezhi Kong, Jiuhai Chen, John Kirchenbauer, Renkun Ni, C. Bayan Bruss, Tom Goldstein.
ICML, 2023

Robust Optimization as Data Augmentation for Large-scale Graphs
Kezhi Kong, Guohao Li, Mucong Ding, Zuxuan Wu, Chen Zhu, Bernard Ghanem, Gavin Taylor, Tom Goldstein.
CVPR, 2022

VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization
Kezhi Kong*, Mucong Ding*, Jingling Li, Chen Zhu, John P. Dickerson, Furong Huang, Tom Goldstein.
NeurIPS, 2021

A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs
Mucong Ding*, Kezhi Kong*, Jiuhai Chen*, John Kirchenbauer, Micah Goldblum, David Wipf, Furong Huang, Tom Goldstein.
DistShift Workshop @ NeurIPS (Spotlight), 2021

GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training
Chen Zhu, Renkun Ni, Zheng Xu, Kezhi Kong, W. Ronny Huang, Tom Goldstein.
NeurIPS, 2021

Data Augmentation for Meta-Learning
Renkun Ni, Micah Goldblum, Amr Sharaf, Kezhi Kong, Tom Goldstein.
ICML, 2021

SHOT-VAE: Semi-supervised Deep Generative Models With Label-aware ELBO Approximations
Hao-Zhe Feng, Kezhi Kong, Minghao Chen, Tianye Zhang, Minfeng Zhu, Wei Chen.
AAAI, 2021