Yichen (Zach) Wang   王奕辰

yichenzw at uchicago dot edu



I am an incoming Ph.D. student at the University of Chicago, where I will be advised by Prof. Mina Lee and Prof. Ari Holtzman within UChicago C&I. I am broadly interested in human-LLM interaction for evaluation and generation, LLM safety (e.g., machine-generated text detection, watermarking, and stress testing), planning in controlled generation, and alignment analysis.

Currently, I am an undergraduate at Xi'an Jiaotong University (XJTU) in the CS Honors Program, mentored by Prof. Xiaoming Liu and Prof. Chao Shen; I will graduate at the end of June 2024. I am also interning (now remotely) at UW NLP, working with Dr. Tianxing He and Prof. Yulia Tsvetkov, and I stay in close contact with Prof. Minnan Luo and the LUD Lab. In 2023, I was a visiting student at UC Berkeley, working with Dr. Kevin Yang and Prof. Dan Klein within Berkeley NLP.


news

May 18, 2024: 🆕 ACL 2024 accepted three of our papers (two main, one findings)! Please feel free to check them out! We also won a competition at the SDP workshop. See you in Bangkok this summer!
Apr. 16, 2024: 💕 This marks the end of my application season. My sincerest thanks to all who helped and supported me -- family, friends, advisors, mentors, faculty (especially those I applied to), and peers. I'm super excited for my new journey!
Mar. 24, 2024: SemStamp, the semantic watermark, has been accepted at NAACL 2024! Fantastic work by Abe, who is applying for Ph.D. positions for Fall 2025. Please consider him!
Dec. 11, 2023: Happy to share that AAAI 2024 accepted our paper DP2O on prompt optimization!
Oct. 10, 2023: Two papers accepted at EMNLP 2023! All applause and thanks to my co-authors! I'll be in Singapore in December!
Sep. 19, 2023: My academic webpage is born today! Working towards the application season!


publications

  • Stumbling Blocks: Stress Testing the Robustness of Machine-Generated Text Detectors Under Attacks
    Yichen Wang, Shangbin Feng, Abe Bohan Hou, Xiao Pu, Chao Shen, Xiaoming Liu, Yulia Tsvetkov, and Tianxing He
    🆕 ACL 2024
We comprehensively study the robustness of popular machine-generated text detectors under attacks from diverse categories: editing, paraphrasing, prompting, and co-generating. Our experiments reveal that all detectors exhibit distinct loopholes. We further investigate the reasons behind these defects and propose initial out-of-the-box patches.
    Citation // Code // Dataset
  • k-SemStamp: A Clustering-Based Semantic Watermark for Detection of Machine-Generated Text
    Abe Bohan Hou, Jingyu Zhang, Yichen Wang, Daniel Khashabi, and Tianxing He
    🆕 ACL 2024 Findings
We propose k-SemStamp, a simple yet effective enhancement of SemStamp that uses k-means clustering as an alternative to LSH, partitioning the embedding space with awareness of its inherent semantic structure.
    Citation
  • Does DetectGPT Fully Utilize Perturbation? Bridging Selective Perturbation to Fine-tuned Contrastive Learning Detector would be Better
    Shengchao Liu, Xiaoming Liu, Yichen Wang, Zehua Cheng, Chengzhengxu Li, Zhaohan Zhang, Yu Lan, and Chao Shen
    🆕 ACL 2024
We propose Pecola, a novel fine-tuned machine-generated text detector that bridges metric-based and fine-tuned methods through contrastive learning on selective perturbation, going beyond DetectGPT.
    Citation
  • SemStamp: A Semantic Watermark with Paraphrastic Robustness for Text Generation
    Abe Bohan Hou, Jingyu Zhang, Tianxing He, Yichen Wang, Yung-Sung Chuang, Hongwei Wang, Lingfeng Shen, Benjamin Van Durme, Daniel Khashabi, and Yulia Tsvetkov
    NAACL 2024
    Existing watermarking algorithms are vulnerable to paraphrase attacks because of their token-level design. To address this issue, we propose SemStamp, a robust sentence-level semantic watermarking algorithm based on locality-sensitive hashing (LSH), which partitions the semantic space of sentences.
    Citation
  • Dialogue for Prompting: a Policy-Gradient-Based Discrete Prompt Optimization for Few-shot Learning
    Chengzhengxu Li, Xiaoming Liu, Yichen Wang, Duyi Li, Yu Lan, and Chao Shen
    AAAI 2024
We propose DP2O, a dialogue-comprised, policy-gradient-based discrete prompt optimization method that combines dialogue prompt alignment with reinforcement learning to generate prompt demonstrations efficiently and effectively.
    Citation
  • Improving Pacing in Long-Form Story Planning
    Yichen Wang, Kevin Yang, Xiaoming Liu, and Dan Klein
    EMNLP 2023 Findings
    Existing LLM-based systems for writing long-form stories or story outlines frequently suffer from unnatural pacing, resulting in a jarring experience for the reader. We propose a Concrete Outline Control (CONCOCT) system to improve pacing when automatically generating story outlines. Compared to a baseline hierarchical outline generator, humans judge CONCOCT’s pacing to be more consistent over 57% of the time across multiple outline lengths, and the gains also translate to downstream stories.
    Citation // Code // Dataset // Poster
  • CoCo: Coherence-Enhanced Machine-Generated Text Detection Under Data Limitation With Contrastive Learning
Xiaoming Liu=, Zhaohan Zhang=, Yichen Wang=, Hang Pu, Yu Lan, and Chao Shen (=: equal contribution)
    EMNLP 2023
We present CoCo, a coherence-based contrastive learning model to detect possible machine-generated texts (MGTs) in low-resource scenarios. We encode coherence information into the text representation in the form of a graph and employ an improved contrastive learning framework. Our approach outperforms state-of-the-art methods by at least 1.23%. We also find, surprisingly, that in our experiments MGTs from up-to-date language models can be easier to detect than those from earlier models, and we propose some preliminary explanations.
    Citation // Code // Dataset // Poster

competition