Yong Zheng-Xin

Computer Science Ph.D. student @ Brown University
Research Scientist Intern (now Collaborator) @ Meta AI (FAIR), Collaborator @ Cohere For AI


I am an incoming fourth-year Ph.D. student in Computer Science at Brown University, advised by Prof. Stephen Bach. I am also fortunate to have done research at Meta (FAIR and GenAI Safety Alignment), Cohere For AI, and BigScience. My broad research vision is to develop LLMs that are safe and helpful for all users around the world.

I began working on safety alignment after discovering that low-resource languages can jailbreak GPT-4 (⭑Best Paper Award, NeurIPS 2023 Socially Responsible Language Modeling Workshop). This work pioneered multilingual red-teaming and was highlighted in the International Scientific Report on the Safety of Advanced AI 2024.

My other safety research work:

  • Explaining successes and failures in cross-lingual safety generalization, such as zero-shot toxicity reduction across languages (EMNLP 2024 Findings) and finetuning attacks on multilingual LLMs (preprint).
  • Performing safety red-teaming for frontier NLP/Speech models: Aya (⭑Best Paper Award, ACL 2024), MMS (to appear).

⭐️ I also work on helping LLMs overcome language barriers and support underrepresented languages. I’ve worked on adapting LLMs to low-resource languages (ACL 2023), generating synthetic data for extremely low-resource languages (EMNLP 2024 Findings), and developing massively multilingual models at frontier AI groups such as Meta AI FAIR (MMS model), Cohere For AI (Aya model), and BigScience (T0, BLOOM, mT0/BLOOMZ, BLOOM+1).

🇲🇾 As a Malaysian, I also contribute to NLP for Southeast Asian (SEA) languages. I’ve hosted *ACL tutorials, helped curate the SEACrowd data hub (EMNLP 2024), and studied how well LLMs can handle SEA linguistic phenomena, such as code-switching (EMNLP 2023 CALCS Workshop), and understand cultures in the SEA region (NeurIPS 2024).

Other Misc Stuff:

  • If you want to chat or collaborate on any of the research directions above (or just talk about graduate school), feel free to email me at contact [dot] yong @ brown [dot] edu.
  • My favorite hobby is dancing, especially salsa and bachata. I also dance a bit of Lindy Hop, Argentine Tango, and K-pop.
    I usually check out the dance scenes in the cities I visit for conferences; if you also enjoy dancing, hit me up and we can check them out together.
  • I went to Minerva University during undergrad so I had the opportunity to travel and live in six different cities around the world: 🇺🇸 San Francisco, 🇰🇷 Seoul, 🇮🇳 Hyderabad, 🇩🇪 Berlin, 🇦🇷 Buenos Aires and 🇬🇧 London.

selected publications (see all)

  1. Towards Understanding the Fragility of Multilingual LLMs against Fine-Tuning Attacks
    Samuele Poppi, Zheng-Xin Yong, Yifei He, and 4 more authors
    arXiv preprint arXiv:2410.18210, 2024
  2. Preference Tuning for Toxicity Mitigation Generalizes Across Languages
    Xiaochen Li*, Zheng-Xin Yong*, and Stephen H. Bach
    EMNLP Findings, 2024
  3. Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model
    Ahmet Üstün*, Viraat Aryabumi*, Zheng-Xin Yong*, and 14 more authors
    ACL, 2024 (Best Paper Award)
  4. Low-Resource Languages Jailbreak GPT-4
    Zheng-Xin Yong, Cristina Menghini, and Stephen Bach
    NeurIPS Workshop: Socially Responsible Language Modelling Research (SoLaR), 2023 (Best Paper Award)

news

09 / 2024 4 papers accepted! LexC-Gen and explanations of cross-lingual LLM toxicity reduction are accepted to Findings of EMNLP 2024. SEACrowd is also accepted to EMNLP 2024. CVQA is accepted to NeurIPS 2024 Datasets & Benchmarks.
08 / 2024 Aya Model paper received the ⭑Best Paper Award at ACL 2024.
07 / 2024 Gave a talk about multilingual AI safety at London Data Week (organized by The Alan Turing Institute and supported by the Mayor of London).
06 / 2024 Meta AI: Started my research scientist internship at Meta AI (FAIR), working on Massively Multilingual Speech (MMS) models. Also collaborated with GenAI Trust Team on a multilingual safety project.
05 / 2024 1 paper accepted! A Safe Harbor for AI Evaluation and Red Teaming is accepted to ICML 2024, accompanied by an open letter signed by 300+ researchers urging legal and technical protections for AI red-teaming by independent researchers.
02 / 2024 Aya model and dataset papers are released! I presented Aya multilingual safety research at Aya Grand Finale.
11 / 2023 Co-organized the tutorial of Current Status of NLP in South East Asia at AACL 2023.
10 / 2023 “Low-Resource Languages Jailbreak GPT-4” received the ⭑Best Paper Award at the NeurIPS 2023 Socially Responsible Language Modeling (SoLaR) workshop.
09 / 2023 Cohere For AI: Joined the Responsible Deployment Team for Aya red-teaming.
05 / 2023 Interviewed by Wired on our code-switching paper and grassroots research initiatives for Southeast Asian (SEA) languages.
05 / 2023 3 papers accepted! BLOOM+1, BLOOMZ and code-switching survey are accepted to ACL 2023.
03 / 2022 2 papers accepted! T0 is accepted to ICLR 2022 (Spotlight) and its blog post is out! PromptSource is also accepted to ACL 2022 Demo track.
06 / 2021 Started my Ph.D. at Brown University.