Context-Faithful Large Language Models: Bridging the Past and Future in the Dynamic World

Posted by: Cao Lingling    Date posted: 2024-07-15

Time: 9:30–10:30 AM, Wednesday, July 17, 2024

Venue: Room 106, Computer Science Building, Jiulonghu Campus; Tencent Meeting ID: 903-765-732

Speaker: Dr. Yiwei Wang, University of California, Merced

Speaker Bio: Yiwei Wang is currently a Postdoc in the UCLA NLP group. He will join the Department of Computer Science at the University of California, Merced as an Assistant Professor in Spring 2025. Previously, he was an Applied Scientist at Amazon (Seattle). He received his Ph.D. in Computer Science from the National University of Singapore, where he was advised by Prof. Bryan Hooi. His research focuses on natural language processing, with a particular interest in building trustworthy AI assistants that provide responsible services to humans in real-world applications. He is actively recruiting strong, motivated Ph.D. students and research interns; interested candidates are welcome to email him at wangyw.evan@gmail.com.

Abstract: Real-world knowledge changes rapidly every day. This raises a requirement: an ideal AI assistant should not only memorize history but also adapt to new knowledge. Are current LLMs such an ideal AI assistant? The short answer is no. Because of their fast-growing parameter counts, frequently updating advanced LLMs through retraining is becoming increasingly expensive, so it is vital to find ways to update an LLM's knowledge without retraining. A popular scheme in recent research is to treat new knowledge as the "context" of the LLM; in this setting, we need context-faithful LLMs that generate correct outputs in accordance with the context. However, current LLMs exhibit poor context faithfulness for multiple reasons, including biases, hallucinations, and privacy concerns. In this talk, I will introduce my recent research on addressing these issues to build context-faithful LLMs. First, I will present my work on knowledge editing of LLMs to enhance their faithfulness to new knowledge. Second, I will outline my future work on building context-faithful LLMs as trustworthy AI assistants that serve humans in real-world applications with fewer safety and ethical concerns. Addressing these challenges will significantly improve the reliability and applicability of AI in dynamic and complex environments.

  • Contact Information
  • Mailing address: School of Computer Science and Engineering, Southeast University Jiulonghu Campus, No. 2 Southeast University Road, Jiangning District, Nanjing
  • Postal code: 211189
  • Office location: Computer Science Building, Jiulonghu Campus, Southeast University