On June 6-7, 2025, the 7th BAAI Conference (Beijing Academy of Artificial Intelligence Conference) will be held in a combined online and offline format. This year's conference brings together four Turing Award laureates, top scholars from institutions at home and abroad, and industry leaders to chart a course for the future of AI at the intersection of debate and empirical evidence. Registration is now open.
PyTorch Day China Forum | June 7, Full Day
On June 7, 2025, PyTorch Day will land in mainland China for the first time. As one of the most important sub-forums of the 2025 BAAI Conference, PyTorch Day China is co-hosted by the PyTorch Foundation and the Beijing Academy of Artificial Intelligence (BAAI). It has already gathered developers and researchers from around the world for this technical event focused on the PyTorch ecosystem, exploring the latest advances in open-source AI and machine learning.
Forum Chairs
Yonghua Lin, Vice President and Chief Engineer, BAAI
Yonghua Lin serves as the Vice President and Chief Engineer at the Beijing Academy of Artificial Intelligence (BAAI). She oversees key research directions including general technologies for large-scale AI models, AI system research, open-source initiatives, and industrial ecosystem collaboration. Additionally, she chairs the IEEE International Standard Group for Large Model Evaluation.

Matt White, The Linux Foundation
Matt White is the Executive Director of the PyTorch Foundation and GM of AI at the Linux Foundation, as well as the Director of the Generative AI Commons under the LF AI & Data Foundation. With nearly 30 years of experience in AI research, standards, and applications across telecom, media, and gaming, Matt has specialized since 2012 in machine learning, simulations, and multi-sensory learning. He co-founded the Open Metaverse Foundation, chairs the Metaverse Standards Forum, and co-organizes both the Silicon Valley Generative AI paper reading group and the GenAI Collective.

Agenda

1. Running Large Models on Diverse AI Chips: PyTorch + Open-Source Stack (FlagOS) for Architecture-Free Deployment

2. Diving into the Hugging Face Hub: Share Your Model Weights on the #1 AI Hub, Home of 700k+ PyTorch Models
Tiezhen Wang, Hugging Face
Tiezhen Wang is an engineer at Hugging Face, specializing in LLMs, open-source AI ecosystems, and cross-cultural AI development. Prior to joining Hugging Face, Tiezhen was a core developer of TensorFlow Lite Micro, an open-source machine learning inference framework designed for embedded systems with limited resources.
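For readers who want to try this ahead of the session, here is a minimal sketch of publishing a local PyTorch checkpoint to the Hugging Face Hub with the huggingface_hub client; the repository id and local path are hypothetical placeholders, not material from the talk.

```python
# Minimal sketch: push a local PyTorch checkpoint to the Hugging Face Hub.
# Assumes `pip install huggingface_hub` and a token set up via `huggingface-cli login`.
# The repo id and folder path below are placeholders, not from the talk.
from huggingface_hub import HfApi

api = HfApi()
repo_id = "your-username/my-pytorch-model"  # hypothetical repository name

# Create the model repo if it does not exist yet.
api.create_repo(repo_id=repo_id, repo_type="model", exist_ok=True)

# Upload a local directory containing the weights (e.g. model.safetensors) and config.
api.upload_folder(
    folder_path="./checkpoints/my-model",  # hypothetical local path
    repo_id=repo_id,
    repo_type="model",
)
```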
3. verl: An Open Source Large Scale LLM RL Framework for Agentic Tasks
Yuxuan is a student at the Department of Computer Science and Technology, Tsinghua University, and a core contributor to the verl project. As a member of the Seed team at ByteDance, Yuxuan led the infrastructure for, and contributed to the algorithm of, DAPO, an open-source recipe for advanced LLM reinforcement learning at scale.

4. PyTorch in China: Community Growth, Localization, and Interaction
The speaker is currently working to give Chinese users easier access to PyTorch resources and to create a friendly experience for beginners.

5. The Development of AI Open Source and Its Influence on the AI Ecosystem
Jianzhong Li, CSDN & Boolan
Jason Li is Senior Vice President of CSDN, Chief Technical Expert of Boolan, Chairman of the Machine Learning Summit (ML-Summit), and a member of the ISO C++ Standards Committee. He has rich experience and deep research in artificial intelligence, software architecture, and product innovation. In recent years, he has focused mainly on applying AI methods centered on large language models. He proposed the "ParaShift Cube" for technological product innovation, and his related research and talks have attracted strong attention from the industry. He provides high-end product innovation and technology strategy consulting services for well-known brands, including many Fortune Global 500 companies. He has repeatedly been awarded industry honors such as Microsoft Most Valuable Professional (MVP), Microsoft Regional Director, and Tencent Most Valuable Professional (TVP). He is a popular lecturer at many tech conferences and has taught a wide range of courses, including AI and software design, reaching more than one million technical professionals. He is also a serial entrepreneur who has founded well-known technology companies such as SoftCompass, SlideIdea, and Boolan.

6. torch.accelerator: A Unified, Device-Agnostic Runtime API for Stream-Based Accelerators
The speaker is an AI framework engineer at Intel, dedicated to supporting Intel GPUs in PyTorch and advancing the generalization and improvement of PyTorch and its ecosystem. With deep experience in SYCL, XPU backend integration, and performance optimization across heterogeneous platforms, they have contributed to enhancing Intel GPU support in PyTorch and collaborated across teams to drive a better developer experience.
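As background for this talk, the sketch below illustrates the kind of device-agnostic code the torch.accelerator namespace enables in recent PyTorch releases (2.6 and later); it is an illustrative example, not the speaker's material, and availability depends on your PyTorch build.

```python
# Illustrative sketch of device-agnostic code with torch.accelerator
# (available in recent PyTorch releases, e.g. 2.6+). Not taken from the talk.
import torch

if torch.accelerator.is_available():
    # Returns the current accelerator device (cuda, xpu, mps, ...) without
    # hard-coding a specific backend in user code.
    device = torch.accelerator.current_accelerator()
    print(f"Using accelerator: {device}, device count: {torch.accelerator.device_count()}")
else:
    device = torch.device("cpu")
    print("No accelerator found, falling back to CPU")

x = torch.randn(1024, 1024, device=device)
y = x @ x

# Wait for all queued work on the current accelerator to finish.
if torch.accelerator.is_available():
    torch.accelerator.synchronize()
```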
7. vLLM: Easy, Fast, and Cheap LLM Serving for Everyone
Kaichao You, Tsinghua University
Kaichao You is a fifth-year Ph.D. student at Tsinghua University. He works on the vLLM project, a high-throughput and memory-efficient inference and serving engine for LLMs. He is also an open-source contributor to PyTorch and Triton.
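For context on the project behind this talk, here is a minimal offline-inference sketch using vLLM's Python API; the model name is a placeholder and all serving defaults are left to vLLM.

```python
# Minimal vLLM offline-inference sketch (assumes `pip install vllm` and a supported GPU).
# The model name below is a placeholder, not one endorsed by the talk.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct")  # hypothetical small model for illustration
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = [
    "Explain what PagedAttention does in one sentence.",
    "Why is batching important for LLM serving?",
]
outputs = llm.generate(prompts, params)

for out in outputs:
    # Each RequestOutput carries the prompt and one or more generated completions.
    print(out.prompt, "->", out.outputs[0].text.strip())
```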
8. A torch.fx Based Compression Toolkit Empowered by torch_musa
Fan Mo received an M.S. degree from Shanghai Jiao Tong University in 2020 and has worked as a machine learning engineer at Moore Threads. His current work focuses on AI infrastructure, and he is the principal maintainer of torch_musa, an open-source project that provides PyTorch backend support for the MUSA architecture developed by Moore Threads.
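To give a flavor of what a torch.fx based toolkit starts from, the sketch below symbolically traces a small model and walks its graph, which is the usual entry point for graph-level compression passes; it uses only stock PyTorch and does not reproduce the torch_musa-specific parts of the talk.

```python
# Sketch of the torch.fx graph capture that compression passes typically build on.
# Uses only stock PyTorch; the torch_musa-specific parts of the toolkit are not shown.
import torch
import torch.nn as nn
from torch.fx import symbolic_trace

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        self.fc = nn.Linear(16, 10)

    def forward(self, x):
        x = self.relu(self.conv(x))
        x = x.mean(dim=(2, 3))      # global average pool
        return self.fc(x)

gm = symbolic_trace(TinyNet())      # GraphModule with an editable fx Graph

# Walk the graph: a compression pass would rewrite or remove selected nodes here,
# e.g. replacing Conv2d modules with pruned or quantized equivalents.
for node in gm.graph.nodes:
    if node.op == "call_module":
        print(node.name, "->", type(gm.get_submodule(node.target)).__name__)

gm.recompile()                              # regenerate Python code after any edits
print(gm(torch.randn(1, 3, 8, 8)).shape)    # torch.Size([1, 10])
```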
9. Efficient Training of Video Generation Foundation Model at ByteDance
Xiaonan Nie, ByteDance Seed
Xiaonan Nie is a research scientist in MLSys at ByteDance, within the TopSeed Program. He received his Ph.D. from Peking University in 2024, supervised by Prof. Bin Cui. His research focuses on optimizing the training of deep learning models at large scale. He has published over 20 papers in top conferences and journals, including SIGMOD, SOSP, VLDB, and NeurIPS, and his research outcomes have been applied to production-level model training at ByteDance, Tencent, and Baichuan.

10. torch.compile Practice and Optimization in Different Scenarios
Yichen Yan is a senior software engineer at Alibaba, focusing on the optimization of runtimes (CPython, Java, Node.js) and machine learning frameworks.

11. PyTorch in Production: Boosting LLM Training and Inferencing on Ascend NPU
Jiawei Li, Huawei Technologies Co., Ltd.
Jiawei Li has more than six years of open-source experience, including OpenStack development in the openEuler community and work on the Arm ecosystem. He currently contributes to the AI ecosystem and is the author of ONNXRuntime Ascend support.

12. Galvatron: An Automatic Distributed Training System for Efficient Large-Scale Transformer Training
Fangcheng Fu, Peking University
Fangcheng Fu is a Boya Postdoctoral Researcher at the School of Computer Science, Peking University, and a recipient of the China National Postdoctoral Program for Innovative Talent. He received his Bachelor's and Ph.D. degrees in computer science from Peking University in 2018 and 2023, respectively, supervised by Prof. Bin Cui.
Xinyi Liu, Peking University
Xinyi Liu is a Ph.D. student at the School of Computer Science, Peking University, and a member of the DAIR lab led by Professor Bin Cui. His research centers on distributed deep learning systems and infrastructure for large language models (LLMs), with a current focus on optimizing parallelism in LLM training.

13. Intel's PyTorch Journey: Open Source Optimization Makes AI More Accessible
Mingfei Ma, Intel Asia-Pacific Research & Development Ltd.
Mingfei Ma is a senior deep learning software engineer at Intel and the maintainer of the CPU performance module in PyTorch. He holds a Master's degree in Control Science and Technology from Harbin Institute of Technology and has 12 years of experience in performance modeling and optimization. He has a long history of contributing to PyTorch, with a primary focus on CPU performance optimization for PyTorch and other libraries in the ecosystem.

14. FlagTree: Unified AI Compiler for Diverse AI Chips
Chunlei Men, Beijing Academy of Artificial Intelligence (Zhiyuan Institute)
Men Chunlei is an R&D Manager and Senior Engineer at the Beijing Academy of Artificial Intelligence (BAAI), responsible for research on intelligent computing power scheduling platforms and AI compilers. He has been granted 13 invention patents. He previously served as a technical supervisor and expert at several major Internet companies, working on AI R&D, including the development of foundational technologies and the implementation of applications.

15. KTransformers: Unleashing the Full Potential of CPU/GPU Hybrid Inference for MoE Models
Dr. Mingxing Zhang, Tsinghua University
Dr. Mingxing Zhang, Assistant Professor at Tsinghua University, focuses on memory systems research. He is a co-founder of the open-source projects Mooncake and KTransformers. His work has been published in over thirty papers at top international conferences and journals, including OSDI, SOSP, ASPLOS, HPCA, and EuroSys. He has received several prestigious awards, including the FAST Best Paper Award and the SIGSOFT Distinguished Paper Award, and authored the first OSDI paper from a Chinese university. He is a recipient of the ChinaSys Rising Star Award, the Outstanding Doctoral Dissertation Award, and the IEEE TCSC Outstanding Ph.D. Dissertation Award. He previously served as Chief Algorithm Expert and Director of the Innovation Research Institute at Sangfor Technologies, where his work contributed to products used by tens of thousands of clients.

16. SGLang: An Efficient Open-Source Framework for Large-Scale LLM Serving
Liangsheng Yin, Shanghai Jiao Tong University / LMSYS
Liangsheng Yin is an undergraduate student at Shanghai Jiao Tong University and one of the earliest core developers of SGLang, a popular open-source inference engine with 15K+ GitHub stars and 20K+ monthly downloads. SGLang is used by xAI (Grok 3), Microsoft Azure (DeepSeek R1), NVIDIA, AMD, LinkedIn, Meituan, and more. He will pursue a Ph.D. at the University of California, Berkeley.
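As background for this talk, one common way to use SGLang is to launch its OpenAI-compatible server and query it with a standard client, as sketched below; the launch command, port, and model name are illustrative placeholders that depend on your installation.

```python
# Illustrative SGLang usage sketch (not from the talk): SGLang can expose an
# OpenAI-compatible HTTP server, which you then query with any OpenAI-style client.
# Launch the server first (placeholder model and port):
#   python -m sglang.launch_server --model-path Qwen/Qwen2.5-1.5B-Instruct --port 30000
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:30000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-1.5B-Instruct",  # placeholder; must match the served model
    messages=[{"role": "user", "content": "In one sentence, what is RadixAttention?"}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```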
This year's conference combines offline and online participation. Registration is now open; scan the QR code to register for free. Offline seats are limited, so please complete registration as early as possible. The organizing committee will review applications in order of registration and send notification of the results before the conference. The public sessions will be live-streamed in full to registered users.
For conference cooperation, inquiries, and sponsorship, please contact: press@baai.ac.cn
Official website: https://2025.baai.ac.cn/
Related: 2025 BAAI Conference Agenda Released | Deep Reasoning Models Forum
Related: 2025 BAAI Conference Agenda Released | Young Scientist Development and Innovation Momentum
Copyright of this article belongs to the BAAI Community.
If any images included in this content involve copyright issues, please contact us promptly for removal.