Qwen's Core Team Exodus — Cracks in China's Top Open-Source AI Squad

AI News
Qwen · Alibaba · Open-Source AI · Junyang Lin · AI Team Departure · China AI

In the early hours of March 3, 2026, a short tweet appeared in AI community feeds. It was posted by Junyang Lin, the tech lead of Alibaba’s Qwen team.

“me stepping down. bye my beloved qwen.”1

Fewer than ten words — but the impact was enormous. Within minutes, hundreds of retweets rolled in, followed by thousands of likes. On r/LocalLLaMA, related posts shot to the top with hundreds of upvotes.2 The public face of one of China’s most successful open-source AI projects was walking away.

Who Left

It wasn’t just Junyang Lin. On the same day, Qwen team researcher Kaixin Li (@kxli_2000) posted a farewell on X:

“Signing off from @Alibaba_Qwen. Grateful for the chance to work with such brilliant minds. Proud of our impact. Onwards and upwards!”1

Shortly after, another researcher, Binyuan Hui, updated his X profile to read “former MTS at Qwen.” In a single day, the team lead and at least two core members had departed.

Whether the departures were voluntary or forced was never officially confirmed. But colleague Chen Chang, commenting on Lin’s exit, wrote:

“I’m truly heartbroken. I know leaving wasn’t your choice. Just last night, we were side by side launching the Qwen3.5 small model. I honestly can’t imagine Qwen without you.”1

That one line — “I know leaving wasn’t your choice” — spoke volumes. Alibaba has offered no official comment.

Even more striking was another tweet Kaixin Li posted shortly after his departure:

“Qwen could have had a Singapore base, all thanks to Junyang. But now that he’s gone, there’s no reason left to stay here.”3

This suggested that plans Lin had championed — to establish a Singapore hub as a way to operate outside China’s regulatory constraints and compute limitations — had effectively died with his departure. The community read this not as a routine personnel change, but as a signal that the team’s entire direction had shifted.

The Man Who Built Qwen

Understanding who Junyang Lin is puts this event in proper perspective.

He joined Alibaba in 2019 as a senior algorithm engineer, working on NLP and multimodal research. He went on to become a key contributor to M6, Alibaba’s large-scale MoE model; OFA (Unifying Architectures, Tasks, and Modalities), a multimodal pretraining paper presented at ICML 2022; and Chinese-CLIP, which accumulated over 2,000 GitHub stars. From 2023, he served as the official tech lead of the Qwen team.1

His Google Scholar citation count exceeds 42,000. The Qwen3 technical report alone accounts for roughly 9,000 of those — an unusually high number for a model technical report.

But his contribution to the Qwen project went beyond technical achievements. He was Qwen’s public face. Through his X account (@JustinLin610), he personally announced model releases, shared benchmark results, and responded to questions from developers around the world — building a level of international trust and community that Chinese AI projects rarely achieve. It was one of the rare cases where a Chinese AI team had a recognizable, human face in the English-speaking open-source ecosystem.

Leaving at the Peak

The timing made it all the more remarkable. Just one day before Lin posted his farewell tweet, the Qwen team had released the Qwen3.5 Small series — a set of lightweight on-device models in four sizes: 0.8B, 2B, 4B, and 9B.4 Elon Musk commented on X with “Impressive intelligence density,” and Lin himself publicly replied “thx elon!” The team was firing on all cylinders.

Two weeks earlier, on February 16, 2026, Qwen3.5’s flagship model — 397B-A17B — had launched. It featured a hybrid MoE architecture with 397 billion total parameters, activating only 17 billion per token.5 It was released through NVIDIA NIM and immediately listed on Hugging Face’s Qwen organization page. Together with the Small series, the Qwen3.5 lineup now covered the full spectrum from 0.8B to 397B.
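For readers unfamiliar with MoE naming like “397B-A17B,” a minimal sketch may help: a router scores all experts for each token and only the top-k run, so most parameters sit idle on any given forward pass. The expert count, expert size, and shared-parameter figures below are purely hypothetical numbers chosen to land on the 397B/17B split — Qwen3.5’s actual configuration is not public in this article.

```python
import numpy as np

# Hypothetical MoE configuration (NOT Qwen3.5's real layout) chosen so that
# total/active parameters match the published 397B / 17B figures.
NUM_EXPERTS = 98        # experts in the model (assumption)
TOP_K = 3               # experts activated per token (assumption)
EXPERT_PARAMS = 4e9     # parameters per expert (assumption)
SHARED_PARAMS = 5e9     # always-active parameters: attention, embeddings (assumption)

def route(router_logits: np.ndarray, k: int = TOP_K) -> np.ndarray:
    """Return the indices of the k highest-scoring experts for one token."""
    return np.argsort(router_logits)[-k:]

rng = np.random.default_rng(0)
logits = rng.normal(size=NUM_EXPERTS)   # router scores for a single token
experts = route(logits)

total_params = SHARED_PARAMS + NUM_EXPERTS * EXPERT_PARAMS
active_params = SHARED_PARAMS + TOP_K * EXPERT_PARAMS
print(f"experts chosen for this token: {sorted(experts.tolist())}")
print(f"total params:  {total_params / 1e9:.0f}B")
print(f"active params: {active_params / 1e9:.0f}B "
      f"({active_params / total_params:.1%} of total)")
```

Under these assumed numbers, only about 4% of the model’s weights participate in each token, which is why a 397B-parameter model can serve at roughly the cost of a 17B dense one.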

The Qwen project’s growth trajectory had been remarkable in its own right. Alibaba first released the model in beta under the name Tongyi Qianwen in April 2023, with public availability following regulatory approval in September of the same year. From there, the project expanded rapidly into language models, multimodal models (Qwen-VL), audio models, math-specialized models, coding models, and the reasoning-focused QwQ series. By the time Qwen3 launched in April 2025, cumulative downloads on Hugging Face had surpassed 600 million, and over 170,000 derivative models had been built on top of Qwen — more than Meta’s Llama.1

Fortune recognized the achievement by including Alibaba on its 2025 Change the World list.

In Lin’s Own Words

Junyang Lin was more than a researcher shipping models. He was one of the rare voices willing to publicly acknowledge the structural limits of China’s AI ecosystem.

At an AI summit held at Tsinghua University in January 2026 — sharing the stage with representatives from Zhipu AI, Moonshot AI, and Tencent — he openly acknowledged that U.S. computing infrastructure was ahead of China’s by one to two orders of magnitude. He noted that while American AI labs could pour resources into next-generation research, the Alibaba team had to spend a substantial portion of their available compute just keeping up with model release schedules. That said, he emphasized that these constraints had spurred creative solutions, such as algorithm-hardware co-design.1

In 2025, he also delivered a keynote at ICLR, presenting the technical foundations of Qwen2.5 and its specialized variants directly to the global research community.

The Community Reacts

On r/LocalLLaMA, the news dominated the front page for a day, accumulating hundreds of upvotes across multiple threads.2 Within the open-source AI community, Qwen was never just another model series. High-performance models you could run locally, transparently published technical reports, and Lin’s habit of personally engaging with developers had combined to build a genuine trust brand.

The comments ranged widely. Pun-laced tributes like “Qwent out on top” and “He qwont be forgotten,” anxious speculation like “It’s the end for everyone else too for what the financial guys are doing in the next 6 months,” and heartfelt well-wishes like “Good Luck and Fortune Junyang, please let us know where you land.” Many commenters zeroed in on the collapsed Singapore hub discussions, asking whether this signaled not just a personal career change, but a fundamental shift in the team’s direction.

Alibaba has said nothing about who will take over Lin’s role, or how his style of open, direct community engagement will be continued.

The Pipeline Remains — But

Official statement or not, one thing is certain: the model release pipeline hasn’t stopped. The Qwen3.5 Small series launched on schedule, and 397B-A17B is already available through NVIDIA NIM and Hugging Face. Models released into the open-source ecosystem will continue to be used, fine-tuned, and forked regardless of who’s on the team.

But that’s not what the community is worried about. Shipping models and keeping a team intact are two different things. When core researchers exit all at once — at the height of the project’s success, with no explanation given — it reads as something other than ordinary turnover.

Neither Lin nor his departing colleagues have announced where they’re headed. Whether they’re founding an AI lab, moving to another company, or independently pursuing the Singapore hub Lin had originally championed — no one knows yet.

Qwen’s code lives on at Hugging Face. But the person who explained it to the world is gone.


Footnotes

  1. OfficeChai, “Alibaba Qwen’s Tech Lead Junyang Lin, 2 Other Researchers Step Down”, 2026-03-03, https://officechai.com/ai/alibaba-qwens-tech-lead-junyang-lin-steps-down/

  2. Reddit r/LocalLLaMA, “Junyang Lin has left Qwen :(”, 2026-03-03, https://www.reddit.com/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/

  3. Kaixin Li (@kxli_2000) X post, 2026-03-03, https://x.com/kxli_2000/status/2028885313247162750

  4. OfficeChai, “Alibaba Releases Qwen 3.5 Small Model Series, Achieves GPT-OSS-Level Performance With A Fraction Of The Parameters”, 2026-03-02, https://officechai.com/ai/alibaba-qwen-3-5-0-8b-2b-4b-9b-benchmarks/

  5. NVIDIA NIM, “qwen3.5-397b-a17b Model Card”, 2026-02-16, https://build.nvidia.com/qwen/qwen3.5-397b-a17b/modelcard
