<!--Copyright 2026 The Qwen Team and The HuggingFace Inc. team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

This model was released on 2026-01-01 and added to Hugging Face Transformers on 2026-02-09.

<div style="float: right;"> <div class="flex flex-wrap space-x-1">
</div>
</div>

# Qwen3.5 Moe

Qwen3.5 MoE is a mixture-of-experts model from the Qwen Team. As the classes documented below suggest, it pairs a vision encoder with a sparse MoE text decoder, and ships heads for both text-only causal language modeling ([`Qwen3_5MoeForCausalLM`]) and multimodal conditional generation ([`Qwen3_5MoeForConditionalGeneration`]).

## Model usage

<hfoptions id="usage">
<hfoption id="AutoModel">

```py
TODO
```

</hfoption>
</hfoptions>
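The official usage snippet is still marked TODO. As a stopgap, here is a minimal, hedged sketch of the standard Transformers text-generation flow applied to this model. The checkpoint id `Qwen/Qwen3.5-MoE` is a placeholder assumption, not a confirmed Hub repository, and multimodal (image + text) inputs would go through [`Qwen3_5MoeForConditionalGeneration`] with the model's processor instead.

```python
# Hedged sketch of text-only generation with a Qwen3.5 MoE checkpoint.
# NOTE: "Qwen/Qwen3.5-MoE" is a hypothetical checkpoint id used for
# illustration; substitute the actual released repository name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3.5-MoE"  # placeholder, not a confirmed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load weights in the dtype they were saved in
    device_map="auto",    # spread the MoE layers across available devices
)

prompt = "Give me a short introduction to large language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding of up to 128 new tokens; tune generation kwargs as needed.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because MoE checkpoints are typically large, `device_map="auto"` (which requires the `accelerate` package) is used here to shard the model across available GPUs/CPU rather than loading it on a single device.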

## Qwen3_5MoeConfig

[[autodoc]] Qwen3_5MoeConfig

## Qwen3OmniMoeVisionEncoderConfig

[[autodoc]] Qwen3OmniMoeVisionEncoderConfig

## Qwen3OmniMoeTextConfig

[[autodoc]] Qwen3OmniMoeTextConfig

## Qwen3OmniMoeTalkerTextConfig

[[autodoc]] Qwen3OmniMoeTalkerTextConfig

## Qwen3OmniMoeTalkerCodePredictorConfig

[[autodoc]] Qwen3OmniMoeTalkerCodePredictorConfig

## Qwen3OmniMoeAudioEncoderConfig

[[autodoc]] Qwen3OmniMoeAudioEncoderConfig

## Qwen3_5MoeTextConfig

[[autodoc]] Qwen3_5MoeTextConfig

## Qwen3_5MoeVisionModel

[[autodoc]] Qwen3_5MoeVisionModel
    - forward

## Qwen3_5MoeTextModel

[[autodoc]] Qwen3_5MoeTextModel
    - forward

## Qwen3_5MoeModel

[[autodoc]] Qwen3_5MoeModel
    - forward

## Qwen3_5MoeForCausalLM

[[autodoc]] Qwen3_5MoeForCausalLM
    - forward

## Qwen3_5MoeForConditionalGeneration

[[autodoc]] Qwen3_5MoeForConditionalGeneration
    - forward