DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling
Paper: arXiv 2406.11617
This is a merge of pre-trained language models created using mergekit, combining several Qwen3 14B finetunes to maximize reasoning performance.
This model was merged using the DELLA merge method, with Qwen/Qwen3-14B as the base.
The following models were included in the merge:
- ValiantLabs/Qwen3-14B-Esper3
- ValiantLabs/Qwen3-14B-Cobalt2
- DMindAI/DMind-1-mini
- soob3123/GrayLine-Qwen3-14B
The following YAML configuration was used to produce this model:
```yaml
merge_method: della
dtype: bfloat16
parameters:
  normalize: true
models:
  - model: ValiantLabs/Qwen3-14B-Esper3
    parameters:
      density: 0.25
      weight: 0.4
  - model: ValiantLabs/Qwen3-14B-Cobalt2
    parameters:
      density: 0.25
      weight: 0.25
  - model: DMindAI/DMind-1-mini
    parameters:
      density: 0.25
      weight: 0.25
  - model: soob3123/GrayLine-Qwen3-14B
    parameters:
      density: 0.25
      weight: 0.25
base_model: Qwen/Qwen3-14B
```
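Conceptually, DELLA prunes each finetune's parameter deltas via magnitude-based sampling (controlled by `density`), rescales the surviving deltas, and fuses them onto the base model with the per-model `weight`s. A minimal illustrative sketch in plain Python (the function names and the exact probability scheme are simplifications for clarity, not mergekit's actual implementation):

```python
import random

def magprune(deltas, density=0.25, seed=0):
    """Magnitude-proportional sampling of parameter deltas (simplified sketch).

    Each delta is kept with probability proportional to its magnitude,
    scaled so the mean keep probability equals `density`; survivors are
    rescaled by 1/p so each entry's expected value is preserved."""
    rng = random.Random(seed)
    mags = [abs(d) for d in deltas]
    total = sum(mags)
    if total == 0.0:
        return [0.0] * len(deltas)
    n = len(deltas)
    # keep probability ~ magnitude, clipped to 1.0
    probs = [min(1.0, density * n * m / total) for m in mags]
    return [d / p if p > 0 and rng.random() < p else 0.0
            for d, p in zip(deltas, probs)]

def della_merge(base, finetunes, weights, density=0.25):
    """Fuse pruned, weighted deltas from each finetune onto the base weights."""
    merged = list(base)
    for ft, w in zip(finetunes, weights):
        deltas = [f - b for f, b in zip(ft, base)]
        for i, d in enumerate(magprune(deltas, density=density)):
            merged[i] += w * d
    return merged
```

In this sketch the `density: 0.25` setting above would keep roughly a quarter of each model's deltas, and the `weight` values scale each model's contribution to the fused result.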