mlx-community/Qwen3-Embedding-0.6B-8bit

This model, mlx-community/Qwen3-Embedding-0.6B-8bit, was converted to MLX format from Qwen/Qwen3-Embedding-0.6B.

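Below is a minimal sketch of computing sentence embeddings with this model. It assumes mlx-lm is installed (`pip install mlx-lm`) and that, as with other Qwen-family models in mlx-lm, the inner transformer is reachable as `model.model` and returns final hidden states; last-token pooling and L2 normalization follow the usage described for the upstream Qwen3-Embedding model. The example texts are illustrative only.

```python
import mlx.core as mx
from mlx_lm import load

# Load the quantized MLX weights and tokenizer from the Hub.
model, tokenizer = load("mlx-community/Qwen3-Embedding-0.6B-8bit")

def embed(text: str) -> mx.array:
    # Tokenize and run only the underlying transformer (skip the LM head).
    # `model.model` exposing hidden states is an assumption based on
    # mlx-lm's Qwen3 implementation.
    tokens = mx.array([tokenizer.encode(text)])
    hidden = model.model(tokens)              # (1, seq_len, hidden_size)
    # Last-token pooling, then L2-normalize so dot products are cosine scores.
    vec = hidden[:, -1, :]
    return vec / mx.linalg.norm(vec, axis=-1, keepdims=True)

query = embed("What is the capital of China?")
doc = embed("Beijing is the capital of China.")
print((query @ doc.T).item())                 # cosine similarity
```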