oql committed
Commit d8e43fb · 1 Parent(s): b8e098b

fix m2.1 to m2

Files changed (1):
  1. README.md +1 -1
README.md CHANGED

```diff
@@ -186,7 +186,7 @@ We recommend using [vLLM](https://docs.vllm.ai/en/stable/) to serve MiniMax-M2.
 
 ### KTransformers
 
-We recommend using [KTransformers](https://github.com/kvcache-ai/ktransformers) to serve MiniMax-M2.1. KTransformers provides efficient day-0 support for MiniMax-M2.1 model and can run the native weights with **≥32GB VRAM** and **≥256GB DRAM**. For installation and usage, see [KTransformers MiniMax-M2.1 Tutorial](https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/kt-kernel/MiniMax-M2.1-Tutorial.md).
+We recommend using [KTransformers](https://github.com/kvcache-ai/ktransformers) to serve MiniMax-M2. KTransformers can run the native weights with **≥32GB VRAM** and **≥256GB DRAM**. For installation and usage, see [KT-Kernel Deployment Guide](https://github.com/kvcache-ai/ktransformers/blob/main/kt-kernel/README.md).
 
 ### MLX
```