fix m2.1 to m2
README.md
CHANGED

```diff
@@ -186,7 +186,7 @@ We recommend using [vLLM](https://docs.vllm.ai/en/stable/) to serve MiniMax-M2.
 
 ### KTransformers
 
-We recommend using [KTransformers](https://github.com/kvcache-ai/ktransformers) to serve MiniMax-M2.
+We recommend using [KTransformers](https://github.com/kvcache-ai/ktransformers) to serve MiniMax-M2. KTransformers can run the native weights with **≥32GB VRAM** and **≥256GB DRAM**. For installation and usage, see [KT-Kernel Deployment Guide](https://github.com/kvcache-ai/ktransformers/blob/main/kt-kernel/README.md).
 
 ### MLX
 
```