Llama.cpp Models

LLMs in GGUF format for inference via the llama.cpp project.

MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF • Text Generation • 141B • Updated Apr 18, 2024 • 1.23k • 34
QuantFactory/Meta-Llama-3-8B-Instruct-GGUF • Text Generation • 8B • Updated Jan 23 • 19.9k • 324
TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF • 47B • Updated Dec 14, 2023 • 41.2k • 654