# SOKRATES: Qwen3-8B PrOntoQA OaK-DPO Iteration 2

Second DPO iteration achieving 98.1% accuracy on PrOntoQA.

## Performance

| Stage | Accuracy |
|------------|----------|
| SFT | 93.3% |
| DPO Iter 1 | 96.8% |
| DPO Iter 2 | 98.1% |

## Usage

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Moonlight556/sokrates-qwen3-8b-prontoqa-oak-dpo-iter2",
    torch_dtype="bfloat16"
)
```

## Model tree

Finetuned from Qwen/Qwen3-8B (base model: Qwen/Qwen3-8B-Base).