# Qwen 3 Story Point Estimator - talendesb - mesos
This model is fine-tuned on issue descriptions from the talendesb project and evaluated on the mesos project for story point estimation.
## Model Details
- Base Model: Qwen 3
- Training Project: talendesb
- Test Project: mesos
- Task: Story Point Estimation (regression)
- Architecture: PEFT (LoRA)
- Tokenizer: Qwen BPE tokenizer
- Input: Issue titles
- Output: Story point estimate (continuous value)
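
To confirm which base checkpoint and adapter type the repository ships, the adapter metadata can be inspected directly. A minimal sketch; the printed values depend on what is stored in the repository:

```python
from peft import PeftConfig

# Inspect the adapter metadata without loading the base model weights
config = PeftConfig.from_pretrained("DEVCamiloSepulveda/333-Qwen3SP-talendesb-mesos")
print(config.base_model_name_or_path)  # underlying Qwen 3 checkpoint
print(config.peft_type)                # PeftType.LORA
```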
## Usage
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftConfig, PeftModel

# Load the LoRA adapter configuration and the tokenizer
config = PeftConfig.from_pretrained("DEVCamiloSepulveda/333-Qwen3SP-talendesb-mesos")
tokenizer = AutoTokenizer.from_pretrained("DEVCamiloSepulveda/333-Qwen3SP-talendesb-mesos")

# Load the base model with a single-output regression head
base_model = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path,
    num_labels=1,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the fine-tuned LoRA weights
model = PeftModel.from_pretrained(base_model, "DEVCamiloSepulveda/333-Qwen3SP-talendesb-mesos")
model.eval()

# Tokenize with the same 20-token limit used during training
text = "Your issue description here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=20, padding="max_length")
inputs = {k: v.to(model.device) for k, v in inputs.items()}

with torch.no_grad():
    outputs = model(**inputs)
story_points = outputs.logits.item()
```
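
The same pipeline extends to several issues at once. The sketch below batches multiple inputs, assuming the saved config defines a padding token as in the single-example call above; the example titles are illustrative:

```python
# Score a batch of issue titles in one forward pass (same 20-token limit)
texts = ["Fix memory leak in executor", "Add pagination to REST API"]
batch = tokenizer(texts, return_tensors="pt", truncation=True, max_length=20, padding="max_length")
batch = {k: v.to(model.device) for k, v in batch.items()}

with torch.no_grad():
    preds = model(**batch).logits.squeeze(-1).tolist()  # one estimate per title
```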
## Training Details
- Fine-tuning method: LoRA (Low-Rank Adaptation)
- Maximum sequence length: 20 tokens
- Best epoch: 7 of 20
- Batch size: 32
- Training time: 572.997 seconds
- Mean Absolute Error (MAE): 1.635
- Median Absolute Error (MdAE): 1.025 (both error metrics computed as sketched below)
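
For reference, both error metrics follow the standard definitions. A minimal sketch with illustrative numbers, not the model's actual predictions:

```python
import numpy as np

def mae_mdae(y_true, y_pred):
    """Mean and median of the absolute errors between targets and predictions."""
    errors = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    return errors.mean(), np.median(errors)

# Illustrative values only
mae, mdae = mae_mdae([1, 3, 5, 8], [1.5, 2.0, 6.1, 7.2])
```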
## Framework versions