```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pad on the left so generation continues directly from the prompt tokens
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # Most LLMs don't have a pad token by default
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")

model_inputs = tokenizer(
    ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt"
).to("cuda")
generated_ids = model.generate(**model_inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'1, 2, 3, 4, 5, 6,'
```
### Wrong prompt

Some models and tasks expect a certain input prompt format to work properly.
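For instruction-tuned checkpoints, the expected format is a chat template; in Transformers it is applied with `tokenizer.apply_chat_template`. As a rough illustration of what such a template produces, the helper below hand-builds the `[INST] ... [/INST]` wrapping used by Mistral-style instruct models. The helper name is hypothetical, and in practice you should rely on the tokenizer's own template rather than formatting strings yourself:

```python
def build_mistral_instruct_prompt(user_message: str) -> str:
    # Sketch of the Mistral-instruct chat format: user turns are wrapped
    # in [INST] ... [/INST]. Real code should call
    # tokenizer.apply_chat_template, which handles special tokens and
    # multi-turn conversations for you.
    return f"[INST] {user_message} [/INST]"

prompt = build_mistral_instruct_prompt("Continue this sequence: 1, 2, 3")
print(prompt)  # [INST] Continue this sequence: 1, 2, 3 [/INST]
```

Sending the bare text without this wrapping still generates output, but an instruct model will typically follow the request far less reliably.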