Runtime error

Exit code: 1. Reason:

…█████████| 470/470 [00:00<00:00, 3.78MB/s]
t3_turbo_v1.safetensors: 100%|██████████| 1.92G/1.92G [00:03<00:00, 584MB/s]
tokenizer_config.json: 100%|██████████| 3.88k/3.88k [00:00<00:00, 26.3MB/s]
ve.safetensors: 100%|██████████| 5.70M/5.70M [00:00<00:00, 18.1MB/s]
vocab.json: 100%|██████████| 999k/999k [00:00<00:00, 126MB/s]

/usr/local/lib/python3.10/site-packages/diffusers/models/lora.py:393: FutureWarning: `LoRACompatibleLinear` is deprecated and will be removed in version 1.0.0. Use of `LoRACompatibleLinear` is deprecated. Please switch to PEFT backend by installing PEFT: `pip install peft`.
  deprecate("LoRACompatibleLinear", "1.0.0", deprecation_message)

loaded PerthNet (Implicit) at step 250,000
adapter_model.safetensors: 100%|██████████| 18.4M/18.4M [00:00<00:00, 34.3MB/s]
Detected LoRA adapter – merging weights
🔧 Merging Hindi LoRA into T3 weights...
Error loading model: 'T3' object has no attribute 'base_model'

Traceback (most recent call last):
  File "/app/app.py", line 155, in <module>
    get_or_load_model()
  File "/app/app.py", line 141, in get_or_load_model
    merge_lora_into_t3(MODEL.t3, state)
  File "/app/app.py", line 92, in merge_lora_into_t3
    module = getattr(module, attr)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1928, in __getattr__
    raise AttributeError(
AttributeError: 'T3' object has no attribute 'base_model'
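The traceback suggests the merge code walks the adapter's state-dict key path with `getattr`, but PEFT saves adapter keys under a `base_model.model.` prefix that does not exist on the bare model, so the walk dies at `base_model`. Below is a minimal sketch of a fix, assuming the adapter was saved with PEFT's default key naming (`...lora_A.weight` / `...lora_B.weight`); the `T3` stand-in class, the key names, and the `scale` parameter are assumptions for illustration, not the app's actual code.

```python
import torch
import torch.nn as nn


class T3(nn.Module):
    """Hypothetical stand-in for the T3 model in the log."""

    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4, 4, bias=False)


def merge_lora_into_t3(model: nn.Module, state: dict, scale: float = 1.0) -> None:
    """Merge LoRA A/B pairs from a PEFT-style state dict into the base
    weights in place.

    PEFT prefixes adapter keys with `base_model.model.`; walking that raw
    key path on the bare model raises the AttributeError seen in the
    traceback, so the prefix is stripped before resolving the module.
    """
    for key, lora_a in state.items():
        if not key.endswith("lora_A.weight"):
            continue
        lora_b = state[key.replace("lora_A", "lora_B")]
        # e.g. "base_model.model.proj.lora_A.weight" -> "proj"
        path = key.replace("base_model.model.", "").rsplit(".lora_A", 1)[0]
        module = model
        for attr in path.split("."):
            module = getattr(module, attr)
        # Standard LoRA merge: W' = W + scale * (B @ A)
        module.weight.data += scale * (lora_b @ lora_a)
```

A usage sketch: build the model, load `adapter_model.safetensors` into `state`, then call `merge_lora_into_t3(model, state)` once before inference; after merging, the adapter tensors can be discarded.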
