---
library_name: diffusers
tags:
- pruna_pro-ai
- pruna-ai
- safetensors
---

# Model Card for pruna-test/test-save-tiny-stable-diffusion-pipe-smashed-pro

This model was created using the [pruna](https://github.com/PrunaAI/pruna) library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.

## Usage

First, install the `pruna_pro` package:

```bash
pip install pruna_pro
```

You can [use the diffusers library to load the model](https://huggingface.co/pruna-test/test-save-tiny-stable-diffusion-pipe-smashed-pro?library=diffusers), but this might not include all optimizations by default.

To ensure that all optimizations are applied, load the model with the `pruna_pro` library:

```python
from pruna_pro import PrunaProModel

loaded_model = PrunaProModel.from_pretrained(
    "pruna-test/test-save-tiny-stable-diffusion-pipe-smashed-pro"
)
# we can then run inference using the methods supported by the base model
```

For inference, you can use the inference methods of the original model, as shown in [the original model card](https://huggingface.co/hf-internal-testing/tiny-stable-diffusion-pipe?library=diffusers).
Alternatively, visit [the Pruna documentation](https://docs.pruna.ai/en/stable/) for more information.

## Smash Configuration

The compression configuration of the model is stored in the `smash_config.json` file, which describes the optimization methods that were applied to the model.

```json
{
  "adaptive": false,
  "auto": false,
  "awq": false,
  "bottleneck": false,
  "c_generate": false,
  "c_translate": false,
  "c_whisper": false,
  "deepcache": false,
  "diffusers_higgs": false,
  "diffusers_int8": false,
  "fastercache": false,
  "flash_attn3": false,
  "flux_caching": false,
  "fora": false,
  "fp4": false,
  "fp8": false,
  "gptq": false,
  "half": false,
  "higgs": false,
  "hqq": false,
  "hqq_diffusers": false,
  "hyper": false,
  "ifw": false,
  "img2img_denoise": false,
  "ipex_llm": false,
  "llm_int8": false,
  "pab": false,
  "padding_pruning": false,
  "periodic": false,
  "prores": false,
  "qkv_diffusers": false,
  "quanto": false,
  "realesrgan_upscale": false,
  "reduce_noe": false,
  "ring_attn": false,
  "sage_attn": false,
  "stable_fast": false,
  "taylor": false,
  "taylor_auto": false,
  "text_to_image_distillation_inplace_perp": false,
  "text_to_image_distillation_lora": false,
  "text_to_image_distillation_perp": false,
  "text_to_image_inplace_perp": false,
  "text_to_image_lora": false,
  "text_to_image_perp": false,
  "text_to_text_inplace_perp": false,
  "text_to_text_lora": false,
  "text_to_text_perp": false,
  "torch_compile": false,
  "torch_dynamic": false,
  "torch_structured": false,
  "torch_unstructured": false,
  "torchao": false,
  "torchao_autoquant": false,
  "whisper_s2t": false,
  "x_fast": false,
  "zipar": false,
  "batch_size": 1,
  "device": "cpu",
  "device_map": null,
  "save_fns": [],
  "save_artifacts_fns": [],
  "load_fns": [
    "diffusers"
  ],
  "load_artifacts_fns": [],
  "reapply_after_load": {}
}
```
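Each optimization method in this file is a boolean flag, while the remaining entries (`batch_size`, `device`, `load_fns`, and so on) are runtime metadata. As a minimal sketch using only the standard library (and an illustrative subset of the flags above, rather than the full file), you can parse a `smash_config.json` to see which optimizations were applied:

```python
import json

# Illustrative subset of smash_config.json; in practice you would use
# json.load(open("smash_config.json")).
config_text = """
{
  "deepcache": false,
  "torch_compile": false,
  "batch_size": 1,
  "device": "cpu",
  "load_fns": ["diffusers"]
}
"""

config = json.loads(config_text)

# Optimization methods are the boolean entries; `is` avoids matching
# integers like batch_size, since 1 == True but 1 is not True.
applied = [k for k, v in config.items() if v is True]
available = [k for k, v in config.items() if v is False]

print("applied optimizations:", applied or "none")
print("available but unused:", available)
```

For this model, every flag is `false`, so the config mainly records that the model is loaded through the `diffusers` loader on CPU.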

## 🌍 Join the Pruna AI community!

[Twitter](https://twitter.com/PrunaAI)
[GitHub](https://github.com/PrunaAI)
[LinkedIn](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[Discord](https://discord.gg/JFQmtFKCjd)
[Reddit](https://www.reddit.com/r/PrunaAI/)