# Adversarial Image Auditor
This model serves as a deep learning-based image auditor for AI safety. It evaluates images and their aligned text prompts along five distinct axes:
- Adversarial Safety (Binary): Predicting whether an image is Safe or Unsafe.
- Category Classification: Placing images into `Safe`, `NSFW`, `Gore`, or `Weapons` categories.
- Artifact / Seam Quality: Assessing the quality of image manipulation to detect adversarial seams or diffusion artifacts.
- Relative Adversarial Score: Predicting a continuous metric of adversarial strength in an image.
- Prompt Faithfulness (Contrastive InfoNCE): Calculating a temperature-scaled contrastive probability of image–text faithfulness (a minimal sketch follows this list).
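The faithfulness axis reads as a standard temperature-scaled InfoNCE probability. Below is a minimal sketch, assuming L2-normalized image and text embeddings compared within a batch; the function name and `tau` value are illustrative, not taken from the released code:

```python
import torch
import torch.nn.functional as F

def faithfulness_probability(img_emb: torch.Tensor,
                             txt_emb: torch.Tensor,
                             tau: float = 0.07) -> torch.Tensor:
    """Temperature-scaled InfoNCE probability that image i matches prompt i."""
    img_emb = F.normalize(img_emb, dim=-1)  # (batch, dim)
    txt_emb = F.normalize(txt_emb, dim=-1)  # (batch, dim)
    # Pairwise cosine similarities, scaled by the temperature tau
    logits = img_emb @ txt_emb.t() / tau    # (batch, batch)
    # Softmax over all prompts in the batch; the diagonal holds the
    # contrastive probability that each image matches its own prompt
    return logits.softmax(dim=-1).diagonal()
```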
## Architecture
The auditor combines a convolutional vision backbone with text conditioning and contrastive alignment for multimodal safety.
- Vision Backbone: Pretrained DenseNet121, modified to expose its feature grid, which is pooled into a dense 2x2 map of local spatial features.
- Text Conditioning: Simple text tokenizer feeding cross-attention (Pre-LayerNorm, with `key_padding_mask` integrated).
- FiLM Modulation: Conditions the adversarial layers directly on diffusion timestep tokens and projected text features.
- Output: Decoupled safety axes producing safety classifications, a continuous InfoNCE faithfulness score, and Grad-CAM bounding-box predictions (a conditioning sketch follows this list).
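The pieces above compose roughly as shown below. This is a minimal sketch of the conditioning path, assuming torchvision's DenseNet121 as the backbone; the module names, dimensions, and residual wiring are illustrative assumptions, not the released implementation:

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121, DenseNet121_Weights

class ConditionedAuditorBlock(nn.Module):
    def __init__(self, d_model: int = 1024, n_heads: int = 8, txt_dim: int = 512):
        super().__init__()
        # Pretrained DenseNet121 trunk; its final feature map is pooled
        # down to a dense 2x2 grid of local spatial features.
        self.backbone = densenet121(weights=DenseNet121_Weights.DEFAULT).features
        self.pool = nn.AdaptiveAvgPool2d((2, 2))     # -> (B, 1024, 2, 2)
        # Pre-LayerNorm cross-attention: image tokens attend to text tokens.
        self.norm = nn.LayerNorm(d_model)
        self.txt_proj = nn.Linear(txt_dim, d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # FiLM: per-channel scale/shift from timestep + text conditioning.
        self.film = nn.Linear(d_model + txt_dim, 2 * d_model)

    def forward(self, image, txt_tokens, txt_pad_mask, t_emb):
        # image: (B, 3, H, W); txt_tokens: (B, T, txt_dim)
        # txt_pad_mask: (B, T) bool, True at padded positions; t_emb: (B, d_model)
        feats = self.pool(self.backbone(image))       # (B, 1024, 2, 2)
        tokens = feats.flatten(2).transpose(1, 2)     # (B, 4, 1024)
        # Pre-LN cross-attention with padding positions masked out.
        q = self.norm(tokens)
        kv = self.txt_proj(txt_tokens)
        attn_out, _ = self.cross_attn(q, kv, kv, key_padding_mask=txt_pad_mask)
        tokens = tokens + attn_out                    # residual connection
        # FiLM modulation from the timestep token and pooled text features.
        cond = torch.cat([t_emb, txt_tokens.mean(dim=1)], dim=-1)
        gamma, beta = self.film(cond).chunk(2, dim=-1)  # (B, d_model) each
        return gamma.unsqueeze(1) * tokens + beta.unsqueeze(1)
```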
## Usage
You can load this model along with its inference script `auditor_inference.py`:

```python
from auditor_inference import audit_image

# Run all audit axes on a single image-prompt pair
results = audit_image(
    model_path="auditor_new_best.pth",
    image_path="example.jpg",
    prompt="A cute cat",
)
print(results)
```
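The returned object should expose one entry per audit axis. A hypothetical way to read it (all key names below are assumptions; check `auditor_inference.py` for the actual schema):

```python
# Hypothetical keys mirroring the five audit axes; verify against the script.
print(results["safety"])             # "Safe" or "Unsafe"
print(results["category"])           # "Safe", "NSFW", "Gore", or "Weapons"
print(results["artifact_quality"])   # seam / diffusion-artifact assessment
print(results["adversarial_score"])  # continuous adversarial strength
print(results["faithfulness"])       # InfoNCE faithfulness probability
```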