SynthFairCLIP/clip-vit-base-patch16-hybrid (zero-shot image classification)
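For context on the task above: CLIP-style zero-shot image classification scores an image embedding against a bank of text-prompt embeddings (one per candidate label) by cosine similarity, then turns the scaled similarities into class probabilities. The sketch below illustrates only that generic scoring step with placeholder random embeddings; it is not the project's debiasing method, and in practice the embeddings would come from the model's image and text encoders.

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs, temperature=100.0):
    """CLIP-style zero-shot scoring: cosine similarity between one
    L2-normalised image embedding and a bank of text-prompt embeddings,
    scaled by a temperature and softmaxed into class probabilities."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * (txt @ img)          # one similarity per label
    exp = np.exp(logits - logits.max())         # numerically stable softmax
    return exp / exp.sum()

# Placeholder embeddings; real ones come from the CLIP encoders.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
text_embs = rng.normal(size=(3, 512))  # e.g. prompts for 3 candidate labels
probs = zero_shot_scores(image_emb, text_embs)
```

The predicted label is simply `probs.argmax()`; the temperature plays the role of CLIP's learned logit scale.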
SynthFairCLIP is a research initiative focused on fair vision–language models.
We study how to reduce bias in CLIP-style models by combining:
If you use our resources, please consider citing the SynthFairCLIP project.
We acknowledge EuroHPC JU for awarding this project (ID EHPC-AI-2024A02-040) access to MareNostrum 5, hosted at BSC-CNS.