# Working with the Metadata
|
|
| ## Downloading all the metadata files at once |
|
|
Install the `huggingface-cli` utility (provided by the `huggingface_hub` pip package), then run the following command:
|
|
```bash
huggingface-cli download Spawning/PD12M --repo-type dataset --local-dir metadata --include "metadata/*"
```
## Metadata format
|
|
| The metadata files are in parquet format, and contain the following attributes: |
| - `id`: A unique identifier for the image. |
| - `url`: The URL of the image. |
| - `s3_key`: The S3 file key of the image. |
| - `caption`: A caption for the image. |
| - `hash`: The MD5 hash of the image file. |
| - `width`: The width of the image in pixels. |
| - `height`: The height of the image in pixels. |
| - `mime_type`: The MIME type of the image file. |
| - `license`: The URL of the license. |
| - `source`: The source organization of the image. |
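The attribute list above can be sketched as a small validation helper. The helper and the sample record below are invented for illustration; only the field names come from the list above.

```python
# Expected PD12M metadata fields, mirroring the attribute list above.
EXPECTED_COLUMNS = {
    "id", "url", "s3_key", "caption", "hash",
    "width", "height", "mime_type", "license", "source",
}

def missing_columns(record: dict) -> set:
    """Return any expected metadata fields absent from a record."""
    return EXPECTED_COLUMNS - record.keys()

# Invented sample record for illustration only.
sample = {
    "id": "0001", "url": "https://example.com/img.jpg", "s3_key": "img.jpg",
    "caption": "a sample image", "hash": "d41d8cd98f00b204e9800998ecf8427e",
    "width": 1024, "height": 768, "mime_type": "image/jpeg",
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",
    "source": "example",
}
print(missing_columns(sample))  # → set()
```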
|
|
### Open a metadata file
The files can be opened with a tool like `pandas` in Python.
| ```python |
| import pandas as pd |
| df = pd.read_parquet('pd12m.000.parquet') |
| ``` |
|
|
### Get URLs from metadata
Once you have opened a metadata file with `pandas`, you can get the image URLs with:
| ```python |
| urls = df['url'] |
| ``` |
|
|
## Download all files mentioned in metadata
|
|
To download all of the images referenced by a metadata collection, you can use `img2dataset` (adjust the options to taste):
|
|
```bash
img2dataset --url_list $file --input_format "parquet" \
  --url_col "url" --caption_col "caption" --output_format files \
  --output_folder $dir --processes_count 16 --thread_count 64 \
  --skip_reencode true --min_image_size 654 --max_aspect_ratio 1.77
```
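The size and aspect-ratio flags can also be applied as a pre-filter on the metadata itself, using the `width` and `height` columns, before handing URLs to a downloader. A sketch with invented rows; the thresholds mirror the command above:

```python
import pandas as pd

# Invented metadata rows; only width/height matter for this filter.
df = pd.DataFrame({
    "url": [f"https://example.com/{i}.jpg" for i in range(3)],
    "width": [1000, 400, 1920],
    "height": [800, 400, 1200],
})

# Mirror the img2dataset flags: minimum short side and maximum aspect ratio.
MIN_SIZE = 654
MAX_ASPECT = 1.77

short_side = df[["width", "height"]].min(axis=1)
aspect = df[["width", "height"]].max(axis=1) / short_side
kept = df[(short_side >= MIN_SIZE) & (aspect <= MAX_ASPECT)]
print(len(kept))  # → 2 (the 400x400 row is too small)
```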