---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: difficulty
    dtype: string
  - name: code
    dtype: string
  - name: render_light
    dtype: image
  - name: render_dark
    dtype: image
  - name: photo
    dtype: image
  splits:
  - name: easy
    num_bytes: 3082543834
    num_examples: 700
  - name: medium
    num_bytes: 902595780
    num_examples: 200
  - name: hard
    num_bytes: 500128560
    num_examples: 100
  download_size: 4481644051
  dataset_size: 4485268174
configs:
- config_name: default
  data_files:
  - split: easy
    path: data/easy-*
  - split: medium
    path: data/medium-*
  - split: hard
    path: data/hard-*
license: mit
language:
- en
task_categories:
- image-to-text
- text-generation
tags:
- code
- ocr
- python
- leetcode
- synthetic
- computer-vision
pretty_name: "CodeOCR Dataset (Python Code Images + Ground Truth)"
size_categories:
- 1K<n<10K
---
# CodeOCR Dataset (Python Code Images + Ground Truth)
This dataset is designed for **Optical Character Recognition (OCR) of source code**.
Each example pairs **Python code (ground-truth text)** with **image renderings** of that code (light/dark themes) and a **real photo**.
## Dataset Summary
- **Language:** Python (ground-truth text), paired with images of the code
- **Splits:** `easy`, `medium`, `hard`
- **Total examples:** 1,000
- `easy`: 700
- `medium`: 200
- `hard`: 100
- **Modalities:** image + text
### What is “ground truth” here?
The `code` field is **exactly the content of `gt.py`** used to generate the synthetic renderings.
During dataset creation, code is normalized to ensure stable GT properties:
- UTF-8 encoding
- newline normalization to **LF (`\n`)**
- tabs expanded to **4 spaces**
- syntax checked with Python `compile()` (syntax/indentation correctness)
This makes the dataset suitable for training/evaluating OCR models that output **plain code text**.
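A minimal sketch of this normalization, assuming only the steps listed above (the exact script used to build the dataset is not published here; file names are illustrative):
```python
import pathlib

def normalize_source(raw: str) -> str:
    """Apply the GT normalization described above: LF newlines,
    tabs expanded to 4 spaces, and a Python syntax check."""
    text = raw.replace("\r\n", "\n").replace("\r", "\n")  # newline normalization to LF
    text = text.expandtabs(4)                             # tabs -> 4 spaces
    compile(text, "<gt>", "exec")                         # raises SyntaxError / IndentationError on bad code
    return text

# Hypothetical input/output paths for illustration
raw = pathlib.Path("solution.py").read_text(encoding="utf-8")
with open("gt.py", "w", encoding="utf-8", newline="\n") as f:
    f.write(normalize_source(raw))
```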
---
## Data Fields
Each row contains:
- `id` *(string)*: sample identifier (e.g., `easy_000123`)
- `difficulty` *(string)*: `easy` / `medium` / `hard`
- `code` *(string)*: **ground-truth Python code**
- `render_light` *(image)*: synthetic rendering (light theme)
- `render_dark` *(image)*: synthetic rendering (dark theme)
- `photo` *(image)*: real photo of the code
---
## How to Use
### Load with 🤗 Datasets
```python
from datasets import load_dataset
ds = load_dataset("maksonchek/codeocr-dataset")
print(ds)
print(ds["easy"][0].keys())
```
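To load a single split only (for example, just the subset you want to evaluate on):
```python
from datasets import load_dataset

ds_hard = load_dataset("maksonchek/codeocr-dataset", split="hard")
print(len(ds_hard))  # 100 examples
```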
### Access code and images
```python
ex = ds["easy"][0]
# Ground-truth code
print(ex["code"][:500])
# Images are stored as `datasets.Image` features.
render = ex["render_light"]
print(render)
```
If images are returned as dicts with local file paths (e.g., when the `Image` feature is loaded with `decode=False`):
```python
from PIL import Image
img = Image.open(ex["render_light"]["path"])
img.show()
```
Real photo (always present in this dataset):
```python
from PIL import Image
photo = Image.open(ex["photo"]["path"])
photo.show()
```
---
## Dataset Creation
### 1) Code selection
Python solutions were collected from an open-source repository of LeetCode solutions (MIT licensed).
### 2) Normalization to produce stable GT
The collected code is written into `gt.py` after:
- newline normalization to LF
- tab expansion to 4 spaces
- basic cleanup (no hidden control characters)
- Python syntax check via `compile()`
### 3) Synthetic rendering
Synthetic images are generated from the normalized `gt.py` in two themes (an illustrative rendering sketch appears after this list):
- light theme (`render_light`)
- dark theme (`render_dark`)
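The rendering tool is not specified in this card; purely as an illustration, comparable light/dark renders could be produced from `gt.py` with Pygments' `ImageFormatter`. The style names below are assumptions, not the dataset's actual settings:
```python
# Illustrative sketch only -- not the pipeline used to build this dataset.
# Requires: pip install pygments pillow
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import ImageFormatter

code = open("gt.py", encoding="utf-8").read()

for out_name, style in [("render_light.png", "default"), ("render_dark.png", "monokai")]:
    png_bytes = highlight(code, PythonLexer(), ImageFormatter(style=style, line_numbers=False))
    with open(out_name, "wb") as f:
        f.write(png_bytes)
```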
### 4) Real photos
Real photos are manually captured and linked **for every sample**.
---
## Statistics (high-level)
Average code length by difficulty (computed on this dataset):
- `easy`: ~27 lines, ~669 chars
- `medium`: ~36 lines, ~997 chars
- `hard`: ~55 lines, ~1767 chars
(Exact values may vary if the dataset is extended.)
---
## Intended Use
- OCR for programming code
- robust text extraction from screenshot-like renders and real photos
- benchmarking OCR pipelines for code formatting / indentation preservation (see the evaluation sketch below)
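
For benchmarking, a simple character-level similarity between OCR output and the `code` field can serve as a first metric. The sketch below uses the standard-library `difflib` as a stand-in (a dedicated edit-distance / CER library would be the more common choice); `my_ocr_model` is a placeholder, not part of this dataset:
```python
import difflib

def code_similarity(prediction: str, ground_truth: str) -> float:
    """Character-level similarity in [0, 1]; 1.0 means an exact match,
    including whitespace and indentation."""
    return difflib.SequenceMatcher(None, prediction, ground_truth).ratio()

# Hypothetical usage (`my_ocr_model` is a placeholder for your OCR pipeline):
# ex = ds["easy"][0]
# pred = my_ocr_model(ex["photo"])
# print(f"similarity: {code_similarity(pred, ex['code']):.3f}")
```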
### Out-of-Scope Use
- generating or re-distributing problem statements
- competitive programming / cheating use-cases
---
## Limitations
- Code is checked for **syntax correctness**, but not necessarily for runtime correctness.
- The synthetic renderings use a controlled style (fonts, themes, layout) and may not reflect the variability of real-world photos.
---
## License & Attribution
This dataset is released under the **MIT License**.
The included solution code is derived from **kamyu104/LeetCode-Solutions** (MIT License):
https://github.com/kamyu104/LeetCode-Solutions
If you use this dataset in academic work, please cite the dataset and credit the original solution repository.
---
## Citation
### BibTeX
```bibtex
@dataset{codeocr_leetcode_2025,
  author    = {Maksonchek},
  title     = {CodeOCR Dataset (Python Code Images + Ground Truth)},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/maksonchek/codeocr-dataset}
}
```