---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: game
      dtype: string
    - name: trial_id
      dtype: int32
    - name: episode_id
      dtype: int32
    - name: frame_idx
      dtype: int32
    - name: action
      dtype: string
    - name: action_int
      dtype: int32
    - name: score
      dtype: int32
    - name: reward
      dtype: int32
    - name: reaction_time_ms
      dtype: int32
    - name: gaze_positions
      dtype: string
    - name: image_bytes
      dtype: binary
license: mit
task_categories:
  - robotics
  - reinforcement-learning
tags:
  - atari
  - vla
  - vision-language-action
  - imitation-learning
  - human-demonstrations
size_categories:
  - 1M<n<10M
---

# TESS-Atari Stage 1 (5Hz)

Human gameplay demonstrations from Atari games, formatted for Vision-Language-Action (VLA) model training.

## Overview

| Metric | Value |
|--------|-------|
| Source | Atari-HEAD |
| Games | 11 (overlapping with the DIAMOND benchmark) |
| Samples | ~4M |
| Action Rate | 5 Hz (1 action per observation) |
| Format | Lumine-style action tokens |

## Games Included

Alien, Asterix, BankHeist, Breakout, DemonAttack, Freeway, Frostbite, Hero, MsPacman, RoadRunner, Seaquest

## Action Format

```
<|action_start|> FIRE <|action_end|>
<|action_start|> LEFT <|action_end|>
<|action_start|> RIGHTFIRE <|action_end|>
```
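
The `<|action_start|>` / `<|action_end|>` delimiters make the action name easy to recover from the `action` field or from model output. A minimal sketch (the regex and helper name are illustrative, not part of the dataset):

```python
import re

# Matches the Lumine-style delimiters used in the `action` field.
ACTION_RE = re.compile(r"<\|action_start\|>\s*(\w+)\s*<\|action_end\|>")

def parse_action(token: str) -> str:
    """Extract the bare action name (e.g. 'FIRE') from an action token."""
    match = ACTION_RE.search(token)
    if match is None:
        raise ValueError(f"no action token found in: {token!r}")
    return match.group(1)

print(parse_action("<|action_start|> RIGHTFIRE <|action_end|>"))  # RIGHTFIRE
```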

## Schema

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique sample ID: `{game}_{trial}_{frame}` |
| `game` | string | Game name (lowercase) |
| `trial_id` | int | Human player trial number |
| `episode_id` | int | Episode within trial (-1 if unknown) |
| `frame_idx` | int | Frame sequence number |
| `action` | string | Lumine-style action token |
| `action_int` | int | Raw ALE action code (0-17) |
| `score` | int | Current game score |
| `reward` | int | Immediate reward |
| `reaction_time_ms` | int | Human decision time in ms |
| `gaze_positions` | string | Eye-tracking data (x,y pairs) |
| `image_bytes` | bytes | PNG image of game frame |
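
Since `gaze_positions` is serialized as a string, the (x, y) pairs need to be parsed before use. A minimal sketch, assuming a flat comma-separated list of floats (`parse_gaze` is a hypothetical helper; inspect a real sample to confirm the exact delimiter):

```python
def parse_gaze(gaze_str: str) -> list[tuple[float, float]]:
    """Parse a gaze string into (x, y) tuples.

    Assumes a comma-separated list of floats such as
    '10.5,42.0,11.2,43.1'; verify against an actual sample, since the
    exact serialization is not documented here.
    """
    if not gaze_str:
        return []
    values = [float(v) for v in gaze_str.split(",") if v.strip()]
    return list(zip(values[::2], values[1::2]))

print(parse_gaze("10.5,42.0,11.2,43.1"))  # [(10.5, 42.0), (11.2, 43.1)]
```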

## Usage

```python
from io import BytesIO

from datasets import load_dataset
from PIL import Image

ds = load_dataset("TESS-Computer/atari-vla-stage1-5hz")

# Get a sample
sample = ds["train"][0]
print(sample["action"])  # <|action_start|> FIRE <|action_end|>

# Decode the PNG frame
img = Image.open(BytesIO(sample["image_bytes"]))
```
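
At ~4M samples, downloading the full dataset may be unnecessary for quick experiments; `datasets` can also stream it. A minimal sketch using streaming mode:

```python
from itertools import islice

from datasets import load_dataset

# Stream samples on demand instead of downloading all ~4M rows.
ds_stream = load_dataset(
    "TESS-Computer/atari-vla-stage1-5hz",
    split="train",
    streaming=True,
)

# Peek at the first few samples.
for sample in islice(ds_stream, 3):
    print(sample["id"], sample["action"])
```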

## Evaluation

Models trained on this dataset are intended to be evaluated in DIAMOND world models on the Atari 100k benchmark.

## Related

- **15Hz variant** - 3 actions per observation for faster gameplay
- **Lumine AI** - inspiration for the VLA architecture
- **DIAMOND** - world model used for evaluation

## Citation

```bibtex
@misc{atarihead2019,
  title={Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset},
  author={Zhang, Ruohan and others},
  year={2019},
  url={https://zenodo.org/records/3451402}
}
```