RoboInter-VQA: Intermediate Representation Understanding & Generation VQA Dataset for Manipulation

English | 简体中文

A Visual Question Answering (VQA) dataset for robotic manipulation, developed as part of the RoboInter project. It covers the generation and understanding of intermediate representations, as well as task planning. The dataset is built on top of the annotations from RoboInter-Data; the raw robot data come from DROID and RH20T.

Dataset Structure

RoboInter-VQA/
├── Generation/          # generation tasks (grounding, trajectory, contact, ...)
│   ├── image/           # Images (zip archives, extract before use)
│   │   ├── train/{droid,rh20t}/
│   │   └── val/
│   └── meta/            # VQA annotations in JSON
│       ├── train/{droid,rh20t}/{origin_format,llava_format,smart_resize_format}/
│       └── val/{origin_format,llava_format,smart_resize_format}/
├── Understanding/       # understanding tasks (multiple-choice/decision)
│   ├── image/
│   │   ├── train/{droid,rh20t}/
│   │   └── val/
│   └── meta/
│       ├── train/{droid,rh20t}/
│       └── val/
└── Task_planning/       # Task planning & primitive recognition
    ├── image/
    │   ├── train/manipvqa/
    │   └── val/{planning,choice,decide}/
    └── meta/
        ├── train/manipvqa/
        └── val/{planning,choice,decide}/

Quick Start

  1. Extract images: All images are stored as .zip files. Extract them in place:

    cd RoboInter-VQA/Task_planning/image/train/manipvqa
    # Reassemble the split archive parts into a single zip
    cat task_planning.zip.* > task_planning_full.zip
    # Return to the dataset root
    cd ../../../../
    # Extract every zip archive in the directory where it lives (-o overwrites without prompting)
    find . -name "*.zip" -execdir unzip -o {} \;
    
  2. Load VQA data: Refer to RoboInterVLM; a minimal loading sketch is also given below.
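
A minimal loading sketch, assuming each meta JSON file is a list of entries following the schema described under JSON Descriptions below. The meta file name and image root used here are illustrative placeholders, not fixed paths:

import json
from PIL import Image

meta_path = "Generation/meta/val/origin_format/example_traj_qa.json"  # placeholder file name
image_root = "Generation/image/val"                                   # after extracting the zip archives

with open(meta_path) as f:
    entries = json.load(f)  # assumed: a list of entries following the schema below

sample = entries[0]
question = sample["conversations"][0]["value"]  # human turn
answer = sample["conversations"][1]["value"]    # gpt turn
image = Image.open(f"{image_root}/{sample['images']}")
print(question, answer, image.size)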

Coordinate Formats (Generation only)

The Generation annotations are provided in three coordinate formats. The underlying data and images are identical; only the coordinate representation in the answers differs:

| Format | Description | Example |
|---|---|---|
| origin_format | Pixel coordinates in the original image resolution (h x w) | [[72, 102], [192, 179]] |
| llava_format | Coordinates normalized to the [0, 1] range, as used by LLaVA-based models | [[0.22, 0.57], [0.60, 0.99]] |
| smart_resize_format | Pixel coordinates in the resized image resolution (new_h x new_w), as used by Qwen-based models | [[69, 95], [184, 167]] |
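
A small sketch of how the three formats relate, using the h, w, new_h, new_w fields carried by each entry. This is an illustration, not part of the dataset tooling; in particular, the rounding convention is an assumption:

def origin_to_llava(points, w, h):
    # llava_format: coordinates normalized to the [0, 1] range
    # (the published annotations show them rounded to two decimals).
    return [[x / w, y / h] for x, y in points]

def origin_to_smart_resize(points, w, h, new_w, new_h):
    # smart_resize_format: pixel coordinates in the resized resolution;
    # truncation here is an assumption about the rounding convention.
    return [[int(x * new_w / w), int(y * new_h / h)] for x, y in points]

points = [[72, 102], [192, 179]]  # origin_format example from the table
print(origin_to_llava(points, w=320, h=180))
print(origin_to_smart_resize(points, w=320, h=180, new_w=308, new_h=168))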

JSON Descriptions

Generation (7 task types)

Each entry follows this schema:

{
  "id": "unique_sample_id",
  "task": "task_type",
  "conversations": [{"from": "human", "value": "..."}, {"from": "gpt", "value": "..."}],
  "images": "relative/path/to/image.jpg",
  "gt": "ground_truth_value",
  "h": 180, "w": 320,
  "new_h": 168, "new_w": 308
}
| JSON File | Task | Description | Output Format |
|---|---|---|---|
| *_traj_qa.json | Trajectory Prediction | Given a task description and a starting position, predict 10 future trajectory waypoints for the gripper. | {"future_traj": [[x1,y1], ...]} |
| *_traj_qa_wo_init_pos.json | Trajectory Prediction (no init pos) | Same as above, but without providing the starting position in the prompt. | {"future_traj": [[x1,y1], ...]} |
| *_gripper_det_qa.json | Gripper Detection | Detect the current bounding box of the robot gripper in the scene. | {"gripper_det_bbox": [[x1,y1],[x2,y2]]} |
| *_contact_point_qa.json | Contact Point Prediction | Predict the two contact points where the gripper fingers touch the manipulated object. | {"contact_point": [[x1,y1],[x2,y2]]} |
| *_contact_box_qa.json | Contact Box Prediction | Predict the bounding box of the gripper at the moment of contact with the object. | {"contact_bbox": [[x1,y1],[x2,y2]]} |
| *_current_box_qa.json | Current Object Box | Predict the current bounding box of the manipulated object. | {"current_bbox": [[x1,y1],[x2,y2]]} |
| *_final_box_qa.json | Final Object Box | Predict the final bounding box of the manipulated object (at the end of the action). | {"final_bbox": [[x1,y1],[x2,y2]]} |
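
A sketch of recovering the structured answer from an entry, under the assumption that the gpt turn's value serializes the output-format JSON shown in the table (the example entry is fabricated for illustration):

import json

def parse_generation_answer(entry):
    # The gpt turn carries the answer; per the output formats above it is
    # assumed to be a JSON object serialized as a string.
    return json.loads(entry["conversations"][1]["value"])

entry = {
    "id": "demo_000",
    "task": "current_box_qa",
    "conversations": [
        {"from": "human", "value": "Provide the current bounding box of the manipulated object."},
        {"from": "gpt", "value": '{"current_bbox": [[72, 102], [192, 179]]}'},
    ],
    "images": "relative/path/to/image.jpg",
    "gt": "...",
    "h": 180, "w": 320, "new_h": 168, "new_w": 308,
}
print(parse_generation_answer(entry)["current_bbox"])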

Understanding (6 task types)

Multiple-choice VQA tasks that evaluate visual understanding of intermediate representations. Each entry uses either a single image or a single image that concatenates the multiple-choice candidates.

| JSON File | Task | Description | Answer |
|---|---|---|---|
| contact_decide.json | Contact Decision | Given a scene, determine whether the gripper has reached/contacted the object. | Yes / No |
| grasppose_choice.json | Grasp Pose Choice | Select the correct grasping pose from 4 candidate images (A/B/C/D), where orange fork-like patterns represent possible gripper poses. | A/B/C/D |
| grounding_choice.json | Grounding Choice | Select which image correctly depicts the bounding box (purple box) of the manipulated object from 4 candidates. | A/B/C/D |
| traj_choice.json | Trajectory Choice | Select the correct gripper trajectory from 4 candidate images with gradient-colored paths (green = start, red = end). | A/B/C/D |
| trajlang_choice.json | Trajectory-Language Choice | Given a trajectory visualization, select the correct task description from 4 language options. | A/B/C/D |
| traj_direction_choice.json | Trajectory Direction Choice | Given colored arrows around the gripper, select which color represents the actual movement direction. | A/B/C/D |
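
A minimal scoring sketch for these tasks, assuming predictions and references are short answer strings such as "A" or "Yes"; the normalization here is an assumption, not the official evaluation protocol:

def choice_accuracy(predictions, references):
    # Accuracy over single-letter (A/B/C/D) or Yes/No answers.
    def norm(s):
        return s.strip().strip(".").upper()
    correct = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return correct / len(references)

print(choice_accuracy(["A", "no", "C."], ["A", "No", "D"]))  # 2 of 3 correct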

Task Planning (16 task types)

Multi-image (video-frame) or single-frame VQA tasks for high-level task planning, covering 16 task types: Scene understanding, Discriminative affordance negative task, Discriminative affordance positive task, Future multitask selection, Future prediction task, Future primitive selection task, Generative affordance task, Past description task, Past multitask selection, Past primitive selection, Planning remaining steps task, Planning task, Planning with context task, Success negative task, Success positive task, and Temporal understanding. Each entry uses 8 sampled frames as input (Temporal understanding and Scene understanding instead use a single image combining four frames). Four examples are listed below:

| JSON File | Task | Description | Answer |
|---|---|---|---|
| train/manipvqa/task_planning.json | Next Step Planning (train) | Given 8 video frames and a goal, predict the next sub-task to perform. | Free-form text |
| val/planning/task_planning.json | Next Step Planning (val) | Same as training, but on held-out validation data. | Free-form text |
| val/choice/task_planning.json | Primitive Selection (val) | Given 8 video frames, select which action primitive was just executed from 4 options. | A/B/C/D |
| val/decide/task_planning.json | Success Decision (val) | Given 8 video frames and a sub-task description, determine whether the sub-task was successfully completed. | Yes / No |
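
A sketch for assembling the multi-frame input of a Task Planning entry. It assumes the images field lists one relative path per sampled frame (falling back to a single path otherwise); adjust to the actual annotation layout:

import json
from PIL import Image

meta_path = "Task_planning/meta/val/planning/task_planning.json"  # path from the structure above
image_root = "Task_planning/image/val/planning"                   # after extracting the zip archives

with open(meta_path) as f:
    entries = json.load(f)

entry = entries[0]
# Assumption: multi-image entries store a list of frame paths under "images".
frame_paths = entry["images"] if isinstance(entry["images"], list) else [entry["images"]]
frames = [Image.open(f"{image_root}/{p}") for p in frame_paths]
print(len(frames), "frames loaded for entry", entry.get("id"))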

Data Statistics

Generation

| Source | traj_qa | gripper_det | contact_point | contact_box | current_box | final_box |
|---|---|---|---|---|---|---|
| DROID (train) | 31,282 | 84,777 | 78,004 | 78,004 | 149,671 | 145,996 |
| RH20T (train) | 33,803 | 120,747 | 115,266 | 115,266 | 225,055 | 224,944 |
| Val | 2,000 | 2,000 | 2,000 | 2,000 | 2,000 | 2,000 |

Understanding

| Source | contact_decide | grasppose_choice | grounding_choice | traj_choice | trajlang_choice | traj_direction |
|---|---|---|---|---|---|---|
| RH20T (train) | 15,060 | 9,835 | 8,158 | 3,610 | 3,610 | 3,729 |
| DROID (train) | 18,184 | - | 57,572 | 8,245 | 8,245 | 6,500 |
| Val | 15,514 | 2,702 | 6,108 | 787 | 1,474 | 266 |

Task Planning

| Split | Entries |
|---|---|
| Train (manipvqa) | 928,819 |
| Val - planning | 10,806 |
| Val - choice | 15,059 |
| Val - decide | 10,629 |

License

Please refer to the original dataset licenses for RoboInter, DROID and RH20T.
