
AncientVision-3T: A Hierarchical Benchmark for Ancient Chinese Document Vision-Language Tasks

Dataset Summary

AncientVision-3T is a benchmark designed to systematically evaluate the cognitive capabilities of Vision-Language Models (VLMs) on ancient Chinese documents. Unlike general-purpose benchmarks, AncientVision-3T employs a hierarchical task design to decouple and analyze model capabilities by cognitive complexity.

Processing ancient Chinese documents presents a "dual chasm":

  1. Visual: Characterized by vertical text flow, complex layouts, and degradation noise (temporal erosion).
  2. Linguistic: Involves abstruse grammar and archaic glyphs requiring deep linguistic priors.

This dataset facilitates the analysis of internal neuronal activation patterns and task behaviors across varying levels of cognitive difficulty, ranging from pixel-level perception to semantic-level reasoning.

Supported Tasks

The dataset covers three progressive dimensions of cognitive processing; reference implementations of the three metrics are sketched after the list:

  1. Visual Symbol Perception (OCR):

    • Goal: Recognize Ancient Chinese characters from document images.
    • Metric: Normalized Edit Distance (NED).
    • Focus: Overcoming visual noise and recognizing archaic glyphs.
  2. Cross-Modal Image Classification (IC):

    • Goal: Classify images of historical artifacts or subjects.
    • Categories: Ritual vessels, musical instruments, official attire, military equipment, etc.
    • Metric: Accuracy.
    • Focus: Aligning visual features with specific cultural semantic categories.
  3. Cross-Modal Image Understanding (IU):

    • Goal: Generate detailed semantic descriptions or interpretations of the visual content.
    • Metric: ROUGE-L.
    • Focus: Deep cross-modal reasoning requiring linguistic priors and historical knowledge.
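
For concreteness, here is a minimal, self-contained sketch of the three metrics: character-level NED via Levenshtein distance, exact-match accuracy, and an LCS-based ROUGE-L F-score. These are illustrative implementations, not the paper's official evaluation scripts, which may normalize text or aggregate scores differently.

```python
# Minimal reference implementations of the three metrics.
# Illustrative only; the official evaluation may differ in details.

def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def ned(pred: str, ref: str) -> float:
    """Normalized Edit Distance for the OCR task (lower is better)."""
    if not pred and not ref:
        return 0.0
    return levenshtein(pred, ref) / max(len(pred), len(ref))

def accuracy(preds: list[str], refs: list[str]) -> float:
    """Exact-match accuracy for the classification task."""
    return sum(p == r for p, r in zip(preds, refs)) / len(refs)

def rouge_l(pred: str, ref: str, beta: float = 1.0) -> float:
    """LCS-based ROUGE-L F-score for the understanding task.

    Character-level LCS, which is a common choice for Chinese text.
    """
    m, n = len(pred), len(ref)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if pred[i] == ref[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    p, r = lcs / m, lcs / n
    return (1 + beta**2) * p * r / (r + beta**2 * p)
```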

Dataset Structure

The dataset comprises 1,500 images in total, divided into two subsets by content type:

Subset              | Count | Description                                                            | Task(s)
--------------------|-------|------------------------------------------------------------------------|-------------------------------
Textual Images      | 500   | Images featuring Traditional Chinese text.                             | OCR
Illustrative Images | 1,000 | Images depicting historical subjects with distinct cultural semantics. | Classification, Understanding
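
Assuming a standard datasets-library layout (the repository id and split names below are placeholders; substitute the actual values from this card), the data can be loaded as follows:

```python
# Minimal loading sketch using the Hugging Face `datasets` library.
# The repo id and split name are assumptions for illustration.
from datasets import load_dataset

ds = load_dataset("<org>/AncientVision-3T")  # replace with the real repo id
print(ds)  # shows the available splits and features

# Inspect one example: an image plus its task annotation.
example = ds["train"][0]  # split name may differ on the actual repo
print(example.keys())
```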

Splits

For interpretability experiments (as described in the associated paper), the dataset is typically split into training and validation sets at a 1:1 ratio.

  • Note: In the context of the AC-TCGN method, the training set is primarily used as a reference corpus for collecting neuronal statistics (activations/gradients) rather than for updating model parameters; a rough illustration follows.
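
As one way such reference statistics can be gathered (this is not the AC-TCGN implementation; the toy model and target layer are placeholders), PyTorch forward hooks can accumulate per-neuron activation means over a corpus without any parameter updates:

```python
# Illustrative sketch only: accumulating per-neuron activation means
# with PyTorch forward hooks. NOT the paper's AC-TCGN code; the model
# and hooked module are stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
stats = {"sum": torch.zeros(32), "count": 0}

def hook(module, inputs, output):
    # Accumulate the hooked layer's activations across the batch.
    stats["sum"] += output.detach().sum(dim=0)
    stats["count"] += output.shape[0]

handle = model[1].register_forward_hook(hook)  # hook the ReLU layer

with torch.no_grad():  # reference pass only: no parameter updates
    for _ in range(10):              # stand-in for the training corpus
        model(torch.randn(8, 16))    # stand-in for real batches

handle.remove()
mean_activation = stats["sum"] / stats["count"]
print(mean_activation.shape)  # per-neuron mean over the reference corpus
```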

Data Collection & Source

  • Sources: Public digital archives of ancient Chinese texts, including the Siku Quanshu (Complete Library in Four Sections) and various local gazetteers.
  • Historical Period: Spans diverse dynasties including Song, Yuan, Ming, and Qing.
  • Formats: Includes both woodblock-printed editions and handwritten manuscripts.