# Context Engineering V1: Sequential API Recommendation Dataset

This dataset accompanies the research paper:

> **Rethink Context Engineering Using an Attention-based Architecture**
> Yiqiao Yin (University of Chicago Booth School of Business / Columbia University)
It was generated using the open-source `context-engineer` Python package:
- GitHub: https://github.com/yiqiao-yin/context-engineer-repo
- PyPI: https://pypi.org/project/context-engineer/0.1.0/
## Dataset Summary
This dataset contains simulated sequential API usage logs modeled as Markov chains, designed for training and evaluating multi-task transformer models for sequential API recommendation. The simulation encompasses 2,000 user sessions totaling 20,000 API calls across 100 APIs organized into 10 functional categories, with 4 distinct session goal types driving workflow-specific behavioral patterns.
The dataset is split into two files:

| File | Rows | Description |
|---|---|---|
| `user_sessions.parquet` | 2,000 | Full user session sequences with goal labels |
| `training_pairs.parquet` | 18,000 | Supervised input-output pairs for model training |
## Key Statistics
| Metric | Value |
|---|---|
| Total users | 2,000 |
| Total API calls | 20,000 |
| Unique APIs | 100 (across 10 categories) |
| Avg. session length | 10 API calls |
| Session goal types | 4 |
| Training pairs generated | 18,000 |
| Max input sequence length | 6 |
| Random seed | 42 |
## Dataset Structure

### `user_sessions.parquet`

Each row represents one complete user session:
| Column | Type | Description |
|---|---|---|
| `user_id` | int | Unique user/session identifier (0–1999) |
| `session_goal_id` | int | Goal type ID (0–3) |
| `session_goal` | string | Goal name: `ml_pipeline`, `data_analysis`, `user_management`, or `quick_viz` |
| `sequence_length` | int | Number of API calls in the session |
| `api_sequence` | string (JSON list) | Ordered list of API IDs called during the session |
| `category_sequence` | string (JSON list) | Ordered list of API category names |
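As a quick sanity check, the JSON-encoded sequence columns can be decoded with the standard library. A minimal sketch, using the same `hf://` path as the Pandas example below:

```python
import json
import pandas as pd

# Load the sessions table (same path as in "Load with Pandas" below)
sessions = pd.read_parquet(
    "hf://datasets/eagle0504/context-engineering-v1/user_sessions.parquet"
)

# api_sequence and category_sequence are stored as JSON strings,
# so decode them back into Python lists before use
row = sessions.iloc[0]
api_ids = json.loads(row["api_sequence"])
categories = json.loads(row["category_sequence"])
print(row["session_goal"], len(api_ids), api_ids[:5], categories[:5])
```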
### `training_pairs.parquet`

Each row is a supervised training example with multi-task labels:
| Column | Type | Description |
|---|---|---|
| `input_sequence` | string (JSON list) | Context window of preceding API calls (up to 6) |
| `input_length` | int | Number of tokens in the input sequence |
| `target_api` | int | Ground-truth next API ID to predict |
| `target_category` | string | Category name of the target API |
| `session_goal_id` | int | Session goal label (auxiliary task) |
| `session_goal` | string | Session goal name |
| `session_end` | int | Whether this is the last action in the session (0 or 1) |
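Because `input_sequence` varies from 1 to 6 tokens, it typically needs padding before batching. A minimal sketch, where the padding ID and the left-padding convention are illustrative assumptions rather than part of the dataset:

```python
import json
import pandas as pd
import torch

pairs = pd.read_parquet(
    "hf://datasets/eagle0504/context-engineering-v1/training_pairs.parquet"
)

MAX_LEN = 6   # max input sequence length (see Key Statistics)
PAD_ID = 100  # assumption: one past the largest API ID (0-99)

def to_tensor(row: pd.Series) -> torch.Tensor:
    """Left-pad a JSON-encoded input sequence to MAX_LEN token IDs."""
    seq = json.loads(row["input_sequence"])
    return torch.tensor([PAD_ID] * (MAX_LEN - len(seq)) + seq)

# Build a small batch of inputs and next-API targets
x = torch.stack([to_tensor(r) for _, r in pairs.head(32).iterrows()])
y = torch.tensor(pairs["target_api"].head(32).to_numpy())
print(x.shape, y.shape)  # torch.Size([32, 6]) torch.Size([32])
```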
## API Categories

The 100 APIs are organized into 10 functional categories, reflecting a typical enterprise platform architecture:
| Category | API Range | Description |
|---|---|---|
| Authentication | 0–9 | Login, session management |
| User Management | 10–19 | Roles, permissions, accounts |
| Data Input | 20–29 | Data ingestion, file upload |
| Data Processing | 30–39 | Transformation, cleaning, feature engineering |
| ML Training | 40–49 | Model training, hyperparameter tuning |
| ML Prediction | 50–59 | Inference, batch prediction |
| Basic Visualization | 60–69 | Charts, basic plots |
| Advanced Visualization | 70–79 | Dashboards, interactive visualizations |
| Export/Share | 80–89 | Export, report generation |
| Administration | 90–99 | System config, monitoring |
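Because each category occupies a contiguous block of 10 IDs, the category of any API ID can be recovered with integer division. A minimal helper sketch; the snake_case names mirror the labels used in `target_category`, though the exact string for the Authentication block is an assumption:

```python
# Category names in API-ID order (each spans a block of 10 IDs);
# "authentication" is an assumed label -- the other nine appear
# verbatim in the target_category column.
CATEGORIES = [
    "authentication", "user_management", "data_input", "data_processing",
    "ml_training", "ml_prediction", "viz_basic", "viz_advanced",
    "export", "admin",
]

def api_category(api_id: int) -> str:
    """Map an API ID (0-99) to its functional category."""
    return CATEGORIES[api_id // 10]

assert api_category(42) == "ml_training"
assert api_category(85) == "export"
```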
## Session Goals
| Goal ID | Goal Name | Distribution | Workflow Adherence |
|---|---|---|---|
| 0 | ML Pipeline | 34.8% | 85% |
| 1 | Data Analysis | 26.1% | 80% |
| 2 | User Management | 24.3% | 90% |
| 3 | Quick Visualization | 14.8% | 75% |
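Workflow adherence is, presumably, the per-step probability that a simulated user follows the goal's canonical category sequence rather than transitioning elsewhere. A minimal sketch of that mechanic, with an illustrative canonical order for the ML Pipeline goal (the actual generator is `simulate_multitask_markov_data` in the package):

```python
import random

# Illustrative canonical category order for the ml_pipeline goal
ML_PIPELINE = ["data_input", "data_processing", "ml_training",
               "ml_prediction", "viz_basic", "export"]

ALL_CATEGORIES = ["authentication", "user_management", "data_input",
                  "data_processing", "ml_training", "ml_prediction",
                  "viz_basic", "viz_advanced", "export", "admin"]

def next_category(step: int, adherence: float = 0.85) -> str:
    """With probability `adherence`, follow the canonical workflow;
    otherwise jump to a random category (simplified Markov step)."""
    if step < len(ML_PIPELINE) and random.random() < adherence:
        return ML_PIPELINE[step]
    return random.choice(ALL_CATEGORIES)

random.seed(0)
print([next_category(i) for i in range(6)])
```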
## How to Use

### Load with Hugging Face `datasets`
```python
from datasets import load_dataset

# Load both splits
dataset = load_dataset("eagle0504/context-engineering-v1")

# Or load individual files
sessions = load_dataset("eagle0504/context-engineering-v1", data_files="user_sessions.parquet")
pairs = load_dataset("eagle0504/context-engineering-v1", data_files="training_pairs.parquet")
```
### Load with Pandas
```python
import pandas as pd

sessions = pd.read_parquet("hf://datasets/eagle0504/context-engineering-v1/user_sessions.parquet")
pairs = pd.read_parquet("hf://datasets/eagle0504/context-engineering-v1/training_pairs.parquet")
```
### Reproduce with the `context-engineer` Package

You can regenerate this exact dataset (or create your own variant) using the package:
```bash
pip install context-engineer
```
```python
from context_engineer import (
    simulate_multitask_markov_data,
    create_multitask_training_pairs,
    set_random_seeds,
)

# Set seed for exact reproducibility
set_random_seeds(42)

# Generate 2000 user sessions (matches this dataset)
sequences, goals = simulate_multitask_markov_data(
    num_users=2000,
    num_apis=100,
    clicks_per_user=10,
)

# Create supervised training pairs
input_seqs, target_apis, goal_labels, session_end_labels = create_multitask_training_pairs(
    sequences, goals, max_seq_len=6
)
```
### Run the Full Training Pipeline
```python
from context_engineer import run_pipeline

# Reproduce the full experiment from the paper
results = run_pipeline(seed=42)
model = results["model"]      # Trained PyTorch model
metrics = results["metrics"]  # ~79.8% top-1 accuracy, 99.97% top-5 hit rate
```
### Generate Custom Datasets via CLI
```bash
# Generate data and save to JSON
context-engineer generate --num-users 5000 --clicks 15 --seed 99 --output my_data.json

# Run the full pipeline
context-engineer run --num-users 1000 --epochs 30
```
## Benchmark Results (from the paper)
A multi-task attention-based transformer trained on this dataset achieves:
| Metric | Value |
|---|---|
| API Prediction Accuracy (Top-1) | 79.83% |
| Mean Reciprocal Rank (MRR) | 0.7983 |
| Top-5 Hit Rate | 99.97% |
| Top-10 Hit Rate | 100.00% |
| Goal Prediction Accuracy | 81.6% |
| Session End Accuracy | 99.3% |
| Improvement over Markov baseline | +432% |
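For reference, a minimal sketch of how MRR and top-k hit rate are conventionally computed from model scores (generic metric definitions, not the package's evaluation code):

```python
import torch

def mrr_and_hit_rate(logits: torch.Tensor, targets: torch.Tensor, k: int = 5):
    """logits: [batch, num_apis] scores; targets: [batch] true API IDs.
    Returns (mean reciprocal rank, top-k hit rate)."""
    order = logits.argsort(dim=-1, descending=True)      # best API first
    ranks = (order == targets.unsqueeze(-1)).float().argmax(dim=-1) + 1
    mrr = (1.0 / ranks.float()).mean().item()
    hit_k = (ranks <= k).float().mean().item()
    return mrr, hit_k

# Toy usage: random scores for 4 examples over 100 APIs
logits = torch.randn(4, 100)
targets = torch.tensor([3, 42, 85, 90])
print(mrr_and_hit_rate(logits, targets, k=5))
```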
## Citation

If you use this dataset in your research, please cite:
```bibtex
@article{yin2025rethink,
  title={Rethink Context Engineering Using an Attention-based Architecture},
  author={Yin, Yiqiao},
  year={2025}
}
```
## Disclaimer
**About the Author.** This dataset and the accompanying `context-engineer` package were created by Yiqiao Yin, who holds affiliations with the University of Chicago Booth School of Business and the Department of Statistics at Columbia University. The author brings over a decade of professional experience in the SaaS (Software-as-a-Service) and PaaS (Platform-as-a-Service) domains, spanning enterprise software development, API ecosystem design, user behavior analytics, and machine learning infrastructure. The API category taxonomy, workflow patterns, user persona definitions, and transition probability structures encoded in this simulator are informed by that cumulative domain expertise, reflecting realistic patterns observed in production enterprise environments over many years.
**Simulation, Not Real Data.** This dataset is entirely synthetic. It was generated programmatically using the open-source `context-engineer` Python package. No real user data, proprietary platform logs, personally identifiable information (PII), or third-party datasets of any kind were included in, referenced by, or used to derive this release. The Markov chain transition probabilities, user personas, and session goal distributions are designed to approximate realistic enterprise API usage patterns for research purposes, but they do not represent, reproduce, or leak actual user behavior from any specific platform or organization.
**Reproducibility.** This dataset is fully reproducible. Running the generation script with `seed=42` and the default parameters (`num_users=2000`, `num_apis=100`, `clicks_per_user=10`) will produce an identical dataset. The source code is publicly available at github.com/yiqiao-yin/context-engineer-repo.
**License.** This dataset is released under the MIT License. You are free to use, modify, and distribute it for academic and commercial purposes with attribution.