| repo_name | repo_link | category | github_about_section | homepage_link | github_topic_closest_fit | contributors_all | contributors_2025 | contributors_2024 | contributors_2023 |
|---|---|---|---|---|---|---|---|---|---|
llvm-project | https://github.com/llvm/llvm-project | compiler | The LLVM Project is a collection of modular and reusable compiler and toolchain technologies. | http://llvm.org | compiler | 6,680 | 2,378 | 2,130 | 1,920 |
vllm | https://github.com/vllm-project/vllm | inference engine | A high-throughput and memory-efficient inference and serving engine for LLMs | https://docs.vllm.ai | inference | 1,885 | 1,369 | 579 | 145 |
pytorch | https://github.com/pytorch/pytorch | machine learning framework | Tensors and Dynamic neural networks in Python with strong GPU acceleration | https://pytorch.org | machine-learning | 5,434 | 1,187 | 1,090 | 1,024 |
transformers | https://github.com/huggingface/transformers | multi-purpose library | Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training. | https://huggingface.co/transformers | machine-learning | 3,582 | 860 | 769 | 758 |
sglang | https://github.com/sgl-project/sglang | inference engine | SGLang is a fast serving framework for large language models and vision language models. | https://docs.sglang.ai | inference | 937 | 796 | 189 | 1 |
hhvm | https://github.com/facebook/hhvm | virtual machine | A virtual machine for executing programs written in Hack. | https://hhvm.com | virtual-machine | 2,624 | 692 | 648 | 604 |
llama.cpp | https://github.com/ggml-org/llama.cpp | inference engine | LLM inference in C/C++ | https://ggml.ai | inference | 1,374 | 535 | 575 | 461 |
kubernetes | https://github.com/kubernetes/kubernetes | container orchestration | Production-Grade Container Scheduling and Management | https://kubernetes.io | kubernetes | 5,041 | 509 | 498 | 565 |
tensorflow | https://github.com/tensorflow/tensorflow | machine learning framework | An Open Source Machine Learning Framework for Everyone | https://tensorflow.org | machine-learning | 4,618 | 500 | 523 | 630 |
verl | https://github.com/volcengine/verl | reinforcement learning | verl: Volcano Engine Reinforcement Learning for LLMs | https://verl.readthedocs.io | deep-reinforcement-learning | 462 | 454 | 10 | 0 |
rocm-systems | https://github.com/ROCm/rocm-systems | multi-purpose library | super repo for rocm systems projects | https://amd.com/en/products/software/rocm.html | amd | 1,032 | 440 | 323 | 204 |
ray | https://github.com/ray-project/ray | multi-purpose library | Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads. | https://ray.io | machine-learning | 1,381 | 397 | 223 | 230 |
spark | https://github.com/apache/spark | data processing | Apache Spark - A unified analytics engine for large-scale data processing | https://spark.apache.org | data-processing | 3,083 | 322 | 300 | 336 |
goose | https://github.com/block/goose | agent | an open source, extensible AI agent that goes beyond code suggestions - install, execute, edit, and test with any LLM | https://block.github.io/goose | ai-agents | 332 | 319 | 32 | 0 |
elasticsearch | https://github.com/elastic/elasticsearch | search engine | Free and Open Source, Distributed, RESTful Search Engine | https://elastic.co/products/elasticsearch | search-engine | 2,297 | 316 | 284 | 270 |
jax | https://github.com/jax-ml/jax | scientific computing | Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more | https://docs.jax.dev | scientific-computing | 997 | 312 | 280 | 202 |
modelcontextprotocol | https://github.com/modelcontextprotocol/modelcontextprotocol | mcp | Specification and documentation for the Model Context Protocol | https://modelcontextprotocol.io | mcp | 327 | 298 | 42 | 0 |
executorch | https://github.com/pytorch/executorch | model compiler | On-device AI across mobile, embedded and edge for PyTorch | https://executorch.ai | inference | 437 | 267 | 243 | 77 |
numpy | https://github.com/numpy/numpy | scientific computing | The fundamental package for scientific computing with Python. | https://numpy.org | scientific-computing | 2,172 | 235 | 233 | 252 |
triton | https://github.com/triton-lang/triton | parallel computing dsl | Development repository for the Triton language and compiler | https://triton-lang.org | parallel-programming | 522 | 233 | 206 | 159 |
modular | https://github.com/modular/modular | parallel computing | The Modular Platform (includes MAX & Mojo) | https://docs.modular.com | parallel-programming | 366 | 222 | 205 | 99 |
scipy | https://github.com/scipy/scipy | scientific computing | SciPy library main repository | https://scipy.org | scientific-computing | 1,973 | 210 | 251 | 245 |
ollama | https://github.com/ollama/ollama | inference engine | Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models. | https://ollama.com | inference | 574 | 202 | 314 | 97 |
trl | https://github.com/huggingface/trl | reinforcement learning | Train transformer language models with reinforcement learning. | http://hf.co/docs/trl | reinforcement-learning | 433 | 189 | 154 | 122 |
flashinfer | https://github.com/flashinfer-ai/flashinfer | gpu kernels | FlashInfer: Kernel Library for LLM Serving | https://flashinfer.ai | attention | 205 | 158 | 50 | 11 |
aiter | https://github.com/ROCm/aiter | gpu kernels | AI Tensor Engine for ROCm | https://rocm.blogs.amd.com/software-tools-optimization/aiter-ai-tensor-engine/README.html | null | 151 | 145 | 10 | 0 |
LMCache | https://github.com/LMCache/LMCache | inference | Supercharge Your LLM with the Fastest KV Cache Layer | https://lmcache.ai | null | 152 | 144 | 18 | 0 |
composable_kernel | https://github.com/ROCm/composable_kernel | gpu kernels | Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators | https://rocm.docs.amd.com/projects/composable_kernel | null | 190 | 140 | 58 | 33 |
Mooncake | https://github.com/kvcache-ai/Mooncake | inference | Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. | https://kvcache-ai.github.io/Mooncake | inference | 138 | 133 | 13 | 0 |
torchtitan | https://github.com/pytorch/torchtitan | training framework | A PyTorch native platform for training generative AI models | https://arxiv.org/abs/2410.06511 | null | 145 | 119 | 43 | 1 |
ao | https://github.com/pytorch/ao | quantization | PyTorch native quantization and sparsity for training and inference | https://pytorch.org/ao | quantization | 178 | 114 | 100 | 5 |
lean4 | https://github.com/leanprover/lean4 | theorem prover | Lean 4 programming language and theorem prover | https://lean-lang.org | lean | 278 | 110 | 85 | 64 |
ComfyUI | https://github.com/comfyanonymous/ComfyUI | user interface | The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface. | https://comfy.org | stable-diffusion | 278 | 108 | 119 | 94 |
unsloth | https://github.com/unslothai/unsloth | fine tuning | Fine-tuning & Reinforcement Learning for LLMs. Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM. | https://docs.unsloth.ai | fine-tuning | 127 | 102 | 27 | 3 |
burn | https://github.com/tracel-ai/burn | multi-purpose library | Burn is a next generation tensor library and Deep Learning Framework that doesn't compromise on flexibility, efficiency and portability. | https://burn.dev | null | 237 | 99 | 104 | 62 |
accelerate | https://github.com/huggingface/accelerate | training framework | A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support. | https://huggingface.co/docs/accelerate | null | 392 | 97 | 124 | 149 |
terminal-bench | https://github.com/laude-institute/terminal-bench | benchmark | A benchmark for LLMs on complicated tasks in the terminal | https://tbench.ai | benchmark | 96 | 96 | 0 | 0 |
DeepSpeed | https://github.com/deepspeedai/DeepSpeed | training framework | DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. | https://deepspeed.ai | null | 442 | 96 | 134 | 165 |
milvus | https://github.com/milvus-io/milvus | vector database | Milvus is a high-performance, cloud-native vector database built for scalable vector ANN search | https://milvus.io | vector-search | 387 | 95 | 84 | 72 |
cutlass | https://github.com/NVIDIA/cutlass | parallel computing | CUDA Templates and Python DSLs for High-Performance Linear Algebra | https://docs.nvidia.com/cutlass/index.html | parallel-programming | 238 | 94 | 64 | 66 |
tilelang | https://github.com/tile-ai/tilelang | parallel computing dsl | Domain-specific language designed to streamline the development of high-performance GPU/CPU/Accelerators kernels | https://tilelang.com | parallel-programming | 90 | 89 | 1 | 0 |
monarch | https://github.com/meta-pytorch/monarch | distributed computing | PyTorch Single Controller | https://meta-pytorch.org/monarch | null | 85 | 85 | 0 | 0 |
Liger-Kernel | https://github.com/linkedin/Liger-Kernel | kernel examples | Efficient Triton Kernels for LLM Training | https://openreview.net/pdf?id=36SjAIT42G | triton | 120 | 78 | 61 | 0 |
nixl | https://github.com/ai-dynamo/nixl | distributed computing | NVIDIA Inference Xfer Library (NIXL) | null | null | 78 | 78 | 0 | 0 |
jupyterlab | https://github.com/jupyterlab/jupyterlab | user interface | JupyterLab computational environment. | https://jupyterlab.readthedocs.io | jupyter | 698 | 77 | 85 | 100 |
hipBLASLt | https://github.com/AMD-AGI/hipBLASLt | Basic Linear Algebra Subprograms (BLAS) | hipBLASLt is a library that provides general matrix-matrix operations with a flexible API and extends functionalities beyond a traditional BLAS library | https://rocm.docs.amd.com/projects/hipBLASLt | matrix-multiplication | 111 | 69 | 70 | 35 |
peft | https://github.com/huggingface/peft | fine tuning | PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. | https://huggingface.co/docs/peft | null | 272 | 69 | 111 | 115 |
ROCm | https://github.com/ROCm/ROCm | multi-purpose library | AMD ROCm Software - GitHub Home | https://rocm.docs.amd.com | null | 166 | 67 | 61 | 44 |
mcp-agent | https://github.com/lastmile-ai/mcp-agent | mcp | Build effective agents using Model Context Protocol and simple workflow patterns | null | mcp | 63 | 63 | 1 | 0 |
rdma-core | https://github.com/linux-rdma/rdma-core | systems level code | RDMA core userspace libraries and daemons | null | null | 437 | 58 | 61 | 66 |
onnx | https://github.com/onnx/onnx | machine learning interoperability | Open standard for machine learning interoperability | https://onnx.ai | onnx | 370 | 56 | 45 | 61 |
letta | https://github.com/letta-ai/letta | agent | Letta is the platform for building stateful agents: open AI with advanced memory that can learn and self-improve over time. | https://docs.letta.com | ai-agents | 157 | 56 | 75 | 47 |
helion | https://github.com/pytorch/helion | parallel computing dsl | A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. | https://helionlang.com | parallel-programming | 49 | 49 | 0 | 0 |
openevolve | https://github.com/codelion/openevolve | evolutionary algorithm | Open-source implementation of AlphaEvolve | null | genetic-algorithm | 46 | 46 | 0 | 0 |
lightning-thunder | https://github.com/Lightning-AI/lightning-thunder | model compiler | PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily write your own. | null | null | 76 | 44 | 47 | 29 |
truss | https://github.com/basetenlabs/truss | inference engine | The simplest way to serve AI/ML models in production | https://truss.baseten.co | inference | 72 | 44 | 30 | 21 |
ondemand | https://github.com/OSC/ondemand | hpc portal | Supercomputing. Seamlessly. Open, Interactive HPC Via the Web | https://openondemand.org | hpc | 117 | 43 | 23 | 21 |
pybind11 | https://github.com/pybind/pybind11 | middleware | Seamless operability between C++11 and Python | https://pybind11.readthedocs.io | bindings | 404 | 43 | 45 | 42 |
cuda-python | https://github.com/NVIDIA/cuda-python | middleware | CUDA Python: Performance meets Productivity | https://nvidia.github.io/cuda-python | parallel-programming | 48 | 41 | 12 | 1 |
warp | https://github.com/NVIDIA/warp | spatial computing | A Python framework for accelerated simulation, data generation and spatial computing. | https://nvidia.github.io/warp | physics-simulation | 79 | 40 | 29 | 17 |
metaflow | https://github.com/Netflix/metaflow | null | Build, Manage and Deploy AI/ML Systems | https://metaflow.org | null | 121 | 37 | 35 | 28 |
numba | https://github.com/numba/numba | compiler | NumPy aware dynamic Python compiler using LLVM | https://numba.pydata.org | null | 430 | 36 | 32 | 55 |
SWE-bench | https://github.com/SWE-bench/SWE-bench | benchmark | SWE-bench: Can Language Models Resolve Real-world Github Issues? | https://swebench.com | benchmark | 66 | 33 | 37 | 9 |
AdaptiveCpp | https://github.com/AdaptiveCpp/AdaptiveCpp | compiler | Compiler for multiple programming models (SYCL, C++ standard parallelism, HIP/CUDA) for CPUs and GPUs from all vendors: The independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime! | https://adaptivecpp.github.io | null | 93 | 32 | 32 | 24 |
Triton-distributed | https://github.com/ByteDance-Seed/Triton-distributed | distributed computing | Distributed Compiler based on Triton for Parallel Systems | https://triton-distributed.readthedocs.io | null | 30 | 30 | 0 | 0 |
ThunderKittens | https://github.com/HazyResearch/ThunderKittens | parallel computing | Tile primitives for speedy kernels | https://hazyresearch.stanford.edu/blog/2024-10-29-tk2 | parallel-programming | 34 | 29 | 13 | 0 |
dstack | https://github.com/dstackai/dstack | null | dstack is an open-source control plane for running development, training, and inference jobs on GPUs-across hyperscalers, neoclouds, or on-prem. | https://dstack.ai | orchestration | 69 | 28 | 42 | 14 |
ome | https://github.com/sgl-project/ome | null | OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) | http://docs.sglang.ai/ome | k8s | 28 | 28 | 0 | 0 |
pocl | https://github.com/pocl/pocl | parallel computing | pocl - Portable Computing Language | https://portablecl.org | parallel-programming | 166 | 26 | 27 | 21 |
server | https://github.com/triton-inference-server/server | null | The Triton Inference Server provides an optimized cloud and edge inferencing solution. | https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html | inference | 147 | 24 | 36 | 34 |
Vulkan-Hpp | https://github.com/KhronosGroup/Vulkan-Hpp | graphics api | Open-Source Vulkan C++ API | https://vulkan.org | vulkan | 102 | 21 | 15 | 15 |
ccache | https://github.com/ccache/ccache | null | ccache - a fast compiler cache | https://ccache.dev | null | 218 | 20 | 28 | 22 |
lapack | https://github.com/Reference-LAPACK/lapack | linear algebra | LAPACK is a library of Fortran subroutines for solving the most commonly occurring problems in numerical linear algebra. | https://netlib.org/lapack | linear-algebra | 178 | 20 | 24 | 42 |
Vulkan-Tools | https://github.com/KhronosGroup/Vulkan-Tools | graphics api | Vulkan Development Tools | https://vulkan.org | vulkan | 248 | 20 | 24 | 24 |
tflite-micro | https://github.com/tensorflow/tflite-micro | null | Infrastructure to enable deployment of ML models to low-power resource-constrained embedded targets (including microcontrollers and digital signal processors). | null | null | 111 | 19 | 25 | 31 |
Vulkan-Docs | https://github.com/KhronosGroup/Vulkan-Docs | graphics api | The Vulkan API Specification and related tools | https://vulkan.org | vulkan | 141 | 18 | 21 | 34 |
quack | https://github.com/Dao-AILab/quack | kernel examples | A Quirky Assortment of CuTe Kernels | null | null | 17 | 17 | 0 | 0 |
oneDPL | https://github.com/uxlfoundation/oneDPL | null | oneAPI DPC++ Library (oneDPL) | https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/dpc-library.html | null | 67 | 17 | 29 | 28 |
KernelBench | https://github.com/ScalingIntelligence/KernelBench | benchmark | KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems | https://scalingintelligence.stanford.edu/blogs/kernelbench | benchmark | 19 | 16 | 3 | 0 |
reference-kernels | https://github.com/gpu-mode/reference-kernels | kernel examples | Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! | https://gpumode.com | null | 16 | 16 | 0 | 0 |
synthetic-data-kit | https://github.com/meta-llama/synthetic-data-kit | synthetic data generation | Tool for generating high quality Synthetic datasets | https://pypi.org/project/synthetic-data-kit | synthetic-dataset-generation | 15 | 15 | 0 | 0 |
tritonparse | https://github.com/meta-pytorch/tritonparse | null | TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels | https://meta-pytorch.org/tritonparse | null | 15 | 15 | 0 | 0 |
kernels | https://github.com/huggingface/kernels | gpu kernels | Load compute kernels from the Hub | null | null | 15 | 14 | 2 | 0 |
Wan2.2 | https://github.com/Wan-Video/Wan2.2 | video generation | Wan: Open and Advanced Large-Scale Video Generative Models | https://wan.video | diffusion-models | 14 | 14 | 0 | 0 |
SYCL-Docs | https://github.com/KhronosGroup/SYCL-Docs | null | SYCL Open Source Specification | https://khronos.org/sycl | parallel-programming | 67 | 13 | 20 | 27 |
Primus-Turbo | https://github.com/AMD-AGI/Primus-Turbo | null | null | null | null | 12 | 12 | 0 | 0 |
flashinfer-bench | https://github.com/flashinfer-ai/flashinfer-bench | benchmark | Building the Virtuous Cycle for AI-driven LLM Systems | https://bench.flashinfer.ai | benchmark | 12 | 11 | 0 | 0 |
FTorch | https://github.com/Cambridge-ICCS/FTorch | wrapper | A library for directly calling PyTorch ML models from Fortran. | https://cambridge-iccs.github.io/FTorch | machine-learning | 20 | 11 | 8 | 9 |
TensorRT | https://github.com/NVIDIA/TensorRT | null | NVIDIA TensorRT is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. | https://developer.nvidia.com/tensorrt | null | 104 | 10 | 18 | 19 |
TileIR | https://github.com/microsoft/TileIR | parallel computing dsl | TileIR (tile-ir) is a concise domain-specific IR designed to streamline the development of high-performance GPU/CPU kernels (e.g., GEMM, Dequant GEMM, FlashAttention, LinearAttention). By employing a Pythonic syntax with an underlying compiler infrastructure on top of TVM, TileIR allows developers to focus on productivity without sacrificing the low-level optimizations necessary for state-of-the-art performance. | null | parallel-programming | 10 | 10 | 1 | 0 |
kernels-community | https://github.com/huggingface/kernels-community | gpu kernels | Kernel sources for https://huggingface.co/kernels-community | https://huggingface.co/kernels-community | null | 9 | 9 | 0 | 0 |
GEAK-agent | https://github.com/AMD-AGI/GEAK-agent | agent | It is an LLM-based AI agent, which can write correct and efficient gpu kernels automatically. | null | ai-agents | 9 | 9 | 0 | 0 |
neuronx-distributed-inference | https://github.com/aws-neuron/neuronx-distributed-inference | null | null | null | null | 11 | 9 | 3 | 0 |
OpenCL-SDK | https://github.com/KhronosGroup/OpenCL-SDK | null | OpenCL SDK | https://khronos.org/opencl | parallel-programming | 25 | 8 | 6 | 9 |
ZLUDA | https://github.com/vosen/ZLUDA | null | CUDA on non-NVIDIA GPUs | https://vosen.github.io/ZLUDA | parallel-programming | 15 | 8 | 4 | 0 |
intelliperf | https://github.com/AMDResearch/intelliperf | performance testing | Automated bottleneck detection and solution orchestration | https://arxiv.org/html/2508.20258v1 | profiling | 7 | 7 | 0 | 0 |
nccl | https://github.com/NVIDIA/nccl | null | Optimized primitives for collective multi-GPU communication | https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/index.html | null | 51 | 7 | 5 | 6 |
cudnn-frontend | https://github.com/NVIDIA/cudnn-frontend | parallel computing | cudnn_frontend provides a c++ wrapper for the cudnn backend API and samples on how to use it | https://developer.nvidia.com/cudnn | parallel-programming | 12 | 6 | 5 | 1 |
BitBLAS | https://github.com/microsoft/BitBLAS | Basic Linear Algebra Subprograms (BLAS) | BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. | null | matrix-multiplication | 17 | 5 | 14 | 0 |
Self-Forcing | https://github.com/guandeh17/Self-Forcing | video generation | Official codebase for "Self Forcing: Bridging Training and Inference in Autoregressive Video Diffusion" (NeurIPS 2025 Spotlight) | https://self-forcing.github.io | diffusion-models | 4 | 4 | 0 | 0 |
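Rows like the ones above are easy to query programmatically. The sketch below is a minimal, self-contained example (not part of the dataset itself): it parses a small excerpt of the table, copied verbatim from the rows above, and ranks repos by the share of all-time contributors who were active in 2025.

```python
import csv
import io

# A small pipe-delimited excerpt of the table above (three sample rows;
# column names match the table header, commas stripped from the counts).
TABLE = """\
repo_name|category|contributors_all|contributors_2025
llvm-project|compiler|6680|2378
vllm|inference engine|1885|1369
sglang|inference engine|937|796
"""

rows = list(csv.DictReader(io.StringIO(TABLE), delimiter="|"))

def activity_2025(row):
    # Fraction of all-time contributors who contributed in 2025.
    return int(row["contributors_2025"]) / int(row["contributors_all"])

ranked = sorted(rows, key=activity_2025, reverse=True)
for r in ranked:
    print(f'{r["repo_name"]}: {activity_2025(r):.2f}')
# sglang: 0.85
# vllm: 0.73
# llvm-project: 0.36
```

This ratio is one way to separate fast-growing projects (sglang, vllm) from long-lived ones with a large historical contributor base (llvm-project); the full dataset supports the same query across all rows.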