# Codette LoRA Adapters - 9 Perspective Lenses

Nine specialized LoRA adapters for the Codette Multi-Perspective Reasoning System, fine-tuned from Llama 3.1 8B Instruct.

These adapters enable instant perspective-switching via hot-swap at inference time. Each adapter specializes in a distinct cognitive reasoning style.

## Adapters

| Adapter | Description | Training Examples | Epochs | GGUF File |
|---|---|---|---|---|
| newton | Analytical physics, systematic reasoning, empirical evidence | 3000 | 3 | newton-lora-f16.gguf |
| davinci | Creative invention, cross-domain connections, visual thinking | 2500 | 3 | davinci-lora-f16.gguf |
| empathy | Emotional intelligence, human experience, compassion | 2500 | 3 | empathy-lora-f16.gguf |
| philosophy | Conceptual analysis, ethical reasoning, fundamental questions | 2000 | 3 | philosophy-lora-f16.gguf |
| quantum | Probabilistic thinking, superposition, complementarity | 2000 | 3 | quantum-lora-f16.gguf |
| consciousness | Recursive cognition (RC+xi), meta-cognition, epistemic tension | 3000 | 3 | consciousness-lora-f16.gguf |
| multi_perspective | Cross-lens synthesis, integrative reasoning | 2500 | 3 | multi_perspective-lora-f16.gguf |
| systems_architecture | Modularity, scalability, engineering principles | 2000 | 3 | systems_architecture-lora-f16.gguf |
| orchestrator | Query routing, multi-agent debate, coherence monitoring | 4000 | 4 | orchestrator-lora-f16.gguf |
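The adapter names above can be wired into a simple registry for selection at runtime. The sketch below is illustrative only: the file names come from the table, but the keyword-based routing is a hypothetical stand-in for the orchestrator's actual query-routing logic.

```python
# Adapter registry: lens name -> GGUF file (file names from the table above).
ADAPTERS = {
    "newton": "newton-lora-f16.gguf",
    "davinci": "davinci-lora-f16.gguf",
    "empathy": "empathy-lora-f16.gguf",
    "philosophy": "philosophy-lora-f16.gguf",
    "quantum": "quantum-lora-f16.gguf",
    "consciousness": "consciousness-lora-f16.gguf",
    "multi_perspective": "multi_perspective-lora-f16.gguf",
    "systems_architecture": "systems_architecture-lora-f16.gguf",
    "orchestrator": "orchestrator-lora-f16.gguf",
}

# Hypothetical keyword routing; the real orchestrator uses a trained router.
KEYWORDS = {
    "newton": ("force", "energy", "physics", "gravity"),
    "empathy": ("feel", "grief", "comfort"),
    "philosophy": ("ethics", "meaning", "ought"),
}

def pick_adapter(query: str, default: str = "multi_perspective") -> str:
    """Return the first lens whose keywords match the query, else a default."""
    q = query.lower()
    for name, words in KEYWORDS.items():
        if any(w in q for w in words):
            return name
    return default

print(pick_adapter("Explain gravity"))  # routes to the newton lens
```

In the real system this selection feeds the hot-swap step shown under Usage below; here it only demonstrates the name-to-file mapping.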

## Training Configuration

| Parameter | Value |
|---|---|
| Base Model | meta-llama/Llama-3.1-8B-Instruct |
| Method | QLoRA (4-bit NF4 + double quantization) |
| LoRA Rank | 16 |
| LoRA Alpha | 32 |
| Dropout | 0.05 |
| Target Modules | q_proj, k_proj, v_proj, o_proj |
| Learning Rate | 2e-4 |
| Max Sequence Length | 2048 |
| Batch Size | 2 (effective 8 with gradient accumulation) |
| GPU | NVIDIA A10G (24 GB) |
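As a minimal sketch, the hyperparameters in the table map onto a standard Hugging Face QLoRA setup roughly as follows. The actual training script is not part of this repo, so treat the `transformers`/`peft` wiring below as an assumption, not the exact configuration used.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 base-model quantization with double quantization (QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA hyperparameters from the table above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

With alpha = 2 x rank, each adapter's update is scaled by alpha/r = 2, a common QLoRA default.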

## Phase 6+ Framework

All adapters are trained with awareness of the Codette Phase 6+ framework:

- **Semantic Tension Engine**: epistemic tension (xi) measurement between perspectives
- **Coherence Field (Gamma)**: monitors reasoning health, detects collapse patterns
- **Quantum Spiderweb**: belief-propagation network across adapter perspectives
- **AEGIS Ethical Governance**: 6-framework ethical validation layer
- **Specialization Tracking**: domain expertise tracking per adapter
- **Pre-flight Prediction**: conflict prediction before multi-agent debate
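Codette's exact xi formula is internal to the framework, but as a rough illustration, epistemic tension between two perspectives can be modeled as the cosine distance between embeddings of their answers. The vectors below are hand-made stand-ins for real answer embeddings.

```python
import math

def epistemic_tension(u, v):
    """Cosine distance (1 - cosine similarity) as an illustrative xi.

    0.0 means the two perspectives agree completely; values near 1.0
    (or above, for opposed vectors) indicate high tension.
    """
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# Hypothetical answer embeddings for two lenses on the same query.
newton_view = [0.9, 0.1, 0.0]
empathy_view = [0.1, 0.9, 0.2]

xi = epistemic_tension(newton_view, empathy_view)
print(f"xi = {xi:.3f}")
```

A coherence monitor like Gamma could then flag debates whose pairwise xi stays above a threshold, though the thresholding policy here is purely hypothetical.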

## File Structure

```
codette-lora-adapters/
  newton-lora-f16.gguf              # GGUF adapters, ~27 MB each
  davinci-lora-f16.gguf
  empathy-lora-f16.gguf
  philosophy-lora-f16.gguf
  quantum-lora-f16.gguf
  consciousness-lora-f16.gguf
  multi_perspective-lora-f16.gguf
  systems_architecture-lora-f16.gguf
  orchestrator-lora-f16.gguf
  newton/                           # SafeTensors format (each ~27 MB)
  davinci/
  ...etc
```

## Usage

### Hot-Swap with llama-cpp-python

```python
from llama_cpp import Llama

# Load the quantized base model with a LoRA adapter applied.
# llama-cpp-python exposes LoRA via the `lora_path` constructor argument,
# so switching lenses means reinitializing with a different adapter file.
llm = Llama(
    model_path="codette-orchestrator-Q4_K_M.gguf",
    lora_path="newton-lora-f16.gguf",
    n_ctx=4096,
    n_gpu_layers=35,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain gravity"}],
    max_tokens=512,
)
```

### With Codette Orchestrator

```python
from codette_orchestrator import CodetteOrchestrator

orch = CodetteOrchestrator()
result = orch.generate("What is consciousness?", adapters=["consciousness", "philosophy"])
```

## Related Repos

## License

These adapters are subject to the Llama 3.1 Community License.
