Hui means gathering or assembly. Hui lets you convene any mix of models, run structured debates, and grow new skills fast.
```mermaid
flowchart TB
  subgraph "Monorepo: hui"
    direction TB
    subgraph apps["apps/"]
      direction LR
      subgraph collab["apps/collab (Hui Council: real-time model council)"]
        direction TB
        UI["Gradio UI<br/>Switchboard (models) + Role Foundry (roles)"]
        VCE["Consensus Engine"]
        UI --> VCE
      end
      subgraph finetune["apps/finetune (Hui Train: repurpose & fine-tune)"]
        direction TB
        CLUI["CLI/Notebook Runner<br/>(ingest → prepare → train → eval → package)"]
        Pipelines["Pipelines: JSONL prep • SFT/LoRA/Unsloth • Eval • Packaging"]
        CLUI --> Pipelines
      end
    end
    subgraph packages["packages/"]
      direction LR
      subgraph core["packages/core"]
        direction TB
        RA["EnhancedResearchAgent<br/>(cache • coalesce • parallel)"]
        Tools["Tools: web • wikipedia • arxiv • github • sec"]
        Cache["Caches: LRU + in-flight coalescing + KV hooks"]
        DSPy["DSPy Synthesis (optional)"]
        Reports["Structured Report Templates"]
        Logger["Dataset Logger (JSONL)"]
        RA --- Tools
        RA --- Cache
        RA --- Reports
        RA --- Logger
        VCE === RA
      end
      subgraph adapters["packages/adapters"]
        direction TB
        Plug["Model Plugboard<br/>(OpenAI-compatible endpoints)"]
        HuiIO["Hui IO Adapter<br/>(ingest tasks, transcripts, outputs)"]
        VCE --- Plug
        Pipelines --- HuiIO
      end
      subgraph training["packages/training"]
        direction TB
        Prep["Data Prep: normalize • schema • dedupe • splits"]
        Train["Trainer: SFT/LoRA • Unsloth compatible"]
        Eval["Eval: regression sets • rubric • structured report checks"]
        Pack["Packaging: checkpoints • prompt templates • adapters"]
        Pipelines === Prep
        Prep --> Train --> Eval --> Pack
      end
    end
    subgraph data["data/"]
      direction TB
      JSONL["JSONL Traces (SFT-ready)"]
      Artifacts["Artifacts (models/checkpoints)"]
      ReportsOut["Generated Reports (structured)"]
      Logger --- JSONL
      Pack --- Artifacts
      CLUI --- ReportsOut
    end
  end
  subgraph ext["External Services"]
    direction LR
    Mistral["Mistral API"]
    Samba["SambaNova API"]
    OpenAICompat["Other OpenAI-Compat Endpoints"]
  end
  Plug --- OpenAICompat
  VCE --- Mistral
  VCE --- Samba
  subgraph wf1["Workflow A: Hui Council"]
    direction TB
    A1["Switchboard: add models"]
    A2["Role Foundry: assign roles"]
    A3["Debate: N rounds + live research"]
    A4["Final synthesis (structured)"]
    A5["Optional: log traces → JSONL"]
    UI --> A1 --> A2 --> A3 --> A4 --> A5 --> JSONL
  end
  subgraph wf2["Workflow B: Hui Train"]
    direction TB
    B1["Ingest via Hui IO"]
    B2["JSONL prep"]
    B3["Train (Unsloth)"]
    B4["Evaluate (rubric)"]
    B5["Package → Plugboard"]
    HuiIO --> B1 --> B2 --> JSONL
    JSONL --> B3 --> B4 --> B5 --> Artifacts
  end
```
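The Switchboard/Plugboard pattern above boils down to a registry of OpenAI-compatible endpoints that council seats resolve against. A minimal sketch, assuming illustrative names (`ModelEndpoint`, `Plugboard` are not the actual API):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ModelEndpoint:
    """One OpenAI-compatible endpoint the council can call."""
    name: str         # display name used by the Switchboard
    base_url: str     # e.g. "https://api.mistral.ai/v1"
    model: str        # model identifier sent in the request body
    api_key_env: str  # env var holding the credential

@dataclass
class Plugboard:
    """Registry mapping council seats to endpoints."""
    endpoints: dict[str, ModelEndpoint] = field(default_factory=dict)

    def register(self, ep: ModelEndpoint) -> None:
        self.endpoints[ep.name] = ep

    def resolve(self, name: str) -> ModelEndpoint:
        return self.endpoints[name]

board = Plugboard()
board.register(ModelEndpoint("mistral-large", "https://api.mistral.ai/v1",
                             "mistral-large-latest", "MISTRAL_API_KEY"))
```

Because every provider speaks the same chat-completions wire format, swapping a council seat is a one-line `register` call rather than new integration code.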
- Real‑time ANY‑model collaboration via Switchboard and Role Foundry
- Deterministic structured reports for high‑stakes outputs
- Rapid fine‑tuning/repurposing using SFT/LoRA (Unsloth‑friendly)
- Reusable data pipeline: ingest → JSONL → train → evaluate → package
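The normalize → dedupe → split stage of that pipeline can be sketched as follows; the function name and record shape are assumptions for illustration, not the actual `packages/training` API:

```python
import hashlib
import json
import random

def prepare(records, seed=0, eval_frac=0.1):
    """Normalize, content-hash dedupe, and split chat records for SFT (sketch)."""
    seen, clean = set(), []
    for rec in records:
        norm = {"messages": rec["messages"]}  # keep only the fields SFT needs
        key = hashlib.sha256(json.dumps(norm, sort_keys=True).encode()).hexdigest()
        if key in seen:  # drop exact duplicates by content hash
            continue
        seen.add(key)
        clean.append(norm)
    random.Random(seed).shuffle(clean)  # deterministic shuffle for reproducible splits
    n_eval = max(1, int(len(clean) * eval_frac)) if clean else 0
    return clean[n_eval:], clean[:n_eval]  # (train split, eval split)

rows = [
    {"messages": [{"role": "user", "content": "hi"}, {"role": "assistant", "content": "hello"}]},
    {"messages": [{"role": "user", "content": "hi"}, {"role": "assistant", "content": "hello"}]},  # duplicate
    {"messages": [{"role": "user", "content": "bye"}, {"role": "assistant", "content": "later"}]},
]
train, eval_set = prepare(rows)
```

Hashing the normalized record (rather than the raw one) means cosmetic differences in metadata fields do not defeat deduplication.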
```
hui/
  apps/
    collab/      # Hui Council UI (Switchboard, Role Foundry)
    finetune/    # Hui Train CLI/Notebooks
  packages/
    core/        # Research agent, templates, logging, DSPy (opt)
    adapters/    # Plugboard + IO adapters
    training/    # Prep/Train/Eval/Package utilities
  data/          # JSONL traces, artifacts, reports
```
- Council:

  ```shell
  python apps/collab/app.py
  ```

- Train:

  ```shell
  python apps/finetune/cli.py ingest input_dir_or_jsonl data/raw.jsonl
  python apps/finetune/cli.py prepare data/raw.jsonl data/prepared.jsonl
  python apps/finetune/cli.py evaluate data/prepared.jsonl reports/eval.json --type enhancement_plan_v1 --threshold 0.7
  python apps/finetune/cli.py unsloth-cmd data/prepared.jsonl models/hui-lora --base meta-llama/Meta-Llama-3.1-8B-Instruct
  ```
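For orientation, here is what one SFT-ready JSONL trace line from a council session might look like. The field names (`messages`, `meta`) are a plausible shape, not the Dataset Logger's confirmed schema:

```python
import json

# Hypothetical shape of one trace line; the actual Dataset Logger schema may differ.
trace = {
    "messages": [
        {"role": "system", "content": "You are the Skeptic in a model council."},
        {"role": "user", "content": "Assess the claim that caching reduces latency here."},
        {"role": "assistant", "content": "Three concerns: cold-start cost, staleness, memory."},
    ],
    "meta": {"app": "collab", "round": 2, "model": "mistral-large-latest"},
}
line = json.dumps(trace, ensure_ascii=False)  # one record per line, no pretty-printing
```

Keeping each record on a single line is what makes the file streamable by the `prepare` step without loading everything into memory.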