HX AI's proprietary intelligence engine. Aiqrion is forked, audited, modified, fine-tuned, distilled, evaluated, and productized — not a router over third-party models. Currently in alpha.
Free, Pro, Max, and Enterprise tiers. Quotas, model access, Codex runs, RAG storage, and Dispatch tasks scale per plan. An Internal tier is reserved for the HX team. v0.8 ships hosted alpha billing in mock mode by default; production billing is wired through the Stripe adapter and activates only when secrets are configured. See pricing.
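As a rough sketch of that default, a deployment could pick the billing adapter based on whether Stripe secrets are present; the environment variable and return values below are illustrative assumptions, not the shipped configuration.

```python
import os

# Illustrative only: the variable and adapter names are hypothetical,
# not the actual Aiqrion billing configuration.
def select_billing_adapter() -> dict:
    stripe_key = os.environ.get("STRIPE_SECRET_KEY")
    if stripe_key:
        # Production billing activates only when Stripe secrets are present.
        return {"adapter": "stripe", "mode": "live"}
    # Hosted alpha default: mock mode, no real charges.
    return {"adapter": "mock", "mode": "mock"}
```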
HX AI's autonomous task layer. Submit a goal; Aiqrion plans the work, runs it through a sandboxed tool layer, and stops at approval gates for risky actions. Destructive commands and network egress are denied by default. Logs are PII-redacted and tenant-isolated. See Dispatch.
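A minimal sketch of what a Dispatch submission and approval gate could look like over HTTP, assuming a REST shape; the host, endpoint paths, and field names are placeholders, not documented API.

```python
import requests

BASE = "https://api.example-hx.ai"  # placeholder host, not a published URL
HEADERS = {"Authorization": "Bearer <token>"}

# Hypothetical endpoint names, for illustration only.
task = requests.post(
    f"{BASE}/v1/dispatch/tasks",
    headers=HEADERS,
    json={"goal": "Rotate the staging API keys and update the secrets store"},
).json()

# Poll until the plan reaches an approval gate or finishes.
status = requests.get(f"{BASE}/v1/dispatch/tasks/{task['id']}", headers=HEADERS).json()
if status.get("state") == "awaiting_approval":
    # Risky steps (destructive commands, network egress) stop here by default
    # and run only after an explicit approval.
    requests.post(f"{BASE}/v1/dispatch/tasks/{task['id']}/approve", headers=HEADERS)
```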
HX-AI-owned reasoning model lineage. Fork-and-fuse on top of Qwen 27B with HX-owned training, distillation, and checkpoint manifests.
HX-owned coding agent — repo indexing, plans, diffs, dry-run patches, review, audit, rollback metadata. Not a wrapper around Aider, OpenHands, Cursor, or Copilot.
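A hedged sketch of the plan, dry-run, apply flow the agent exposes, again assuming a REST shape; the host, endpoint paths, and response fields are placeholders, not documented API.

```python
import requests

BASE = "https://api.example-hx.ai"  # placeholder host
HEADERS = {"Authorization": "Bearer <token>"}

# Hypothetical Codex endpoints, for illustration only.
plan = requests.post(
    f"{BASE}/v1/codex/plans",
    headers=HEADERS,
    json={"repo": "org/service", "task": "add retry logic to the HTTP client"},
).json()

# A dry run produces a diff plus rollback metadata without touching the repo.
dry_run = requests.post(
    f"{BASE}/v1/codex/plans/{plan['id']}/dry-run", headers=HEADERS
).json()
print(dry_run["diff"])

# Applying the patch is a separate, reviewable step.
requests.post(f"{BASE}/v1/codex/plans/{plan['id']}/apply", headers=HEADERS)
```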
Image, screenshot, PDF, chart, table, and OCR document understanding. License-aware adapter chain (Nemotron OCR / PaddleOCR / Tesseract).
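One way a license-aware fallback chain could be ordered, as a sketch; the adapter names come from the line above, while the license labels and policy fields are assumptions.

```python
# Illustrative adapter chain; license labels are assumptions, not audited facts.
OCR_ADAPTERS = [
    {"name": "nemotron-ocr", "license": "nvidia-open-model"},
    {"name": "paddleocr", "license": "apache-2.0"},
    {"name": "tesseract", "license": "apache-2.0"},
]

def pick_ocr_adapter(tenant_allowed_licenses: set[str]) -> str:
    # Walk the chain in preference order and take the first adapter whose
    # license the tenant's policy permits.
    for adapter in OCR_ADAPTERS:
        if adapter["license"] in tenant_allowed_licenses:
            return adapter["name"]
    raise RuntimeError("no OCR adapter permitted by tenant policy")
```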
License-aware image generation. SDXL by default, FLUX-pro via the premium API, and FLUX-dev research-only and blocked from production tenants.
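A sketch of how backend selection might enforce that policy; the backend names match the line above, but the license labels and fields are illustrative assumptions.

```python
# Illustrative policy table; license labels and fields are assumptions.
IMAGE_BACKENDS = {
    "sdxl": {"license": "openrail++", "production_ok": True},          # default
    "flux-pro": {"license": "commercial-api", "production_ok": True},
    "flux-dev": {"license": "non-commercial", "production_ok": False},  # research only
}

def resolve_image_backend(requested: str, tenant_is_production: bool) -> str:
    name = requested if requested in IMAGE_BACKENDS else "sdxl"  # SDXL is the default
    if tenant_is_production and not IMAGE_BACKENDS[name]["production_ok"]:
        # Research-only weights (FLUX-dev) never reach production tenants.
        raise PermissionError(f"{name} is blocked for production tenants")
    return name
```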
Local / offline / on-prem packaging built on lessons from Ollama and llama.cpp. Deny-external by default, tenant-policy-aware.
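If an edge build exposes an Ollama-compatible local endpoint (an assumption, not a documented guarantee), an offline call could look like this:

```python
import requests

# Assumes an Ollama-compatible API on localhost and a hypothetical local
# model tag; neither is confirmed by the docs above.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "aiqrion-edge",  # hypothetical local model tag
        "messages": [{"role": "user", "content": "Summarize this incident log."}],
        "stream": False,
    },
    timeout=60,
)
print(resp.json()["message"]["content"])
```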
Private deployment — your tenants, your audit trail, your model card lineage, your safety policy.
Aiqrion follows a fork-and-fuse strategy: we identify openly available, permissively licensed model and tooling sources, audit each license, snapshot the parts we can legally use, study the rest, and build HX-owned interfaces over the reusable components. Aiqrion-owned checkpoints come from SFT / LoRA / QLoRA / DPO / distillation pipelines on top of those bases. Aiqrion is honest about being early: we do not claim a from-scratch HX-trained frontier model today. v0.5 produced our first adapter artifact; v0.6 ships the product alpha.
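As a generic illustration of the adapter-pipeline step, here is a plain LoRA setup with Hugging Face transformers and peft; the base checkpoint and hyperparameters are placeholders, not the Aiqrion training recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Placeholder base checkpoint; not the actual Aiqrion base.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

# Generic LoRA hyperparameters for illustration only.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapter weights train
```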
Aiqrion targets the same capability surface as Claude, ChatGPT, Gemini, Grok, Cursor, GitHub Copilot, and Devin-style agents. We are honest about being in alpha — we do not claim Aiqrion already beats those products on benchmarks. The v0.6 alpha lets internal teams test the full pipeline end to end so we can earn those numbers in v0.7+.
OpenAI-compatible /v1/chat/completions plus first-class Aiqrion-specific endpoints for Codex, RAG, AgentOS, datasets, adapters, evals, and audit. See docs/v0.6/web-console.md.
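Because the endpoint is OpenAI-compatible, the standard OpenAI Python client works with a swapped base URL; the URL and model name below are placeholders, not published values.

```python
from openai import OpenAI

# Placeholder base URL and model name; substitute your deployment's values.
client = OpenAI(base_url="https://api.example-hx.ai/v1", api_key="<your-key>")

resp = client.chat.completions.create(
    model="aiqrion-core",
    messages=[{"role": "user", "content": "Outline a rollback plan for a bad deploy."}],
)
print(resp.choices[0].message.content)
```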
Aiqrion v0.6 lineage: aiqrion-core on Qwen 27B, aiqrion-codex on Qwen 2.5-Coder-32B, aiqrion-vision on Qwen 2-VL-7B, aiqrion-edge quantized via Ollama/llama.cpp, and aiqrion-premium through the Mistral Large 3 API. Every Aiqrion-owned derivative carries a manifest, model card, inherited upstream license, and a modification log.
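A hypothetical shape for such a manifest, expressed as a Python dict; every field name and value is illustrative, not the actual schema.

```python
# Illustrative manifest shape only; not the real Aiqrion manifest schema.
manifest = {
    "name": "aiqrion-codex",
    "base_model": "Qwen/Qwen2.5-Coder-32B-Instruct",
    "upstream_license": "apache-2.0",  # inherited from the base model
    "adapter": {"method": "qlora", "rank": 16},
    "modification_log": [
        {"date": "YYYY-MM-DD", "change": "SFT pass on curated code-review pairs"},
    ],
    "model_card": "cards/aiqrion-codex.md",
}
```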
Aiqrion v0.6 is an internal alpha. Early access is invite-only.