Phase 1 — Daemon v0 in progress

One environment.
Every tool.
An OS that pays attention.

Sigil is a purpose-built, AI-native Linux OS for professional software engineers: a unified shell where every developer tool lives in one frame, governed by a self-tuning daemon that learns how you work and removes friction from everything around the code.

NixOS + Hyprland · Hybrid on-device AI · Privacy-first · Open source
[Hero demo — the Sigil Shell: left rail with six tool tabs, local on-device inference active]

sigil@localhost:~/payments $ go test ./...
ok    github.com/acme/payments/handler  0.312s
ok    github.com/acme/payments/store    0.089s
FAIL  github.com/acme/payments/router   0.441s
--- FAIL: TestRouteAuth (0.44s)
    payments_test.go:88: expected 200, got 401
sigil@localhost:~/payments $

Suggestion bar: TestRouteAuth has failed 3 times this session — the JWT middleware in auth/middleware.go:42 changed 2 commits ago. [Tab to accept]
Input bar: run tests, start servers, or ⌥Tab to ask AI…

Your tools are fragmented.
Your OS doesn't know you exist.

01

Context switching kills flow

Terminal. Editor. Browser. Slack. Four windows, four mental models, zero shared context. Every switch costs focus. Your OS sees processes — it doesn't see you shipping.

02

AI tools live at the IDE layer

Copilot, Cursor, Claude Code — powerful, isolated. They see the file you have open. They don't see your build failing for the third time, the container you spun up an hour ago, or the pattern that connects both.

03

No OS-level intelligence layer

The information needed to actually understand a developer's workflow exists — in /proc, in inotify, in git history, in build logs. No OS has ever done anything useful with all of it together. Until now.

Developers don't need AI that replaces their work —
they need a single cohesive environment that learns how they work
and removes friction from everything around the code.

A terminal that
learned to be a GUI.

The Sigil Shell is a single full-screen application that is your entire desktop. Keyboard-first. Dark. Monospace. But with the spatial affordances of a real GUI — because you shouldn't have to choose between power and polish.


Left Rail

56px strip. Six embedded tools: Terminal, Editor, Browser, Git, Containers, and the Daemon Insights view. ⌘1–6 to switch. Status indicators show inference mode and daemon health.

Main Content Pane

Full PTY terminal. Neovim via PTY (the real thing, not a subset). Minimal WebView browser. Git log and diffs. Container status. Daemon analytics. Split-pane via ⌘\.

Suggestion Bar

The daemon's passive voice. A rotating feed of contextual insights — not generic tips, but observations about your current session. Tab accepts. Esc dismisses. Never interrupts.

Unified Input Bar

Two modes, one keystroke apart. $ Shell mode — a real terminal prompt. AI mode — natural language to the entire system, with full daemon context. ⌥Tab to toggle. You never context-switch between doing and asking.

Your OS,
paying attention.

sigild is a Go binary running as a systemd service. It watches everything your OS sees — file events, process activity, git commits, build results, shell commands — and builds a local model of how you work. All data stays on your machine. The intelligence lives at the OS level, where it belongs.

Collector

Observation layer

  • fsnotify — file events
  • /proc — process activity
  • Hyprland IPC — display events
  • Git — commit, branch, diff
  • Build & test output
  • AI interaction metrics

All raw events written to local SQLite. Nothing leaves the machine.

Analyzer

Intelligence layer — two tiers

On-device

Statistical heuristics. Pattern detection. Frequency tables. Temporal analysis. Runs via Cactus local model — no network, sub-120ms.

Cloud (opt-in)

Complex reasoning. Weekly workflow summaries. Stuck-detection. Frontier models via Cactus hybrid router — only when needed, previewed before sending.

Actuator

Action layer

passive

Suggestion bar — contextual insights surfaced as you work. Demonstrates value before you ask for it.

active (opt-in)

Auto-split pane on build. Pre-warm containers. Dynamic keybindings. Progressive AI disclosure as your usage patterns mature.

<50MB · RAM target for daemon
90 days · raw event retention window
80% · queries handled on-device
Derived patterns kept forever

On-device when possible.
Cloud when it matters.

Sigil partners with Cactus Compute (YC-backed, 4k+ GitHub stars) — an open-source hybrid inference engine that runs quantized LLMs directly on hardware, then routes complex queries to cloud models automatically. One API. One routing decision engine. No Ollama + LiteLLM complexity.

sigild → Cactus Hybrid Router
Routing modes: local · local-first · remote-first · remote

80% · simple queries — pattern lookup, suggestions, command hints
  → on-device model: LFM2-2.6B · INT4 · <120ms
20% · complex reasoning — weekly summaries, cross-session analysis
  → cloud frontier model: Claude, GPT-4 · previewed before sending
🌵
Cactus Compute is YC-backed and open-source. Their C/C++ runtime uses zero-copy memory mapping and INT4 quantization — meaning the on-device model runs on constrained hardware (tested on a 2017 MacBook Pro) without consuming RAM your development tools need. If they pivot or disappear, their API is OpenAI-compatible and their engine is forkable. The architectural dependency is on the API shape, not the company.

Built for engineers who
read source code.

Telemetry-averse developers are the hardest audience to earn trust from. That's intentional. The privacy architecture isn't a checkbox — it's the product. Here's exactly what happens to your data.

Individual · Always on · Non-negotiable

What stays on your machine

  • ✓ All raw telemetry — file events, process data, commands, build output
  • ✓ Your workflow patterns and derived user model
  • ✓ All AI interaction history
  • ✓ 90-day retention window, configurable
  • ✓ Kill switch in the Insights view — wipes everything instantly

What can leave (with preview)

  • → Summarized context for complex LLM queries — you review before it sends
  • → Nothing raw, nothing identifiable, nothing without your explicit action

Fleet · Enterprise only · Explicit opt-in per engineer

What the fleet layer receives

  • ✓ Anonymized aggregate counts — query volumes, acceptance rates, routing ratios
  • ✓ Adoption tier classification — anonymous, no individual attribution
  • ✓ Build velocity trends — correlated to tiers, not individuals

What the fleet layer never receives

  • ✗ Raw events of any kind
  • ✗ File paths or filenames
  • ✗ Code content or query text
  • ✗ Anything that identifies a specific engineer's work

We never call it telemetry. It's team insights — you share aggregate patterns with your organization, not surveillance of your work. You can see exactly what the Fleet Reporter sends in the Insights view before it leaves.

The daemon is open source. The Cactus engine is open source. You can audit every line of code that touches your workflow data. "Trust us" is not the privacy model — "read the code" is.

The product engineers love
becomes the product that
justifies the AI investment.

Engineering leadership has one question: are our engineers actually using the AI tools we're paying for? Sigil answers it — without surveilling anyone.

[Fleet Dashboard — Engineering Leadership]

AI Adoption by Tier
  Tier 3 · Native: 22%
  Tier 2 · Integrator: 41%
  Tier 1 · Explorer: 27%
  Tier 0 · Observer: 10%

$0.03 · cost per accepted suggestion
81% · queries on-device (free)
67% · suggestion acceptance rate

Compliance posture
  100% of AI inference routed through approved endpoints
  Zero raw code sent to external APIs
  All engineers on fleet configuration v1.4.2
📊

AI Adoption Analytics

Query volumes, acceptance rates, adoption tier distribution, query categories. Watch the org move from Observer to Integrator over time.

Velocity Correlation

Correlate AI adoption tiers with commit cadence, build success rates, and PR cycle time. Prove the tool works — anonymously, at the team level.

💰

AI Cost Efficiency

Exact cloud API spend. Local-vs-cloud routing ratios. Cost per accepted suggestion. Turn "we route intelligently" into a procurement-ready number.

🔒

Compliance Posture

Which models are in use. What percentage hits approved endpoints. Data residency confirmation. One page your security team and auditors will love.

The fleet layer deploys on your infrastructure — a Helm chart or NixOS module. No data flows through Sigil's servers. Engineering leadership gets the dashboard. Engineers keep full individual-level control. The two audiences never trade.
Free Forever

Sigil OS

Individual engineer

  • Full unified shell
  • sigild daemon with all three subsystems
  • Cactus hybrid inference integration
  • Local-only — no fleet layer
  • Open source under Apache 2.0

Per seat · Annual

Sigil Enterprise

Engineering organizations

  • Everything in open source
  • Fleet Reporter subsystem
  • Fleet Aggregation Layer (your infra)
  • Leadership dashboard
  • Centralized Cactus routing policy
  • Model allowlisting & SSO
  • Priority support
Talk to us

One engineer.
One old MacBook.
Three novel layers.

Sigil is being built by a single staff-level engineer with 15 years in FinTech and 6 years of Go. The core product is open-source because the target audience reads source code before they trust anything. That's not a constraint — that's the go-to-market strategy.

Technical Stack
Layer       | Technology             | Why
Base OS     | NixOS                  | Declarative, reproducible, rollback-safe
Compositor  | Hyprland (Wayland)     | GPU substrate, IPC, multi-monitor, pop-out windows
Shell       | Tauri · Rust + Preact  | ~5–10MB binary, native backend, TypeScript frontend
Terminal    | xterm.js + PTY         | Mature, full PTY — real Neovim, not a subset
Inference   | Cactus Compute         | Hybrid on-device/cloud routing, zero-copy memory, INT4
Daemon      | Go — sigild            | Low memory, fast startup, <50MB target
Local store | SQLite (WAL mode)      | Zero-config, concurrent reads, pure Go driver
IPC         | Unix socket + JSON     | Fast, standard, no dependencies
Config      | Nix flakes             | Single declarative spec for the entire stack
Fleet       | Go + PostgreSQL + Helm | Deployable on org infra, no Sigil-hosted cloud
Execution Roadmap
Done

Phase 0 — Bare Metal

NixOS + Hyprland on a 2017 MacBook Pro. Custom ISO with Broadcom Wi-Fi drivers. Cactus running local inference.

In progress

Phase 1 — Daemon v0

sigild as a systemd service. Collector, Analyzer, Actuator. Cactus integration. SQLite store. sigilctl CLI. 48h stability target.

Next

Phase 2 — Sigil Shell v0

Tauri app full-screen on Hyprland. All six tool views. Daemon socket connection. Live suggestion bar. AI mode input.

Q3

Phase 3 — Intelligence & Polish

Heuristic local model. Active actuations. Split-pane. Pop-out windows. Progressive AI disclosure.

Q4

Phase 4 — Enterprise Fleet Layer

Fleet Reporter. Aggregation service. Leadership dashboard. Enterprise configuration and SSO.

Q4–Q1

Phase 5 — Distribution

Open source release. Downloadable ISO. Enterprise pilot. Documentation and community.

The shell is what people will screenshot.
The daemon is what will make them stay.
The fleet dashboard is what their CTO will pay for.

Sigil is in active development. Join the waitlist to get early access when individual installs open up, or to talk enterprise when the fleet layer ships.

No spam. No marketing blasts. You'll hear from us when there's something real to share.