01 Executive Summary

Every AI system in production infrastructure today operates on data that has already passed the point of no return.

By the time your security platform detects an anomaly, the attack has progressed. By the time your observability stack identifies degradation, users have experienced errors.

TernaryPhysics changes this by running AI at the earliest possible decision point — at the kernel boundary, where packets first exist. We make decisions in microseconds, not milliseconds. The system adapts to your specific infrastructure patterns. We keep all data on your infrastructure, not our cloud.

The result: An AI that knows your network better than any generic tool ever could — and gets better every day it runs.

02 The Problem

Every System Today Acts Too Late

Packet arrives
↓ copied to userspace
↓ shipped to agent
↓ transmitted to cloud
↓ stored in database
↓ queried by AI
↓ decision made
↓ action taken
Total latency: 100ms–10s

By the time any system can reason about the data and act, the moment has passed.

Your Data Leaves Your Network

Modern infrastructure AI requires sending telemetry to cloud services. Your metrics, your traffic patterns, your operational data — all shipped to someone else's infrastructure.

Manual Configuration Doesn't Scale

Traditional tools require manual rules and thresholds. But what's "normal" varies by system. Traffic patterns change. Manual configuration becomes stale.

03 The Solution

AI at the Source

We invert the architecture. Instead of copying data to where AI runs, we run AI where data originates:

Packet arrives
↓ inference runs (same context)
↓ action taken
Total latency: <1 microsecond

Local AI, No Cloud

The entire system runs on your infrastructure:

  • Inference: At the kernel boundary
  • Training: Your CPU (no GPU required)
  • Storage: Your filesystem

No data leaves your network. Ever.

Adaptive System

The system improves over time as it observes more traffic:

  • Pattern recognition: Learns what's normal for your infrastructure
  • Continuous training: Models retrain automatically on your hardware
  • Performance gates: New models must outperform current before deployment
  • Hot-swap updates: Zero downtime model deployments

No manual tuning required. The model improves automatically.
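The performance gate can be pictured as a simple promote-or-keep check. This is a minimal sketch, not the shipped implementation: the function names, the toy models, the holdout set, and the use of plain accuracy as the metric are all illustrative assumptions.

```python
# Sketch of a performance-gated hot-swap (illustrative; names, metric,
# and toy models are assumptions, not the shipped implementation).

def evaluate(model, samples):
    """Fraction of held-out (features, label) samples the model gets right."""
    correct = sum(1 for features, label in samples if model(features) == label)
    return correct / len(samples)

def maybe_promote(current, candidate, holdout):
    """Deploy the candidate only if it beats the current model on held-out data."""
    if evaluate(candidate, holdout) > evaluate(current, holdout):
        return candidate   # gate passed: hot-swap to the new model
    return current         # gate failed: keep serving the current model

# Toy models: flag a feature as anomalous (1) when it exceeds a threshold.
current   = lambda x: int(x > 10)
candidate = lambda x: int(x > 5)

# Held-out samples whose true boundary is 5, so the candidate wins the gate.
holdout = [(3, 0), (6, 1), (8, 1), (12, 1)]
active = maybe_promote(current, candidate, holdout)
```

The key design point is that promotion is conditional: a retrained model that does not beat the one in service is simply discarded, so a bad training run can never degrade live traffic.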

Self-Discovery

You don't configure what the system does. It figures that out.

Based on observed traffic patterns, the system automatically determines what domain it's operating in:

  • Threat-Response — Blocking, mitigating, throttling, rejecting
  • Traffic-Steering — Balancing, routing, distributing, forwarding
  • Data-Collection — Filtering, sampling, collecting, logging
  • Traffic-Marking — Tagging, labeling, marking for downstream
  • Passthrough — Allowing, handling, passing through

The domain isn't assigned — it emerges from what the model learns to do on your infrastructure. Different environments produce different specializations.
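One way to picture a domain emerging from behavior rather than configuration: tally the actions the model actually takes and let the plurality decide. The action-to-domain mapping and the plurality rule below are illustrative assumptions, not the product's actual logic.

```python
# Sketch: the operating domain emerges from observed action counts rather
# than being configured (the action->domain mapping is an assumption).
from collections import Counter

ACTION_DOMAIN = {
    "block": "Threat-Response",  "throttle": "Threat-Response",
    "route": "Traffic-Steering", "balance": "Traffic-Steering",
    "sample": "Data-Collection", "filter": "Data-Collection",
    "tag": "Traffic-Marking",
    "pass": "Passthrough",
}

def discover_domain(actions):
    """Return the domain accounting for the plurality of observed actions."""
    domains = Counter(ACTION_DOMAIN[a] for a in actions)
    return domains.most_common(1)[0][0]

# A host whose model mostly blocks and throttles reads as Threat-Response.
observed = ["block"] * 700 + ["throttle"] * 150 + ["pass"] * 150
```

The same code on a host that mostly routes and balances would report Traffic-Steering: different environments, different specializations.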

04 How It Works

Architecture Overview

NETWORK
Packets arrive → Features extracted
↓ under 1μs
KERNEL (eBPF)
Ternary NN inference → Action (route/block/sample)
↓ ringbuf
USERSPACE
Telemetry collection → Training → Validation → Hot-swap
↻ Continuous improvement

The Kernel Boundary Advantage

The Linux kernel is the first software layer that sees network traffic — on bare metal, VMs, and every Kubernetes node. Acting there means acting before pods, sidecars, or service meshes are involved:

Location        | Latency to Decision
Kernel boundary | <1μs
Userspace proxy | 10–50μs
Local agent     | 100μs–1ms
Cloud service   | 10–100ms

Integer-Only Neural Inference

Traditional neural networks require floating-point math — prohibitively expensive at kernel speeds. Our architecture uses integer-only neural inference, making decisions in under a microsecond with no floating point operations and a model small enough to live entirely in kernel memory.

  • No floating point operations
  • No GPU required — runs on any CPU
  • Tiny model footprint
  • Sub-microsecond inference
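To make the idea concrete, here is a minimal sketch of a ternary dot product: with weights restricted to {-1, 0, +1}, every "multiplication" collapses to an add, a subtract, or a skip, so inference needs only integer arithmetic. The feature values and threshold are illustrative.

```python
# Sketch of integer-only inference with ternary weights {-1, 0, +1}:
# no multiplies, no floating point -- just adds, subtracts, and skips.

def ternary_dot(features, weights):
    """Integer dot product; features are ints, weights in {-1, 0, 1}."""
    acc = 0
    for x, w in zip(features, weights):
        if w == 1:
            acc += x      # +1 weight: add the feature
        elif w == -1:
            acc -= x      # -1 weight: subtract the feature
        # 0 weight: skip entirely, so sparsity costs nothing
    return acc

def infer(features, weights, threshold):
    """Score one feature vector; fire when the sum crosses the threshold."""
    return 1 if ternary_dot(features, weights) >= threshold else 0

features = [4, 7, 0, 9]    # e.g. integer-quantized packet features
weights  = [1, -1, 0, 1]   # ternary weights: 4 - 7 + 9 = 6
```

Zero weights are also why the model footprint stays tiny: a ternary weight needs under two bits of storage, and zeroed connections can be dropped entirely.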

The Improvement Cycle

INGEST → INFER → ACT → LOG → IMPROVE ↻
  1. Ingest: Every packet generates behavioral features
  2. Infer: NN scores it in microseconds
  3. Act: Block, reroute, filter, or allow — before your application sees it
  4. Log: Telemetry recorded for analysis
  5. Improve: Model updates on your hardware. Zero downtime.

No manual intervention. The system improves continuously.
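The in-path portion of the cycle can be sketched as follows; the feature extractor, toy model, and telemetry list are stand-ins for the real pipeline, not its actual code.

```python
# Minimal sketch of the ingest -> infer -> act -> log path for one packet
# (the extractor, model, and telemetry store are illustrative stand-ins).

telemetry = []  # stands in for the ring buffer the userspace trainer consumes

def extract_features(packet):
    """Ingest: reduce a raw packet to integer behavioral features."""
    return [len(packet), packet.count(b"\x00")]

def model(features):
    """Infer: toy anomaly score -- flag unusually large packets."""
    return 1 if features[0] > 100 else 0

def handle(packet):
    """Act + Log: decide in-path, then record telemetry for later training."""
    features = extract_features(packet)
    verdict = model(features)
    telemetry.append((features, verdict))  # feeds the Improve step
    return "block" if verdict else "allow"
```

Everything the application never sees (the blocked packet) still leaves a telemetry record behind, which is what lets the Improve step learn from every decision.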

Accuracy Improves Over Time

The model improves as it sees more of your traffic patterns:

  • Shadow mode: 30 days of observation, learning your baseline
  • Early operation: Basic patterns recognized
  • Mature operation: Infrastructure-specific knowledge
  • Long-term: Tuned specifically to your environment

No generic rules — the model is specific to your infrastructure.

05 What You Get

Day 1

Install

# Bare metal / VM
curl -sSL install.ternaryphysics.com | sh

# Kubernetes (DaemonSet)
kubectl apply -f install.ternaryphysics.com/k8s.yaml

One command for Linux hosts. One manifest for Kubernetes — no sidecars, no service mesh changes, no pod modifications required.

Days 1–30

Shadow Mode

The system observes everything and acts on nothing. It learns your baseline, your patterns, your normal. Zero risk.
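Shadow mode can be pictured as a wrapper that records the model's verdict but never enforces it; the function and toy model below are illustrative assumptions, not product code.

```python
# Sketch of shadow mode: score every packet, record the verdict the model
# *would* have enforced, but always pass traffic through unchanged.

shadow_log = []

def shadow_handle(packet, model, enforcing=False):
    """Log the model's verdict; enforce it only after shadow mode ends."""
    verdict = model(packet)
    shadow_log.append(verdict)      # builds the baseline for going live
    if enforcing:
        return "block" if verdict else "allow"
    return "allow"                  # shadow mode: never act on traffic

# Toy model: flag unusually large packets.
model = lambda pkt: int(len(pkt) > 100)
```

Because the enforcement flag stays off for the observation period, a wrong verdict during learning costs nothing: the traffic flows either way.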

Day 31

First Live

First real actions. The model has seen a month of your traffic. It knows what normal looks like — and has discovered what domain it operates in.

Active model: v3 (Threat-Response: Blocker)

Early

Accurate

More traffic observed. False positives drop. Detection sharpens.

Mature

Specialized

The model knows your individual services, your specific failure modes, and patterns unique to your infrastructure that no generic tool could know. Tuned to YOUR network.

06 Security & Privacy

Your Data Never Leaves

What            | Where It Lives
Packet data     | Your kernel
Feature vectors | Your memory
Training data   | Your filesystem
Model weights   | Your infrastructure

We ship software. You own everything else.

eBPF: Sandboxed by Design

TernaryPhysics runs on eBPF (Extended Berkeley Packet Filter) — the same technology powering Cloudflare's DDoS protection, Netflix's performance monitoring, and Facebook's load balancing.

  • Verified before execution: Linux kernel verifier ensures memory safety — no unsafe operations possible
  • Cannot crash your system: Bounded loops, validated memory access, automatic termination guarantees
  • Resource bounded: CPU and memory usage enforced by kernel — cannot consume unbounded resources
  • Not a kernel module: No kernel patches, no recompilation, no custom code in kernel space

Unlike traditional kernel modules that can crash systems, eBPF programs are statically verified for memory safety and termination before they run. The Linux kernel simply will not load code that fails verification.

Production Hardened

  • Tested with extreme edge cases and input validation
  • Stress-tested with millions of samples under load
  • Resource exhaustion scenarios validated (disk, memory, concurrency)
  • Automatic safeguards prevent unbounded growth

Air-Gap Ready

  • No cloud dependency
  • No phone-home requirement
  • Full offline operation
  • Works in defense, financial services, healthcare

Deployment Environments

Environment         | How                    | Notes
Bare metal          | Single install command | Full support
VM (any hypervisor) | Single install command | Full support
Kubernetes          | DaemonSet manifest     | Full support
EKS (AWS)           | DaemonSet manifest     | Standard node groups only
GKE Standard        | DaemonSet manifest     | Full support
GKE Autopilot       | n/a                    | eBPF restricted by provider
Edge / air-gapped   | Single install command | Full support, no egress required

What We Never See

Your packet contents
Your application data
Your user information
Your network topology

We literally cannot access your data. It never leaves your infrastructure.

07 Performance

  • Inference latency: <1μs
  • CPU overhead: <1%
  • GPUs required: 0
  • Traffic coverage: 100%

More traffic = more samples = better model.
The system gets better the more you use it.

Production Validation

Deployed in production on a minimal DigitalOcean droplet (1 vCPU, 1GB RAM) with real workload:

  • 9.1M+ events processed: 4.5M features extracted, 4.5M actions taken
  • 971K connections tracked: Full visibility into connection success rates
  • 99.2% accuracy achieved: After 10 epochs of continuous training on live data
  • 20 automatic model deployments: Hot-swapped with zero downtime
  • Minimal resource usage: Under 1% CPU overhead, bounded memory growth

No GPU required. No cloud dependency. Runs anywhere Linux runs — from 512MB VMs to multi-core servers.

08 Why It Works

Generic tools use generic rules.

TernaryPhysics learns YOUR network.

The model is trained on YOUR traffic patterns, YOUR services, YOUR infrastructure behavior. It understands what's normal for YOUR environment — not some average across all customers.

Specific to your infrastructure. No false positives from generic rules.

09 Comparison

vs. Load Balancers

nginx, HAProxy, Envoy

  • They react to failure. We predict it.
  • They run in userspace. We act earlier.
  • They use static rules. We learn continuously.

vs. Service Mesh

Istio, Cilium, Linkerd

  • They add sidecar overhead. We add zero pods.
  • They need mesh config. One manifest, done.
  • They observe. We act.

vs. Observability

Datadog, Dynatrace

  • They alert after the fact. We act in the moment.
  • They require your data. We keep it local.
  • They charge per GB. We don't.

vs. Security

CrowdStrike, Falco

  • They match signatures. We learn patterns.
  • They need rule updates. We adapt automatically.
  • They're generic. We're specific to YOUR infrastructure.

10 Get Started

One Command

# Bare metal / VM
$ curl -sSL install.ternaryphysics.com | sh

# Kubernetes
$ kubectl apply -f install.ternaryphysics.com/k8s.yaml
  1. Agent installs
  2. Observer attaches
  3. Shadow mode begins
  4. Day 31: Goes live
  5. Improves forever

Shadow mode acts on nothing. Uninstall anytime. Zero risk.