Private AI infrastructure

Run AI privately.

A vision SDK for iOS, Android, macOS, Linux, and Windows. Hugind runs local LLMs and agents on macOS, Linux, and Windows. No hosted API. No per-call fees.

See it

Two products. A few lines each.

Load a model, run it locally, done. No keys, no sign-up, no outbound network.

Vision/detect.py
Python
# Load YOLOv8, detect objects on-device, print results.
from imaged import AI, Image, ModelType

ai = AI()
ai.load_model(ModelType.YOLOV8)

img = Image()
img.load("photo.jpg")

for obj in ai.yolov8(img).objects_list:
    print(obj.label, obj.score)
Runs on iOS, Android, macOS, Linux, and Windows. SDK docs →
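The detections from the snippet above can be post-processed with plain Python. A minimal sketch: the `Detection` dataclass here is a stand-in for the objects the SDK returns (assuming the `label` and `score` fields shown above), not part of the SDK itself.

```python
# Sketch of filtering detections by confidence threshold.
# Detection is a hypothetical stand-in for the SDK's result objects,
# which expose .label and .score as in the example above.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    score: float


def keep_confident(objects, threshold=0.5):
    """Keep only detections at or above the confidence threshold."""
    return [o for o in objects if o.score >= threshold]


objects = [Detection("person", 0.91), Detection("dog", 0.34)]
print([o.label for o in keep_confident(objects)])  # ['person']
```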
LLM/hugind.sh
Terminal
# Install, start a server, and talk to it over an OpenAI-compatible API.
$ brew install hugind
$ hugind model add google/gemma-3-4b-it-qat-q4_0-gguf
$ hugind server start gemma-4b
  ready on http://localhost:8080

$ curl http://localhost:8080/v1/chat/completions \
    -d '{"model":"gemma-4b",
        "messages":[{"role":"user","content":"hi"}]}'
Runs on macOS, Linux, and Windows. Hugind on GitHub →
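Because the endpoint is OpenAI-compatible, any HTTP client can talk to it. A minimal sketch, assuming the same local URL and `gemma-4b` alias as the curl example, using only the Python standard library; the actual request is left commented out so the snippet stands alone without a running server.

```python
# Build the same chat-completions request the curl example sends,
# aimed at Hugind's local OpenAI-compatible endpoint.
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1"  # local server from the example above

payload = {
    "model": "gemma-4b",
    "messages": [{"role": "user", "content": "hi"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With the server running, responses follow the standard
# chat-completions shape: choices[0].message.content.
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```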
11 ready-to-use vision capabilities in the SDK
5 platforms for Vision: iOS, Android, macOS, Linux, and Windows
0 bytes of your data we ever see, by design
MIT license for Hugind, forever
Who this is for

Teams that can’t send data to the cloud.

  • Healthcare and medical devices. PHI stays on‑prem, on‑device, or inside a hospital network.
  • Mobile apps with privacy claims. Process photos and video without a network round-trip.
  • Regulated and on‑premise enterprise. GDPR, HIPAA, air‑gapped, data‑residency constrained.
  • Embedded and industrial. Drones, robots, kiosks, and field hardware without reliable internet.
  • Defense‑adjacent work. Vision workloads behind strict outbound‑traffic rules.
  • Developer teams evaluating local LLMs. Comparing Hugind, Ollama, and LM Studio for in‑house use.
What we believe

If the data matters, the model should run where the data already is. That is the product, the project, and the work.