Building an AI-Powered Enterprise App From the Ground Up

Excerpt: What it takes to build a multi-agent AI platform that runs across web, desktop, and mobile — and what I learned along the way.


For a while now, I’ve been building Hapax — an AI-powered enterprise platform that handles document management, knowledge graphs, multi-agent orchestration, and real-time collaboration. It runs on web, desktop, and mobile.

Here’s what that actually looks like from the inside.

The Problem

Enterprise teams drown in documents, scattered knowledge, and disconnected tools. They need a system that can ingest documents, understand them, connect related concepts, and let multiple AI agents work together to answer complex questions — all in real time, across devices.

That’s not a weekend project.

The Architecture

Hapax is a monorepo managed with Turborepo and pnpm. It contains:

  • Web app — Next.js 15, React 19, TypeScript, Tailwind CSS
  • Desktop app — Electron with React and Vite
  • Mobile app — React Native with Expo and NativeWind
  • Backend — Python with FastAPI, async everywhere
  • Shared packages — UI components, API client, hooks, types
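The workspace layout looks roughly like this — a minimal sketch, with directory names that are illustrative rather than the actual repo structure:

```yaml
# pnpm-workspace.yaml — tells pnpm which folders are workspace packages
packages:
  - "apps/*"      # web, desktop, mobile
  - "packages/*"  # ui, api-client, hooks, types
```

Turborepo then layers task orchestration (build, lint, test pipelines with caching) on top of these workspaces.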

Everything talks to a FastAPI backend that orchestrates multiple AI agents, manages document processing pipelines, and handles real-time collaboration through WebSockets and Yjs.
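The core backend pattern is concurrent fan-out: independent agents run in parallel and their results get synthesized. Here's a stripped-down sketch of that shape in stdlib `asyncio` — the agent names and return values are illustrative, not Hapax's actual API:

```python
import asyncio

# Toy "agents" standing in for real LLM-backed workers. In production each
# of these would await an external API or database call.
async def retrieve_documents(query: str) -> str:
    await asyncio.sleep(0)  # placeholder for an async I/O call
    return f"docs for {query!r}"

async def query_knowledge_graph(query: str) -> str:
    await asyncio.sleep(0)
    return f"graph context for {query!r}"

async def answer(query: str) -> dict:
    # Fan out the independent agents concurrently, then combine results.
    docs, graph = await asyncio.gather(
        retrieve_documents(query),
        query_knowledge_graph(query),
    )
    return {"query": query, "docs": docs, "graph": graph}

result = asyncio.run(answer("Q3 revenue drivers"))
print(result["docs"])  # → docs for 'Q3 revenue drivers'
```

Because FastAPI route handlers are themselves coroutines, this fan-out composes naturally with the rest of the async stack.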

The AI Layer

This is where it gets interesting. The platform integrates:

  • OpenAI and Anthropic APIs for LLM reasoning
  • LangChain and LangGraph for multi-agent orchestration
  • Sentence transformers for semantic embeddings
  • pgvector for vector similarity search
  • Neo4j for knowledge graph relationships
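The retrieval side of this stack boils down to one operation: comparing embedding vectors by cosine similarity (pgvector exposes the complementary cosine *distance* as its `<=>` operator). A toy version in pure Python, with made-up 3-dimensional "embeddings" standing in for real sentence-transformer vectors:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product of the vectors over the product of their norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dim vectors; real sentence-transformer embeddings have 384+ dims.
query_vec = [0.9, 0.1, 0.0]
doc_vecs = {
    "contract.pdf": [0.8, 0.2, 0.1],
    "memo.docx":    [0.0, 0.1, 0.9],
}

# Rank documents by similarity to the query, most similar first.
ranked = sorted(doc_vecs, key=lambda d: cosine_similarity(query_vec, doc_vecs[d]),
                reverse=True)
print(ranked[0])  # → contract.pdf
```

pgvector does the same ranking inside PostgreSQL with an index, which is what makes it viable at enterprise document volumes.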

When a user asks a question, the platform doesn’t just run a simple lookup. Multiple agents collaborate — one retrieves relevant documents, another analyzes the knowledge graph, another synthesizes the answer. They coordinate through a graph-based workflow system.
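LangGraph handles the orchestration in production, but the core idea — nodes that read and extend a shared state as it flows along the graph's edges — can be sketched in plain Python. The node logic here is illustrative, not the real agents:

```python
# Each node is a function from state to state; edges define execution order.
def retriever(state: dict) -> dict:
    return {**state, "docs": ["doc-1", "doc-2"]}

def graph_analyst(state: dict) -> dict:
    return {**state, "related": ["concept-A"]}

def synthesizer(state: dict) -> dict:
    answer = (f"Answer built from {len(state['docs'])} docs "
              f"and {len(state['related'])} related concepts")
    return {**state, "answer": answer}

# A linear workflow: retrieve -> analyze -> synthesize. Real graphs add
# branching and conditional edges on top of this same state-passing idea.
workflow = [retriever, graph_analyst, synthesizer]

state = {"question": "How do these contracts relate?"}
for node in workflow:
    state = node(state)
print(state["answer"])
```

The payoff of the shared-state design is that any node can be swapped, retried, or run conditionally without the others knowing.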

The Hard Parts

Real-time collaboration was one of the trickiest pieces. Multiple users editing the same document simultaneously, with AI agents also contributing, requires careful conflict resolution. I use Yjs (a CRDT library) with custom sync providers.
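Yjs itself is a JavaScript library, but the property that makes CRDTs work can be shown with one of the simplest examples, a grow-only counter: merging is an element-wise max, which is commutative, associative, and idempotent, so replicas converge no matter what order updates arrive in. A Python sketch of the idea:

```python
# G-Counter: each replica tracks its own count; merge takes the max per replica.
def increment(counter: dict, replica: str, n: int = 1) -> dict:
    return {**counter, replica: counter.get(replica, 0) + n}

def merge(a: dict, b: dict) -> dict:
    return {r: max(a.get(r, 0), b.get(r, 0)) for r in a.keys() | b.keys()}

def value(counter: dict) -> int:
    return sum(counter.values())

# Two replicas (say, a user and an AI agent) update concurrently...
user  = increment({}, "user", 3)
agent = increment({}, "agent", 2)

# ...and converge to the same value regardless of merge order.
assert value(merge(user, agent)) == value(merge(agent, user)) == 5
```

Yjs applies the same principle to far richer structures — shared text, maps, and arrays — which is what lets humans and agents edit the same document without a central lock.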

Document processing at scale is its own beast. PDFs, Word docs, spreadsheets — each format has its quirks. I built a pipeline using docling, PyMuPDF, and python-docx that can handle most of what enterprise teams throw at it.
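The shape of such a pipeline is a dispatch table from file format to extractor. A minimal sketch — the extractor bodies below are stand-ins for the real docling / PyMuPDF / python-docx calls:

```python
from pathlib import Path

# Format-specific extractors; in production each wraps a real parsing library.
def extract_pdf(path: Path) -> str:
    return f"pdf text from {path.name}"

def extract_docx(path: Path) -> str:
    return f"docx text from {path.name}"

EXTRACTORS = {".pdf": extract_pdf, ".docx": extract_docx}

def extract_text(path: Path) -> str:
    # Dispatch on the (case-normalized) file extension.
    try:
        extractor = EXTRACTORS[path.suffix.lower()]
    except KeyError:
        raise ValueError(f"unsupported format: {path.suffix}") from None
    return extractor(path)

print(extract_text(Path("report.PDF")))  # → pdf text from report.PDF
```

The dispatch-table design keeps each format's quirks contained in one extractor, so adding a new format is one entry, not a rewrite.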

Cross-platform consistency is an ongoing challenge. Sharing code between Next.js, Electron, and React Native sounds simple until you hit platform-specific APIs, different rendering engines, and native module compatibility. The monorepo structure with shared packages helps, but it’s never seamless.

What I Learned

  1. Start with the data model. Everything else flows from how you structure your data. Get this wrong and you’ll be refactoring forever.
  2. AI is the easy part. Calling an LLM API is simple. Building the infrastructure around it — document processing, embeddings, vector storage, agent coordination, error handling, caching — that’s where the real work lives.
  3. Monorepos are worth the setup cost. The initial configuration of Turborepo, shared packages, and cross-platform builds is painful. But once it’s running, being able to share types, components, and logic across web, desktop, and mobile saves enormous amounts of time.
  4. Type safety across the stack is non-negotiable. TypeScript on the frontend, Pydantic on the backend. When your data passes through LLMs, document processors, vector databases, and multiple UI platforms, you need contracts at every boundary.
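What a boundary contract buys you can be shown in a few lines. The real backend uses Pydantic models; here is the same idea sketched with a stdlib dataclass, with illustrative field names — invalid data fails loudly at the boundary instead of corrupting state three layers deeper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DocumentChunk:
    """A validated contract for one chunk of an ingested document."""
    doc_id: str
    text: str
    embedding: list[float]

    def __post_init__(self):
        # Reject malformed data at the boundary, before it enters the system.
        if not self.doc_id:
            raise ValueError("doc_id must be non-empty")
        if len(self.embedding) == 0:
            raise ValueError("embedding must not be empty")

chunk = DocumentChunk(doc_id="doc-42", text="Quarterly summary…", embedding=[0.1, 0.2])
print(chunk.doc_id)  # → doc-42
```

Pydantic adds type coercion, JSON schema generation, and FastAPI integration on top of this, which is why it sits at every API boundary.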

The Stack at a Glance

  • Frontend: Next.js 15, React 19, TypeScript, Tailwind, Radix UI
  • Mobile: React Native, Expo, NativeWind
  • Desktop: Electron, Vite, React
  • Backend: Python 3.14+, FastAPI, async/await
  • AI/ML: OpenAI, Anthropic, LangChain, LangGraph, sentence-transformers
  • Databases: Supabase (PostgreSQL + pgvector), Neo4j, Redis
  • Infra: AWS (S3, SQS, ECS), Docker, Turborepo, pnpm
  • Real-time: Yjs, WebSockets, Supabase Realtime

What’s Next

Hapax is still evolving. The focus right now is on improving agent coordination, expanding the knowledge graph capabilities, and making the document processing pipeline faster and more accurate.

Building something this complex solo has been the best education I could ask for. Every layer of the stack teaches you something the layer above abstracts away.


If you want to talk about AI architecture, multi-agent systems, or why I chose FastAPI over everything else — reach out.