Aeoncore - The Journey from 0 to 1
The story of how I turned a collection of ideas and spare parts into working infrastructure.
The computer chassis sat two feet to my right. I had just submitted my first local LLM query through the front-end I’d spent weeks configuring. As the model began generating its response, the fans spun up.
I could literally hear it thinking.
That sound represented more than just a cooling cycle; it was the moment Aeoncore stopped being a collection of ideas or a project plan and became working infrastructure. It was a service I owned, running on a platform I designed, implemented, and controlled, on hardware that I physically built, performing a task I had previously outsourced to the cloud.
This is the story of how I moved from 0 to 1.
The Spark: It Started With a Graphics Card
Before this project, I had the components of a high-end gaming desktop sitting idle. I knew it was over-specced for a standard home server—it had too much power for just cloud storage or media streaming, the most common personal "homelab" use cases—but it was already paid for and had massive headroom.
The catalyst was the RTX 3090. It was Nvidia's top-of-the-range gaming card in 2020, and it still performs that role in my workstation PC admirably. Even though it is now five years old and two generations behind, it has a superpower.
Because of its unusually high VRAM capacity (24GB), it has come to occupy a sweet spot in the current graphics card economy. The VRAM allows you to keep large-parameter models entirely on the card, which is orders of magnitude faster than offloading them into system RAM.
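The back-of-envelope math behind that sweet spot can be sketched in a few lines. This is a rough estimate only: the bytes-per-parameter figures and the 20% overhead factor (for KV cache, activations, and runtime buffers) are my own illustrative assumptions, not measured numbers for any specific model or runtime.

```python
def model_vram_gb(params_billions: float, bytes_per_param: float,
                  overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight size times an assumed ~20% overhead
    for KV cache, activations, and runtime buffers."""
    return params_billions * bytes_per_param * overhead

# Approximate bytes per parameter at common precision/quantization levels.
FP16 = 2.0
Q8 = 1.0   # 8-bit quantization
Q4 = 0.5   # 4-bit quantization averages roughly half a byte per weight

for name, params, quant in [
    ("7B  @ FP16", 7, FP16),
    ("13B @ Q8  ", 13, Q8),
    ("32B @ Q4  ", 32, Q4),
    ("70B @ Q4  ", 70, Q4),
]:
    est = model_vram_gb(params, quant)
    verdict = "fits" if est <= 24 else "spills to system RAM"
    print(f"{name}: ~{est:.1f} GB -> {verdict} on a 24 GB card")
```

Under these assumptions, even a quantized 32B-parameter model stays comfortably on the card, while a 70B model would spill over—exactly the boundary that makes 24GB the sweet spot.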
At the same time, open-weight models (such as Llama, Gemma, Qwen, or DeepSeek) and local tooling had finally crossed the threshold of being genuinely usable. For the first time, building a local AI platform that was actually good felt like more than just a theoretical possibility. I didn’t have a clear picture of my platform needs or how to build it yet, but I knew the end state: real, working AI functionality.
Professional Discipline Over the "Naive" Build
It would have been easy to build a monolith—install Linux, throw on Docker, pull an Ollama image, and call it a day. That works for a hobbyist, but it’s brittle and doesn't scale.
I wanted this to mirror the discipline of a professional DevOps organization. This meant choosing a hypervisor-backed model (Proxmox) with purpose-built VMs to ensure a clean separation of concerns between AI workloads, storage, and general services.
However, I had to balance that professional ambition against real-world constraints:
- Fixed Hardware: I had exactly the components in that chassis and no budget for extra hardware or software.
- Finite Time: This project took place against the backdrop of my job search; my time was an investment, not an infinite resource.
- The "Solo" Factor: I was a one-man team—acting as architect, engineer, and support. While I have a career background in IT, I would be relying heavily on documentation, community resources, and AI assistants to bridge specific skill gaps.
Defining the MVP: What 1.0 Actually Delivered
In product management, the Minimum Viable Product (MVP) is about finding the smallest version of a product that delivers actual value. 1.0 doesn't mean "feature complete"; it means "operational."
I had to make hard choices about what to include versus what to defer:
- INCLUDE: The Platform Layer – I prioritized a stable platform through the entire stack [bare-metal → hypervisor → virtual machines → containers → user services]. This meant scheduled automated backups, resource monitoring, and solid networking. Most users won't see these, but without them, the platform has no integrity.
- INCLUDE: Core AI Functionality – For the users, 1.0 meant functional local LLM inference (Tau) and image generation (Ceti). I wanted to extend this basic functionality as soon as practical without waiting on deeper customization.
- INCLUDE: Simple Cloud Functionality – Since I was already building out a storage VM (Vega) to support the services, I chose to implement cloud sync because it represented high value for relatively low effort in the MVP roadmap.
- DEFERRED: Customized Agentic AI Workflows – The front-end I chose has features that allow LLMs to take actions like web search and code execution. I deliberately chose to ship 1.0 without them until I gather data on how users interact with the system.
In short, shipping a stable core in a shorter timeframe was more important than chasing every interesting feature and delaying the release.
The Reality Check: Highs and Lows
Infrastructure rewards patience, but it also provides a unique set of frustrations.
- The Hard Parts: Building this meant "drinking from the firehose" of new technologies. There were humbling moments, like a house power outage that exposed an incomplete UPS configuration or consumer-grade networking gear that "forgot" settings at the worst possible times.
- The Easy Parts: Conversely, modern tooling is shockingly accessible. The logic of project sequencing—derived from years of professional experience in program management—meant that the phases of the build felt natural and structured.
From Idea to Operational System
When I started, I didn't have everything figured out. I knew I had some hardware, some time, and a vague sense of how to put it together. What I did have figured out was a sense of purpose.
I wanted to build something in order to learn. Tech changes fast, and the pace of that change continues to accelerate. Whether it was learning something genuinely new or deepening my understanding of a familiar stack, I wanted to learn by doing.
I also wanted to build something that would provide proof of domain mastery—the ability to translate complex architecture into tangible value. Product and program management is full of people who understand the "what" but not the "how." I didn't want to be that guy. I wanted something I could point to and say, "Hey, check out this thing I made!"
Crucially, I wanted to build something that would deliver actual value to me and my family. I didn't want this to be just a portfolio case study; I wanted it to be the infrastructure I use every day.
Full-Stack Ownership
I mentioned being a one-man team as a constraint, but it was also a unique opportunity for growth.
I have spent the last decade of my career in the "Product" and "Program" seats. While my roots are in engineering, the arc of my career had naturally moved me away from hands-on implementation. I found myself increasingly reliant on Subject Matter Experts (SMEs) to bridge my own technical gaps.
I didn’t like that.
Owning the full stack of Aeoncore forced me to balance competing perspectives in a way that theoretical work never could. I can no longer be a product manager with unrealistic expectations of the hardware, or an architect who chases "cool" tech at the expense of customer value. I certainly can't be an engineer who designs a system that looks great on paper but is an operational nightmare to support.
My North Star on this project was: "Enterprise-grade design and operation, implemented at a personal scale." By starting small enough to "do it all," I was able to finally recapture that feeling of true professional ownership.