Development Roadmap
Charting the Future of Decentralized AI – Our roadmap outlines key milestones in scaling compute, automation, and integrations to build a truly autonomous AI ecosystem.
The launch of our interstellar colonization ship did not go as planned. Due to political pressures we were forced to launch early with a skeleton crew, and even among that minimal crew we lost our chief infrastructure officer to an airlock repair incident before we had barely begun. Our initial warp drive ignition, scheduled for September, failed to start, and we lost an unknown quantity of the exotic matter that catalyzes the reaction. We've since been running ion engines at sub-light speeds to climb out of the gravity well of Sol, to a point where the warp drives may be able to start without having to overcome the strong local gravitational influence. Because of this initial failure we aren't going to make our initial transit window for additional supplies, so we need to recalibrate our plan around our current resources. Less determined and zealous men than us would probably swing by the gravity well of one of the outer planets and go home. But we plan to either succeed out here or die trying. So let's get busy.
Of most immediate note, we have to distribute all the responsibilities of our late chief infrastructure officer to the remaining crew. The highest priority among those is the maintenance of the various life support systems on board so we can survive the journey. We won't be colonizing anything if we die of starvation or asphyxiation en route. We'll be pulling at least one unlucky soul out of stasis and retraining someone like a climatologist or plumbing engineer on closed-loop agriculture and air filtration systems.
Aaron left some things, such as our network topology, in a mess, and we have to build an operations department from scratch. This is outside my skillset, and we can't afford a principal engineer, so we're currently hiring for a full-time ops resource in the vicinity of our data center and outsourcing one or two more for burst efforts on tasks that can be done remotely.
This engineer will be getting the remainder of our IP addresses online, hooking our second gateway box up to the internet to remove that single point of failure, integrating our Google credentials with the Kubernetes clusters, setting up network boot so we can remotely reset systems, fleshing out our Ansible playbooks for configuring those systems from the network-boot state, and so on. There's a lot of work up front, and a long tail we will need a permanent resource for.
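One small piece of that playbook work is keeping the inventory in sync with what actually netboots. As a sketch only, here is a tiny helper that renders an INI-style Ansible inventory from a host map; the group names, host names, and address are invented for illustration, not our real topology.

```python
# Hypothetical helper: render an INI-style Ansible inventory from a host map.
# Group names, host names, and the address below are illustrative only.

def render_inventory(groups, variables=None):
    """Build an INI-style Ansible inventory string.

    groups:    mapping of group name -> list of host names
    variables: optional mapping of host name -> {var: value} host variables
    """
    variables = variables or {}
    sections = []
    for group, hosts in groups.items():
        lines = [f"[{group}]"]
        for host in hosts:
            host_vars = variables.get(host, {})
            suffix = " ".join(f"{k}={v}" for k, v in host_vars.items())
            lines.append(f"{host} {suffix}".rstrip())
        sections.append("\n".join(lines))
    return "\n\n".join(sections) + "\n"

inventory = render_inventory(
    {"gateways": ["gw1", "gw2"], "gpu_nodes": ["node01", "node02"]},
    {"gw2": {"ansible_host": "203.0.113.2"}},
)
print(inventory)
```

A generator like this would let the netboot service and the playbooks share one source of truth for which machines exist and what roles they hold.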

Next up is improving the output of our reactor. The reactor loses efficiency at higher burn rates, so we're currently humming along below maximum power to conserve fuel. About half of the power generated by the ship is currently consumed by life support, stasis, comms, and sensors. This number isn't going to change much over the journey, so these are static costs that will be marginalized as we scale the power output of our reactors. Scaling them, however, will require more fuel, and the only place to get that fuel is by swinging by one of the local gas giants on the way out of our solar system. We'll scoop up some hydrogen, filter and integrate it into our existing stores, and then power up our exotic matter synthesizers for warp fuel.
Salary and ops costs currently eat a large portion of our budget, and the margins we get on brokers aren't particularly high. At current margins, about half our equipment will be devoted to just keeping the lights on, leaving the other half available for development and integration work.
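To make the "half our equipment keeping the lights on" point concrete, here is a toy breakeven calculation. Every figure in it (fixed costs, fleet size, rates) is invented purely to show the shape of the math, not our actual numbers.

```python
# Illustrative only: all dollar figures and fleet sizes below are made up to
# show the shape of the breakeven math, not our actual costs.

def static_cost_fraction(monthly_fixed_usd, cards, rate_per_cardhr, hours_per_month=730):
    """Fraction of fleet revenue consumed by fixed costs at a given card-hour rate."""
    revenue = cards * rate_per_cardhr * hours_per_month
    return monthly_fixed_usd / revenue

# At a low rate, the (hypothetical) fixed costs eat roughly half of revenue...
low = static_cost_fraction(monthly_fixed_usd=110_000, cards=200, rate_per_cardhr=1.50)
# ...while doubling the realized rate halves that fraction.
high = static_cost_fraction(monthly_fixed_usd=110_000, cards=200, rate_per_cardhr=3.00)
print(f"{low:.2f} -> {high:.2f}")
```

The structure is the point: fixed costs don't scale with the rate we earn per card-hour, so every improvement in that rate falls straight through to the half of the fleet doing development work.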
We're seeking integrations everywhere we can: RapidAPI, Subnet 19, Venice, Ritual, and more. At the software architecture level, the services running inference are totally isolated from the services that connect to the internet and manage authorization and billing, so we can sell inference for the same model through Venice, Ritual, and elsewhere. We want to get the card-hour (cardhr) rate above $3 to make the static costs of running the business relatively marginal.
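The isolation described above can be sketched in a few lines. This is an architectural toy, not our production code: the channel names, key scheme, and billing logic are hypothetical. The point it demonstrates is that inference workers never see credentials or channels; the internet-facing gateway owns authorization and metering, which is what lets one model serve many integrations.

```python
# Architectural sketch, not production code: channel names, keys, and the
# billing counter are hypothetical. Inference workers see only prompts.

class InferenceWorker:
    """Runs on the isolated network; knows nothing about auth or billing."""
    def run(self, model, prompt):
        return f"[{model}] completion for: {prompt}"

class Gateway:
    """Internet-facing; maps per-channel API keys to billing and forwards."""
    def __init__(self, worker, keys):
        self.worker = worker
        self.keys = keys          # api_key -> channel (venice, ritual, ...)
        self.usage = {}           # channel -> billed request count

    def handle(self, api_key, model, prompt):
        channel = self.keys.get(api_key)
        if channel is None:
            raise PermissionError("unknown key")
        self.usage[channel] = self.usage.get(channel, 0) + 1
        return self.worker.run(model, prompt)

gw = Gateway(InferenceWorker(), {"k1": "venice", "k2": "ritual"})
gw.handle("k1", "llama-70b", "hello")
gw.handle("k2", "llama-70b", "hello")
print(gw.usage)
```

Adding a new integration means adding keys and billing rules at the gateway; the inference side is untouched.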
Between the time we pull this off and when we escape the gravity well of Sol, we need to have the warp engines serviced after the last failure. Once we are going FTL we can at least be certain we're going to arrive at Tau Ceti. However, there will be no more scooping up resources from our solar system, and of course we'll be dealing with the effects of time dilation. To us, we'll be there in the relative blink of an eye.
Work on the Intelligent Compute Fabric is ongoing. We have built simple automation hooks that can take a desired allocation of cards and update the Helm deployments to match it. The next step is to feed live revenue numbers into the multi-armed bandit service, which will output the desired allocation of configurations for us. This just hasn't been the leading priority, given that we can deploy manually right now and our most basic task is growing revenue.
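For readers unfamiliar with the multi-armed bandit framing, here is a minimal epsilon-greedy sketch over card configurations. The configuration names and revenue rates are invented; the real service would feed observed revenue per card-hour back in and hand the winning allocation to the Helm automation hooks.

```python
# Minimal epsilon-greedy bandit over card configurations. Names and rates
# are invented for illustration; live revenue would replace the simulator.
import random

class ConfigBandit:
    def __init__(self, configs, epsilon=0.1):
        self.epsilon = epsilon
        self.revenue = {c: 0.0 for c in configs}  # total observed revenue
        self.pulls = {c: 0 for c in configs}      # times each config was tried

    def average(self, config):
        return self.revenue[config] / max(self.pulls[config], 1)

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.revenue))  # explore
        return max(self.revenue, key=self.average)    # exploit best average

    def observe(self, config, revenue_per_cardhr):
        self.pulls[config] += 1
        self.revenue[config] += revenue_per_cardhr

random.seed(0)
bandit = ConfigBandit(["8x-llama70b", "4x-mixtral", "sdxl-burst"])
# Simulated feedback: one config consistently earns more per card-hour.
true_rates = {"8x-llama70b": 3.2, "4x-mixtral": 2.1, "sdxl-burst": 1.4}
for _ in range(500):
    cfg = bandit.choose()
    bandit.observe(cfg, random.gauss(true_rates[cfg], 0.3))
best = max(bandit.revenue, key=bandit.average)
print(best)
```

The occasional exploration keeps the fleet sampling alternative configurations, so the allocation can track demand as it shifts rather than locking in yesterday's winner.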
Work on the CPS system is at the design/proof-of-concept stage. NVIDIA has sanity-checked the plan, so I doubt we'll hit any foreseeable hard walls, and we have a proof-of-concept driver overlay, though it is still a prototype. We are still looking for a partner who is willing to pay us a fair rate per cardhr for training time and serve as a guinea pig.
We may or may not develop a training-as-a-service interface where people can upload a model, data to train it with, and a bid rate, and we can plug that into our ICF and CPS systems to interleave the training with our inference servers. We're discussing this with other projects who might build it so we don't have to.
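The bid-rate mechanic above reduces to a simple rule: a training job only gets cards when its bid beats what those cards would otherwise earn from inference. As a hedged sketch (job names, bids, and the inference floor are all invented), a greedy allocator might look like this:

```python
# Hypothetical sketch of bid-based interleaving. Job names, bids, card counts,
# and the inference rate floor are invented; only the allocation rule matters.

def allocate(training_jobs, idle_cards, inference_rate_per_cardhr):
    """Greedily assign idle cards to the highest-bidding jobs above the floor.

    training_jobs: mapping of job name -> (bid_per_cardhr, cards_wanted)
    Returns a mapping of job name -> cards granted.
    """
    assigned = {}
    for job, (bid, cards_wanted) in sorted(
        training_jobs.items(), key=lambda kv: -kv[1][0]
    ):
        if bid < inference_rate_per_cardhr or idle_cards == 0:
            continue  # bid below the inference floor, or nothing left to give
        grant = min(cards_wanted, idle_cards)
        assigned[job] = grant
        idle_cards -= grant
    return assigned

jobs = {"finetune-a": (3.50, 16), "finetune-b": (2.00, 8), "finetune-c": (2.80, 4)}
grants = allocate(jobs, idle_cards=18, inference_rate_per_cardhr=2.50)
print(grants)
```

With the ICF tracking the live inference rate, the same floor that protects inference revenue doubles as the reserve price for training time.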
Once we pull all of this together, I think we will be able to comfortably reach into the $3-5 per cardhr range.
Upon arrival we'll know more precisely the resources at our disposal, and it will be time to begin drafting the full execution plan for colonization. We will of course begin by taking more thorough scans of our new solar system. Much will depend on the available resources our destination planet provides. All we know from the transit scans is the basic size relative to the star and some facts about atmospheric composition. A few minutes after exiting warp we'll discover facts such as how warm the planet is, whether there is available water, atmospheric density, etc.
Once we're at least in a steady, guaranteed-to-grow state, our focus can shift to some of our longer-term initiatives. Chief among these is financing more hardware. Demonstrating both solvency and a competitive margin will allow us to finance much more hardware and expand to multiple legal jurisdictions.
Of course, at some point we need to finish the whitepaper. Most of the details around the flywheel sell walls are already public. However, we need to add details on how the backing of the token is affected by heterogeneous hardware with different financing costs.
In addition, we'll want to publish the plans for ICF and CPS development that have already been introduced to the community.
Once we arrive at the planet we'll be able to answer still more questions by deploying reconnaissance drones. For example, we know almost nothing about the mineral composition of potential landing sites until we can do some prospecting. We may at this point discover gaps in the available resources that require creating an interplanetary logistics system for resources like rare earth metals or ice from asteroids in the belt.
Of course, we are also part of a larger DeAI ecosystem and our development roadmap may shift as we secure partnerships. We are continually networking and proposing collaboration opportunities with our peers. I will personally be writing a DeAI blog post to capture a roadmap I have discussed with our peers and on podcasts with guests like David Johnston. I am trying to coordinate various teams within the ecosystem to develop the integrations that CETI AI will need to succeed long term.
In principle these consist of:
- A permissionless system for model training including a proof of training solution.
- A system for collecting inference revenue from fine-tuned models.
- A tokenization system for model ownership.
- Governance systems for curating the data that models are trained on.
- Service discovery for AI agents.
The goal of these combined systems is to enable community-built and specialist models to scale our agentic economy to a million models. CETI AI is positioned to capture an outsized portion of this training revenue due to the nature of our hardware and its InfiniBand connectivity.
Despite the rocky start, I'll remind everyone back home that this is the first ever attempt at interstellar colonization. We have to blaze every trail. We are going to face problems on the way that were impossible to plan for at the outset. However, we are pioneers. In 1492 the headlines would have been filled with the fall of Granada to Spanish forces, the succession of Pope Alexander VI, and the establishment of a grain corridor from China. In hindsight, the only event we remember is Columbus discovering America. Our launch may well be the only event of this year remembered in the next 500 years by the civilization at Tau Ceti.