We are a small research team training language models that rewire their own weights from conversation. The work is early. If the approach interests you, this page explains what we have observed so far and where we think it leads.
There is a growing mismatch between what organizations need (private, domain-adapted inference) and what the industry provides (shared API access to frozen models). Regulatory requirements, data sensitivity, and the desire for specialization are driving demand for models that run on-premises or in isolated environments, and that demand is accelerating.
A model that modifies its weights based on user interaction cannot be shared between clients. This is not a policy choice. It is a structural property of the architecture. Every neuroplastic deployment is inherently private, which aligns with what the self-hosting market demands. We have run CTM inference inside trusted execution environments (TEEs) and measured a 10-20% overhead in inference latency. Private, attestable inference is practical, not theoretical.
CTM attaches to existing pretrained transformers as a controlling layer. The backbone stays frozen. This means the growing ecosystem of open-weight models (Qwen, Llama, Mistral, Gemma) is not just useful; it is a prerequisite. As these foundations improve, so does the ceiling for CTM-augmented systems. The entire stack runs on a single consumer GPU.
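The frozen-backbone pattern described above can be illustrated with a minimal sketch. Everything here is a stand-in: the backbone is reduced to one fixed matrix, the "controller" to a gating vector, and the CTM layer itself is not specified on this page, so no name or shape below should be read as the actual architecture. The sketch only demonstrates the structural point: gradients flow into the controlling layer while the pretrained weights never change.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the frozen-backbone / trainable-controller
# pattern; the real CTM layer is not described here.
D = 8
W_backbone = rng.normal(size=(D, D))   # frozen pretrained weights
w_ctrl = np.zeros(D)                   # the only trainable parameters

def forward(x):
    h = np.tanh(W_backbone @ x)        # frozen backbone features
    return h * (1.0 + w_ctrl)          # controller modulates the stream

def train_step(x, target, lr=0.1):
    """Gradient step on the controller only; the backbone never updates."""
    global w_ctrl
    h = np.tanh(W_backbone @ x)
    err = h * (1.0 + w_ctrl) - target
    w_ctrl -= lr * 2.0 * err * h       # dL/dw_ctrl for squared error

x = rng.normal(size=D)
target = rng.normal(size=D)
backbone_before = W_backbone.copy()
loss_before = float(np.sum((forward(x) - target) ** 2))
for _ in range(50):
    train_step(x, target)
loss_after = float(np.sum((forward(x) - target) ** 2))
```

Because only the small controller trains, the memory and compute budget stays close to inference cost for the frozen model, which is what makes a single consumer GPU plausible for the whole stack.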
We want to provide private, encrypted, confidential inference with neuroplastic models that come to know you and your organization. Not a shared model behind an API. Your own model, running in an encrypted enclave, that gets better at your problems the more you use it.
Every interaction teaches the model something about how your team thinks, what terminology you use, what matters in your domain. Those learned patterns live in the weights. They never leave the enclave. They cannot be extracted, shared, or served to another customer. The model is yours in a way that no current AI product can offer.
We already run a loop in which AI agents do automated improvement research on their own training algorithms. The agents propose changes to the CTM architecture, run experiments, evaluate results, and iterate. This is not a future plan. It is how we work today. See our autoresearch discussion for context on where this is heading.
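The propose, run, evaluate, iterate cycle can be sketched as a minimal accept-if-better harness. Everything in this sketch is illustrative: in the actual loop the agents edit training code rather than a single hyperparameter, and the scoring function is a real experiment, not a toy objective.

```python
# Minimal sketch of an automated-improvement loop: propose a change,
# run an experiment, keep the change only if the score improves.
# The scripted proposals and toy objective are illustrative stand-ins.

def run_experiment(config):
    # Stand-in for a real training run; the score peaks at lr = 1e-3.
    return -abs(config["lr"] - 1e-3)

best = {"lr": 1e-2}
best_score = run_experiment(best)

for multiplier in [0.5, 2.0, 0.5, 0.5, 0.5]:    # scripted "proposals"
    candidate = {**best, "lr": best["lr"] * multiplier}
    score = run_experiment(candidate)
    if score > best_score:                       # accept only improvements
        best, best_score = candidate, score
```

The accept-only-improvements rule is what makes the loop safe to automate: a bad proposal costs one experiment and is then discarded.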
The debugger on the home page shows the current training run in real time. The research page explains the architecture and what we have demonstrated so far.
We are open to research collaboration, technical discussion, and funding. Leave your details and we will follow up.
Below is an outline of how we have structured participation so far.