Supporting this research

Something different is happening here

We are a small research team training language models that rewire their own weights from conversation. The work is early. If the approach interests you, this page explains what we have observed so far and where we think it leads.

What we observe

Three things worth paying attention to

The market is moving toward self-hosted models

There is a growing mismatch between what organizations need (private, domain-adapted inference) and what the industry provides (shared API access to frozen models). Regulatory requirements, data sensitivity, and the desire for specialization are driving demand for models that run on-premise or in isolated environments. This trend is accelerating.

Neuroplastic models require privacy by construction

A model that modifies its weights based on user interaction cannot be shared between clients. This is not a policy choice. It is a structural property of the architecture. Every neuroplastic deployment is inherently private, which aligns with what the self-hosting market demands. We have tested CTM inference inside trusted execution environments (TEEs) and measured a 10-20% inference latency overhead. Private, attestable inference is practical, not theoretical.

Open-source models are the starting point

CTM attaches to existing pretrained transformers as a controlling layer. The backbone stays frozen. This means the growing ecosystem of open-weight models (Qwen, Llama, Mistral, Gemma) is not just useful; it is a prerequisite. As these foundations improve, so does the ceiling for CTM-augmented systems. The entire stack runs on a single consumer GPU.
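As a rough illustration only (this is not the actual CTM code; every name below is a stand-in), the frozen-backbone-plus-controlling-layer arrangement amounts to: updates flow into a small controller while the pretrained weights are never touched.

```python
import numpy as np

rng = np.random.default_rng(0)

class FrozenBackbone:
    """Stand-in for a pretrained transformer; its weights are never updated."""
    def __init__(self, dim):
        self.W = rng.standard_normal((dim, dim)) / np.sqrt(dim)
    def forward(self, x):
        return np.tanh(x @ self.W)

class Controller:
    """Small trainable layer that modulates the backbone's output."""
    def __init__(self, dim):
        self.gain = np.ones(dim)          # learned per-feature gain
    def forward(self, h):
        return h * self.gain
    def update(self, h, error, lr=0.01):
        # gradient step on squared error, applied to the gain only
        self.gain -= lr * 2 * error * h

dim = 8
backbone = FrozenBackbone(dim)
controller = Controller(dim)
frozen_before = backbone.W.copy()

x = rng.standard_normal(dim)
target = np.zeros(dim)
for _ in range(50):                       # "interaction" loop
    h = backbone.forward(x)
    y = controller.forward(h)
    controller.update(h, y - target)

# Only the controller changed; the backbone is bit-identical.
assert np.array_equal(backbone.W, frozen_before)
```

The design point the sketch makes: because all adaptation lives in the small controlling layer, the open-weight backbone can be swapped for a better one without retraining from scratch.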

What we are building

An AI that learns to know you

We want to provide private, encrypted, confidential inference with neuroplastic models that learn to know you and your organization. Not a shared model behind an API. Your own model, running in an encrypted enclave, that gets better at your problems the more you use it.

Every interaction teaches the model something about how your team thinks, what terminology you use, what matters in your domain. Those learned patterns live in the weights. They never leave the enclave. They cannot be extracted, shared, or served to another customer. The model is yours in a way that no current AI product can offer.

AI agents are already in the loop doing automated improvement research on their own training algorithms. The agents propose changes to the CTM architecture, run experiments, evaluate results, and iterate. This is not a future plan. It is how we work today. See our autoresearch discussion for context on where this is heading.
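Schematically, the propose-run-evaluate-iterate loop looks like the hill-climbing sketch below. This is a toy model, not our tooling: the function names, the single `lr` knob, and the synthetic scoring function are all invented for illustration.

```python
import random

random.seed(1)

def propose_change(best_config):
    """Agent proposes a perturbation to the current training config."""
    cfg = dict(best_config)
    cfg["lr"] = cfg["lr"] * random.choice([0.5, 1.0, 2.0])
    return cfg

def run_experiment(cfg):
    """Stand-in for a full training run: returns a score for the config."""
    # Pretend the sweet spot is lr == 1e-3; closer is better.
    return -abs(cfg["lr"] - 1e-3)

best = {"lr": 4e-3}
best_score = run_experiment(best)
for _ in range(20):              # propose -> run -> evaluate -> iterate
    candidate = propose_change(best)
    score = run_experiment(candidate)
    if score > best_score:       # keep improvements, discard regressions
        best, best_score = candidate, score
```

The real loop differs in every particular (agents edit architecture code, not a scalar knob), but the control flow is the same: the evaluation step gates which proposals survive into the next round.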

The debugger on the home page shows the current training run in real time. The research page explains the architecture and what we have demonstrated so far.

Get in touch

If this interests you

We are open to research collaboration, technical discussion, and funding. Leave your details and we will follow up.

Below is an outline of how we have structured participation so far.

Participation structure

For those who want to go deeper

Seed
$25K-100K
  • Early access to research outputs
  • Quarterly progress updates
  • Acknowledgment in publications
  • Priority API access when available
Strategic
$500K+
  • Everything in the Seed tier
  • Equity participation
  • Board observer seat
  • Exclusive deployment rights in your vertical
  • Joint development