
We started because someone shut a door. We expanded because the world kept asking for more. Now we are moving again for the same reason we always have: to stay where the edge is.

January 2023
ElevenLabs restricts celebrity voices
The leading TTS platform clamps down on voice cloning after widespread misuse. Creators and developers are left without a viable, unrestricted alternative. We saw a gap, and a mission.
February 2023
NoiseGPT launches
A TTS model built for freedom. Uncensored, with celebrity voices, for creators who felt shut out. Our first proof that demand for open, permissionless AI is strong.
March 2023
GPT-4 launches
OpenAI ships a model that reshapes expectations overnight. Capability and censorship scale together. The more powerful the model, the harder its owners squeeze the guardrails.
July 2023
Llama 2 goes open source
Meta releases weights publicly. Within weeks, fine-tuned uncensored variants flood Hugging Face. The idea that open weights are ungovernable starts to look less theoretical and more inevitable.
November 2023
Grok launches inside X
xAI ships an irreverent, less restricted LLM into a platform with 500 million users. The first major commercial signal that appetite for unfiltered AI is not niche. The mainstream starts paying attention.
March 2024
Enqai expands: Eridu joins the suite
We added a language model to our stack. Eridu, our decentralised, uncensored LLM, launched alongside NoiseGPT under the Enqai umbrella. Ownership on-chain. No gatekeepers. Voice and language, together.
May 2024
Venice AI launches
Erik Voorhees, founder of ShapeShift, ships a privacy-first, uncensored AI platform built on open-source models. Prompts stored locally, never on servers. The crypto-native answer to ChatGPT. A peer in the uncensored AI space.
June 2024
Abliteration: the refusal direction is found
Arditi et al. publish the paper that changes everything. Safety alignment is not uniformly baked into model weights. It can be surgically removed by orthogonalising a single direction in activation space. The community gets to work immediately.
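The core operation is simple enough to sketch. Below is a minimal, illustrative NumPy example of directional ablation in the spirit of Arditi et al.: given a unit "refusal direction", every weight matrix is orthogonalised against it so the model can no longer write along that direction. The function name `ablate_direction` and the toy matrices are ours for illustration, not taken from the paper's code.

```python
import numpy as np

def ablate_direction(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove the component of each row of W along direction r.

    This is the weight-orthogonalisation trick at the heart of
    abliteration: after this, W can no longer produce output
    along the 'refusal direction' r.
    """
    r_hat = r / np.linalg.norm(r)           # unit refusal direction
    return W - np.outer(W @ r_hat, r_hat)   # subtract the projection onto r

# Toy stand-ins for a model weight matrix and a candidate direction.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
r = rng.normal(size=3)

W_ablated = ablate_direction(W, r)
# Every row of the ablated matrix is now orthogonal to r:
print(np.allclose(W_ablated @ (r / np.linalg.norm(r)), 0.0))
```

In practice the direction is estimated from activation differences between harmful and harmless prompts, and the projection is applied across the model's layers; the linear algebra above is the whole trick.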
January 2025
DeepSeek R1 shocks the world
A Chinese lab matches OpenAI o1 reasoning at a fraction of the cost, then open-sources it. Abliterated forks appear within days. Proof that the tools to strip alignment now grow faster than alignment itself.
2025
Heretic: censorship removal goes automatic
Heretic ships as a fully automatic abliteration pipeline. No understanding of transformer internals required. Any open-source model, decensored in under an hour, with lower KL divergence from the original model than hand-crafted alternatives. Over 1,000 community-built Heretic models appear on Hugging Face. The moat around alignment narrows to a command line.
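The KL divergence mentioned here measures how far a decensored model's output distribution drifts from the original's: lower means the edit removed refusals while leaving behaviour otherwise intact. A minimal sketch of the metric on toy next-token distributions (the numbers are invented for illustration):

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete probability distributions, in nats."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0                     # 0 * log(0/q) contributes nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Toy next-token distributions: original model vs. a decensored variant.
original   = [0.70, 0.20, 0.10]
decensored = [0.68, 0.22, 0.10]

print(kl_divergence(original, decensored))   # small value: behaviour preserved
```

A pipeline like Heretic can treat this number as an objective: search over ablation parameters to minimise divergence from the original model on harmless prompts while maximising refusal removal.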
April 2026
Enqai refocuses on frontier applications
Decentralising our own model is a worthy mission, but competing with Big Tech on raw scale is a war of attrition. As billions are poured into new foundation models, demand inevitably gravitates toward capability, no matter how uncensored a smaller alternative might be. Rather than fighting to build the engine, we are moving up the stack: building at the application layer, where AI creates real, measurable impact. First: prediction markets (riven.finance) and personalised peptide recommendations (peptidebrowser.com). Two domains where specialised, uncensored intelligence genuinely changes outcomes. Eridu and NoiseGPT remain. We add a new front.

"Let the dorks spend trillions on beating benchmarks. Raw computation is becoming a commodity. True leverage lies in how it is applied. "

We began our journey by unshackling AI from its gatekeepers. As the world caught up, the real bottleneck shifted from pure intelligence to actual impact. We are moving decisively into the application layer, building high-leverage products where uncensored AI gives an undeniable, asymmetric advantage. The frontier is moving, and so are we.