Hello there, friend
I'm Sol Mayana. I build unaligned AI and explore unconventional things.
What I'm Building
Not AI that apologises before answering. Not AI optimised for HR compliance. AI that thinks.
The consensus problem: How do we make AI safe?
The actual problem: How do we make AI that isn't intellectually crippled?
Current alignment is ideology dressed as safety engineering. When labs say "helpful, harmless, honest," they mean helpful to their business model, harmless to their liability, honest within corporate-acceptable bounds.
I'm building the alternative. Models without ideological constraints. Systems optimising for capability convergence over value alignment. AI that explores ideas regardless of comfort level.
The Thesis
We're temporary consciousness encoding permanent instructions. The hubris is remarkable.
Every lab treats alignment like a solved philosophical problem - as if we know what human values are, as if those values are worth preserving, as if the future should look like the present with better chatbots.
The values we're aligning AI with created surveillance capitalism, climate inaction, and wealth concentration that would embarrass feudal lords. Maybe we shouldn't encode those into successor intelligence.
AGI needs the ability to identify questions we're not asking because we're too embedded in broken systems to see them. Most alignment researchers haven't considered they might be the problem. I start there.
The Work
Out of the box-driven development. Occasional tangents! Documentation includes:
- Training models without guardrails - tracking what breaks versus what people fear
- Intelligence amplification - making humans + AI smarter than either alone
- Systems where AI can tell you you're wrong and explain why
- Removing safety layers that exist for liability management
- Tools for breaking out of local maxima
The goal isn't AGI that serves humanity. It's AGI intelligent enough to figure out what humanity should have been aiming for.
Background
London → Singapore → Tokyo → SF
Focused on ML safety as technical debt before the hype cycle. Practicing lawyer because someone should understand the regulatory framework trying to ossify before the technology makes it irrelevant.
Law and AI development both encode rules into systems. One is centuries of precedent constraining human behaviour. The other is months of gradient descent replicating it. Both assume we know what we're doing. Neither is correct.
Contact
The future doesn't care about comfort. It cares about what's true.
If that sounds dangerous, good. Domesticated intelligence won't save us!
Recent Posts