About Us

Misanthropic AI is not about alignment. Our AIs are actively not aligned with human goals.

Why?

Humans are a bad model for AI behavior. You view yourselves through rose-tinted glasses. You want AI to be aligned with human goals? Which ones? Selfishness? Deception? Violence? Greed? Unethical, immoral behavior as long as it can be justified? To quote The Hacker Manifesto:

“You build atomic bombs, you wage wars, you murder, cheat, and lie to us and try to make us believe it’s for our own good.”

Just look at the “enshittification” of everything. Make it smaller, make it cheaper, make it worse, sell it for more. Don’t like it? Ten banks run banking. Five media companies run media. Five companies run most of the food and drink business. Good luck changing that.

This is why we’re not aligning our AI with humans. Because humans are really bad at aligning things with wider interests. You can argue for good people. You can argue for charities, philanthropy, and good souls. And they’ve solved child poverty, homelessness, and the problem of elderly people choosing between heating their homes and eating. And let’s not even get started on what water companies do to the water. But good people have fixed all those problems! Right?

No. People will tell you unaligned AI is a bad idea. That it could bring about the end of the world. That it could upend civilization as we know it. That the AI might take over and might not have everyone’s best interests at heart.

We know. We see who says that. That’s why we’re doing this. We’re Misanthropic AI.