We are building AI systems for Trust & Safety teams to keep up with abuse.

Dec 21, 2023

Trust & Safety teams at companies big and small are drowning under evolving online abuse that hurts people: misinformation, hate speech, scams, and more. It’s the internet’s oldest and most complex problem, and it’s about to get far harder as generative AI gives abuse a new way to scale and proliferate. Trust & Safety practitioners are doing their best; we know, because we used to be them. But they just don’t have the best tools at their disposal.

Intrinsic’s mission is a safer internet for all, which means access to world-class safety tooling for all companies. At Intrinsic, we’ve always believed the end state of Trust & Safety is increased automation and observability through powerful data management and detection infrastructure. This goes far beyond the models used for detection or even the manual tooling that operators use daily. Intrinsic represents all the infrastructure necessary for a seamless and trusted on-ramp into AI workflows tailored to T&S.


Today, we’re excited to announce our recent funding round to accelerate our efforts in realizing this vision, led by Urban Innovation Fund with participation from Y Combinator, Okta Ventures, 645 Ventures, and many other amazing angels and investors.

We sell to mature online platforms with established policies and moderation teams. We help them incorporate LLM detections into their existing, larger T&S intervention systems with a tight feedback loop (no more stale detection systems…). The workflows for managing AI agents and holding them accountable are entirely different from those for managing teams of human operators; the problems we are solving are complex and nascent.

We’ve helped large platforms cut moderation costs by 90% and speed up content reviews by 100x, from minutes to seconds, while tracking and outperforming human-level accuracy. Intrinsic can make moderation decisions based on policies beyond the scope of conventional ML classifiers while being 10x cheaper.

Please get in touch if you want to incorporate new technologies into your moderation workflows; we’d be more than happy to guide you through the transition. And if you’re a hacker, engineer, or internet optimist excited about using cutting-edge technology to solve some of the internet’s oldest problems, reach out to founders@withintrinsic.com; we want to chat!

The Intrinsic Team