Intrinsic: Y Combinator-backed Company Crafting Infrastructure for Trust and Safety Operations

A few years back, Karine Mellata and Michael Lin crossed paths while working on Apple’s fraud engineering and algorithmic risk team. There, they focused on mitigating online abuse across Apple’s expanding user base, including spam, bot activity, account security, and developer fraud.

Despite their efforts to devise innovative models to combat evolving abuse patterns, Mellata and Lin sensed they were lagging behind, frequently rebuilding core trust and safety infrastructure components.

Per Mellata, “As regulations heightened scrutiny on teams to centralize trust and safety responses, we identified an opportunity to modernize this industry and foster a safer online environment for all. We envisioned a system capable of adapting as swiftly as the evolving abuse itself.”

This vision led Mellata and Lin to co-found Intrinsic, a startup that aims to give safety teams the tools they need to curb abusive behavior across digital platforms. Intrinsic recently secured $3.1 million in a seed funding round with participation from Urban Innovation Fund, Y Combinator, 645 Ventures, and Okta.

Intrinsic’s platform is built for moderating both user- and AI-generated content. It provides infrastructure that lets social media firms and e-commerce platforms detect policy-violating content and respond to it promptly, with an emphasis on seamless integration into existing safety products and on automating tasks such as banning users and flagging content for review.
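The automation described above can be sketched as a simple classify-then-route pipeline. This is a hypothetical illustration, not Intrinsic’s actual API: the function names, labels, and thresholds are all assumptions, and the keyword-based classifier stands in for a real moderation model.

```python
# Hypothetical sketch of an automated moderation pipeline: classify a piece
# of content against policy, then route it to an action (ban, flag for
# human review, or allow). Illustrative only, not Intrinsic's API.
from dataclasses import dataclass

@dataclass
class ModerationResult:
    label: str        # e.g. "weapons", "spam", or "ok"
    confidence: float  # model confidence in [0, 1]

def classify(text: str) -> ModerationResult:
    # Stand-in for a real moderation model; keyword rules for illustration.
    lowered = text.lower()
    if "brass knuckles" in lowered:
        return ModerationResult("weapons", 0.92)
    if "free money" in lowered:
        return ModerationResult("spam", 0.75)
    return ModerationResult("ok", 0.99)

def route(result: ModerationResult, ban_threshold: float = 0.9) -> str:
    # Automate high-confidence violations; queue uncertain ones for review.
    if result.label == "ok":
        return "allow"
    return "ban" if result.confidence >= ban_threshold else "flag_for_review"

print(route(classify("Brass knuckles for sale")))  # ban
print(route(classify("Click for free money!!!")))  # flag_for_review
```

In practice the threshold and actions would be tuned per policy and per platform; the key design point is that confident violations are handled automatically while borderline cases go to human reviewers.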

Mellata emphasized, “Intrinsic offers a fully customizable AI content moderation platform.” For instance, it helps publishing companies avoid legal liability by ensuring their marketing materials don’t offer financial advice, and it helps marketplaces identify and manage listings such as brass knuckles, which are legal in some regions but not others.

Intrinsic distinguishes itself from rivals such as Spectrum Labs, Azure, and Cinder through its explainability and broader toolset. Mellata highlighted the platform’s ability to explain its content moderation decisions, along with a suite of manual review and labeling tools that lets customers fine-tune moderation models on their own data.

Mellata acknowledged the limitations of conventional trust and safety solutions, asserting that Intrinsic’s adaptable platform addresses the needs of resource-constrained teams, aiming to reduce moderation costs without compromising safety standards.