
Thursday Aug 28, 2025
EP 21 — Sendbird's Yashvier Kosaraju on Creating Shared Responsibility Models for AI Data Security
Sendbird's AI agents take backend actions on behalf of customers while processing sensitive support data across multiple LLM providers. That meant building contractual frameworks that prevent customer data from training generic models while preserving the feedback loops needed for enterprise-grade AI performance.
CISO Yashvier Kosaraju walks Jean through their approach to securing agentic AI platforms that serve enterprise customers. Instead of treating AI security as a compliance checkbox, they've built verification pipelines that let customers see exactly what decisions the AI is making and adjust configurations in real time.
But the biggest operational win isn't replacing security analysts: it's eliminating query languages entirely. Natural language processing now lets incident responders ask direct questions like "show me when Yash logged into his laptop over the last 90 days" instead of learning vendor-specific syntax. This cuts incident response time while making it easier to onboard new team members and switch between security tools without retraining.
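To make the pattern concrete, here is a minimal sketch of the idea described above: an analyst's natural-language question is translated into a vendor-specific query by an LLM, so responders never hand-write the syntax themselves. This is not Sendbird's implementation; the prompt, field names, and the `call_llm` placeholder are all illustrative assumptions to be swapped for your own LLM client and query dialect.

```python
# Hypothetical sketch: natural-language question -> vendor query via an LLM.
# `call_llm` is a placeholder for whatever completion API you actually use.

QUERY_PROMPT = """You translate security questions into {dialect} queries.
Available fields: user, event_type, device, timestamp.
Return only the query, no explanation."""


def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder stub -- wire this to your LLM provider's completion API."""
    raise NotImplementedError


def natural_language_to_query(question: str, dialect: str = "SQL") -> str:
    """Turn an analyst's question into a runnable query string."""
    return call_llm(QUERY_PROMPT.format(dialect=dialect), question)


# Example: the question from the episode might come back as something like
#   SELECT timestamp, device FROM auth_events
#   WHERE user = 'yash' AND event_type = 'login'
#     AND timestamp >= NOW() - INTERVAL '90 days';
# natural_language_to_query(
#     "show me when Yash logged into his laptop over the last 90 days"
# )
```

Because the query dialect is just a parameter to the prompt, the same question can target different security tools, which is what removes the retraining cost when switching vendors.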
Topics discussed:
- Reframing zero trust as explicit and continuously verified trust rather than eliminating trust entirely from security architectures.
- Building contractual frameworks with LLM providers to prevent customer data from training generic models in enterprise AI deployments.
- Implementing verification pipelines and feedback loops that allow customers to review AI decisions and adjust agentic configurations.
- Using natural language processing to eliminate vendor-specific query languages during incident response and security investigations.
- Managing security culture across multicultural organizations through physical presence and collaborative problem-solving approaches rather than enforcement.
- Addressing shadow AI adoption by understanding underlying problems employees solve instead of punishing policy violations.
- Implementing shared responsibility models for AI data security across LLM providers, platform vendors, and enterprise customers.
- Prioritizing internal employee authentication and enterprise security basics in startup scaling patterns from zero to one hundred employees.