
Tuesday Oct 14, 2025
EP 23 — IBM's Nic Chavez on Why Data Comes Before AI
When IBM acquired DataStax, it inherited an experiment that proved something remarkable about enterprise AI adoption. Project Catalyst gave everyone in the company, not just engineers, a budget to build whatever they wanted using AI coding assistants. Nic Chavez, CISO of Data & AI, explains why this matters for the 99% of enterprise AI projects currently stuck in pilot purgatory: the technical barriers to creating useful tools have collapsed.
As a member of the World Economic Forum's CISO reference group, Nic has visibility into how the world's largest organizations approach AI security. The unanimous concern is that employees are accidentally exfiltrating sensitive data into free LLMs faster than security teams can deploy internal alternatives. The winning strategy isn't blocking external AI tools, but deploying better internal options that employees actually want to use.
Topics discussed:
- Why less than 1% of enterprise AI projects move from pilot to production.
- How vendor-push versus customer-pull dynamics create misalignment with overall enterprise strategy.
- The emergence of accidental data exfiltration as the primary AI security risk when employees dump confidential information into free LLMs.
- How Project Catalyst democratized AI development by giving non-technical employees budgets to build with coding assistants, proving that the technical barrier to creating useful tools has dropped dramatically.
- The strategy of making enterprise AI "the cool house to hang out at" by deploying internal tools better than external options.
- Why the velocity gap between attackers and enterprises in AI deployment comes down to enterprise procurement cycles versus attackers' instant decisions, such as spinning up deepfakes on demand.
- How the Chatham House Rule at the World Economic Forum enables CISOs from the world's largest companies to freely exchange ideas about AI governance without attribution concerns.
- The role of LLM optimization in preventing a superintelligence trained on poisoned data by establishing data provenance verification.
- Why Anthropic's copyright settlement signals the end of the “ask forgiveness not permission” approach to training data sourcing.
- How edge intelligence versus cloud centralization decisions depend on data freshness requirements and whether streaming updates from vector databases can supplement local models.