Customer churn is the silent killer of SaaS growth. By the time you notice a customer has churned, it's already too late. The key to reducing churn is predicting it before it happens—giving your customer success team time to intervene and save at-risk accounts.
What Zendesk provides: support tickets, customer satisfaction scores, agent interactions, and response time metrics
What Airbook provides: data integration, predictive modeling, real-time monitoring, and automated alerts without engineering work
First, connect Zendesk and your product analytics tool to Airbook:
Build tables that connect customer identities across all systems:
```sql
-- Customer Master Table
CREATE TABLE customer_master AS
SELECT DISTINCT
    c.customer_id,
    c.email,
    c.company_name,
    c.subscription_tier,
    c.mrr,
    c.signup_date,
    z.organization_id AS zendesk_org_id,
    p.user_id AS product_user_id
FROM customers c
LEFT JOIN zendesk_organizations z ON c.email = z.external_id
LEFT JOIN product_users p ON c.email = p.email;
```
Configure daily data syncs to keep your churn prediction model current with fresh data from all sources.
Create predictive models that combine support and usage patterns to identify at-risk customers before they churn.
Successful churn prediction requires combining behavioral data from multiple touchpoints. Here's how to structure and unify your data for maximum predictive power.
These behavioral patterns and support interactions are strong predictors of customer churn. Monitor these indicators to identify at-risk accounts early.
Identify customers with increasing ticket complexity and escalation rates:
```sql
-- Ticket Escalation Risk Analysis
WITH ticket_metrics AS (
    SELECT
        cm.customer_id,
        cm.company_name,
        COUNT(*) AS total_tickets,
        COUNT(CASE WHEN t.priority = 'urgent' THEN 1 END) AS urgent_tickets,
        COUNT(CASE WHEN t.status = 'escalated' THEN 1 END) AS escalated_tickets,
        AVG(t.satisfaction_rating) AS avg_csat,
        AVG(EXTRACT(EPOCH FROM (t.solved_at - t.created_at)) / 3600) AS avg_resolution_hours
    FROM customer_master cm
    JOIN zendesk_tickets t ON cm.zendesk_org_id = t.organization_id
    WHERE t.created_at >= CURRENT_DATE - INTERVAL '60 days'
    GROUP BY cm.customer_id, cm.company_name
),
risk_scores AS (
    SELECT *,
        CASE
            WHEN escalated_tickets >= 3 OR avg_csat < 2 THEN 'Critical'
            WHEN urgent_tickets >= 5 OR avg_csat < 3 THEN 'High'
            WHEN total_tickets > 10 OR avg_resolution_hours > 48 THEN 'Medium'
            ELSE 'Low'
        END AS support_risk_level,
        (escalated_tickets * 20 + urgent_tickets * 10 +
         CASE WHEN avg_csat < 2 THEN 40 WHEN avg_csat < 3 THEN 20 ELSE 0 END +
         CASE WHEN avg_resolution_hours > 48 THEN 15 ELSE 0 END) AS support_risk_score
    FROM ticket_metrics
)
SELECT
    customer_id,
    company_name,
    total_tickets,
    escalated_tickets,
    avg_csat,
    support_risk_level,
    support_risk_score
FROM risk_scores
WHERE support_risk_level IN ('Critical', 'High')
ORDER BY support_risk_score DESC;
```
Track customers showing significant drops in product engagement:
```sql
-- Usage Decline Analysis
WITH usage_trends AS (
    SELECT
        cm.customer_id,
        cm.company_name,
        DATE_TRUNC('week', pe.event_date) AS week,
        COUNT(DISTINCT pe.session_id) AS weekly_sessions,
        COUNT(*) AS weekly_events,
        COUNT(DISTINCT DATE(pe.event_date)) AS active_days,
        AVG(pe.session_duration_minutes) AS avg_session_duration
    FROM customer_master cm
    JOIN product_events pe ON cm.product_user_id = pe.user_id
    WHERE pe.event_date >= CURRENT_DATE - INTERVAL '12 weeks'
    GROUP BY cm.customer_id, cm.company_name, DATE_TRUNC('week', pe.event_date)
),
baseline_vs_recent AS (
    SELECT
        customer_id,
        company_name,
        -- Baseline (weeks 5-12)
        AVG(CASE WHEN week <= CURRENT_DATE - INTERVAL '4 weeks'
                 THEN weekly_sessions END) AS baseline_sessions,
        AVG(CASE WHEN week <= CURRENT_DATE - INTERVAL '4 weeks'
                 THEN weekly_events END) AS baseline_events,
        -- Recent (last 4 weeks); COALESCE to 0 so customers with no
        -- recent activity are flagged instead of dropped by NULL comparisons
        COALESCE(AVG(CASE WHEN week > CURRENT_DATE - INTERVAL '4 weeks'
                          THEN weekly_sessions END), 0) AS recent_sessions,
        COALESCE(AVG(CASE WHEN week > CURRENT_DATE - INTERVAL '4 weeks'
                          THEN weekly_events END), 0) AS recent_events
    FROM usage_trends
    GROUP BY customer_id, company_name
    HAVING COUNT(*) >= 8 -- Ensure sufficient data
)
SELECT
    customer_id,
    company_name,
    baseline_sessions,
    recent_sessions,
    ROUND((recent_sessions - baseline_sessions) / NULLIF(baseline_sessions, 0) * 100, 1) AS session_change_pct,
    ROUND((recent_events - baseline_events) / NULLIF(baseline_events, 0) * 100, 1) AS event_change_pct,
    CASE
        WHEN recent_sessions < baseline_sessions * 0.3 THEN 'Critical'
        WHEN recent_sessions < baseline_sessions * 0.6 THEN 'High'
        WHEN recent_sessions < baseline_sessions * 0.8 THEN 'Medium'
        ELSE 'Low'
    END AS usage_risk_level
FROM baseline_vs_recent
WHERE recent_sessions < baseline_sessions * 0.8
ORDER BY session_change_pct ASC;
```
Comprehensive risk assessment combining support and usage patterns:
```sql
-- Combined Churn Risk Assessment
WITH support_risk AS (
    SELECT
        cm.customer_id,
        COUNT(CASE WHEN t.priority = 'urgent' THEN 1 END) * 10 +
        COUNT(CASE WHEN t.status = 'escalated' THEN 1 END) * 20 +
        CASE WHEN AVG(t.satisfaction_rating) < 2 THEN 40
             WHEN AVG(t.satisfaction_rating) < 3 THEN 20 ELSE 0 END AS support_score
    FROM customer_master cm
    LEFT JOIN zendesk_tickets t ON cm.zendesk_org_id = t.organization_id
        AND t.created_at >= CURRENT_DATE - INTERVAL '60 days'
    GROUP BY cm.customer_id
),
usage_risk AS (
    SELECT
        cm.customer_id,
        CASE
            WHEN COUNT(pe.event_id) = 0 THEN 60
            WHEN COUNT(DISTINCT DATE(pe.event_date)) < 5 THEN 40
            WHEN AVG(pe.session_duration_minutes) < 2 THEN 30
            WHEN COUNT(pe.event_id) < 50 THEN 20
            ELSE 0
        END AS usage_score
    FROM customer_master cm
    LEFT JOIN product_events pe ON cm.product_user_id = pe.user_id
        AND pe.event_date >= CURRENT_DATE - INTERVAL '30 days'
    GROUP BY cm.customer_id
),
account_risk AS (
    SELECT
        cm.customer_id,
        CASE
            WHEN cm.mrr < 100 THEN 15
            -- date subtraction yields a day count in Postgres
            WHEN CURRENT_DATE - cm.signup_date < 90 THEN 10
            ELSE 0
        END AS account_score
    FROM customer_master cm
)
SELECT
    cm.customer_id,
    cm.company_name,
    cm.mrr,
    cm.subscription_tier,
    sr.support_score,
    ur.usage_score,
    ar.account_score,
    (sr.support_score + ur.usage_score + ar.account_score) AS total_risk_score,
    CASE
        WHEN (sr.support_score + ur.usage_score + ar.account_score) >= 80 THEN 'Critical'
        WHEN (sr.support_score + ur.usage_score + ar.account_score) >= 50 THEN 'High'
        WHEN (sr.support_score + ur.usage_score + ar.account_score) >= 25 THEN 'Medium'
        ELSE 'Low'
    END AS risk_category,
    CASE
        WHEN (sr.support_score + ur.usage_score + ar.account_score) >= 80 THEN 'Immediate intervention required'
        WHEN (sr.support_score + ur.usage_score + ar.account_score) >= 50 THEN 'Proactive outreach recommended'
        WHEN (sr.support_score + ur.usage_score + ar.account_score) >= 25 THEN 'Monitor closely'
        ELSE 'Healthy account'
    END AS recommended_action
FROM customer_master cm
JOIN support_risk sr ON cm.customer_id = sr.customer_id
JOIN usage_risk ur ON cm.customer_id = ur.customer_id
JOIN account_risk ar ON cm.customer_id = ar.customer_id
ORDER BY total_risk_score DESC;
```
Move beyond reactive alerts to predictive churn modeling. Here are two approaches you can implement in Airbook.
Rule-based scoring: uses weighted scores with predefined thresholds for support and usage metrics.
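As a rough sketch, the rule-based approach boils down to a few lines of code. The Python below mirrors the thresholds used in the SQL scoring earlier in this guide; the function names are illustrative, not part of Airbook:

```python
def support_risk_score(urgent_tickets, escalated_tickets, avg_csat, avg_resolution_hours):
    """Weighted support risk, mirroring the SQL thresholds above."""
    score = escalated_tickets * 20 + urgent_tickets * 10
    if avg_csat is not None:
        if avg_csat < 2:
            score += 40
        elif avg_csat < 3:
            score += 20
    if avg_resolution_hours is not None and avg_resolution_hours > 48:
        score += 15
    return score

def risk_category(total_score):
    """Bucket a combined risk score into the guide's four tiers."""
    if total_score >= 80:
        return "Critical"
    if total_score >= 50:
        return "High"
    if total_score >= 25:
        return "Medium"
    return "Low"
```

The advantage of this approach is transparency: every point on an account's score can be traced back to a specific threshold your team agreed on.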
Machine learning: uses gradient boosting to learn complex interactions between features automatically.
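A minimal training sketch using scikit-learn's `GradientBoostingClassifier`. The features and the synthetic labels here are purely illustrative; in practice you would train on historical accounts labeled churned/retained:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
# Illustrative features: urgent tickets, escalations, avg CSAT,
# and recent-vs-baseline session ratio per account
X = np.column_stack([
    rng.poisson(1.0, n),
    rng.poisson(0.3, n),
    rng.uniform(1, 5, n),
    rng.uniform(0, 1.5, n),
])
# Synthetic label: churn more likely with low CSAT, escalations, and usage decline
churn_logit = 1.5 - 0.8 * X[:, 2] + 0.5 * X[:, 1] - 1.2 * X[:, 3]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-churn_logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(n_estimators=100, max_depth=3, random_state=0)
model.fit(X_train, y_train)
churn_prob = model.predict_proba(X_test)[:, 1]  # churn probability per account
```

Unlike the rule-based score, the model learns feature interactions (for example, low CSAT only mattering when usage is also declining) without you having to enumerate them.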
Design dashboards that surface critical insights when your team can still take action. Focus on alerting, trending, and prioritization.
Different risk levels require different intervention approaches. Here's how to systematically save at-risk accounts.
Track both model performance and business impact to continuously improve your churn prediction system.
Clean, consistent customer identifiers across all systems are crucial for accurate predictions.
Support + usage + billing data provides the most comprehensive risk assessment.
A/B test different approaches to find what works best for each risk segment.
Customer behavior changes over time. Retrain models quarterly to maintain accuracy.
Support tickets alone or usage data alone won't give you the full picture.
Too many false alarms will cause alert fatigue and reduce team responsiveness.
Start with available data and improve incrementally. Perfect is the enemy of good.
High-risk accounts need human touch. Use automation for monitoring, not for customer communications.
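The "clean identifiers" practice above often comes down to normalizing the email address used as the join key across systems, since a stray capital letter or trailing space silently breaks the joins in the customer master table. A minimal sketch (helper name is illustrative):

```python
def normalize_customer_key(email):
    """Normalize an email used as a cross-system join key.

    Trims whitespace and lowercases so 'Jane@Acme.com ' and
    'jane@acme.com' resolve to the same customer.
    """
    if not email:
        return None
    return email.strip().lower()
```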
Answer: Identify churn risks in Zendesk by tracking support ticket escalation patterns, declining CSAT scores, increasing ticket volume per customer, longer resolution times, and recurring technical issues. Combine these signals with product usage decline to predict churn 60+ days early.
Answer: Key churn risk indicators in support data include escalated tickets from management, CSAT scores below 3/5, increased ticket frequency, unresolved technical issues, requests for product alternatives, billing inquiries, and declining sentiment in ticket content.
Answer: Combine Zendesk data with product usage events by creating unified customer profiles using email addresses or customer IDs. Build risk scoring models that weight support indicators (40%) with usage decline patterns (60%) to predict churn probability.
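As an illustration of that weighting, the blend is a simple linear combination once both component scores are on the same scale. The 40/60 split is this guide's suggested starting point, not a fixed rule, and the function name is illustrative:

```python
def combined_churn_risk(support_score, usage_score):
    """Blend support and usage risk with a 40/60 weighting.

    Both inputs are assumed to be normalized to the same 0-100 scale
    before blending; adjust the weights to your own historical data.
    """
    return 0.4 * support_score + 0.6 * usage_score
```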
Answer: The best churn prediction model combines weighted support risk factors (40% weight) with product usage decline indicators (60% weight). Use logistic regression or machine learning models trained on historical churn data for accounts that canceled in the past 12 months.
Answer: You can predict churn 60-90 days early by combining Zendesk support patterns with product usage decline. Early warning signs include support ticket increase 3+ months before churn, CSAT decline over 60-day periods, and gradual usage reduction starting 90+ days before cancellation.
Answer: Intervention strategies should match risk levels: Critical risk (80-100 score) requires executive escalation within 24 hours; High risk (50-79) needs CSM outreach within 3 days; Medium risk (25-49) uses gentle engagement within 1 week through automated sequences.
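The tiered response above can be encoded as a simple lookup so every alert routes to the right playbook. A sketch using the score bands from this guide (function name is illustrative):

```python
def recommended_intervention(total_risk_score):
    """Map a combined 0-100 risk score to a (tier, action) pair."""
    if total_risk_score >= 80:
        return ("Critical", "Executive escalation within 24 hours")
    if total_risk_score >= 50:
        return ("High", "CSM outreach within 3 days")
    if total_risk_score >= 25:
        return ("Medium", "Automated engagement sequence within 1 week")
    return ("Low", "Routine monitoring")
```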
Answer: Measure churn prediction model effectiveness using precision (85%+ of flagged accounts actually churn), recall (capturing 80%+ of actual churners), false positive rate (kept below 35%), early warning lead time (60+ days on average), and business impact metrics like retention improvement and revenue saved.
| Metric | Target Range | Measurement Method |
|---|---|---|
| Precision | 85-95% | Share of predicted churners who actually churned |
| Recall (Sensitivity) | 80-90% | Share of actual churners correctly identified |
| False Positive Rate | <35% | Healthy accounts incorrectly flagged |
| Early Warning Lead Time | 60+ days | Average days between prediction and churn |
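These metrics can be computed directly from prediction outcomes once you know which flagged accounts actually churned. A minimal sketch in plain Python (function name is illustrative):

```python
def evaluate_churn_model(y_true, y_pred):
    """Precision, recall, and false positive rate for binary churn labels.

    y_true: 1 if the account actually churned, else 0
    y_pred: 1 if the model flagged the account as a churn risk, else 0
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }
```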
Learn from real implementations of churn risk identification across different SaaS business models and team structures.
The Customer Success team was reactive to churn, learning about at-risk accounts only during quarterly business reviews or contract renewal conversations.
With 10,000+ customers and a small CS team, manual intervention wasn't scalable; they needed automated churn risk detection with tiered response strategies.
Developer customers rarely file support tickets, making traditional support-based churn prediction ineffective; the team had to lean heavily on product usage patterns and API activity.
Customer usage patterns varied dramatically by season (Black Friday spikes, January lulls), causing false positive churn alerts during natural low-usage periods.
Clean customer identifiers across all systems ensure accurate risk assessment
CS and support teams must coordinate intervention strategies for maximum impact
Regular model retraining and intervention testing improve prediction accuracy