Drift
Contract vs reality monitoring
Thesis
Contracts say one thing; reality drifts toward another. Document parsing + email integration can surface the mismatch automatically.
The Problem
Scope creep is a silent killer: the contract says 5 pages, you're on page 12. The timeline said 6 weeks, you're in month 3. The client asks for "one small change" for the 47th time. By the time you notice, you've already eaten the margin. Nobody tracks this in real time because it's tedious; you only realize how bad it got during the post-mortem.
Implementation Approaches
Email + Document Monitor (Recommended)
Watch communication channels and flag drift automatically
Implementation
- →Connect email, Slack, and project tools
- →Parse original contract/SOW for scope boundaries
- →AI monitors communication for scope expansion signals (sketched below)
- →Alerts: 'Client requested X which is outside original scope'
- →Weekly drift report with quantified impact
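A minimal sketch of the flagging step, assuming the AI monitor boils down to a per-message classifier. A hard-coded phrase list stands in for the model here; DRIFT_PHRASES and DriftAlert are illustrative names, not a spec:

```python
from dataclasses import dataclass

# Stand-in for the AI classifier: phrases that often signal scope expansion.
# An illustrative starting point, not a tuned model.
DRIFT_PHRASES = [
    "one small change",
    "quick addition",
    "while you're at it",
    "can we also",
]

@dataclass
class DriftAlert:
    source: str   # e.g. "email" or "slack"
    matched: str  # the phrase that triggered the flag
    snippet: str  # message excerpt for the weekly drift report

def scan_message(source: str, text: str) -> list[DriftAlert]:
    """Flag a message for every scope-expansion phrase it contains."""
    lowered = text.lower()
    return [
        DriftAlert(source, phrase, text[:120])
        for phrase in DRIFT_PHRASES
        if phrase in lowered
    ]

# A Slack message that should trip two signals.
for alert in scan_message("slack", "While you're at it, can we also add a FAQ page?"):
    print(f"[{alert.source}] outside-scope signal '{alert.matched}': {alert.snippet}")
```

In production the phrase heuristic would be replaced by the learned classifier the flywheel trains, but the alert shape stays the same.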
Pros
- +Catches drift as it happens, not after
- +Quantifies the problem for client conversations
- +Flywheel: learns what 'drift signals' look like across projects
- +High value, directly protects margin
Cons
- −Requires broad data access (email, Slack, docs)
- −Privacy and security concerns
- −False positives could be annoying
- −Complex integration requirements
Manual Check-In Tool
Periodic scope review with AI assistance
Implementation
- →Upload contract + current deliverables list
- →AI compares and highlights mismatches (see the sketch after this list)
- →Manual trigger, not continuous monitoring
- →Generates scope change documentation
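The comparison core is mostly set arithmetic once deliverables are normalized to matching labels; the sketch below assumes that normalization (the part the AI would actually handle) has already happened:

```python
def scope_diff(contract_items: set[str], delivered_items: set[str]) -> dict[str, set[str]]:
    """Compare contracted scope to delivered work and highlight mismatches."""
    return {
        "outside_scope": delivered_items - contract_items,  # built but never contracted
        "not_delivered": contract_items - delivered_items,  # contracted but not built
    }

# Toy example: a website engagement that quietly grew.
contract = {"homepage", "about page", "contact form", "blog template", "deploy"}
delivered = {"homepage", "about page", "contact form", "blog template",
             "newsletter signup", "12 extra landing pages"}

for bucket, items in scope_diff(contract, delivered).items():
    print(bucket, "->", sorted(items) or "none")
```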
Pros
- +Much simpler to build
- +No ongoing integration maintenance
- +User controls when to check
- +Lower privacy concerns
Cons
- −Reactive, not proactive
- −Depends on user remembering to check
- −Misses real-time drift signals
Time Tracking Integration
Compare hours logged vs hours scoped
Implementation
- →Connect to Harvest, Toggl, Clockify, etc.
- →Map time entries to contract line items
- →Alert when actuals exceed estimates by threshold
- →Show burn rate and projected overrun (sketched below)
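The overrun math itself is simple; a sketch, with the 10% alert threshold as an assumed default:

```python
def burn_report(hours_scoped: float, hours_logged: float,
                weeks_elapsed: float, weeks_scoped: float,
                threshold: float = 0.10) -> str:
    """Project final hours from the current weekly burn rate and flag overruns."""
    burn_rate = hours_logged / weeks_elapsed  # hours per week so far
    projected = burn_rate * weeks_scoped      # if the current pace holds
    overrun = (projected - hours_scoped) / hours_scoped
    status = "ALERT" if overrun > threshold else "ok"
    return (f"{status}: burning {burn_rate:.1f} h/wk, projected "
            f"{projected:.0f} h vs {hours_scoped:.0f} h scoped "
            f"({overrun:+.0%} vs estimate)")

# A 6-week, 120-hour engagement, 3 weeks in with 90 hours already logged.
print(burn_report(hours_scoped=120, hours_logged=90, weeks_elapsed=3, weeks_scoped=6))
```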
Pros
- +Concrete, quantifiable signal
- +Time tracking data already exists
- +Clear ROI story
- +Simpler than full communication monitoring
Cons
- −Only catches time drift, not scope drift
- −Depends on accurate time tracking
- −Lagging indicator, damage already done
Validation Plan
Hypothesis to Test
Agencies will pay $99/mo to catch scope creep before it kills their margins
Validation Phases
Retrospective Analysis (1 week)
- •Get 3 completed projects with known scope creep
- •Analyze: contract vs final deliverables vs communication
- •Identify: when did drift signals first appear? (sketch below)
- •Show agencies: 'here's where you could have caught it'
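For the retrospective pass, "when did drift first appear?" reduces to scanning a dated message log for the earliest signal. A sketch with toy data and an illustrative phrase list:

```python
from datetime import date

def first_drift_signal(messages: list[tuple[date, str]],
                       signal_phrases: list[str]) -> tuple[date, str] | None:
    """Return the earliest dated message containing a drift phrase, if any."""
    for when, text in sorted(messages):
        if any(p in text.lower() for p in signal_phrases):
            return when, text
    return None

# Toy communication log from a finished project (illustrative data).
log = [
    (date(2024, 3, 4), "Kickoff notes attached."),
    (date(2024, 3, 18), "Small ask: can we also get a pricing page?"),
    (date(2024, 5, 2), "Why is this taking so long?"),
]
hit = first_drift_signal(log, ["can we also", "one more thing"])
if hit:
    print(f"Drift signal first appeared on {hit[0]}: {hit[1]}")
```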
Time-Based MVP (2 weeks)
- •Build time tracking integration (Harvest first; API sketch below)
- •Compare actuals to contract estimates
- •Alert on overruns with context
- •Pilot with 3 agencies on active projects
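One way the Harvest pull could look, using the Harvest v2 REST API (token and account id are placeholders; pagination handled, error handling kept minimal):

```python
import requests

def hours_by_project(token: str, account_id: str, from_date: str) -> dict[str, float]:
    """Sum logged hours per project since from_date via the Harvest v2 API."""
    headers = {
        "Authorization": f"Bearer {token}",
        "Harvest-Account-Id": account_id,
        "User-Agent": "drift-pilot (you@example.com)",  # Harvest asks for one
    }
    totals: dict[str, float] = {}
    url = f"https://api.harvestapp.com/v2/time_entries?from={from_date}"
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        for entry in page["time_entries"]:
            name = entry["project"]["name"]
            totals[name] = totals.get(name, 0.0) + entry["hours"]
        url = page["links"]["next"]  # None on the last page
    return totals

# totals = hours_by_project("HARVEST_PAT", "ACCOUNT_ID", "2024-03-04")
```

Summed actuals then feed straight into the burn-rate check sketched under Time Tracking Integration.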
Full Monitoring Beta (3 weeks)
- •Add email/Slack monitoring for one pilot
- •Test drift detection accuracy
- •Refine alert thresholds to minimize noise (tuning sketch below)
- •Validate $99/mo pricing
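Threshold refinement can be empirical: label the pilot's alerts as real or false drift, then pick the lowest cutoff that clears a precision floor. A sketch, assuming the detector emits a confidence score per alert:

```python
def pick_threshold(scored_alerts: list[tuple[float, bool]],
                   min_precision: float = 0.8) -> float | None:
    """Choose the lowest score cutoff whose flagged set meets a precision floor.

    scored_alerts: (detector score, was this real drift?) pairs from the pilot.
    """
    candidates = sorted({score for score, _ in scored_alerts})
    for cutoff in candidates:
        flagged = [real for score, real in scored_alerts if score >= cutoff]
        if flagged and sum(flagged) / len(flagged) >= min_precision:
            return cutoff
    return None  # no cutoff is precise enough; the detector needs work

# Labeled alerts from the pilot (illustrative data).
pilot = [(0.9, True), (0.8, True), (0.7, False), (0.6, True), (0.4, False)]
print("alert threshold:", pick_threshold(pilot))
```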
Kill Criteria
Stop and move on if any of these become true:
- ✕Agencies don't grant needed data access
- ✕Too many false positives make it noisy
- ✕Drift detection isn't accurate enough to be useful
- ✕Problem isn't painful enough to justify $99/mo