Expert Biography

Yegor Denisov-Blanch

Researcher at Stanford University leading groundbreaking studies on software engineering productivity. Has analyzed code from 100,000+ engineers across hundreds of companies to quantify AI’s real impact on developer work. Former Chief of Staff to the CEO of DHL, where he led digital transformation for 6,000+ engineers. Olympic Weightlifting National Champion and Stanford MBA (merit scholar).

Data-Driven Authority on AI ROI

Yegor Denisov-Blanch provides the most comprehensive, data-backed evidence on AI’s impact on developer productivity, analyzing actual code from hundreds of companies rather than relying on surveys or self-reports. His research reveals uncomfortable truths about AI effectiveness and developer performance.

Current Research

At Stanford’s Software Engineering Productivity Research Group, Yegor leads empirical studies analyzing private Git repositories from 100,000+ engineers across nearly 1,000 companies. His work quantifies:

  • AI ROI Variance: Productivity gains range from 35-40% in greenfield work to 0-10% in brownfield/high-complexity scenarios
  • Ghost Engineers: 9.5% of software engineers contribute virtually nothing (14% of remote, 9% of hybrid, and 6% of office-based engineers)
  • Code Quality Multiplier: Clean code amplifies AI gains; poor code quality eliminates benefits (“rich get richer effect”)
  • AI Maturity Framework: L0-L4 classification correlating organizational AI adoption with achieved productivity gains

Background

Stanford MBA (the school’s only merit-based scholarship recipient). Dropped out of school in 8th grade and became his family’s sole breadwinner after his mother’s cancer diagnosis. Led digital transformation for 6,000+ engineers at DHL as Chief of Staff to the CEO. Olympic Weightlifting National Champion and Master of Sport. Unique blend of technical research, business strategy, and enterprise transformation experience.

Key Research Insights

Context-Dependent Impact: AI productivity gains depend heavily on the type of work. Greenfield development sees 35-40% improvements, while legacy-code refactoring sees only 0-10% gains.
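
For intuition, a small illustrative calculation (the 20/80 workload split and the midpoint estimates are assumptions for the example, not figures from the research) shows how these ranges blend at the team level:

```python
# Illustrative only: blend the reported gain ranges across an assumed
# workload mix. The 20/80 split and the midpoints are made-up inputs.
greenfield_share, greenfield_gain = 0.20, 0.375  # midpoint of 35-40%
brownfield_share, brownfield_gain = 0.80, 0.05   # midpoint of 0-10%

blended = (greenfield_share * greenfield_gain
           + brownfield_share * brownfield_gain)
print(f"Expected blended productivity gain: {blended:.1%}")  # 11.5%
```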

Quality Over Quantity: How AI is used matters more than how much. Token consumption is not a reliable success metric; strategic, focused usage outperforms high-volume, unfocused usage.

Environment Cleanliness Index: Clean, well-maintained code is a prerequisite for AI effectiveness. Top-performing teams see compounding benefits while struggling teams face diminishing returns (“rich get richer effect”).
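
As a purely hypothetical illustration (the actual signals and weights behind the index are not spelled out above), a cleanliness score of this kind might combine a few repository health signals:

```python
# Hypothetical cleanliness score: the signals, weights, and thresholds
# below are illustrative assumptions, not the research group's definition.
def cleanliness_score(duplication_ratio: float,
                      avg_cyclomatic_complexity: float,
                      test_coverage: float) -> float:
    """Return a 0..1 score; higher means a cleaner codebase."""
    dup_penalty = min(duplication_ratio / 0.30, 1.0)         # >=30% duplication = worst case
    cx_penalty = min(avg_cyclomatic_complexity / 25.0, 1.0)  # >=25 avg complexity = worst case
    return round(0.4 * (1 - dup_penalty)
                 + 0.3 * (1 - cx_penalty)
                 + 0.3 * test_coverage, 2)

print(cleanliness_score(0.05, 8.0, 0.75))  # well-kept repo -> 0.76
print(cleanliness_score(0.40, 30.0, 0.20)) # tangled repo   -> 0.06
```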

Flawed Traditional Metrics: Conventional measurements (lines of code, story points, DORA metrics, commit counts) don’t accurately measure engineering productivity and may encourage counterproductive behaviors.

Methodology: ML models trained to replicate expert panel evaluations of code; time-series analysis of Git history that accounts for rework, refactoring, and code quality; and integration with telemetry from enterprise tools such as Cursor Enterprise.
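
By way of intuition, here is a minimal sketch of the rework-aware accounting idea, assuming a hypothetical per-line change record (the schema, the 21-day window, and the counting rule are illustrative assumptions, not the published pipeline):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical per-line change record; the actual schema used by the
# Stanford pipeline is not described here, so this is a stand-in.
@dataclass
class LineChange:
    path: str
    authored_at: datetime                # when this version of the line was written
    replaced_at: datetime | None = None  # when a later commit rewrote it, if ever

REWORK_WINDOW = timedelta(days=21)  # assumed threshold, not a published value

def split_work(changes: list[LineChange]) -> dict[str, int]:
    """Count durable output vs. short-lived rework.

    Lines rewritten soon after they were authored count as rework, so
    raw line counts no longer inflate measured productivity.
    """
    tally = {"durable": 0, "rework": 0}
    for c in changes:
        quickly_rewritten = (
            c.replaced_at is not None
            and c.replaced_at - c.authored_at < REWORK_WINDOW
        )
        tally["rework" if quickly_rewritten else "durable"] += 1
    return tally

t0 = datetime(2024, 1, 1)
sample = [
    LineChange("app.py", t0, t0 + timedelta(days=3)),   # rewritten fast -> rework
    LineChange("app.py", t0),                           # still alive   -> durable
    LineChange("lib.py", t0, t0 + timedelta(days=90)),  # long-lived    -> durable
]
print(split_work(sample))  # {'durable': 2, 'rework': 1}
```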

Organizational Implications: Successful AI adoption requires investment in software quality and engineering practices, not just tool acquisition. AI agents should focus on “fighting entropy” and cleaning up codebases to amplify AI gains.
