
61 posts tagged with "engineering-metrics"


Marketplace Engineering: Metrics for Two-Sided Products

· 9 min read
Artur Pan
CTO & Co-Founder at PanDev

A marketplace CTO told me the line I keep hearing: "My supply team ships fast, my demand team ships fast, and GMV still stagnates." The DORA dashboards were green on both sides. The matching engine was not. Two-sided products have a metric gap that single-sided SaaS doesn't: engineering output on one side of the marketplace only creates business value if it's matched by output on the other side.

Andreessen Horowitz's marketplace framework ranks liquidity — the probability that a listed item actually transacts within a window — as the single best predictor of marketplace health. That probability is an engineering outcome, not a marketing one. When search latency rises by 200ms, listed-item conversion drops measurably. When seller onboarding takes 14 days instead of 4, supply growth curves flatten within a quarter.
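The teaser defines liquidity precisely enough to compute. A minimal sketch, assuming a hypothetical listing record with `listed_at` and `sold_at` timestamps (field names are illustrative, not taken from the a16z framework):

```python
from datetime import datetime, timedelta

def liquidity(listings, window=timedelta(days=30)):
    """Share of listings that transacted within `window` of being listed.

    `listings` is an iterable of dicts with `listed_at` (datetime) and
    `sold_at` (datetime or None). Field names are illustrative.
    """
    eligible = [l for l in listings if l.get("listed_at") is not None]
    if not eligible:
        return 0.0
    matched = sum(
        1 for l in eligible
        if l.get("sold_at") is not None
        and l["sold_at"] - l["listed_at"] <= window
    )
    return matched / len(eligible)
```

Tracked weekly per category, a drop in this number after a search-latency regression is exactly the engineering-to-business signal single-sided DORA dashboards miss.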

Manufacturing Software Engineering: Agile Meets Hardware

· 8 min read
Artur Pan
CTO & Co-Founder at PanDev

A mid-sized automotive supplier I consulted for in 2024 had a production bug land at 03:15 on a Tuesday. The fix took 8 minutes to code and 19 days to deploy — because it required a software update to PLCs on 14 production cells, each of which could only be updated during the 4-minute changeover window between shift batches. The engineering team's average lead time on the office-IT side: 31 hours. On the shop-floor side: 14 days. Same team, same repository, two different universes of delivery constraint.

Manufacturing software engineering is Agile meeting hardware. The practices that work at a SaaS startup — deploy-whenever, feature flags, canary releases — collide with regulated plant-floor reality: OEE targets, changeover costs, OT/IT separation, and production lines that cannot pause for a deploy. A 2023 Deloitte Smart Factory study found 73% of manufacturers cite "IT/OT integration" as the top barrier to digitization. The problem isn't technology; it's that metrics and rituals designed for pure software break when the software touches a physical process.

PropTech Development Velocity: Real Estate SaaS Engineering

· 8 min read
Artur Pan
CTO & Co-Founder at PanDev

A PropTech team I worked with last year ships 4.2 deploys per week across their flagship product. Their CEO benchmarks that against a reference SaaS portfolio and concludes velocity is "mediocre." It's not. A fintech of similar headcount ships 7.1; a pure B2B SaaS ships 9.4. PropTech lives at the intersection of regulated data, geospatial complexity, and 1990s MLS integrations — the raw deploy-frequency number hides what engineering is actually fighting.

Stack Overflow's 2024 Developer Survey places real-estate software in the bottom third of all industries for reported build and integration-testing speed. Microsoft Research's 2024 DevEx benchmarks show regulated industries losing an average 23% of engineering throughput to compliance friction alone. PropTech layers geospatial complexity on top of that.

IoT Embedded Engineering: Metrics for Firmware Teams

· 9 min read
Artur Pan
CTO & Co-Founder at PanDev

A team shipping a battery-powered agricultural sensor runs a CI pipeline that takes 38 minutes to build a firmware image, flash it to hardware-in-the-loop, run a 12-minute on-device test suite, and publish artifacts. Their web-app teammates push to main and see green checks in 7 minutes. When both teams get measured on deployment frequency, the firmware team looks like they're underperforming by 5×. They're not. They're doing harder work with a longer feedback loop, and the metric isn't reading it.

Most engineering metrics were built for web software: fast builds, reversible deploys, observability from day one. IoT and embedded teams inherit these metrics and look bad against them. The DORA framework acknowledges this explicitly — the 2023 Accelerate State of DevOps report noted that "teams shipping embedded or regulated software face a different distribution and should not be compared to web teams on deployment frequency alone." This article covers what to track instead.

Code Ownership vs Collective: What the Data Shows

· 10 min read
Artur Pan
CTO & Co-Founder at PanDev

Two engineering orgs of identical size shipping at the same pace. Org A: every file has a named owner, PRs need their approval. Org B: anyone can merge to any part of the codebase after a peer review. Org A has 40% fewer bugs per KLOC. Org B recovers from a senior engineer leaving 3× faster. Microsoft Research (Bird et al., 2011, Don't Touch My Code: Examining the Effects of Ownership on Software Quality) ran this experiment across 3,000+ files in Windows Vista/7 and showed that files with a strongly-identified owner had significantly fewer post-release failures — but they also showed that high-ownership files were more likely to become a bottleneck.

This article compares three real ownership models — strong ownership, collective ownership, and the hybrid pattern — using the Microsoft data, Google's 2018 internal study on code review, and 100+ companies in our own IDE dataset. The goal: pick the model that fits your team's stage and work, not the one that fits the blog post you read last week.

Monorepo vs Polyrepo: Team Productivity Impact (Real Data)

· 9 min read
Artur Pan
CTO & Co-Founder at PanDev

Your 40-engineer team maintains 34 repositories. Sound reasonable? We see this shape often. A typical developer in that configuration triggers 11.4 context switches per day between repositories — almost all invisible to the EM, each costing roughly 23 minutes of refocus time, per UC Irvine's Gloria Mark (The Cost of Interrupted Work, 2008) and subsequent replications. The same team post-monorepo migration: 3.2 switches per day. The productivity math is obvious; the cost math is where it gets interesting.
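The arithmetic behind that comparison is worth making explicit. A back-of-the-envelope sketch using the numbers quoted above (the 23-minute refocus cost is Mark's figure; treating every switch as a full refocus period makes this an upper bound):

```python
REFOCUS_MINUTES = 23  # avg. refocus time per interruption (Mark, 2008)

def daily_refocus_minutes(switches_per_day, refocus=REFOCUS_MINUTES):
    """Naive upper bound: every context switch costs a full refocus period."""
    return switches_per_day * refocus

polyrepo = daily_refocus_minutes(11.4)  # pre-migration
monorepo = daily_refocus_minutes(3.2)   # post-migration
saved_per_dev_hours = (polyrepo - monorepo) / 60  # roughly 3 hours/day
```

Even if the true per-switch cost is half of Mark's figure, a 40-engineer team recovers dozens of engineer-hours a day — that's the productivity math; the monorepo's CI and tooling bill is the cost math.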

Both architectures work. Google runs the largest known monorepo (2 billion+ lines of code, ~85,000 engineers). Netflix runs thousands of polyrepos. The question isn't which is better in the abstract — it's which fits your team size, your CI budget, and your tolerance for coordination overhead.

AI Code Review: Does It Actually Help? (Data from 100 Teams)

· 7 min read
Artur Pan
CTO & Co-Founder at PanDev

AI code review sits at the crest of the hype cycle. GitHub Copilot, CodeRabbit, Qodo, Graphite, and half a dozen startups are pitching a future where LLMs catch bugs faster than humans. Alberto Bacchelli and Christian Bird's seminal 2013 Microsoft Research study on code review established the baseline we've been measuring against for a decade: human review catches ~14% of functional defects but 68% of maintainability issues. The question now is: does layering an LLM on top actually move either number?

We pulled review data from 100 B2B teams between Q1 2025 and Q1 2026: a mix of teams using AI review, teams without it, and teams running a hybrid. The pattern isn't what the vendors claim.

CEO's Guide to Engineering Team Health (Non-Technical)

· 11 min read
Artur Pan
CTO & Co-Founder at PanDev

Most non-technical CEOs I've met treat engineering as either a black box or a theater. Black-box CEOs ask "how's engineering?" at the executive meeting, accept "we're on track" as an answer, and act surprised four quarters later when the senior architect resigns and the product roadmap stalls. Theater CEOs become amateur engineering managers — they learn to recite DORA metrics, mispronounce "Kubernetes," and inadvertently turn every roadmap discussion into a technical argument they can't follow.

Neither failure mode is about intelligence. It's about the absence of a short, non-technical vocabulary for engineering health. First Round's 2023 State of Startups survey found 68% of first-time CEOs rate themselves "somewhat" or "very" dependent on their CTO for all engineering judgment calls — which is fine until the CTO leaves or disagrees with the board on direction.

This guide is the minimum CEO vocabulary: 6 questions that let you test whether engineering is healthy without pretending to be technical.

CFO's Guide to Engineering Metrics: What to Ask and Why

· 9 min read
Artur Pan
CTO & Co-Founder at PanDev

A CFO usually sees engineering on one line of the P&L: salaries. A headcount column, a loaded-cost multiplier, a big number growing faster than revenue. That's it. Deloitte's 2024 Global Technology Leadership Study put the gap at its starkest: only 31% of CFOs said they could tell whether their engineering investment was producing returns proportionate to cost. The other 69% were flying blind on what is often the largest discretionary spend in the company.
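That "one line" view is easy to sketch, which is exactly the problem: it collapses everything a CFO could ask about into a single cost figure. Illustrative numbers only — the 1.4× loaded-cost multiplier is a common rule of thumb, not a benchmark:

```python
def engineering_line_item(headcount, avg_base_salary, loaded_multiplier=1.4):
    """The single number a CFO typically sees: fully loaded annual cost."""
    return headcount * avg_base_salary * loaded_multiplier

# 40 engineers at a $150k average base is roughly an $8.4M line item,
# with no signal about what that spend produced.
spend = engineering_line_item(40, 150_000)
```

The five questions in the article exist to attach a return column to that cost column.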

This is not a tooling problem. It's a question problem. The numbers exist. Your CFO peers just haven't learned which five questions extract them.

HRTech Engineering: Metrics for People-Platform Teams

· 9 min read
Artur Pan
CTO & Co-Founder at PanDev

HRTech engineering teams ship software that, when it goes wrong, pays people on the wrong day. A failed deploy on the 14th of the month is not a Slack-apology situation — it's a wire-transfer reversal, a legal letter, and in the EU a GDPR notification to the Data Protection Authority. Deloitte's 2024 Global Human Capital Trends report found that 73% of HR leaders cite their technology platform as a top-three operational risk — above hiring itself.

Most engineering-productivity articles written for SaaS or e-commerce teams don't translate. The metrics that matter for a payroll engineer or an HRIS platform team look different. This guide covers what actually deserves tracking, why, and how the PanDev Metrics dataset for HRTech customers compares to general B2B SaaS.