The ChatGPT Report and The Enterprise Blind Spot
OpenAI’s ChatGPT usage report is consumer-only, so enterprise signal is skewed. Still clear: AI is a copilot. Writing dominates, most of it editing. Younger users drive volume; older pros drive work. B2B: prioritize decision support, co-creation, embedded copilots, governance.
[Views are my own]
Executive Summary
OpenAI's new report on ChatGPT usage (link in first comment) offers a rich dataset, but for enterprise leaders, it’s a distorted reflection. The study analyzes only consumer usage (Free, Plus, Pro), explicitly excluding Business, Enterprise, and Education plans. As a result, the most critical, high-stakes, and commercially valuable use cases are missing from the data.
Despite this, the report reveals powerful signals. Users don’t treat AI as an autopilot; they treat it as a copilot. They overwhelmingly use ChatGPT to edit, refine, and seek decision support, not just to generate content or complete tasks.
Writing is the dominant work use case, and nearly two-thirds of it involves editing human-generated content. This confirms a strong preference for human-in-the-loop workflows.
Younger users dominate volume, but older professionals contribute more work-related use. This signals an AI-native workforce that expects conversational, intelligent interfaces as baseline functionality. Enterprise products without embedded AI risk obsolescence.
For B2B software, the implications are clear:
- prioritize decision support over automation
- build for co-creation
- embed AI where work happens
- design governance features that preserve trust.
The future of enterprise software isn’t a chatbot. It’s a federated network of intelligent copilots, embedded directly in critical workflows.
That means embedding copilots where the core General Work Activities of knowledge work occur most often, including Getting/Interpreting/Documenting Information and Making Decisions.
“How People Use ChatGPT” paper – A personal view
It’s the data drop we’ve all been waiting for. OpenAI, in collaboration with researchers from Duke and Harvard, just pulled back the curtain on how hundreds of millions of people are actually using ChatGPT. The paper, "How People Use ChatGPT," is packed with fascinating stats on the tool’s explosive growth, from its user demographics to the most common conversation topics.
The internet is already buzzing with takeaways. It’s tempting to take these numbers at face value, plug them into our strategy decks, and pivot our roadmaps.
This report reflects real patterns, but in a consumer-only context that distorts enterprise relevance. The research is rigorous; the risk is misinterpretation, not methodology.
The data is sound. The context is everything. Always.
What Is and Isn’t in the Data
Every product leader knows that the most important insights aren’t in the summary; they’re in the trade-offs. The same is true here. The study’s biggest conclusions are shaped by methodological choices that, while appropriate for a consumer focus, require careful translation for the business world.
The Enterprise Is Missing (Literally)
This is the single most important detail. The study exclusively analyzes ChatGPT’s consumer plans (Free, Plus, Pro) and explicitly excludes all usage from its Business, Enterprise, and Education plans. The most complex, high-stakes, and commercially valuable use cases are simply not in the data. To be clear, consumer behavior is a powerful leading indicator, and this report is an invaluable signal. But like any signal, it must be decoded correctly for a different context: one governed by compliance, security, and high-stakes workflows.
Decoding 'Work vs. Non-Work' Use
The paper classifies millions of messages into a simple binary: “likely part of work” or “likely not part of work”. In the real world, that line is blurry. A student doing calculus homework? A freelancer drafting a client email? An employee, or a side-hustler, learning a new skill on their own time? The paper’s tidy 70/30 split of non-work to work usage hides a universe of nuance that is critical for understanding the true nature of professional use. To their credit, the authors use a validated LLM-based pipeline and a clean-room approach; it is the right tool under privacy constraints, but still a coarse lens for enterprise inference.
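To make the coarseness of that binary concrete, here is a toy sketch of what a prompt-based work/non-work classifier looks like. The two labels come from the paper; the function name and prompt wording are my illustrative assumptions, not OpenAI's actual pipeline.

```python
# Toy sketch of a binary work/non-work message classifier prompt.
# The two labels come from the paper; everything else (function name,
# prompt wording) is an illustrative assumption.

WORK_LABEL = "likely part of work"
NON_WORK_LABEL = "likely not part of work"

def build_classification_prompt(message: str) -> str:
    """Build a prompt asking an LLM to assign one of the two coarse labels."""
    return (
        "Classify the user message below into exactly one category:\n"
        f"1. {WORK_LABEL}\n"
        f"2. {NON_WORK_LABEL}\n\n"
        "Answer with the category text only.\n\n"
        f"Message: {message}"
    )

# Example from the paper's own appendix: a clearly work-related request.
prompt = build_classification_prompt("Rewrite this HR complaint to sound more formal.")
```

Whatever the real prompt looks like, the output space is just two buckets, which is exactly why the nuance above gets flattened.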
Professional Coding Happens Outside the Web UI
The report finds that computer programming accounts for a surprisingly small 4.2% of messages. This seems shockingly low, until you consider the study’s focus. By design, it measures a different context than where most professional coding occurs. A likely explanation is that most professional developers aren’t living in the ChatGPT web UI; they’re using AI through API integrations in their code editors and IDEs. Measuring web chat to understand professional coding is like studying a chef’s impact by only watching them in the dining room. The real work is happening in the kitchen.
The Clean Room Paradox
The research team used a secure data clean room to analyze user employment data while protecting privacy. This is a gold standard for responsible research. But it comes with a trade-off: strict aggregation thresholds mean that any data from smaller, specialized professional groups gets bundled into a "suppressed" category to protect anonymity. So while we know that “programmers” and “teachers” are using the tool, the long tail of niche professional use remains invisible.
The Self-Selection Bias
The analysis removes users who opt out of model training. Who are these people? My hypothesis is that they are the most privacy-conscious and sophisticated professionals: lawyers, security experts, and strategists working with proprietary data. The study may therefore systematically under-represent the behavior of those with the most to lose, skewing the data toward lower-risk activities.
The Truncated Context
The classifiers used in the study only analyze the last 10 messages of a conversation, with each message capped at 5,000 characters. While sufficient for simple queries, this is a shallow view of complex, long-running enterprise workflows, where critical context may exist far earlier in the conversation history.
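For intuition, the stated truncation (last 10 messages, 5,000 characters each) can be expressed in a few lines. The constants come from the paper; the helper itself is my sketch of what the classifiers effectively see.

```python
MAX_MESSAGES = 10   # classifiers see only the last 10 messages (per the paper)
MAX_CHARS = 5_000   # each message is capped at 5,000 characters (per the paper)

def classifier_view(conversation: list[str]) -> list[str]:
    """Return the slice of a conversation the study's classifiers would see."""
    return [msg[:MAX_CHARS] for msg in conversation[-MAX_MESSAGES:]]

# A 40-message enterprise thread: the first 30 messages never reach the classifier.
long_thread = [f"message {i}" for i in range(40)]
visible = classifier_view(long_thread)
```

In a long-running workflow, the requirements set in message 3 of a 40-message thread are simply invisible to this lens.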
Numbers That Matter
Now that we’ve adjusted the mirror, let's look at the reflection. The paper is packed with data, but most of it is noise for business leaders. Here are a few stats that actually matter, with my take on what they signal for the enterprise world.
700 Million Weekly Users
By July 2025, ChatGPT had 700 million weekly active users, roughly 10% of the world's adult population. This isn't a niche tech tool anymore. This is a new consumer utility, like search or social media. Your employees, customers, and competitors are already using it. Assuming they aren't is a material strategic miss.
70% of Use is "Non-Work"
Non-work messages now account for over 70% of all consumer usage, up from 53% a year prior. Don’t let this headline fool you. This reflects the consumer-only sample. More importantly, it confuses volume with value. One complex, high-stakes work query (like "draft a legal clause for this M&A term sheet") holds more economic weight than a thousand queries for "write a poem about my cat."
42% of Work is Writing
Writing is the single most common work-related task, making up 42% of professional use cases. Of course it is. LLMs are text-in, text-out systems. This isn’t a sign that AI is "coming for all jobs"; it’s a sign that it’s exceptionally good at augmenting its native medium: language. The killer app is communication.
66% of Writing is Editing
Nearly two-thirds of all "Writing" tasks involve modifying user-provided text (editing, summarizing, translating) rather than creating new text from scratch. This is one of the most important findings in the entire paper. People aren't using ChatGPT as an autopilot; they're using it as a copilot. The human is still the pilot, setting the direction. This validates the "human-in-the-loop" model for enterprise AI products.
Nearly Half of Messages Are From Users 18-25
Among users who self-report age, around half of adult messages come from users 18-25. This is a powerful leading indicator. The next generation of knowledge workers is not just "AI-literate"; it is AI-native. These workers will expect and demand AI-powered tools as a baseline, not a perk. Your future employees are already training on these systems.
10% of All Use is Education
"Tutoring or Teaching" accounts for a staggering 10.2% of all conversations. This highlights AI's role as a knowledge and skills accelerator. For Enterprise L&D, this is a massive, untapped opportunity to scale personalized onboarding, training, and professional development.
Overall, 49% of Use is "Asking," Not "Doing"
Users are more likely to use ChatGPT for decision support ("Asking," 49% of messages) than for direct task completion ("Doing," 40%). This directly challenges the simple automation narrative. People are using AI more as a reasoning partner to inform their decisions than as a digital intern to offload tasks. It's a tool for thinking, not just for executing. “Asking” messages also score higher on interaction-quality signals than “Doing”, based on both a satisfaction classifier and user feedback.
But at Work, “Doing” ≈ 56% and “Asking” ≈ 35%
It's important to remember this data is from the consumer-only sample, so patterns for high-stakes enterprise tasks could differ. Furthermore, "Doing" is not full automation. A significant portion of "Doing" at work is "Writing," and two-thirds of that involves editing user-provided text. This suggests these activities are collaborative and augmentative, fitting the "Copilot" model, not a hands-off "Autopilot" paradigm.
Professionals Are the Power Users
Highly educated users in professional occupations are "substantially more likely to use ChatGPT for work". This is the least surprising but most commercially relevant finding. The greatest economic value is being unlocked by knowledge workers in high-skill jobs. This is precisely the audience that Enterprise SaaS serves, and it confirms that the appetite for sophisticated AI tools in the workplace is already strong.
These numbers paint a clear picture: ChatGPT has become a mainstream utility, used more as a thinking partner than a task-master.
But these are just the headlines. The real story emerges when we connect these patterns to the deeper, often invisible, needs of the enterprise.
Signals Beneath the Trends
The numbers are interesting, but the patterns they reveal are far more important. If you only read the headlines, you'll see a story about a popular consumer app. But if you look closer, you'll see the blueprint for a fundamental shift in how humans work with information.
Here are a few insights the data reveals that go beyond the obvious.
AI Isn't an Autopilot; It's a Sparring Partner
The data sends a clear, consistent message: people are not outsourcing their thinking to AI. They are using it to sharpen their own. Two key data points prove this. First, "Asking" for decision support is more common than "Doing" a task. Second, two-thirds of all "Writing" consists of editing and refining existing text, not generating it from a blank slate.
This isn’t delegation; it’s collaboration. Users are bringing their own ideas, drafts, and problems to the table and using the AI as a tireless sparring partner. It’s the developmental editor for your brain, the Socratic tutor that’s always on call. This pattern shows that the real value of AI in knowledge work isn’t replacing the human; it’s augmenting their reasoning.
The data confirms this in user satisfaction: interactions based on "Asking" for advice are consistently rated as higher quality than those based on "Doing" a task, revealing a clear preference for a cognitive partner over a simple tool.
The Silent Majority Runs on "Good Enough"
While high-stakes use cases get all the attention, the report shows that the vast majority of conversations are about everyday problems. “Practical Guidance,” “Seeking Information,” and “Writing” together make up nearly 80% of all use cases. This is the long tail of knowledge work: how-to advice, tutoring, creative ideation, and finding specific facts.
This isn't glamorous, but it is the bedrock of ChatGPT's utility. For millions of small, low-stakes problems, it provides a "good enough" answer, faster than traditional search. It has become the default first stop for solving the thousand tiny frictions we face every day, both at work and at home.
The Center of Gravity is Shifting to Our Personal Lives
The finding that non-work usage is growing faster than work usage is significant. It signals that AI is following the same path as the PC and the smartphone: it enters our lives as a work tool but achieves true scale when it becomes indispensable to our personal lives. This creates a massive, and often overlooked, consumer surplus. The more integrated AI becomes in our daily routines, from planning workouts to helping with homework, the more its presence in our professional lives will feel natural and non-negotiable. The consumer experience is setting the baseline expectation for the enterprise.
People Crave Capabilities, Not Just Conversations
In April 2025, when ChatGPT released new image-generation capabilities, "Multimedia" usage saw a massive spike, growing from 2% to over 7% of all conversations almost overnight. This isn’t just a blip; it's a critical lesson in product strategy. Users don’t just want to talk to an AI; they want it to do things. When a genuinely useful, new capability is introduced, adoption can be explosive. The future of AI assistants will be defined not by how well they chat, but by the breadth and quality of the actions they can perform.
We Are Still Human, After All
Buried in the data is a small but telling signal. While a tiny fraction of use, "Relationships and Personal Reflection" (1.9%) and "Games and Role Play" (0.4%) persist as steady use cases. This isn't a statistical error. It’s a faint signal of a deep human need for interaction, creativity, and even companionship. While the paper focuses on economic value, these categories remind us that humans will always find ways to use technology for connection and play. It’s a niche for now, but it points toward a future where affective and creative use cases may become far more mainstream.
The Next Billion Users Are AI-Native
Also buried in the demographic data is a seismic shift that most Western-centric analyses will miss: ChatGPT usage is growing fastest in low- and middle-income countries. For enterprise software, this is more than a footnote. Emerging markets may bypass portions of older systems and processes. Their first serious business tools could be conversational and AI-driven from the start. This has profound implications for product strategy: from the need for hyper-localization and lightweight, mobile-first interfaces, to the opportunity to tap into vast new talent pools that are building their skills on an AI-native foundation. The center of gravity for the next wave of growth isn't where we think it is.
AI Provides a Universal Cognitive Layer
While use cases differ between roles, the paper reveals a profound uniformity in the types of thinking AI supports. Core activities like “Making Decisions and Solving Problems,” “Getting Information,” and “Documenting/Recording Information” are among the most common across nearly all occupations, from management to sales. This signals that AI isn't just a collection of vertical tools for specific jobs; it's becoming a horizontal infrastructure for the fundamental cognitive tasks that define all knowledge work.
These patterns show an AI that is being shaped by fundamentally human needs: the need to think more clearly, solve daily problems, and connect with others. Now, let’s translate this into the high-stakes world of B2B.
The Enterprise Angle: What This Means for B2B
All enterprise implications below are my inferences layered on top of the consumer signal. This is where we adjust the lens for the enterprise. Consumer trends are interesting, but enterprise leaders are accountable for outcomes. If we filter the paper’s findings through the lens of a B2B product leader, a few strategic imperatives emerge that will shape the next generation of enterprise software.
The AI-Native Workforce Is Here. Is your UX keeping up?
The report confirms that young, educated professionals are the most active and work-oriented users. These aren’t just early adopters; they are the next generation of your employees and customers. They don't see AI as a feature; they see it as a fundamental utility.
The implication for B2B is stark: enterprise software that relies solely on traditional GUIs will soon feel as archaic as a dial-up modem. This signals that language is becoming the new user interface. The expectation is shifting from operating software to conversing with it, a trend accelerated by the great unbundling of search into conversational synthesis. Users no longer want to hunt through dashboards; they expect a direct answer. If your product doesn’t have an intelligent, conversational layer that helps users achieve outcomes, you’re not just behind the curve; you’re building for a workforce that is rapidly disappearing.
The Real ROI Is Judgment Amplification, Not Just Task Automation
The enterprise world is obsessed with AI for automation and cost-cutting. This report suggests that’s over-indexed on automation. The dominant use case wasn’t asking ChatGPT to do a task, but to help the user think about a task ("Asking" > "Doing"). It’s a tool for decision support, not just delegation.
For B2B, this reframes the entire value proposition of AI. The biggest gains won’t come from replacing junior analysts with a bot. They will come from giving a senior (and junior) analyst an AI sparring partner that can challenge assumptions, surface hidden risks in data, and model second-order effects. Stop selling only "efficiency." Start selling "sharper judgment." The data shows users already value this, rating decision-support ("Asking") interactions as higher quality than task-based ("Doing") ones.
Writing is the Trojan Horse for Enterprise Adoption
Writing is the single most common work-related use case and a universal pain point. A B2B AI strategy should be built around exceptional writing, editing, and communication tools, as they represent the path of least resistance into every department, from marketing and sales to legal and HR. By solving this universal need first, you earn the trust and permission to solve the organization's next, more complex problem.
The "Copilot" Is the Winning Model. The "Autopilot" Is a Liability
Nearly two-thirds of all writing-related tasks involved a human bringing their own content for the AI to edit, critique, or summarize. They weren't asking the AI to fly the plane; they were asking it to help navigate.
This "Copilot" pattern is the dominant model in high-stakes contexts. Why? Because in regulated industries, the human must remain accountable. An AI "autopilot" that generates a legal contract or a financial report from scratch is a compliance and liability nightmare. A "copilot," where a human sets the intent and the AI assists in execution, keeps the human in the loop and in control. This isn’t a technical limitation; it’s a fundamental requirement for building trust in enterprise AI.
Autopilot is high-risk in ambiguous, high-stakes workflows; in narrow, auditable processes, it can be the right choice with controls (e.g., pre-approved templates, deterministic pipelines, human sign-off gates).
The Federated Copilot: Why the Future is Embedded Intelligence
The report shows that as use cases get more specialized, they move to more specialized tools. The declining share of "Technical Help" in the web UI doesn't mean developers stopped using AI; it means they started using GitHub Copilot.
This signals the future of enterprise AI. It won’t be a single, all-knowing chatbot that employees log into. It will be a federated ecosystem of specialized AI capabilities embedded directly into the workflows where they’re needed most: in your ERP, your CRM, your code editor, and your BI platform. The chat window is a transitional interface. The real winners in B2B won't be those who build the best chatbot; they will be those who build the best capabilities and deliver them seamlessly via APIs into the tools their customers already use.
You’re Not Just Building a Tool; You’re Building a Training Ground
With 10% of all usage dedicated to "Tutoring or Teaching," the report uncovers one of the most powerful, yet overlooked, use cases for AI in the enterprise: skill acceleration. The "grunt work" that once formed the training ground for junior employees is being automated away.
This creates a critical risk: an experience vacuum where the next generation of seniors never get the reps they need to build foundational instincts. But it also presents an opportunity. The right way to leverage AI isn’t just to automate tasks, but to build AI-powered apprenticeship models. Use AI to handle the boilerplate, freeing up juniors to focus on higher-order skills like problem-framing and trade-off analysis from day one, with seniors inspecting their thinking, not their syntax.
The Human Role is Shifting from Operator to Governor
As AI becomes more capable of executing complex tasks, the most critical human skill shifts from doing the work to defining it. We are moving from being tool operators to system governors. The new imperative is to set clear objectives, define constraints, manage risk, and know when to intervene. Your B2B product must be designed for this new reality of controlled delegation, with features for oversight, audit, and human-in-the-loop approvals.
Hire for Durable Skills, Not Perishable Tricks
The skills required to leverage AI are not perishable, tool-specific tricks like "prompt engineering". They are durable, cognitive capabilities: problem framing, systems thinking, and clear communication. This report proves the highest-value work is cognitive. Therefore, hire and train for these "Keystone" skills, the foundational abilities that AI amplifies, rather than the ones it will eventually automate.
These insights point to a future where AI is not a separate destination but the connective tissue of the modern enterprise: a reasoning partner that amplifies judgment and accelerates learning. Now, for the final section, let’s unearth some of the more curious, counter-intuitive, and frankly, fun details buried in the report.
Surprises from the Data and Conclusion
Every dense report has a few hidden gems – those small, peculiar data points that reveal more about human nature than a dozen summary charts. Before we close, here are a few of my favorites from the deep corners of the study.
The Great Pimple vs. HR Complaint Divide
How did the researchers decide if a query was "work-related"? According to the appendix, the prompt given to the classifying AI used two stark examples: asking it to “rewrite this HR complaint” was definitively work, while asking “does ice reduce pimples?” was not.
It’s oddly comforting to know that the analysis of our collective digital consciousness boils down to a clear line between corporate grievances and skincare advice. It’s a strangely human way to categorize the entire global economy.
The Persistence of Social Norms in AI Interaction
A full 2.0% of all conversations fall under "Greetings and Chitchat". This means millions of times a day, someone is starting their query with "Hello," "How are you?" or just making small talk.
We can’t help it. We’re social creatures hardwired for politeness, even when we know we’re talking to a stateless prediction engine. Either that or we’re all hedging our bets for when the machines take over.
We Keep Asking the Mirror "Who Are You?"
One of the tracked conversation categories is "Asking About the Model". People are using ChatGPT to ask... about ChatGPT.
This is peak human curiosity. We don’t just want to use our tools; we have an innate need to understand them, test their boundaries, and figure out what makes them tick. It’s not just a tool; it’s an object of fascination.
The 66+ Crowd is (Mostly) Done With Work
The study found that users aged 66 and older had the lowest share of work-related messages, at just 16%.
After a lifetime of work, the oldest generation of users is apparently using the world’s most advanced AI for what matters most: everything else. There’s a lesson in there somewhere.
We Are All Developmental Editors Now
My favorite overarching theme: the data shows a clear preference for augmentation over automation. We edit more than we create; we ask for guidance more than we delegate tasks.
The most powerful use case for AI right now isn’t replacing us. It’s helping us refine our own work, clarify our own thinking, and make better decisions. The data is a mirror, reflecting our own cognitive processes back at us.
The Hype Cooled, Then Usage Reheated
The very first cohort of users, the trailblazers from Q1 2023, actually saw their usage dip in mid-2023 before roaring back to all-time highs. It’s a perfect illustration of the Gartner Hype Cycle in a single trendline. The novelty wore off, but then real, sustained utility kicked in, proving the product's long-term value beyond the initial excitement.
Conclusions
And that’s the real takeaway. This report isn’t just a story about an AI. It’s a story about us: how we learn, how we work, and what we value. The AI is just the mirror. It's up to us to decide what we want to see in the reflection.
Where do you see ‘Asking’ beating ‘Doing’ in your org today? If you run a product, which decision-support flows would you redesign first? And do you believe chat stays the front door, or do capabilities move entirely into apps? How are you preparing for the AI-native workforce?
Let me know in the comments.
Further Reading: Connecting the Dots
The insights from this paper resonate with several themes I’ve explored previously. If you found this analysis useful, here’s how it connects to my past work:
- On the "Copilot" Model and User Ownership: The paper's finding that users prefer to edit existing text rather than generate it from scratch directly validates the "Copilot" model. This is the core idea I explored in "Harnessing the IKEA Effect for AI-First Products," where I argued that user ownership and involvement are critical for adoption, especially in B2B contexts where accountability is key. The data shows users intuitively lean towards co-creation, not abdication.
- On AI as a Tool for Thinking, Not a Replacement for It: The dominance of "Asking" (decision support) over "Doing" (task automation) reinforces a central theme of my writing: AI's true power lies in augmenting human judgment. This was the focus of "The Invisible Decline of Reasoning" and "Where Did the Thinking Go?," where I discussed how product systems can stop "thinking" when they focus only on output. This report provides quantitative evidence that users are pulling AI into their reasoning loop.
- On the Need for Specialized, Embedded Tools: The decline of "Technical Help" usage in the main ChatGPT UI supports the argument I made in "Control, Delegate, or Disappear," where I predicted a shift away from monolithic interfaces toward a federated ecosystem of specialized agents. Developers are already living in this future, using AI embedded in their native workflows. This is a leading indicator for how all knowledge work will evolve.
- On AI's Role in Talent Development: The finding that 10% of all usage is for "Tutoring or Teaching" provides a strong data point for the model I proposed in "PAIR: A Simple Model for AI-Accelerated Apprenticeship." In that piece, I argued that AI should be used not to replace junior talent, but to accelerate their learning by handling boilerplate tasks. This paper confirms that learning is one of AI's most natural and powerful applications.
- On the Risk of Misinterpreting Data Without Judgment: The entire exercise of this article (critically analyzing the paper's methodology and re-framing its conclusions for a specific context) is a real-world application of the principles in "Beyond the Dashboard | Principle 1: Avoid the Data Delusion." That article warned that data without judgment is just noise. This paper is a perfect example: the data is statistically significant, but without the right context, its strategic relevance for the enterprise can be wildly misinterpreted.
Anticipating Pushbacks: "Possibly Asked Questions"
Data this significant invites scrutiny. A good analysis shouldn't just present findings; it should anticipate and address the tough questions. Here are the pushbacks I expect and my responses to them.
Are you dismissing this data because it doesn’t fit a pro-enterprise narrative?
No. I’m contextualizing it. The paper is a strong consumer signal. Enterprises face different constraints: security, compliance, liability, proprietary data. Think “real reflection, distorted mirror.” The analysis translates the consumer signal for high-stakes enterprise use.
Don’t consumer trends predict enterprise adoption? Isn’t this consumerization of IT again?
Partly. Consumer UX sets expectations. Enterprise “what” is governed by risk and trust. A vacation-planning hallucination is annoying; a contract-review hallucination is liability. Let consumer trends inform UX, not capabilities and controls.
Is ‘Copilot’ just a phase until full automation?
No. For high-stakes work, humans must stay accountable. Two signals: most “writing” is editing, and top value is in decisions and problem-solving. In regulated workflows, autopilot creates liability. Copilot is a durable requirement for trust.
Doesn’t 70% non-work prove this is a consumer toy?
No. That is volume, not value, and from a consumer-only sample. One complex work query can outweigh thousands of personal queries. Also, heavy non-work use is free AI upskilling. Leaders should harness that literacy, not dismiss it.
Data is imperfect for B2B. What should leaders do?
Act with a better lens. Use the data to ask sharper questions and build resilience.
- Prioritize embedded, API-first integrations.
- Build for decision support, not only task automation.
- Design for Copilot to keep human accountability.
Treat the paper as clues; use this framework to decode them.
You’re overstating federation. One assistant can orchestrate tools with a unified UX.
Agreed. Keep one front door. Under the hood, embed domain copilots where data and permissions live.
- Pattern: assistant router + skills registry + policy gateway + domain services.
- Why: least privilege, lower latency, fewer context spills.
- Measure: task success and time-to-complete across surfaces, not chat volume.
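The router-plus-policy-gateway pattern above can be sketched in a few lines. Every name here (the skills registry, the permission rules, the domain copilots) is an illustrative assumption, not a reference architecture.

```python
# Sketch: one assistant front door routing to domain copilots behind a
# policy gateway. All names and rules are illustrative assumptions.

from typing import Callable

SKILLS: dict[str, Callable[[str], str]] = {}                # skills registry
PERMISSIONS = {"analyst": {"crm", "bi"}, "intern": {"bi"}}  # policy gateway rules

def skill(name: str):
    """Register a domain copilot under a skill name."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        SKILLS[name] = fn
        return fn
    return register

@skill("crm")
def crm_copilot(query: str) -> str:
    return f"[crm] handled: {query}"

@skill("bi")
def bi_copilot(query: str) -> str:
    return f"[bi] handled: {query}"

def route(role: str, domain: str, query: str) -> str:
    """Route a request to a domain copilot, enforcing least privilege first."""
    if domain not in PERMISSIONS.get(role, set()):
        return "denied: insufficient permissions"
    return SKILLS[domain](query)
```

The point of the design is that the policy check happens before any domain service sees the request, which is what delivers least privilege and avoids context spills across surfaces.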
Volume is a leading indicator. Why downplay it?
Don’t. Weight it. Formula: Weighted value = Frequency × Business impact × Quality uplift − Risk cost. Example: 100k low-risk micro-wins can beat 500 high-touch tasks.
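Plugging illustrative numbers of my own choosing (not from the report) into that formula shows how the comparison can flip in favor of micro-wins:

```python
def weighted_value(frequency: float, impact: float,
                   uplift: float, risk_cost: float) -> float:
    """Weighted value = Frequency x Business impact x Quality uplift - Risk cost."""
    return frequency * impact * uplift - risk_cost

# Illustrative inputs only: many low-risk micro-wins vs a few high-touch tasks.
micro_wins = weighted_value(frequency=100_000, impact=1.0, uplift=0.2, risk_cost=500)
high_touch = weighted_value(frequency=500, impact=20.0, uplift=1.0, risk_cost=2_000)
```

With these assumed inputs the micro-wins come out ahead; the real exercise is estimating impact, uplift, and risk cost honestly for your own workflows.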
Developers are in the data more than you suggest.
Correct. Dev work often sits outside web chat: IDEs, CLIs, review bots, CI, internal RAG. Track: IDE plugin use, code-review accepts, test-fix cycle time, defect escape rate. Builders are present; web chat undercounts them.