Implementation Friction: The Feature That Saves Your AI Pilot

TL;DR: Pilots don’t fail because the model is weak; they fail because Tuesday doesn’t change. Treat “friction” (data cleanup, tiny SOPs, permission fixes, and short trainings) as the work that creates value. Prove impact with a few hard KPIs, then scale.

[[file:AI_&_Automation_in_AEC.jpg|500px]]

Photo by Imagine Buddy on Unsplash

Why “friction” is the value-creation step

If Part 1 was about choosing a problem and a KPI, Part 2 is about making the work feel different next week. New tools only create value when they change a task, a handoff, or a decision. That change looks like friction: renaming files so people can find them, agreeing on who approves a detail, scheduling two 45-minute sessions to learn the new step, and tightening a checklist so reviews are consistent. It isn’t glamorous! But it’s where results become measurable instead of theoretical.

Most organizations try to bolt AI onto yesterday’s workflow and expect a miracle. The pattern in recent reporting is blunt: ~95% of enterprise GenAI initiatives show no measurable P&L impact, largely due to poor integration and skipped process change, not model horsepower (Forbes). The programs that do succeed redesign workflows and let metrics, not novelty, decide what sticks; McKinsey’s 2025 survey identifies workflow redesign as the single biggest driver of EBIT impact from GenAI (McKinsey & Company).

Think of friction as governed change. A light spine (Govern → Map → Measure → Manage) keeps risk low while you rewire work, which is exactly how the NIST AI Risk Management Framework Playbook recommends operationalizing AI (NIST). In AEC specifically, adoption and reality still diverge: Bluebeam’s global survey found ~74% of firms report using AI in at least one phase, yet 72% still rely on paper at some point (Engineering.com). That is the classic friction between the tools and Tuesday.

Keep the scope tiny (and real)

Pick one workflow that leaks time. Let’s say: “Revit view → finding external/internal detail references → developing a detail solution.” Write a single sentence that describes the new behavior: “For Project X, we’ll use D.TO to develop high-quality building envelope details.”

Now baseline before you touch anything. Pull three to five recent details and capture how long they took, how many review cycles they needed, and how often they were blocked by missing content or references. If you can’t show the “before,” no one will believe the “after.”

Build only the scaffolding you need

Week one will feel slow. That’s normal. You’re assembling the minimal pieces that remove excuses later: a pilot-approved subfolder with good guidelines, a one-page SOP, working access to D.TO and the company detail libraries, and two short enablement sessions: one to run the flow, one to review and correct. By week two, a different person should complete the same run. If only one expert can do it, you have a demo, not a pilot.

Publish a tiny chart every time your team completes one detail: cycle time per detail, review cycles, and exception rate (see the sketch just below). When people see the line bend (thirty minutes shaved here, one fewer review cycle there), the tool stops being “new tech” and becomes “how we do details.” For exec/board visibility on roles and decision rights, the WEF Oversight Toolkit is a clean add: it maps committee responsibilities and the questions leaders should ask as you scale (World Economic Forum).
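To make that weekly mini chart concrete, here is a minimal sketch of a per-detail log and summary, assuming nothing about D.TO’s data model: the DetailRun record, its field names, and the sample numbers are all hypothetical illustrations, not any tool’s API.

```python
from dataclasses import dataclass

# Hypothetical per-detail log entry; field names are illustrative,
# not tied to D.TO or any other tool's data model.
@dataclass
class DetailRun:
    detail_id: str
    hours_start_to_done: float   # CTD: cycle time per detail
    touch_hours: float           # TTD: runner + reviewer hours
    review_cycles: int           # RCD: submissions before approval
    blocked_by_exception: bool   # missing content, broken reference, etc.
    approved_first_pass: bool    # approved with only minor edits

def weekly_summary(runs: list[DetailRun]) -> dict[str, float]:
    """Aggregate the handful of KPIs the pilot tracks each week."""
    n = len(runs)
    return {
        "details_completed": n,
        "avg_cycle_time_hrs": sum(r.hours_start_to_done for r in runs) / n,
        "avg_touch_time_hrs": sum(r.touch_hours for r in runs) / n,
        "avg_review_cycles": sum(r.review_cycles for r in runs) / n,
        "exception_rate": sum(r.blocked_by_exception for r in runs) / n,
        "first_pass_yield": sum(r.approved_first_pass for r in runs) / n,
    }

# Example week-1 data (made up for illustration).
week1 = [
    DetailRun("D-101", 2.6, 2.1, 2, False, False),
    DetailRun("D-102", 2.2, 1.8, 1, False, True),
    DetailRun("D-103", 3.1, 2.4, 2, True, False),
]
for kpi, value in weekly_summary(week1).items():
    print(f"{kpi}: {value:.2f}")
```

Posting those few numbers each week is the whole “mini chart”; a shared spreadsheet works just as well as code, as long as the same fields are logged for every detail.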
Common traps (and minimum fixes)

Your firm’s libraries are messy? Don’t rebuild them mid-pilot. Put twenty solid details as “pilot-approved” onto D.TO’s Company Detail Library and expand later. Exceptions multiply? Tag the top three causes and fix one per week. None of this is flashy, but each fix ties directly to the KPI you chose.

When to scale honestly

Scale when the metric moved on real work, two different people ran the flow successfully, and exceptions are shrinking. If the metric is flat after two steady weeks, narrow the scope and try again, or walk away. Pretending helps no one.

Pilot scope (tiny, but real)

* Workflow: Revit view → finding external/internal detail references → developing a detail solution.
* Project: one live project with 10–15 envelope details for the pilot.
* Team: 1 runner (designer/intern), 1 reviewer (senior), 1 owner (PA/PM).
* One-liner: “For Project X, we’ll use D.TO to develop high-quality building envelope details.”

KPIs that actually move the business

Pick 3–5. Baseline before week 1; target by week 4.

* Cycle Time per Detail (CTD): start → completed. Target: –30% to –40%.
* Touch Time per Detail (TTD): runner + reviewer hours. Target: –35%.
* Review Cycles per Detail (RCD): submissions to approval. Target: –1.
* First-Pass Yield (FPY): % approved with only minor edits. Target: ≥70%.
* Exception Rate (ER): % blocked by missing content or errors. Target: ≤10% by week 3.
* Adoption Rate (AR): % of eligible details run with the pilot method. Target: ≥80%.

Four-week plan (light but disciplined)

Week 0 — Prep
Pick a live project. Upload relevant pilot-approved company detail references to D.TO. Confirm access to D.TO and content libraries. Draft a one-page SOP (steps, owners, acceptance criteria, exception log).

Week 1 — First runs
Complete 3–5 details end-to-end. Log CTD, TTD, RCD, and exceptions. Publish a mini chart on the completion of those details with one lesson learned.

Week 2 — Repeatability
A second runner completes 4–5 details. Fix the top two blockers (naming, missing typicals, reviewer drift). Targets: ER ≤15%, FPY ≥60%.

Week 3 — Throughput
Maintain a steady cadence. Targets: CTD –25% to –30%, RCD –0.5, AR ≥70%.

Week 4 — Proof
Reach 10–15 total details. Targets: CTD –30% to –40%, TTD –35%, RCD –1, FPY ≥70%, AR ≥80%, ER ≤10%. Decide to scale, iterate, or stop.

Scale/stop criteria

Scale when KPI targets are hit on live work, two different runners succeed, and exceptions shrink two weeks in a row. Stop or rescope if metrics are flat after two consistent weeks, or if the flow only works with an expert in the room.

Quick ROI sketch (for the slide)

30 details × baseline 2.5 hrs = 75 hrs. At –35% TTD, that is ≈26 hrs saved. At $120/hr, that is $3,120 of labor value this month on one workflow. If the pilot slice costs $1,500, the net is $1,620. Scaling multiplies roughly linearly.
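The same arithmetic, written out as a tiny function so the slide numbers can be re-run with your firm’s own inputs. This is a sketch: pilot_roi is a hypothetical helper, and the figures are the article’s illustrative assumptions, not benchmarks.

```python
def pilot_roi(details: int, baseline_hrs: float, touch_time_reduction: float,
              rate_per_hr: float, pilot_cost: float) -> dict[str, float]:
    """Back-of-the-envelope ROI for one pilot workflow."""
    baseline_total = details * baseline_hrs            # hours at the old pace
    hours_saved = baseline_total * touch_time_reduction
    labor_value = hours_saved * rate_per_hr            # what those hours are worth
    return {
        "baseline_hrs": baseline_total,
        "hours_saved": hours_saved,
        "labor_value": labor_value,
        "net": labor_value - pilot_cost,
    }

# The article's example: 30 details x 2.5 hrs, -35% TTD, $120/hr, $1,500 pilot cost.
print(pilot_roi(30, 2.5, 0.35, 120, 1500))
# -> 75 hrs baseline, 26.25 hrs saved, $3,150 labor value, $1,650 net
#    (the article rounds the savings to 26 hrs first: $3,120 labor value, $1,620 net)
```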
What to do this week

* Map one workflow you’ll actually change (e.g., Revit view → D.TO suggestion → linked typical → sheet). Write the one-line behavior you expect next week.
* Baseline once from 5 similar details: CTD, TTD, RCD.
* Set targets for 4 weeks: CTD –30% to –40%, TTD –35%, RCD –1, FPY ≥70%, ER ≤10%, AR ≥80%.
* Stand up the friction: create the pilot folder, fix permissions, publish a 1-page SOP, and book two 45-minute sessions (one to run the flow, one to review and correct).
* Prove repeatability: have a second person run the flow in week 2. Post a mini chart (CTD, RCD, ER) and one lesson learned.
* Decide in week 4: if KPIs hit and exceptions trend down, scale; if flat two weeks in a row, rescope or stop.

Coming next (Part 3): Divergence vs. Convergence

When to let generative tools explore, when to optimize toward a decision, and how to keep creativity without losing time, plus a simple guardrail to avoid “100 options, no decision.”

Notes / Sources

* MIT / NANDA (2025): ~95% of GenAI pilots show no measurable P&L impact; the gap is workflow integration and change, not model weakness. (Forbes)
* McKinsey (2025 State of AI): workflow redesign has the biggest effect on EBIT impact from GenAI. (McKinsey & Company)
* NIST AI RMF Playbook: operationalizes Govern–Map–Measure–Manage for production AI. (NIST)
* AEC adoption context: ~74% of firms use AI in at least one phase (Bluebeam); 72% still rely on paper documents in parts of delivery. (Engineering.com)

Read the full article here: https://medium.com/@juhun.lee_42657/ai-automation-in-aec-part-2-af98a0190607