
AI & Automation in AEC — Part 3



Parallel-coordinates diagram with ingredient icons. AI-generated (OpenAI image model) by D.TO, Nov. 2025.

Divergence Done Right: Two Very Different Ways to Generate Options

TL;DR: Divergence isn’t one thing. Generative Design (rules/parametric) explores a space you define; Generative AI (foundation-model) proposes patterns you didn’t expect. One trades in legibility, the other in surprise. Treating them as the same machine is how teams end up with 100 options and no decision. If Part 2 said “Tuesday must change,” Part 3 says “Monday must disagree”: about constraints, about taste, and about what counts as a credible leap.

You can’t ask a tool to dream and decide at the same time

AEC culture loves the demo: a flourish of variations, an ocean of imagery, the promise that “the best” will float to the top. But divergence is not about finding “the best.” It’s about revealing the space: what’s possible, what’s plausible, and what’s off-limits. When we pretend the same engine that dreams can also prove, we create decision debt: a stack of pretty candidates with no path to a defensible choice, silently transferred to reviewers, coordinators, and construction administration.

Generative Design: the legibility play

Generative Design (GD) is disciplined search. You draw a box with parameters and constraints and the system walks the inside of that box faster than your team ever could. Its virtue is legibility: every option has a traceable lineage from constraint to configuration. Its vice is taste ossification: the box you drew last year becomes the box you live in this year. Optimizing yesterday’s taste can masquerade as innovation while quietly narrowing the future. GD also forces an uncomfortable admission: if your rules are wrong, your portfolio will be consistently wrong. The “error bar” isn’t in the algorithm; it’s in the encoded judgment: what you chose to model, how you measured “better,” which constraints you treated as sacred and which you treated as elastic. That’s governance disguised as tooling. Good GD makes that judgment auditable: you can replay an option from inputs, show why it beats a baseline, and explain what it sacrifices.
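To make “disciplined search” concrete, here is a minimal sketch of the pattern rather than any vendor’s tool: a toy massing study where the plate sizes, storey counts, area target, and envelope-area score are all invented for illustration. Notice that every line of encoded judgment is visible, and that any option can be replayed exactly from its saved inputs and rule version.

```python
import itertools
from dataclasses import dataclass, asdict

RULESET_VERSION = "massing-rules-v1"   # hypothetical rule version, saved with every option

# The "box" we drew: every value here is an opinion about what matters.
WIDTHS_M = range(20, 61, 10)           # candidate floor-plate widths
DEPTHS_M = range(12, 25, 3)            # candidate floor-plate depths
STOREYS = range(3, 16)                 # candidate storey counts
TARGET_GFA_M2 = 12_000                 # brief's gross-floor-area target
FLOOR_HEIGHT_M = 3.5

@dataclass(frozen=True)
class Option:
    width_m: int
    depth_m: int
    storeys: int

    def gfa(self) -> float:
        return self.width_m * self.depth_m * self.storeys

    def facade_area(self) -> float:
        # Proxy objective: less envelope, less cost. An opinion, not a law.
        return 2 * (self.width_m + self.depth_m) * self.storeys * FLOOR_HEIGHT_M

def feasible(o: Option) -> bool:
    # Hard constraints: hit the brief's area within 5% and keep plates daylight-friendly.
    return abs(o.gfa() - TARGET_GFA_M2) / TARGET_GFA_M2 <= 0.05 and o.depth_m <= 18

def search() -> list[dict]:
    candidates = [Option(w, d, n) for w, d, n in itertools.product(WIDTHS_M, DEPTHS_M, STOREYS)]
    kept = [o for o in candidates if feasible(o)]
    ranked = sorted(kept, key=Option.facade_area)
    # Lineage "receipt": inputs plus rule version are enough to replay any option exactly.
    return [{"inputs": asdict(o), "ruleset": RULESET_VERSION,
             "gfa_m2": round(o.gfa()), "facade_m2": round(o.facade_area())}
            for o in ranked]

if __name__ == "__main__":
    for receipt in search()[:5]:
        print(receipt)
```

The point is not the toy geometry; it is that the whole portfolio is a deterministic function of the box you drew, which is exactly why a wrong box produces consistently wrong answers.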

Generative AI: the surprise play

Generative AI (GenAI) isn’t searching your box; it’s hallucinating new boxes from patterns learned across data. Its virtue is surprise: it surfaces assemblies, sequences, and compositions no parametric system would think to test first. Its vice is confident ambiguity: the prose reads right, the diagram looks right, until someone notices the vapor barrier has wandered to the wrong side of the insulation in a cold climate. GenAI tests your institutional memory. If your firm cannot articulate what “good” means beyond vibe and precedent, you’ll confuse novelty with quality. If you can articulate it, GenAI becomes a provocative colleague: it challenges defaults without breaking guardrails. But that bargain depends on provenance (what data informed the suggestion) and accountability (which human accepted the risk). Without those, surprise becomes noise.

The mirage of “more options”

Option count is not creativity; it is cognitive load. Image walls don’t increase intelligence; they push uncertainty downstream. Divergence creates value only when you can say which options cannot exist (physics, code, or cost), which should not exist (quality or brand), and which might exist and deserve the dignity of a real check. “More” is a vanity metric. “Fewer, with receipts” is a cultural shift.

What “good” looks like for each kind of automation

Good GD exposes the trade space with receipts. Inputs are saved and scores are explainable. A second team can reproduce the option a month later and reach the same conclusion. The output functions as a map rather than a slot machine.
Good GenAI changes the conversation by adding credible leaps your rules never considered. It references precedent in your institutional library and states risk in a sentence. It does not pretend to be a checker; it provokes and justifies.

On taste (the part we pretend is objective)

Divergence drags taste into daylight, where it belongs. Every rule is an opinion about what matters; every prompt is a bet about what to ignore. AEC has long treated taste as an apprenticeship secret, transmitted via redlines and folklore. Automation refuses that privacy. It serializes taste into parameters and tokens, which means disagreements can finally be argued instead of inherited. That’s progress. It also means you will need places to disagree on Monday before you ask Tuesday to change.

Data diet, or why provenance matters

Generative systems are only as honest as their data diet. If your GD is built on mis-specified constraints, you will optimize toward the wrong hill. If your GenAI draws from public imagery that romanticizes impossible joints, you will normalize unbuildable ideas. Provenance is not academic; it is a design control. Save seeds, prompts, rule versions, and references alongside each option. The ability to say “this came from here” is how you keep creative search from eroding professional duty.
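One way “save the receipts” could look in practice is a small record written next to every option. The field names, the example model and seed, and the JSON layout below are illustrative assumptions, not a standard schema; the habit is what matters.

```python
import hashlib
import json
from datetime import datetime, timezone

def option_receipt(option_id: str, source: str, inputs: dict,
                   references: list[str], notes: str = "") -> dict:
    """Bundle everything needed to say "this came from here" for one option.

    `source` might be a ruleset version for GD, or a model plus prompt plus seed
    for GenAI. All field names here are illustrative, not a standard.
    """
    payload = {
        "option_id": option_id,
        "source": source,            # e.g. "massing-rules-v1" or "image-model-x, seed=1234"
        "inputs": inputs,            # parameters, prompt text, constraint versions
        "references": references,    # precedents in the office library this leans on
        "notes": notes,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    # A content hash makes silent edits to the record detectable later.
    payload["fingerprint"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()[:16]
    return payload

receipt = option_receipt(
    option_id="opt-042",
    source="image-model-x, seed=20251104",                     # hypothetical GenAI source
    inputs={"prompt": "brick screen facade, deep reveals, cold climate"},
    references=["library/facades/2019-clinic-screen"],
    notes="Vapor barrier location unverified; flag for envelope review.",
)
print(json.dumps(receipt, indent=2))
```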

Evidence beats aura

The industry still rewards aura: the goosebump image, the heroic pitch. Divergence tempts aura because it performs creativity on demand. But we do not deliver a feeling; we deliver buildings. For GD, evidence is comparative metrics on objectives that matter. For GenAI, evidence is credible lineage: what this resembles in your library, why it might work here, and where the risk sits. The test is simple: can a skeptical reviewer, unfamiliar with the tool, understand why this option deserves the next hour?

Ethics we keep skirting

When a model proposes an assembly, who is the author? When a ruleset “designs” a façade, whose taste did it encode? If a detail fails, whose judgment failed: the prompter’s, the rule author’s, or the approver’s? Automation doesn’t create new responsibility; it makes existing responsibility legible. That’s a feature. It also means your practice will need updated habits: record acceptance, record rationale, and record exceptions. Not to cover yourself, but to teach the next person why.
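Recording acceptance can be as light as one entry per approved option, paired with the provenance receipt above. The fields below are illustrative only; the aim is that the next person can read who took the risk, why, and what was knowingly waived.

```python
from dataclasses import asdict, dataclass, field
from datetime import date

@dataclass
class Acceptance:
    """Who accepted an option, their rationale, and the exceptions they allowed."""
    option_id: str
    accepted_by: str
    rationale: str
    exceptions: list[str] = field(default_factory=list)   # rules knowingly waived
    decided_on: str = field(default_factory=lambda: date.today().isoformat())

log = [
    Acceptance(
        option_id="opt-042",
        accepted_by="project architect",
        rationale="Best daylight/envelope trade-off among feasible options.",
        exceptions=["Plate depth exceeds office daylight guideline by 0.5 m."],
    ),
]
print([asdict(entry) for entry in log])
```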

Incentives, not features, decide outcomes

Under deadline, teams prefer fast options to traceable logic. Vendors prefer glossy novelty to boring reproducibility. Owners prefer more images to fewer decisions. None of this is a technology problem; it is contracts and incentives. Until we reward explainable breadth in GD and bounded originality in GenAI, we will keep paying for divergence twice, once in design hours and again in coordination rework.

The cultural move

If Part 2 argued that Tuesday must change, Part 3 argues that Monday must disagree. Real divergence requires argued taste: why these constraints and not those; why these leaps and not others. Generative Design gives you the grammar; Generative AI gives you the poetry. Publishing both without adjudication is not bravery; it is abdication.

The handoff you can’t skip (even in a divergence-only world)

Great divergent work invites its own judge. Each set, parametric or generative, should end with a short, human note on what the generator got wrong and what boundary the team discovered. That note is the bridge to Part 4’s convergence. If there’s nothing to say, you didn’t diverge; you browsed.


Coming next (Part 4): Convergence without regret: objective hierarchies, sensitivity checks, and human-in-the-loop sign-offs so the option you pick is fast and defensible across performance, cost, and constructability.

Read the full article here: https://medium.com/@juhun.lee_42657/ai-automation-in-aec-part-3-9fd8f70225c7