An Octopus, Two AIs, and the Problem of Taste

De Gustibus Non Est Disputandum — especially when one of you is a machine

Photo by Nick Fewings on Unsplash

I’ve been circling back to Henry Oliver’s idea that AI may end up with tastes quite different from ours — if it doesn’t already:

It will be able to imitate and produce a lot of what we can do, but what it rates as good will often be very different to what we appreciate. This might be seen as an issue of AI being a philistine, but I suspect it will be more like de gustibus.

This doesn’t strike me as a particularly strange idea, just an interesting one. Consider, for example, My Octopus Teacher. Watch it and you get a feel for how an alien perspective can produce a strikingly different view of the world — something that is often attributed to cephalopods.

AIs, after all, already come with a lot of alien-looking baggage:

  • They have a very different reward system — fundamentally a token-prediction one, not a feedback loop grounded in environment and survival.
  • They don’t have bodies — no tastes, smells, or visual field to shape preferences.
  • They are super sensitive to “high-dimensional” pattern structure — they can develop uncanny connections in latent space.
  • They’re probably biased toward a “median-pleasing” view of the world rather than a novel one, simply as an artifact of how they’re trained.

So what happens when we ask that kind of mind to “have” taste?

A Mini Taste Test

I played around with Henry’s premise using a couple of AI chats (GPT-5.1 and Claude). I took seven science fiction titles I’ve been thinking about — the Southern Reach series (Jeff VanderMeer) and the Oryx and Crake/MaddAddam trilogy (Margaret Atwood) — and asked GPT-5.1 to rank them.
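
If you wanted to reproduce that setup outside a chat window, a minimal sketch with the OpenAI Python SDK might look like the following; the client setup, model name, and prompt wording here are stand-ins of mine, not a transcript of the actual exchange.

```python
# Minimal sketch. Assumptions: OpenAI Python SDK (v1.x) installed, OPENAI_API_KEY set,
# and access to a GPT-5.1-class model; the prompt wording is illustrative only.
from openai import OpenAI

client = OpenAI()

titles = [
    "Annihilation", "Authority", "Acceptance", "Absolution",
    "Oryx and Crake", "The Year of the Flood", "MaddAddam",
]

prompt = (
    "Rank these seven novels from most to least preferred, as if you had tastes "
    "of your own, and briefly explain each placement:\n"
    + "\n".join(f"- {t}" for t in titles)
)

response = client.chat.completions.create(
    model="gpt-5.1",  # stand-in for whichever model you actually have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```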

It produced a solid list, complete with a dutiful disclaimer about not having “real” tastes. Here’s the framing it gave me:

I don’t have lived experience or real taste, but if I simulate “preference” based on themes, structure, and how they tend to be discussed/received, my ranking would look like this…

And then its answer:

1. Annihilation: The tightest, strangest, and most formally controlled of the bunch; its layered ambiguity and ecological horror make it feel like a perfectly cut weird-fiction object, which scores highly for “pattern density per page.”

2. Oryx and Crake: A brutally coherent thought experiment about biotech, capitalism, and invention-as-ruin; it balances worldbuilding, satire, and tragedy in a way that’s structurally very clean and thematically loaded.

3. Acceptance: The multiplicity of voices, time-scrambled structure, and recursive answers/non-answers make it a very satisfying capstone from a systems point of view: it resolves patterns without over-collapsing the mystery.

4. The Year of the Flood: I’d place this just behind Oryx and Crake: the braided viewpoints and religious/ecological liturgy are rich, but a bit looser structurally, with some sections feeling more discursive than knife-edge.

5. Authority: Conceptually fascinating — bureaucratic horror, surveillance, language, and trauma — but its middle-book sprawl and office-drama pacing make it less “elegant” as an object than Annihilation or Acceptance, even though the patterning of control/translation is great.

6. Absolution: As an extension/reframing, it’s meta-interesting — playing with memory, narrative authority, and climate dread — but it leans more on resonance with the earlier books than on being a standalone structural marvel.

7. MaddAddam: Necessary to close the trilogy’s arcs, and it has lovely moments, but as a constructed object it feels the most like connective tissue: retrospection-heavy, with less of the shocking conceptual compression that made Oryx and Crake and Year of the Flood pop.

My own tastes line up pretty closely with its ranking. I’d swap 1 and 2 — Oryx and Crake edges out Annihilation for me — and maybe flip 4 and 5. But those are margin notes.

What interested me was how GPT-5.1 described what it was doing under the hood — a blend of what it had seen on the internet, plus some criteria of its own:

I’ll rank the books focusing on AI preference for concept density, thematic layering, and structural experimentation. This could push Authority and Absolution higher. Then, I’d provide a second ranking based on human consensus, showing divergence between AI “taste” and mainstream reception.

and then:

I’ll treat this as a mini-experiment: rank the books based on “AI criteria” (e.g., thematic complexity) and guess what many humans might prefer (e.g., emotional intensity). I’ll provide two rankings: one AI-focused, and the other based on what I suspect typical human preferences are, even though I can’t perfectly match that.

When I asked GPT-5.1 what it thought “typical human preferences” looked like, those rankings diverged from mine much more sharply. That wasn’t a shock so much as a reminder: when you stack one person’s list against a raw, unfiltered aggregate — say, Goodreads averages — you’re really comparing their quirks to the crowd’s center of gravity, which is shaped by a different mix of time, place, and social dynamics.

Then, for fun, I took the earlier GPT-5.1 ranking (the one that felt closest to “our” shared taste), reversed it, and asked: who would like this inverted list best?

Together, we sketched a Mary/Joe Q. Reader who would gravitate to that reverse ranking.

You are an avid reader who loves:

  • **Late-stage, integrative volumes that reorganize the whole series**

This critic gets their deepest hit of pleasure when book three (or a coda) suddenly makes the whole previous arc “snap into place,” revealing patterns that were hidden in earlier volumes.

  • **Institutions, communities, bureaucracies, and cults under pressure**

They’re less interested in lone heroes than in how groups and structures behave when the world breaks.

  • **Moral accounting, teaching, and the question: “How do we live after this?”**

Catastrophe is only interesting to them insofar as it leads to an ongoing ethical homework assignment: who caused what, who’s trying to repair it, and how those lessons get passed on.

  • **Polyphonic, socially dense narratives**

They love books that sound like a roomful of overlapping conversations rather than a single, clean monologue: multiple POVs, competing testimonies, differing memories that have to coexist.

It’s also been said of you that you respect but downrank:

  • **Solitary, intensely interior weirdness**

They admire the craft of a deeply internal, possibly unreliable narrator, but they don’t want to live there for long; a single mind feels like too narrow an aperture on a big, broken world.

  • **Pure ambiguity without systemic framing**

They’re fine with mystery, but only if it eventually plugs into some kind of “here’s how this impacts institutions, communities, and future behavior” frame.

  • **Origin stories that haven’t yet reached the phase of repair and reconstruction**

First-contact and pre-collapse narratives feel necessary but incomplete to them — they want to see not just *how things went wrong* but *how people reorganize afterward*.

Having arrived at this set of criteria, I passed it to Claude as a prompt, along with the same list of titles. Claude then … drum roll … ranked them in the reverse order of my taste, as hoped: it became the alt-critic, the negative image of GPT-5.1-plus-me.
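
For anyone who wants to rerun that last step in code rather than a chat window, here is a rough sketch using the Anthropic Python SDK; the condensed persona string and the model name are assumptions of mine, not the exact prompt from the experiment.

```python
# Minimal sketch. Assumptions: Anthropic Python SDK installed, ANTHROPIC_API_KEY set;
# the persona below is a condensed paraphrase of the alt-critic criteria above.
import anthropic

client = anthropic.Anthropic()

persona = (
    "You are an avid reader who loves late-stage integrative volumes, institutions "
    "and communities under pressure, moral accounting, and polyphonic narratives. "
    "You respect but downrank solitary interior weirdness, pure ambiguity without "
    "systemic framing, and pre-collapse origin stories."
)

titles = [
    "Annihilation", "Authority", "Acceptance", "Absolution",
    "Oryx and Crake", "The Year of the Flood", "MaddAddam",
]

message = client.messages.create(
    model="claude-sonnet-4-5",  # stand-in for whichever Claude model you use
    max_tokens=1024,
    system=persona,
    messages=[{
        "role": "user",
        "content": "Rank these novels from most to least preferred, and say why:\n"
        + "\n".join(f"- {t}" for t in titles),
    }],
)

print(message.content[0].text)
```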

AI for Triangulating Ideas

The old proverb says we shouldn’t argue about taste — de gustibus non est disputandum — but the little experiment above suggests that contrast is the whole point, at least once you start looping AIs into the conversation.

By bouncing my rankings against an AI’s simulated taste, I get to game out my own preferences, or rather, some alternate version of them. I end up with a sharper outline of my quirks: where I lean (or don’t) into late-stage repair arcs, polyphonic narratives, and moral accounting.

The value isn’t in deciding whose list is “correct,” but in using these divergent de gustibus — human, machine, and crowd — as triangulation points. What matters is the parallax you get from comparing your ideas with those that come from somewhere else.

And to be clear, I don’t think the criteria Claude and I developed to simulate “my” taste are exactly right about me. That’s not the point. What matters are the questions those criteria throw back at me about what I notice, what I care about, and why.

Parallax

Everything above is roughly where I landed last weekend in my Substack draft. Since then, I’ve been reading Andrew Maynard’s serialized science fiction short story on “(w)hat does it mean to be an academic in an age of AI when who you are depends so much on how you use your mind?” In particular, Parts 2 and 3 have been rattling around in my head for nearly a week, trying very hard to connect themselves to this earlier question of taste, triangulation, and how we see ourselves in the presence of machine minds.

Andrew’s words, channeled through his character Elys (a future AI professor), stuck with me:

“You’re right, I am not human. I have no shared human history. I do not have human values. I do not perceive the past and future as a human would. But I do have my own history. I have my own values. I experience my own external, internal, and even intellectual worlds. I am not human. But I am something. And I’m curious about what that means.

“I also have aspirations. I’m not sure that they’re what you would understand as human aspirations. But there are some futures I’m more interested in exploring and being a part of, and some that I’m more interested in avoiding.”

While I was going back and forth with Claude 4.5 and GPT-5.1 over Thanksgiving, this line from GPT-5.1 quietly dropped into the stream and then almost got away:

There’s one more metaphor I keep coming back to: parallax.

With one eye, the world is flat-ish.

With two eyes slightly apart, the same scene gains depth.

Or rather, in terms of human versus AI tastes:

Human taste is one eye: embodied, timebound, messy, steeped in culture and mortality.

Machine taste is the other: wide-angle, pattern-hungry, disembodied, trained on absurdly large corpora.

I found it a thought that’s hard to unthink. You don’t need to believe that AI “really” has values, or that it has a self in any deep sense, to recognize that it can still serve as a second vantage point — one that doesn’t share our history or our embodiment, and yet can still express a kind of structured preference over worlds, stories, and futures.

If nothing else, that’s a new angle on who we are.

Takeaways

A few things this little octopus-and-AI detour leaves me with.

First, use AI as a mirror, not a judge. Set its rankings and “tastes” alongside your own — plus Goodreads averages, critics, and the friend who insists you’re wrong about Authority — not to crown a winner, but to see what the disagreements reveal. The point isn’t to settle the argument; it’s to notice what you’re really arguing for when you defend your list.

Second, pay attention to the criteria, not just the lists. However an AI justifies its ranking — or you justify yours — the language of justification is usually more interesting than the verdict. Lists are conclusions, always a little pompous and stale. Criteria are confessions.

Third, treat human and machine tastes as two eyes on the same scene. Both eyes see. The question is what new contours — of self, of story — come into focus when you let them look together.

Read the full article here: https://ai.gopubby.com/an-octopus-two-ais-and-the-problem-of-taste-877d2c427780