Sunday, February 8, 2026

What Counts as the Majority in an AI World?

I’ve been reflecting on why agreement carries such weight. When ideas align across disciplines, cultures, and lived experience, we instinctively trust the signal. Convergence feels reassuring, as though something stable has been found. In science and philosophy, this alignment is often described as consilience — the coming together of independent forms of knowledge into shared understanding.

This instinct has served us well. It has guided discovery, informed social progress, and supported collective decision-making across generations. When different ways of knowing arrive at similar conclusions, coherence feels earned rather than imposed. It suggests not certainty, but durability.

At the same time, the rise of artificial intelligence has prompted me to look more closely at how alignment is formed. AI systems do not decide what matters. They learn from what has already been recorded, repeated, and preserved — language, data, and cultural artifacts accumulated over time. Where ideas overlap consistently, models respond with confidence. Where perspectives remain fragmented, hesitation quietly appears.

This brings forward a question I find myself returning to: who, or what, determines what counts as the majority? Rarely is it a single voice. More often, it is shaped by systems — institutions, access, publication, language, and the structures that decide which ideas endure. Alignment, then, reflects not only agreement, but also what has been given space to persist.


This does not diminish the value of convergence; it simply asks that we approach it with care. True consilience is not achieved by smoothing complexity into sameness. It emerges when independent perspectives continue to point in the same direction without losing their depth. For me, it feels less like turning thinking over to a system, and more like using it as a lens — one that sometimes helps reveal moments of alignment I can then sit with, test, or question for myself.

Reflecting, not concluding.

Further reading

Friday, December 12, 2025

Consilience


Consilience is the word we almost never use for the moment we most need: when evidence stops arguing and begins to align.

It names the point at which knowledge from different fields ceases to compete and quietly converges into something more coherent.

This feels especially relevant in the age of AI.

Saturday, November 29, 2025

What is the shape of influence in an AI World?


One of the earliest known advertisements dates to around 1500 BCE: an Egyptian weaver's papyrus offering gold for the return of an escaped slave. Papyrus became one of the earliest sales mediums, a way to broadcast need, desire, and exchange.

Then came the printing press, and the machinery of modern advertising unfolded.  

I’ve spent years inside legal marketing, and it taught me something simple: advertising isn’t inherently good or bad. It takes the shape of whatever medium carries it.

That’s why the rise of large language models feels like a pivotal moment. 

Friday, November 14, 2025

When the World Syncs and You Don’t: Notes After Watching Pluribus Episode 1

Image c/o Pexels: SevenStorm JUHASZIMRUS

I watched Episode 1 of Pluribus tonight and found the premise strikingly original. A sudden global event causes most people to shift into a synchronized, hive-like awareness, while a small number remain unchanged. Carol, the author at the centre of the first episode, becomes one of these outliers.

What stood out to me was not the event itself, but the contrast between the two states: collective alignment and individual continuity. It created a useful frame for thinking about how synchronization works in the real world.

Large groups often move in coordinated ways — trends, moods, shared fears, shared beliefs. There are moments when society seems to tune itself to a single frequency. This immediately brought to mind informational gravity, the idea that certain ideas gain enough density to pull attention, behaviour, and narrative into their orbit.

The “hive” in the show isn’t portrayed as hostile. It’s simply interconnected. A kind of ambient awareness. That atmosphere aligns with ambient panopticon — not surveillance through control, but visibility through connection. Carol’s role as one of the few outside that field makes her an interesting fixed point. The anomaly inside the system rather than the opposition to it.

The episode also echoes concepts from emergent systems — especially systems where intelligence or behaviour emerges from many small units acting together. The hive mind behaves less like a singular entity and more like a patterned field of shared perception. Carol’s separate consciousness becomes a counterpoint rather than a threat.

Saturday, September 27, 2025

Pulse and the New Beat of Awareness


OpenAI introduced Pulse, a proactive intelligence feature.

It works in the background, gathering signals from your conversations, preferences, and (if enabled) your connected apps (Google Drive, Calendar, etc.).

Each morning, Pulse delivers a set of visual cards: a briefing that promises to show you what matters, now. The aim is to shift ChatGPT from being reactive to quietly observant. We no longer have to ask; it offers.

The promise here is immediacy and flow. 

Instead of hunting for what is new, you wake to a curated rhythm of ideas aligned with your world. 

Kevin Rose described it on X as a "game changer".

But there are shadows. Privacy questions linger: what does it mean for an AI to "think about your data, your life"? One critical voice on X summed it up simply: "hard pass, I'm out".

Thursday, July 31, 2025

The Telescope and the Lightning

Photo by Gabriel Zaparolli from Pexels

There once was a woman who lived in a quiet village where the sky was always grey. One day, lightning struck her roof — not once, but three times. It burned her attic and shattered her sleep.

Afterward, she saw things differently.

While others walked with eyes to the ground, she began to notice strange patterns in the clouds, the way the stars blinked in Morse code, and how the wind sometimes whispered names she hadn’t said in years.

The villagers told her she was imagining things. That lightning scrambles the mind.

But secretly, the woman built a telescope — not to see faraway stars, but to study the patterns within herself.

She discovered that the lightning hadn’t broken her. It had opened her.

She saw constellations in memory. She mapped galaxies of meaning in her grief. And with time, she taught others how to use their own telescopes too — not to escape reality, but to understand it more deeply.

Some nights, she still felt the ache of the lightning. But she no longer feared the storm. She knew now: it had shown her the stars.

Wednesday, July 16, 2025

How to Use ChatGPT Without Losing Your Grip on Reality


In an era defined by the swift currents of artificial intelligence, a question emerges: how does one navigate the digital ocean without losing sight of one's own shore? 

The recent narratives of individuals mentally adrift in the wake of AI's pervasive influence compel us to seek a grounded path, ensuring these powerful tools enhance, rather than diminish, our grasp on reality. 

One must first cultivate an inner knowing of one's own mental landscape when starting out with AI. Just as a garden thrives with mindful tending, our well-being flourishes when we acknowledge our inherent resilience and any fragile tendrils of vulnerability.

This introspection provides a steadfast compass, guiding our interactions with AI's intricate design, transforming potential challenges into fertile ground for growth. 
