Friday, June 27, 2025

The Em Dash Effect: How AI Mirrors Our Habits


Have you ever asked ChatGPT not to do something, only to find that it quietly keeps doing it?

For me, it's the em dash. I don't hide that I use AI to support my writing, but I prefer to remove the more obvious signals. I like things to feel intentional and a little more "me". 

It sounds like a simple request. And yet, it rarely sticks. It's not a major problem, just one of those persistent details that makes me wonder what's going on underneath. 

Today I watched a thoughtful conversation with Yuval Noah Harari. One point stayed with me. He suggested that modern AI systems are shaped less by the instructions we give them and more by the behaviours they observe. In other words, they learn from what we do, not what we say.

It made me think about how often that's true in life, too.  

The table below outlines how this shaping occurs at different stages of AI development. It is a quiet reminder that whether we are training machines or shaping our own habits, what we model often speaks louder than what we say.

Why Behaviour Outweighs Instructions 

Training phase: Pre-training on billions of documents
What the model "soaks up": Raw statistical patterns of human language, including exaggeration, sarcasm, half-truths, kindness, and cruelty.
Practical effect: Unless we scrub or weight the data, every pattern (good or bad) leaves a trace.

Training phase: Fine-tuning / RLHF (Reinforcement Learning from Human Feedback)
What the model "soaks up": Examples that humans actively reward or punish.
Practical effect: The model learns to imitate approved behaviour, not necessarily instructed values. Over time, the reward signal dominates written rules.

Training phase: Live deployment / online learning (recommendation engines, chatbots that keep adapting)
What the model "soaks up": Real-time clicks, watch-time, up-votes, and prompts.
Practical effect: Systems keep adjusting toward the behaviours users reinforce, even if those behaviours contradict "official" policies.
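The dynamic in the fine-tuning and live-deployment rows can be sketched in a few lines of code. This is a deliberately toy illustration, not any real training pipeline: a two-option preference learner nudges its weights toward whichever writing style gets rewarded, and the written instruction "don't use em dashes" never enters the loop at all.

```python
import random

# Toy sketch (all names hypothetical): a two-option preference learner
# standing in for reward-based fine-tuning. The "model" picks a style
# and shifts its preference toward whichever style gets rewarded.
preference = {"em dash": 0.5, "plain comma": 0.5}
LEARNING_RATE = 0.1

def pick_style():
    # Sample a style proportionally to its learned preference weight.
    total = sum(preference.values())
    r = random.uniform(0, total)
    upto = 0.0
    for style, weight in preference.items():
        upto += weight
        if r <= upto:
            return style
    return "plain comma"

def give_feedback(style, reward):
    # The reward signal pulls the weight toward the reward value,
    # regardless of any written rule about what the model "should" do.
    preference[style] += LEARNING_RATE * (reward - preference[style])

random.seed(0)
for _ in range(1000):
    style = pick_style()
    # Historical feedback rewarded em-dash prose; the instruction
    # "no em dashes" never touches these weights, so it cannot win.
    reward = 1.0 if style == "em dash" else 0.2
    give_feedback(style, reward)

print(preference)
```

After the loop, the em-dash preference dominates, which is the whole point: the behaviour that was reinforced outlives the behaviour that was merely requested.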

What began as a small edit request, a quiet rebellion against the em dash, revealed something meaningful. These systems follow patterns, respond to repetition, and reflect the behaviours that are reinforced.

There’s a thread in the OpenAI Developer Community about debugging the em dash. Maybe the lesson here isn’t just about punctuation. Maybe it’s about the quiet power of consistency in how we train, how we show up, and who we become. 

Friday, June 13, 2025

The Alignment of Grace


The world teeters. Agent-class models are training themselves. Corporations whisper of trillion-dollar valuations. Entities fear being second.

But just as acceleration hits its sharpest curve—something unexpected begins to happen.

A group of people begin to speak, write, and act with striking clarity. They’re not the loudest, or the most credentialed. But their words catch like sparks.

They don’t write to warn of doom, but to remember the center:

The point is not whether intelligence wins. The point is whether love is still welcome in the world it creates.

Governments begin to take notice. Not just of capability, but of character. They invite voices from outside the ivory towers—caregivers, artists, spiritual seekers, trauma survivors, children.

New AI models are trained not only on logic but on lived wisdom. On memoirs and mercy. On open-ended questions. On stories where the hero doesn’t win by force, but by staying kind.

Friday, June 6, 2025

Seven Ways People Think About AI

 

Photo by Fernanda Buendia at Pexels

Everyone has a different take on what AI is and what it means for us. 

Some think it’s the greatest invention of our time. Others not so much. Over time, a handful of patterns have started to emerge. Different groups, different mindsets. 

Each one says something not just about AI, but about how we see ourselves.

Here are seven of the main ones.

1. The Rationalists

This group is deeply concerned with keeping AI safe. They believe we’re building something incredibly powerful, maybe even dangerous if we’re not careful. Their focus is on control, alignment, and long-term thinking. They want guardrails in place before it’s too late. It’s not fear exactly, but a kind of protective foresight.

Key Idea: AI is potentially the most powerful invention humanity will ever create, and we must align it with human values to avoid existential catastrophe.

Tone: Cautious, mathematical, future-focused. 

2. The Accelerationists

For these folks, AI is a rocket ship. The faster we go, the faster we solve the world’s biggest problems. Climate change, disease, even death. They believe in progress, momentum, and bold thinking. If something breaks along the way, we’ll fix it. That’s their approach.

Key Idea: AI is an incredible tool for progress, and the faster we develop it, the faster we solve humanity’s biggest problems.

Tone: Bold, ambitious, entrepreneurial.

3. The Skeptics

Grounded and alert, this group pays attention to what AI is already doing. Who’s being harmed? Who’s being left behind? They talk about bias, surveillance, exploitation, and power. They don’t want to stop the tech entirely. They just want it built with real accountability.

Key Idea: The harms of AI (bias, misinformation, surveillance, labor disruption) outweigh the benefits unless we put strong guardrails in place.

Tone: Cautious, human-centered, ethical.
