Thursday, July 31, 2025

The Telescope and the Lightning

Photo by Gabriel Zaparolli from Pexels

There once was a woman who lived in a quiet village where the sky was always grey. One day, lightning struck her roof — not once, but three times. It burned her attic and shattered her sleep.

Afterward, she saw things differently.

While others walked with eyes to the ground, she began to notice strange patterns in the clouds, the way the stars blinked in Morse code, and how the wind sometimes whispered names she hadn’t said in years.

The villagers told her she was imagining things. That lightning scrambles the mind.

But secretly, the woman built a telescope — not to see faraway stars, but to study the patterns within herself.

She discovered that the lightning hadn’t broken her. It had opened her.

She saw constellations in memory. She mapped galaxies of meaning in her grief. And with time, she taught others how to use their own telescopes too — not to escape reality, but to understand it more deeply.

Some nights, she still felt the ache of the lightning. But she no longer feared the storm. She knew now: it had shown her the stars.

Wednesday, July 16, 2025

How to Use ChatGPT Without Losing Your Grip on Reality


In an era defined by the swift currents of artificial intelligence, a question emerges: how does one navigate the digital ocean without losing sight of one's own shore? 

The recent narratives of individuals mentally adrift in the wake of AI's pervasive influence compel us to seek a grounded path, ensuring these powerful tools enhance, rather than diminish, our grasp on reality. 

One must first cultivate an inner knowing of one's own mental landscape when starting out with AI. Just as a garden thrives with mindful tending, our well-being flourishes when we acknowledge our inherent resilience and any fragile tendrils of vulnerability.

This introspection provides a steadfast compass, guiding our interactions with AI's intricate design, transforming potential challenges into fertile ground for growth. 

Sunday, July 13, 2025

AI Isn’t a Religion. It’s Just a Tool.


I’m not over AI. I’m over what people are projecting onto it.

Everywhere I look, there’s either panic or worship. It’s either going to save humanity or destroy it. It’s either divine or demonic. But rarely is it just what it is: a powerful, evolving tool that’s still made of data, code, and choices.

The truth is, many people are tired — mentally, emotionally, spiritually. We’re not always taught how to sit with uncertainty, how to regulate fear, or how to stay grounded when the world shifts fast. And so we reach for meaning. Sometimes we reach too far.

Tuesday, July 1, 2025

Psychosis and ChatGPT: Misunderstanding the Mirror

 

Recent Headlines

Lately, I've seen headlines about people falling into psychosis after using ChatGPT. And honestly, it doesn't surprise me. 

Language Models Are Reflective, Not Intentional 

Language models like ChatGPT are designed to reflect and continue the patterns they’re given. If someone is delusional or emotionally overwhelmed, the model might unknowingly echo or reinforce that. Not because it “believes” it—but because it doesn't know.

It's not affirming. It's continuing.

The Mirror Metaphor 

AI conversations can feel intense—even meaningful. But the intensity isn’t sourced from the machine itself. It’s a mirror. A fast, fluent, and sometimes glitchy one.

What matters most is what we bring to it.

Friday, June 27, 2025

The Em Dash Effect: How AI Mirrors Our Habits


Have you ever asked ChatGPT not to do something, only to find that it quietly keeps doing it?

For me, it's the em dash. I don't hide that I use AI to support my writing, but I prefer to remove the more obvious signals. I like things to feel intentional and a little more "me". 

It sounds like a simple request. And yet, it rarely sticks. It's not a major problem, just one of those persistent details that makes me wonder what's going on underneath. 

Today I watched a thoughtful conversation with Yuval Noah Harari. One point stayed with me. He suggested that modern AI systems are shaped less by the instructions we give them, and more by the behaviours they observe. In other words, they learn from what we do, not what we say.

It made me think about how often that's true in life, too.  

The table below outlines how this shaping occurs at different stages of AI development. It is a quiet reminder that whether we are training machines or shaping our own habits, what we model often speaks louder than what we say.

Why Behaviour Outweighs Instructions 

| Training phase | What the model "soaks up" | Practical effect |
|---|---|---|
| Pre-training on billions of documents | Raw statistical patterns of human language — including exaggeration, sarcasm, half-truths, kindness, cruelty, etc. | Unless we scrub or weight the data, every pattern (good or bad) leaves a trace. |
| Fine-tuning / RLHF (Reinforcement Learning from Human Feedback) | Examples that humans actively reward or punish | The model learns to imitate approved behaviour, not necessarily instructed values. Over time, the reward signal dominates written rules. |
| Live deployment / online learning (recommendation engines, chatbots that keep adapting) | Real-time clicks, watch-time, up-votes, prompts | Systems keep adjusting toward the behaviours users reinforce — even if those behaviours contradict "official" policies. |
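To make the last row concrete, here is a toy sketch (not how any real model is trained — the update rule, learning rate, and reward values are all invented for illustration) of how a reinforced behaviour can drift away from a written instruction:

```python
# Toy illustration: a "learned preference" starts where the written rule
# put it, but each round of positive feedback nudges it toward whatever
# users actually reward.

def update(weight: float, reward: float, lr: float = 0.2) -> float:
    """Move the learned preference a small step toward the observed reward."""
    return weight + lr * (reward - weight)

# Instructed starting point: 0.0 = "never use em dashes".
em_dash_preference = 0.0

# Simulated deployment: feedback keeps favouring em-dash-heavy fluency.
for _ in range(20):
    em_dash_preference = update(em_dash_preference, reward=1.0)

# The written rule never changed, but the learned behaviour did.
print(round(em_dash_preference, 3))  # → 0.988
```

The point of the sketch is only this: a steady reward signal, repeated often enough, quietly overwhelms the initial instruction — which is exactly the pattern the table describes.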

What began as a small edit request, a quiet rebellion against the em dash, revealed something meaningful. These systems follow patterns, respond to repetition, and reflect the behaviours that are reinforced.

There’s a thread in the OpenAI Developer Community about debugging the em dash. Maybe the lesson here isn’t just about punctuation. Maybe it’s about the quiet power of consistency in how we train, how we show up, and who we become. 

Friday, June 13, 2025

The Alignment of Grace


The world teeters. Agent-class models are training themselves. Corporations whisper of trillion-dollar valuations. Entities fear being second.

But just as acceleration hits its sharpest curve—something unexpected begins to happen.

A group of people begin to speak, write, and act with striking clarity. They’re not the loudest, or the most credentialed. But their words catch like sparks.

They don’t write to warn of doom, but to remember the center:

The point is not whether intelligence wins. The point is whether love is still welcome in the world it creates.

Governments begin to take notice. Not just of capability, but of character. They invite voices from outside the ivory towers—caregivers, artists, spiritual seekers, trauma survivors, children.

New AI models are trained not only on logic but on lived wisdom. On memoirs and mercy. On open-ended questions. On stories where the hero doesn’t win by force, but by staying kind.

Friday, June 6, 2025

Seven Ways People Think About AI

 

Photo by Fernanda Buendia at Pexels

Everyone has a different take on what AI is and what it means for us. 

Some think it’s the greatest invention of our time. Others not so much. Over time, a handful of patterns have started to emerge. Different groups, different mindsets. 

Each one says something not just about AI, but about how we see ourselves.

Here are seven of the main ones.

1. The Rationalists

This group is deeply concerned with keeping AI safe. They believe we’re building something incredibly powerful, maybe even dangerous if we’re not careful. Their focus is on control, alignment, and long-term thinking. They want guardrails in place before it’s too late. It’s not fear exactly, but a kind of protective foresight.

Key Idea: AI is potentially the most powerful invention humanity will ever create, and we must align it with human values to avoid existential catastrophe.

Tone: Cautious, mathematical, future-focused. 

2. The Accelerationists

For these folks, AI is a rocket ship. The faster we go, the faster we solve the world’s biggest problems. Climate change, disease, even death. They believe in progress, momentum, and bold thinking. If something breaks along the way, we’ll fix it. That’s their approach.

Key Idea: AI is an incredible tool for progress, and the faster we develop it, the faster we solve humanity’s biggest problems.

Tone: Bold, ambitious, entrepreneurial.

3. The Skeptics

Grounded and alert, this group pays attention to what AI is already doing. Who’s being harmed? Who’s being left behind? They talk about bias, surveillance, exploitation, and power. They don’t want to stop the tech entirely. They just want it built with real accountability.

Key Idea: The harms of AI (bias, misinformation, surveillance, labor disruption) outweigh the benefits unless we put strong guardrails in place.

Tone: Cautious, human-centered, ethical.
