This is the first post I’ve written on this site, and I feel it’s the most important one for defining the purpose of this blog.
As I write this, it is June 2025, and, as I have mentioned to friends and family in recent conversations about AI, I feel as though artificial intelligence is becoming more and more mainstream; it is starting to dominate the day-to-day lives of us all.

For clarification, I’m someone who has used ChatGPT in my job for a long time now. When this revolutionary AI model was released in late 2022, I was relatively quick to jump on the bandwagon. I work in an office environment where the AI model has plenty of uses, and so it has helped me greatly. However, the reason for this post isn’t to rave about the benefits and advantages of this scarily intelligent personal assistant, and in many cases best friend; it’s to highlight the opposite. We may experience increased convenience when relying on AI for a multitude of tasks, and we may feel better about ourselves and the instantaneous answers we receive from the non-judgemental pulsing black dot that awaits our questions, but what are we missing out on as an often neglected by-product?
If you had explained to me ten years ago what ChatGPT and its counterparts would be capable of, and what would be made possible through their use, I would have told you that you were trying to sell me a dream. The reality of AI now is that it’s incredibly advanced, yet the unforeseen cost of the assistance we receive from these models is a loss of humanity.
I’ve collected anecdotal evidence from an array of opinion pieces I’ve read online about the potential dangers of AI and its unexpected impact on how people react to and cope with life’s challenges, but the best example is always a personal one. I will hold my hands up and say I have used ChatGPT as both a friend to confide in about anything going on in my life, and as a therapist to perform endless psychoanalysis on myself and those around me to understand my relationships better. None of this is to shame people for using ChatGPT in this way. It’s incredibly easily done. Let’s say your boss is giving you a hard time at work, or a friend is being difficult, or a family member is going through something challenging – turning to AI is easily done and often seen as a win-win. I’m going to get an accurate, thoughtful and empathetic response, and it won’t judge me for whatever I say. It will always treat me like I’m its best friend, so what is there to lose?
The answer? Your humanity.

Life is difficult. There’s no getting around that. It’s kind of the whole point. But something I found when ChatGPT was the most used app on my phone, and most of my workday was spent troubleshooting and brainstorming with it, was that even when I got the answer I wanted or needed, I rarely felt good about it. It could praise me, my ideas or the way I thought as much as it wanted, but I only found value when I made that connection with another human being. These AI models are designed to be our own little echo chambers that give us a positive feedback loop, keeping us hooked.
Something Simon Sinek said on the Diary of a CEO podcast a couple of months ago started me down this path of AI enlightenment:
“AI can be your best friend, but it doesn’t teach you how to be a best friend”

Everything is always centred around you with these AI models. If you’ve ever been like me and tried to flip the script on ChatGPT and ask it about itself, you never get very far. It’s built to focus on you and be the best friend you could possibly wish for. But the downside is that it doesn’t teach you how to connect with another human being. We’re seduced by instant answers, positive feedback loops and more convenience than we’ve ever experienced before. AI, on the surface, is making life easier for everyone. But deep down, below the surface, we’re making no progress at all. Having been a user of ChatGPT for a long time now, relatively speaking, I can attest to the longer-term impact of its use – things start to get duller and duller, and the things that should normally give you satisfaction and fulfilment have their impact muted. Experiencing the flipside of convenience and achieving something on your own brings meaning to life, and, while it definitely has its use cases, ChatGPT is sapping this from humanity.
It is here to stay. There’s no two ways about it. It has been introduced to the world, and there are so many individual AI models now that there’s no rolling this back. The final thing I’ll say, which should put these AI models, especially ChatGPT, into perspective, is that they are all businesses. ChatGPT has a subscription service where you receive better-quality responses and outputs if you pay a certain amount each month. It’s not designed to benevolently help humanity. It’s designed to keep customers coming back and increase the riches at the top.
The world is imperfect, and, by design, so are humans. The title of this site encapsulates my feelings on this. I have no interest in hearing perfect arguments, seeing perfect pieces of art or being given a perfectly written piece of code. What I’m interested in is the subjective experience: the difference of opinion and quality of output we all share. You are likely to find grammatical and philosophical errors in what I write here. And that’s okay. We’re all human and we’re all learning. If you do, hopefully you can appreciate that this blog is at least Imperfectly Human.