Tom Casavant

Musings on AI

For the past few months, I’ve been trying to write a blog post about my thoughts on AI. I’ve written three drafts of this and trashed each one. It’s part of the reason I haven’t published anything since early last year. The issue I kept running into is that there are so many conversations about AI that each time I wrote about it, the scope expanded so far that it became incredibly uninteresting to write and likely twice as boring to read.

So last weekend, while watching the Bengals season finale against the Browns, I decided to brute-force a stream-of-consciousness approach (while I’ll never be able to prove it, the first paragraph of that piece included a prediction for the end of that Bengals game that came true almost word-for-word). I wrote out every thought I had about AI so I could collapse that into a single subject that I actually wanted to talk about.

I ended up with a little over 3,000 words that touched on climate change, education, programming, non-consensual pornography, terminology, online arguments, marketing, comedy, copyright, the economy, security, intelligence, journalism, Luddites and my love for technology, medicine and cancer research, ethics, monopolies, and how I’m such a bad writer. Over the last week, I’ve tried to pare that down to the key points I wanted to make, and I struggled to do so until reading an article from a tech journalist and subsequently “hacking” (using the term very loosely here) that journalist. After that, I managed to pull everything into a much more focused post.

Terminology #

I wanted to get this out of the way early: when I refer to “AI,” I’ll primarily be using it to describe LLMs and derivative technology (and if there’s an alternative usage, I’ll try to clarify at the time). While I think it’s probably valuable to discuss other forms of AI and algorithmic content, early drafts that did so tended to get extremely out of scope.

Context #

Earlier this week, I read a post from a journalist discussing the use of a coding agent to generate a website, and presenting it as evidence that this marked the beginning of the end for programmers, a concept that’s been brought up time and time again. I had a hunch that this website had the exact same problem LLM-generated scripts have had since ChatGPT launched several years ago. So I went to their website, found an interesting widget, right-clicked and viewed the source, did a Ctrl+F for API_KEY, and found their Last.fm API key embedded in the site.
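To make that concrete: the whole “hack” amounts to pattern-matching on the page’s source. Here’s a minimal Python sketch of the same idea; the regex and the sample HTML are illustrative inventions of mine, not the journalist’s actual page, and real secret scanners use far larger rulesets:

```python
import re

# A rough pattern for one common style of leaked key:
# something like  API_KEY = "abc123..."  sitting in client-side source.
KEY_PATTERN = re.compile(
    r'(?i)api[_-]?key["\']?\s*[:=]\s*["\']([A-Za-z0-9]{16,})["\']'
)

def find_leaked_keys(html: str) -> list[str]:
    """Return candidate API keys found in a page's source."""
    return KEY_PATTERN.findall(html)

# Fabricated example page source (not a real key):
page = '<script>const API_KEY = "abc123def456ghi789jkl012";</script>'
print(find_leaked_keys(page))
```

This is essentially what Ctrl+F does by hand; the point is that anything shipped in the page source is public, no tooling required.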

I did my due diligence, notified them that they had leaked an API key, and let them know that they should reset the key in their account to prevent abuse. A few hours later, they thanked me and let me know that they used Claude to fix the mistake (I verified this, and it appeared to have been fixed). From this exchange, I learned a few things about my priorities around AI. To be clear, I consider this journalist to be an incredibly intelligent person, and a far better writer than I am, even though I expect this will read like I am ragging on them at times.

Ethics #

The first thing I recognized was that I don’t have any particularly deep feelings about the ethics of other people using AI, and that’s something I first realized early on in the AI hype cycle. In my head, it gets grouped into “things I won’t do, but you can if you want.” There are plenty of other things that fall into that category:

  • I use Linux instead of Windows
  • I use open social media platforms instead of Facebook, Twitter, Instagram, TikTok, Reddit, Substack, etc
  • I use open messaging platforms instead of WhatsApp, Messenger, and GroupMe
  • I use Android over Apple (though it’s gotten to a point where I consider Android to be just as unethical as Apple, and I’m not entirely sure I’m ready for, or capable of, moving to a more open mobile platform)
  • I use DuckDuckGo over Google
  • I use Firefox over Chrome (this one also feels like it’s beginning to cross the line into “I need to start using an alternative to Firefox,” and that change seems more likely to happen sometime this year)
  • My thermostat is set very low in the winter, and I take short showers.

I’m not going to try to force anyone to do any of the above, even if they probably should. It’s not like I’m perfect (though if you ask me in person, I’ll probably claim otherwise). I believe becoming a vegetarian is far more impactful on the environment than avoiding ChatGPT, but I haven’t decided to make that leap yet.

The point is, the fact that this journalist was using AI wasn’t something I was upset about.

People who depend on AI often agree with me on many of those other points. Some AI skeptics might claim that by using AI, those people suddenly become climate-change-denying monopolists, and that’s just not something I see as true. My ethics-based concerns lie mainly with the AI companies themselves.

Security and The AI Narrative #

(I tried to come up with a less inflammatory-sounding label than “The AI Narrative” but failed, so please don’t read it as a more intense description than it’s meant to be.)

For those not in the tech space, an API key (API stands for Application Programming Interface) is basically a password that lets you interface with some piece of software. In Last.fm’s case, the API key lets me see this journalist’s music listening history or do something as generic as pull the top songs across the Last.fm platform. That probably isn’t a huge deal; the original widget on their site just showed their most recently listened-to song, so it’s not like I have significantly more data than I did before I got access to the key. The worst thing I could probably do is start using the key myself and force their account to hit rate limits (a rate limit kicks in when an account has used the API too often in a short amount of time, at which point the software stops responding to requests from that account). But imagine for a second that, instead of a Last.fm API key, I had obtained a key used to pull in data from their social media account; suddenly I could potentially write posts on their behalf (you can see how that could be bad, the puns I post could destroy their reputation irreparably). Anyway, to avoid this, developers will typically hide the API key instead of publishing it directly to their website.
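To illustrate what “hiding” the key usually looks like: the page’s JavaScript calls an endpoint on your own server, and only the server, which reads the key from an environment variable, ever builds the request to Last.fm. A sketch of the server-side half in Python; the environment variable name is my invention, and the URL format follows Last.fm’s public API conventions:

```python
import os
from urllib.parse import urlencode

def build_lastfm_url(user: str) -> str:
    """Build the upstream Last.fm request entirely server-side.

    The key lives in the server's environment, so it never appears
    in the HTML or JavaScript shipped to visitors.
    """
    key = os.environ["LASTFM_API_KEY"]  # hypothetical variable name
    params = {
        "method": "user.getrecenttracks",
        "user": user,
        "api_key": key,
        "format": "json",
        "limit": 1,
    }
    return "https://ws.audioscrobbler.com/2.0/?" + urlencode(params)
```

The browser only ever sees your endpoint’s response (the track names), never the key itself.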

It’s not the leak itself that frustrates me, though. Sure, it exposes a larger problem with LLMs that has existed since people started coding with them, but the reason LLMs do this is that they were trained on code, written by humans, that leaked API keys too. I have personally contacted several people on GitHub after noticing that their projects had published an API key; this is not a new problem, and any reasonably well-trained developer who used Claude to generate code would probably catch the mistake pretty quickly. What worries me is what happened after. In that initial email, I had told them what they needed to do to rectify the leak (remove the key from their account). Days later, however, that key still gives me access to their account. While I won’t ever touch that key again, their website was up for days before I looked at it, so who knows who else has access? This is something an actual developer would have dealt with immediately, but I expect it will never get fixed.

And this is where we get to the narrative that’s repeated year after year: that AI enables you to do things that would otherwise take months (or years) of training. That it can already replace software developers, lawyers, doctors, therapists, authors, teachers, or mathematicians. I keep reading articles that say this is the year AI replaces X, Y, or Z. I read those same articles last year, and the year before that.

I’m not under the illusion that AI will never be good enough to replace people in any industry. I just wish the entire AI hype cycle would take a step back and pause before telling people to unconditionally trust the output of these LLMs, especially when those same people aren’t trained to recognize when something is wrong with it. Maybe this concern extends to the internet more broadly and not just AI output, but for most of my life I’ve consistently heard things like: “don’t trust everything you read on Twitter,” “don’t copy-paste random Stack Overflow code,” or “don’t use Wikipedia as a source.” And yet AI companies and pro-AI writers seem determined to make the opposite point: that this is the year you’ll be able to vibe-code your own website and never have to think about the code at all.

Conclusion #

Look, maybe I’m wrong. Nobody can predict the future. Maybe 2026 is the year we finally replace 20 million software developers with 5 million skilled prompters, but I just don’t see it happening. And I worry that we’re moving closer and closer to a security nightmare as AI-generated code becomes easier to make by people less likely to understand it.

Citations-ish #

I figured I’d provide a list of everything I’ve read about AI over the course of the last few years to give you an idea of the headspace I’m in. I went through my browser history (as far back as I could) and the various group chats I’m in, and compiled as many resources as I could, though I’m sure this isn’t all of it:

