Notes

It's rude to show AI output to people

For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.

Now, AI has made text very, very, very cheap. Not only text, in fact. Code, images, video. All kinds of media. We can’t rely on proof-of-thought anymore. Any text can be AI slop. If you read it, you’re injured in this war.

I agree.

Codewashing

I have little understanding for people using large language models to generate slop; words and images that nobody asked for.

I have more understanding for people using large language models to generate code. Code isn’t the thing in the same way that words or images are; code is the thing that gets you to the thing.

Leaving the ethics of LLM training aside, my biggest issue with AI-generated “slop” is that it’s invariably shit.

I’ve read so damn many AI-generated READMEs on GitHub in the last few weeks, and on at least two occasions I lost my temper trying to find what I needed amongst the nonsense. It was better when lazy devs just had ten-line READMEs.

And I routinely have to wade through AI-generated slop articles in Google searches.

In both cases, the end thing – the product – is the words. And those products have a shit user experience.

With code, the product is the working software. And the last few weeks have taught me that LLMs are incredibly powerful at helping me make decent user experiences quickly. It might matter to developers that the underlying code is slop, but if the working software is good, that’s the only thing that matters to users.

I genuinely look forward to being able to use a large language model with a clear conscience. Such a model would need to be trained ethically. When we get a free-range organic large language model I’ll be the first in line to use it. Until then, I’ll abstain.

I’m also looking forward to this. But until it arrives, I’m content to use unethical models for the same reason that, if I were trying to gain weight, I’d happily eat unethically produced meat in the absence of alternatives: doing so has a sufficiently high impact on my quality of life that it outweighs my ethical concerns.

Posting Notes from My Phone

I just read this article, and it inspired me to see if I could post new notes to my website using the ChatGPT mobile app. That’s what I’m doing now; this note is the test.

Johnny.Decimal

This reminds me so much of the old UK MoD File Reference system (some of which is still in JSP 441!)

I feel like systems such as these might have even more relevance in the age of LLMs, because they allow content and structures to be understood and referenced through a logical text-based system of symbols.
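As a minimal sketch of what "a logical text-based system of symbols" buys you: Johnny.Decimal references take the form AC.ID (e.g. "12.04"), where the two-digit prefix is the category, the tens digit of that prefix identifies the area, and the suffix is the item within the category. A hypothetical parser (the function name and return shape are my own, not part of the system) shows how trivially such references can be validated and decomposed by machine:

```python
import re

# AC.ID format: two-digit category (first digit > 0), dot, two-digit item ID.
JD_PATTERN = re.compile(r"^(?P<category>[1-9][0-9])\.(?P<id>[0-9]{2})$")

def parse_jd(ref: str):
    """Return (area_start, category, item_id) for a valid reference, else None.

    area_start is the lower bound of the area bucket: category 12
    belongs to area 10-19, so area_start is 10.
    """
    m = JD_PATTERN.match(ref)
    if not m:
        return None
    category = int(m.group("category"))
    area_start = (category // 10) * 10
    return area_start, category, int(m.group("id"))

print(parse_jd("12.04"))  # -> (10, 12, 4)
print(parse_jd("slop"))   # -> None
```

That unambiguity is exactly what makes such schemes easy for an LLM (or a grep) to reference reliably.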

The Rise of the AI-Native Employee

AI transformation inside existing tech companies is going to be brutal. You can’t just spin up a centralized “AI task force” and expect the rest of the org to suddenly think and operate differently. It doesn’t work. This mindset shift isn’t something you can document or mandate - it has to be seen and experienced. I know, because I had to see it myself.

This reminds me of the shift to effective remote working that began in many companies around 2020. That change was just as challenging for large companies as the shift to being AI-native will be: remote-first or remote-native wasn’t just about changing tools or processes; it was a complete mindset shift, a cultural change. Having witnessed this firsthand in companies built to be remote-first from the ground up, I know how difficult the transition is, for the same reasons Elena mentions here.

The rest of this article is worth a read too.