Notes

15 rules for blogging

I found Matt’s website today and had a lovely couple of hours exploring his writing.

I love most of his 15 Rules for Blogging, but this one in particular…

One idea per post. If I find myself launching into another section, cut and paste the extra into a separate draft post, and tie off the original one with the word “Anyway.” Then publish.

I regularly get excited enough to write a kinda-stream-of-consciousness thought down which turns out to be half-decent except for the end. And these often languish as drafts for ages for want of a neat way to wrap them up.

I just looked at my unfinished drafts and about two thirds of them are hard to wrap up because they contain multiple kinda-related ideas which are just a bit too hard to bring together in a conclusion I’m happy with.

Anyway.

On Working with Wizards - by Ethan Mollick

And that suggests another risk we don’t talk about enough: every time we hand work to a wizard, we lose a chance to develop our own expertise, to build the very judgment we need to evaluate the wizard’s work.

But I come back to the inescapable point that the results are good, at least in these cases. They are what I would expect from a graduate student working for a couple hours (or more, in the case of the re-analysis of my paper), except I got them in minutes.

This is the issue with wizards: We’re getting something magical, but we’re also becoming the audience rather than the magician, or even the magician’s assistant.

The Rise and Fall of Vibe Coding | Tomasz Tunguz

The fundamental problem lies in misaligned capabilities and understanding. AI generates working code fast but cannot instill architectural thinking or testing discipline.

Users gain false competency. They produce working software without grasping underlying complexity or long-term implications.

Engineering best practices must become as accessible as AI coding tools. Security improvements and test generation should happen in natural language.

The future involves hybrid workflows. “Vibe coders” prototype solutions while engineers harden successful experiments.

Reflections on an old article I wrote

So seven years on I’m re-reading this, and… this paragraph bloody well applies even more in 2025 if we replace googling and StackOverflow with “talking with Large Language Models”. I’m generally bullish on LLMs as far as speeding up coding goes. But when it comes to learning, and to what I wrote in this article, I’m the opposite.

Like StackOverflow in 2018, LLMs are very good at knowing how to solve well-defined, general-case problems which can be stated with some specificity. And like StackOverflow in 2018, they are shit at solving ill-defined general technical problems… especially when those problems exist in a niche business domain.

Relevant aside…

It pains me to think that a meaningful chunk of people will probably find this note because they’re finding it hard to break down problems when learning to code, and will end up engaging with it via some AI summary rather than by actually reading it. And then they’ll move on to the next thing without taking anything meaningful at all from my 2018 words besides “man, coding sounds hard”. I hope I’m wrong here.

Mass Intelligence?

When a billion people have access to advanced AI, we’ve entered what we might call the era of Mass Intelligence. Every institution we have — schools, hospitals, courts, companies, governments — was built for a world where intelligence was scarce and expensive. Now every profession, every institution, every community has to figure out how to thrive with Mass Intelligence. How do we harness a billion people using AI while managing the chaos that comes with it? How do we rebuild trust when anyone can fabricate anything? How do we preserve what’s valuable about human expertise while democratizing access to knowledge?