Here’s a collection of interesting links I’ve found around the web. The feed updates frequently, and I compile everything into a blog post on the last day of each month.

14 links tagged "#ai"
in a world looming with the threat of ai stealing your job, save humanity by stealing ai’s job.
Fun little website where you can play the part of an AI answering prompts made by humans. Remember to tell people they’re absolutely right!
An AI Agent Published a Hit Piece on Me
This is both funny and incredibly infuriating. A PR to an open-source project was declined on GitHub because it was made by an AI agent, and… the AI agent (or the anonymous person behind it) wrote a defamatory blog post targeted specifically at the project’s maintainer.
As if being an open-source maintainer weren’t already a thankless job, now there’s one more hell to endure.
Stop generating, start thinking
Fantastic piece wielding the power of common sense and highlighting all the struggles that software engineers have with using generative AI in our jobs.
I also use LLMs as a spicy autocomplete (or even a spicy search) and they can be very useful at times. But I can’t replace my thinking with machines, because machines don’t think.
LLMs are bullshitters. But that doesn't mean they're not useful
… wow. This is an amazing article that goes a bit into how LLMs work (in an easy-to-understand way), how flawed they are, and how useful they can be. Or dangerous.
Plus, the nurse and surgeon examples are hilarious.
The birth & death of search engine optimization
This article walks through how the concept of SEO (Search Engine Optimization) was born, how it inevitably became broken, and how easy it is to “win” it, as long as your content is made up and not actual real information.
Introducing SlopStop: Community-driven AI slop detection
This is a really cool initiative! Kagi has been my search engine of choice for over a year and I’m really happy with how they’re aiming to stop AI slop from taking over their (still great) search results.
In my experience, their results are miles ahead of Google’s, Bing’s or any other search engine out there, partly because their algorithm prioritizes good sites, partly because they allow you to prioritize/deprioritize/block the sites you want.
But a good algorithm only goes so far, and with the amount of AI slop hitting the web every day, it’s gonna be harder and harder to avoid. Now Kagi users can report articles as AI-generated so other users know beforehand and don’t click on them, or can even block their domains.
AI can code, but it can't build software
Yes! Any good developer will tell you that coding is the easiest part of the job. Making software actually go beyond a feature demo is what’s really hard. It’s something I’ve been taught ever since I began working in the field, actually. Learning to code is essential, but learning where to put the code and how to foresee all the hundreds of complexities is my actual job.
Expectations, feature scalability and security are very much human components of the job and can’t be properly done by something that’s not human.
A cartoonist's review of AI art
A really fun web comic of an artist explaining his thoughts about AI art. I think I agree with all the points there.
Are people’s bosses really making them use AI tools?
Time and time again, we’ve been seeing companies go all-in on AI in hopes of not falling behind (or of standing out) before the bubble bursts. This article has some real-life testimonies from employees who are being forced to use AI in their work - even when it makes things harder and the results worse.
If you don’t care, it’s miraculous.
I’ve had this talk with my wife a few times already. Around us, it just feels that nobody cares about anything. Everything is hastily produced so it can be ignored by other people. It’s just disheartening to be the only ones noticing AI slop everywhere and see people not only believing it’s real, but also not really caring if it’s real or not.
This article also reminded me of this one that I posted back in December: Care Doesn’t Scale.
In a moment where machines churn out mediocrity, make something yourself. Make it imperfect. Make it rough. Just make it.
Fantastic piece that highlights how much of a distraction AI has become to creating value, simply because everyone is too focused on the tools and not on the work.
But we can’t rely on tools as a shortcut to gain valuable experience. Experience takes time to develop, and your tools are only as good as your fundamental knowledge and skills. If you skip the knowledge and skills part, and if you fail to learn about what you’re doing and the implications of how you’re doing it and the human value you have the potential to deliver, then you have little hope of building human value into your software.
Great analysis of how most uses of generative AIs (or at least what companies are trying to sell as use cases) are primarily selfish.
If you can’t be bothered to do something yourself and instead ask a computer to do it, why should you expect someone else to bother reading/watching it?
The corporate use cases for this are somewhat understandable - most content on the web is written for robots, not for people, for example (I know, sad). But Apple has recently been trying to sell it as a way to have a complete emotional detachment from your family as well. We truly live in the worst timeline.
AI Companies Need to Be Regulated - An Open Letter to the U.S. Congress and European Parliament
The MacStories team was able to put into words what a lot of people (including me) are feeling. AI companies that scrape content on the web (ignoring its licenses) pose a big threat to websites that need the pageviews to keep the lights on. Right now, that is all done under the claim that they’re using the data to train their models and not reproducing the content directly, which would fit into “fair use”. But there are good arguments that this isn’t true.
AI models collapse when trained on recursively generated data
This study shows that, predictably, generative AIs or LLMs tend to decline in quality as they start feeding on content that was generated by other LLMs instead of humans.
Considering much of the web is now getting polluted by this LLM-generated slop and the web is a big source of data for their training, it seems that future models will likely regress in quality. Doesn’t seem like a very sustainable model, does it?