

Here’s a big, important test you can use to see whether something is actually useful and effective.
Pick a random tool and ask all your friends and neighbors what they last used it for. “Hey Bob, I was wondering, what was the last thing you used your belt sander/hammer/paintbrush for?” You’ll probably get a very concrete answer about something that needed doing: “Oh, I had to sand down the windowsill because the paint was cracked” or “I tightened the screws on my coffee table.”
Now do the same for AI.
The big problem with asking whether AI is useful is that people suck at figuring out how someone else’s work actually gets done, but they’ve got a pretty good idea what their own work is like. As a result, it’s very easy to believe that AI can do someone else’s job, but for YOUR job, the one you actually understand, you can easily see what bullshit AI spouts and how it misses all the important bits.
Sure, if your idea of the job is “programmers write code”, then yeah, AI can do that. Similarly, “authors write stories” is true, and AI can write stories. But if you know even slightly more, you realize that programmers only write code maybe 10% of the time, and authors probably spend less than 10% of their time putting down words. The job is about structuring and planning and laying things out; the typing is just the final detail.
But if you understand fuck-all about a job, then yeah, AI can definitely generate stuff that looks like other stuff, because it’s a machine specifically designed to make stuff that looks like other stuff.
Fair, I was mostly talking about LLMs and other generative AI.