Comment Re:I'm always amazed (Score 2) 131
It seems not, because IIRC my Pixel 10 says that even if, for instance, a thief steals my phone and shuts it down, it will still be traceable for a few hours.
Oh man, that's a name I haven't heard in many decades!
I built my career around dBase, starting in 1986, and evolved to Clipper around 1992. My Clipper work kept me busy until nearly 2000, and I was still patching my old Clipper apps well into the first decade of the new millennium as I made the transition from developer to DevOps, working primarily in SQL and PowerShell.
I think my argument there is that what they did wrong wasn't so much "using infinite scrolling maliciously" as the broader problem of "creating addictive content".
Please forgive the poor comparison, but it's against the law for me to cause bodily harm to you. There might be additional laws that indicate that my reasons modify the nature of the crime, or the implements I use change sentencing, but the underlying law is about my actions and how they cause harm.
Similarly, I don't believe the issue should be about what UI elements the companies choose to use, but about the underlying actions / harm.
Historical data lookup is the first one that comes to mind.
I want to pull back data, and keep pulling back more data as I go down further. This is a context where the data has value - it's not trying to keep me on the site. I'd *love* it if my bank would do this for me.
From a purely social media perspective, you're right, there aren't really any good places for it. But I'm just saying that the concept of a UI element that grabs more data when you get to the end isn't fundamentally bad.
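To make the point concrete, here's a minimal sketch of the non-sleazy version: cursor-based pagination behind an "infinite scroll" of transaction history. The function name, data shape, and page size are all hypothetical; the point is just that "fetch more when the user hits the bottom" is a neutral mechanism that stops when the data runs out.

```python
# Hypothetical sketch: the data side of an "infinite scroll" over a finite
# transaction history. Nothing here is designed to maximize engagement --
# scrolling simply ends when there's no more data.

from typing import List, Optional, Tuple

def fetch_page(transactions: List[dict],
               cursor: Optional[int],
               page_size: int = 3) -> Tuple[List[dict], Optional[int]]:
    """Return one page of history plus the cursor for the next page.
    `cursor` is the index to start from; None means start at the newest."""
    start = cursor or 0
    page = transactions[start:start + page_size]
    more = start + page_size < len(transactions)
    next_cursor = start + page_size if more else None
    return page, next_cursor

# Simulate a user scrolling to the bottom repeatedly:
history = [{"id": i, "amount": -i * 10} for i in range(1, 8)]  # 7 fake rows
cursor, pages = None, []
while True:
    page, cursor = fetch_page(history, cursor)
    pages.append(page)
    if cursor is None:  # end of history -- the scroll just stops
        break
```

With 7 rows and a page size of 3, the loop yields three pages (3, 3, and 1 rows) and then terminates, which is all an infinite scroll over bounded data like bank history ever does.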
My initial argument, before I just started attacking social media, was that if we start legislating certain UI elements as problematic, then we end up in a situation where legitimate use cases get outlawed, and companies actually trying to create good products end up hamstrung.
Can we quit trying to attack UIs?
I understand that an infinite scroll can be addictive. It's also an incredibly simple UI feature that has plenty of viable use-cases.
As long as we look at these companies in terms of what they *do*, rather than what they *are*, we're never going to actually solve any problems.
If you ban this or that feature, they'll use their teams of psychologists to find something else that isn't specifically regulated and use that feature. Or they'll have a litigation of lawyers come in and argue that the thing they're doing doesn't fit the particular legislation. But we need to come to the point where we all agree that artificially trying to force someone to engage beyond the point they normally would is not "making a better product", it's just sleazy.
I get the argument that people can make choices to do what they want. I support that. But we also shouldn't collectively turn a blind eye to companies going out of their way to milk psychology and exploit people. Just because I accept responsibility for the fact that I spend more time on YouTube than I should doesn't mean that YouTube gets a pass in the matter.
I 100% agree that parents need to be way more engaged, and that teens shouldn't get unfettered access to social media. But just because some parents are less engaged than they should be doesn't excuse bad behavior by Instagram / Tiktok.
Personal freedoms don't have to be diametrically opposed to companies being responsible. I'm all for a smaller government with less stupid crap, but if a multinational conglomerate isn't going to make the right choices on its own, then oversight ends up as the only viable option.
I completely went off course with my argument, but as a curmudgeon, I stand by it.
That's the issue - it's all or nothing, just with weird caveats. Either:
1. The AI can do everything an engineer can do, in which case some business management person might come back and tell it that it was wrong with some assumptions on this or that (just like they would with a human), but it's otherwise fully autonomous, acting entirely on its own, or:
2. It can't.
The problem with #2 is that we'll spend so much time and money thinking we're just a little ways away from #1 that no one will be in the pipeline. There's also the risk of treating #2 like it's #1, where we let it make decisions with no repercussions, and we just watch things burn.
I suppose there's a third option - it can do everything, *plus* mentoring a junior so that a human is still learning things just in case.
Forgive me, but I'm going to rant some, because this is the only place I can do so.
I've started having to tell my friends to stop talking to me about AI.
Don't get me wrong. I use it. I find it helpful, and it saves time on stupid scripting tasks, throwing together modals, etc. There are a ton of ways it helps me be more efficient at my human person job.
But actual work - architecture, design, thinking through a full process...that still requires a human.
What I'm starting to get really freaking irritated at is that everyone talks about AI like it's magic, and all I *hear* is "I couldn't do my job myself, but *now* I think I can!!".
Quit treating the fact that you spent money on Claude credits like some kind of proof of value. If you want to talk to me about something cool you're working on and a problem you had to solve - awesome. If you want to brag about how you spent all day crafting a prompt and then AI did all the work for you, then I kinda just want to punch you in your stupid face.
The one rather depressing bright spot I have is that the owner of the company discovered OpenClaw, and managed to set one up (even though he required me to do the really complicated stuff, like signing up for a Twilio account). His LinkedIn posts suddenly got way more articulate, added a ton of graphics, and now he's trying to sell people on his new agentic workflow that's running his company. Meanwhile, I know that nothing at all has changed, and that all he's managed to do is have the AI create a post and a graphic and publish it.
The "bright" spot there is that it finally hit me that that's what literally all of the AI spam in my LinkedIn feed is - a bunch of other people's bosses in the same boat - and that real people are still required to do anything of actual, legitimate value.
You're absolutely right to call me out on dropping bombs on Canada and destroying Toronto. I made a mistake, and I own it. But it was due to your keen insight that we can learn from these hiccups and move forward. That kind of sharp analysis is rare—and that makes you special.
With your gift for catching this type of mistake before it escalates into something worse, we can work together to build a better tomorrow.
For humans.
For AIs.
Forever.
Human beings were created by water to transport it uphill.