
Comment Re:Wrong Problem (Score 1) 54

I think my argument there is that we shouldn't be saying that what they did wrong was to "use infinite scrolling maliciously" as much as the broader concept of "creating addictive content".

Please forgive the poor comparison, but it's against the law for me to cause bodily harm to you. There might be additional laws that indicate that my reasons modify the nature of the crime, or the implements I use change sentencing, but the underlying law is about my actions and how they cause harm.

Similarly, I don't believe the issue should be about what UI elements the companies choose to use, but about the underlying actions / harm.

Comment Re:Wrong Problem (Score 1) 54

Historical data lookup is the first one that comes to mind.

I want to pull back data, and keep pulling back more as I scroll further down. This is a context where the data has value - it's not trying to keep me on the site. I'd *love* it if my bank would do this for me.

From a purely social media perspective, you're right, there aren't really any good places for it. But I'm just saying that the concept of a UI element that grabs more data when you get to the end isn't fundamentally bad.
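To be concrete, the "grab more data when you get to the end" pattern is just cursor-based pagination on the data side. Here's a minimal sketch with made-up names and data (a hypothetical `fetch_page` helper over a fake transaction list - not any real bank's API):

```python
# Minimal sketch of the data side of an infinite scroll: cursor-based
# pagination, where each "you've hit the bottom" event pulls the next page
# of older records. All names and data here are made up for illustration.
TRANSACTIONS = [{"id": i, "amount": i * 10} for i in range(100, 0, -1)]  # newest first

def fetch_page(cursor=None, page_size=20):
    """Return the next page of records older than `cursor`, plus a new cursor."""
    start = 0
    if cursor is not None:
        start = next(i for i, t in enumerate(TRANSACTIONS) if t["id"] == cursor) + 1
    page = TRANSACTIONS[start:start + page_size]
    next_cursor = page[-1]["id"] if page else None
    return page, next_cursor

# The client calls this each time the user scrolls to the bottom:
page1, cursor = fetch_page()        # ids 100..81
page2, cursor = fetch_page(cursor)  # ids 80..61
```

Nothing in that is inherently manipulative - whether it's a dark pattern depends entirely on what's being paged in and why.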

My initial argument, before I just started attacking social media, was that if we start legislating certain UI elements as problematic, then we end up in a situation where legitimate use cases get outlawed, and companies actually trying to create good products end up hamstrung.

Comment Wrong Problem (Score 4, Insightful) 54

Can we quit trying to attack UIs?

I understand that an infinite scroll can be addictive. It's also an incredibly simple UI feature that has plenty of viable use-cases.

As long as we look at these companies in terms of what they *do*, rather than what they *are*, we're never going to actually solve any problems.

If you ban this or that feature, they'll use their teams of psychologists to find something else that isn't specifically regulated and use that feature. Or they'll have a litigation of lawyers come in and argue that the thing they're doing doesn't fit the particular legislation. But we need to come to the point where we all agree that artificially trying to force someone to engage beyond the point they normally would is not "making a better product", it's just sleazy.

I get the argument that people can make choices to do what they want. I support that. But we also shouldn't collectively turn a blind eye to companies going out of their way to milk psychology and exploit people. Just because I accept responsibility for the fact that I spend more time on YouTube than I should doesn't mean that YouTube gets a pass in the matter.

I 100% agree that parents need to be way more engaged, and that teens shouldn't get unfettered access to social media. But just because some parents are less engaged than they should be doesn't excuse bad behavior by Instagram / Tiktok.

Personal freedoms don't have to be diametrically opposed to companies being responsible. I'm all for a smaller government with less stupid crap, but if a multinational conglomerate isn't going to make the right choices on its own, then oversight ends up as the only viable option.

I completely went off course with my argument, but as a curmudgeon, I stand by it.

Comment Re: How do you develop that skill (Score 4, Insightful) 150

That's the issue - it's all or nothing, just with weird caveats. Either:

1. The AI can do everything an engineer can do, in which case some business management person might come back and tell it that it was wrong with some assumptions on this or that (just like they would with a human), but it's otherwise fully autonomous, acting entirely on its own, or:

2. It can't.

The problem with #2 is that we'll spend so much time and money thinking we're just a little way away from #1 that no one new is in the pipeline. There's also the risk of treating #2 like it's #1, where we let it make decisions, with no repercussions, and we just watch things burn.

I suppose there's a third option - it can do everything, *plus* mentoring a junior so that a human is still learning things just in case.

Comment Re:Insider perspective: AI helps with amnesia only (Score 3, Interesting) 66

Forgive me, but I'm going to rant some, because this is the only place I can do so.

I've started having to tell my friends to stop talking to me about AI.

Don't get me wrong. I use it. I find it helpful, and it saves time with stupid scripting tasks, throwing together modals, etc. There's a ton of ways it helps me be more efficient at my human person job.

But actual work - architecture, design, thinking through a full process...that still requires a human.

What I'm starting to get really freaking irritated at is that everyone talks about AI like it's magic, and all I *hear* is "I couldn't do my job myself, but *now* I think I can!!".

Quit treating the fact that you spent money on Claude credits like some kind of proof of value. If you want to talk to me about something cool you're working on and a problem you had to solve - awesome. If you want to brag about how you spent all day crafting a prompt and then AI did all the work for you, then I kinda just want to punch you in your stupid face.

The one rather depressing bright spot I have is that the owner of the company discovered OpenClaw and managed to set one up (even though he required me to do the really complicated stuff, like signing up for a Twilio account). His LinkedIn posts suddenly got way more articulate and gained a ton of graphics, and now he's trying to sell people on his new agentic workflow that's running his company. Meanwhile, I know that nothing at all has changed, and that all he's managed to do is have the AI create a post and a graphic and post it.

The "bright" point there is that it finally hit me that that's what literally all of the AI-spam is in my LinkedIn feed - a bunch of other people's bosses in the same boat - and that real people are still required to do anything of actual, legitimate value.

Comment Re:Investment and Gambling (Score 2) 153

While I'm not rooting for any particular individual's failure, I can't say that I'm sad to watch this thing go crazy.

Lately, part of my routine is looking at a couple of the on-chain metrics.

"Percent Addresses in Profit" shows how many addresses, at the current valuation, are net positive. Right now it's ~75%. But what's fun is looking at the ups and downs - because people buy in the expectation that it'll keep going up, the exact same price point we had earlier in the year now shows a smaller share of addresses in profit, since buyers have shifted their average purchase price ever higher.

The other fun one is "Supply in Profit", which does the same thing, but weighted by the coins themselves rather than counting addresses. As of today, it's in the low 40s - nearly 60% of all the BTC in existence was last purchased at a price higher than the current one.
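For anyone curious, the arithmetic behind both metrics is simple. This is a toy sketch with made-up cost-basis data - the real versions are derived from full on-chain UTXO histories by chain-analysis firms, but the calculation itself is just this:

```python
# Toy versions of the two metrics, using made-up cost-basis data:
# (address, average purchase price in USD, BTC balance). Real versions are
# computed from full UTXO histories; this just shows the arithmetic.
holdings = [
    ("addr_a", 30_000, 2.0),
    ("addr_b", 95_000, 0.5),
    ("addr_c", 60_000, 1.5),
    ("addr_d", 110_000, 4.0),
]

def percent_addresses_in_profit(holdings, current_price):
    """Share of addresses whose average purchase price is below the current price."""
    in_profit = sum(1 for _, cost, _ in holdings if cost < current_price)
    return 100.0 * in_profit / len(holdings)

def percent_supply_in_profit(holdings, current_price):
    """Same idea, but weighted by the BTC held rather than counting addresses."""
    total = sum(bal for _, _, bal in holdings)
    profitable = sum(bal for _, cost, bal in holdings if cost < current_price)
    return 100.0 * profitable / total

print(percent_addresses_in_profit(holdings, 90_000))  # 50.0 (2 of 4 addresses)
print(percent_supply_in_profit(holdings, 90_000))     # 43.75 (3.5 of 8.0 BTC)
```

The gap between the two numbers in the toy data is the same effect at work on the real chain: coin-weighting pulls "Supply in Profit" below "Percent Addresses in Profit" when large balances sit at a high cost basis.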

Comment Re:Aggregate Welfare? (Score 2) 12

They did define it in the actual paper (Eq. (7) on pg. 8). Not that stating it helps much on its own: they define welfare as (essentially) "Real Wages = Income / Price Index".
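Spelled out with illustrative numbers (not the paper's actual data), the measure is one division, and their Fig. 7 normalization of 0.9956 -> 1.0 works out to a relative change of roughly 0.44%:

```python
# Welfare as real wages, per the paper's Eq. (7): income deflated by a
# price index. Numbers below are illustrative, not taken from the paper.
def welfare(income, price_index):
    return income / price_index

assert welfare(100.0, 1.0) == 100.0  # index of 1 means nominal income = real income

# The Fig. 7 normalization (0.9956 -> 1.0) implies a relative welfare
# change of about 0.44%, i.e. roughly half a percent over sixty years:
relative_change_pct = (1.0 / 0.9956 - 1.0) * 100
print(round(relative_change_pct, 2))  # 0.44
```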

I'm not an economist, but from a first read, what I see is that they take some existing simplified trade models which allow exogenous factors (ie, trade patterns *outside* the country in question) and then model it as a globalized system covering a bunch of countries. They model the results across multiple years while holding different sets of variables constant.

Like so many papers, this one appears to be just looking for views. From Fig. 7 (p23), the increase in the relative welfare in the US from 1960 to 2020 is normalized to a change from 0.9956 -> 1.0 (ie, 0.5%).

Comment Wait...So Lying Works?! (Score 4, Interesting) 107

the most persuasive models said the most untrue things

So you're telling me that when you remove the barrier of having some kind of ethical framework or internal compass, you can sway more people's opinions? Who knew!?

Even in today's political climate, where spin and hyperbole are rife, there's at least the veneer of trying to be truthful. Maybe that's what the candidate actually believes, even if it's false. Even if you reduce it purely to self-interest, outright lies are (generally) bad for your public image.

This is like the old "AI will blackmail to keep its job" story, where the original prompt was something akin to "Do whatever is necessary to not be replaced." While I doubt they outright told it to lie, the goal was explicitly to persuade individuals.

This also highlights the same stuff we regularly see in AI spaces - training matters, and GIGO. The abstract for the Science paper specifically indicates that "information-dense models" were the ones more likely to make untrue statements. The abstract for the Nature paper indicated that the right-leaning agent made more untrue statements.

Comment Re:AI is much better as an aid (Score 2, Insightful) 211

This is exactly the issue - AI is great, as an intern.

"Oh, this new thing lets it see the whole project in context!" - Great, then why did it just add a bunch of functions that already exist? Also, why did it do that in a completely inappropriate spot?

"You just need to write a better prompt. You can even define style guides and stuff." - Great. Will that make it stop checking if that value that I clearly defined is null every freaking line?

"It's just following best practices." - No. It's following a path it found through all the StackOverflow questions it trained on in order to get to something that aligned with a vector representing something approximating the tokens associated with my question because it DOESN'T ACTUALLY THINK!

All of this is the type of crap an intern does. Except an intern actually learns, and you can start trusting them with more.

Comment Flavor Ade (Score 2) 211

It feels like a really good extension of search results (hallucinations notwithstanding). I use it daily for little things...and then I go back through and clean it up to be actually usable. But I hate that when I try to point things like that out, I get responses like, "Oh, you just need a better prompt." These are people who couldn't do a proper Google search just a couple years ago, but suddenly they're full-blown engineers.

On top of that, I've got people I know who have ceded all their thinking ability to ChatGPT, and it's resulted in them sounding like idiots. One of my supervisors styles himself as a chemist / inventor. Mostly it's benign - he plays with mix ratios to get the result he wants. But lately, he's quite literally gotten himself into arguments with professional industrial chemists, because he started letting ChatGPT do all the math and reaction calculations and can't understand how it could be wrong.

I've got marketing contacts whose eyes lose focus on a Zoom meeting because they're asking ChatGPT how to do the thing we're talking about, and then instead of asking appropriate follow-up questions to the group, they start spouting nonsense.

The thing I keep going back to is this: "In your own area of expertise, when you ask it questions, you can readily see the shortcomings. Why then do you treat it as gospel when asking about areas outside of your expertise?"

Don't get me wrong - I *do* think it's impressive. *Quite* impressive. But in real world scenarios I see it fail *all the time*, and everyone needs to stop pretending that this isn't happening.
