Comment Faster, no. Multi-tasking yes. (Score 1) 139

As a developer, AI workflows still rub me the wrong way. If I were dedicated to the task, I'd produce better code.
As a human, AI workflows let me have a life. I can let the agents knock out the easy things while I'm working on other tasks. I still need to design what's to be worked on, review the code, fix the boneheaded mistakes they make, etc. It's basically like having a junior developer assigned to you.

Which brings up an important point: junior developers need clear instructions/requirements, and so do AIs. I looked at a recent Ars Technica article comparing coding agents, and their prompt was one or two lines to create a clone of Minecraft. I just stopped reading at that point. If you're not starting with a prompt that's about half a page or more:
1. You're probably going to get garbage
2. Your subsequent sessions working on the code aren't going to work as well because the new agent session is probably going to infer slightly different requirements (AI "temperature").

Even given a "simple" task like "create a Minecraft clone", a senior developer/architect is going to come back with at least a few questions. A junior developer is either going to ask a ton of questions _or_ (worse) they're not going to ask any questions at all.

Take the time to give your AI junior developer clear requirements and you're going to be a lot happier with the results.
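For the Minecraft-clone example, a prompt with enough to work from would start more like this (every detail below is invented, just to show the shape):

Build a browser-based voxel sandbox in TypeScript using Three.js.
- World: one 64x64x32 chunk, procedurally generated from a seed parameter
- Blocks: grass, dirt, stone, water; place/remove with left/right click
- Player: first-person camera, WASD movement, jump, simple AABB collision
- Persistence: save/load the world to localStorage
- Out of scope: multiplayer, mobs, crafting

...and then another half page on allowed libraries, code style, and how it will be tested.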

Comment Re:Cooperation Governments needed (Score 4, Informative) 45

Uh...

- China and the US both filed a brief with the UN (so it was somewhat of a big deal)
- Starlink did tell the State Department which apparently did not pass the information on (China doesn't seem to have notified the US in this case either)
- The information was apparently not passed on because it wasn't deemed a risk (bad decision)
- The CSS did an anti-collision burn, but the Starlink satellite also did an anti-collision burn (this satellite did not)
- The closest approach was about 1km (this satellite was 200m)

Neither should have happened, and both should be a big deal, but get out of here with this "but Starlink/US" nonsense.
https://www.thespacereview.com...

Comment Often Excel _is_ the right tool for the job. (Score 5, Interesting) 92

"Time for a database" depends a lot on what they are doing with the spreadsheet. If it's inventory or asset tracking, then yes... wrong tool for the job. However, workbooks like this are often used for forecasting and other financial models which don't map well to databases because there are cascading formulas being applied (I've seen sheets that take minutes to update).

Yes, you can do it with Pandas, NumPy, etc., but the financial staff know Excel and they know it very, very well. Porting to something else is time-consuming, expensive and risky: even a minor difference in precision or rounding on sheets like these can throw numbers off by millions of dollars/euros/etc. It's also usually more difficult to debug. With the Excel sheets you can see the numbers at each step/stage, and an experienced user can pretty quickly identify where something is going wrong.
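To make the rounding point concrete, here's a minimal sketch (Python, made-up numbers, not from any real model) of one classic porting pitfall: Excel's ROUND() rounds halves away from zero, while Python's built-in round() rounds halves to even.

from decimal import Decimal, ROUND_HALF_UP

def excel_round(x, digits=0):
    # Emulate Excel's ROUND(): halves are rounded away from zero.
    q = Decimal(1).scaleb(-digits)
    return float(Decimal(str(x)).quantize(q, rounding=ROUND_HALF_UP))

print(round(2.5))        # 2   <- Python rounds halves to even
print(excel_round(2.5))  # 3.0 <- what the spreadsheet would have shown

Multiply a discrepancy like that through thousands of line items and a few layers of cascading formulas and the totals drift fast.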

My background is programming, and when I first came across these types of sheets my first reaction was NOPE. But having worked with financial teams on them, I came to realize I was wrong. Excel is exactly what they need. That's changing; more finance staff have experience with Python and equivalent data-modeling tools. But don't be so quick to judge.

Comment We've had this... context files are your friend. (Score 1) 20

This is exactly what CLAUDE.md, GEMINI.md and AGENTS.md (or copilot-instructions.md) are for. You put your requirements, instructions, guardrails and notes in there. My general flow for things I just want to rip out: put my core requirements into Gemini Deep Research and ask it to flesh them out (the code assist "plan" modes do the same thing, but Deep Research is usually a little better), give the result a good once or twice over to see what it got wrong, add guardrails based on previous experience with the coding agent (e.g. do not use this library, this class of functions, this approach, etc.), and then drop it all in the context file. The resulting code is going to be substantially better than if you just give it a simple prompt. If you find something it's continually screwing up, add another guardrail or note to the context file.

If you're not doing this, you are doing it very, very wrong and you're going to get garbage. You'll still get garbage with a context file, but a lot less of it, and generally it's not completely off the rails.
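For reference, here's a stripped-down sketch of what one of these context files can look like (the project, stack and rules are all made up for illustration):

# Project: internal reporting CLI (example only)
## Stack
- Python 3.11, stdlib plus click; ask before adding any other dependency
## Requirements
- Every public function gets type hints and a docstring
- New code needs a pytest test next to it
## Guardrails
- Do not use pandas; the data here fits in plain dicts
- Do not touch anything under legacy/
- If a test fails, report it; do not "fix" the test to make it pass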

Comment Re:Access does at least appear to be encrypted (Score 3, Interesting) 43

The statement from Yutong could be a little weasel-worded. The article is talking about remote deactivation; the spokesperson is talking about data collection. Nothing in the quoted statement addresses remote control. Chinese companies have a history of doing this when responding to this type of thing: 'A' is broken. What are you talking about, 'B' is just fine... nothing to see here! They misdirect or just flat-out lie (Anker with their robovacs being a recent, good example).

Comment Re:Cool (Score 3, Informative) 79

All Slot 1/Slot 2 CPUs required heatsinks, and most had fans (some OEM parts didn't ship with one, but the OEM was expected to install a fan). Now, the heatsink was often preinstalled on (or part of) the cartridge... maybe that's what you were thinking of? The max TDP was around 20-30 W; not crazy, but it still required a fan or a chonky passive heatsink. The card/slot design also wasn't done for cooling reasons; it was done so they could bundle the L2 cache with a dedicated bus instead of having it on the motherboard (L2 still wasn't on the chip package at that point).

Comment Reasoning models do verify (to varying degrees). (Score 1) 49

Your statement about how LLMs work in general is 1000% correct and critical for people to understand. If you have a decent understanding of how they work, you start to understand the importance of good prompts and guardrails which have a significant impact on the quality of the output.

However, "it does NOT make any reasonable attempt to verify" is no longer true. Reasoning models do make some effort to verify; in some cases it's pretty significant. Gemini Deep Research outputs its "reasoning", and you will frequently see it go down a rabbit hole only to come back and say "nope, that's wrong". That doesn't mean it's always right or never makes mistakes, but I pretty much only use Deep Research/Think now and save Pro for the simple things. Gemini Pro is definitely worth the money. I'm still on the fence about Deep Think; it hasn't really blown my hair back yet, but maybe I'm not throwing the right class of problem(s) at it.

Comment Prime isn't what it used to be... (Score 4, Insightful) 241

At least in Japan. The "deals" are "you save 40% on this item we marked up 39.999999%", deliveries are often delayed by days with no notification or reason, and especially lately it's more "you'll get it when you get it." It's gotten to the point where if I know I need something, I just go to the store and buy it.

Comment Quirky Side Effect, everyone will be named Sato... (Score 1) 85

A weird side effect of this is that family names are dying off in Japan. While I doubt everyone will end up being Sato as this professor is predicting, there is a really short list of common last names, and other ones stand out (thankfully the different kanji used for the names help a bit).

Obligatory: How many a**holes we got on this ship anyhow? Yo! I knew it, I'm surrounded by a**holes.
