
Comment Re:Just shows he does not really understand hardwa (Score 1) 79

One major difference, assuming you've got full platform support (which should be the case on any server or workstation that isn't an utter joke, but can be a problem with some desktop boards that 'support' ECC only in the sense that AMD didn't laser it off the way Intel does, and don't really care), is that ECC RAM can (and should) report even correctable errors; so you get considerably more warning than you do with non-ECC RAM.

If you pay no attention to error reports, ECC and non-ECC are both rolling the dice, though ECC has better odds. Proper ECC plus Linux EDAC support will let you keep an eye on worrisome events (normally with something like rasdaemon; I'm not sure what other options there are for aggregating the kernel-provided data) and, unless the RAM fails particularly dramatically and thoroughly, will give you much better odds of knowing that you have a hardware problem while it is still at correctable levels; so you can take appropriate action (either replacement, or on the really fancy server systems, some 'chipkill'-like arrangement where the specific piece of DRAM that is failing gets cut out of use when deemed unreliable, without having to bring the system down).
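For the curious, a minimal sketch of what watching those counters looks like without rasdaemon, assuming a Linux box with an EDAC driver loaded and the standard sysfs layout (`/sys/devices/system/edac/mc/mc*/ce_count` and `ue_count`); this is an illustration, not a replacement for a real RAS daemon:

```python
#!/usr/bin/env python3
"""Minimal EDAC counter poll via sysfs (illustrative sketch only).

Assumes the standard Linux EDAC sysfs layout:
    /sys/devices/system/edac/mc/mc*/{ce_count,ue_count}
"""
import glob
import os


def read_edac_counts(base="/sys/devices/system/edac/mc"):
    """Return {mc0: {'ce_count': N, 'ue_count': N}, ...} for each controller."""
    counts = {}
    for mc in sorted(glob.glob(os.path.join(base, "mc*"))):
        entry = {}
        for name in ("ce_count", "ue_count"):
            try:
                with open(os.path.join(mc, name)) as f:
                    entry[name] = int(f.read().strip())
            except OSError:
                entry[name] = None  # counter absent or EDAC not loaded
        counts[os.path.basename(mc)] = entry
    return counts


if __name__ == "__main__":
    for mc, entry in read_edac_counts().items():
        # A steadily rising ce_count is the early warning the comment above
        # is talking about; any nonzero ue_count is already serious.
        print(mc, entry)
```

A cron job diffing these numbers against the last run is the poor man's version of what rasdaemon does properly (with DIMM labels, decoding, and persistence).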

Comment Re:BSoD was an indicator (Score 1) 79

Sometimes you'd get a BSOD that was a fairly clear call to action, when the error called out something recognizable as the name of part of a driver; but that is mostly just a special case of the "did you change any hardware or update any drivers recently?" troubleshooting that people have been doing more or less blind since forever. Admittedly it's slightly more helpful in cases where, as far as you know, the answer to those questions is 'no', but Windows Update did slip you a driver update, or a change in OS behavior means that a driver that used to work is now troublesome.

Realistically, as long as the OS can be configured to collect actual crash dump material if you want it, it's hard to object too strongly to the idea that just rebooting fairly quickly is the better choice versus trying to make the BSOD a genuinely useful debugging resource; especially given how rare it is for the person with useful debugging ability to happen to be at the console at the time of the crash (rather than an end user who is ill equipped to make sense of it, or a system that mostly does server stuff, quite likely not on actual physical hardware, where nobody has touched the physical console in months or years and it's more or less useless to display a message there, rather than rebooting and hoping that things come up enough for management software to grab the dump files, or giving up and leaving the system in EMS so that someone can attach to that console).
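For what it's worth, the Linux equivalent of the "just reboot quickly" policy is a couple of sysctls; a minimal sketch (assumes root, and assumes the actual dump capture is handled separately by something like kdump/kexec):

```python
#!/usr/bin/env python3
"""Sketch of the 'reboot quickly after a crash' policy on Linux.

Assumptions: run as root; dump collection itself is configured
separately (e.g. kdump). The two knobs used here are standard sysctls:
    kernel.panic         -> seconds to wait before rebooting after a panic
    kernel.panic_on_oops -> treat an oops as a full panic too
"""

SETTINGS = {
    "/proc/sys/kernel/panic": "10",         # reboot 10 seconds after a panic
    "/proc/sys/kernel/panic_on_oops": "1",  # don't limp along after an oops
}


def apply_settings(settings):
    """Write each value to its /proc path; report rather than die on failure."""
    for path, value in settings.items():
        try:
            with open(path, "w") as f:
                f.write(value)
        except OSError as exc:
            print(f"could not set {path}: {exc}")


if __name__ == "__main__":
    apply_settings(SETTINGS)
```

The same values can of course just live in `/etc/sysctl.d/`; the point is only that "display a screen forever" versus "reboot and let management tooling fetch the dump" is a one-line policy choice, not a deep design problem.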

Comment Re:Linus is right, but this is really not news (Score 1) 79

Win9x and Win2k (and the other NT descendants) are fundamentally different operating systems. In general, NT had a much more robust kernel, so system panics were and remain mainly hardware issues, or, particularly in the old days, dodgy drivers (which is just another form of hardware issue). I've seen plenty of panics on *nix systems and Windows systems, and I'd say probably 90-95% were hardware failures, mainly RAM, but on a few occasions something wrong with the CPU itself or with other critical hardware like storage device hardware. There were quite a few very iffy IDE cards back in the day.

The other category of failure, various kinds of memory overruns, has all but disappeared now that memory management, both in silicon and in kernels, has radically improved. So I'd say these are pretty much extinct, except maybe in some very edge cases where I'd argue someone is disabling protections or breaking rules to eke out some imagined extra benefit.

Comment Are there really no PD Asian fonts? (Score 1) 94

I would have thought by now, after 40 years of computerization, that there would be some robust Asian language fonts available in the public domain or perhaps licensed through government agencies to promote their use.

All the way back in the 1980s, I was involved in a Japanese/Chinese/English photo-typesetter project using what I believe were freely available font sets.

Seems like the Japanese game companies should switch to Google or MS fonts. $20K/year in Japan is someone's salary.

Comment Just shoddy... (Score 4, Interesting) 95

What seems most depressing about this isn't the fact that the bot is stupid, but that something about 'AI' seems to have caused people who should have known better to just ignore precautions that are old, simple, and relatively obvious.

It remains unclear whether you can solve the bots-being-stupid problem even in principle; but it's not like computing has never dealt with actors that either need to be saved from themselves or are likely malicious; and between running more than a few web servers, building a browser, and slapping together an OS, it's not like Google doesn't have people on payroll who know about that sort of thing.

In this case, the bot being a moron would have been a non-issue if it had simply been confined to running shell commands inside the project directory (which is presumably under version control, so worst case you just roll back), not above it where it can hose the entire drive.
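The kind of confinement meant here is genuinely old and simple; a hedged sketch (the names `run_confined` and `PROJECT_ROOT` are hypothetical, not from the article, and a real deployment would add an OS-level sandbox on top of the path check):

```python
#!/usr/bin/env python3
"""Illustrative sketch: keep an agent's file access inside one directory.

Hypothetical names throughout; the point is resolving paths before use
and pinning the working directory, so '..' tricks and symlinks out of
the tree are caught before anything touches the wider filesystem.
"""
from pathlib import Path
import subprocess

PROJECT_ROOT = Path("/home/user/project").resolve()


def is_inside_project(path: str, root: Path = PROJECT_ROOT) -> bool:
    """True if `path`, resolved relative to `root`, stays under `root`."""
    # resolve() follows symlinks and collapses "..", so escapes like
    # "../../etc/passwd" are rejected here rather than discovered later.
    target = (root / path).resolve()
    return target == root or root in target.parents


def run_confined(cmd: list[str], root: Path = PROJECT_ROOT):
    # Worst case the bot trashes the working tree, which version control
    # can roll back -- not the whole drive.
    return subprocess.run(cmd, cwd=root, check=False)


if __name__ == "__main__":
    print(is_inside_project("src/main.py"))       # True
    print(is_inside_project("../../etc/passwd"))  # False
```

None of this is novel; it's the same path-normalization discipline web servers have needed since directory-traversal attacks were discovered, which is rather the comment's point.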

There just seems to be something cursed about 'AI' products (not sure if it's the rush to market, or if mediocre people are the most fascinated by the tool) that invites a really sloppy, heedless, lazy failure to care about useful, mature, relatively simple mitigations for the well-known (if not particularly well-understood) faults of the 'AI' behavior itself.

Comment Re:Only part of the story... (Score 1) 126

What always puzzled me about Intel's more peripheral activities is that they seemed to fall into a weird, unhelpful gap between 'doing some VC with the Xeon money, rather than just parking it in investments one notch riskier than savings accounts' and 'strategic additions to the core product'. That normally meant the non-core stuff had limited synergies with Intel systems, carried the risks of being a relatively minor program at a big company with a more profitable division, and was thus subject to being coopted or killed at any time.

Seemed to happen both with internal projects and with acquisitions. Intel buys Altera because, um, FPGAs are cool and useful and it will 'accelerate innovation' if Intel is putting the PCIe-connected FPGA on the CPU's PCIe root complex rather than a third-party vendor doing it? Or something? Even at the tech demo level I'm not sure we saw a single instance of an FPGA being put on the same package as a CPU (despite 'Foveros' also being the advanced-packaging hotness that Intel assured us would make gluing IP blocks together easy and awesome). They just sort of bought them and churned them without any apparent integration. No 'FPGA with big fuck-off memory controller or PCIe root we borrowed from a Xeon' type part. No 'Intel QuickAssist Technology now includes programmable FPGA blocks on select parts' CPUs or NICs. Just sort of: Intel sells Altera stuff now.

On the network side, Intel just kind of did nothing with and then killed off both the internal Omni-Path (good thing it didn't turn out that having an HPC-focused interconnect you could run straight from your compute die would have been handy in the future... luckily NVLink never amounted to much...) and the stuff they bought from Barefoot; and at this point barely seems to ship NICs without fairly serious issues. I'm not even counting Lantiq, which they seem to have basically just spent five years passing on to MaxLinear with minimal effect; unless that one was somehow related to that period where they sold cable modem chipsets that really sucked. It's honestly downright weird how bad the news seems to be for anything Intel dabbles in that isn't the core business.

Comment Re:Quality Work Can't Be Rushed (Score 1) 126

Not delivering on schedule is absolutely a symptom; it's just a somewhat diagnostically tricky one since the failure can come from several directions; and 'success' can be generated by gaming the system in several places, as well as by successful execution.

In the 'ideal' case, things mostly happening on schedule is a good sign because it means both that the people doing the work are productive and reliable, and that the people doing the planning have a decent sense (whether personally, or by knowing what they don't know, where they can get an honest assessment, and getting one) of how long things are going to take; whether there's something useful that can be added, or whether forcing some mythical man-month on the people already working on it would just be a burden; whether anything in the critical path is going to disrupt a bunch of other projects; and so on.

If you start losing your grip on the schedule, that fact alone doesn't tell you whether your execution is dysfunctional or your planners are delusional, or some combination of the two. Unhelpfully, the relationship between how visibly the Gantt charts are perturbed and how big the underlying problem is is non-obvious (a company whose execution is robust but whose planners live in a world of vibes-based theatre, and one whose execution is dysfunctional and crumbling and whose planners are reusing estimates from the time before the rot set in, might blow a roughly equal number of deadlines, despite one having mostly a fluff problem and the other probably being in terminal decline); but it's never a good sign.

Comment Re:Seems reasonable (Score 2) 25

It seems reasonable; but also like something that should really spook the customers.

It seems to be generally accepted that junior devs start out as more of an investment than a genuine aid to productivity; so you try to pick the ones that seem sharp and with it, put some time into them, and treat them OK enough that they at least stick around long enough to become valuable and do some work for you.

If that dynamic is now being played out with someone else's bots, you are making that investment in something that is less likely to leave (whatever as-a-service you are paying for will happily continue to take your money), but much more likely to have its price regularly and aggressively adjusted based on its perceived capabilities, and to have whatever it learned from you immediately cloned out to every other customer.

Sort of a hybrid of the 'cloud' we-abstract-the-details arrangement and the 'body shop' we-provision-fungible-labor-units arrangement.

Some customers presumably won't care much, in the way that people who use Wix because it's more professional than only having your business on Facebook don't exactly consider web design or site reliability to be relevant competencies; their choice is mostly between pure off-the-shelf software and maybe-the-vibe-coded-stuff-is-good-enough. But if your operation depends in any way on your comparative ability to build software, Amazon is basically telling you that any of the commercial offerings are actively process-mining you out of that advantage as fast as the wobbly state of the tech allows.

Comment Well, that answers my question... (Score 4, Insightful) 38

So the 'hyperloop' people have a cool website, while the 'train' people are just plain getting on with building stuff, whether conventional or the now-quarter-century-old maglev option.

Looks like someone signed up for another round of 'faff with apps vs. offshoring our entire high tech supply chain' and hoped it would work better this time.

And some dumbass 'managing director' is telling us that a gigantic safety-critical vacuum system is 'not affected by strikes', more or less because he has no idea what the maintenance and operations would involve? Truly a joke telling itself.
