
Comment Re:3D printing wasn't the problem (Score 1) 98

I'll find out in mid-January, lol - it's en route on the Ever Acme, with a transfer at Rotterdam. ;) But given our high local prices, it's about the same cost to me as 60kg of local filament, so as long as the odds of it being good are better than 1 in 8, I come out ahead, and I like those odds ;)

That said, I have no reason to think that it won't be. Yasin isn't a well-known brand, but a lot of other brands (Hatchbox, for example) often sell white-labeled Yasin as their own. And everything I've seen about their operation looks quite professional.

Comment Re:A funny scary thing (Score 2) 60

Unless you are at the North or South Pole or on top of one of the highest mountains, you are unlikely to be getting an average of one SEU per week in one computer due to cosmic rays. I would attribute most of the errors you see to other causes: marginal timing compatibility, power glitches, an overburdened fan, a leaky microwave nearby, several of these in combination, etc. Cosmic rays sound cool, but most bit flips have more boring causes.

In my case, I saw a lot more errors when I was running compute-intensive jobs: read files, decompress them, run a domain-specific compression to text, generate SHA-256 hashes, then compress with a general-purpose compressor, in parallel on 24 cores. The location of the errors was random, as in your system, but the correlation with processor load convinced me it wasn't caused by cosmic rays.
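For the curious, the shape of each job was roughly like this (a simplified sketch only; convert_to_text stands in for the domain-specific step, and the file names and core count are illustrative):

    import gzip, hashlib, zlib
    from multiprocessing import Pool

    def convert_to_text(data: bytes) -> bytes:
        # Placeholder for the domain-specific "compression to text" step.
        return data

    def job(path: str) -> str:
        raw = gzip.open(path, "rb").read()          # read + decompress
        text = convert_to_text(raw)                 # domain-specific conversion
        digest = hashlib.sha256(text).hexdigest()   # SHA-256 of the result
        _ = zlib.compress(text, 9)                  # general-purpose recompression
        return digest

    if __name__ == "__main__":
        files = ["chunk_%04d.gz" % i for i in range(100)]  # hypothetical inputs
        with Pool(24) as pool:                      # 24 cores, as described above
            digests = pool.map(job, files)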

Comment Re:Overthinking it... (Score 4, Interesting) 60

Their developers are supposed to be very competent and careful, but mostly because of culture and the application of development processes that consider lots of potential errors. The default assurance guidance documents (don't call them standards, for rather pedantic reasons) are ED-79 (for Europe, since we're talking about Airbus; jointly published as ARP4754 in the US) for aircraft and system design, ARP4761/ED-135 for the accompanying safety analyses, DO-178/ED-12 for software development, and DO-254/ED-80 for hardware development. DO-254 gets augmented by AC 20-152A to clarify a number of points. Regulators who certify the system or aircraft also have guidance about how much involvement they should have in the development process, based on lots of factors, most of which boil down to the developers' prior experience.

You can read online about the objectives in those documents, but flight control systems have potentially catastrophic failure effects, so they need to be developed to DAL A. For transport category aircraft, per AC 25.1309-1B, a catastrophic effect should occur no more often than once per billion operational hours. Catastrophic effects must not result from any single failure; there must be redundancy in the aircraft or system. Normally, the fault tree analysis can only ignore an event if it's two or three orders of magnitude less likely than the overall objective.
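To make the arithmetic concrete (purely illustrative numbers, not from any real analysis), the simplified textbook version looks like this:

    # Illustrative figures only -- not from any actual aircraft analysis.
    objective = 1e-9          # max catastrophic rate per flight hour (AC 25.1309-1B)
    channel_failure = 1e-5    # assumed failure rate of one independent channel

    # Simplified "both independent channels fail in the same hour" approximation
    combined = channel_failure ** 2
    print(combined)           # 1e-10, an order of magnitude below the 1e-9 objective

    # An event 2-3 orders of magnitude below the objective can usually be
    # neglected in the fault tree; 1e-10 is not that small, so it stays in.
    negligible_threshold = objective / 1000
    print(combined < negligible_threshold)   # False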

Cosmic rays normally cause more than one single-event upset per 10 trillion hours of operation, so normally there should be hardware and software mechanisms to avoid effects from them. In hardware, it might be ECC plus redundant processors with a voting mechanism. For software, it might be what DO-178 calls multiple version dissimilar software independence.
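The voting idea in toy form (a simple 2-out-of-3 majority voter; real avionics voters also deal with timing, health monitoring, and disagreement reporting):

    # Toy 2-out-of-3 majority voter -- illustrative only, not flight code.
    def vote(a, b, c):
        # If any two redundant channels agree, their value wins;
        # a single upset channel is outvoted.
        if a == b or a == c:
            return a
        if b == c:
            return b
        raise RuntimeError("all three channels disagree -- flag the fault")

    print(vote(42, 42, 7))   # channel c suffered an upset; the output is still 42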

I don't know Airbus itself, and one always has the chance of something like the Boeing 737 MAX MCAS. But typically, companies and regulators do expect these systems to be extremely reliable because the developers are professional and honest: not necessarily super-competent, but super-careful about applying good development practices, having independence in development processes as well as the product, and checking their work with process and quality assurance teams who know what to look for and what to expect.

Comment Re:A funny scary thing (Score 2) 60

Try a one-week memtest86 run, then?

I used to have similar problems (with 4x32 GB sticks), but they went away when I replaced my RAM. Those kinds of problems can also be caused by voltage fluctuations, either from the input power or from load (and memtest86 isn't good at generating CPU or GPU load) -- even without overclocking. It could be cosmic rays, but it could also be much more local causes.

Comment Re:So (Score 3, Insightful) 56

I'm not even a manager and there are, at present count, 30 hours of meetings on my calendar. I go to fewer than half; I just let the meetings sandbag my calendar so that new meetings are difficult to schedule. Either you know me and we have a reason to meet, or fuck you.

The actual managers are much worse off. Corporate life is stupid.

Comment Re: Not helpful (Score 3, Funny) 26

ClipGPT: "It looks like you're trying to manage public relations in connection with an advertising campaign. Sterling Cooper & Partners is an internationally recognized agency with a long track record of successful campaigns in this area. Can I help you navigate to Link Target?"

Comment Re:Filming people getting CPR (Score 3, Interesting) 148

We need to stop pretending like it's perfectly OK to film strangers in public. Legal? Sure. Should you be doing it? 9 times out of 10, no.

It's long past time we had a real debate about the law, too. Just because something has been the law for a long time doesn't necessarily mean it should remain the law as times change. Clearly there is a difference between the implications of casually observing someone as you pass them in a public street, when you probably forget them again a moment later, and the implications of recording someone with a device that will upload the footage to a system run by a global corporation. There the footage can be permanently stored, shared with other parties, and analysed, including through image and voice recognition that can potentially identify anyone in it, where they were, what they were doing, who they were doing it with, and maybe what they were saying and what they had with them. It can then be combined with other data sources, using any or all of those criteria as search keys, to build a database at the scale of the entire global population over their entire lifetimes, to be used by parties unknown for purposes unknown, all without the consent or maybe even the knowledge of the observed people who might be affected as a result.

I don't claim to know a good answer to the question of what we should allow. Privacy is a serious and deep moral issue with far-reaching implications and it needs more than some random guy on Slashdot posting a comment to explore it properly. But I don't think the answer is to say anything goes anywhere in public either just because it's what the law currently says (laws should evolve to follow moral standards, not the other way around) or because someone likes being able to do that to other people and claims their freedoms would be infringed if they couldn't record whatever they wanted and then do whatever they wanted with the footage. With freedom comes responsibility, including the responsibility to respect the rights and freedoms of others, which some might feel should include more of a right to privacy than the law in some places currently protects.

That all said, people who think it's cool to film other human beings in clear distress or possibly even at the end of their lives just for kicks deserve to spend a long time in a special circle of hell. Losing a friend or family member who was, for example, killed in a car crash is bad enough. Having to relive their final moments over and over because people keep "helpfully" posting the footage they recorded as they drove past is worse. If you're not going to help, just be on your way and let those who are trying to protect a victim or treat a patient get on with it.

Comment Re:Way too early, way too primitive (Score 1) 62

The current "AI" is a predictive engine.

And *you* are a predictive engine as well; prediction is where the error metric for learning comes from, and there's a toy illustration of that at the end of this comment. (I removed the word "search" from both because neither works by "search". Neither you nor LLMs are databases.)

It looks at something and analyzes what it thinks the result should be.

And that's not AI why?

AI is, and has always been, the field of tasks that are traditionally hard for computers but easy for humans. There is no question that these are a massive leap forward in AI, as it has always been defined.
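To be concrete about "prediction is where the error metric comes from", here's the basic idea in toy form (illustrative numbers; real models predict over vocabularies of tens of thousands of tokens):

    import math

    # Toy next-token prediction: the model assigns probabilities to candidate
    # tokens, and the training signal is how badly it predicted the token
    # that actually came next.
    predicted = {"cat": 0.7, "dog": 0.2, "car": 0.1}
    actual_next_token = "dog"

    loss = -math.log(predicted[actual_next_token])  # cross-entropy for this step
    print(loss)  # ~1.61; a perfect prediction (probability 1.0) would give loss 0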

Comment Re:And if we keep up with that AI bullshit we (Score 1) 62

It is absolutely crazy that we are all very very soon going to lose access to electricity

Calm down. Total AI power consumption (all forms of AI, both training and inference) for 2025 will be in the ballpark of 50-60 TWh. Video gaming consumes about 350 TWh/year, and growing. The world consumes ~25,000 TWh/yr of electricity. And electricity is only about 1/5 of global energy consumption.
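Spelled out, using the rough figures above:

    # Ballpark figures from above -- estimates, not precise measurements.
    ai_twh = 55                        # total AI power use, 2025 estimate (TWh/yr)
    gaming_twh = 350                   # video gaming (TWh/yr)
    electricity_twh = 25_000           # world electricity consumption (TWh/yr)
    energy_twh = electricity_twh * 5   # electricity is ~1/5 of total energy use

    print(ai_twh / electricity_twh)    # ~0.002, i.e. about 0.2% of world electricity
    print(ai_twh / energy_twh)         # ~0.0004, i.e. about 0.04% of world energy
    print(ai_twh / gaming_twh)         # ~0.16, i.e. about a sixth of gaming's usage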

AI datacentres are certainly a big deal to the local grid where they're located - in the same way that any major industry is a big deal where it's located. But "big at a local scale" is not the same thing as "big at a global scale." Just across the fjord from me there's an aluminum smelter that uses half a gigawatt of power. Such is industry.

Comment Re:Sure (Score 4, Informative) 62

Most of these new AI tools have gained their new levels of performance by incorporating transformers in some form or another, in part or in whole. The transformer architecture is the backbone of LLMs.

Even in cases where transformers aren't used these days, they're often imitated. For example, the top leaderboards in vision models are a mix of ViTs (Vision Transformers) and hybrids (CNN + transformer), but there are still some "pure CNNs" that rank high. The best-performing "pure CNNs" these days, though, use techniques modeled on what transformers do, e.g. filtering data with an equivalent of attention and the like.
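For anyone curious what "an equivalent of attention" means in practice: the core operation is just a weighted mixing of value vectors, where the weights come from query-key similarity. A minimal numpy sketch of scaled dot-product attention (nothing model-specific, shapes are made up):

    import numpy as np

    def attention(Q, K, V):
        # Scaled dot-product attention: each query mixes the value vectors,
        # weighted by how similar the query is to each key.
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
        return weights @ V

    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))  # 4 tokens, dim 8
    print(attention(Q, K, V).shape)   # (4, 8)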

The simple fact is that what enabled LLMs is enabling most of this other stuff too.
