If I ran a business what would I need Confluent to do for me?
They're dragging buzzwords through the water to see whether they get any nibbles.
Your MBA/PHB eats this shit right up.
If you're so stupid/lacking in willpower/whatever that you immediately go off and buy something because you saw it online, you deserve what you get.
You're an adult, supposedly with something approaching intelligence and self-control. If you're $50K in debt because you're continually buying junk, the problem is not with the influencers.*
* They're called shills. Call them what they are.
I'll find out in mid-January, lol - it's en route on the Ever Acme, with a transfer at Rotterdam.
That said, I have no reason to think that it won't be. Yasin isn't a well-known brand, but a lot of other brands (Hatchbox, for example) often sell white-labeled Yasin as their own. And everything I've seen about their op looks quite professional.
Unless you are at the North or South Pole or on top of one of the highest mountains, you are unlikely to be getting an average of one SEU per week in one computer due to cosmic rays. I would attribute most of the errors you see to other causes: marginal timing compatibility, power glitches, an overburdened fan, a leaky microwave nearby, several of these in combination, etc. Cosmic rays sound cool, but most bit flips have more boring causes.
In my case, I saw a lot more errors when I was running compute-intensive jobs: read files, decompress them, run a domain-specific compression to text, generate SHA-256 hashes, compress using a general-purpose compressor, all in parallel on 24 cores. The location of the errors was random, like in your system, but the correlation with processor load convinced me it wasn't caused by cosmic rays.
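For the curious, the job had roughly this shape. A minimal sketch only, assuming gzip input and lzma output; the paths and the stand-in for the domain-specific step are placeholders, not my actual code:

    import gzip, hashlib, lzma
    from multiprocessing import Pool

    def process(path):
        # read + decompress one input file
        with gzip.open(path, "rb") as f:
            data = f.read()
        # placeholder for the domain-specific compression-to-text step
        text = data.decode("utf-8", errors="replace")
        digest = hashlib.sha256(text.encode()).hexdigest()  # SHA-256
        packed = lzma.compress(text.encode())  # general-purpose compression
        return path, digest, len(packed)

    if __name__ == "__main__":
        paths = ["chunk%04d.gz" % i for i in range(1000)]  # placeholder inputs
        with Pool(24) as pool:  # saturate all 24 cores
            for path, digest, size in pool.imap_unordered(process, paths):
                print(path, digest, size)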
Their developers are supposed to be very competent and careful, but mostly because of culture and the application of development processes that consider lots of potential errors. The default assurance guidance documents (don't call them standards, for rather pedantic reasons) are ED-79 (for Europe, because we're talking about Airbus; jointly published as ARP4754 in the US) for aircraft and system design, ARP4761/ED-135 for the accompanying safety analyses, DO-178/ED-12 for software development and DO-254/ED-80 for hardware development. DO-254 gets augmented by AC 20-152A to clarify a number of points. Regulators who certify the system or aircraft also have guidance about what level of involvement they should have in the development process, based on lots of factors, but with most of them boiling down to prior experience of the developers.
You can read online about the objectives in those documents, but flight control systems have potentially catastrophic failure effects, so they need to be developed to DAL A. For transport category aircraft, per AC 25.1309-1B, a catastrophic effect should occur no more often than once per billion operational hours. Catastrophic effects must not result from any single failure; there must be redundancy in the aircraft or system. Normally, the fault tree analysis can only ignore an event if it's two or three orders of magnitude less likely than the overall objective.
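To make the numbers concrete, here's a back-of-the-envelope sketch of how that budget plays out. The per-channel failure rates are invented for illustration and come from no real safety analysis:

    budget = 1e-9         # max catastrophic events per flight hour (AC 25.1309-1B)
    p_a = 1e-5            # hypothetical failure rate of channel A, per hour
    p_b = 1e-5            # hypothetical failure rate of channel B, per hour
    combined = p_a * p_b  # both must fail; assumes independence -> 1e-10/hr
    print(combined <= budget)  # True: the redundant pair meets the budget
    # events ~3 orders of magnitude below the objective can usually
    # drop out of the fault tree entirely
    print(budget / 1000)       # 1e-12/hr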
Cosmic rays normally cause more than one single-event upset per 10 trillion hours of operation, so normally there should be hardware and software mechanisms to avoid effects from them. In hardware, that might be ECC plus redundant processors with a voting mechanism. In software, it might be what DO-178 calls multiple-version dissimilar software.
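The voting idea itself is simple. A toy sketch of a majority voter over three redundant channels; this illustrates the concept only and has nothing to do with any real flight-control implementation:

    from collections import Counter

    def vote(a, b, c):
        # take the value at least two of the three channels agree on
        value, n = Counter([a, b, c]).most_common(1)[0]
        if n >= 2:
            return value  # the upset channel is simply outvoted
        raise RuntimeError("no majority: flag the channels and fail over")

    print(vote(42, 42, 17))  # one bit-flipped channel -> still 42

In a real system the channels would also run on dissimilar hardware with dissimilarly developed software, per the documents above, so a common-mode design error can't outvote the truth.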
I don't know Airbus itself, and one always has the chance of something like the Boeing 737 MAX MCAS. But typically, companies and regulators do expect these systems to be extremely reliable because the developers are professional and honest: not necessarily super-competent, but super-careful about applying good development practices, having independence in development processes as well as the product, and checking their work with process and quality assurance teams who know what to look for and what to expect.
Try a one-week memtest86 run, then?
I used to have similar problems (with 4x32 GB sticks), but they went away when I replaced my RAM. Those kinds of problems can also be caused by voltage fluctuations, either from the input power or from load (and memtest86 isn't good at increasing CPU or GPU load) -- even without overclocking. It could be cosmic rays, but it could also be something much more local.
I'm not even a manager, and there are, at present count, 30 hours of meetings on my calendar. I go to less than half; I just let the meetings sandbag my calendar so that new meetings are difficult to schedule. Either you know me and we have a reason to meet, or fuck you.
The actual managers are much worse off. Corporate life is stupid.
ClipGPT: "It looks like you're trying to manage public relations in connection with an advertising campaign. Sterling Cooper & Partners is an internationally recognized agency with a long track record of successful campaigns in this area. Can I help you navigate to Link Target?"
This is the sound of the other shoe dropping.
We need to stop pretending that it's perfectly OK to film strangers in public. Legal? Sure. Should you be doing it? Nine times out of ten, no.
It's long past time we had a real debate about the law, too. Just because something has been the law for a long time doesn't necessarily mean it should remain the law as times change. Clearly there is a difference between casually observing someone as you pass them in a public street, when you probably forget them again a moment later, and recording them with a device that uploads the footage to a system run by a global corporation. There it can be permanently stored, shared with other parties, and analysed, including through image and voice recognition that can potentially identify anyone in the footage, where they were, what they were doing, who they were doing it with, and maybe what they were saying and what they had with them. It can then be combined with other data sources, using any or all of those criteria as search keys, to build a database covering the entire global population over their entire lifetimes, to be used by parties unknown for purposes unknown, all without the consent or maybe even the knowledge of the people observed.
I don't claim to know a good answer to the question of what we should allow. Privacy is a serious and deep moral issue with far-reaching implications and it needs more than some random guy on Slashdot posting a comment to explore it properly. But I don't think the answer is to say anything goes anywhere in public either just because it's what the law currently says (laws should evolve to follow moral standards, not the other way around) or because someone likes being able to do that to other people and claims their freedoms would be infringed if they couldn't record whatever they wanted and then do whatever they wanted with the footage. With freedom comes responsibility, including the responsibility to respect the rights and freedoms of others, which some might feel should include more of a right to privacy than the law in some places currently protects.
That all said, people who think it's cool to film other human beings in clear distress or possibly even at the end of their lives just for kicks deserve to spend a long time in a special circle of hell. Losing a friend or family member who was, for example, killed in a car crash is bad enough. Having to relive their final moments over and over because people keep "helpfully" posting the footage they recorded as they drove past is worse. If you're not going to help, just be on your way and let those who are trying to protect a victim or treat a patient get on with it.
Yeah, it's not even worth considering for something like 15-20 kg. A full pallet in this case is 464 kg.
The current "AI" is a predictive engine.
And *you* are a predictive engine as well; prediction is where the error metric for learning comes from. (I removed the word "search" from both, because neither works by "search"; neither you nor LLMs are databases.)
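If "error metric" sounds abstract, here's a toy sketch of what learning from prediction error looks like; the data and learning rate are made up:

    # fit y = w*x by nudging w to shrink the prediction error
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, observed outcome)
    w, lr = 0.0, 0.05
    for _ in range(200):
        for x, y in data:
            err = w * x - y    # the prediction error IS the learning signal
            w -= lr * err * x  # gradient step on the squared error
    print(w)  # ends near 2.0: learned purely from prediction mistakes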
It looks at something and analyzes what it thinks the result should be.
And that's not AI why?
AI is, and has always been, the field of tasks that are traditionally hard for computers but easy for humans. There is no question that these are a massive leap forward in AI, as it has always been defined.
It is absolutely crazy that we are all very very soon going to lose access to electricity
Calm down. Total AI power consumption (all forms of AI, both training and inference) for 2025 will be in the ballpark of 50-60 TWh. Video gaming consumes about 350 TWh/year, and growing. The world consumes ~25000 TWh/yr of electricity. And electricity is only about a fifth of global energy consumption.
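A quick sanity check on those figures (the numbers are the same rough ballparks quoted above):

    ai = 55              # TWh/yr, midpoint of the 50-60 estimate for all AI in 2025
    gaming = 350         # TWh/yr, video gaming
    electricity = 25000  # TWh/yr, global electricity
    print(100 * ai / electricity)        # ~0.22% of global electricity
    print(100 * ai / (electricity * 5))  # ~0.04% of total energy (electricity ~1/5)
    print(gaming / ai)                   # gaming draws ~6x what AI does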
AI datacentres are certainly a big deal to the local grid where they're located - in the same way that any major industry is a big deal where it's located. But "big at a local scale" is not the same thing as "big at a global scale." Just across the fjord from me there's an aluminum smelter that uses half a gigawatt of power. Such is industry.
That "ruler study" was ancient. It's mentioned in peer review at least as early as 2018, and might be even older.
Believe it or not, people in the field are familiar with these sorts of things that you just read about.
Every successful person has had failures, but repeated failure is no guarantee of eventual success.