Comment Re:Do the home owners
Well, you just said flat out, user pays $150/month in lieu of ISP+power.
Note that last month my ISP+Power was less than $100/month (thanks to solar offsetting it).
"One big reason the XFRA model works is that the average American home only uses about 40 percent of its electrical capacity,"
Yes, sure, at the individual level a house may average 40 percent, with the 200A panel just covering peak demand and anomalous residences, but I guarantee that the grid is *not* sized for everyone to continuously pull down 200A all the time.
When power demand spikes during prolonged weather events, you get rolling blackouts precisely because the grid is not sized to handle the load, even though *technically* everyone is operating within their individual 'capacity'.
Power grids are oversubscribed, and this concept pretends that they aren't.
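To put rough numbers on it (every figure below is hypothetical, made up purely for illustration), the gap between everyone's rated capacity and what the shared equipment is actually sized for looks like this:

```shell
# Hypothetical numbers: 100 homes, each with 200 A service at 240 V,
# sharing distribution equipment sized for 1,200 kVA.
homes=100
amps=200
volts=240
shared_kva=1200

# Worst case: every home pulls its full rated capacity at once.
worst_case_kva=$(( homes * amps * volts / 1000 ))

echo "worst case: ${worst_case_kva} kVA, shared equipment: ${shared_kva} kVA"
echo "oversubscription: $(( worst_case_kva / shared_kva ))x"
```

The utility gets away with this because homes almost never peak simultaneously; the point is that any scheme counting on the "unused 60 percent" is spending headroom the grid was never built to deliver everywhere at once.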
Per the article, the homeowner has to pay to have this unit at their house, but the monthly fee also covers ISP service.
So you don't get anything for free, but they claim that, in theory, your monthly bill would be lower than ISP + utility.
More like the opposite.
They would want to divert resources away from your usage and into a locked enclosure that you would not have access to.
It would not upgrade your home infrastructure one bit. You even still have to pay them for the privilege, though they argue it will be less than you would have otherwise paid to an ISP and utility company.
I agree with you, but the big tech companies *seem* to be winning their arguments, even when the plaintiff shows output that includes the plaintiff's own watermark on something that looks like the plaintiff's assets.
So it's at least more pragmatic to show that the acquisition and likely redistribution of the works while torrenting were a problem without even bringing up the whole AI ingest argument.
The problem is whether the AI "learning" is analogous to "learning" in the human sense.
For example, there was an existing library implementation of some function, but it didn't work the way I wanted and it was too deep to reasonably modify and the function wasn't *that* involved, so I decided to roll a new one.
Then I typed a function name and the LLM offered to tab complete the whole function body. The body of this non-trivial function matched, verbatim, the very library that didn't work for my purposes, down to name choices. I compared out of curiosity but discarded it, since the whole point was that I needed it to work differently than the one reference implementation I could find.
A human might have vaguely recalled how it worked and you could maybe see the structural resemblance in what they produced, but for a function this significant no human "learning" would have resulted in such a verbatim copy.
I said that because it was consistent with the general classwork that was expected of them in their history and English classes. They were expected to do write-ups with that sort of nuance, so carving out an exception for some 'yay AI' fluff in the middle seemed just wholly stupid.
In the 90s, the school systems were kind of left to fend for themselves. The vast majority of the computers in my schools were systems that area companies were scrapping but donated on the way out. A decent part of my programming class was spent trying to salvage 20 out of 24 non-booting systems that a business donated. They spent what budget they could on a handful of computers capable of running Encarta for the library.
In the 2000s, things started shifting a bit; in a college course we were handed 'donated' copies of Visual Studio, but the teacher said those were for us and weren't going to be used for class at all.
Since 2010, things have gotten a bit worrisome as a lot of the big tech have started getting awfully opinionated and wanting to 'help' kids learn to code. Education is all well and good, but when the big corporate interests get actively involved and prescriptive, things drift toward indoctrination more than education.
At least with the 'learn to code' push, a skill that needed significant development was theoretically being served, though there was a lot to worry about there; with the LLM scenario, it's pretty much just indoctrination. The extent to which an LLM works or does not work is not something that takes a significant amount of time to sort out.
As an example, my kid was asked to write a brief piece on what excitingly awesome thing they are looking forward to using AI for, as part of an "AI challenge" at school sponsored by a local tech company. Not a critical assessment, no evaluating the nuance of benefits and drawbacks, nothing on helping them understand how to best use it; just blatantly writing a puff piece about how awesome AI is or would be for something. Basically soliciting marketing fodder and awarding three kids a couple hundred bucks. It was going to be graded, so they had to do it and take it seriously.
You could bypass SELinux with this.
When your attack modifies the cached copy of an executable, you do it for *everyone*. So even if the process you control cannot make the system calls you want and maybe you can't exec the victim binary yourself, you *can* make it so the next time someone runs 'ls', it opens a backdoor for you in whatever context that command runs.
This is an impossibly bad breakage of the security model that cuts through pretty much every isolation mechanism in a system.
While immutable distros are broadly better structured to tolerate a weakness here or there, in this specific case it does not afford protection.
You should never assume immutable distro means you don't have a problem when a vulnerability comes out, particularly in the kernel which is the very thing implementing the security primitives in the first place.
While it is immutable by "normal" means, this exploit throws out the enforcement mechanism entirely.
I will share in the lamentation of languages barely having core libraries. Seeing a JavaScript browser frontend pull in hundreds of dependencies at runtime makes me shudder...
That argument only goes so far.
One, they might not have enabled the cited module.
For another, the demonstrator may not work because su isn't there or is at a different path, but the broader weakness may still be present: the ability to rewrite the cached copy of any file while bypassing the permissions, and to then use its privileges.
So you might just need a tuned exploit that targets something else instead of su.
While the popular demonstrator may fail because, for example, maybe su isn't there or is in a different location or whatever, the risk is still there.
This exploit allows any process to rewrite the read cache of *any file*. The demonstrator picks on su, replacing it with a short binary that just immediately calls 'sh', but the fact is that the read cache of anything can be rewritten to do whatever the attacker would want.
Do not think your affinity for immutable distributions makes you immune to these issues. Never mistake a demonstrator failing to work as-is for a sign that you are completely protected from the demonstrated flaw.
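As an aside, and not a fix for the underlying bug: if the tampered pages were never marked dirty, forcing the kernel to drop clean page cache makes subsequent reads come from disk again (assuming the on-disk file is intact and the attacker can't simply re-poison the cache). A sketch:

```shell
# Drop the page cache so future reads go back to disk (requires root).
# Deliberately no sync first: writing back attacker-dirtied pages
# would persist the tampering to disk.
if [ "$(id -u)" -eq 0 ]; then
  echo 1 > /proc/sys/vm/drop_caches   # 1 = drop page cache only
  status="dropped page cache"
else
  status="not root: would write 1 to /proc/sys/vm/drop_caches"
fi
echo "$status"
```

Even then, you'd want to re-verify the on-disk binaries against your package manager's checksums before trusting them again.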
It is configurable, e.g.:
initcall_blacklist=algif_aead_init
Now if you meant without a reboot, well, in that case the problem is way too open-ended. *Anything* could be a security mistake, so you'd need every function in the kernel to somehow be capable of being disabled...
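For things built as modules (as opposed to built-in initcalls), you can at least check and block them at runtime without rebooting; a sketch, assuming a Linux system with /proc mounted:

```shell
# Check whether the module from the boot-parameter example is loaded.
if grep -q '^algif_aead' /proc/modules 2>/dev/null; then
  state="loaded"
else
  state="not loaded (or compiled built-in, where only the boot parameter helps)"
fi
echo "algif_aead: $state"

# To keep it from auto-loading, as root you could add:
#   echo 'install algif_aead /bin/false' > /etc/modprobe.d/no-algif-aead.conf
# and unload an already loaded copy with: modprobe -r algif_aead
```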