
Comment Re:Two words (Score 1) 355

First of all, who cares how much money he made and how much I make.
I don't even know who that guy is.

Secondly: I'm not arguing about him or his ideas but about my parent who wants Germans and Swiss and other water rich countries to switch back to nuclear power, to desalinate water when they have excess power.

Hint: Switzerland does not even have access to salt water!

OK, then power a free public transportation system with the excess electricity. The point is, you can always find uses for extra electric power; I know what the vast majority of the world would use it for, even if you wouldn't use it for the same thing.

Comment Re:Levels of Security (Score 1) 119

I'm not going to write an entire paper here on Slashdot.

You already kind of did lol. This is good stuff though. I have some follow-up questions if you don't mind:

1) How are you aware of (and able to control) lower-level things like the page size, or which functions go into which groups of pages?

In a general, hand-wavy fashion, things like page size are an attribute of the compilation environment, and do not vary.

In practice, there are some older MIPS systems and the original NeXTStep which would "gang" 4K pages into 8K pages, and of course there's the Intel variety of superpages, depending on operating mode and contents of CR4, and the PSE bit being set, with or without the PAE bit being set, to give you either 4M or 2M pages. There are also some other architectures that allow even weirder variants.

As a general rule, most of these things other than the default are accessed via two interfaces: Either a section attribute in an executable section descriptor -- meaning it's handled by the kernel -- or via a special user mode interface for allocating large pages. The user mode interface may or may not be hidden in the malloc internals, in order to prevent direct use by a program. In addition, there are potentially device specific controls (in UNIX systems, these would be ioctl's) to map large pages into a user space process; as an example, the frame buffer memory in a Wayland or X Server, and so on.

Practically speaking, one of the most useful things you can do with large pages in a Linux, BSD, or UNIX running on an Intel system is to put the kernel itself into large pages; location won't matter, without a kernel code injection exploit. It's useful because Intel processors maintain separate TLBs (Translation Look-aside Buffers) for large and small pages, and this means that user space processes, and kernel interrupts, traps, or traps from user space to kernel space (e.g. system calls) won't be ejecting each other's pages from the look-aside. Depending on how frequently you end up running in the kernel vs. user space, this can result in an up to about 36% performance improvement.

One of the problems with this is that there's a known bug in Intel processors where INVLPG won't invalidate the page mappings in both, so there was an early bug that tended to hit Linux systems -- but not FreeBSD systems -- where the INVLPG instruction kicked a page out of one TLB but not both TLBs, if it was mapped in both. This was mostly an issue when you tried to convert from running in real mode to running on the PMMU, and then from there, from 4K pages to large pages. The workaround is to INVLPG twice, or to reload CR3, which flushes all the TLBs (making it the "big hammer").

Anyway, that was a digression. In the scenario I discussed using statistical protection, you'd use the compiler and linker to make sections per function or function group; the linker would put linkage vtables in each of these groups when creating the executable; and then the exec function in the kernel would interpret these as allocation units, put the sections in as few contiguous pages as possible, and randomly locate them some place in the process address space. On an Intel/PPC architecture, that would locate them in a 64 bit virtual space, out of a 52 or 53 bit physically addressable space.

When the loader resolved the linkages for shared libraries through the fault/fixup mechanism, it'd do it by library:section, rather than by library alone, using the per-section vtables.

2) Why is it called "container-in-a-mailbox?"

Fair question.

Historically speaking, there are several ways to pass things around between components. One of these is via register reference to the address of the thing. Another is via stack reference to the address of the thing. Another is via descriptor (in VMS, this is the function descriptor; in Mach, this is a Mach Message that is defined in an IDL compiled with a tool called "mig"; in ORBs like omniORB or CORBA, or even COM/DCOM in Windows, it's via an IDL stub that's either compiled via a compiler like mig, or is generated automatically during the link stage, etc.). And then there is message passing, as is done on the data bus of things like the Northern Telecom DV1, DV2, DV3, etc., which are used in the implementation of phone switches.

When you pass messages, it's generally impractical to pass the entire message by value; and while the message passing could be a partial address space mapping hand-off, as in Mach, another method is to use what's called a "mailbox".

In its simplest implementation, a mailbox is a file in which one program copies a large chunk of data, and then informs some other program that there's data waiting in the mailbox.

In one of the early implementations of a TPS (transaction processing system) implemented by AT&T Bell Labs (named "Tuxedo", if you care), mailboxes were placed in System V shared memory segments, and the notification between programs occurred via System V message queues. In something like sendmail or qmail, they're implemented as separate message and index files in what's called a "mail queue directory".

In practice, if you are using something like statistical memory protection on a system with a very large address space, and a large amount of physical RAM (or less, if you are willing to page, but you will almost certainly page, due to front-to-back linear processing of mailbox contents), you could just place the messages in anonymous memory mappings, and then "forget" them once their address has been handed off to the next component interface.

So on most modern systems, mailboxes are ideal.

The reason it's container-in-mailbox is that there is a separate logical phase for container receipt vs. validation vs. use of container contents. So you would have to put the container in the mailbox so the validation phase could safely operate on it without trusting that the component handing it the data has not been compromised (and likewise with the handoff between validation and utilization).

It's a security domain separation, rather than a protection domain separation or an address space domain separation (although address space, in the case of statistical protections -- or hypothetical future hardware or use of ring 1/2, if you can live with a granularity of "3" -- generally amounts to one of those as an implementation detail).

3) you wrote, "Most modern (predominantly research) security architectures" who is doing this research, and where can I find it?

Wow. Pretty much everyone in OS software who cares?

IBM and Microsoft are players, OpenBSD is, for some types of things. Apple is; Linux people (though I think it was a DARPA project run by IBM?) were the first to implement ASLR; I think Apple was the first to ASLR absolutely everything? And to do page level executable signature verification in the paging path? Though I think they mostly did it for DRM reasons, rather than to be helpful to users. I think compiler stack probes came from the LLVM folks?

The hardware guys have pretty much been warts on the tail of progress; they're not very fast to implement anything for which there isn't already a proven market, because of development costs. TrustZone on ARM is the one thing I can think of that went big-time, and that was mostly to allow application software and baseband software to run on the same processor in a cell phone, so that the SDRs (Software Defined Radios) could get certification by agencies like the FCC in the U.S., and whatever passes for the FCC in various other countries.

As part of this, you define an interface contract: you are permitted to call down to the interfaces below yourself, and you are permitted to call across, within the same layer to auxiliary functions, but under no circumstances are you permitted to call upward.

That would ruin (or improve) a lot of modern OO techniques.

Well yes, and no.

From a security perspective, if you were using an object interface vtable on a linkage for a C++ object, it would certainly prevent you doing things like hiding the function pointers from other code that's allowed to call into the object. So the mechanics of OO language design are inherently inimical to security, from that stand point. On the other hand, you could be handing off the call as a descriptor with an object to other code that knows how to do the dispatch, and performs information hiding to keep you from adding 4 or 8 or whatever to the known address of the function to get to the next function, which might be a friend function or a private function located sequentially in the object.

This wouldn't entirely preclude layering violations, but it would certainly make them more difficult. That would improve security, but whether it improved the techniques? It depends on whether your techniques were already predicated on interface violations, or whether you were accessing data members directly, rather than through an accessor/mutator function, and whether or not you were using a global static object instance with a reference or entry counter for critical sectioning, and so on.

Sidebar: as a general rule: critical sections should almost never be used, except when dealing directly with hardware interrupts, and not even then, if your hardware is correctly designed to block/queue interrupts until a previous interrupt is explicitly acknowledged. Protect access to data objects, not to the code that accesses or modifies data objects. Code should be intrinsically reentrant. There are some really cheap mechanisms you can use here to avoid pipeline stalls, like atomic increment when adding a reference, and only taking a lock on code when decrementing for the 1->0 check, and so on. You need a data pipeline barrier in that case, but you don't incur both a data and a code pipeline stall, etc. (if you want to see an example of how this is done, look at the kern_cred.c and kern_credential.c code in Mac OS X -- it's on

In general, I think anything that results in code being reentrant just makes sense -- particularly OO code, where the object itself carries the state, and there may be multiple threads operating on separate objects through the same code simultaneously.

Other examples are turnstiles in Solaris, and prohibiting lock recursion as a "kernel panic/segfault offense" to prevent layer revisiting from even being legal at all, etc..

The reason I like DJB's work is because he seems to carefully think about what problems may arise every time he writes a line of code. He may not always succeed, but if you don't have that way of thinking, you will automatically fail at "identifying architectural layers for your libraries in order to abstract complexity of each layer from the layer below it," and will have bugs no matter what rules you follow.

The problem I really have with his work is that it's largely academically oriented, rather than practical. It's Like Peter Druschel's work at Rice University on LRP (Lazy Receiver Processing), which is quite brilliant, but impossible to reduce to practice.

At a previous employer, we actually reduced LRP (without the "rescon" additions, which are patented and IMO not useful) to practice. Getting this to actually work usefully as a solution to receiver livelock involved going well beyond the work he and his students had done, and required things like modification of the firmware in Tigon II and Tigon III cards to not interrupt until their last interrupt was acknowledged; and for incoming connections, you had to modify the way the routing and socket handling in the accept(2) system call operated, or you'd get a kernel panic when there wasn't actually an mbuf hanging there ready to receive the incoming connect request, and so on.

It was a lot of "resolving implementation details which are inconvenient for me is left as an exercise for the student".

DJB's work has a lot of that flavor to it.

In particular, if you look at DJBDNS, it has no support for secondaries, it has no support for interior vs. exterior DNS resolution (I wrote the RFC draft for that in the IETF DNS Working Group, which is a mailing list named "namedroppers"), and it has no support for zone transfers.

These were all considered "insecure", and he expected that all DNS servers would be authoritative primaries, and that "zone transfers" would use an out of band communication mechanism (I believe at the time, he was suggesting "rsync" on the zone data files for this?). This was my first experience with his philosophy of partitioning function by program, and functional decomposition as a solution to complexity.

I didn't see this change with qmail -- although he admittedly did cover a larger proportion of the problem space, he still failed to map all of it -- and the places where he "compromised his principles by doing so" demonstrated later weaknesses in the philosophy, like the exploit we've been using as an example in this discussion.

I really don't like "proof is left to the student" type stuff, any more than I liked discovering that Feynman was in fact using Clifford Algebras to do quantum physics in his lectures, and didn't bother to share this fact with the rest of us; or finding out that Newton had invented calculus and sat on it, which let him pop Sir Edmund Halley's bubble at his big announcement by answering Halley's rhetorical question "and do you know what shape the orbit describes?" with "it's an ellipse, of course".

Anything where parts of the problem space that are supposedly being mapped by someone's solution aren't reduced to practice tend to be very annoying. But perhaps that's just me... 8^)

Comment Re:Levels of Security (Score 1) 119

btw, I'm pretty sure you have an interesting point here when you said this:

Functional decomposition is a really poor way of abstracting complexity, when it's being used in isolation, and does not include mandatory boundary layer order and direction of operations over said boundary.

but I'm not entirely sure what you meant. Could you clarify? What other option is there besides functional decomposition?

DJB's philosophy is to minimize individual attack surfaces by reducing code complexity. This has three components, of which DJB himself is a proponent of two. I'm not sure whether this is because he doesn't realize the third is a consequence of his implementation paradigm, or whether he simply thinks it's too obvious to talk about. These are the components:

(1) Reduce complexity by separating the problem domains into individual processes. This separates necessary privilege escalations from other code, and separates cross-functionality address space based attacks on the code.

(2) Reduce complexity into functional time domains, involving serialization of operations which could (potentially) otherwise take place in parallel. This is also done through use of individual processes, but is based on the trigger initiating the processes being separate, and therefore not under the control of an attacker. This increases the difficulty of an attack by requiring serial attacks for each component between the intermediate targets and the final target of an exploit (as in the previously referenced "shellshock" attack). For a shellshock attack, this particular precaution was meaningless, since the data was passed straight through, without prevalidation, before being handed to another component. In other words: the particular attack zips through this security precaution.

(3) This may or may not have been intentional, but he reduces the network and system call footprint for each of the components in such a way that it reduces the remotely accessible attack surface (you can only attack things you can talk to) to something which can be firewalled, and the system call footprint of individual components into something that could have local application sandboxing applied to prevent particular system calls being used by individual program components, or even sequences of system calls being used outside a particular order, or in excess of a particular number of times. This was probably not a design goal, given that neither deep packet inspection/stateful firewalls, nor sandboxing, were utilized in most systems at the time qmail was originally written.

That's cool and all, but it's taking a hammer to a problem which is actually a result of programmer discipline and machine architecture, and, frankly, some of those architecture issues have been addressed at the operating systems and compiler level for years, and others are better addressed through other mechanisms. It also failed miserably in intentional strategy #2, above.

The first mechanism is boundary layer violations. The most infamous email program in existence is Microsoft Outlook, and it's for good reason. Outlook engaged in interface layering violations. These are responsible for nearly all the initially exploited Outlook vulnerabilities.

What avoiding boundary layer violations means is that, if you are designing correctly, you identify architectural layers for your libraries in order to abstract the complexity of each layer from the layer below it. As part of this, you define an interface contract: you are permitted to call down to the interfaces below yourself, and you are permitted to call across, within the same layer, to auxiliary functions, but under no circumstances are you permitted to call upward. A good example of a boundary layer violation in libc is the use of a function pointer for the compare function in the qsort library routine, which results in an upcall from the libc layer to upper level code. In general, this should be avoided -- and if you have multiple protection domains, such as ring 1 and ring 2, which are generally unused by most operating systems, it should be prohibited in hardware. A "poor man's" version of hardware prohibition is achievable through a rather more radical utilization of large address spaces than is used in ASLR: statistical page protection. If functions in a library are not laid out in adjacent pages in the process address space when the library is loaded, an attacker can't use a computed location based on a known call site to find an attack vector: if you can't find the page, you can't attack it.

Another boundary layering violation which Outlook has failed on -- and which qmail periodically fails upon, and which constitutes a number of usable qmail exploits -- is container boundary verification.

In the second vector for Outlook exploits, after the layering violations are dismissed, we have container boundary verification. While qmail is not subject specifically to MIME based container boundary verification issues, it has its own problems with containers. In Outlook, these took the form of intentionally malformed data content being passed as part of a message. The easiest of these is the fact that, in order to render a message more quickly, and specifically to support the rendering of HTML messages (which Microsoft still thinks are "Nifty!(tm)"), Outlook started decoding the container contents before verifying the validity of the container. Specifically, it would start rendering GIFs before they were verified to be valid GIFs, and it would start rendering other content before that content was verified to be valid. This is where we get the "malformed attachment" exploits in Outlook.

The correct thing to do is to download the content, verify each container matches its purported size, and verify that the containers inside the containers -- images, audio, video, etc. -- are themselves valid, before handing them off to the rendering component. Outlook failed to do this, and treated the header as a dispatch item, handing off the data stream to the rendering component, which allowed a header on a container to cause much more of the byte stream than the container boundary to execute payload in subsequent containers. Qmail fails in a similar way, by handing unverified container content off to a renderer... and this is precisely how the second component failed them in the shellshock scenario.

Most modern (predominantly research) security architectures have moved to a container-in-a-mailbox mechanism. You put the contents of the container into a mailbox, and then you run a verifier -- separate from the renderer -- on the mailbox contents, thus preventing an assignment of meaning, and therefore a communication of intelligence (attack data) to a target; only after that, do you hand the mailbox off to the content renderer.

Note that this application of containers in mailboxes has a couple of significant advantages: (A) it's really amenable to things like statistical memory protection, since if you run off the end of a page, you fault, instead of getting meaningful payload data, and (B) for hardening purposes, you can put the container contents end at the end of the page boundary, and index the start of the mailbox into the first page, rather than at the start of the page (you can do this, because you are aware of the content length as a result of looking at the initial container and having vetted it before mailboxing the data). This means that a scan forward into the container past the boundary results in an immediate fault, even though your hardware perhaps only supports 4K page boundaries, rather than byte-level mapping. And finally, (C), you can map the mailbox contents as read-only, non-executable, non-writeable, before you hand them off even to the verifier, thus preventing self-referential execution as part of an attack.

To deal with the issue of attack surface at interface layers, which is handled by decomposition into processes, you can instead rely on the link-loader. In most modern operating systems, the link-load phase is handled in the kernel exec/fork/spawn functions, which also manage ASLR. An alternative is to make the addresses of code in inferior layers known only at the call site (and one should never access data in an inferior layer directly, only through an accessor/mutator function). Further, you decompose the functions' locations into groups of pages which are non-contiguous. Thus the fprintf() function in libc might (should) be in a physically discontiguous location from the gethostent() or other libc functions. Thus address space decomposition is a better approach than functional decomposition based on role and program boundaries: it's much more granular.

There are at least five additional techniques that you might be expected to use, each with diminishing security utility, which you could utilize to do a better job than qmail does, but you get the basic idea, and I'm not going to write an entire paper here on Slashdot.

Comment Re:I really wonder how other employers/employees.. (Score 1) 124

In the cases I have seen "contractors" have all been W-2s I should move to your part of the country, I hate being a W-2

The easiest way to accomplish this is to start your own contracting agency, and then employ yourself, and any friends who are in the same boat, as a 1099 worker. The bonus is that this will let you deduct most of your taxes as either "operating expense" or "capital outlay" on the part of the agency, you can run an expense account for most of the day to day expenses, including a car if you want, you can incorporate retirement fund operating company for the contracting agency to reallocate income into for the principals in the contracting agency, and you still get your 1099 job on top of it.

BTW: This is how most massage studios, day spas, nail salons, hair salons, and so on operate. Everyone who does the actual work is a 1099, with the exception of the owner, and maybe a hourly receptionist, if the business is big enough to merit one for bookings.

Comment Re:Two words (Score 1) 355

I'm normally not this rude, but I'm feeling a little put off by you, so I will take my gloves off this time to set you straight.

A few facts for you idiot.

Sure, fucktard. I'm listening.

1) Californias water problems are house made and not solveable by desalination plants, I doubt they would ever be economical in relation to just start with 'saving water'.

Adding together all the water savings every year since the conservation programs began over 20 years ago, you get slightly less than the 5 *billion* gallons a day which are used in the Sacramento Valley *alone* for growing rice for export, to cover evaporative losses from the paddies.

Or, you know: you assholes could grow your own food, since almost all that rice is grown for export.

Or you could build some reservoirs, but well, that would involve the government, needs tax money, god forbid the government actually doing something for the people.

Reservoirs interfere with the mating cycles of fish, and in particular, Pacific Salmon, but also with a number of endangered species.

While I think it would be great for the people in Los Angeles to get off their collective Hollywood asses, and build some cisterns, instead of directing all their rainwater runoff into the ocean, that would only make a small dent in the problem, since the primary problem is that California grows about 1/5th the food eaten in, and *exported from*, the U.S., and uses a lot of agricultural water to do it.

By the way: it's the same people who care so much about the fish that they are actually tearing down reservoirs and dams to save their habitat, who are violently anti-nuclear power.

2) Germany is a net exporter of energy, allways was and likely allways will be. That includes for most of the time France, there are only a few months in a row in 2013 or 2014 where we where a net importer versus France. Germany is exporting 30% - 50% of its energy production to the EU, you idiot.

See, that used to be true when you were running nuclear plants, but according to this Bloomberg article, that stopped right after you idiots shut things down after the Fukushima disaster because, you know, all your plants are in coastal areas subject to tsunamis, and you stupidly did what TEPCO did, and failed to upgrade sea walls and safety systems.

Oh wait. Your plants aren't actually in any danger from this.

Why did you idiots shut them down again? It's hard to believe that a country that birthed nuclear physicists of the like of Einstein and Heisenberg would be quaking in their boots over a problem in Japan caused by greedy middle management.

3) look on a damn map. How retarded can one be and claim that Parkistan is using 'thermal waste to desalinate water' ... and why should they? Again, look on a damn map where Parkistan actually lies.

"Pakistan has a 1,046-kilometre (650 mi) coastline along the Arabian Sea and the Gulf of Oman in the south"

I thought Germans were supposed to be good engineers. You are also aware that desalination is a generic term for water purification from various impurities, and can be applied not only to sea water, but also to well water, and waste water from other sources, right? Not that Karachi isn't on the freaking Arabian Sea anyway, as opposed to being land-locked, like you are trying to imply.

P.S.: Yes, that desalination plant was subsequently built at the Karachi nuclear facility.

4) The efficiency of pumped storage and lithium ion batteries is more or less the same, no idea why you disagree about stuff you simply can read up on wikipedia (pumped storage a bit above 89% and lithium ion batteries a bit above 90%, both depending on all the components involved, oops, you assumed lithium ion would be less efficien? Why? On what physical fact could that be based? )

Energy density of the storage, and the preexisting hydroelectric facilities having already had the land area committed to the water storage. It's called a parasitic use of an existing sunk cost. Like when Pakistan's Karachi nuclear facility takes its waste heat and desalinates water with it, instead of just directly using the atmosphere as a heat sink.

However there are countries/places where nuclear plants are used for dessalination, not really because of the lack of fresh water, but more because of savings if you build one combined plant instead of a water plant treating fresh water and a power plant. Parkistan is not sucha country ... with nuckear power below 5% of the contries power consumption and one of the countries with absolutely no fresh water problem ... that would be more than nonsense.

Here's a picture of the Karachi nuclear desalination plant for you. You can understand pictures, right?

I think that about covers disassembling your posting.

Comment Re:Levels of Security (Score 1) 119

I'm quite tired of the hi-tech this-security-is-hackable discussion. Of course it's hackable. Everything is.

If you think so and can prove it, then you can earn $1000 and eternal fame by hacking DJB's qmail. Over 15 years and still hasn't been hacked.

Actually, it has been hacked, and it's relatively easy to do.

Functional decomposition is a really poor way of abstracting complexity, when it's being used in isolation, and does not include mandatory boundary layer order and direction of operations over said boundary.

I really don't need to spend $1,000 worth of my time to argue with DJB, when he'll happily argue with anyone for free.

Comment Re:I really wonder how other employers/employees.. (Score 1) 124

The contractors they use are corporations which provide workers who are W-2 employees of those corporations. A true contractor is an independent 1099 worker who set rates, covers their own healthcare, retirement, etc. Don't confuse the two.

I don't. The contracts we dealt with at IBM and Apple, and the contractors I've personally dealt with in the context of Microsoft and HP, were all 1099 workers.

While I've dealt with contracting corporations in the service industry as well, most of the people who fulfilled the contracts were doing piecework as 1099 contractors, and not full time employees of the contracting corporation. In this context, I'm referring to "temp reps" (for sales), and traditional temp agencies for seasonal work, or to bolster e.g. accounting or HR departments during "flash mob" situations (accountants brought on as 1099 contractors for audits are a good example of this).

Most things like forensic accounting or private investigation for law firms are run on billable hours. Most law firms which do not operate on a retainer or contingency fee basis, are also contractors. Generally, in Silicon Valley, you'll see a lot of outside law firms brought in to prosecute patents (for example) after vetting by in house counsel to ensure that the boiler plate on the application, and the claims, are more or less correct.

When I was tech lead for the UNIX Conformance project at Apple, we had four contractors, all 1099 workers: one for man pages, one for some of the user space work, one to run the tests, and one to do the compiler conformance work on gcc. We ended up hiring two of them full time, later on, which is something which we couldn't have done, if they were employees of a contracting agency.

In fact, I have to say I've personally only interacted with an agency at one point in time, and that was at IBM. The agency was contracting a worker to IBM that was in the U.S. on an H1-B visa, and the contractor whose services were being provided to us had to have a placeholder to act as the sponsor for the visa as a means of (eventually) getting a green card. Generally, I've only seen contracting agencies use either 1099 workers themselves, or they employ H1-B workers who have to have a business sponsor them, without actually having a job at one business long enough to deal with the Green Card process (although you can get a Green Card in about six months, if you do the things, like medical, in parallel with all the other steps that can be done at the same time).

Comment Re:Two words (Score 1) 355

No idea where you live that you obviously need desalination plants.

California. Not an island nation. Irrigation for food takes a lot of fresh water. So does industrial processing for a lot of things. So do people.

A lot of countries run desalination. Pakistan uses thermal waste from their nuclear plants to run several desalination plants.

Since Germany was one of the countries mentioned, you should note that they are a net power importer, primarily from French nuclear plants, due to having shut down their own nuclear plants.

I don't think it matters if you just waste the electricity -- although if you have a hydroelectric infrastructure, you can use hydroelectric dams as storage batteries by using the otherwise unused nuclear generated electricity at night to pump water from low storage dams, back into the higher level storage dams that were used during the day to handle peak load.

Use of hydroelectric dams as storage for electricity this way has a significantly better KWh efficiency than, say, Lithium batteries, and balances out the demand load very nicely.
