Lithium is a metal.
Oops. Right. Sorry.
"lithium is in the upper left-hand corner of the periodic table. Only hydrogen and helium are lighter on an atomic basis."
I'm wondering if this is a non sequitur for electric batteries.
Not a non sequitur at all.
An important factor for batteries is energy density: how much energy is stored per unit mass. This is particularly important for electric cars: the higher the energy density, the less mass you have to haul around for a given amount of "fuel", which means the less "fuel" is spent hauling your "fuel" around, so it's a more-than-linear improvement.
Lithium is both extremely light and a very reactive nonmetal. So you're talking about a lot of energy per unit mass for the lithium-based electrode's contribution to the reaction.
Seems to me that any ISP that redirects browser HTTP requests becomes liable to suit from the customers - for substantially more than $20.
Microsoft now likes to act like they are an open source company that believes in open standards.
But they DO. It's step one - embrace:
Well, they cannot become martyrs by just dropping dead. At least they have to kill some unbelievers as well...
Actually, they CAN become martyrs by dropping dead - after deliberately NOT leaving the area of a plague, thus preventing its spread at the cost of their own lives.
Martyrdom doesn't just come from being killed in a religious war.
Another way to become a martyr, for instance, is to die in childbirth.
Yet another is to die while defending your home and/or family from robbers or other attackers (as my wife pointed out to a crook who was trying to extort "taxes" for a local gang).
According to a report I saw (following a link from the Drudge Report yesterday):
1) The early symptoms of Ebola are very similar to those of malaria, to the point that people with malaria are being thrown into the Ebola quarantine camps. (Also: many of the people who HAVE Ebola, or their support network, may THINK they have malaria.)
2) The camp ran out of gloves and other protective gear - leaving the staff and patients unable to clean up after, and avoid contagion from, the body-fluid spillages of the actual Ebola patients. Come in with SUSPECTED Ebola and you soon have Ebola for sure.
That, alone, would make it rational for someone not yet sick or mildly sick, incarcerated in the camp, to break out and hide out.
3) Stories are circulating in the area that ebola is a myth and the oppressive government factions/first worlders/take your pick of enemies are using this story, plus the odd malaria case here and there, to create death camps and commit genocide in a way that gives them plausible deniability.
That idea, of course, can lead to mass action by some of the local population to "rescue" their fellows and sabotage the camps.
The whole thing is a real-world example of the cautionary tale "The Boy Who Cried 'Wolf'". When the officials lie to the people for their own benefit, repeatedly, until the people come to expect it, the people won't believe them when they are telling the truth about a real threat - and all suffer.
Ebola is one mutation away from being airborne transmissible. It already happened with Ebola Reston -- fortunately for us all, that turned out to be transmissible to monkeys but not humans.
I've heard reports that it may have happened with this one, too.
It doesn't have to be as GOOD at doing airborne transmission as, say, the common cold, to be a BIG problem.
Like file downloads vs. interactive sessions, some power loads just need a long-term average and can be adjusted in time, without noticeable impact, to shave peaks and get a closer match to generation - even if some of the generation, itself, is uncontrollably varying.
In fact, this is already being done. A prime example is in California, where a large part of the load is pumping of irrigation and drinking water. California utilities get away with far less "peaking generation" than they'd otherwise need by pumping the water mostly at off-peak hours. Cost: Bigger pumps, waterways (and in some cases "forebay" buffer reservoirs, below the main reservoir) than would be needed if the water were pumped continuously. This is practical because it was cheaper to upsize the water system than build and run the extra peaking plants. (Also: The forebay-to-reservoir pump generates when water is drawn down. It can also be run as a peaking generator, moving reservoir water down to the forebay during peak load hours.) Similar things can be (and are being) done with industrial processes - such as aluminum smelters.
But there's a limit to load flexibility. Sure you can delay starting your refrigerator, freezer, and air conditioning for a few minutes (or start a little early, opportunistically), to twiddle the load. But you can't use such tweaks to adjust for an hours-long mismatch, such as the evening peak, or an incoming warm front leading to calm air and overcast skies on a chunk-of-the-continent basis. Try it, and your food spoils and your air conditioner (or heat-pump heating system) might as well be broken, or too small for your living area. Sure you can tweak factory load some. But do it too much and you reduce the production of billion-dollar factory complexes while the workers are still getting paid full rate.
Renewable energy actually helps - because its large-scale variations are driven by some of the same phenomena that affect heating and air conditioning loads. More wind means more heating and air conditioning load due to more heat transfer through building insulation. More sun means more air conditioning. Solar peaks in the day and wind in the evening (due to winds driven by the "lake effect" on a subcontinental scale), so a mix of them is a good match for the daily peak. But it's nowhere near "tweak to match generation and load without waste".
I heard a story about another "killer building" near Chicago. (Haven't checked the claims for truth - just repeating it as I heard it.)
Seems there was this nice commercial building next to O'Hare Airport. Curved walls, lots of lawn, nice walkway up to the door in the middle. Great view through the space over the airport runways.
There was this one spot on the walkway where more than one person was found unconscious or dead of apparent heart failure. There were enough that somebody looked into the coincidences.
Turns out the building's curve was parabolic and it faced a runway. If you happened to be at the focus when a jet taking off crossed the axis, the building concentrated the sound of the engines on you...
Bjarne: There really is a question at the end of this, but it takes some setting up. Please bear with me.
In the late '80s I came to Silicon Valley for a startup, which was using C++ for a major project. I learned the language then and got over the "doing it the object-oriented way" hump. In the process I analyzed what cfront was producing and got a good understanding of what was under the hood (at the time).
The project was very ambitious. Much of it was creating a database engine for a complicated indexing mechanism. The result was that transactions occurred by creating a transient structure. The bulk of the work occurred during its construction, and errors during this stage had to be unwound carefully.
In those days C++ didn't have exception handling - "catch" and "throw" were reserved words, to be defined later. (So I built a set of exception handling classes that unwound even errors thrown from deep in construction and destruction correctly.)
Some of the architectural types had come to OOP via Xerox PARC and Smalltalk, and didn't want to be "slowed down" by getting "manual" memory management right. So we built a set of classes (including "smart pointers") and a preprocessor (to automatically generate pointer-identifying virtual functions) and got garbage collection working just fine. (We did a similar thing for remote procedure calls, and so on. We may still hold the record for layers of preprocessing between source and object...)
C++ WOULD have been the ideal language for it. But we found a little hole in the spec that caused BIG problems. The language got it SO ALMOST right that it was painful.
Consider construction of a subclass with virtual functions. Suppose the base class exports a pointer to itself (say, putting it into an external list). Then suppose that, at some time during the execution of the constructor of a member object of the derived class (or other initialization of the derived class), something gets hold of that pointer and calls a virtual function of the base class that is overridden by the derived class. Does it get the base class or derived class version of the function?
IMHO it should get the BASE class version during the initialization of the derived class UP TO the execution of the first user-written statement of the constructor, and the derived class version once the constructor's guts are executing. Getting the derived version before the constructor proper is entered means attempting to use functionality that has not been initialized. (Before the constructor is entered you're still initializing the component parts. During the constructor you initialize the assembly, and the author can manage the issue.) Similarly, during destruction it should get the derived version through the last user-written line of the destructor, and the base class version after that (as first the object-type members, then the base class(es), are destroyed).
Examples of how this would work in real problems:
- Object represents an object in a displayed image. The base class is a generic displayable object, which hooks the object into a display list. It has a do-nothing "DrawMe()" virtual function. The derived class adds the behavior. When the display finishes a frame the list is run, calling the "DrawMe()" methods of all the objects. If one is still under construction and the derived class override is called, uninitialized memory is accessed (including pointers containing stack junk).
- Object is heap-allocated. Virtual functions are the "mark()" or "sweep()" pointer enumerators for the garbage collector. Base class is the generic "I'm a heap allocated object" substrate, hooking into an "allocated objects" list with do-nothing virtual functions. At each level of object derivation the new version of the function enumerates, for the mark and sweep phases, the member variables that are pointers (and calls the base class version to also enumerate the pointers at the more baseward layers). The pointers' own initialization clears them to NULL before this can happen. If a derived class constructor exhausts memory and triggers a garbage collection before all the pointers are set to NULL, getting the derived-class version of an enumeration routine means following stack-junk pointers. Crash!
- Exception-handling during constructors involves a similar mechanism to identify what levels of DEstructor need to be called to unwind the construction. Again a virtual function (this time the actual destructor) identifies the level of construction that needs unwinding. Again, getting the derived class overriding before the constructor is entered means calling the destructor for an uninitialized level. Oops!
There are, no doubt, many similar real-world patterns, since the problem is fundamental.
So when we discovered the wrong level was getting called in some cases, I did a survey of available compilers, looking for one that got it "right". With both constructor and destructor behavior to be done right or wrong, there were three wrong ways and one right way:
- cfront (and the cfront-derived compilers) got it wrong one way.
- the three commercial compilers for PCs got it wrong a second way.
- gnu g++ got it wrong the third way.
So we had to program around it. We were able to get the exception handling to operate correctly. But both exception handling and garbage collection required that all nontrivial processing be moved from initialization to constructors. Member variables of object type had to be moved to the heap - replaced by smart pointers - and allocated by the constructor. Compact composite objects allocated and freed as a unit became webs of pointer-interconnected heap objects, allocated separately (thus multiplying allocations and frees). In addition to the extra memory management, the number of pointers that had to be followed by the garbage collector exploded. Efficiency went out the window.
Of course, since the published language definition didn't actually DEFINE the behavior, "right" or "wrong" were not official.
This was during the ANSI deliberations for the first generation of standards. The current draft explicitly left open the issue of which overriding you got in this circumstance. So I petitioned the committee to define it - "correctly". I had high hopes, since everybody would have had to make a change, so it wouldn't favor some vendors over others. But on the second round of email replies I got something that seemed strange - saying something about it not being important because "C++ doesn't have an object model". (Say WHAT?) I was too busy to pursue it further, let it drop, and the standard came out with the behavior explicitly undefined.
When the second round of ANSI standard came by (after my participation in the project was over) I tried again, just to "fix the language" to avoid this sort of subtle bug. Still no luck: This time the standard not only left it open, but explicitly said (approximately) "Don't write code where this could happen."
By the third round I had "gone over to the hard(ware) side of the force" and didn't pursue it.
IMHO this is the one thing in C++ that is a real language killer (of the sneak-up-behind-and-knife-the-developer variety).
So, FINALLY, the question:
Has this been fixed yet by the newer standards? If not, is there a chance that it will be at some future time? (Perhaps if YOU brought it up there might be more attention paid to it.)
It's called fair queuing. Serve all active customers equally.
That's what my solution would have been, as well. And I wondered why they didn't. Why did they set caps on particular users, rather than just split it equally on a moment-by-moment basis?
But then, a few years back, I was put on a team designing the hardware accelerators that handle bandwidth division in big router packet processors.
Turns out that doing real fair queueing, when you've got a sea of processors and co-processors trying to hot-potato all the packets, is NOT easy. It involves information sharing among ALL the streams, simultaneously, packet by packet, across coprocessors, processors, chips, even boards. This is both N-squared and doesn't parallelize well. So a typical implementation works by setting per-user or per-category-within-a-user limits (only having to access one, private, data structure per stream), assigning limits to each and counting each's usage without reference to the current usage of others.
That means that, to avoid dead backhaul time while customers are throttled below what's available, you have to give them oversize quotas. But that means the "flight is overbooked" and the heavy users, with more packets in flight, get more than their share of "seats", squeezing out the lighter users. To get back to moment-to-moment fair, with only the quota "hammer" for a tool, you have to throttle them back. Tweaking their quotas even on a minute-by-minute basis, let alone millisecond-by-millisecond, would swamp the control plane, and you couldn't easily share the storage for the rapidly-adjusting limits by classes, but would have to store them, as well as usage, per stream (or at least have many subclasses to switch them among). Oops!
I had some inkling that might be fixable. But the company downsized, and I was laid off, before I could examine it deeply enough to see if it could be done efficiently. That was a processor generation or two ago, and I've been doing other stuff since. Good luck, telecom equipment makers!
The FAA regulations are the biggest factor.
The next is liability litigation. Try to run a company when one suit from one crash (even if it's not your fault) might drain your entire investment and bankrupt you. Try to get insurance in the same situation.
Either alone might make it hard. Both together have essentially frozen designs for private aircraft for over half a century and nearly destroyed civil aviation.
11.91% of the population vs. 9% of the workforce. That says 32% fewer jobs per capita in the manufacturing sector.
Make that "24.4% fewer" or "other states have 32% more" jobs per capita.
Oops: Got output and jobs merged:
11.91% of the population vs 11.7% of the total output. A bit behind in value added. (Horrible, since the value added in, say, computers is hysterically high.)
11.91% of the population vs. 9% of the workforce. That says 32% fewer jobs per capita in the manufacturing sector. Doesn't sound like the "number one state for manufacturing jobs" to me.
California is by far the number one state for manufacturing jobs, firms and output - accounting for 11.7 percent of the total output, and employing 9 percent of the workforce.
I'd love to see that in per-capita or per-acre terms.
It's also the largest state in population, with 11.91% as of the 2010 census. That's half again as many as Texas, a pinch under twice as many as New York or Florida, almost three times that of Illinois or Pennsylvania, and by then you've used up more than a third of them.
11.7% of the output vs. 11.91% of the population says the AVERAGE of the rest of the states has it beat. Some of the others are REALLY depressed, so the best of them beat it into the ground.
Similarly, it's the third largest state in area - with the largest amount of COMFY area.
It has resources, the best ports for trade with Asia, decent roads and railroads to the rest of the continent, etc. And it's got some capital-intensive industries and lots of access TO capital. It SHOULD be a nova to the rest of the country's furnaces. So why isn't it?
One picture is worth 128K words.