Isn't there a massive oil boom going on in the US and other countries at the moment?
Yes, and not for the first time either. There was a shale oil boom in the 1970s that went bust when Saudi Arabia opened the oil spigots in the 80s.
Note what this means: we knew the shale oil was there; we had the ability to get it, but we left it in the ground because it was more expensive than regular petroleum. As we anticipate not having access to cheap oil, we're turning *back* to shale.
This is what it looks like when a planet runs out of oil. It's not like one day you pump the last drop of petroleum out of the ground. We'll never get that far. What happens is that we'll go after increasingly marginal sources of oil, until the day comes that the next drop of oil extracted is more expensive than some alternative energy source. And because what is "economically extractable" is dependent on technology, nobody can put a precise date on that, but looking for more energy efficiency and resuming shale oil operations are two sides of the same coin.
The F150 has been the best-selling vehicle in America for decades, so it takes a lot of guts, or a lot of motivation, to do something radical with it. I'm guessing that Ford isn't doing this out of public-spiritedness, but because they anticipate higher fuel prices some time in the next few years.
Of course, comparing gasoline prices to global production is a bit like comparing weather to climate; there are factors in play which swamp the long-term trends -- over the short term. Still, I've seen some predictions that gasoline will hit $6/gallon in the next five years from its current level of $3.65/gallon. If that prediction is even remotely true, then even if the new truck is plagued with problems it'll be a winner. And eventually crude prices are going to send gas prices that way.
Aluminum *does* corrode. In most situations the oxides form a stable protective layer, but in situations where aluminum is in contact with dissimilar metal you can get galvanic action and the less noble metal will corrode. There's also a phenomenon called stress corrosion cracking where a metal in a corrosive environment can fail catastrophically after being repeatedly exposed to stress.
So a piece of structural aluminum near a fastener in a salty environment isn't safe from corrosion failure. Naturally I'd assume Ford is on top of this, and that you'd have nothing to fear from your new aluminum truck. How safe it would be after ten or fifteen years of being driven hard over New England roads is something I wouldn't be altogether sure of. Again, I'm sure the engineers have taken this into account, but engineers are fallible, so we'll have to wait and see.
Steel really is an amazing material, both strong and tough. It tends to fail in benign ways (bending rather than breaking), which also contributes to the safety of a steel vehicle. When steel is damaged it is easy to repair. My wife has had a couple of incidents with her car and a certain steel beam in the garage at work. When it happens we replace the passenger side doors and have our mechanic beat the door pillar back into shape with a big hammer. I'm not sure an aluminum vehicle could be repaired this way.
So as a geek I'm delighted Ford is trying something new. But there are good reasons nobody's attempted this before. I'm hoping it's a brilliant success, but we won't be sure until the vehicles have been on the road for a few years.
Find someone to blame, then make sure they get *all* the blame.
Now, imagine what happens with the plumbing analogy when you try to make everything go backwards.
Alright, I'm game. We're talking about residential rooftop panels, right? Here goes.
I'm imagining... nothing happening.
According to the link, the panels the guy installed generate 35 kW. That's bound to be sucked up by his neighbors running off the local transformer.
Still not enough to merit an accusation of dishonesty. It's just an interesting fact, that people with working brains can take into account without hyperventilating.
It actually *is* quite an interesting fact, because it shows how the relationship of Antarctic ice to global temperatures is quite complicated, as are weather conditions in any one region of the Earth at any particular time. It's something to keep in mind, next time you look out your window and see a little unseasonable snow, or unseasonable sunshine for that matter.
to do something that furthers his criminal enterprises has a name. It's called "conspiracy".
So if you ever try your hand at hunting down criminals like this, be aware of the potential danger of tying yourself to the criminal's legal fate. If you've done business with him that's the least bit shady, and he's overseas beyond the reach of local authorities, things could get quite ugly for you.
I assumed "it" referred to his basement.
Maybe the world has colors in it besides black and white.
Well, we're talking far too abstractly here to be very meaningful. I'm not saying an RDBMS couldn't be *part* of the picture. I'm saying that a system architecture that punted all the persistence and data consistency problems to a distributed RDBMS is a non-starter for something on this scale. People don't build systems like this one that way, and given that relational technology is mature, the fact that they don't is a good reason to be skeptical of the idea that that approach would be a panacea.
No, it doesn't have the "pop" of NoSQL,
More to the point, it doesn't have the scalability across distributed systems. Show me one application approaching this scale, just *one*, that relies on RDBMS clusters and two-phase commit exclusively to support this kind of transaction volume. Don't get me wrong; I'm an old-school RDBMS guy myself; I know a lot about relational database systems, including their limitations. I'd look to the way outfits like Amazon handle this kind of scale.
RDBMS servers are made to just do the job quietly and reliably, with very strict ACID compliance...
This is a very simple-minded approach to architecture, one that's admittedly very serviceable in a wide array of applications. But useful as ACID is as a set of assumptions you can rely upon, it's not the only way to create a reliable, serviceable system. In fact there are situations where it's provable that ACID falls short. Google "CAP Theorem" and "eventual consistency". It's fascinating stuff.
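To make "eventual consistency" concrete, here's a minimal sketch (the class and data are my own invention, not anything from a real system) of the last-write-wins style of replication: two replicas keep accepting writes independently, which an ACID cluster couldn't allow during a partition, and then converge when they can talk again.

```python
class Replica:
    """A toy replica storing (timestamp, value) pairs per key."""

    def __init__(self):
        self.store = {}  # key -> (timestamp, value)

    def write(self, key, value, ts):
        # Each replica accepts writes locally -- stays available even if
        # it can't reach its peers (the "A" and "P" sides of CAP).
        self.store[key] = (ts, value)

    def read(self, key):
        entry = self.store.get(key)
        return entry[1] if entry else None

    def merge(self, other):
        # Last-write-wins merge: the newer timestamp beats the older one.
        for key, (ts, value) in other.store.items():
            mine = self.store.get(key)
            if mine is None or ts > mine[0]:
                self.store[key] = (ts, value)

# Two replicas diverge while partitioned...
a, b = Replica(), Replica()
a.write("cart", ["book"], ts=1)
b.write("cart", ["book", "lamp"], ts=2)

# ...then exchange state and converge: eventual consistency.
a.merge(b)
b.merge(a)
assert a.read("cart") == b.read("cart") == ["book", "lamp"]
```

The trade-off is visible right in the code: between the writes and the merges, the two replicas give different answers, which strict ACID isolation would forbid. Real systems layer smarter conflict resolution (vector clocks, CRDTs) on top of this basic idea.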
It does not sound to me as though known management tools were used. Did they sit down with the government personnel in charge, and present their approach, and what the site would look like (menus, flow, etc) when finished? Were there testable milestones, and a final presentation of working software? It sure doesn't sound like it.
They might well have done all these things and still failed to catch the problems before the site's launch.
Performance, like security (ack! scary!), is a non-functional requirement -- that is to say, it's not the kind of requirement where you can sit down with a checklist and say, 'yep, it works,' or 'no, it doesn't.' You have to develop a more sophisticated test.
Load testing is a step in the right direction, but you also have to look at system architecture. Remember the days before people figured out that you had to load web ads asynchronously, after the page content was loaded? Sometimes the page load would be slow, not because the page's server was overloaded, or because the user's browser or internet connection was slow. Often it would be the ad server that was overwhelmed, which if you think about it is bound to be more common than the content server being overwhelmed. You could functional-test and even load-test the heck out of a page with synchronous ad loads, but until it went into production chances are you wouldn't catch the fatal performance flaw. That kind of problem is architectural; some of the data being delivered is coming from servers outside your control.
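The architectural fix is to refuse to let your critical path wait on a server you don't control. Here's a rough sketch of that idea using Python's asyncio (the fetch functions are stand-ins, not real network calls): the page waits for its own content but gives the third-party ad only a fixed time budget.

```python
import asyncio

async def fetch_content():
    # Content from our own server: fast and under our control.
    await asyncio.sleep(0.05)
    return "<article>page content</article>"

async def fetch_ad():
    # Third-party ad server: may be arbitrarily slow -- outside our control.
    await asyncio.sleep(10)
    return "<div>ad</div>"

async def render_page():
    # Start the ad fetch, but only *wait* for the content; the ad gets
    # a short budget, after which we render without it.
    ad_task = asyncio.create_task(fetch_ad())
    content = await fetch_content()
    try:
        ad = await asyncio.wait_for(ad_task, timeout=0.1)
    except asyncio.TimeoutError:
        ad = "<!-- ad slot left empty -->"  # degrade gracefully
    return content + ad

page = asyncio.run(render_page())
```

With a synchronous design, the 10-second ad fetch would have stalled the whole page; here the user gets the content in a fraction of a second no matter what the ad server does. No amount of load testing against a healthy ad server would have revealed the difference.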
Ordinary tests are about ensuring reproducible results, but when the architecture leaves you vulnerable to things happening on servers and networks outside your control your problems *aren't reliably reproducible*. You have to design around *possibilities*.
Some of the problems with Healthcare.gov were of this nature, although with not so simple a solution as "use window.onload()." The site is supposed to orchestrate a complex, *distributed* process *synchronously*. You have to go out to Homeland Security's database to confirm citizenship, then to the IRS databases to confirm claims about income, then get quotes from the private insurers that meet the customer's needs. There is, in my opinion, no way to be 100% sure, or even 80% sure, that a system like that will work under real-world load unless you present it with real-world load.
Were I architecting such a site, I'd plan to do a lot of that work in batch; that is, I'd build the healthcare application offline in the user's browser, with internal consistency checks of course. Then I'd send the user's application through a batch verification system, emailing him when a result was ready. This is a clunky and old-fashioned approach, but it wouldn't force the user to chain himself to his browser. And it would have more predictable performance. Predictability is a vastly under-rated quality in a system. A system which is fast most of the time may not be as desirable as one which provides the answer consistently.
The sooner you fall behind, the more time you have to catch up.