Absolutely agree, once the infrastructure is in place. As it is, I still can't even charge my Volt at work...
Surely this is what GM had in mind when they produced the Bolt EV concept car. It's quite obvious they are indeed working on it...
The Volt does not have a "traditional gasoline drivetrain", nor anything close. There's no conventional transmission, and a generator supplies power from an internal combustion engine. What differences, specifically, do you want to elaborate on?
My hunch is that they won't because there is no advantage to be gained in doing so.
The generation 1 Volt (2011 - 2015 model years) has several drive modes. It's not a parallel hybrid under most circumstances, though it's hard to say how often my Volt operates that way. The engine only runs in CS or "Hold" modes; the theory, I think, is that at certain speeds the engine is more efficient when driving the wheels directly.
It certainly has nothing to do with power--when I have enough remaining charge, the engine never runs (unless I'm driving in temperatures below 15F), and I can drive any speed I like.
Also, I don't remember the Volt ever being advertised as a "hybrid". It is an EV with a range extender. In electric mode, it operates just as an EV would.
(The obvious differences between a Volt and a Tesla are battery capacity and the range-extender engine. Tesla's range comes from battery capacity, but there's a reason the Tesla costs twice as much as a Volt.)
That isn't at all true. My laptop has both IPv6 and IPv4 addresses. When I make a request to google.com, I'm using IPv6; when reaching sites that don't have IPv6, I fall back to IPv4. As a user I don't even notice this.
Similarly, HTTP/2 has to be implemented on both clients and servers before it will be functional; otherwise the endpoints fall back to HTTP/1.1.
There's some additional network configuration needed before IPv6 is useful, but no need to convert anything.
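The fallback I'm describing can be sketched in a few lines of Python using the standard socket module -- roughly what "happy eyeballs"-style clients do, minus the parallelism. The function name and timeout are my own, just for illustration:

```python
import socket

def connect_dual_stack(host, port, timeout=5.0):
    """Try each address the resolver returns, in order -- typically IPv6
    first where available -- and fall back to the next on failure."""
    last_err = None
    for family, socktype, proto, _name, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(addr)
            return sock          # first address that works wins
        except OSError as err:
            sock.close()
            last_err = err
    raise last_err if last_err else OSError("no addresses for " + host)
```

The caller never sees which address family won, which is exactly the point: dual-stack fallback is invisible to the user.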
Existing standards that are "good enough" tend to be hard to replace.
Your claim smacks of hyperbole, but that aside, I've also had to use code from developers who like to name their methods x(), f1(), t2(), you get the idea. I can't tell if they're too lazy to type more than that, if they're striving to make all their code fit in a 40-column window (a la GW-BASIC), or if they hate the idea that anyone else would ever try to read and comprehend their code.
There's got to be a nice balance.
Exactly right, but your sensible viewpoint doesn't belong anywhere on a blog site, apparently. No, you can't completely describe the Volt as a plug-in hybrid, EV, series or parallel hybrid, or whatever--it's a Volt and there's nothing else exactly like it.
I read this forum having come from other EV forums where readers are complaining endlessly that the Volt isn't a true EV, that it has far too limited range, that it was designed as a parallel hybrid and should've been a series hybrid, etc. Folks. This is all new stuff. If you want to change the world, stop posting drivel that drives away readership.
And BTW, I'm sure GM would've loved to release an EV in 2010 with 200+ mile range, one-hour charge times, and a sub-$25k price. The reality is that it wasn't practical in 2010, and may be only barely practical today given the economics involved and the state of the technology.
The Volt is a great stop-gap. It gave us something to buy these past four years while we wait for more advanced EVs to become feasible and hit the market. The drivetrain is complex, but apparently has a very low failure rate. The ICE will run frequently or continuously in extreme conditions, but most drivers can expect lifetime averages well over 100 MPG in real-world conditions. Why can nobody simply call this what it is: a technical coup for GM?
Can't argue, and thank you for the interesting examples. I don't think HTTP is perfect; I was wondering out loud whether it is merely good enough.
Seems to me though that most of those problems arise from sloppy implementations (like you said, did they read the docs??) which supports my 2nd point. A perfect specification isn't going to prevent poor implementations.
Two remarkable things about HTTP/1.1:
One, it remained a relatively simple protocol. Yes, there are a lot of nuances around content negotiation, transfer encodings, and such, but at its core it is a simple, flexible, and effective protocol that can be implemented quite efficiently via persistent connections and pipelining. It was designed for response caching as well, and the CDN infrastructure is in place to make use of caching whenever possible.
Two, despite the simplicity of HTTP/1.1, a shocking number of implementations get it wrong or don't use it efficiently. Pipelining is disabled in many implementations due to compatibility concerns, and few applications can use it effectively. Many applications make excessive and unnecessary use of POST requests which are inherently not cacheable and result in many synchronous requests performed over high-latency connections. (SOAP was notorious for that.)
I'm skeptical that any protocol revision can improve on HTTP/1.1 sufficiently without making it harder to implement correctly than it already is.
If there were a broad initiative to begin to use the features of HTTP/1.1 properly, as they were designed, most of the shortcomings would vanish without the need for a new protocol.
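As a small illustration of "using HTTP/1.1 as designed": Python's standard http.client already reuses a single TCP connection for multiple requests, no new protocol required. The function and its names below are mine, a minimal sketch:

```python
import http.client

def fetch_over_one_connection(host, port, paths):
    """Issue several GETs over a single persistent HTTP/1.1 connection.
    Draining each response body is what allows the socket to be reused."""
    conn = http.client.HTTPConnection(host, port)
    statuses = []
    for path in paths:
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()              # must drain the body before the next request
        statuses.append(resp.status)
    conn.close()
    return statuses
```

Compare this with an application that opens a fresh connection (or worse, an uncacheable POST) per request over a high-latency link, and the cost of ignoring the protocol's design becomes obvious.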
Hydrogen fuel cells might become viable in the future, who knows. That doesn't mean we shouldn't develop the technology, but in the meantime we need alternate energy today, and you can buy and drive an electric car now. I plan to look closely at electric cars for my next vehicle. By the time I'm ready for another (5-10 years from now), if fuel cells are available I'll consider those too, but they don't help me now.
You're talking about the party that screams about deficits and the federal debt when they do not control the White House, yet passes measures that raise the deficit when they are in power.
Thinking logically won't help you understand politics. Here are the rough priorities of the GOP (and the Dems, for that matter):
- Tell voters whatever it takes to get (re)elected,
- Promote legislation that satisfies their campaign contributors (i.e. big business),
- Do whatever it takes to block the other party from getting elected (into *any* office).
Do these priorities sometimes conflict? Sure. Is that a problem? Only if you make it a problem. You see?
Thanks. When you say:
Well, to point to evidence otherwise, if by JIT you're talking about dynamic translation/recompilation for optimization purposes then no
That is the definition of a JIT I have always used, thus my confusion. All of the reasons you state are performance arguments against JIT, but there are other reasons for JIT compilation, including architecture independence. But since all the world's an x86 these days, that feels less important now than it did 15+ years ago.
Speaking of dynamic recompilation: back in 1998 or so I was running Windows NT on a Digital Alpha. Since there were few native apps, Digital included a translator to run Windows apps (Office etc.) compiled for x86 on the Alpha, and I'm sure it was some sort of JIT. Those apps ran slowly the first time but got noticeably faster over the next 2-3 runs. And it legitimately translated and ran the apps faster than a good Pentium Pro machine of the day. That was due to a hardware edge the Alpha had over Intel rather than anything in the software, and that advantage didn't last long.
1) Sure. When I said "performs very well" I meant within an order of magnitude. I've heard claims that JIT technology will surpass ahead-of-time compilation. I don't quite believe that either, and at any rate it's clearly not there yet. I didn't claim the JIT was equally fast or faster, but it's close enough that there just weren't many people interested in using a compiler like gcj.
2) I'm skeptical that JIT requires a GC (in the general case, though for Java it is clearly required by the JVM). Do you have a reference for this claim?
3) Yes, and of course real Java code does bounds checking too. Back when I was using gcj extensively, I found examples of duplicate array bounds checks: one added by the developer and one inserted by the compiler. A good compiler should be able to eliminate the redundant check, though at times this is hard, such as when the caller does the bounds checking in a different translation unit.
4) Interesting. There are examples, such as the complex type, that would be far more efficient as a value object. But I agree with your point that passing objects that are too large as value objects is probably counter-productive.
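The redundancy in point 3 shows up in any language with checked subscripts. Here's a Python sketch (names are mine): the hand-written guard duplicates the runtime's own implicit check, and unlike gcj's optimizer, CPython simply executes both.

```python
def get_checked(items, i):
    # Developer-written bounds check...
    if i < 0 or i >= len(items):
        raise IndexError(i)
    # ...followed by the runtime's own implicit check on the subscript.
    # In gcj-compiled Java the compiler inserts the equivalent of this
    # second check; only a good optimizer can prove it redundant and
    # drop it, which is hard when the guard lives in a different
    # translation unit from the access.
    return items[i]
```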
Mostly I agree with your points. Especially that Perl is dog slow.