For what particular reasons should we have gone back to rail passenger service decades ago?
The average voltage on an AC line is 0 volts; RMS is probably what you intended. But yes, watt-hours are all basically the same for a given RMS voltage/frequency. We will ignore power factor; that would take a lot longer to discuss...
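A quick numerical sketch of that distinction, using assumed 120 V RMS / 60 Hz mains values: over a full cycle the signed average of a sine wave cancels to zero, while the RMS recovers the familiar line voltage.

```python
import math

# Hypothetical 120 V RMS mains sine wave; peak = RMS * sqrt(2) ≈ 169.7 V.
V_RMS = 120.0
V_PEAK = V_RMS * math.sqrt(2)
N = 10_000  # samples over exactly one cycle

samples = [V_PEAK * math.sin(2 * math.pi * k / N) for k in range(N)]

mean = sum(samples) / N                            # signed average: cancels to ~0 V
rms = math.sqrt(sum(v * v for v in samples) / N)   # RMS: recovers ~120 V

print(f"mean = {mean:.6f} V")
print(f"rms  = {rms:.2f} V")
```

The mean is what the "0 volts" remark is about; the RMS is what your meter (and your utility bill) actually reflects.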
Just to be pedantic:
The electrons going through your appliances are almost entirely the ones that were in the wire of the appliance to start with. Some electrons may actually drift enough to have come from your house's wires. But the probability that any significant number of electrons are physically the same ones that were in the power plant is very low.
This may not seem obvious at first, but the reason is that the drift velocity of electrons is actually very slow, even at typical currents. In other words, a piece of wire has so many damn electrons that you don't really need to move a very large portion of them to get a large current. If we were all using DC mains, then eventually you would see electrons making a round trip. But with AC, as mentioned above, the average voltage is 0, so the electrons move back and forth without typically getting very far in either direction.
Also, a more direct thing to consider is that most electrical systems use isolation transformers. So literally, the electrons are not passing from the utility across that barrier (unless autotransformers are used). It is an energy conversion to and from a magnetic field.
There is a clever and very practical solution I've heard of in Germany that utilities are using to dump excess renewable electrical energy: electrolyze water to H2 and inject it into the natural gas system. Infrastructure already exists, technology already exists, very low cost to implement. So you aren't exactly storing for the purpose of the electrical grid, but overall energy management is pretty good.
Similar things can be done with stored thermal systems in northern climates (you can heat water when energy is in excess and draw from it later.)
Exactly, I'm getting to be a codger with a 5-digit ID. But, I recognize that if you don't have some social media presence, then you just don't exist. And it isn't really an age thing. Plenty of people twice my age look forward to seeing what I'm doing through Facebook. I'm lucky that my wife does most of the posting for both of us because I just don't want to spend time on it.
But that is the real difference, do you text and check Facebook when stopped at every red light, or do you keep one foot planted in the Analog world?
I try to be as nice as possible to the person behind the counter, but I just say the equivalent of "do you want my money or not? Let's just bypass the part where you ask for my info." Party City was the latest place that wanted my info. They have no need for my info. I give them money, they give me fancy paper plates for my toddler's birthday party.
Yes, these are really good points. Certainly they did abort more than 100 different bad ideas.
There is a really good series "From Earth to the Moon." What I love is how much people were able to accomplish without email, CAD, collaboration apps, etc. It is hard enough to coordinate 10 engineers even with modern technology. You really get a sense of the planning involved from 1961 to 1969 and beyond. They had an overall plan and were determined not to fail their primary mission, and there were big question marks left in their plans for things yet to be invented.
NASA could afford to blow up rockets and not run out of funding. I think that is a key element: working within the practical limits of available capital. They were certainly agile and were willing to throw out bad ideas.
Absolutely, Apollo 1 was a failure. People died. That was unacceptably outside of the mission parameters. But I argue that is the reason to avoid a careless approach that encourages failure.
I suppose it comes down to semantics of words.
To me, failure means "failure to meet project goals," and that is always bad: money runs out, timelines slip, safety is compromised, etc.
It sounds like fail-fast means "quickly write off solutions that are unworkable." To most engineers, that is just a feasibility study. Granted, the faster and cheaper you can write off a truly bad solution, the better off you are. Breaking a few prototypes is not a failure as long as that was considered in the project budget. But giving up on a workable solution too early can lead to a lot of churn.
That's why people should fail; or rather, not be afraid of failure.
That is the key element. People take the failure part too literally as a measure of success. Failure may occur. Everyone has to do their own math on the risk/reward. There is no guarantee of the big payout for the 1 in 50 success rate.
Try like hell, recognize failure.
I think that is easier to agree with.
...and many companies burn through their capital on their 3rd failed attempt. Failure isn't the goal. Forward progress is the goal. Recognizing failure or impasse quickly and cutting losses is the goal.
Sometimes doing nothing is a perfectly good option if there isn't a viable path given the current state of technology, market demand, and capital available. Timing is everything.
100% agree. This 'fail fast' crap is extremely narrow-minded. We didn't get to the moon by failing fast. We got to the moon by trying like hell to get it right. Failing faster would have led to 100 different aborted attempts at the first sign of trouble in a design. All the approaches had many failure modes that had to be worked through diligently. At what point do you declare failure vs. work through a problem?
If you know for a fact that your toolchain covers every case for you, that is great. I have worked on one project where someone took some really great synchronous design from the tool's libraries and put "just a simple set of logic gates" on the output signals to convert the output to gray code. That was fun.
So of course it would be a waste of time to whip out a K-map for everything. But the point is, could you? Or does a designer at least know why glitch cases happen, and what specific actions the tool is taking to avoid them?
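As a toy illustration of why gray code matters for glitch cases (a Python sketch of the general principle, not what any particular toolchain does): adjacent gray codes differ in exactly one bit, whereas a plain binary increment like 0111 → 1000 flips several bits at once, and unequal combinational delays can briefly expose bogus intermediate values to anything sampling asynchronously.

```python
def to_gray(n: int) -> int:
    """Standard binary-to-Gray conversion."""
    return n ^ (n >> 1)

# The whole point of Gray code: consecutive values differ in exactly
# one bit, so an asynchronous sampler can never catch a multi-bit
# transition mid-flight. A plain binary count (e.g. 0b0111 -> 0b1000)
# flips four bits at once, and gate-delay skew makes that glitch-prone.
for n in range(15):
    changed_bits = bin(to_gray(n) ^ to_gray(n + 1)).count("1")
    assert changed_bits == 1  # exactly one bit changes per step
print("all adjacent Gray codes differ by exactly one bit")
```

Which is exactly why bolting "just a simple set of logic gates" onto a synchronous design's outputs, as in the project above, defeats the purpose: the combinational conversion reintroduces the multi-bit transient the gray coding was supposed to prevent.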
I will say that this sort of understanding is helpful in software too for nested if statements, state machines, etc. where you can determine if state changes can be reduced, or if they are covered properly.
Wow, you can see the pavement on the highway and it isn't full of craters. The surface streets have a nice grid-like orientation and stoplights in logical places.
Have you ever been to upstate New York in winter, or Boston any time of year?
Maybe it can be deployed sooner in limited areas that are well-mapped and have predictable road conditions. You will absolutely need to keep daily updates of detours and changes to stoplights, etc.
I just wonder if any existing automotive companies will want the liability, or if this will create a rift between traditional automobile companies and auto-automobile companies.
Exactly. Maybe the trend is that CS simply builds on existing languages and solutions so that the underlying principles are less relevant. There are new and higher-level concepts being taught in CS. However, prospective students should carefully weigh this against their career goals and what is employable vs. academic study. Employers will care more about domain knowledge first, and programming ability second; e.g., a math or physics major who can follow good programming practices and has a rough understanding of computability theory.
True, you may not need an EE degree. But if you can't draw a K-map and cover glitch cases, just as one example, then you are not qualified to develop programmable logic. While the FPGAs and micros come with a lot built-in, you still have to understand circuit principles when designing the surrounding support components and proper interfacing of signals, ratings, timing specs, etc. We need to understand power consumption in components to best manage it from software. So typically, the requisite skills are taught in EE, computer engineering, or something closely related. Kudos if a CS program teaches that, but I'm not sure if that is consistent.