About one-sixth of federal spending goes to national defense.
It's worth pointing out that our approach to national defense terrified the founders, many of whom wanted to lay down rules in the Constitution to make it impossible for the US to have a standing army. Compromises with practicality caused them instead to write the Army Clause of the Constitution to discourage standing armies by not allowing Congress to allocate funding for them for more than two years at a time.
There's even a legitimate argument that many of our contracts with army suppliers, lessors of land and buildings used by the army, etc., are unconstitutional, because they represent multi-year commitments. Technically, they should all contain language saying that their continuation each year after the second is contingent upon Congress allocating the funding, meaning the US could simply terminate the contract. Of course, no one would want to sign a contract that contained such language, and we've all collectively decided to ignore the Army Clause.
Note that no such restriction exists on the Navy.
The Militia Clause offers a bit of an out, since it allows (and requires) the federal government to organize, arm and discipline the militia. So it's arguably acceptable under the constitution for the federal government to maintain a training cadre, arms stockpiles, etc. even in time of peace. This "expansible army" approach is the one the US followed up until WWII. Congress declared war, then we used the small cadre force to train and equip an army, then when the war was over the army was scaled back to the minimal force again.
I'd like to see us go back to a fully constitutional approach to our army. Since we're not in a state of declared war, we should scale it back to a training and maintenance force. At the same time, we should actually direct a little of our current spending on the army to arming and disciplining the militia, which the federal government has almost never done well. By that I mean that the federal government should provide training to the entire militia population (males between 18 and 45, but we should amend the constitution to eliminate the gender specification). I don't think it needs to be compulsory, nor that the militiamen be compensated, nor even that the federal government needs to provide the weapons for peacetime training. Per the constitution, officers should be appointed by the states.
I think this approach would allow us to maintain a much larger force of trained soldiers, at a tiny fraction of the cost. I think it would also allow patriotic Americans from all walks of life to work and train together and to get to know one another, something that the army did pretty effectively at times in the past. It would also make it dramatically harder for presidents to engage in foreign adventures without a congressional declaration of war (another constitutional requirement we've been ignoring).
Note that this wouldn't reduce the cost of the Navy. I suspect that we'd classify the Air Force in with the Navy as well, since it's more like the Navy in fundamental ways (the big one in the founders' view being that armies can occupy territory, but navies and air forces cannot). I think perhaps those could be pared back a bit as well, but that's just a political decision, there's no constitutional need for it.
Actually, from the anti-tax Republicans/Libertarians it would be: "I don't think the government should be in the business of providing these benefits."
Well, the Libertarians would say that. The Republicans aren't so interested in cutting services, just taxes. Sometimes they justify it with the idea that they're "starving the beast", cutting revenues while not cutting (or even increasing) services, on the theory that the resulting massive deficits will eventually force cutting of services, but mostly they don't bother.
As crazy as it sounds, the last few decades seem to indicate that the more fiscally conservative of the major US parties is the Democratic Party, even if we exclude the aberration (I hope) that is Donald Trump. As a fiscal conservative myself, I think I've been voting for the wrong guys.
I'm surprised that this isn't already integrated with the locomotive. The locomotive is almost certainly diesel-electric, so why did they have separate generators on the cars, rather than just drawing from the massive diesel generators in the locomotive? And if they add solar panels to all of the cars, why use them to charge batteries, rather than just feeding any excess juice to the locomotive, allowing it to burn a little less fuel to keep the train moving? I suppose this might result in a little bit of waste when the train is sitting still, so it may be worth having enough battery capacity to capture that energy, but most of the time it's sitting still it's probably in a train station, which could likely use the power.
Note that I know almost nothing about any of this stuff, so this isn't a "they're stupid for not doing that" post; I'm actually asking questions. I suppose the simple answer may well be "Because the locomotive isn't presently designed to do that".
You've never used a wok?
I have a wok. It has a flat bottom. Lots of them do, though I suppose purists might say that makes them not real woks.
The razor blade doesn't get it clean, you still need the rubbing compound to get the stain out.
I've never seen a sheet of glass get stained, on a stove or elsewhere. Physically, I can't see how glass possibly could get stained. To take a stain, an object has to have pores into which the colored material can flow. Glass has no pores.
Certainly it's never been a problem with the glass-topped electric ranges I've used. I've had a gas range for the last three years (well, at the moment we're cooking on a Camp Chef on the back deck because we're in the middle of remodeling the kitchen), but the 20 years before that I had a glass-topped electric stove, so it's not like I don't have any experience with them.
The pan is a non-issue though. In real-world practical terms, I can take a reasonable amount of sauce from boiling to quiet and back to boiling with about 2-3 seconds of reaction time, just by turning the burner knob. Gas is still far more convenient for many cooks than induction electric.
Maybe our gas range was badly designed. Each burner had a big flame spreader on it that I suppose may act as a heat sink/reservoir. We didn't see the benefit.
There is no magic bullet. C is powerful and can be dangerous, but if you're writing security-critical software in C and you don't know how to avoid security holes, I'd say by definition you are not "highly competent."
The point is that it's quite clear that no one is highly competent at writing security-critical software in C.
FWIW, I write security-critical software, and I write in C. Well, mostly in C++, which is a little better. I'm pretty good at it, too. Security researchers have found very few problems in my code. But I don't kid myself. I know there are bound to be some issues in there. The same would be true if it were written in Rust, but there are whole classes of problems that are eliminated by use of something like Rust... and there really is no benefit to the lack of safety of C. With C++ I can build some safety nets that C doesn't have, but there are limits.
Those glass rangetops have some drawbacks. Your pans need to have completely flat bottoms, or they heat unevenly, for one.
I don't think I've ever had a pan that didn't have a flat bottom.
They also don't put out as much heat, so if you're cooking something that needs very high heat they don't work as well.
Hmm. Never noticed a problem with that.
Finally, if you do spill something on it and it burns, it can be a bitch to clean the burned-on residue off of the surface. You have to use rubbing compound to get it clean.
No, you just use a razor blade and scrape it off. Like one of these: https://www.amazon.com/Stanley...
Hand-holding was deliberately rejected to avoid complexity and increase efficiency. And it worked, until people tried to make things "easier."
The plethora of serious security defects in code written by highly-competent C programmers gives the lie to this notion.
I surely NEVER want to give up my gas range for electric....ugh, that is NO way to cook (electric).
I don't get the fascination with gas ranges. I just removed a gas range/oven and we're replacing it with a double oven and a separate range. I don't cook that much but the people in my house who do enjoy cooking didn't really care that much for the gas range. Yeah, heat changes are instant -- at the flame -- but the pan still has to heat up or cool down. And the gas stove top was so hard to clean compared to a modern electric range, which is just a flat sheet of glass, trivial to wipe down.
Of course, a year after I left, someone modified the code and it started eating up ram. When they called me, I told them to put the code back the way it was, because even the source said "this may look wrong - but it's not. DO NOT TOUCH". They reverted to my old code, and everyone was happy.
So... from that anecdote, are we to conclude that C code is unmaintainable, your code is unmaintainable, or both?
All in all, your story is a great example of why C is bad. It took a long time to build something that worked correctly, and once completed the code is brittle and unmaintainable, to the degree that you felt the need to comment that it should not be touched. And you were apparently right. Also... the code could still be full of security holes. The fact that it runs correctly in normal circumstances says next to nothing about what happens under attack.
What exactly IS "AI?"
The AI relevant here is Artificial General Intelligence. That is, AI that has roughly human-level capacity for abstraction, creation of explanatory models of the world around it, and application of those models to create new knowledge as needed to accomplish its goals (whatever those may be).
I think that's about as precisely as we can define it right now, because we don't yet understand intelligence well enough to define it much better than that. But it's clear that there is a qualitative difference in the sort of intelligence that humans have vs the rest of the animals on our planet. Many other animals exhibit various cognitive abilities that we have, including self-awareness (though that may or may not be necessary for general intelligence), abstract thinking, theory creation/modeling, and application of abstract models. But none can do it remotely as well as we can, and that difference is the reason that we're the dominant life form.
So what we're talking about is AI that can do that. And it seems quite clear that once we've achieved a general artificial intelligence that is capable of understanding what we've learned about how to build general intelligence, but is a little faster or a little smarter than we are, it will be able to design a better successor. And so on, quickly outstripping us.
Of course, it's also possible that there's some reason we don't yet know about that this cannot happen. But if so, we really don't know what that might be. Not the faintest glimmering of a clue. That being the case, it's a good idea to be thinking really, really hard about this space and about how to try to manage what could be an existential risk to humanity.
Talking about regulating it, though, is silly. We have no idea what kind of regulation would even be useful. We need to learn a lot more, first.
We are more about bandaging up problems than preventing them in the first place. Look at pollution. Places don't work on reducing it until it becomes a problem.
Which is the right thing to do.
The reason we don't pre-emptively address problems until they become problems is that we can't actually know what will be a problem until it is. Take a look through the last few decades of history at all of the prognostications of what the major problems were going to be, then look at what actually happened. It's really quite rare that we get our predictions right. Note that it's easy in hindsight to look at what did become a problem and then find the predictions -- they always exist -- but if you look the other direction, first looking for the predictions and then at how many of them become true, you'll find that we have a terrible track record.
The reason predictions of the future are nearly always wrong is pretty simple: We can only extrapolate from current knowledge, but current knowledge is always incomplete, both about what exists now and especially about what we'll learn in the future.
This doesn't mean that trying to guess is a bad idea. It's not. In fact it's crucial, because it gives us the opportunity to debate and plan responses if and when we become certain that something actually is a problem. But we always have to remember that forecasts are only forecasts, and that the further out they are the less accurate they are. They're primarily useful for ongoing contingency planning, until we can actually confirm that what seems likely to be a problem really is a problem.
Regarding AI, I think there is cause for concern. We should be investing in thinking about the possible consequences of AI superintelligence and how we can deal with it. It's an incredibly tricky and subtle problem, because we're talking about trying to control something that is, by definition, much cleverer than we are, and therefore able to see right through everything that we think and do.
I think Elon Musk is right to be concerned. I think he's wrong if he's suggesting that we should start putting regulations in place now. We have no idea what kinds of regulation would even be useful. What we should be doing, right now, is what we are doing: discussing and thinking about the possible problem, trying to understand what its parameters might be, how it might play out, what our options may be, etc. If we should do more, then that "more" should be more thinking and more research and more debate. We should establish academic posts that encourage smart people to think about the issues, and fund conferences and journals to facilitate the flow of ideas and debate. We should do what we can to make sure that all of the people working on the practical questions of figuring out how to build artificial general intelligence are also thinking hard about the moral and ethical questions.
Musk probably is right that it's a good idea for lawmakers to start becoming aware of the potential problem. But we should not, at this time, start trying to make laws to address the issue, because we have no idea what laws to write. Asimov's robot stories were mostly a demonstration of that fact. Even given a set of impossibly abstract and hard-to-implement rules like his Three Laws of Robotics, his stories demonstrated over and over how the apparently well-meaning rules resulted in perverse outcomes. And we're very far from being able to define anything like those rules, much less know what laws we should create to impose and enforce them.
BTW, I highly recommend that everyone read Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies". It's a thorough and excellent introduction to the subject.
I see the flaw in your logic. You are making the assumption that just because they are "educated" they are gaining the ability to illicit independent thought.
s/illicit/elicit/. I don't normally bother with spelling corrections, but the difference in meaning is huge.
This is the biggest issue with our basic public education system as well. We no longer teach, we just make them memorize everything and then never really teach them how or why and a lot of times when to use the information provided.
Uh huh. You haven't paid much attention to the evolution of education in the US, I see. What you describe is exactly what was done in the early through mid 20th century. It's actually gradually evolving away from rote learning (which is good).
In any case, we're not talking about public education, we're talking about higher education. Entirely different kettle of fish. If your university focused on rote memorization, then you got seriously shortchanged... and your education was not typical of US higher education, not even in community colleges.
Where there's a will, there's a relative.