If we're naming a lost, Jupiter-sized planet on the edge of the solar system, I'm pulling for "Yuggoth" before "Nibiru".
I'm going to propose a more radical fix: we need to stop letting the DOM have reliable access to so damn much information.
When we started the move away from webpages and toward web applications, we let the DOM have access to pretty much everything, because applications are big, general, and data-hungry. The DOM captures keystrokes so each website can have its own controls and hotkeys (which unintentionally lets a user be identified by keystroke dynamics). The DOM has access to blocks of offline storage so that applications can remain stable when offline or infrequently connected (which is another vector for super-cookie tracking). It has access to viewports and peripherals for responsive layouts (which is more data for a browser signature that can easily let user activity be correlated). CSS needs read access to link colors if it's going to change them dynamically (which means the links the browser has colored as recently visited are known, allowing history-based signatures).
The DOM has demanded every piece of data available to the browser in the name of ever more byzantine applications, even though all but an insignificant portion of the web is still consumed in a page-like way. You can use NoScript and set Opera/Firefox/Chrome preferences until you're blue in the face, but you will never reduce your tracking cross-section while the standards bodies insist on pushing these very broad, demanding features in the standards themselves.
I'm reminded of one time back in high school when we were discussing a poem by Margaret Atwood. The English teacher mentioned as an aside, "who knows where Margaret Atwood is from," thinking it would be a good segue. Silence. "I'll give you a hint: she's writing in her native language."
"No." There was another pregnant silence, and before I could hazard a guess of New Zealand, he gave up and said, "Canada! Margaret Atwood is perhaps the most famous Canadian poet!"
So help me, my thought at the time was actually, "Ohhh. Canada... they exist too."
The point of the story is, just because you speak English doesn't make it any more likely we'll remember that your country exists. Sorry, Canada. If it helps at all I'm in Texas, so you're not exactly foremost in our thoughts.
Most of my immediate neighbors are blonde, yet I still have no desire to dye my hair. What other people do amounts to squat from the American perspective. It only matters when you have a multinational company that wants to sell liters in the U.S.
And guess what: that's perfectly fine. They do it all the time. The U.S. uses standard AND metric. We buy milk by the gallon and soda by the liter. The supermarket sells bananas by the pound and drug dealers sell cocaine by the gram. Hell, two of my favorite bars serve completely different "pints" of beer. No one is confused because, frankly, there's not a lot of practical overlap to lead to confusion. No one compares milk to soda on an equal-volume basis. You just use whatever units you damn well feel like.
Confusion usually only reigns in technical fields. For example, a naval project may use miles, kilometers, and nautical miles all in the same bit of code. (And frankly, while it's a headache, we should be the most capable members of society when it comes to jumping through these hoops.) But for the average person? It doesn't matter.
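The usual way to jump through those hoops is to pick one internal unit and convert only at the boundaries. A minimal sketch (all the function and constant names here are invented for illustration; the conversion factors themselves are the standard international definitions):

```python
# Sketch: keep every internal distance in meters and convert at the
# edges, so miles, kilometers, and nautical miles never mix silently.

METERS_PER_STATUTE_MILE = 1609.344   # international statute mile (exact)
METERS_PER_NAUTICAL_MILE = 1852.0    # international nautical mile (exact)
METERS_PER_KILOMETER = 1000.0

def to_meters(value, unit):
    """Convert a distance in the named unit to meters."""
    factors = {
        "mi": METERS_PER_STATUTE_MILE,
        "nmi": METERS_PER_NAUTICAL_MILE,
        "km": METERS_PER_KILOMETER,
        "m": 1.0,
    }
    return value * factors[unit]

# A nautical mile is a bit longer than a statute mile:
print(to_meters(1, "nmi") > to_meters(1, "mi"))  # True
```

The point isn't the arithmetic; it's that a codebase which converts explicitly, in one place, can mix all three units without the confusion that bites the average person exactly never.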
First, the headline is a "duh" headline. Of course the lure of robots is the ability to do without humans. That's the whole point -- the very defining characteristic -- of a robot. To automate complicated work.
Second, you assume an all-or-nothing future. That will not be the case. If you have a few more robots and a few more workers, you can drive unit prices down and pick up a few more customers, even without the recently dismissed human workers that can no longer afford to be those customers. That's true at every tier of production, from commodities to luxury goods. This drops the price of labor, and concentrates wages more in the (relatively) few jobs that can't yet be automated. What you see is further stratification of the economy, not collapse.
In your scenario, we're all too poor to keep the economy going, and so the economy will never let us get to that point. In a more realistic scenario, a very large portion of the people are out of work, many more have dropped from middle- to lower-class, and the wealthiest experience no change -- all while robot labor allows, say, 2/3 of the previous customer base to support the same level of profit for a given product.
10% unemployment is such an absurdly high number that it produces a lot of civil unrest when it actually happens. 30% unemployment would mean a complete restructuring of society as we know it. The invisible hand promises the stability of the market, not the stability of our place, or our civilization's place, in it.
It's especially sad to see this from an IEEE publication (even Spectrum).
First, the major unifying concept in Maxwell's equations was the displacement current, a quantity for the changing field in a dielectric with the units of current density. It answers the age-old question: how do you have a current loop when one part (a capacitor) is clearly "broken" and not conducting? Maxwell was the first to answer it with a solid theory. So a better way to write the sentence you quote would be: "Maxwell's equations explain how high-frequency flows of electrons in conductors generate electromagnetic waves, and they were also the very first to explain how an insulating material, where there is no flow of electrons, can still act as part of a circuit." Basic electromagnetics education fail.
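For reference, the displacement current is the second term Maxwell added to Ampère's law:

```latex
\nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}
```

Inside a capacitor's dielectric the conduction current density \(\mathbf{J}\) is zero, but the changing field gives a nonzero \(\partial \mathbf{D} / \partial t\), so the "current" effectively continues right through the gap and the circuit is unbroken.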
Second, and more to the topic: if we pretend there is some sort of "magnetic current carrier" (a magneton), then we can extend Maxwell's equations to cover a hypothetical magnetic current. Pretty much any electric current-flow problem can be restated as a dual magnetic current-flow problem. There are a lot of practical upshots to this -- such as making simulations converge to an answer much more quickly -- but the one most relevant to antennas is that you can show the radiation of an antenna is tied to the conduction gap between its elements. For example, if you have a dipole antenna with elements separated by a width d, you can model that gap as a "cigar band" (an open cylindrical sheet) of magnetic current. For a monopole, you might use a "washer" (a flat ring) of magnetic current between the conducting element and the ground plane. None of this is new; it's been the standard shortcut to the concept for decades, and you don't need recourse to anything in quantum mechanics.
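The extension is just adding a fictitious magnetic current density \(\mathbf{M}\) to Faraday's law, mirroring the electric current in Ampère's law:

```latex
\nabla \times \mathbf{E} = -\mathbf{M} - \frac{\partial \mathbf{B}}{\partial t},
\qquad
\nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}
```

The equations are then symmetric under the duality substitution \(\mathbf{E} \to \mathbf{H}\), \(\mathbf{H} \to -\mathbf{E}\), \(\mathbf{J} \to \mathbf{M}\), \(\mathbf{M} \to -\mathbf{J}\), \(\varepsilon \leftrightarrow \mu\), which is what lets you swap an electric-current problem for its magnetic-current dual.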
I usually lean on the inverse-Godwin theorem: if I can't figure out some way to compare something to the Nazis, Hitler, or the Holocaust, I probably shouldn't be that upset by it.
The first two seasons of TNG were pretty bad, but after that they improved. The big issue they overcame was breaking away from the original series mold.
In the early episodes you can really see how they tried to take original series roles and divide them up among a new crew (Riker as the stand-in womanizer for Kirk, Data as the stand-in for Spock, etc.). It also used a LOT of the conventions of the old show, trying to hold onto that remnant original series audience. We can look back on the omnipotent Q abducting people and making them fight dog-faced Napoleonic soldiers and cringe at how hokey it is. We can look back on the relatively-omnipotent Excalbians abducting people and making them fight Kahless and Genghis Khan with a little help from Abraham Lincoln and get a giddy little thrill. The difference is that TOS had a shoestring budget and was aimed at a more forgiving youth audience, while TNG had a respectable budget (but still hokey scripts) and was aimed at those same people after they had grown up into sophisticated adults.
It took two seasons, but they eventually got over that hurdle and turned into their own show. When asked when he first knew that they "had something" in the show, Patrick Stewart said it was while shooting "The Measure of a Man" (season 2, episode 9). If you think of the question a different way -- "at what point did you realize the job wasn't necessarily shit?" -- then the answer says a lot about the quality of the preceding 33 episodes.
If you think "inane, stupid, and soul-crushing" ends at high school, you've been sheltered.
While the goal of college per se is not to churn out office drones, there is a lot of drone-ery to be done, and someone in college can do worse than to fall into one of those jobs. It's a good fallback plan for, say, the history major who just can't find in-field work. For those people, a college degree proves you can show up every day, do a task of moderate complexity, and meet deadlines reliably. That's also exactly what being anything other than a straight-C student demonstrates.
But most History and Philosophy and Liberal Arts departments around the country do feel (or at least hope) that they are training people who will stick with and contribute to the field. At that point, you can argue that you need some creative skill to break new ground. Unfortunately, opportunities for ground-breaking are foreseeably rare, and the ground isn't going to get broken without the necessary information and tools. Even a kid making a building-block tower needs to be given building blocks. Departments need students who will absorb that information and grow through participation -- which is exactly what C students have failed to do.
For other fields -- such as engineering -- where you can reasonably expect to get a job in the field, and then to flex your creativity once you're there, "innovation" means basic competence coupled with experience. Innovation as people imagine it today -- that the fruits of hard, long work can be cheated out through something cheaper and easier -- is a myth.
I would gladly concede that we need to do more to give our C students real options for becoming productive, prosperous members of society. I just don't think the rest of us are missing out for lack of their "creativity".
That's why I'm waiting for the Windows "Thundercougarfalconbird" edition
I'm curious what kind of cutting edge file management, load balancing, and slide-show-presenting needs are such a challenge that the OS needs to be above 1 GB. It doesn't take that much effort to support people who just want to scroll through thumbnails of their vacation photos. If you have an interesting program -- a 3D video game, a compiler, a simulator -- it will have its own minimum system requirements. And like those programs that have lower requirements, the OS generally scales up (to a point) in capability with better specs.
Two-person home. Two each of cell phones and laptops connected. Two entertainment devices (gaming console and Blu-Ray). I also have another wifi-ready console that I've just never set up for network play. Also one tablet, and one printer. Considering a wifi thermostat.
That's 8 devices without trying, for two users. That's also not counting "sometimes" devices on the whitelist: work laptop, frequent visitors' phones.
1) We recognize that Summer break was never meant to be time off (it's time when you needed all hands in the field and wouldn't have sent your kids to school anyway), and that do-nothing, responsibility-free childhoods are a rather recent human development. However, it's still healthy for kids to have to learn through play and be free to pursue things on their own. They need the break.
2) We recognize that the break puts a burden on parents to find activities, day care, or camps during the summer. However, it also provides a huge block of time for lengthy family vacations, which would otherwise be impossible to schedule, or even costlier because all kids get the same three week-long mini-breaks. This is good for the entire family's health and quality of life.
3) We recognize that other countries are lapping us in education. But we also have to recognize that that has nothing to do with time or money spent per student. We invest more per student than pretty much any other country, yet we get worse results. That's because the fundamental changes in teaching methods that we've made over the past 50 years have been for the worse, while other countries have made changes for the better.
4) No one wants to pay teachers for the nine months of work that they do already. More time means more cost, which no district's taxpayers are going to pay.
Ending Summer break is another costly distraction from the real problems: many teachers are unqualified for lack of training or materials, all teachers now teach mechanically to standardized tests, which distracts from the actual material, and many students are never going to achieve their full potential until we address some very hard, very real social problems first.
For developers, there are some times when the documentation SHOULD be larger than the code. The most important questions for many documentation efforts should be along the lines of "why did we choose this value" and "what values can this never be changed to without breaking something". Undocumented code must always be treated as fragile, because it only gives you the final state of an engineering process. It doesn't convey any of the many small decisions that hemmed in that design. It gives you something that may work, but does not tell you how to build something that will still work in the future if things change. If you give good documentation to a competent programmer, he can probably build something very close to the original program.
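Concretely, the "why" and "never" answers can dwarf the code they describe. A sketch of what that looks like, with every specific value and rationale invented purely for illustration:

```python
# Hypothetical config module where the decision record is larger than
# the code. The code below is one line; the documentation is the part
# that keeps the next maintainer from breaking it.

# Why 4096: it matches the page size on our (hypothetical) deployment
# targets, so one buffer maps to one page and avoids partial-page copies.
# Never below 512: our (hypothetical) wire protocol's minimum frame is
# 512 bytes, and a smaller buffer would split frames across reads.
READ_BUFFER_BYTES = 4096

def frames_per_buffer(frame_bytes=512):
    """How many whole frames fit in one read buffer."""
    return READ_BUFFER_BYTES // frame_bytes
```

Strip the comments and the constant still "works" today, but nothing tells you which changes are safe tomorrow; that's the fragility of undocumented code in miniature.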
"Documentation is bad everywhere" is one of those lies developers tell themselves to help them sleep at night. There are programs out there with outstanding documentation. (For example, as a grad student who had never touched MATLAB before, I was easily able to teach myself in about a week by just scrolling through the help files.) It's just that those programs are rare, and almost none are FOSS.
This makes sense, because involvement in projects is voluntary, and contributors choose where to dole out their time. There are generally no "customers" with a carrot and stick to make the developers sweat about their failures and oversights. It makes sense that almost no one chooses to spend time documenting. Even those who understand that it's a necessary pain don't want to be stuck doing it.
The solutions would have to be institutional. I can't think of a single open-source project I've seen that had something like "decent documentation for new features" as a gating condition for a major release. That kind of cultural change is hard (and unlikely), but it needs to happen if anything is to be accomplished. The only alternative is automated documentation, which doesn't really do anything more than restate the source code in a different form. It's still only useful if the developers are religious about updating their comments, which they never are.
If all else fails, immortality can always be assured by spectacular error. -- John Kenneth Galbraith