http://www.sublimetext.com/for... gives you a good start. But really, the best way to find out is to fire Sublime up and see; much of it is pretty self-explanatory (although having the vi mode plugin disabled by default can be a bit jarring until you figure out how the plugin system works).
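If memory serves, the vi mode in question is Sublime's bundled "Vintage" package, which ships disabled. A sketch of the user-settings change that turns it on (package name and setting assume a stock Sublime Text install):

```json
{
    // Sublime disables its vi-mode package by default by listing it here;
    // an empty list re-enables "Vintage" (Preferences > Settings - User).
    "ignored_packages": []
}
```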
Why not just use ed? It should already be there....
vi aka vi
It predates emacs (esc alt meta key) and it is on all Unix systems. emacs is still spotty and sometimes you need to install it -- vi is always there.
Not true -- I've been in many situations where 'which vi' has returned nothing.
Now ed is always there -- any environment that doesn't contain ed is not worth being in. It's been the default since 1971, unlike newcomers such as vi that didn't show up until 1976.
Off his lawn?!?!?
He said nano -- the bastard child of pico.
Nano is the notepad of the POSIX world -- it eats line endings, messes up indentation, and makes a mess of config files -- just like pico did back in the day.
I still remember using elm with pico integration; it was great for writing an email, horrible for coding. I used emacs for that, until it started getting too unwieldy.
Now if I'm in a lightweight environment, I'll use ed. If I'm in a graphical environment, I'll use Sublime. If I'm in a terminal, I'll use vim.
I use Sublime with vim bindings turned on. It has features I use every day that vi/vim doesn't have, and doesn't get in the way of my vim muscle memory. It also doesn't get in the way of my ed muscle memory, nor my Mac muscle memory. In fact, pretty much whatever legacy text editor my muscle memory thinks I'm using, Sublime will interpret the commands correctly and let me get the job done.
I've used all the listed editors, and eventually settled on the vim/Sublime combo, as they accomplish everything the others do, and then some.
And to think that 20 years ago, I was a diehard emacs user. I liked my macros, but Sublime can do all that too; it just prefers Python over Lisp.
Indeed -- and there's also the issue of short-term gain vs. long-term gain. People will hand over their Facebook passwords in exchange for chocolate. Just because an individual is short-sighted shouldn't mean that their entire social community has to suffer in the long term because of it.
I'd disagree with your first part: there's not much difference between one President and another when you come right down to it; they are heavily restricted in their actions by policy makers. Plus, your individual vote for the president has almost no effect on the result, compared to municipal elections, where one interest group can sway the entire outcome. Mayors and aldermen have huge amounts of leeway, and their decisions affect your life directly.
I'd rather fraud be discovered in a presidential election than in a mayoral one -- though if a mayor were fraudulently elected, I'd want that found out too. Either kind of discovery is much easier with offline voting than with online voting.
I hate it when people vote against something that makes life easier out of privacy and security concerns...
If you have viruses on your machine, that's your own darn fault, why penalize everybody for your stupidity?
The second half has already been responded to, so I'll tackle this bit.
If you have malware on your machine, that's likely your own fault (if only through ignorance). Unfortunately, everyone on your network, in your social network, and along the malware's distribution chain is penalized for your stupidity.
So let's back up one level...
Online voting makes life easier, agreed.
Unfortunately, abuse of online voting doesn't just affect the individual voter; it affects everyone in the municipality.
You can't have it both ways: either the upstream has to think of the privacy and security concerns, or the end operator (citizen) does.
As "online" implies global, the pool of potential abusers changes. With mail-in voting, abuse is largely limited to people who are actually part of the municipality, plus a few external interested parties; online, it's open to the entire world. If, say, 0.001% of any population is inclined to abuse the system for some reason, that's 70,000 likely actors out of 7 billion people, versus 0.15 of a person locally.
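The back-of-envelope arithmetic above can be checked in a few lines (the 0.001% rate is the commenter's illustrative figure, not a measured one):

```python
# Exposure argument: the same (assumed) abuse rate applied to two pools.
abuse_rate = 0.00001            # 0.001% of a population inclined to abuse

online_pool = 7_000_000_000     # "on the Internet" exposes the vote to everyone
local_pool = 15_000             # a small municipality's electorate

print(abuse_rate * online_pool)  # ~70,000 potential bad actors
print(abuse_rate * local_pool)   # ~0.15 of a person -- effectively nobody
```

Same rate, five orders of magnitude more exposure: that's the whole point about limiting scope.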
The main way to ensure best security is to limit scope: only expose a function to the actors that need to access it. "On the Internet" does the inverse.
And that's just one reason it's a bad idea; there are plenty of others. All of them have solutions, but all the solutions are going to run afoul of statistics when you move a system that's been exposed to 15,000 people into an arena where it's exposed to 7 billion people.
You forgot: pulling off to the side of the road to let emergency vehicles pass.
In most parts of the world, that first one could mean that they deliver the lobsters to the room in a live state.
Actually, that's not how it has always been. The "magic number" at one point was 13 years old; in recent history (the past 50 years) it tended to be 21. Over the past decade or so, it has crept up to 30. This goes for all the things mentioned: cars, jobs, marriage, kids.
The reason? Should be obvious: baby boomers. They're keeping their money as they retire, and often spending it out-of-area. As a result, the job vacancies that do exist are filled by temps, because that's the only way companies can afford to sell things to the boomers at the prices they expect. If they raise prices and hire full-time employees, the boomers will go elsewhere to spend their disproportionate amount of money.
Once the boomers start to die off, we'll start to see the pendulum shift the other way, as local demand rises, job vacancies rise, and the value of local skilled labor finally rises. Of course, that's another 15 years off, by which point the millennials will be the establishment and it'll be the next generation that gets the benefit of having an earlier workforce transition.
In Canada, self-service checkouts have the impulse items surrounding the lineup area for the kiosks, similar to how they're positioned in the lineups for actual cashiers.
I'm no millennial, but I almost always use a self-service checkout at stores that have functioning* kiosks. I spent my time as a kid doing those sorts of jobs, and tend to be better/faster with the scanners than a checkout clerk -- so why spend 5 minutes waiting in line and an interminable time waiting for the clerk to process all my items, when I could breeze through a kiosk with no lineup in 30 seconds? I still talk to the store staff on the way out, but no longer have to lose 20 minutes of my day.
However, in Canada, there are still plenty of stores with buggy kiosks -- one common scenario is kiosks running Windows Embedded with a small hard drive; the transaction logs have to be manually collected and cleared, and the longer that's left undone, the slower the interface becomes, until it eventually goes unresponsive.
The other issue is places that haven't calibrated their scale response time correctly, so the kiosk keeps flagging up errors if you're not quick enough to drop your item (whether it be an over-sized item or a carton of eggs) onto the scale.
One of my local vendors also recently changed ownership, and the new owner's policy is that every credit card signature requires employee verification -- so if you use an American Express at one of their kiosks, the thing starts blaring its alarm as soon as you sign the pad, and then you have to stand and wait until some clerk has time to go to the main kiosk console and hit "accept". (They never visually inspect the signature, as they have no reason to -- if it doesn't match, there's no step 2; they can't decline the purchase under local laws and cardholder agreements.)
But as I said, I still opt for the kiosk when possible -- if it's a small purchase, I can be in and out of the store in a matter of minutes; if it's a large purchase, I save the difference between the time it takes me to process the items and the time it takes a worn-out cashier.
The only people that don't benefit here are the extra staff that act as storage help and double as checkout overflow help when things get busy. But there are other jobs to do these days that pay better for the same skill set.
I used to have an account on DEC's Alpha test servers, and remember testing out VAX/VMS back in the day.
Seeing OpenVMS being pushed for Itanium products though... that's running one doomed OS on another doomed and believed extinct platform.
I don't really see where they're going to make a profit on this, at least enough to survive until they can port it over to a modern x86 architecture.
After they do THAT, I can see it being viable, especially if they provide legacy binary support. There's still a lot of iron running VMS, and most of it, while necessary infrastructure, is running on hardware that I can't imagine can last much longer.
But they'd better get the port and compatibility layer rock solid before they try selling it, or we're in for some painful times (brownouts, water service outages, etc).
QuickTime was actually an excellent multimedia container platform that integrated what Apple had learned from HyperCard with a multimedia delivery stack. It was technologically way more advanced than Shockwave and the other competitors of the time.
Unfortunately, the bastard child that was delivered to most Windows web browsers was actually Sorenson-codec video in a QuickTime wrapper, pushed through the QuickTime Plugin. This plugin is what everyone has a deep loathing for. To make QuickTime work on Windows, Apple had to port a major portion of the Mac OS API, since QuickTime integrated deeply into the OS's system calls. So what Apple did was create a stripped-down Mac OS that ran inside a browser plugin. Every time you loaded a page that required the QuickTime plugin, your web browser booted Mac OS, which loaded the QuickTime component handlers, which grabbed the container and loaded the data stream. That was run through the codec handler, which passed the result back to the OS, which used the QuickDraw renderer to blit the result to a virtual screen and audio device. THEN the virtual screen was passed back to the plugin handler code, and from there to DirectX, as was the audio.
So you ended up with a combination of the worst aspects of Windows, the worst aspects of browser plugin architecture, and the worst aspects of the Mac OS (including bad memory management and cooperative multitasking) all being experienced in one place. No single point of failure here; there were so many potential points of failure that you generally hit at least one with each loaded instance of the plugin.