People value a stable government. They won't stop paying taxes until they fear their government more than the anarchy that would replace it. They don't fear this enough.
My experience is similar. I have appreciated how easy it is to work with. I point one domain at my home server, a sub-domain of that at Google AppEngine and my other domain at Google Sites.
Your needs will determine who the best host is for you. Here's what works for me:
- Self hosting: this allows me to build complex things and access very large amounts of data at no per-month cost, but bandwidth cannot go too high without causing a problem
- Google Sites http://www.google.com/sites/ov... : Free hosting for basic content
- Google App Engine https://support.google.com/a/a... : Scales from small stuff to big stuff; pricing ranges from free to pretty widely variable.
Thanks. You may be right, but I was certainly wrong; I was actually thinking of the Netflix vs. Verizon issue:
Verizon has confirmed that everything between that router in their network and their subscribers is uncongested – in fact has plenty of capacity sitting there waiting to be used. Above, I confirmed exactly the same thing for the Level 3 network. So in fact, we could fix this congestion in about five minutes simply by connecting up more 10Gbps ports on those routers. Simple. Something we’ve been asking Verizon to do for many, many months, and something other providers regularly do in similar circumstances. But Verizon has refused. So Verizon, not Level 3 or Netflix, causes the congestion. Why is that? Maybe they can’t afford a new port card because they’ve run out – even though these cards are very cheap, just a few thousand dollars for each 10 Gbps card which could support 5,000 streams or more. If that’s the case, we’ll buy one for them. Maybe they can’t afford the small piece of cable between our two ports. If that’s the case, we’ll provide it. Heck, we’ll even install it.
The Comcast deal may be entirely different, I have little doubt the technical aspects were at least partially different, but I suspect the motivations were the same.
See, that's what they *did*, and that's what pushed this change. Netflix didn't want to pay to put rack space in because it costs more; that raises their prices, and their customers don't care about latency at all. A half second is huge in internet response times, but customers couldn't care less if their movie took an extra half second to start. Rather than give Netflix the bigger connection it needed to make its customers happy, even when Netflix offered to pay for it, Comcast refused. That way they could force Netflix to pay Comcast extra money in order for Comcast customers to get decent Netflix service.
Your average consumer believes that the bandwidth they pay for each month reflects how fast their ISP will carry traffic to them. Comcast realized that they could sell that idea to the consumer and then not provide it and the average consumer wouldn't know or blame them. Then they could demand money from content providers.
We do want CDNs, but we want them provided because they improve service that people care about, not because ISPs refuse to give their customers sufficient access to content providers in order to make more money.
The courts have recently been unkind to software patents. Google has lots of money for lawyers and they could spend less on lawyers than they're paying Microsoft if some (any?) of the patents were to be invalidated. Google has agreements with MS that may hinge partially on not going to court with them. However, a win for Kyocera could save them mucho dinero.
Google may be able to support Kyocera without breaking their agreements and it could be a big win for them without jeopardizing anything except money. They have money to spare.
Not so much "demand" as "cite."
It helps if you're humming A-Team theme music while you read it.
In the 1980's, a think tank with access to classified information and all the data they could put their hands on about oil and food calculated that when US oil access decreased to a certain threshold, there would be a cycle of problems ending in widespread starvation. They also realized that it would be possible to minimize the starvation deaths if enough land and equipment were dedicated to corn production, but at the same time realized that there was no way the market profit margins would entice anyone to make the investments. So they had the necessary secret meetings, and ethanol fuel additives were the result; essentially they created a government incentive to ramp up the capability to produce food in an environment where the food isn't needed.
That's my guess anyway.
Or else, you know, it's just government bureaucracy making poor decisions, but that seems unlikely.
Thank you for some perspective. I've been reading the other posts and I've been just a little disgusted by the entitlement attitude throughout. I've worked for a company that went under, worked for a division that was eliminated, worked for a company that couldn't pay me for a while and been fired for problems that weren't my fault (that's four different employers.) It sucks, but none of them owed me a job. I'm not owed a job even now when I feel I'm doing great work for the company that employs me.
I was very close to writing a snarky post.
Your comment reminded me how much it sucks to wonder how you're going to get by, what you're going to do to take care of your children and if you'll ever get back to where you were. IBM may need to do this; they've been slowly building to an implosion for decades. I'd love to have IBM come back. I root for companies that can come back from the brink of oblivion, like Yahoo is, like Microsoft is trying to, and like Radio Shack has failed to manage. I hope that in ten years, when my children are telling me about how cool IBM is, I'll be able to say that there was a time it looked like they were doomed before they turned it around, however painfully.
To those who have to find new jobs: my heart goes out to you, and I hope I get to work with you some day when we can both look back on this as the point when things started to get better.
Yeah, I know, that's funny and yes, for a good three seconds, I had a moment of incoherent and dumbfounded shock at the idea someone could be seriously saying that. Then I saw the moderation and realized I'd been had. I paused for a second and realized I had some actual experience that wasn't so far off.
There was a time I liked VMWare. I used it until I discovered how much better Xen performed for me. I was a fan of XenSource until they were taken over by Citrix. When I took a job where Microsoft was the standard (no kidding, the boss sat me down and gave me the lecture my first week for daring to use VNC instead of MS Remote Desktop), I learned to use Microsoft virtualization instead. This was before Hyper-V, and it... well, let's just say it was a hard acclimatization, so when I needed something that actually worked well, I convinced them that VMWare was a big enough enterprise player that we could use it where MS just couldn't do the job. That didn't mean I got a budget, of course; it just meant I could use the free version. It wasn't great, but it was good enough. IE worked with it, but keeping IE patched meant that IE stopped working, so now I had a system that couldn't work with anything but outdated and insecure software. Long story short, until I retired that system years later, I had portable Firefox 2 to run the interface.
I still don't love Hyper-V, but it has performed better than VMWare's free crap, and even if it still doesn't do some things (seriously, when will they enable USB access for clients?), at least I don't have to keep ancient browsers around to manage it. I miss Xen and still don't think KVM is as good. For that matter, I miss the Phoenix browser. The best thing that could have happened to the Mozilla browser was to throw away all the crap that kept it from doing the one thing it was supposed to do best. I will appreciate it if Spartan is even half the improvement Phoenix was over Mozilla. I won't be surprised if, in ten more years, I'm writing a comparison of how both started out with noble goals and decent performance before they were killed by the same loss of focus by their parent companies.
Andrew Flinchbaugh was approached by NJ police and ordered to give up his camera, but he recorded the incident on his mobile phone, and that recording has now gone viral. They did give him his camera back, but not without arresting him and not without going through the photos first, something that should have required a search warrant they did not have. At one point he says that if they take his camera, they will have a lawsuit on their hands. It will be interesting to see if Mr. Flinchbaugh is true to his word.
Why are you bringing up the average user when he was talking about the end user who has a strong reason to keep something patched? That's comparing a Mint home user to someone running the distribution upgrade servers.
If you are in charge of managing an important system or network, then you can either fix the problem yourself, have your programming team fix it and commit the fix back to the upstream vendor or you can potentially hire the work out. Even if you are an average end user, you could actually fix it if you were willing to put in the work, however unlikely that scenario might be.
As for remembering, is it harder to remember "username" and "password" or "usernamepassword"? It's the same. You just don't press return in between them.
Logically? No. But in practice, I support both approaches and yes, for no obvious logical reason, it makes a huge difference.
Some certainly do and that bothers me. It shouldn't be that hard to set the MITM proxy to reject invalid certificates and provide the reason for the rejection to the users, but I haven't seen it done right.
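A minimal sketch of what "done right" could look like, assuming a Python-based proxy; the badssl.com hostname is just a well-known deliberately-broken test host, not anything from the original discussion:

```python
import socket
import ssl

def check_upstream_cert(host: str, port: int = 443) -> str:
    """Return 'ok', or the verification failure the proxy should surface to the user."""
    # An intercepting proxy done right keeps full verification enabled for
    # its *upstream* connection instead of silently accepting anything.
    ctx = ssl.create_default_context()  # CERT_REQUIRED + hostname checking
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return "ok"
    except ssl.SSLCertVerificationError as e:
        # This reason is what should be passed through to the user,
        # rather than a generic connection error.
        return f"rejected: {e.verify_message} (code {e.verify_code})"

if __name__ == "__main__":
    # expired.badssl.com intentionally serves an expired certificate.
    print(check_upstream_cert("expired.badssl.com"))
```

The point is only that the default context already rejects invalid certificates and names the reason; the proxy just has to forward that reason instead of swallowing it.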
You think Sony did?
I doubt the "most" in your statement.
How is an 8-letter username plus an 8-letter password harder or easier to crack than a 16-letter password?
It isn't easier to crack, but people remember usernames more easily, so you get people who will enter 16 characters instead of eight. The validating server can treat them as separate lookups or not without impacting the efficiency of brute force attacks. The advantage of using multiple entries is that you end up with more characters that have to be guessed correctly, which is a compound effect, so adding a PIN or a multiple choice question compounds it further and isn't pointless at all.
Say you are trying to brute force my slashdot password and it's eight characters. That's 7213895789838336* possible combinations you have to work through to target one user, but I'm user 166417, which means you'd be 166417 times more likely (at least) to get illicit access if I weren't using a separate username.
Now, if my username were hidden and combined with the password entry and had to be eight characters, you'd have 52040292466647269602037015248896 potential combinations, which is obviously harder to crack, but you'd sacrifice functionality for that trade off and 7213895789838336 is a reasonable number of permutations for the level of security required. In reality, I'm not limited to eight characters so the real number is even higher.
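Those counts are plain exponentiation; a quick sanity check, assuming the 96-character candidate set the figures imply (roughly the printable ASCII range):

```python
# The quoted figures correspond to 96 candidate characters per position.
ALPHABET = 96

password_only = ALPHABET ** 8   # visible username, 8-character password
combined = ALPHABET ** 16       # hidden 8-char username + 8-char password

print(password_only)  # 7213895789838336
print(combined)       # 52040292466647269602037015248896

# Hiding the username squares the search space the attacker must cover.
assert combined == password_only ** 2
```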
Now, you have a valid point if you say that 16 characters would be a better length for passwords, but if you required that, there would be far fewer people who would sign in and make comments, which would degrade the value of the whole system.
* - I know there is additional math that can be done here, not limited to but certainly including the tendency of people to use words and pseudo words in their passwords. I've read the manuals and brute force cracking articles too but I'm not getting paid to figure it out so my motivation to get a more accurate number is low.