When you shop around for hosting, the price per GB can fluctuate wildly. Amazon's EC2 is nearly at the top at $0.12/GB, while Cogent at $5/Mbit (~$0.015/GB) is one of the cheapest for transit/paid traffic.
Even less? How about free, using peering agreements on internet exchanges? That way, providers like Hetzner can sell their bandwidth for even less - something like 5-10TB included and €6.90/TB after that (€0.0069/GB).
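To show where that ~$0.015/GB comes from, a quick back-of-the-envelope conversion; the 30-day month and full, constant utilization of the committed rate are assumptions (real utilization is lower, so the effective price per GB is higher):

// Convert a per-Mbit/s monthly transit price into a price per GB transferred.
// Assumes the committed rate is fully used for an entire 30-day month.
public class TransitCost {
    public static void main(String[] args) {
        double pricePerMbpsMonth = 5.0;                           // USD, Cogent-style commit
        double secondsPerMonth = 30 * 24 * 3600.0;
        double gbPerMbpsMonth = 1e6 * secondsPerMonth / 8 / 1e9;  // ~324 GB per Mbit/s-month
        System.out.printf("%.0f GB per Mbit/s-month -> $%.4f/GB%n",
                gbPerMbpsMonth, pricePerMbpsMonth / gbPerMbpsMonth);
    }
}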
ISPs should just whine less and do their homework. I can understand small ISPs having trouble when leasing lines from the larger ones (the article uses Trimco vs BT as an example), but the main problem is that the larger ISPs promote this "bandwidth is expensive" myth even harder...
And that's exactly how EBS is supposed to be backed up - it continuously saves snapshots to S3: small, cheap incremental backups stored in a 99.999999999% durable storage area. But apparently Amazon messed up the backup copies as well - instead of producing an outdated but valid snapshot, they replied to affected customers with:
A few days ago we sent you an email letting you know that we were working on recovering an inconsistent data snapshot of one or more of your Amazon EBS volumes. We are very sorry, but ultimately our efforts to manually recover your volume were unsuccessful.
Huh? The linked products are beyond horrible compared to any decent and MUCH cheaper AC PSU. Just look at any half-decent review site, like the awarded products at hardwaresecrets.
I won't be paying $280 for a 400W DC PSU with 65% efficiency when I can get a whisper-quiet 500W PSU at 87%+ efficiency for $99 (Enermax Pro87+), or a fanless Seasonic X-400 for $134. The numbers just don't make sense. And yes, these are "honest" wattages - the 400W one actually delivered 600W in overload testing. You do have to do your homework when buying a PSU, but it really isn't that hard nowadays: aim for 80 Plus Gold and you're usually safe.
I don't really care how simple and straightforward a DC system is; if it costs me 2-3x as much to buy and wastes 30% of the input power as heat, count me out.
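To put a number on that waste, a rough running-cost comparison; the average load, the uptime and the electricity price below are my own assumptions, not measured figures:

// Yearly running cost of a 65% vs an 87% efficient PSU at a constant load.
// The 250W load, 24/7 uptime and $0.20/kWh price are assumptions for illustration.
public class PsuCost {
    public static void main(String[] args) {
        double loadWatts = 250;                 // assumed average DC-side load
        double hoursPerYear = 24 * 365;
        double pricePerKwh = 0.20;              // assumed electricity price, USD
        for (double eff : new double[]{0.65, 0.87}) {
            double wallWatts = loadWatts / eff; // what you actually pull from the socket
            double cost = wallWatts * hoursPerYear / 1000 * pricePerKwh;
            System.out.printf("%.0f%% efficient: %.0f W at the wall, ~$%.0f/year%n",
                    eff * 100, wallWatts, cost);
        }
    }
}

At those assumed numbers, the 65% unit pulls roughly 100W more from the wall and costs on the order of $170/year extra - on top of the higher purchase price.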
You have already lost everything. Just edit the login form to mail all the passwords to you and kill the session table. Done. No amount of salting trickery will save you.
The article is kinda stupid too - who in the world still uses a static salt? Most proper frameworks like Django store algorithm$salt$hash in the database, so the developer can switch algorithms at any time (applied on the next login) and use a unique salt value per user. Running SHA1() a hundred times won't turn a polynomial-time problem into an exponential-time one; there will always be a better GPU next year that can create rainbow tables just as easily.
Just save the "password" as a three-part tuple and use unique salts. You'll be safe until they finally get quantum computers working.
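A simplified sketch of that three-part record; PBKDF2 here just stands in for whatever algorithm you pick, and Django's real format additionally stores an iteration count:

import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

class PasswordStore {
    // Stored as one string: algorithm$salt$hash, with a unique random salt per user.
    static String hash(char[] password) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(new PBEKeySpec(password, salt, 100_000, 256))
                .getEncoded();
        Base64.Encoder b64 = Base64.getEncoder();
        return "pbkdf2_sha256$" + b64.encodeToString(salt) + "$" + b64.encodeToString(hash);
    }

    // Re-derive the hash with the stored salt and compare in constant time.
    static boolean verify(char[] password, String stored) throws Exception {
        String[] parts = stored.split("\\$");   // [algorithm, salt, hash]
        byte[] salt = Base64.getDecoder().decode(parts[1]);
        byte[] expected = Base64.getDecoder().decode(parts[2]);
        byte[] actual = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(new PBEKeySpec(password, salt, 100_000, 256))
                .getEncoded();
        return MessageDigest.isEqual(expected, actual);
    }
}

Because the algorithm name travels with each record, you can rehash with something stronger on the user's next successful login, exactly as described above.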
Let me tell you one thing about that: Java isn't the problem. By my definition of feeding the GPU - triangles/sec, fill rate and OpenGL ES objects/sec - Java is just 10% behind a raw C benchmark like GLBenchmark 1.1/2.0. They quoted 880 kT/s; I managed 750 kT/s in non-native code. And to get that, you have to carefully feed the GPU with the right batch sizes, avoid issuing too many state changes, pack things interleaved in the vertex buffer, avoid dynamic point lights, and so on. It isn't as bad as an NDS, but the Snapdragon GPU is quite hard to tame.
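For the curious, this is roughly what I mean by interleaving and batching - a sketch only, not my actual code; it assumes a current GL context inside a GLSurfaceView.Renderer and attribute handles obtained earlier via glGetAttribLocation():

import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

class BatchedMesh {
    private static final int FLOATS_PER_VERTEX = 6;   // x,y,z,r,g,b interleaved
    private final int vboId;
    private final int vertexCount;

    BatchedMesh(float[] interleavedVertices) {
        vertexCount = interleavedVertices.length / FLOATS_PER_VERTEX;
        FloatBuffer data = ByteBuffer.allocateDirect(interleavedVertices.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        data.put(interleavedVertices).position(0);

        int[] ids = new int[1];
        GLES20.glGenBuffers(1, ids, 0);
        vboId = ids[0];
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboId);
        // one upload for the whole batch instead of pushing small arrays every frame
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, interleavedVertices.length * 4,
                data, GLES20.GL_STATIC_DRAW);
    }

    void draw(int positionHandle, int colorHandle) {
        int strideBytes = FLOATS_PER_VERTEX * 4;
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboId);
        GLES20.glVertexAttribPointer(positionHandle, 3, GLES20.GL_FLOAT, false, strideBytes, 0);
        GLES20.glEnableVertexAttribArray(positionHandle);
        GLES20.glVertexAttribPointer(colorHandle, 3, GLES20.GL_FLOAT, false, strideBytes, 3 * 4);
        GLES20.glEnableVertexAttribArray(colorHandle);
        // one draw call per batch, minimal state changes in between
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
    }
}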
The problem with using the GPU is that every context switch requires a complete reinitialization of the GL context - even on a PC, alt-tabbing into and out of fullscreen games takes ages. It's fine when specific applications that need the speed use it directly, but not when going from one activity to another gives you a loading screen.
Animation performance and touch responsiveness? Is that the best he can come up with for such a title? I have no idea what he's talking about - scrolling in the browser works just fine here on a not-so-recent HTC Desire. The only time things break down is when the garbage collector halts everything for a third of a second (see the DDMS/logcat messages), and those pauses are reduced to sub-5ms in the new builds. That's tons more useful than rendering surfaces to quads and drawing them with OpenGL ES, and IMO the Android team made the right decision.
No OS is immune to fragmentation. On a data store disk with ext3 and tons of files in the 5MB range, this is what happened (sudo filefrag *):
rt-01n8vmuqn8xtls6d.w4c: 141 extents found, perfection would be 1 extent
rt-01n9q0j59s1sovam.w4c: 23 extents found, perfection would be 1 extent
rt-01nk9zgmitrsow7g.w4c: 8 extents found, perfection would be 1 extent
rt-01nlrr9aaasuk0yb.w4c: 20 extents found, perfection would be 1 extent
rt-01o3kwc33nhpgqg4.w4c: 41 extents found, perfection would be 1 extent
rt-01o3p9b4x2mfbwem.w4c: 16 extents found, perfection would be 1 extent
rt-01ohtzjkl2z2y3wl.w4c: 17 extents found, perfection would be 1 extent
rt-01orb2yYTsp1vALN.w4c: 1 extent found
rt-01orz1hkb5jzbepv.w4c: 29 extents found, perfection would be 1 extent
rt-01q9x02lltcvogr1.w4c: 62 extents found, perfection would be 1 extent
rt-01qq34rl6exztyx3.w4c: 17 extents found, perfection would be 1 extent
rt-01qrz236bvnim44i.w4c: 14 extents found, perfection would be 1 extent
Solution? None. Just add more drives. "Sequential" reads are now at 15MB/sec if you balance the load over the RAID 1 array; it isn't too bad, but if it were an issue I'd take NTFS, with its safe and secure online defragmentation API, over Linux any time.
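If you want to check your own data store, a small wrapper that runs filefrag (from e2fsprogs, the same tool used above) over the files you pass in and averages the extent counts - it needs the same root rights as the sudo example:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FragReport {
    public static void main(String[] args) throws Exception {
        List<String> cmd = new ArrayList<>();
        cmd.add("filefrag");
        for (String file : args) cmd.add(file);
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();

        // filefrag prints lines like "name: 141 extents found, perfection would be 1 extent"
        Pattern line = Pattern.compile("^(.+):\\s+(\\d+) extents? found");
        long files = 0, extents = 0;
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String s;
            while ((s = r.readLine()) != null) {
                Matcher m = line.matcher(s);
                if (m.find()) {
                    files++;
                    extents += Long.parseLong(m.group(2));
                }
            }
        }
        p.waitFor();
        if (files > 0) {
            System.out.printf("%d files, %.1f extents per file on average%n",
                    files, (double) extents / files);
        }
    }
}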
Too bad the price difference between them and the others is much too large. I'm not talking about tens of percent, but 2 cents per 1,000 views vs the 50 cents we currently receive for US traffic - and for international traffic, divide that by 5 to 10. Advertising revenue is bad enough already; unless you serve millions of pages a month, you're not going to break even. Cutting it by well over 90% on top of that is plain suicide - it's probably more effective to remove the ads entirely and add a donation button, if you can take that kind of pay cut.
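If that sounds exaggerated, plug in some numbers; the monthly page-view figure below is made up purely for illustration:

// Monthly ad revenue at two CPM levels; the traffic figure is an assumption.
public class AdRevenue {
    public static void main(String[] args) {
        double pageViewsPerMonth = 1_000_000;      // assumed traffic
        double[] cpmCents = {50, 2};               // 50 ct vs 2 ct per 1,000 views
        for (double cpm : cpmCents) {
            double dollars = pageViewsPerMonth / 1000 * cpm / 100;
            System.out.printf("%.0f ct per 1,000 views -> $%.0f/month%n", cpm, dollars);
        }
    }
}

At a million views a month that's $500 vs $20 - before the international traffic is divided by another 5 to 10.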
I really would love to support them, but advertisers just do not want to advertise internationally with the same ad. Even brands like Dell have 30 different versions of their ads, one for each country, and depending on where the visitor is, they get the local version with prices in their own currency and the text in their own language - it simply works better that way. If you show me US prices for Dell, I still have no idea what the final price is in euros after import duties, VAT and the price difference between Dell NL and Dell US, so the ad is useless to 70% of the visitors. Project Wonderful can never achieve this with their model. This internationalization is one of the main reasons editors have lost control over the advertisements on their sites - there's just no way you (or anyone) can review thousands of ads each day...
I hate how the ads market works and I'd love to see a fix for it, but Project Wonderful isn't it. The market is completely controlled by the advertising networks, and it's hell for us independent publishers; we just get a check every month, and there's nothing we can do to influence it.
I would love to see such a feature; it would make life a lot easier for everyone hosting an ad-revenue-dependent site. The standard Slashdot answer of NoScript/Adblock/hosts file doesn't solve anything - there will always be users who don't mind advertisements as much as you do, and it's our job to protect them from harm.
Yes, I run a site that has ad revenue. No, I don't deal directly with the scareware crowd; I sell my space to Google, Right Media, AOL, etc. But if they make a mistake and a user gets served a bad ad, I'd love to know which network it came from, so I can demand they take that ad down ASAP - and if it keeps happening, I'll take my business elsewhere.
But browsers just lack that information at the moment, so to report an ad, we ask our users to follow the procedure below:
It will contain a lot of useless info, but somewhere in between, there's the magical <script> tag plus the generated (=bad) content for verification.
The decision doesn't have to be logical; it was unanimous.