When you shop around for hosting, the price/GB can fluctuate wildly. Amazon's EC2 is near the top at $0.12/GB, while Cogent at $5/Mbit (~$0.015/GB) is among the cheapest for transit/paid traffic.
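To sanity-check that transit figure, here is a small sketch converting per-Mbit monthly transit pricing into a per-GB price, assuming full utilization of the committed rate over a 30-day month (the class and method names are my own, for illustration only):

```java
public class TransitPrice {
    // Convert $/Mbit/s/month transit pricing to $/GB, assuming the
    // committed rate is fully used for a 30-day month.
    static double dollarsPerGB(double dollarsPerMbit) {
        double secondsPerMonth = 30 * 24 * 3600;                  // 2,592,000 s
        double bytesPerSec = 1e6 / 8;                             // 1 Mbit/s = 125,000 B/s
        double gbPerMbitMonth = bytesPerSec * secondsPerMonth / 1e9; // = 324 GB
        return dollarsPerMbit / gbPerMbitMonth;
    }

    public static void main(String[] args) {
        // $5/Mbit works out to roughly a cent and a half per GB
        System.out.printf("%.4f%n", dollarsPerGB(5.0)); // prints 0.0154
    }
}
```

So $5/Mbit buys about 324 GB of transfer per month, or roughly $0.015/GB, matching the figure above.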
Even less? How about free, using peering agreements on internet exchanges? This way, providers like Hetzner can sell their bandwidth for even less, like 5-10TB included and €6.90/TB after (≈ €0.0069/GB).
ISPs should just whine less and do their homework. I can understand small ISPs having trouble when leasing lines from the larger ones (the article uses Trimco vs. BT as an example), but the main problem is that the larger ISPs are the ones promoting this "bandwidth is expensive" myth the hardest...
And that's exactly how EBS is supposed to be backed up: it saves snapshots all the time to S3, small and cheap incremental backups stored in a 99.999999999% durable storage area. But apparently, Amazon messed up the backup copies as well. Instead of producing an outdated but valid snapshot, they replied to affected customers with:
A few days ago we sent you an email letting you know that we were working on recovering an inconsistent data snapshot of one or more of your Amazon EBS volumes. We are very sorry, but ultimately our efforts to manually recover your volume were unsuccessful.
Huh? The linked products are beyond horrible compared to any decent and MUCH cheaper AC PSU. Just look at any half-decent review site, like the award-winning products at HardwareSecrets.
I won't be paying $280 for a 400W DC PSU with 65% efficiency when I can get a whisper-silent 500W PSU at 87%+ efficiency for $99 (Enermax Pro87+), or a fanless Seasonic X-400 for $134. The numbers just don't make sense. And yes, these are "honest" wattages; the 400W one actually delivered 600W in overload testing. You do have to do your homework when buying a PSU, but it really isn't that hard nowadays: aim for 80 Plus Gold and you're usually safe.
I don't really care how simple and straightforward a DC system is: if it costs me 2-3x as much to buy and wastes 30% of the input power as heat, count me out.
You have already lost everything. Just edit the login form to mail all the passwords to you and kill the session table. Done. No amount of salting trickery will save you.
The article is kind of stupid too: who in the world still uses a static salt? Most proper frameworks, like Django, store algo$salt$hash in the database, so the developer can switch algorithms at any time (applied on the next login) and use a unique salt per user. Running SHA1() a hundred times won't turn a polynomial-time problem into an exponential one; there will always be a better GPU next year that can build rainbow tables just as easily.
Just save the "password" as a three-part tuple with a unique salt per user. You'll be safe until they finally get quantum computers working.
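A sketch of what that algo$salt$hash tuple could look like in Java. SHA-256 is used purely for illustration (as noted above, a fast digest alone won't save you; the point here is the self-describing format and the unique random salt per user), and all class and method names are made up:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;

public class SaltedHash {
    // Store as "algo$salt$hash" so the algorithm can be swapped later:
    // verify() reads the algorithm name back out of the stored value.
    static String hashPassword(String password, byte[] salt, String algo) throws Exception {
        MessageDigest md = MessageDigest.getInstance(algo);
        md.update(salt);                                  // unique salt per user
        byte[] digest = md.digest(password.getBytes("UTF-8"));
        return algo + "$" + toHex(salt) + "$" + toHex(digest);
    }

    static boolean verify(String password, String stored) throws Exception {
        String[] parts = stored.split("\\$");             // [algo, salt, hash]
        byte[] salt = fromHex(parts[1]);
        return hashPassword(password, salt, parts[0]).equals(stored);
    }

    static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    static byte[] fromHex(String s) {
        byte[] out = new byte[s.length() / 2];
        for (int i = 0; i < out.length; i++)
            out[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
        return out;
    }

    public static void main(String[] args) throws Exception {
        String stored = hashPassword("hunter2", newSalt(), "SHA-256");
        System.out.println(stored);                       // SHA-256$<salt>$<hash>
        System.out.println(verify("hunter2", stored));    // prints true
    }
}
```

Because the salt is random per user, two users with the same password get different stored values, which is exactly what kills precomputed rainbow tables.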
Let me tell you one thing about that: Java isn't the problem. By my definition of feeding the GPU (triangles/sec, fill rate and OpenGL ES objects/sec), Java is just 10% behind a raw C benchmark like GLBenchmark 1.1/2.0. They quoted 880 kT/s; I managed 750 kT/s in non-native code. And to get that, you have to carefully feed the GPU with the right batch sizes, avoid issuing too many state changes, pack things interleaved in the vertex buffer, avoid dynamic point lights, etc. It isn't as bad as an NDS, but the Snapdragon GPU is quite hard to tame.
The problem with using the GPU is that every context switch requires a complete reinitialization of the GL context. Even on a PC, alt-tabbing into and out of fullscreen games takes ages. It's fine when specific applications that need the speed use it directly, but not when going from one activity to another gives you a loading screen.
Animation performance and touch responsiveness? Is that the best he can come up with for such a title? I have no idea what he's talking about; scrolling the browser works just fine here on a not-so-recent HTC Desire. The only time things break down is when the garbage collector halts everything for a third of a second (see the DDMS/logcat messages), and those pauses are reduced to sub-5ms in the new builds. That's tons more useful than rendering surfaces to quads and drawing them with OpenGL ES, and IMO, the Android team made the right decision.
You do NOT use java.nio (like Jetty's SelectChannelConnector) for maximum throughput. You use it to handle persistent connections, like all those long-polling AJAX requests that return on an event or time out after a minute. This article is like recommending Apache, with its hard limits on how many requests it can serve concurrently, over newer asynchronous servers like Nginx for serving static media with keep-alive enabled.
The slides even mention the C10K problem, but what they don't do is say when to use either technology: async IO for concurrency and endless scaling, synchronous IO for pushing a 10G Ethernet link to its limits. No wait, the nio setup can do that too; 700MB/s (5.6Gbit/s) per core on 2008 hardware should be enough to max out anything you can buy now. It's great that synchronous IO can hit 1GB/s, a whopping 30% faster, but is that useful? I'd say no.
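To make the async side concrete, here is a minimal java.nio selector loop of the kind such servers are built on: one thread servicing all sockets, with readiness events instead of a thread per connection. This is a hypothetical sketch (not Jetty's actual SelectChannelConnector code); it echoes a single connection on the loopback interface so the whole thing is self-contained:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioEcho {
    // Accepts one client, echoes its bytes back, and returns what the
    // client read. One selector thread handles accept + read + write.
    static String echoOnce(String msg) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        // Plain blocking client in a helper thread, for contrast.
        StringBuilder reply = new StringBuilder();
        Thread client = new Thread(() -> {
            try (Socket s = new Socket("127.0.0.1", port)) {
                s.getOutputStream().write(msg.getBytes("UTF-8"));
                s.shutdownOutput();                          // signal EOF
                byte[] b = new byte[1024];
                int n;
                while ((n = s.getInputStream().read(b)) != -1)
                    reply.append(new String(b, 0, n, "UTF-8"));
            } catch (IOException e) { throw new UncheckedIOException(e); }
        });
        client.start();

        ByteBuffer buf = ByteBuffer.allocate(4096);
        boolean done = false;
        while (!done) {
            selector.select();                               // block until ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel ch = (SocketChannel) key.channel();
                    buf.clear();
                    int n = ch.read(buf);
                    if (n > 0) {                             // echo back
                        buf.flip();
                        while (buf.hasRemaining()) ch.write(buf);
                    } else if (n == -1) {                    // client EOF
                        ch.close();
                        done = true;
                    }
                }
            }
        }
        client.join();
        server.close();
        selector.close();
        return reply.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoOnce("ping")); // prints ping
    }
}
```

A real server would keep this loop running forever with thousands of registered channels; that is the whole point — idle long-polling connections cost a selector key, not a thread.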
For most uses, you don't touch either API directly. Let's be honest here: writing highly concurrent software is hard, so why reinvent the wheel when off-the-shelf software can do it better? You use Jetty and choose between the SelectChannelConnector and the SocketConnector, or choose between Apache and Lighttpd/Nginx depending on the traffic pattern. What you do write is the bit that accepts a whole HTTP request and returns an HTTP response; everything before and after is magic.
Unless you're a file server, each 50KB HTTP response requires enough work that you'll run out of CPU or disk IO long before you hit even the 100Mb/s ceiling of most rack switches. Even if your app is fast, 16 cores x one request per 100ms per core x 50KB comes to only about 64 Mbit/s. Not 5600.
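That back-of-envelope figure can be checked in a few lines (the class and method names here are made up for illustration):

```java
public class ThroughputEstimate {
    // cores * (1000/msPerRequest) requests/s * responseBytes * 8 bits,
    // expressed in Mbit/s.
    static double mbitPerSec(int cores, int msPerRequest, int responseBytes) {
        double requestsPerSec = cores * (1000.0 / msPerRequest); // 16 * 10 = 160/s
        return requestsPerSec * responseBytes * 8 / 1_000_000.0;
    }

    public static void main(String[] args) {
        // 160 req/s * 400 kbit per 50KB response = 64 Mbit/s
        System.out.println(mbitPerSec(16, 100, 50_000)); // prints 64.0
    }
}
```

Three orders of magnitude below the 5.6 Gbit/s the socket layer can deliver, which is the whole point: the sockets are never the bottleneck.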
But if you need to scale in concurrent client count, there's no way around async IO. The latest name to watch is Netty. In "Plurk Comet: Handling 100,000+ Concurrent Connections with Netty", it scales to 100,000 concurrent connections on a quad-core server at 20% CPU load.
Just stop worrying about sockets already, and start worrying about your SQL server suffering a meltdown. Even if you do manage to grow into the next Facebook, synchronous IO won't save you from deploying 30,000 servers; it's the application code that's slow. Zero-copy, one-copy, or "string concatenation style twenty copies response building" socket writes don't matter at all: memcpy is cheap compared to a few lines of interpreted code, servers are cheap compared to developers, and never mind the cost of the programming gods giving these presentations.
Also, no matter how much power they claim this beast saves, it's nothing compared to virtualizing a rack or more onto 3 ESX hosts. With this Atom "solution", you'll just have tons of nodes sitting at 0.00 load again like before, instead of having all your physical servers at a load you're comfortable with. To make things worse, if you run a job on the Atoms, it runs in slow motion because of the "performance" of that CPU; you cannot burst to the full 2/4/8 vCPUs allocated, at 3x the speed plus turbo, when needed. It's *much* harder to use 512 slow CPUs than fewer, faster ones.
And never mind running any commercial software on it: per-core/per-socket licensing will make it impossible.
No OS is immune to fragmentation. On a data-store disk with ext3 and tons of files in the 5MB range, this is what happened (output of sudo filefrag *):
rt-01n8vmuqn8xtls6d.w4c: 141 extents found, perfection would be 1 extent
rt-01n9q0j59s1sovam.w4c: 23 extents found, perfection would be 1 extent
rt-01nk9zgmitrsow7g.w4c: 8 extents found, perfection would be 1 extent
rt-01nlrr9aaasuk0yb.w4c: 20 extents found, perfection would be 1 extent
rt-01o3kwc33nhpgqg4.w4c: 41 extents found, perfection would be 1 extent
rt-01o3p9b4x2mfbwem.w4c: 16 extents found, perfection would be 1 extent
rt-01ohtzjkl2z2y3wl.w4c: 17 extents found, perfection would be 1 extent
rt-01orb2yYTsp1vALN.w4c: 1 extent found
rt-01orz1hkb5jzbepv.w4c: 29 extents found, perfection would be 1 extent
rt-01q9x02lltcvogr1.w4c: 62 extents found, perfection would be 1 extent
rt-01qq34rl6exztyx3.w4c: 17 extents found, perfection would be 1 extent
rt-01qrz236bvnim44i.w4c: 14 extents found, perfection would be 1 extent
Solution? None. Just add more drives. "Sequential" reads are now at 15MB/sec if you balance the load over the RAID1 array. It isn't too bad, but if it were an issue, I'd take NTFS, with its safe and secure online defragmentation API, over Linux any time.
Too bad the price difference between them and the others is much too large. I'm not talking about tens of percent, but 2 cents per 1000 views vs. the 50 cents we currently receive for US traffic. For international traffic, divide by 5 to 10. Advertising revenue is bad enough already; unless you serve millions of pages a month, you're not going to break even. Reducing that by another 90% is plain suicide. If you can take a 90% pay cut, it's probably more effective to remove the ads entirely and add a donation button.
I really would love to support them, but advertisers just do not want to run the same ad internationally. Even brands like Dell have 30 different versions of their ads, one for each country, and depending on where the visitors are, they get their local version, with prices in their own currency and text in their own language; it simply works better that way. If you serve me US prices for Dell, I still have no idea what the final price is in euros after import duties, VAT and the price difference between Dell NL and Dell US, which makes the ad useless for 70% of the visitors. Project Wonderful can never achieve this with their model. This internationalization is one of the main reasons site editors have lost control of the advertisements: there's just no way you (or anyone) can review thousands of ads every day...
I hate how the ad market works and I'd love to see a fix for it, but Project Wonderful isn't it. The market is completely controlled by the advertising networks, and it's hell for us independent publishers: we just get a check every month, and there's nothing we can do to influence it.
I would love to see such a feature; it would make life a lot easier for everyone hosting an ad-revenue-dependent site. The standard Slashdot answer of NoScript/Adblock/hosts file doesn't solve anything: there will always be users who don't mind advertisements as much as you do, and it's our job to protect them from harm.
Yes, I run a site that has ad revenue. No, I don't deal directly with the scareware crowd; I sell my space to Google, Right Media, AOL, etc. But if they make a mistake and a user gets served a bad ad, I'd love to know which network it came from, so I can demand they take that ad down ASAP, and if it happens repeatedly, I'll take my business elsewhere.
But browsers just lack that information at the moment, so to report an ad, we ask our users to follow the procedure below:
- On IE, press F12 to open the developer tools; on Firefox, install Firebug and press F12; in Chrome, use Ctrl-Shift-I.
- In Chrome: select the "html" element, copy it, paste the data into a text file, and save it.
- In IE: Ctrl-S inside the developer tools saves a version I can use. Don't use Save from the main IE window.
- In Firebug: select the HTML tab, select the html node, right-click and choose "Copy innerHTML". Paste it into a text file and save.
- Email me the result.
It will contain a lot of useless info, but somewhere in between is the magical <script> tag plus the generated (= bad) content for verification.
One vendor we considered was Media Temple; their VPSes (not the Grid service) aren't the cheapest, but their offering looks more polished than the others'. The $50 for hosting is probably the cheapest part of the project, and if you ever reach the limits of the VPS, there's still plenty of time to switch to a bigger package or another host. By then, you'll have a good idea of your site's computational requirements.
We didn't go with them, though. After benchmarking and testing, what we needed was a bit too expensive to rent, so we went with 10U of rack space and enough hardware to fill most of it instead. Pro: you can't beat the price, and you get total freedom in the choice of OS and software. Con: you have to manage everything yourself and pay for all the hardware up front.
The more you apply what you've learned, the more you'll discover. If you wanted to create a book indexing application, you'd discover that you need to master either Swing or SWT; that you need some kind of storage, so you learn to use flat files, an ORM like Hibernate, or your own SQL queries in JDBC, plus how to set up a database like PostgreSQL. If you go for a web interface, learn about JSP and containers like Tomcat. Want to do stuff in 3D? Learn OpenGL through the JOGL bindings and read up on basic linear algebra.
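As a taste of the storage side, flat files are the simplest starting point for that book indexing idea. A hypothetical sketch (class name, file format and API are all my own invention, not from any framework): one line per entry, term and page separated by a tab, using only the standard library:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Flat-file storage for a book index: "term<TAB>page", one entry per line.
// No Hibernate, JDBC or database needed until the data outgrows a file.
public class BookIndex {
    private final Map<String, List<Integer>> index = new TreeMap<>();

    void add(String term, int page) {
        index.computeIfAbsent(term, k -> new ArrayList<>()).add(page);
    }

    List<Integer> pagesFor(String term) {
        return index.getOrDefault(term, List.of());
    }

    void save(Path file) throws IOException {
        List<String> lines = new ArrayList<>();
        for (Map.Entry<String, List<Integer>> e : index.entrySet())
            for (int page : e.getValue())
                lines.add(e.getKey() + "\t" + page);
        Files.write(file, lines);                 // plain text, easy to inspect
    }

    static BookIndex load(Path file) throws IOException {
        BookIndex idx = new BookIndex();
        for (String line : Files.readAllLines(file)) {
            String[] parts = line.split("\t");
            idx.add(parts[0], Integer.parseInt(parts[1]));
        }
        return idx;
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("bookindex", ".tsv");
        BookIndex idx = new BookIndex();
        idx.add("garbage collection", 42);
        idx.add("garbage collection", 108);
        idx.save(f);
        System.out.println(BookIndex.load(f).pagesFor("garbage collection")); // prints [42, 108]
        Files.delete(f);
    }
}
```

Once queries like "all terms on page 42" start to hurt, that's the natural moment to graduate to SQL via JDBC or an ORM, which is exactly the kind of discovery the paragraph above describes.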
Or you could jump from Java to another language. Don't get me wrong: in my opinion, Java is still one of the best-designed languages, and a huge plus is that you can step through everything to see how it works, right down to the C++ code powering java.nio, but it's a lot of work to get anything on the screen if you don't count System.out.println(). ActionScript is quite a bit easier to apply; you can get going and write a small game in no time. Python + Django is the perfect web framework for starters: almost as powerful, but much, much easier to learn than JSP + Taglib and their arcana.
Once you've seen a few languages and discovered their strengths and weaknesses, you'll be able to apply your skills even better. In a digital world, everything is possible. Go create your own future.