5243631
submission
mcrbids writes:
Looks like a pretty serious hole has been found in Linux, affecting 32- and 64-bit versions with and without SELinux, exploitable through a creative use of null pointer dereferences. You can check it out yourself. As of this writing, there are no patches available, making this a potential zero-day exploit.
3114309
submission
mcrbids writes:
For the past 6 years or so, we've been heavily developing a proprietary, custom vertical application built on Linux, Apache, PHP, and PostgreSQL, in a home-rolled PHP framework based loosely on Horde. We've been quite successful in the marketplace with our relatively classic, HTML 3.x-based technology.
After investing heavily in fully redundant server clustering over the past year or so, we're finding that we'd like to improve our look and feel, response times, and so on, and the natural way to do this is by incorporating JavaScript/Ajax into our product. We've already begun using some Ajax techniques in a few areas where very large tasks need to be coordinated over a long period of time, e.g., longer than a typical browser timeout (a sketch of that pattern follows).
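For illustration, here's a minimal sketch of that long-task pattern in plain JavaScript: kick the job off, then poll a status endpoint rather than holding a single request open past the browser timeout. The endpoint names and response shape here are hypothetical, not part of our actual product.

    // Start the job, then poll until the server says it's done.
    // '/task/start' and '/task/status' are placeholder URLs.
    async function runLongTask() {
      const started = await fetch('/task/start', { method: 'POST' });
      const { taskId } = await started.json();

      // Poll every 3 seconds; each request is short, so no browser timeout.
      for (;;) {
        await new Promise((resolve) => setTimeout(resolve, 3000));
        const res = await fetch('/task/status?id=' + taskId);
        const status = await res.json();
        if (status.done) return status.result;
        // e.g., update a progress bar from status.progress here
      }
    }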
But we don't want to re-invent the wheel. There is a bewildering array of JavaScript frameworks, and with any framework there's the risk of getting stuck trying to do something the framework developers didn't anticipate.
So, which is the best, and why? Which should be avoided? Here are some of the frameworks I've seen so far:
Dojo, Ext JS, Fleegix.js, jQuery, MochiKit, Modello, MooTools, Prototype, Qooxdoo, Rico, and Scriptio. So far in my research, jQuery and/or Prototype seem to be the front runners, with Dojo perhaps a close second.
I'd be most interested in the opinions of people who have switched from one framework to another, and why.
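To make the comparison concrete, here's the same trivial operation written against two of the front runners, using only each framework's ordinary documented API; the '/status' URL and element IDs are placeholders.

    // jQuery: CSS-style selectors, heavy method chaining.
    $('#status').text('Loading...').addClass('busy');
    $.get('/status', function (data) {
      $('#status').text(data).removeClass('busy');
    });

    // Prototype: $() takes an element ID; Ajax goes through Ajax.Request.
    $('status').update('Loading...').addClassName('busy');
    new Ajax.Request('/status', {
      method: 'get',
      onSuccess: function (response) {
        $('status').update(response.responseText).removeClassName('busy');
      }
    });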
829379
submission
mcrbids writes:
I'm revamping our web-based application and am currently reviewing options for logging, particularly with redundant, clustered hosting solutions. I've run into a few questions that no amount of online searching seems to answer.
My first concern is scalability: our application writes directly to local log files. Unfortunately, many of the log entries are quite large and so cannot be piped through syslogd, which caps message length. Other options are much heavier, carry significant administration overhead, or introduce bottlenecks. Is there a syslogd replacement that will allow for very large (tens of KB or larger) log entries? (One possible approach is sketched below.)
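By way of illustration, here's a rough sketch (Node.js-style JavaScript, with a placeholder host and port) of sidestepping the syslog message-size cap by shipping newline-delimited JSON over a plain TCP stream; the receiver could be syslog-ng, which accepts TCP sources, or a small custom collector.

    // Ship large log entries over TCP instead of UDP syslog.
    // 'loghost.example' and port 5140 are placeholders.
    var net = require('net');

    var sock = net.createConnection({ host: 'loghost.example', port: 5140 });

    function logEntry(entry) {
      // One JSON object per line; a TCP stream has no per-message
      // size cap, so tens-of-KB entries arrive intact.
      sock.write(JSON.stringify(entry) + '\n');
    }

    // A deliberately oversized example entry (~50 KB).
    logEntry({
      ts: new Date().toISOString(),
      app: 'myapp',
      msg: new Array(50 * 1024).join('x')
    });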
My next question is about logfile integrity. A perfect log file is write-once and append-only: a one-way street where data goes in, gets saved, and never gets deleted. But any log file is essentially just a file, and a single # echo "" > /path/to/log will kill it dead. Yes, you can log remotely, but this increases complexity and therefore the chances of failure, and what if your remote log server is also compromised?

I've been considering the use of a CD-R, specifically the drive's ability to recover from a buffer underrun during a write sequence. I've simulated a few underruns by tying up the HDD with I/O while burning a CD-R: the drive under-ran, renegotiated, then resumed writing without incident. Why not use this capability, leave the drive in a sort of permanent underrun, and renegotiate for each log entry? Wouldn't doing so create a log that could not effectively be erased, even if the host were compromised?
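Short of dedicated hardware, the filesystem offers a partial version of this one-way street; here's a sketch (Node.js-style JavaScript, hypothetical log path) combining O_APPEND with the ext2/3 append-only attribute. It raises the bar considerably, though root can still remove the attribute with chattr -a, which is exactly the gap the CD-R idea would close.

    var fs = require('fs');

    // 'a' maps to O_APPEND: the kernel forces every write to the
    // end of the file, regardless of what the writer asks for.
    var fd = fs.openSync('/var/log/myapp/audit.log', 'a');

    // Beforehand, as root:  chattr +a /var/log/myapp/audit.log
    // With +a set, truncating (echo "" > file), unlinking, and
    // renaming all fail, even for root, until chattr -a is run.

    fs.writeSync(fd, JSON.stringify({ ts: Date.now(), event: 'login' }) + '\n');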
568840
submission
mcrbids writes:
Scientology is attempting to stop Anonymous from marching this weekend, Saturday the 15th, by filing a request for a restraining order, in what can only be described as the latest development in Scientology vs. The Internet.