Developers too often think they know everything when, especially on large teams, they often have no idea what it takes to bring their ideas to the real world. It takes serious design work to build a scalable app, even if plenty of people think they know how. I work in production support for multiple websites, which means I clean up after developers' mistakes on a daily basis. The support folks who write patches for our products often grieve over the situations the original developer left them in. Fixing many performance issues takes a major rewrite, because the original programmer never imagined all the different situations their code would be used in. Production support is where the real issues are discovered and solved. Accept it and move on.
$50 at Amazon, although it's a little late to be shopping online today.
Like many of you, I've been learning and using Perl since 1.0 was first released on CSU. I was privileged to contribute to the early versions by porting Perl to the various platforms I had access to at my employer (a compiler company) at the time. 25 years later, I still find Perl useful in my current job. Love it. Thanks, Larry and everyone else!
SGI's automatic parallelizing software came from Kuck and Associates, Inc. (kai.com). I worked there for 8-1/2 years, and one disappointing fact we learned was that very few people cared enough about parallelizing their software to analyze their code and modify the source to make it faster. Those who did were either research scientists, of whom there were relatively few, who mostly wanted quicker and cheaper results (because renting time on supercomputers costs real money), or the marketing departments of computer hardware manufacturers, of whom there were even fewer, who only wanted to advertise higher SPECmark numbers for their hardware. SGI was the only manufacturer that shipped our product with every C and Fortran compiler it sold. IBM, DEC, and HP sold it only as an option, but all of them used it internally to speed up their own benchmark numbers.
Automatic parallelization is tough, tougher than most people think. It's nearly impossible without a human analyzing the program and adding source code directives to inform the compiler about data dependences.
Why does this article use the term "multi-server microkernel OS"? I don't see anything in the article, or anywhere else about Genode, referring to multiple servers. It sounds like they're just trying to redefine the term "microkernel".
I read TFA, where it says the new laptop is for sale now, but it isn't listed anywhere on their website. Go to http://dell.com/xps13 and all you see are four models, all of which include Windows 7 Home Premium. There is no option for another OS.
Search for "xps 13 developer" from within dell.com and you get three links to their wiki containing press releases about this new product.
The Cray website (http://www.cray.com/Products/XC/XC.aspx) has more details: 3072 cores (66 Tflops) per cabinet initially, and the pictures make it look like they have 16 cabinets, for 49,152 cores total. Amazing.
While the article says they 'unveiled' it, it doesn't give any information about the hardware at all. I'm guessing it hasn't actually been built yet. Too bad. The Top 500 Supercomputers list is due to be updated this month.
Right now I'm running a free IPv6-over-IPv4 tunnel from my router to Hurricane Electric, and I was assigned my own IPv6 LAN range. Mac OS X works fine: it hits the IPv6 version of a website if one exists, and the IPv4 version otherwise. It doesn't always work, I know. The DNS part is the problem to figure out. The larger infrastructure DNS servers (Comcast, AT&T, Verizon, etc.) need to support IPv6. Comcast has just begun rolling it out to end users, so hopefully they now have working IPv6 DNS servers that still return the correct regionally sorted IP addresses for CDN services like Akamai.
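The "v6 if it exists, v4 otherwise" behavior described above comes from the client asking the resolver for both AAAA and A records and preferring the IPv6 answer. A minimal sketch in C using the standard getaddrinfo() interface (the function name resolve_prefer_v6 is my own, and real dual-stack clients use fancier logic like Happy Eyeballs):

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <arpa/inet.h>

/* Resolve `host` and write the preferred address into `buf`:
 * the first IPv6 (AAAA) result if any, otherwise the first IPv4 (A)
 * result.  Returns 0 on success, -1 on lookup failure. */
int resolve_prefer_v6(const char *host, char *buf, size_t buflen) {
    struct addrinfo hints, *res, *ai, *best;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* ask for both A and AAAA */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, NULL, &hints, &res) != 0 || res == NULL)
        return -1;

    best = res;                        /* fall back to first result */
    for (ai = res; ai != NULL; ai = ai->ai_next) {
        if (ai->ai_family == AF_INET6) {  /* prefer an IPv6 answer */
            best = ai;
            break;
        }
    }

    if (best->ai_family == AF_INET6) {
        struct sockaddr_in6 *s6 = (struct sockaddr_in6 *)best->ai_addr;
        inet_ntop(AF_INET6, &s6->sin6_addr, buf, buflen);
    } else {
        struct sockaddr_in *s4 = (struct sockaddr_in *)best->ai_addr;
        inet_ntop(AF_INET, &s4->sin_addr, buf, buflen);
    }
    freeaddrinfo(res);
    return 0;
}

int main(void) {
    char buf[INET6_ADDRSTRLEN];
    if (resolve_prefer_v6("localhost", buf, sizeof buf) == 0)
        printf("%s\n", buf);   /* typically ::1 on a dual-stack host */
    return 0;
}
```

Whether the AAAA record comes back at all depends on the resolver chain, which is exactly why the infrastructure DNS servers mentioned above matter.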
When I'm responsible for a list of 200+ servers and VMs, as well as the applications that run on them, who has time to tune each server? It's a nice idea, but it's simply not practical at the scale most large businesses operate at.
We used to run FreeBSD on some servers, but they all quickly became dead ends because OS patches and upgrades were painful and time-consuming. Now we're a SLES shop.
I agree. Too little, too late. It'll take years for them to turn things around, and they just don't have the time.
http://www.giganews.com/vyprvpn/ I use this when I want to, and they have VPN POPs in Europe, Southeast Asia, and the US. Works great.
Having previously worked for a US national cell phone company that went through mergers and buyouts, I can tell you this: until the date the purchase is approved and announced, there is a "wall" between the two carriers. I guarantee T-Mobile's marketing department is not making business decisions with any thought for whether AT&T will like them, because the people who make those decisions are not allowed to talk to each other. Marketing and engineering teams may not begin discussing the integration of systems and product lines until that magic date passes. Employees have undoubtedly already been cautioned to be careful what information they pass along on any normal business calls between the two organizations. After all, it's possible the deal could still fall through.