well, maybe it is.
...Obviously Dell can't do that with their own in-house offerings, so perhaps they just couldn't compete with vendors running on cheaper servers.
Dell's public cloud problem wasn't hardware. Cloud providers buy hardware before building the service. Dell failed to stand up a live OpenStack public cloud. HP and Rackspace already have theirs running with real customers.
Building a public cloud is hard. It takes either a big company with lots of resources, or a smaller dedicated company with good funding. Both require long term commitments.
How Could Swarms of Robots Help Humanity?
Of course they would help. Unless they went crazy and started hurting people. Which they almost certainly would.
Cloud utilization is growing, and it's growing in startups and small companies. The reason isn't career choices by IT professionals; it's that it's a lot easier to buy a cloud-based solution with your company credit card than to requisition a VMware cluster.
Much of Amazon's cloud customer usage comes from shadow IT and small startups doing development work. Microsoft spent over $3bn on Azure and has little to show for it. Of course, object storage is a no-brainer for streaming content providers, because who cares where you store a large block of data.
Regarding uptime and connectivity, Amazon suffered a major glitch last year that tanked Netflix for about a day because they didn't have enough connection redundancy. There are providers out there who do. One I know of has multiple availability zones in the US, triple-homed (or better) internet, and power from at least two non-connected grids.
Organizations are moving to the cloud, but large enterprises are not moving their legacy applications to the cloud. Yet. It's really hard to migrate 1,000 applications running on legacy hardware, some with outdated OSes and non-x86 hardware.
It will eventually happen because companies are sick of having Chief Electricity Officers.
Agreed. I have two of these I use every day. They are excellent professional quality monitors that would be awesome even if not 30".
The real problem is that a missile interceptor is more expensive than the missile (or decoy) it is supposed to intercept. Take for instance Israel's Iron Dome vs. Hamas' rockets. A single Iron Dome interceptor costs $10k+, if not one order of magnitude more, while a single Hamas rocket is less than, say, $100. The same holds true for strategic defense missile systems: it's always a lot more expensive to intercept a ballistic missile than to send one. That's the real issue here. As long as missile defense technology doesn't become a lot less expensive (think e.g. some kind of futuristic force field shield of some kind that doesn't consume a lot of energy when idle), it will always be overwhelmed.
You're right about the costs, to an extent. We must also consider the cost to Israel of a Hamas rocket hitting a populated area. This cost is far more than the cost of an interceptor. So, while the Israelis may have a cost imbalance vs. Hamas, they are likely preventing an even greater imbalance by selectively using interceptors.
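The trade-off both comments describe can be put in expected-value terms: intercepting pays off whenever the expected damage prevented exceeds the interceptor's price. A minimal sketch, with all dollar figures and probabilities being illustrative assumptions rather than sourced costs:

```python
# Rough cost model for the interception trade-off.
# All numbers below are illustrative assumptions, not sourced figures.

def worth_intercepting(interceptor_cost, expected_damage, hit_probability):
    """An interception pays off when the expected damage it prevents
    (damage * probability of a hit) exceeds the interceptor's price."""
    return expected_damage * hit_probability > interceptor_cost

# A $50k interceptor vs. a cheap rocket with a 10% chance of causing
# $5M in damage to a populated area: expected loss is $500k, so firing
# the interceptor is the cheaper option despite the per-unit imbalance.
print(worth_intercepting(50_000, 5_000_000, 0.10))  # True
```

The per-rocket cost comparison alone misses the asymmetry in what each side stands to lose, which is the point of the reply above.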
If you are "proxying" a connection, then you are downloading from user D1 and uploading to user D2. It does not matter that you are not retaining the data; you are still copying it illegally. So in the end, if content owners are unable to determine the identity of the actual downloaders, they can go after the proxying users and hit them with exactly the same lawsuit.
The FA says the traffic sent over the proxy is encrypted, so there would be no evidence it was copyrighted material.
So this guy "wrote the exploit code that was later taken by Slammer's authors and used as part of the worm", and he's not dead or serving an eleventy hojillion year federal prison sentence?
Times change indeed...
The article mentions he was paid by a company in Germany to penetrate their heavily-fortified SQL Server installations. This is when he developed the exploit code. Presumably it's not illegal for a company to pay you to security test its systems.
He also took the steps of communicating the exploit to Microsoft before releasing the code. He even asked their permission before divulging the code, and didn't do so until MS had released a fully corrective patch.
You're right. However, he'd be in jail if it happened today.
I agree, code reviews are the best way to identify shitty code. But what if the code is bad and the bugs aren't really provable? Let me give some examples.
I've seen this happen, especially in old code. The code works, but it's full of 2,000-line God Classes, dangerous half-objects, and doThisThatAndTheOther() void methods. Young developers are happy with it because it works and they continue writing the same kinds of idioms.
Arguing for change in code this bad amounts to arguing for a rewrite, which is hard to justify when you can only point to potentially dangerous behavior rather than real bugs. Your only argument comes down to, "Yes, the code works, but down the line this catch Exception block could produce unpredictable results." Folks who don't have years of answering to customers when these problems manifest don't see the danger.
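The swallowed-exception problem described above is hard to demonstrate with a failing test, which is exactly why it survives review. A minimal sketch with hypothetical names (the function and account data are made up for illustration):

```python
def update_balance_bad(accounts, user, amount):
    # "Works" in every demo: any error is silently swallowed, so the
    # caller can never tell whether the update actually happened.
    try:
        accounts[user] += amount
    except Exception:
        pass  # a typo in the key, a None balance, anything -- all vanish here

def update_balance_good(accounts, user, amount):
    # Fail loudly on the cases you can anticipate; let real bugs surface.
    if user not in accounts:
        raise KeyError(f"unknown account: {user}")
    accounts[user] += amount

accounts = {"alice": 100}
update_balance_bad(accounts, "alcie", 50)  # misspelled key: no error, no update
print(accounts)  # {'alice': 100} -- the deposit silently disappeared
```

Both versions pass a happy-path test; only the second one turns the latent bug into something you can point to in a review.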
Shirley, someone else must have been in this situation before.
If HP issued the patches and Xerox pushed the fix, then whose fault is it really?
Please mod this up. The article says Xerox administers the CalWIN program. Xerox would likely be responsible for at least smoke testing this patch, even though it came from HP.
Since the article isn't very detailed, it's hard to tell who is most to blame, but it seems at least as much blame goes to Xerox. I can think of many scenarios that would make it either company's fault.
What if Xerox used nonstandard data structures in CalWIN? The patch vendor can't possibly imagine every deployment scenario. That's why no one slaps an Oracle patch on a production system without first testing it for weeks or months.
At the very least, I'd expect Xerox to do a phased rollout of the patch to a small group of users. If there are problems, far fewer people are affected.
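A phased rollout like that doesn't need to be fancy: deploy wave by wave and halt as soon as the error rate jumps. A sketch of the idea, where the wave sizes, error threshold, and the `deploy`/`error_rate` callbacks are all assumptions for illustration:

```python
def phased_rollout(users, deploy, error_rate,
                   waves=(0.01, 0.10, 0.50, 1.0), max_error_rate=0.02):
    """Deploy a patch to progressively larger fractions of users,
    stopping at the first wave whose observed error rate is too high.
    Returns (users_deployed_to, completed_flag)."""
    deployed = []
    for frac in waves:
        target = int(len(users) * frac)
        wave = users[len(deployed):target]
        deploy(wave)                      # push the patch to this wave
        deployed.extend(wave)
        if error_rate(deployed) > max_error_rate:
            return deployed, False        # halt: only this subset is affected
    return deployed, True
```

With a 1% first wave, a patch as broken as the one in the article would have hit a handful of county offices instead of the whole CalWIN user base.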
Poulson can issue 11 instructions per cycle compared to Tukwila's six.
These go to eleven.
For long-term heat-proofing of your home, air sealing is one of the most cost-effective measures. Most energy loss does not occur through windows or doors. Even if the attic is properly insulated, any air leaks mean hot air is infiltrating the living area.
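How much that infiltration costs you can be estimated with the standard sensible-heat formula used in HVAC sizing, Q = 1.08 × CFM × ΔT (in BTU/hr). The leakage airflow in the example below is an assumed number, not a measurement:

```python
def infiltration_load_btuh(cfm, delta_t_f):
    """Sensible heat load from air infiltration, in BTU/hr.
    The 1.08 factor ~= 60 min/hr * 0.075 lb/ft^3 air density
    * 0.24 BTU/(lb*F) specific heat of air."""
    return 1.08 * cfm * delta_t_f

# Assumed example: 200 CFM of leakage, 100F outside, 75F inside.
print(infiltration_load_btuh(200, 100 - 75))  # 5400.0 BTU/hr
```

5,400 BTU/hr is nearly half a ton of air conditioning spent just fighting leaks, which is why sealing pays back so quickly.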
Many local utilities will do a blower door/infrared camera test on your home. When I did this, the "aha" moment was seeing that my kitchen walls were reading 100 degrees F. The reason was that the interior walls were open to the attic at the top of the wall, and hot air was circulating inside my walls. This made the kitchen extremely hot in the summer.
I hired a contractor to seal the air leaks identified by the IR imagery, and the measured leakage of my house dropped by 33%. My house now holds a more constant, comfortable temperature. The next step was adding insulation, but this should only be done once the air leaks are sealed; adding insulation to a leaky house does not stop the leaks. My city rebated about 40% of the cost of this work (about $1,700 combined).
Unclosed chimneys, dryer vents, and fan vents all leak energy. Seal your chimney when not in use, and install one-way dampers on other vents where possible. It makes a huge difference.
I live in a climate where it can reach 100F during the day but cools to 60-65 at night. I use a whole-house fan at night to cool the interior way down, then shut all the windows in the morning. Last summer I went the entire season without needing A/C. I recommend AirScape fans because they are quiet, small, easy to install, and efficient (I'm just a satisfied customer, no affiliation).
Don't assume that just because your home is new, it isn't leaking energy. Our local utility audited the leakiness of many homes and found that the leakiest ones were built in 1999. Before spending five figures to replace windows or upgrade your A/C, get your house energy audited; otherwise you could be wasting money.
a guided missile is just a disposable drone?
The next big buildout in PCs will come from Television. As screens get larger, it will become easier to just use a TV with a keyboard/mouse instead of a PC.
Businesses will still use PCs. Power users will too. Everyone else will have a TV that functions as a PC, or a PC device that integrates with their television (DVR, streaming content). Most consumers will not want to buy a PC once the television can do everything the PC does.
The rest will stick with smartphones and the occasional tablet. Dell would be smart to create a cheap, black-box PC that is easy to use from the couch on a television display.
>If Oracle wins on this, and really does dump UX, then I need to bring a bunch of AIX gear in and put a team of developers to work porting our custom code which means no optimization, no rewrites, no efficiency
Could you not contract with Oracle for extended support of their software on Itanium? I have heard of such things happening. It will cost a buttload, but probably less than porting your code.