Comment Re:Do I get at least a pair of rubber gloves? (Score 1) 135

Not to puncture your smugness, but we're doing exactly this: restart services away from the failed component, then service the failed resource on a non-critical timeframe. The small shop with a half-dozen server boxes doesn't give a damn about cooling costs or this level of service, for the most part. If they do, they're likely going to someone else to satisfy that requirement, not doing it in house.

I've got a stack of servers in my datacenter that are allocatable on demand. Any unused server blade is a potential spare. If a production blade tips over with a CPU fault, memory error, or similar crash, its personality (FC WWNs, MACs, boot and data volumes, etc.) is moved to another blade and powered on through an automated process. Since the OS and apps live on the SAN, both VMs and dedicated server hardware can be abstracted away from the actual services they provide.
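
The failover flow described above can be sketched in a few lines. This is purely illustrative (no vendor's real API; all names and WWN/MAC values are made up): a blade's "personality" is a profile object, so recovery is just re-binding that profile to a healthy spare before powering it on.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Personality:
    """The identity that follows the workload: SAN and network bindings."""
    fc_wwns: List[str]
    macs: List[str]
    boot_volume: str
    data_volumes: List[str]

@dataclass
class Blade:
    slot: int
    healthy: bool = True
    personality: Optional[Personality] = None  # None == unused spare

def fail_over(blades: List[Blade], failed_slot: int) -> Blade:
    """Re-bind the failed blade's personality to the first healthy spare."""
    failed = next(b for b in blades if b.slot == failed_slot)
    spare = next(b for b in blades if b.healthy and b.personality is None)
    spare.personality, failed.personality = failed.personality, None
    return spare  # the caller would now power the spare on via the chassis manager

blades = [
    Blade(1, personality=Personality(["50:01:43:80:11:22:33:44"],
                                     ["00:1a:4b:aa:bb:cc"],
                                     "boot-lun0", ["data-lun1"])),
    Blade(2),  # unused blade, a potential spare
]
blades[0].healthy = False  # CPU fault, memory error, or similar crash
spare = fail_over(blades, failed_slot=1)
```

Because the boot and data volumes live on the SAN, nothing is copied here; only the bindings move, which is why the process can be automated end to end.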

This is a product my company's selling to the market at large right now, and that I designed. Any of our IaaS customers can take advantage of the redundancy and fault tolerance built into the system. Even the six-server small IT shop.

Even then, a small IT organization can easily virtualize and provide some level of HA services in hypervisor clusters now. It's just not that hard anymore. Take the handful of servers you're running on now, replace them with an equal number of nodes in a VMM cluster, and go to town. Any of those systems fails, shift the load to the other nodes and effect repairs.

Comment Re:SOX HIPPA etc (Score 1, Insightful) 157

"Cloud computing," while it has very nearly achieved meaningless buzzword status, is an attempt by the business and marketing types to get their heads around a very real evolutionary transformation occurring in IT. The drivers are the drying up of CapEx budgets, the need to reduce service delivery time, and the requirement to purchase and pay for only what an organization needs to fulfill its business requirements.

Capital expenditures are coming under increased scrutiny, and are under constant budgetary pressure. "Cloud" based, on demand services allow IT service procurers to shift from a CapEx based model of owning the infrastructure to an OpEx model. Operational expenditures are typically larger chunks of budget than capital purchases, and moving IT services there can allow them to get "lost in the noise". Less red tape, less stringent approval processes, etc.

Time to deliver service is a labor cost, and if a procurer can shift that operational expense from internal overhead required to deploy an IT architecture to acquiring those same services from a cloud provider, it's perceived as a big win. The provider gets to deal with the headaches of capacity management, infrastructure design and integration, and delivering IT resources. The purchaser gets the luxury of simply specifying how much they want, for how long, and letting the provider leverage its economies of scale and automated processes to deliver the resources within the terms of the provider's SLA. The tradeoff is that the consumer of cloud services loses the ability to specify the platform and all its parameters in exchange for rapid delivery of a standardized service.

IT organizations are also under increased pressure to abandon the concept of designing and purchasing for peak capacity. Cloud providers are specifically addressing these needs by allowing their customers to pay only for what they use, not the spare capacity. Since the "cloud" capacity is shared, reused, and managed by the provider, the customer is afforded the ability to scale their environment dynamically to meet the needs of the business and its budget.

Now, how this ties into Web Services is important. Web Services were, for a long time, a solution in search of a well-defined problem. Now, with the "cloud" becoming a workable construct, Web Services come to the forefront as the way that stateless platforms can interact without intimate knowledge of the underlying infrastructure. Web Services will become more and more important as IT services are increasingly abstracted away from the hardware and OS platform. Having worked for the past two years as a design architect for an infrastructure-as-a-service platform, I can say with some authority that they're an integral part of how we're going to need to deal with virtualized environments and stateless service contexts as they become pervasive elements of IT solutions.
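
A toy sketch of the stateless-interaction point above (all names and fields are invented for illustration): every request carries its complete context, so any node behind the service endpoint can answer it, and the caller never learns, or cares, which VM or blade did the work.

```python
import json

def provision_service(request_json: str) -> str:
    """A stateless endpoint: the response depends only on the request body,
    never on server-side session state, so any node can service any call."""
    req = json.loads(request_json)
    resource = {
        "customer": req["customer"],
        "cpus": req.get("cpus", 2),        # defaults stand in for the
        "memory_gb": req.get("memory_gb", 4),  # provider's standard offering
        "status": "provisioned",
    }
    return json.dumps(resource)

# The caller specifies how much it wants; the provider's contract fills the rest.
reply = json.loads(provision_service('{"customer": "acme", "cpus": 8}'))
```

The tradeoff described above shows up directly: the consumer gets only the fields the standardized contract exposes, in exchange for a request that any provider node can satisfy immediately.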

Government

Secret EU Open Source Migration Study Leaked 311

Elektroschock writes "For 4 years MEP Marco Cappato tried to get access to the EU Council's 2005 open source migration study because he is a member of a responsible IT oversight committee in the European Parliament. His repeated requests for access were denied. Now they have finally been answered because the Council's study has escaped into the wild (PDF in French and English). Here is a quick look. It is embarrassing! Gartner, when asked if there were any mature public Linux installations in Europe, claimed that there were none. Michael Silver said, 'I have not spoken to any sizable deployments of Linux on the desktop and only one or two StarOffice deployments.' Gartner spread patent and TCO FUD. Also, the European Patent Office participated in the project, although it is not an EU institution."
Data Storage

New York Times Wipes Journalist's Online Corpus 94

thefickler writes "Reading about Peter Wayner and his problems with book piracy reminded me of another writer, Thomas Crampton, who has the opposite problem — a lot of his work has been wiped from the Internet. Thomas Crampton has worked for the New York Times (NYT) and the International Herald Tribune (IHT) for about a decade, but when the websites of the two newspapers were merged two months ago, a lot of Crampton's work disappeared into the ether. Links to the old stories are simply hitting generic pages. Crampton wrote a letter to Arthur Sulzberger, the publisher of the NYT, pleading for his work to be put back online. The hilarious part: according to one analysis, the NYT is throwing away at least $100,000 for every month that the links remain broken."
The Courts

3D Realms Sued Over Failed Duke Nukem Forever Plans 180

Take-Two Interactive has now sued 3D Realms over the cancellation of Duke Nukem Forever. Take-Two did not provide continuous funding for the game, but they did pay $12 million for the publishing rights to the game. A Bloomberg report quotes Take-Two's complaint as saying that 3D Realms "continually delayed the completion date" and "repeatedly assured Take-Two and the video-gaming community that it was diligently working toward completing development of the PC Version" of the game. (The complaint refers to 3D Realms as part of Apogee Software, Ltd., not to be confused with Apogee Software, LLC, the publisher behind the still-forthcoming Duke Nukem Trilogy.)
Software

Computers With Opinions On Visual Aesthetics 125

photoenthusiast writes "Penn State researchers launched a new online photo-rating system, code-named Acquine (Aesthetic Quality Inference Engine), for automatically determining the aesthetic value of a photo. Users can upload their own photographs for an instant Acquine rating, a score from zero to 100. The system learns to associate extracted visual characteristics with the way humans rate photos, based on a large set of previously rated photographs. It is designed for natural color photographs. Technical publications reveal how Acquine works."
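
The approach described is supervised learning: extract visual features from already-rated photos, then fit a model mapping features to human scores. A toy sketch of that idea, fitting a one-feature linear model by least squares; the "colorfulness" feature values and ratings are made up, and the real system uses many features and a far richer model.

```python
def fit_line(xs, ys):
    """Ordinary least squares for a single feature: y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Pretend training data: a "colorfulness" feature vs. human ratings (0-100).
colorfulness = [0.1, 0.3, 0.5, 0.7, 0.9]
ratings      = [20.0, 40.0, 60.0, 80.0, 100.0]
slope, intercept = fit_line(colorfulness, ratings)

def predict(feature: float) -> float:
    """Score a new photo's feature value, clamped to the 0-100 rating scale."""
    return max(0.0, min(100.0, slope * feature + intercept))
```

Training once over the rated corpus, then scoring each upload instantly with the fitted model, is what makes an "instant Acquine rating" feasible.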
Games

On the Feasibility of Single-Server MMOs 316

GameSetWatch takes a look at the issues involved in creating an MMO that does not split its users among many different servers. They suggest that running a single "shard" is the next step in the evolution of MMOs, since it better allows player choices to have a meaningful impact on the game world; supporting different outcomes across multiple shards is a technical nightmare. They estimate, from the hip, that the cost to develop the technology required to support a massive number of players (i.e. far more than EVE Online) on a single server would be roughly $100 million. Another recommendation is a strong reliance on procedural and user-generated content creation to fill a necessarily enormous game world.
Security

Hacker Destroys Avsim.com, Along With Its Backups 780

el americano writes "Flight Simulator community website Avsim has experienced a total data loss after both of their online servers were hacked. The site's founder, Tom Allensworth, explained why 13 years of community developed terrains, skins, and mods will not be restored from backups: 'Some have asked whether or not we had back ups. Yes, we dutifully backed up our servers every day. Unfortunately, we backed up the servers between our two servers. The hacker took out both servers, destroying our ability to use one or the other back up to remedy the situation.'"
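
The failure mode here is worth spelling out: backups existed, but every copy lived on the two production servers the attacker destroyed. A sketch of the sanity check that would have flagged it, with made-up hostnames: at least one backup copy must live on infrastructure independent of the hosts being backed up.

```python
def independent_copies(production_hosts, backup_locations):
    """Return the backup locations that survive losing every production host."""
    return [loc for loc in backup_locations if loc not in production_hosts]

prod = {"avsim-web1", "avsim-web2"}

# Avsim's arrangement: each server held the other's backup.
backups = ["avsim-web1", "avsim-web2"]
offsite = independent_copies(prod, backups)  # empty: nothing survives both hosts

# One independent copy (tape, or a third-party host) changes the outcome.
backups_fixed = ["avsim-web2", "tape-vault", "s3-offsite"]
```

Daily backups were dutiful but not independent; a check like this, run as part of the backup job, turns "we have backups" into "we have backups an attacker can't reach."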
Earth

GPS Accuracy Could Start Dropping In 2010 210

adamengst writes "A US Government Accountability Office report raises concerns about the Air Force's ability to modernize and maintain the constellation of satellites necessary to provide GPS services to military and civilian users. TidBITS looks at the situation and possible solutions."
Social Networks

Facebook Users Get Lower Grades In College 284

Hugh Pickens writes "According to a survey of college students, Facebook users have lower overall grades than non-users. The study by Aryn Karpinski, an education researcher at Ohio State University, found that Facebook users' GPAs are in the 3.0 to 3.5 range on average, compared to 3.5 to 4.0 for non-users, and that Facebook users also studied anywhere from one to five hours per week, compared to non-users who studied 11 to 15 or more hours per week. Karpinski emphasized that correlation does not equal causation and that the grades association could be caused by something else. 'I'm just saying that there's some kind of relationship there, and there's many third variables that need to be studied.' One hypothesis is that students who spend more time enjoying themselves rather than studying might tend to latch onto the nearest distraction, such as Facebook, or that students who use the social networking site might also spend more time on other non-studying activities such as sports or music. 'It may be that if it wasn't for Facebook, some students would still find other ways to avoid studying, and would still get lower grades. But perhaps the lower GPAs could actually be because students are spending too much time socializing online.' As for herself, Karpinski said she doesn't have a Facebook account, although the co-author of the study does. 'For me, I think Facebook is a huge distraction.'"

Comment Everyone's got an alphabet... (Score 1) 1057

Yes, technical tests are fair, and required.

Recruiting agencies will put everything and anything on a candidate's resume. Their people are evaluated by how many interviews they can schedule, and how many placements they achieve. So, they will stack the applicant's CV with anything that's ever been in the same 10-mile radius as the individual in question.

The result is that you're faced with a piece of paper that looks identical to every other programmer's. It's an alphabet soup - .NET, C#, Java, C, C++, Python, .NET, ASP, ADO, JDBC, XSLT, VB, XML, SQL, BLAHBLAHBLAH. And every programmer lists every one of these. Even the projects look the same - "Senior programmer for web development project implementing a user portal and reporting system for... etc, etc, etc." I have a stack of 12 resumes sitting here in front of me, and could literally swap the names in the header and you wouldn't be able to tell the difference.

So, we make a first pass, weeding out those who a) can't write, even after the headhunter has spiffed up their CV, b) are scant with details, and c) hop assignments every three months. The rest have to do a preliminary interview, and that mainly consists of determining whether or not they actually know any of the stuff their resume's been padded with. Sure, we ask the traditional interview questions, but we also do a base knowledge test. You would be surprised how many .NET "programmers" we get who don't know the most basic C# operations, web developers who don't know the difference between POSTing and GETting form data, and "7+ years experience SQL" applicants who can't tell you how a left outer join works.
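
For reference, the left outer join question above has a three-sentence answer: every row from the left table appears in the result, and rows with no match on the right get NULLs instead of being dropped. A self-contained demo using Python's built-in sqlite3 (table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER, name TEXT);
    CREATE TABLE badges (employee_id INTEGER, badge TEXT);
    INSERT INTO employees VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO badges VALUES (1, 'B-100');  -- Bob has no badge row
""")
rows = conn.execute("""
    SELECT e.name, b.badge
    FROM employees e
    LEFT OUTER JOIN badges b ON b.employee_id = e.id
    ORDER BY e.id
""").fetchall()
# Bob is kept with a NULL badge; an INNER JOIN would have dropped him.
```

An applicant who can't explain why Bob appears in the result with a NULL, and why an inner join would omit him, hasn't spent "7+ years" writing SQL.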

Programmers' resumes are becoming increasingly useless, especially when you're flooded with nearly identical H-1B applicants with the same vague academic credentials (i.e., "BS Elec. Eng., India"; "Masters of Computer Applications" with no institution listed; "Bachelor of Engineering" with no discipline given), identical alphabet soup, and interchangeable litanies of 6-month contracts scattered about the country. They all have widely varying levels of competence that are in no way obvious from their resumes. They all have "7+ years experience." That's what makes up 90% of the programmer pool these days, and even if you're in the top 10% with verifiable credentials and a real track record, you're gonna have to go through the same process. Because those of us on the hiring end can't tell anymore from your paperwork, without giving you some sort of objective evaluation of skills, syntax, and basic concepts.

It's been the rule for some time now. Ten years ago, it was all of the newly minted MCSEs rolling out of the fly-by-night tech schools. Before that, it was the "paper CNAs/CNEs" who were able to sit the Netware exams and pass by the grace of their deity of choice. And there have always been the code-monkey grindhouse diploma-mill shops, which crank out "programmers" in the language du jour - especially since the web boom - CGI, Java, .NET, AJAX. All with credentials like "Doctor of Divine Coding" and the real competency of an ADD third-grader.

So, don't take it as a personal insult. You may be better than all of the above, but no one can determine that from a piece of paper, a firm handshake, and a good story about how you were the lead coding god on your last project. 'Cause there's a dozen other applicants out there with the same spiel.

Power

Submission + - Hydrogen turbines generate clean electricity

Roland Piquepaille writes: "The Lawrence Berkeley National Laboratory (LBL) has developed near-zero-emission gas turbines using pure hydrogen as a fuel. But because this LSI (low-swirl injector) technology can also use other fuels, it has the potential to help eliminate millions of tons of carbon dioxide and thousands of tons of nitrogen oxides (NOx) from power plants each year. In fact, burners with the LSI emit 2 parts per million of NOx, less than one-fifth the emissions of conventional burners. The multi-patented technology is currently available for licensing. I sure hope that a utility company will be interested. But read more for many additional references and photographs comparing a high-swirl injector (HSI) and a low-swirl injector (LSI)."
Security

Submission + - Laptop Anti-theft Software Options for Linux

yourexhalekiss writes: "I'm going back to school this fall, and I run GNU/Linux on my laptop. With school being what it is, I want to keep my Kubuntu-powered System76 Darter Ultra as safe as it can be. Checking through SourceForge and Freshmeat, I can't find a single laptop theft-prevention or tracking program that works with GNU/Linux and has published code.

What do other people use to protect their non-Windows or Mac laptops, and how effective is it?"


Arithmetic is being able to count up to twenty without taking off your shoes. -- Mickey Mouse
