


Comment Re:Openstack bias (Score 1) 64

I don't know why people are making this a VMware -OR- OpenStack decision, or keep touting VMware's capabilities.

If all you want is easy virtualization and are willing to pay a lot for it, VMware is the easiest solution.

If you want to build your own EC2, because you believe that's core to your business, VMware is a terrible way to go about it. Openstack may not be the right solution either, you may want Eucalyptus or CloudStack or one of the many commercial cloud providers. The model is different.

But it's a *business* decision, not a technical one, at its core. VMware is contributing code to OpenStack, while bashing it, while OpenStack is pushing to replace VMware, while also sitting on top of it. It's a complicated world.

Disclaimer: I do cloud strategy for a company that makes a cloud management software product, so these are my own opinions.

Comment Re:Why does Paypal need "cloud" ? (Score 2) 64

Some fair points, but here are my responses:

- "Deployments" doesn't only mean "rolling out new applications" wholesale. It could mean, "I want to test my new fraud analytics algorithm on our last six months of transaction history", or "We're adding a new feature that might be very popular", or "turns out our application is hitting a database bottleneck, we're working on figuring out why that is, till then, spin up three more read-only slaves to see if we can alleviate it"
- "Time to Market" covers a lot of scenarios - sure, yes, in the year-long development cycle for an entirely new application, hardware procurement is a small percentage of the overall lifecycle. But how much hardware do you procure? What datacenter should it live in? Can you accurately predict which portions of the application will be the busiest? Or slowest? In effect, in the same way a lot of organizations are using agile development methodologies to partition their work into smaller, faster, more responsive units, cloud computing environments allow for more agile infrastructure.
- You say you can Ghost a server in an hour. Which is great. In more traditional environments though, you need to also configure network switches, firewall rules, load balancers, configure virtual IPs, backup schedules, provision storage, configure replication (possibly), add the node to monitoring, load application stacks, load code, adjust the app configuration, QA it, and then go live with it. In one of my customer environments, there are 13 separate agents that may have to be configured to run depending on what kind of server it is, and in which datacenter it's going to live, each with its own requirements, dependencies, and so on. In most environments, building out a server is going to take a lot longer than an hour.
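To make that concrete, here's a toy sketch of that build-out checklist as an automated pipeline - the step names and function are purely illustrative, not any real provisioning tool:

```python
# Illustrative sketch of the manual build-out steps described above as an
# ordered, auditable pipeline. Step names are made up for the example.

PROVISIONING_STEPS = [
    "configure_network_switches",
    "apply_firewall_rules",
    "configure_load_balancer",
    "assign_virtual_ip",
    "set_backup_schedule",
    "provision_storage",
    "add_to_monitoring",
    "install_app_stack",
    "deploy_code",
    "run_qa_checks",
]

def provision(server_name):
    """Run every step in order, returning an audit trail of what ran."""
    trail = []
    for step in PROVISIONING_STEPS:
        # In a real environment each step calls out to its own tool or agent,
        # which is exactly why this takes far longer than an hour by hand.
        trail.append((server_name, step))
    return trail

trail = provision("app-server-01")
print(len(trail))  # one entry per step
```

The point isn't the code, it's that every one of those steps is a serial, error-prone manual task until something like this exists.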

BTW, I never said explicitly that "time to market" was one of the fundamental cloud benefits - I wouldn't use that term, as it's a little loaded. But I do think it gives you agility. Not everyone needs that - I was talking with a mining company a few years ago that had a decent enough infrastructure, but it was completely predictable. For every mine that opens, they need X servers. When a mine closes, they decommission X servers. They might still get *some* benefit from cloud, but it certainly wouldn't be worth the hassle of dealing with Openstack.

Comment Re:Your bias is showing. (Score 2) 64

Why would I be biased about OpenStack? I have no horse in this race. If VMware is "orders of magnitude" cheaper, good for them, go nuts. My point is just that there are a lot of moving parts in these decisions. What is true for one organization is not true for another. And look, those TCO calculations are messy.

Here's a story I like to tell. A friend of mine is head of server architecture at a very large company. They're a huge IBM shop, and they were deciding where to make their primary server investments over the next three years. They can do linux on the mainframe, AIX on P-series, or Linux on X-series. Which one is going to be the cheapest? Best ROI for their workloads? Best TCO over five years? He's got three IBM server reps, one for each platform. He tells each of them that he wants an in-depth TCO study done for their respective platform, based on their current skillsets, investments, technologies in use, etc. They do interviews, asset inventories, it takes a month and a half. At the end, he gets three reports back, one from each sales rep, each conclusively proving that *their* platform would be the cheapest. The numbers worked out, everything was written out clearly, nothing was made up or fudged. It's just that if you make a couple of slightly different assumptions, or weight things differently, you can come up with a totally different result.

Again, that doesn't mean that CS is wrong, it's just a question of where you want to spend your costs, and that TCO means different things to different people.

One thing I will quibble with - $1b is their capex spend annually, that's capital outlays on hardware and software for their entire IT organization. He said they saved orders of magnitude on their people costs to operate. A fraction of a fraction of their capex spend.

Comment Re:Openstack bias (Score 4, Informative) 64

Well, VDI is not an openstack use case. But even the traditional use case - it's a purely business decision. They believe that the cost to run and operate VMware is lower than the cost to run and operate Openstack. I know a couple of very large enterprises who have come to that conclusion. I also know one or two that went with openstack and wish they hadn't. Then I know a couple who love their openstack deployment.

The key thing here is that VMware and Openstack are not really 1:1 comparison points. You can run Openstack on top of VMware vSphere. Why would you do this? You want Amazon-like APIs, a real storage service, DRS, and so on. Or you could run it on top of Xen or KVM and save money, but lose functionality. Or you could go out and buy RH's Openstack implementation.

This is a very complex series of decisions, and it's not really easy or possible to say, "Well, we didn't decide to do Openstack because VMware is better"

Comment Re:Why does Paypal need "cloud" ? (Score 4, Interesting) 64

So - there are a couple of reasons why they would want their own cloud:

- Their business isn't actually very static. As you might imagine, they have daily spikes of traffic at particular times of day, likely early evening across the US. It might not be worthwhile for them to do elastic computing for that, but think about holiday periods like Christmas, when their purchase volume certainly goes up.
- Development environments - very often, developers will want sandbox environments to use for a few weeks or months and then get rid of them. Or, they might want to run some analytics on 50-100 nodes and then tear them down.
- Easier infrastructure lifecycle management - abstracting the running OS into a virtual machine makes it much easier to archive out old hardware and onboard new - just migrate the VMs over to a new machine, pull out the hardware, throw it away.
- Rightsizing hardware - cloud allows them to buy a small number of predictable builds and then size their compute to their needs - no need to dedicate an 8-core machine with 8 GB of RAM for an internal email server, or a sandbox to play with MySQL
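To illustrate the rightsizing/elasticity point with made-up numbers, here's a toy capacity function - the baseline node count and per-node capacity are invented for the example, not PayPal's real figures:

```python
# Toy sketch of elastic capacity: size the fleet to observed demand
# instead of provisioning permanently for the yearly peak.
# BASELINE_NODES and capacity_per_node are illustrative numbers only.

BASELINE_NODES = 100

def nodes_needed(requests_per_sec, capacity_per_node=500):
    """Round up to enough nodes for current load, never below baseline."""
    needed = -(-requests_per_sec // capacity_per_node)  # ceiling division
    return max(BASELINE_NODES, needed)

print(nodes_needed(30_000))   # quiet period -> stays at baseline, 100
print(nodes_needed(120_000))  # holiday spike -> scales up to 240
```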

PayPal actually has a very complex business, huge infrastructure, crazy security requirements, tons of applications and people, and is generally a technology heavy company.

Comment Re:Note that "Joel" is involved with this. (Score 1) 188

Pfft, there's no programmer shortage, but there IS a shortage of competent, quality programmers. I've been hiring software developers for years, and have never been able to simply put my listing out there and get competent candidates. You have to go to meetups, get recruiters, offer strong employee incentives, be willing to train people on new languages (or give them the time to pick it up on their own), etc. in order to get top-notch candidates.

But yes, you can get 200 resumes a week from people who usually can't articulate the last project they worked on, are unable to solve a simple programming test in the language of their choice WITH access to the Internet, or can't even explain the basics of how a computer works.

There are a ton of developers out there; most of them are terrible. Anything that improves the base of skilled software developers in NYC is going to be incredibly valuable.

Comment Having done this before, it is possible (Score 2) 331

In a former life, I ran the technical sales organization for a company I started with some friends, and later sold to a much larger organization. So I've seen a couple of different models for how to do this.

The first question is - how are your sales people currently compensated? If they're compensated with a straight percentage commission, or something similar like a sliding percentage based on quota achievement, then the easiest thing to do is to also give your consulting engineers a straight commission on add-on deals that they are involved in. That percentage is typically a fraction of what the sales person makes - for example, if your sales people get 10% commission, then the technical presales folks get between 1-3%. It's critical to understand that the sales person also needs to get their commission, and the SE/presales guy is getting his cut almost like a bonus for bringing the opportunity to the sales person's attention.

This can get tricky, though, because what happens if you have multiple engineers working on one account? You can't very well pay every presales guy who touches every account 1-3%, as your margins will go to hell. In those cases, if you want to keep doing straight percentage, you need to divide it up account by account as opportunities roll in.

The other way to handle that situation is to have revenue targets, and to pay people a bonus based on their achievement, along with a personal target. So, perhaps across all the engineers, they have a target to generate $1m in revenue worth of add-on business in a quarter. If they hit that target, each engineer gets $10k as a bonus, plus a variable amount based on their personal contributions. This can cause hard feelings sometimes because it involves passing judgement on people's contributions, but may be more sustainable, and also helps align the presales person with the overall goals.
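If it helps, here's the comp math from both models sketched out - the rates, target, and bonus amounts below are just example numbers, not recommendations:

```python
# Worked example of the two compensation models described above.
# All dollar figures and percentages are made-up illustrations.

def se_commission(deal_value, se_rate=0.02):
    """Model 1: SE gets a small cut (e.g. 2%) vs. the rep's ~10%."""
    return deal_value * se_rate

def team_bonus(team_revenue, target=1_000_000, bonus=10_000):
    """Model 2: each engineer gets the bonus only if the team target is hit."""
    return bonus if team_revenue >= target else 0

print(se_commission(250_000))  # 5000.0 - SE's cut on a $250k add-on deal
print(team_bonus(1_200_000))   # 10000  - target hit
print(team_bonus(800_000))     # 0      - target missed
```

Note that the personal variable component in model 2 would sit on top of the fixed bonus, which is where the "passing judgement" friction comes in.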

Which brings me to the last point - impartiality. It's true that sales people are often incentivized to sell things that the customer doesn't need, or at inflated prices, because of their commission structure. However, engineers tend not to think that way, partially because as a percentage of their income, commission represents a dramatically lower amount compared to a sales person, and partially because they understand that if they help sell something the customer really doesn't need, they're going to be the ones who have to implement it or help fix the situation once it's screwed up. Also, if you set the revenue targets to be communal, it helps encourage people to think about the business as a whole, instead of closing one gigantic deal.

Hope this helps.

Comment Re:Ahhh... I Finally Get It! (Score 1) 973

So, full disclosure, I am an appreciator of Jason Robert Brown's music, and I have met him on several occasions, though we're not at all close.

I think something that a lot of people miss on this thread, and your post touches on a little bit, is that JRB is not a rock musician, or in a band - he's a musical composer. Fundamentally, he spends a whole bunch of time writing a show, creating the music, spends months, usually years getting it right until it's finally performed for the first time - and then that's it. He's done. He doesn't get to go on tour and perform it, nor does he get to sell the sequel.

His only real revenue sources come from:
- show licensing
- cast recordings
- sheet music

What would he go on tour with? Himself performing the parts from his show?

Also, anyone who talks about the era of Shakespeare as the golden era of theatre has no idea what they're talking about. In those days, playwrights had to create shows in days or weeks, get them into the theater, after which the good parts were stolen by other playwrights, and yes, no one considered the idea of a play to be something "special".

Guess what? 99% of them were terrible. Even talented writers would churn out show after show of utter shit, just because, why not? They're not going to get paid once the show is performed, so what's their incentive to make quality over quantity?

Meanwhile, JRB can spend a year or years writing a show, because he knows that if the market accepts that his show is worthwhile, he'll make his time and money back.

As an example along those lines, Stephen Sondheim has talked about how his royalties from West Side Story allowed him to focus on creating new, great work instead of stressing about shows.

Comment Here's what you have to consider (Score 3, Informative) 410

Is this something that's good for your career? Is it a promotion? Is it a lateral move?

If it's a promotion you didn't ask for, and you turn it down for very clear reasons, AND you're doing a good job at your current role, there's a good chance you'll be fine. After all, a valuable employee at Position X who turns down a promotion to X+1 is still valuable at X. However, it is likely that future promotions will be unavailable to you, at least for a while, as you'll be perceived as "happy where you are".

On the other hand, if you're being moved laterally to a non-technical position, there's a decent chance they say something like, "Well, lunchlady55 is smart, and very organized, good manager, but not really hands-on technical enough for what we need. We don't want to lose lunchlady55, but we're suffering because of L55's technical weaknesses. Why don't we move L55 laterally to a project manager-type role where we can leverage his/her strengths and backfill the technical position with someone who's very technical but requires lots of oversight"

In that situation, they're actually being good managers, by recognizing that they have a valuable employee who is just in the wrong position, and trying to rectify the situation. On the gripping hand, they're being bad managers, because if this is the case, it should really be explained to you.

If the latter situation is the case, you put them in a much rougher position, because they like you, but you're not meeting their needs in one area or another. In this case, you may lose your job.

The best way to handle this is to have an open and frank conversation with your manager. Talk about what the organizational chart looks like. Who will you be reporting to? Is there a raise or other compensation for being on-call? Be frank - are there concerns about your current job performance that led to this lateral move? Are they eliminating your position and they're just trying to protect you personally?

Based on all this, you can make an informed decision about what the situation is. You may want to try to negotiate yourself a better deal. For example, you're on call for the weekends, but whenever you have to do off-hours work while on-call, you get 2x that amount of time off your regular day during the week. Or you get paid for on-call time. Don't try to negotiate this until you understand why this is happening.

Comment Re:400 CPU cluster or 400 node botnet? (Score 5, Informative) 175

Actually, in this case, it's very straightforward. He's using Amazon EC2. EC2 charges by the hour, and all you have to do is spin up the number of servers you want. In fact, I happened to run the numbers on what the costs are for running 50 "8-core" servers, and it happens to be...$34/hour. So, what he did was say, "If I run two jobs an hour, I make a small amount of money. If I run 4-5 jobs per hour, I make more money"
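To spell out the economics - the per-job price below is a made-up placeholder, since I don't know what he actually charges:

```python
# Back-of-the-envelope version of the EC2 math above: 50 eight-core
# instances running about $34/hour total. The price_per_job default is
# a hypothetical figure purely to show the shape of the calculation.

CLUSTER_COST_PER_HOUR = 34.0

def hourly_profit(jobs_per_hour, price_per_job=20.0):
    """Revenue from jobs completed in an hour minus the cluster rental."""
    return jobs_per_hour * price_per_job - CLUSTER_COST_PER_HOUR

print(hourly_profit(2))  # 6.0  - two jobs/hour: a small amount of money
print(hourly_profit(5))  # 66.0 - five jobs/hour: more money
```

The cluster cost is fixed per hour, so every extra job in that hour is nearly pure margin - which is exactly the incentive he describes.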

This is, of course, a textbook use case for EC2, and I'm surprised no one has done it sooner.

Comment Re:hmm (Score 1) 381

Well, if Oracle does hand this out, I certainly don't get any of it. And for the record, I think that Oracle is overpriced, and believe their pricing model is not sustainable over the long term.

But you see my point, no? These are features that are in more traditional enterprise RDBMS systems. If you don't need them, you don't have to spend the money. If you need them, you can spend the money, or you can cobble together a system on your own. If you do cobble together a system, you're responsible for supporting that infrastructure internally.

You've made several separate points:
- RDBMS technology has never been "trialed under real-world conditions" and "transactions are a joke for scalability" - neither of those statements is true, nor have you backed them up with any data. A ~100TB DB2 or Oracle database is hardly impressive these days. (I'll also throw in that if ACID compliance bothers you that much, you can disable it at a session, table, or database level in an RDBMS. At least you have the option.)
- "Distributed atomic transactions don't scale" - this is demonstrably true, but irrelevant to the point at hand. 2PC has its place, but in the RDBMS world, has been largely discarded for the exact reasons you specify.
- "It's a good thing you have all those tools ...because you need them to make that complex pile of enterprisey spaghetti work. If only there was something out there that just worked, and didn't need all that hand-optimizing and tool-fiddling to kludge it into usability." - I think it's hysterical that you would throw this elsewhere into the thread, when you freely admit that NoSQL-esque systems are missing all sorts of features that you expect someone to implement outside of the solution.

Let me rewrite your sentence - "It's a good thing you have all of those third-party software packages...because you'll need them to get that immature open-source key-value store to work. If only there were something out there that had a fully integrated stack of DR, HA, and management capabilities that just worked, and didn't need all of that custom infrastructure around it, and had a large community of people and knowledge who know how to manage and operate it"

Let me restate AGAIN my point - NoSQL data stores are interesting, have their place, and will no doubt continue to grow and be an important part of the data management ecosystem. HOWEVER, to say that traditional RDBMS systems are done for, or don't scale, or are useless, displays an ignorance of reality that undermines the whole discussion.

Comment Re:hmm (Score 2, Informative) 381

I sort of agree with you, from the perspective that there are crusaders on either side - people who insist that traditional RDBMSes are the Only Way, and people like you who insist they've "never been trialed under real-world conditions". Both statements are clearly incorrect on their face.

However, there are a multitude of features that these systems have that are not available in NoSQL systems, or only available in such a watered-down form that it's unfair to compare the two. A list:

- On-disk encryption
- Compression
- Schema/data versioning (present one picture of data to one set of clients, while presenting another layout of the same data to another set during a data migration)
- Automated failover between servers, clusters, facilities, datacenters
- "Flashback" - say "I want to run a query against my data as it looked last week at 3pm", and it just works.
- Shared-disk clustering

As far as transactions go, they may be a "joke" for scalability (not quite sure what that means), but they're awfully useful when dealing with sensitive information you need ACID compliance for. For example, I would prefer my bank not use an "eventual consistency" model when dealing with my credit card transactions.

Now, as I said above, a relational database *may not* be the right decision for your application. But the idea that relational databases don't scale is ridiculous. I've seen petabyte data warehouses running Teradata that absolutely scream through data. I've seen Oracle systems that do tens of thousands of write transactions per second, and several times that in reads. They exist.

Comment Re:hmm (Score 4, Insightful) 381

Uh, no, that is not correct. Relational DBMSes such as Oracle, Teradata, DB2, even SQL Server are all designed to scale into the multi-terabyte to petabyte range. The issue is one of a couple of things:

- Cost - "real" relational databases are expensive. I once had a conversation with someone who worked at Google, who talked about how much infrastructure they have written/built/maintain to deal with MySQL. Many of those problems were solved in an "enterprise" DBMS 3-10 years ago. However, the cost of implementing one of those enterprise DBMS is so high that it is cheaper to build application layer intelligence on top of a stupid RDBMS than purchase something that works out of the box
- Workload style - most of the literature around tuning DBMS is for OLTP or DSS workloads. Either small question, small response time (show me the five last things I bought from amazon.com) or big question, long response time (look through the last two years worth of shipping data and figure out where the best places to put our distribution centers would be). Many of these workloads are combos - there could be very large data sets and complex data interdependencies, with low latency requirements. It may be possible to write good SQL that does these things (in fact, I know a couple luminaries in the SQL space that will claim just that), but the community knowledge isn't there.
- Application development - when you're building your app from scratch, you can afford to work around "quirks" (bugs) and "gaps" (fatal flaws) to get what you need. This dovetails with the other issues, but when your core business is building infrastructure, it's worth your while to deal with this. When your core business is selling insurance or widgets, or whatever, it is not.

None of this is to say that the "nosql" movement is a bad thing, or that there's no reason for its existence, or that no one should bother looking at it. However, there is a definite trend of "this is so much better than SQL" for no good reason. SQL has scaled for years, and I know loads of companies who work with terabytes and terabytes of data on a single database without any issue.

A far more interesting discussion is the data warehouse appliance space - partitioning SQL down to a large number of small CPUs and pushing those as close to the disk as possible.

Comment Re:Yet another IT company gets to live my dream! (Score 1) 189

I'm not quite sure what you mean by your second comment, but to be clear, they *lost* $17M net of their $3.4M in revenue - so their gross expenses were $20.4m.

Hi Bernie, how are you able to post from your cell?

What are you talking about? They have:
- $3.4m in revenue
- $17m net loss

That means they lost 17 million dollars after earning 3.4 million - the 3.4m made up for some of the money they lost. If they had made zero dollars in revenue, they would have lost $20.4m - instead, they only lost $17m.
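Spelled out:

```python
# The arithmetic from the comment above, in millions of dollars.
revenue = 3.4
net_loss = 17.0
gross_expenses = revenue + net_loss  # what they actually spent
print(gross_expenses)  # 20.4
```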
