Ask Slashdot: What Type of Asset Would You Not Virtualize?
An anonymous reader writes "With IT and Data Center consolidation seemingly happening everywhere, our small shop is about to receive a corporate mandate to follow suit, preferably via virtualization. I've had success with virtualizing low-load web servers and other assets, but the larger project does intimidate me a little. So I'm wondering: are there server types, applications and/or assets that I should be hesitant about virtualizing today? Are there drawbacks that get glossed over in the rush to consolidate all assets?"
Busy databases (Score:5, Insightful)
Re:Busy databases (Score:4, Funny)
Re: (Score:2, Informative)
If you refer to the VMware "vCenter" VM, you are wrong.
Virtualizing it gives you many advantages, the same ones you get from virtualizing any server. Decoupling it from the hardware, redundancy, etc.
Why would you NOT virtualize it ?
Just make sure you disable features that would move it around automatically so you can actually know on what host it's supposed to be running.
Re:Busy databases (Score:5, Funny)
'cause if you knock it offline by accident, your easiest tool with which to bring it back online is gone?
Kind of like how it's a bad idea to mess with a host's eth0 settings if you're currently logged in via ssh through eth0.
Re:Busy databases (Score:5, Informative)
'cause if you knock it offline by accident, your easiest tool with which to bring it back online is gone?
Kind of like how it's a bad idea to mess with a host's eth0 settings if you're currently logged in via ssh through eth0.
In Oracle VM Server for x86 and VMware vSphere (and probably most other virtualization platforms), the VMs run on hosts independently of the management platform, i.e. vCenter for vSphere.
vCenter is not considered critical for the operation of VMs. If vCenter dies, your VMs will continue to run without interruption. You simply lose the ability to use advanced features such as vMotion, Storage vMotion, DRS, HA and vDS. However, you can still log into an ESXi host and start up another instance of vCenter. This is no different than if the physical machine hosting vCenter had died.
As far as I know, the upcoming highly available version of VMware vCenter (Heartbeat), which runs two instances of vCenter together, is ONLY available in VM form; I don't know of a physical deployment of vCenter Heartbeat (but I could be wrong).
Re: (Score:3)
We're running vCenter on a pair of physical (non-VM) servers with heartbeat. Heartbeat is a huge pain to get working and apparently pretty much requires Windows Active Directory and MS SQL (we would have preferred Oracle since we already had that in place, but our VMWare support folks couldn't get the combination of vCenter, Heartbeat and Oracle working together.)
Re:Busy databases (Score:4, Interesting)
A couple of corrections: HA (High Availability) works without vCenter; if a host running vCenter dies, HA will restart it on another host like any other VM. A vDS will continue to function; you just can't connect VMs to the distributed switch.
You are correct that it is only the ability to reconfigure HA and vDS that is lost with vCenter. Loss of vDS would be critical to the operation of VMs.
I have never actually run vCenter Heartbeat. They effectively make you pay for 2 copies of vCenter and 1 copy of heartbeat. I've had customers interested... until the price comes up. Can't say I disagree.
Re: (Score:3)
You are thinking of FT, not HA.
(Which is easy to do, since really their names are back to front with regards to functionality.)
Re: (Score:3)
Re:Busy databases (Score:4, Informative)
This isn't really a problem. First, if you have a reasonably sized infrastructure, it makes sense to build a redundant vCenter instance... And IIRC, it may be clustered. Second, if you kill your vCenter instance, you can still connect directly to your ESXi hosts using the vSphere client. You'll still retain the ability to manage network connections, disks, access the console, etc.
Re:Busy databases (Score:4, Insightful)
You usually know you shouldn't mess with eth0 in that situation...but you do it anyway.
Re: (Score:3)
Re:Busy databases (Score:5, Informative)
'cause if you knock it offline by accident, your easiest tool with which to bring it back online is gone?
However, your vCenter is much more likely to fail hard if you place it on a separate physical server.
Physical servers fail all the time. By virtualizing vCenter, what you accomplish is that you protect it using HA; if one of your VM hosts fails, you can have two hosts standing by to start the VM.
You can also use HA to protect the SQL server that vCenter requires to work, and you can establish DRS rules to indicate which host(s) you prefer those two to live on.
If some operator erroneously powers it off, or mucks it up, there is still an easily available tool: the vSphere Client (the old VPX client) can be used to connect directly to a host and start the VM.
You can also have a separate physical host with the vSphere CLI installed, to allow you to power on a VM that way. It makes sense to have perhaps two cheap physical servers to run such things, maybe doubling as DNS or AD servers, to avoid circular dependency problems with DNS or authentication; these would also be the servers your backup tools should be installed on, to facilitate recovery of VMs (including the vCenter VM) from backup.
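To make that concrete, here is a rough sketch of powering a VM (say, the vCenter VM) back on by talking to a surviving host directly, with no vCenter involved. The comment above mentions the vSphere CLI; this sketch instead shells out to vim-cmd in the ESXi shell over SSH, and it assumes SSH/ESXi Shell access is enabled -- the host address and VM ID are placeholders, not anything from the thread:

    import subprocess

    HOST = "root@esxi01"  # hypothetical management address of a surviving ESXi host

    def esxi(cmd):
        """Run a command in the ESXi shell over SSH and return its output."""
        return subprocess.run(["ssh", HOST, cmd],
                              capture_output=True, text=True, check=True).stdout

    # List the VMs registered on this host, with their numeric IDs.
    print(esxi("vim-cmd vmsvc/getallvms"))

    # Pick the vCenter VM's ID out of that listing, then power it on.
    vmid = "42"  # placeholder
    print(esxi("vim-cmd vmsvc/power.on %s" % vmid))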
That's fine and valid, but vCenter itself should be clustered. Unless you are paying through the nose for vCenter Heartbeat, running it as a VM is the best common supported practice for accomplishing that.
Re: (Score:3)
For those with an aversion to running a Windows SQL box just for vCenter, try the SuSE/Postgresql based VMWare vCenter Server Appliance.
If it were PostgreSQL, and didn't have so many limitations, VCSA would have been a welcome addition. Last I checked the VCSA is SuSE + Oracle Express DB, and if you choose an external DB with the VCSA, it has to be IBM DB2 or Oracle; SQL Server is not a supported external db of the VCSA. With the VCSA Oracle express, you have some significant limitations, just like
Re: (Score:3)
Re: (Score:3)
Re: (Score:3)
If my VMware cluster has problems, not having the vCenter server around makes it harder to get things working again
Not sure why it would - why not just use vSphere client to connect directly to the host/s? If you're running HA, you can try the SuSE/Postgresql based VMWare vCenter Server Appliance.
Re: (Score:3)
You can't (or at least you couldn't at the time; not sure about vSphere 5) initiate any vMotions without vCenter, so when you have an ESX host with intermittent storage connectivity problems that make your vCenter VM hang, you can't easily vMotion the remaining VMs off of that physical host without vCenter.
You do realize how easy it is to re-register your vCenter VM on another host, right? Once it comes up it'll clean up the "orphaned instance". Just power it off, log into another host directly, browse the datastore, register and boot. Dead simple. I did this in our DR test environment last week when we had vCenter problems and as soon as vCenter connected to all the hosts and realized there was another copy of the VM it just made the appropriate assumption that the one that was powered on was the one I wante
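For reference, the re-register step described here can also be scripted from the ESXi shell; a minimal sketch, again over SSH, where the host and datastore path are placeholders (browse the datastore to find your real .vmx path):

    import subprocess

    HOST = "root@esxi02"  # a healthy host with access to the shared datastore
    VMX = "/vmfs/volumes/shared-ds01/vcenter/vcenter.vmx"  # placeholder path

    def esxi(cmd):
        return subprocess.run(["ssh", HOST, cmd],
                              capture_output=True, text=True, check=True).stdout

    # Register the orphaned VM on this host (prints the new VM ID), then power it on.
    vmid = esxi("vim-cmd solo/registervm %s" % VMX).strip()
    esxi("vim-cmd vmsvc/power.on %s" % vmid)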
Re:Busy databases (Score:4, Informative)
A corollary to this is to make sure you have a local physical nameserver configured in all of your systems. Basically, go through a cold-start power-up sequence and figure out what you need, in what order, to get things back online. Testing the resulting procedure would be a good idea too ;-)
Re:Busy databases (Score:5, Informative)
It's just fine to do that. However, a few things are important:
(1) You need at least 1 point of access to the environment at all times -- e.g. you need a "backup way in" that works and gives you full control, even if for some reason no VMs are running (worst case).
(2) You need to ensure there are no circular dependencies -- if all VMs are down, your environment is still sufficiently operational for you to correct any issue. An example of a circular dependency would be that you have virtualized a VPN server/firewall required to gain access to your ESXi hosts; yeah, it's secure from an integrity standpoint, but what about secure from an availability standpoint, and secure from a disaster recovery standpoint?
(3) If you have virtualized your active directory servers, you should ensure you have a backup plan that will allow you to authenticate to all your hypervisors, virtualization management, and network infrastructure/out of band management, EVEN if AD is crippled and offline.
(4) If you have virtualized DNS servers, you should have at least 2 DNS servers that will probably not fail at the same time, because you have eliminated as many common failure modes as possible.
(a) Redundant DNS servers should not be on the same server. Ideally you would have two sites, with separate virtualization clusters, separate storage, and 2 redundant DNS servers at each site.
(b) Redundant DNS servers should not be stored on the same SAN, or the same local storage medium.
(c) If your Hypervisor requires a DNS lookup to gain access to the SAN, you should have a backup plan to get your environment back up when all DNS servers are down. Each virtualized DNS server should be stored either on separate DAS storage, or on shared storage that can be accessed even when DNS is unavailable.
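A quick way to keep yourself honest about (2)-(4) is a cold-start check you can actually run: verify that every piece of infrastructure you would need with zero VMs running is reachable by raw IP, with no dependency on DNS, AD, or any VM being up. A minimal sketch -- every address and port below is a placeholder for your own environment:

    import socket

    # Everything listed here must be reachable with no DNS, no AD, and no VMs up.
    MUST_BE_REACHABLE_BY_IP = {
        "esxi-host-1 mgmt":  ("192.0.2.11", 443),
        "esxi-host-2 mgmt":  ("192.0.2.12", 443),
        "iSCSI storage":     ("192.0.2.50", 3260),
        "out-of-band mgmt":  ("192.0.2.90", 443),
    }

    for name, (ip, port) in MUST_BE_REACHABLE_BY_IP.items():
        try:
            socket.create_connection((ip, port), timeout=5).close()
            print("OK   %s (%s:%d)" % (name, ip, port))
        except OSError as exc:
            print("FAIL %s (%s:%d): %s" % (name, ip, port, exc))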
Re: (Score:3)
An example of a circular dependency would be that you have virtualized a VPN server/firewall required to gain access to your ESXi hosts; yeah, it's secure from an integrity standpoint, but what about secure from an availability standpoint, and secure from a disaster recovery standpoint?
Simpler example: all your virtualization hosts get their addresses from a DHCP server with hard-coded MAC address reservations... and the DHCP server is virtualized. I almost accidentally did this one.
Re:Busy databases (Score:4, Interesting)
Re: (Score:2, Informative)
In enterprise, aren't most busy DB servers using storage on the SAN, which would be exactly the same place where it would be if the server was virtualized?
Re: (Score:3)
Very likely, and this does mitigate things.
If the physical host has a lot of VMs using a lot of LUNs on the SAN, then there may still be contention for bandwidth on the Fibre Channel card. Luckily this does not come with the massive overhead that is associated with contention for bandwidth on a local disk drive, but it's still a potential bottleneck to be wary of.
Re: (Score:3, Insightful)
Re: (Score:3, Informative)
In enterprise, aren't most busy DB servers using storage on the SAN, which would be exactly the same place where it would be if the server was virtualized?
In an enterprise environment all VMs (of any type) should be coming from external storage either SAN (FC, iSCSI) or NAS (NFS). Storage, Network and Hosts are usually separated into layers with full redundancy. No single point of failure should exist. Even if a host needs to be taken down for maintenance or upgrades etc the VM is migrated live to another host without interruption. Because the data is external it is accessible to all hosts and the hosts can be treated as a commodity item and swapped in/ou
Re: (Score:2)
The problem is when you take and virtualize without a thought towards optimizing the hardware to ensure that you don't cause problems for yourself later on down the road. Now that said I don't virtualize database servers in prod (I do in dev/test - but that is different), however this has nothing
Re: (Score:3)
As long as you think there's no difference between terrorists and hippies, I'm not interested in anything else you say.
Re: (Score:3)
Shared disk does not make I/O happy.
This was addressed during the VMWorld 2011 conference. VMWare is only limited by the amount of hardware you throw at it just like any other x86 platform: Achieving a Million I/O Operations per Second from a Single VMware vSphere 5.0 Host
http://www.vmware.com/resources/techresources/10211 [vmware.com]
You can go with IBM/Power or Oracle/SPARC if you have exceptionally large systems, but if you're coming from x86 applications there are minimal CPU, Memory, IO limitations which can't be resolved. The only limitations for
Re: (Score:2)
On the other hand, you end up spending a lot of money for the perceived benefits of virtualization (hardware abstraction, portability, etc).
We virtualized SQL Server 2008 R2 and ended up going back to Microsoft clustering. With clustering we still get HA but do not have to pay for VMware licenses. On VMware we were dedicating entire hosts to a single guest due to the high RAM utilization. In addition we were also taking the virtualization hit on the resource level by abstracting out disk and CPU access.
Re: (Score:2)
That's a value proposition. Which costs more, the up front costs for virtualization, or the loss of business during downtime, and cost of emergency hardware migrations?
Clustering is a great solution, but most things that can be solved with clustering are probably not solved by virtualization. They're two different solutions for different kinds of reliability risks.
Re: (Score:3)
The only limitations for x86 virtualization are proprietary cards...
And license dongles. Some work. Some don't. Worst is when they work "sometimes".
VMWare is only limited by the amount of hardware you throw at it just like any other x86 platform...
Consolidating multiple low load servers ... say 9 physical low load servers onto 3 virtual hosts, there's tremendous value there. If one of the hosts goes down, you can even run 4/5 on the remaining two while you fix it... the 3 virtual hosts are cheaper than the
Re: (Score:2)
License dongle issues should be punted back onto the vendor of the software in question (repeatedly). It may not work the first time, but enough admins and their bosses raising hell with support and sales would hopefully push them to make their garbage compatible with ESXi, Xen, etcetera. USB pass-through compatibility is trivial and works for every consumer device using USB 1.1 and 2.0 standards. If they are giving you parallel or serial port dongles, then there are bigger problems with how the vendor d
Re:Busy databases (Score:5, Interesting)
Shared disk does not make I/O happy.
PCI-Express SSDs are advertised to deliver 100,000 to 500,000 IOPS. Has anyone experimented with PCI-Express based SSD solutions in their VM hosts to keep high-IO VMs like VDI and SQL from swamping their SAN infrastructure?
http://www.oczenterprise.com/interfaces/pci-express.html [oczenterprise.com]
Re: (Score:3)
Re:Busy databases (Score:5, Informative)
Virtualisation != shared disk IO.
If you're serious about virtualisation it's backed by a SAN anyways, which will get you many more IOPS than hitting a local disk ever would.
We virtualise almost everything now without issue by provisioning for zero contention. Our VM hosts are 40-core (4-socket) machines with 256GB RAM. Need an 8-core VM with 64GB RAM to run SQL Server? Fine.
We save BUCKETS on power, hardware, and licensing (Windows Server 2008 R2 datacenter licensing is per socket of the physical host) by virtualising.
Re: (Score:3)
Database I/Os tend to use random access patterns much more than other I/O workloads, such that latency is frequently the key performance factor, not throughput. SANs can give you lots of IOPS when most of those IOPS are substantially sequential and can take advantage of a large stripe block being read into a cache from multiple drives at once. However if each IO requires a head seek, a SAN's performance won't be substantially better than local disk and, when a DB's IO requests are queued with other VM IO re
Re:Busy databases (Score:5, Insightful)
One of the systems I manage has a 1.3TB MS SQL Server database. It absolutely flies.
The same SAN also hosts a few 8-10TB oracle databases with no issues.
What idiot shares spindle sets on a VM DB setup? OS goes to the shared pool, each DB gets its own set of LUNs depending on performance needs. This isn't rocket surgery.
Re:Busy databases (Score:4, Interesting)
At most places you don't get to buy SAN often enough or large enough to have the luxury of allocating a dozen spindles to three RAID-10 sets. Eventually management's store-everything, spend-nothing philosophy causes those dedicated RAID-10 sets to get merged and restriped into the larger storage pool, with the (vain) hope that more overall spindles in the shared pool will make the suck less sucky.
In that kind of situation, it's not always crazy to spec a big box with a dozen of its own spindles for something performance oriented because you can't be forced to merge it back into the pool when the pool gets full.
Re:Busy databases (Score:4, Interesting)
That is more of an issue of accounting.
At a shop I used to work at, we broke out the cost per spindle, and that purchase had to be paid for by the org that needed the resources. Absolutely everyone and their mother wanted completely horrendous amounts of resources for every two-bit project. However, since we were in the business of actually ensuring there was a return on the investment, we had to enforce resource allocation.
This translated to a few interesting things. Projects had to be approved and had to be assigned a budget. Resources were fairly well managed, and projected utilization was also fed into the overall purchase. We could not actually purchase new hardware without funds, and you can be damn sure we weren't dipping into our org funds to pay for someone else's project. If the PHB had enough power to say "do it", he also had the power to allocate resources.
Probably the only thing that made any of this actually work was that budgets were real. I.e., this is the cash you get to fund your project... don't waste it all on licensing or else you get no hardware. (I also said a few things.) Head count was also tied to this resource allocation. We had man-hours to apply to a given amount of staff, and the only way to get more help was to ensure budgets were enforced. We were pretty good about enforcing budgets down to even the lowliest tech, because overspending could very well end that lowly guy on the food chain. (Being consciously aware of these things helps to turn your most ineffective resource into the most effective!)
Now, I had one moronic group under-spec and overspend their budget. I had to knock on some doors, but I effectively managed to get them donated hardware from groups who had way over-killed on budget planning. They were grateful, and I brought costs down by not putting more junk on the DC floor. However, I sometimes think I should have let survival of the fittest win out there.
Re: (Score:2)
Shared disk does not make I/O happy.
It is just FINE to virtualize them, and shared disk is not an issue in any way whatsoever. However, your consolidation effort will likely not be successful if done haphazardly; correct sizing, design, and capacity planning are important, and choosing good hardware is important -- not just "buying whatever server is cheapest and seems to have enough memory and a large enough amount of disk space" -- because there are a LOT of details that matter, especially regarding CPU, CPU chip
Re: (Score:3)
Don't use RAID5 for busy databases. Don't dare use RAID6 for any database.
Don't use SATA drives for busy databases.
In fact, if you are serious about consolidation,
don't use RAID5 or RAID6 period.
Treat the storage like a black box. Come up with an IOPS, bandwidth, and response times and ensure the metrics are hit.
IBM sells an XIV composed entirely of off-the-shelf components on SATA (http://opensystemsguy.wordpress.com/). EMC has dynamically allocated tiering called FAST (http://www.emc.com/about/glossary/fast.htm) over SSD/SAS/SATA. NetApp has cache acceleration but uses RAID-6. They all produce systems that meet enterprise-level loads while violating those rules.
Let the vendor size out a system that
Re: (Score:3)
Let the vendor size out a system that meets your requirements (metrics) without putting stipulations like above in the RFQ. They only cause problems and often eliminate viable options.
Treat the storage like a black box. Come up with an IOPS, bandwidth, and response times and ensure the metrics are hit.
Spoken like a true salesdroid; "Please ignore the man behind the curtain," just trust our performance claims, even when they defy what physics and math say the average performance should be. The vendor w
First choice (Score:5, Funny)
Re:First choice (Score:5, Funny)
Already done. Most companies have hundreds of managers sharing the processing, memory, and storage facilities of one brain. Too bad the power and wasted space savings don't scale.
Not virtualize (Score:5, Funny)
Assets not to virtualize:
1) Women
2) Beer
3) Profit
fb, #3? (Score:2)
Re:Not virtualize (Score:5, Funny)
Re:Not virtualize (Score:5, Funny)
Going from zero girlfriends to one imaginary girlfriend could, I suppose, be counted as an improvement. Going from one real girlfriend to one imaginary girlfriend... not so much, although, mathematically speaking, all girlfriends are partially imaginary.
I assume that by "partially imaginary" you mean they are all complex.
You would then be right.
Re: (Score:2)
Also, iPads, iPhones, iPods, TVs.
Re:Not virtualize (Score:4, Funny)
Assets not to virtualize:
1) Women
I've already virtualised all the hot chicks in the office in my head. That's what gets me through those cold lonely nights....
Re:Not virtualize (Score:4, Funny)
Wrong... This is slashdot... Virtual Women are a necessity.
cognos (Score:3)
IBM even says to give it its own physical machine if you're going to virtualize it.
Re: (Score:3)
That is because each box is running a Java container requiring a terabyte of RAM to render some HTML output.
Maybe the Cafeteria? (Score:2)
The company cafeteria isn't all that great, but jeez nothing is less satisfying than a virtual burger and virtual fries.
Beh (Score:2)
Gold, houses, aircraft carriers, 17th century Dutch paintings.
Re: (Score:3)
To be serious for a moment... (Score:4, Insightful)
How about backups?
Consolidating and virtualizing your backup servers sounds like a recipe for trouble to me.
Dan Aris
Re: (Score:2)
We run Netbackup and we virtualized the master node, but the media servers are still physical.
Re: (Score:2)
I run a large backup environment with Tivoli on IBM pSeries. We carve the pSeries up into multiple LPARs which write to a physical library which is logically carved up. I have to separate the backup environments due to regulatory issues and virtualizing both the backup servers and the library makes things much easier for me. I can set up between 4-8 LPARs per virtual library, and given the horsepower of the pSeries that I'm using, I don't have a ton of physical servers to manage.
Sure (Score:5, Funny)
I would not virtualize the servers that are running the virtual machines.
Re: (Score:3, Interesting)
VMware ESXi is actually a supported guest for VMware Workstation...
Whilst that may sound crazy, it makes system design, testing, and generally skilling up a lot easier.
Databases and Heavy memory java apps (Score:3, Informative)
Well, yes, databases would make a poor virtualization target. So would your heavy-memory-usage Java app, like the company app server using a terabyte of RAM to display the department wiki.
Re: (Score:2)
Yes, your little MySQL company-homepage WordPress site is just fine to run virtualized. I am talking about enterprise databases.
Re: (Score:2)
I've always had the same belief based on experience that databases shouldn't be virtualized. However, recently I was thinking about experimenting with adding in drives dedicated to the database and only the database. I'm not a hardware guy and it's been years since I was a real sys admin so my thinking may be completely off. Would that be feasible? I'm not talking about a little database for a wordpress blog either. I'm talking about a database with hundreds of millions of rows in several different tables.
Re: (Score:2)
You take a pretty good hit on IO in a VM, and your average database server will use as much RAM as you can throw at it. This means it's rarely a good idea to virtualize production DB servers.
Re: (Score:2)
It's not the database, it's the application
a database with strange performance variations can draw out race conditions in poorly written applications
If the virtualized database takes too long to respond and an internal error happens, that can create quite a mess.
Much better to have a database server with guaranteed response time.
Plenty (Score:2)
Seems like a front-end serving statically cached content is a great match for virtualization. DB servers and search servers (Solr, etc.) aren't a good match IMHO, unless you have a very well implemented sharding/horizontal scaling solution. If you pre-generate your content, we've typically used hard boxes for those as well, but you may benefit from virtualizing those if you want to easily scale horizontally (assuming you can absorb the hypervisor overhead).
Anything with strict timing constraints (Score:5, Insightful)
Don't virtualize anything requiring tight scheduling or a reliable clock, such as a software PBX system performing transcoding or conferencing.
http://www.vmware.com/files/pdf/Timekeeping-In-VirtualMachines.pdf
Re: (Score:3, Interesting)
Re: (Score:3)
Don't virtualize anything requiring tight scheduling or a reliable clock, such as a software PBX system performing transcoding or conferencing.
Pffft. We're running Cisco's VoIP stuff on one of their Cisco UCS chassis here. Not a problem, and entirely supported.
NTP hasn't been a problem for years, so long as you read and understand the VMware document and have some reasonable knowledge of NTP (more knowledge than the people packaging ntp for Red Hat have is, unfortunately, required).
Don't *ever* fallback to loca
Re:Anything with strict timing constraints (Score:4, Interesting)
Get Nagios to monitor each VM, and each host (meh, not so important -- only for ESXi host logfiles), compared to the NTP server(s), and compare the server against a smattering of external hosts (perhaps including your country's official time standard if you're a government organisation). We're monitoring 250 VMs here, and none of them have ever been more than 0.1 seconds out.
I forgot to say: monitor against external NTP providers (asking Networks to punch holes through firewalls for your monitoring host(s) appropriately) even (especially) when using super-expensive GPS clocks as your stratum 1 source. You have to remember that GPS receivers are manufactured by cheapest-bidder incompetent fools who don't even understand how TAI-UTC works, hence why they're lobbying to abolish UTC time. Symmetricom, I'm looking at you. Good thing leap seconds are updated in the ephemerides at UTC=0, so on the east coast of Australia they are applied erroneously 3 months early, when optical telescopes aren't observing the sky.
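If you want a clock-offset check like that without a full Nagios config, here is a rough Nagios-plugin-style sketch. It uses the third-party ntplib module; the server names are placeholders, and the 0.1-second warning threshold just mirrors the figure in the comment above:

    import sys
    import ntplib

    SERVERS = ["0.pool.ntp.org", "1.pool.ntp.org"]  # replace with your internal + external sources
    WARN, CRIT = 0.1, 0.5  # clock offset thresholds, in seconds

    worst = 0.0
    for server in SERVERS:
        try:
            resp = ntplib.NTPClient().request(server, version=3, timeout=5)
            worst = max(worst, abs(resp.offset))
        except Exception as exc:
            print("UNKNOWN: could not query %s: %s" % (server, exc))
            sys.exit(3)

    if worst >= CRIT:
        print("CRITICAL: clock offset %.3fs" % worst)
        sys.exit(2)
    elif worst >= WARN:
        print("WARNING: clock offset %.3fs" % worst)
        sys.exit(1)
    print("OK: clock offset %.3fs" % worst)
    sys.exit(0)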
Security (Score:2)
Pretty much anything can be successfully virtualized if you throw enough hardware at the host. Just keep in mind that these machines are all actually running on the same processors, and there's probably going to be a way to escalate rights from VM to host or VM to VM. In your environment this may not be an issue, but it's worth keeping in mind.
Non x86 HW/Software or Certified systems (Score:2)
The only workloads that you can't really virtualize tend to be things like OS/400 (though that is where things like LPARs can come in), or workloads that make a lot of privileged calls to the CPU or rely on a specific instruction set. There are also a slew of non-technical reasons I've seen, like in healthcare or pharma, where a specific machine is written into a specification for drug manufacturing or such.
Even still there really aren't any workloads you can't virtualize and realize some sort of benefit from. Even those
A few types I'd refrain from virtualizing (Score:2, Informative)
- Telephony and real-time voice/video servers (Asterisk, for example). You don't want to explain to your big boss why his phone lines are having hiccups :) Preferable to have an old Pentium running that one somewhere.
- Real-time servers, like game servers (Minecraft), as they constantly take up resources.
- NTPD servers (Network Time Protocol). Worst case, run it at layer 0 (the host machine), but not in a VM.
If you look at the pattern, these are all real-time services that have very little leeway for latency. Yes, it's
High Performance Clusters (Score:3)
Virtualize ALL THE THINGS (Score:2)
except BIG backend databases and keep your DBs separate from any disk groups used for virtualization
Re: (Score:2)
Agreed on the databases ... although I've heard some interesting ideas about using database disks for backups of other systems.
Basically, you spread your database across the inner 10% of the disks ... then use the other 90% for your backups of other systems. When the databases aren't at peak, you run the backups.
This way, you spread the database across 10x the number of spindles.
You could probably back up the database itself to the disks, but you'll want some logic to make sure there's more than one disk grou
Time server? (Score:2)
Obvious choice (Score:2)
Heavily threaded things like databases (Score:2)
Most database servers are already doing the same things that virtualization accomplishes. SQL Server 2012, as an example, can support multiple database instances, each with multiple databases, will use every last resource available, and will be more efficient than hosting multiple copies, each in its own OS instance, in VMware.
Network Gear (Score:4, Informative)
Re: (Score:3)
gotta have a night light server (Score:5, Informative)
Imagine coming up from a stone cold shutdown. What would be a super thing to have? How about DNS and DHCP? AD too if that's your thing. Some nice little box that can wake up your LAN in 5 minutes so you can start troubleshooting the boot storm as the rest of your VMs try to start up and all get stuck in the doorway like the Three Stooges.
There are different types of virtualization (Score:2)
If you're going to virtualize something that gets a lot of traffic then it makes sense to scale up the server and environment.
If you're talking about virtualizing an enterprise scale server/server farm then you'll want a solution that is designed to handle that sort of situation.
As some people said, shared disk doesn't make I/O happy. That's a key point which is dealt with in enterprise scale virtualization by spreading the load across many different systems. So the hit of shared load is mitigated by access
Sex partner (Score:2, Funny)
Re: (Score:2)
This is /. How would we know?
Depends on your expected ROI (Score:5, Informative)
It depends on the environment and the assets available to the IT department.
As an example:
Assume you have VMWare ESXi 5 running on 3 hosts with a vCenter and a combined pool of say 192GB of RAM, 5TB of disk, 3x1Gbps for NAS/SAN/iSCSI and 3x1Gbps for Data/connectivity.
It would become unwise in such an environment (without funds to expand it) to run any system that causes a bottleneck in your environment and thus decreases performance for other systems. This can be:
- Systems with High Disk load such as heavy DB usage or SNMP Traps or Log collection or Backup Storage Servers;
- Systems with High Network usage such as SNMP, Streaming services or E-mail;
- Systems with High RAM usage.
For this example, any of the above utilizing, say, 15% of your total resources for a single-instance server would ultimately become cheaper to run on physical hardware. That is, until your environment can bring that utilization number down to 5%, or it is warranted/needed/desired for some reason.
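That 15% rule of thumb is easy to turn into a quick check; a trivial sketch using the pool sizes from the example above (the numbers and threshold are just the ones in this comment, not a general recommendation):

    # Pooled resources from the example environment above.
    POOL = {"ram_gb": 192, "disk_tb": 5.0}
    THRESHOLD = 0.15  # a single VM above this share is a consolidation red flag

    def too_big_for_the_pool(vm_ram_gb, vm_disk_tb):
        """True if one VM would consume more than THRESHOLD of the pooled RAM or disk."""
        return (vm_ram_gb / POOL["ram_gb"] > THRESHOLD or
                vm_disk_tb / POOL["disk_tb"] > THRESHOLD)

    print(too_big_for_the_pool(32, 0.5))   # 32 GB is ~17% of the RAM pool -> True, think twice
    print(too_big_for_the_pool(8, 0.25))   # small VM -> False, fine to virtualize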
In my environment, we have a total of 15 ESXi v5 hosts on Cisco UCS Blades with 1TB of RAM and 30TB of Disk on 10GbE. We do however refrain from deploying:
- Media Streaming servers
- Backup Servers
- SNMP/Log Collection Servers
Hope this helps!
Re: (Score:3)
Cisco UCS is a costly, yet very effective solution. The high costs lie in the requirement for the Cisco 61x0 port extender, GBIC costs, licenses for it, expensive PDUs and other costly management features. I really dig the UCS Manager and KVM Manager for the UCS though, as it allows for some really large-scale deployments with minimized management and implementation. In my opinion, the UCS is really best suited for companies who want/need LOTS of nodes though... 30+ would be a good starting point. In th
Virtualize everything (Score:2)
The advantages of virtualization are too great to not do it whenever possible.
The only limiting factor, really, is how much money you have to spend on your virtualization infrastructure. VMware's licensing got a little nutty, and SAN storage got really pricey last year.
But it's worth it. Once you have a nice VMWare cluster running, SO many things become easier. And some things that were damn near impossible before become simple.
That said, you probably want to keep at least one domain controller and one DN
virtualize across more than one host (Score:2)
Just don't virtualize everything into a single host. Have multiple hosts and set the virtualization management to fail over. Otherwise losing one server means losing all the servers. Then only make enough VMs so that if one host failed, things would just run slightly annoyingly slow on the one picking up the load until the problem is fixed. Of course, don't let the annoyingly slow happen to anything mission critical with tight response requirements no matter what.
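A back-of-the-envelope version of that sizing rule: keep average host utilization low enough that the survivors can absorb the load of a failed host. A minimal sketch (the utilization figures are made up for illustration):

    def utilization_after_one_host_fails(n_hosts, avg_utilization):
        """Average utilization on the surviving hosts if one of n_hosts dies."""
        return avg_utilization * n_hosts / (n_hosts - 1)

    print(utilization_after_one_host_fails(3, 0.60))  # ~0.90 -- annoyingly slow, but it runs
    print(utilization_after_one_host_fails(2, 0.60))  # 1.20 -- oversubscribed; things break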
My 45 cents.... (Score:5, Informative)
You can actually virtualize a whole lot of things. The real key is to put a lot of money into the virtualization hosts. CPUs/cores, ram, a really good storage system.
For the small budget, you can get by on a lot less.
I have virtualized several entire development stacks (app servers, DB servers, storage servers, reporting servers). {But you trade a bit of performance for a reduced physical server count (10 or 15 to 1? A pretty good tradeoff if done right)}
You CAN virtualize SQL servers. Most business DB servers at the small-shop end are fairly light load (like finance systems) and virtualize well. {But if performance makes you money (i.e., you have a SaaS product), then stay physical.}
You CAN virtualize an entire Cognos stack (it is made up of many servers, depending on how complex your environment is). {However, IBM is correct that a heavy BI workload environment deserves physical servers. I run over 18,000 reports a day on my production stack. Not going to virtualize that any time soon.}
You CAN virtualize entire profit generating websites. {As long as you keep an eye on host CPU and perceived performance at the end user level}
You can virtualize a lot of this in relatively small environments.
But.. Everyone here who has said it is correct: DISK IO is a major contention point. If you stuff all of your virtual machines inside one single giant datastore (the VMware term for a single large disk partition) and expect super fast performance 100% of the time, then you will be greatly disappointed. One of my own stacks would grind to intolerable performance levels whenever someone restored a database to the virtualized DB server. We moved that DB server virtual machine's disk load onto dedicated drives while leaving the machine itself virtualized, and all those problems went away.
Do not virtualize anything that requires multiple CPUs (cores) to operate efficiently. SQL Server is an example of something that works better with multiple CPUs at its beck and call. In virtualization, though, getting all the CPU requests to line up into one available window bogs the whole virtual machine down (just the VM, not the host). If your workload can't survive on a single virtual CPU, or two at most (single core each), then you are best to keep it on a physical server.
Time-sensitive systems and high-compute workload processes are also ideally left out of virtualization. Except.. if you can put a beefy enough box under them, then you might get away with it and not notice a performance impact.
The biggest mistake made when going into virtualization (besides not planning for your DISK IO needs) seems to be over-provisioning too many virtual machines on a host. This is a dark trap even if you are lucky enough to have the money to build out a high-availability virtualization cluster. You spread your load across your nodes in the cluster. Then one day one goes offline and that workload moves to another node. If you only have two nodes, and one is already oversubscribed, suddenly the surviving node is in way over its head and everything suffers until you get in and start throttling non-essential workloads down.
So, what do you not virtualize? Anything where performance is critical to its acceptance and success. Anything where a performance drop can cost you money or customers. (Remember that internal users are also customers.)
Plan ahead A LOT. If you feel like you're not going in the right direction, pay for a consultant to come in and help design the solution, even if it is only for a few hours. (No. I am not a consultant. Not for hire.)
Clock stuff (Score:5, Informative)
Virtualization is inapropriately overused. (Score:2, Interesting)
Stone age technology. (Score:3, Interesting)
Virtualization is a stone-age technology, useful for crippling hostile environments. This is why "cloud" providers love it, and developers use it for testing. Incompetent sysadmins use it in hope that they can revert their mistakes by switching to pre-fuckup images, having this strange fantasy that this shit is going to fly in a production environment.
If you REALLY need separate hosts for separate applications in a production environment (which you almost certainly don't, given a package manager and a usable backup system), there is host partitioning -- VServer, OpenVZ, LXC-based environments, up to schroot-based chroot environments.
There are exceptions... (Score:4, Informative)
There are a few "workloads" that just don't like to be virtualized for one reason or another.
Active Directory Domain Controllers -- these work better now under hypervisors, but older versions had a wonderful error message when starting up from a snapshot rollback: "Recovered using an unsupported method", which then resulted in a dead DC that would refuse to do anything until it was restored from a "supported" backup, usually tape. I used to put big "DO NOT SNAPSHOT" warnings on DCs, or even block snapshots with ACLs.
Time-rollback -- Some applications just can't take even a small rollback in the system clock very well. These have problems when moved from host to host with a technology like vMotion. It's usually a coding error, where the system clock is used to "order" transactions instead of using an incrementing transaction ID counter (see the sketch at the end of this comment).
DRM -- lots of apps use hardware-integrated copy protection, like dongles. Some of them can be "made to work" using hardware pass-through, but then you lose the biggest virtualization benefit of being able to migrate VMs around during the day without outages.
Network Latency -- this is a "last but certainly not least". Some badly written applications are very sensitive to network latency because of the use of excessively chatty protocols. Examples are TFTP, reading SQL data row-by-row using cursors, or badly designed RPC protocols that enumerate big lists one item at a time over the wire. Most hypervisors add something like 20-100 microseconds for every packet, and then there are the overheads of context switches, cache thrashing, etc... You can end up with the application performance plummeting, despite faster CPUs and more RAM. I had one case where an application went from taking about an hour to run a report to something like five hours. The arithmetic is simple: Every microsecond of additional latency adds a full second of time per million network round-trips. I've seen applications that do ten billion round-trips to complete a process!
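On the time-rollback point above, here is an illustrative sketch (not from any particular application) of why clock-ordered transactions break under migration while a monotonic counter does not:

    import itertools
    import time

    # Fragile: two events stamped either side of a small clock step-back can sort
    # out of order, which is exactly what a migration-induced rollback can cause.
    def wall_clock_id():
        return time.time()

    # Robust: IDs only ever increase, no matter what the system clock does.
    # (In a real database you'd use a sequence or auto-increment column instead.)
    _counter = itertools.count(1)
    def transaction_id():
        return next(_counter)

    print(transaction_id(), transaction_id(), transaction_id())  # 1 2 3, always in order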
Comment removed (Score:3)
oracle server on red hat on vmware... (Score:2)
Is the Antichrist. Or the uncle-christ. Or something. Anyway it's damned slow.
Slashdot Market Research (Score:3)
The whole article is worded as though written by an advertiser. This is nothing but Slashdot Market Research. Either it will be a hit business article, "What Not to Try and Virtualize, Straight from the Engineers" or research into how segments of the industry can convince you to virtualize that anyway.
Must be nice, buy one website and you end up with a corralled group of wise and experienced IT gurus. Then slaughter them like sheep. This post was nothing but Market Research. Move along.
Nope (Score:2)
On the contrary, these instructions are run directly by the host processor at full speed.
VMware emulates the rest of the computer, but it's just time-slicing the CPU to the emulator, so pure CPU runs more or less at full speed.
Please note in your virtual machines that the CPU is not virtualized; it is the same make and model as your actual hardware.
It's I/O that suffers with virtualization, because the I/O devices have to be emulated.
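A crude way to see that split for yourself: run the same two loops on bare metal and inside a guest. The CPU-bound loop should time about the same in both, while the fsync-heavy loop is where the emulated I/O path shows up. Just a sketch, with arbitrary sizes:

    import os
    import time

    def cpu_bound(n=5_000_000):
        """Pure arithmetic; should run at near-native speed in a VM."""
        start = time.perf_counter()
        total = 0
        for i in range(n):
            total += i * i
        return time.perf_counter() - start

    def io_bound(n=200, path="io_test.tmp"):
        """Small synchronous writes; each fsync goes through the (possibly emulated) device."""
        start = time.perf_counter()
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
        for _ in range(n):
            os.write(fd, b"x" * 4096)
            os.fsync(fd)
        os.close(fd)
        os.remove(path)
        return time.perf_counter() - start

    print("cpu: %.3fs   io: %.3fs" % (cpu_bound(), io_bound()))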