Virtualization

Ask Slashdot: What Type of Asset Would You Not Virtualize?

An anonymous reader writes "With IT and Data Center consolidation seemingly happening everywhere our small shop is about to receive a corporate mandate to follow suit and preferably accomplish this via virtualization. I've had success with virtualizing low load web servers and other assets but the larger project does intimidate me a little. So I'm wondering: Are there server types, applications and/or assets that I should be hesitant virtualizing today? Are there drawbacks that get glossed over in the rush to consolidate all assets?"
  • Re:Busy databases (Score:2, Informative)

    by Anonymous Coward on Thursday May 31, 2012 @08:11PM (#40174705)

    In enterprise, aren't most busy DB servers using storage on the SAN, which would be exactly the same place where it would be if the server was virtualized?

  • by codepunk ( 167897 ) on Thursday May 31, 2012 @08:17PM (#40174753)

    Well, yes, databases make a poor virtualization target. So does your heavy-memory Java app, like the company app server using a terabyte of RAM to display the department wiki.

  • Re:Busy databases (Score:2, Informative)

    by Anonymous Coward on Thursday May 31, 2012 @08:18PM (#40174773)

    If you refer to the VMware "vCenter" VM, you are wrong.
    Virtualizing it gives you many advantages, the same ones you get from virtualizing any server. Decoupling it from the hardware, redundancy, etc.

    Why would you NOT virtualize it?

    Just make sure you disable features that would move it around automatically so you can actually know on what host it's supposed to be running.

  • by Anonymous Coward on Thursday May 31, 2012 @08:23PM (#40174823)

    - Telephony and real-time voice/video servers (Asterisk, for example). You don't want to explain to your big boss why his phone lines are having hiccups.
    - Real-time servers, like game servers (Minecraft), as they constantly take up resources.
    - NTPD servers (Network Time Protocol). Worst case, run it at layer 0 (the host machine), but not in a VM :) Preferably, have an old Pentium running that one somewhere.

    If you look at the pattern, these are all real-time services that have very little leeway for latency. Yes, it's possible to virtualize them, but it's more work than it's worth, really.
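    A quick way to see that leeway problem (a rough probe; the interval and sample count are arbitrary) is to measure sleep overshoot, which gets larger and spikier on a loaded hypervisor:

        # Ask for 10 ms sleeps and measure scheduling overshoot in ms;
        # real-time workloads live and die by the tail of this distribution.
        import time

        overshoots = []
        for _ in range(200):
            start = time.perf_counter()
            time.sleep(0.01)
            overshoots.append((time.perf_counter() - start - 0.01) * 1000)

        overshoots.sort()
        print(f"median overshoot: {overshoots[100]:.3f} ms, "
              f"worst: {overshoots[-1]:.3f} ms")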

    For me, IMHO, everything else is fair game. I've had good luck with Windows Active Directory servers (yes, even the PDC), SVN, web services, file servers, databases, memcache, and many others.

  • Re:Busy databases (Score:3, Informative)

    by NFN_NLN ( 633283 ) on Thursday May 31, 2012 @08:31PM (#40174887)

    In enterprise, aren't most busy DB servers using storage on the SAN, which would be exactly the same place where it would be if the server was virtualized?

    In an enterprise environment, all VMs (of any type) should be coming from external storage, either SAN (FC, iSCSI) or NAS (NFS). Storage, network, and hosts are usually separated into layers with full redundancy; no single point of failure should exist. Even if a host needs to be taken down for maintenance or upgrades, the VM is migrated live to another host without interruption. Because the data is external, it is accessible to all hosts, and the hosts can be treated as commodity items and swapped in and out.

  • Re:Busy databases (Score:5, Informative)

    by batkiwi ( 137781 ) on Thursday May 31, 2012 @08:37PM (#40174965)

    Virtualisation != shared disk IO.

    If you're serious about virtualisation, it's backed by a SAN anyway, which will get you many more IOPS than hitting a local disk ever would.

    We virtualise almost everything now without issue by setting 0 contention. Our VM hosts are 40-core (4-socket) machines with 256GB of RAM. Need an 8-core VM with 64GB of RAM to run SQL Server? Fine.

    We save BUCKETS on power, hardware, and licensing (Windows Server 2008 R2 Datacenter licensing is per socket of the physical host) by virtualising.

  • Re:Busy databases (Score:5, Informative)

    by NFN_NLN ( 633283 ) on Thursday May 31, 2012 @08:38PM (#40174975)

    'cause if you knock it offline by accident, your easiest tool with which to bring it back online is gone?

    Kind of like how it's a bad idea to mess with a host's eth0 settings if you're currently logged in via ssh through eth0.

    In Oracle VM Server for x86 and VMware vSphere (and probably most other virtualization platforms), the VMs run on hosts independently of the management platform, i.e., vCenter for vSphere.

    vCenter is not considered critical for the operation of VMs. If vCenter dies, your VMs will continue to run without interruption. You simply lose the ability to use advanced features such as vMotion, Storage vMotion, DRS, HA, and vDS. However, you can still log into an ESXi host and start up another instance of vCenter. This is no different from the physical machine hosting vCenter dying.

    As far as I know, the upcoming highly available version of VMware vCenter (Heartbeat), which runs two instances of vCenter together, is ONLY available in VM form; I don't know of a physical deployment for vCenter Heartbeat (but I could be wrong).

  • Re:Busy databases (Score:4, Informative)

    by Burning1 ( 204959 ) on Thursday May 31, 2012 @08:48PM (#40175053) Homepage

    This isn't really a problem. First, if you have a reasonably sized infrastructure, it makes sense to build a redundant vCenter instance... and IIRC, it may be clustered. Second, if you kill your vCenter instance, you can still connect directly to your ESXi hosts using the vSphere client. You'll still retain the ability to manage network connections and disks, access the console, etc.

  • Network Gear (Score:4, Informative)

    by GeneralTurgidson ( 2464452 ) on Thursday May 31, 2012 @08:49PM (#40175063)
    Firewalls, load balancers, anything that is typically hardware with dedicated ASICs. Virtual versions of these appliances are typically very taxing on your virtual infrastructure and cost about the same as the physical box. Also, iSeries, but I'm pretty sure I could do it if our admin went on vacation for once :)
  • by jakedata ( 585566 ) on Thursday May 31, 2012 @08:53PM (#40175099)

    Imagine coming up from a stone cold shutdown. What would be a super thing to have? How about DNS and DHCP? AD too if that's your thing. Some nice little box that can wake up your LAN in 5 minutes so you can start troubleshooting the boot storm as the rest of your VMs try to start up and all get stuck in the doorway like the Three Stooges.
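    A minimal sketch of that cold-start sanity check (the addresses and ports are hypothetical; using IPs rather than names, since DNS may be the thing that's down):

        # Verify the core services answer before powering on the rest of the
        # VMs, so the boot storm doesn't pile up behind dead DNS/AD.
        import socket

        CORE_SERVICES = [
            ("10.0.0.53", 53),   # the little physical DNS/DHCP box
            ("10.0.0.10", 389),  # AD/LDAP, if that's your thing
        ]

        def reachable(host, port, timeout=3):
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        for host, port in CORE_SERVICES:
            up = reachable(host, port)
            print(f"{host}:{port} {'up' if up else 'DOWN - fix before booting VMs'}")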

  • by kolbe ( 320366 ) on Thursday May 31, 2012 @08:58PM (#40175147) Homepage

    It depends on the environment and the assets available to the IT department.

    As an example:

    Assume you have VMware ESXi 5 running on 3 hosts with a vCenter and a combined pool of, say, 192GB of RAM, 5TB of disk, 3x1Gbps for NAS/SAN/iSCSI, and 3x1Gbps for data/connectivity.

    In such an environment (without funds to expand it), it would be unwise to run any system that causes a bottleneck and thus decreases performance for other systems. That can mean:
    - Systems with high disk load, such as heavy DB usage, SNMP traps, log collection, or backup storage servers;
    - Systems with high network usage, such as SNMP, streaming services, or e-mail;
    - Systems with high RAM usage.

    For this example, any of the above consuming, say, 15% of your total resources for a single-instance server would ultimately be cheaper to run on physical hardware. That holds until your environment can bring that utilization number down to 5%, or virtualizing it is warranted/needed/desired for some other reason. (A rough check of that rule of thumb is sketched below.)
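    As a minimal sketch in Python (the candidate VM's numbers are hypothetical; the pool matches the example above):

        # Rough application of the 15% rule of thumb against the example
        # pool above (192GB RAM, 5TB disk, 3Gbps each way).
        POOL = {"ram_gb": 192, "disk_tb": 5, "net_gbps": 3}
        CANDIDATE = {"ram_gb": 32, "disk_tb": 1, "net_gbps": 0.5}  # hypothetical VM

        for resource, have in POOL.items():
            share = CANDIDATE[resource] / have
            verdict = "consider physical" if share > 0.15 else "fine to virtualize"
            print(f"{resource}: {share:.0%} of pool -> {verdict}")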

    In my environment, we have a total of 15 ESXi v5 hosts on Cisco UCS blades with 1TB of RAM and 30TB of disk on 10GbE. We do, however, refrain from deploying:
    - Media Streaming servers
    - Backup Servers
    - SNMP/Log Collection Servers

    Hope this helps!

  • My 45 cents.... (Score:5, Informative)

    by bruciferofbrm ( 717584 ) on Thursday May 31, 2012 @09:07PM (#40175219) Homepage

    You can actually virtualize a whole lot of things. The real key is to put a lot of money into the virtualization hosts: CPUs/cores, RAM, and a really good storage system.

    For the small budget, you can get by on a lot less.

    I have virtualized several entire development stacks (app servers, DB servers, storage servers, reporting servers). {But you trade a bit of performance for a reduced physical server count (10 or 15 to 1? A pretty good tradeoff if done right).}
    You CAN virtualize SQL servers. Most business DB servers at the small-shop end are fairly light load (like finance systems) and virtualize well. {But if performance makes you money (i.e., you have a SaaS product), then stay physical.}
    You CAN virtualize an entire Cognos stack (it is made up of many servers, depending on how complex your environment is). {However, IBM is correct that a heavy BI workload environment deserves physical servers. I run over 18,000 reports a day on my production stack. Not going to virtualize that any time soon.}
    You CAN virtualize entire profit-generating websites. {As long as you keep an eye on host CPU and perceived performance at the end-user level.}
    You can virtualize a lot of this in relatively small environments.

    But... Everyone here who has said it is correct: DISK IO is a major contention point. If you stuff all of your virtual machines inside one single giant datastore (the VMware term for a single large disk partition) and expect super fast performance 100% of the time, then you will be greatly disappointed. One of my own stacks would grind to intolerable performance levels whenever someone restored a database to the virtualized DB server. We moved that DB server VM's disk load onto dedicated drives while leaving the machine itself virtualized, and all those problems went away.
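    A crude way to see that contention from inside a VM (a sketch; the path and sample count are arbitrary) is to time small synchronous writes while a neighbor hammers the same datastore:

        # Time 100 small fsync'd writes; on a contended shared datastore the
        # p99 latency spikes when a neighbor (e.g. a DB restore) kicks in.
        import os
        import time

        PATH = "/tmp/io_probe.bin"  # hypothetical file on the disk under test
        latencies = []
        with open(PATH, "wb", buffering=0) as f:
            for _ in range(100):
                start = time.perf_counter()
                f.write(os.urandom(4096))
                os.fsync(f.fileno())
                latencies.append((time.perf_counter() - start) * 1000)
        os.remove(PATH)

        latencies.sort()
        print(f"median {latencies[50]:.2f} ms, p99 {latencies[98]:.2f} ms")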

    Do not virtualize anything that requires multiple CPUs (cores) to operate efficiently. SQL Server is an example of something that works better with multiple CPUs at its beck and call. Under virtualization, though, getting all the CPU requests to line up into one available window bogs the whole virtual machine down (just the VM, not the host). If your workload can't survive on a single virtual CPU, or two at most (single core each), then you are best off keeping it on a physical server.

    Time-sensitive systems and high-compute workloads are also ideally left out of virtualization. Except... if you can put a beefy enough box under them, then you might get away with it and not notice a performance impact.

    The biggest mistake made when going into virtualization (besides not planning for your DISK IO needs) seems to be over-provisioning hosts with too many virtual machines. This is a dark trap even if you are lucky enough to have the money to build out a high-availability virtualization cluster. You spread your load across the nodes in the cluster. Then one day a node goes offline and its workload moves to another node. If you only have two nodes, and one is already oversubscribed, suddenly the surviving node is in way over its head and everything suffers until you get in and start throttling non-essential workloads down.
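    A quick sizing check for that trap (the utilization numbers are made up):

        # N+1 check: can the surviving hosts absorb a failed node's load,
        # assuming its VMs restart spread evenly across the survivors?
        hosts = [0.55, 0.80, 0.65]  # hypothetical CPU utilization per host

        for down in range(len(hosts)):
            survivors = [u for i, u in enumerate(hosts) if i != down]
            extra = hosts[down] / len(survivors)
            worst = max(u + extra for u in survivors)
            status = "OK" if worst <= 1.0 else "OVERSUBSCRIBED"
            print(f"host {down} fails -> worst survivor at {worst:.0%} ({status})")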

    So, what do you not virtualize? Anything where performance is critical to its acceptance and success. Anything where a performance drop can cost you money or customers. (Remember that internal users are also customers.)

    Plan ahead a LOT. If you feel like you're not going in the right direction, pay for a consultant to come in and help design the solution, even if it is only for a few hours. (No, I am not a consultant. Not for hire.)

  • Clock stuff (Score:5, Informative)

    by Charliemopps ( 1157495 ) on Thursday May 31, 2012 @09:07PM (#40175227)
    I work for a telco and we've virtualized a lot of stuff, including telephone switches. We ran into a lot of problems with things that required a reliable clock or syncing: two-way video, voice, television, etc. Using virtualized services from a virtual machine doesn't seem to work very well either, so if you're using SaaS or other cloud-based stuff and your users are on a virtual machine, you run into all sorts of weird transient issues.

    Lastly, it may seem silly, but if your company is anything like mine, you have a lot of people who have built their own little tools using Microsoft Access. We didn't realize just how many MS Access tools were out there, or how important they were to some departments, until we put them on virtual machines and found out that the MS Jet database is a file-system-based database and definitely does not work well with virtualization... so bad, in fact, that in many cases it corrupted the database beyond repair. We even tried moving some of the more important ones over to Oracle backends (because frankly, if they're important, they should have been there all along), but even MS Access's front end couldn't deal with the virtualization... even though it was an improvement.

    We finally ended up having all these micro-projects where we had to turn all these custom-made tools, many of them made by people who didn't know what they were doing and don't even work for us anymore, into real DBs with real web-based frontends. Reverse engineering some of those was a nightmare. It's a can of worms we likely wouldn't have opened if we'd had the foresight. I'd personally like to see MS Access treated just like Visual Studio, in that only IS is allowed access to it... but I don't make the rules, so... whatever.
  • Re:Busy databases (Score:4, Informative)

    by vanyel ( 28049 ) * on Thursday May 31, 2012 @09:07PM (#40175229) Journal

    A corollary to this is to make sure you have a local physical nameserver configured in all of your systems. Basically, go through a cold-start power-up sequence and figure out what you need, in what order, to get things back online. Testing the resulting procedure would be a good idea too ;-)
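    A tiny audit sketch for that first point (Linux-centric; the address is hypothetical):

        # Confirm the local physical nameserver is listed first in
        # /etc/resolv.conf, so cold starts don't depend on a virtualized DNS.
        PHYSICAL_NS = "10.0.0.53"  # hypothetical physical nameserver

        nameservers = []
        with open("/etc/resolv.conf") as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    nameservers.append(parts[1])

        if not nameservers or nameservers[0] != PHYSICAL_NS:
            print("WARNING: physical nameserver is not first:", nameservers)
        else:
            print("resolv.conf OK:", nameservers)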

  • by bertok ( 226922 ) on Thursday May 31, 2012 @09:12PM (#40175279)

    There are a few "workloads" that just don't like to be virtualized for one reason or another.

    Active Directory Domain Controllers -- these work better now under hypervisors, but older versions had a wonderful error message when starting up from a snapshot rollback: "Recovered using an unsupported method", which then resulted in a dead DC that would refuse to do anything until it was restored from a "supported" backup, usually tape. I used to put big "DO NOT SNAPSHOT" warnings on DCs, or even block snapshots with ACLs.

    Time rollback -- Some applications can't handle even a small rollback of the system clock, and these have problems when moved from host to host with a technology like vMotion. It's usually a coding error, where the system clock is used to "order" transactions instead of an incrementing transaction ID counter.
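    A minimal sketch of the difference (the names are illustrative, not from any particular product):

        # Ordering by wall clock breaks if the clock steps backwards after a
        # migration; a monotonic counter keeps ordering stable regardless.
        import itertools
        import time

        _txn_ids = itertools.count(1)

        def next_txn_fragile():
            return time.time()     # can go backwards after a clock rollback

        def next_txn_robust():
            return next(_txn_ids)  # strictly increasing, clock-independent

        a, b = next_txn_robust(), next_txn_robust()
        assert b > a  # always holds; the time.time() version may not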

    DRM -- lots of apps use hardware-integrated copy protection, like dongles. Some of them can be "made to work" using hardware pass-through, but then you lose the biggest virtualization benefit of being able to migrate VMs around during the day without outages.

    Network Latency -- this is a "last but certainly not least". Some badly written applications are very sensitive to network latency because of the use of excessively chatty protocols. Examples are TFTP, reading SQL data row-by-row using cursors, or badly designed RPC protocols that enumerate big lists one item at a time over the wire. Most hypervisors add something like 20-100 microseconds for every packet, and then there are the overheads of context switches, cache thrashing, etc... You can end up with the application performance plummeting, despite faster CPUs and more RAM. I had one case where an application went from taking about an hour to run a report to something like five hours. The arithmetic is simple: Every microsecond of additional latency adds a full second of time per million network round-trips. I've seen applications that do ten billion round-trips to complete a process!
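    The arithmetic is easy to sanity-check (the overhead value is an assumed midpoint of the 20-100 microsecond range above):

        # Added runtime = round trips x added per-round-trip latency.
        added_latency_us = 50          # assumed midpoint of 20-100 us overhead
        round_trips = 10_000_000_000   # the ten-billion-round-trip application

        added_seconds = round_trips * added_latency_us / 1_000_000
        print(f"{added_seconds / 3600:.0f} extra hours")  # ~139 hours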

  • Re:Busy databases (Score:5, Informative)

    by mysidia ( 191772 ) on Thursday May 31, 2012 @09:27PM (#40175383)

    It's just fine to do that. However, a few things are important:

    (1) You need at least one point of access to the environment at all times -- e.g., you need a "backup way in" that works and gives you full control, even if for some reason no VMs are running (worst case).

    (2) You need to ensure there are no circular dependencies, so that if all VMs are down, your environment is still sufficiently operational for you to correct the issue. An example of a circular dependency would be a virtualized VPN server/firewall that is required to gain access to your ESXi hosts; yeah, it's secure from an integrity standpoint, but what about from an availability standpoint, or a disaster recovery standpoint?

    (3) If you have virtualized your active directory servers, you should ensure you have a backup plan that will allow you to authenticate to all your hypervisors, virtualization management, and network infrastructure/out of band management, EVEN if AD is crippled and offline.

    (4) If you have virtualized DNS servers, you should have at least 2 DNS servers that will probably not fail at the same time, because you have eliminated as many common failure modes as possible (a quick independence check is sketched after this list):

    (a) Redundant DNS servers should not run on the same physical host. Ideally you would have two sites, with separate virtualization clusters, separate storage, and 2 redundant DNS servers at each site.
    (b) Redundant DNS servers should not be stored on the same SAN, or the same local storage medium.
    (c) If your Hypervisor requires a DNS lookup to gain access to the SAN, you should have a backup plan to get your environment back up when all DNS servers are down. Each virtualized DNS server should be stored either on separate DAS storage, or on shared storage that can be accessed even when DNS is unavailable.
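    For point (4), a quick independence check (a sketch assuming the dnspython 2.x package; the resolver addresses and test record are hypothetical):

        # Query each redundant DNS server directly, with no fallback, to
        # confirm each one answers on its own.
        import dns.resolver  # pip install dnspython

        RESOLVERS = ["10.0.1.53", "10.0.2.53"]  # hypothetical: one per site
        TEST_NAME = "vcenter.example.local"     # hypothetical record

        for server in RESOLVERS:
            r = dns.resolver.Resolver(configure=False)  # ignore resolv.conf
            r.nameservers = [server]
            r.lifetime = 3
            try:
                r.resolve(TEST_NAME, "A")
                print(server, "answers independently")
            except Exception as exc:
                print(server, "FAILED:", exc)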

  • Re:Busy databases (Score:5, Informative)

    by mysidia ( 191772 ) on Thursday May 31, 2012 @09:36PM (#40175485)

    'cause if you knock it offline by accident, your easiest tool with which to bring it back online is gone?

    However, your vCenter is much more likely to fail hard if you place it on a separate physical server; physical servers fail all the time. By virtualizing vCenter, you protect it using HA: if one of your VM hosts fails, you can have two hosts standing by to start the VM.

    You can also use HA to protect the SQL server that vCenter requires to work, and you can establish DRS rules to indicate which host(s) you prefer those two to live on.

    If some operator erroneously powers this off, or mucks it up, there is still an easily available tool: the VPX client (the vSphere Client) can be used to connect directly to a host and start the VM.

    You can also have a separate physical host with the vSphere CLI installed, to allow you to power on a VM that way. It does make sense to have perhaps 2 cheap physical servers to run such things, which could also double as DNS or AD servers, to avoid circular dependency problems with DNS or authentication; these would also be the servers your backup tools should be installed on, to facilitate recovery of VMs (including the vCenter VM) from backup.

    That's fine and valid, but vCenter itself should be clustered. Unless you are paying through the nose for vCenter Heartbeat, running it as a VM is the best common supported practice for accomplishing that.

"The four building blocks of the universe are fire, water, gravel and vinyl." -- Dave Barry

Working...