Linux, Windows, and OS X all have problems with low-latency audio. The sad irony is that 15 years ago, you actually COULD connect a MIDI keyboard to a Sound Blaster AWE32's MIDI port, run your sequencer app, and have it do a halfway decent job of both capture and playback. Then host-based audio happened, and everything went to shit... accelerated by architectural changes to all three platforms that made matters even worse.
Forget about trying to do realtime CPU-based audio on any computer that still needs to be usable as a normal computer. It's impossible. You CAN hand-tweak Linux, Windows, and OS X in various ways to get the latency down (as others have noted, Linux has had realtime-patched kernels available as an option for a while), but the tweaks you have to make will render it dysfunctional as a general-purpose computer.
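To give a concrete idea of what "hand-tweak" means on the Linux side, here's a minimal sketch (assuming the user has been granted rtprio privileges, e.g. via /etc/security/limits.conf) of an audio process asking for realtime scheduling and pinning its memory:

    /* Sketch of the "hand-tweaking" in code form. Assumes the user has
       rtprio privileges (e.g. granted in /etc/security/limits.conf).
       Build: gcc rt_audio.c -o rt_audio */
    #include <stdio.h>
    #include <string.h>
    #include <sched.h>
    #include <sys/mman.h>

    int main(void)
    {
        struct sched_param sp;
        memset(&sp, 0, sizeof(sp));
        sp.sched_priority = 80;  /* high realtime priority, below kernel IRQ threads */

        /* SCHED_FIFO: this process runs until it blocks or yields;
           ordinary desktop processes can no longer preempt it. */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler (missing rtprio privileges?)");
            return 1;
        }

        /* Lock all current and future pages into RAM so a page fault
           can't stall the audio thread mid-buffer. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return 1;
        }

        /* ...audio render loop would go here... */
        return 0;
    }

And that's exactly the problem: every process you promote this way takes the CPU away from the rest of the desktop unconditionally, which is why the machine stops behaving like a general-purpose computer.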
It doesn't matter how fast your i7 or Xeon is, it doesn't matter how much RAM you have, and it doesn't matter if you have a terabyte RAID 0 SSD array... nothing you do will make it fast enough to run low-latency host-based audio without ever glitching. You might reduce the glitches to something that happens every 5-10 minutes instead of every 5-10 seconds, but you'll never eliminate them completely. It's just the nature of how Windows, Linux, and OS X now handle multitasking.
The solution? Rediscover dedicated synth modules. Or set up a second PC whose only reason for existence is to be a VST/softsynth host -- aggressively tweaked for low-latency audio in ways the main DAW PC can't be.
The problem isn't MIDI (that was solved YEARS ago by using USB to give every physical MIDI port its own dedicated full-bandwidth cable), and the problem isn't raw data being shoveled around. The problem is that even with a multi-core CPU and abundant RAM, Windows/Linux/OS X will all starve the softsynth of CPU cycles for 3-7ms at a time (and often more like 12-20ms) while the audio buffer drains. If the buffer empties before the CPU has calculated the next 5-10ms chunk of waveform data, you get a loud audio glitch. Audio generation is a "realtime" activity, and Windows/Linux/OS X in their roles as desktop operating systems all fall flat on their faces when realtime becomes a necessity.
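To put numbers on that: the deadline per buffer is just frames divided by sample rate. A quick sketch (the 44.1kHz rate and buffer sizes here are illustrative, not tied to any particular driver):

    /* Sketch: how much time the CPU gets to fill each audio buffer
       before an underrun (audible glitch). Values are illustrative. */
    #include <stdio.h>

    int main(void)
    {
        const double sample_rate = 44100.0;  /* Hz */
        const int buffer_frames[] = { 64, 128, 256, 512, 1024 };
        const int n = sizeof(buffer_frames) / sizeof(buffer_frames[0]);

        for (int i = 0; i < n; i++) {
            double deadline_ms = buffer_frames[i] / sample_rate * 1000.0;
            printf("%4d frames -> %5.1f ms to compute the next buffer\n",
                   buffer_frames[i], deadline_ms);
        }
        /* A 12-20ms scheduler stall blows through every deadline up to
           512 frames (~11.6ms). The only "fix" is a bigger buffer --
           which is exactly the latency you were trying to get rid of. */
        return 0;
    }

That's the whole trap: small buffers glitch, and big buffers add latency you can feel under your fingers while playing.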
So... the moral of the story: forget about trying to use a single computer as both DAW and VST/softsynth host. If you can avoid live performances involving a softsynth (or pre-record the softsynth part and fake the keyboard playing during the performance), you'll save a LOT of money. Audio glitches while jamming or capturing keyboard input suck, but at least they won't affect your real recordings. Use your DAW as a DAW, and give the softsynth host its own hardware that can be properly tweaked for realtime audio.