This.
And the reality is that digital instruments do a good job of replicating piano, organ, and other keyboard instruments. They can also do a halfway decent job with mallet percussion. However, it really isn't feasible to digitally replicate the sound of non-percussive instruments like brass and woodwinds, because there are simply too many different things a real player can do to shape the sound. For example, when playing a brass instrument, you can:
- Vary the position and tightness of the lips and jaw to change the tone to be brighter or more mellow
- Start and stop notes with anything from crisp tonguing all the way down to "lip tonguing", resulting in radically different attacks and cutoffs
- Lip slur between notes instead of tonguing
- Vary the volume of a single note arbitrarily while you're playing it
- Vary the pitch while you're playing it
- Sing while you play a note (multiphonics)
And so on. There's simply no feasible way for software to simulate all those variables without modeling the entire instrument, and even if you did that, you'd need a much more complex input controller than keyboards, wind controllers, or any other MIDI input device that currently exists. By the time you've learned to play something that complex, you'll probably find it's easier to learn the actual instrument. :-)
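For what "modeling the entire instrument" means in practice, the classic toy example is the Karplus-Strong algorithm, which physically models a plucked string (not brass, which needs a lip model, a nonlinear bore, bell radiation, and more) in a few lines. This is a minimal sketch, not any shipping synth's implementation:

```python
import random

def karplus_strong(freq_hz, duration_s, sample_rate=44100):
    """Toy physical model: a plucked string via Karplus-Strong.

    A delay line seeded with noise, with an averaging filter in the
    feedback loop, behaves like a vibrating string. The delay-line
    length sets the pitch; the filter makes high harmonics decay faster.
    """
    period = int(sample_rate / freq_hz)
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]  # the "pluck"
    out = []
    for _ in range(int(duration_s * sample_rate)):
        out.append(buf[0])
        # Two-point average acts as a gentle low-pass in the feedback path.
        new_sample = 0.5 * (buf[0] + buf[1])
        buf = buf[1:] + [new_sample]
    return out

samples = karplus_strong(440.0, 0.5)  # half a second of A440
```

Even this simplest of physical models already has two control dimensions (pitch and excitation); a brass model would multiply that by every item in the list above, which is exactly the parent's point.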
You haven't used the virtual instruments made by Sample Modeling, then. I have. They're well worth the money.
Breath-based vibrato. Volume (and timbre) that tracks breath in real time. Pitch bends that alter timbre, and asymmetric pitch bends (easy to bend down, hard to bend up) if you want them. Attacks are based on three factors: initial velocity, breath data (or aftertouch) following note-on, and the gap since the preceding note. It does quite a nice job. It takes a bit of adjustment to learn that sometimes you have to play EXTREMELY legato to get the effect you want, where the same thing would come out as mush on a real instrument, but that's just a matter of altering technique. They'll also do flutter tongue and growl. Singing through the horn is best done outside the synthesis software, since it's an acoustic interaction between two pitches. The only real problem is latency, but on any reasonably fast setup this can be kept under 22 milliseconds.
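The three-factor attack described above could be sketched like this. The function name, the weights, and the 100 ms legato threshold are all made up for illustration; the real instrument's mapping is internal to the software:

```python
def attack_character(velocity, breath_after_noteon, gap_ms):
    """Hypothetical sketch of a three-factor attack model.

    velocity:            MIDI note-on velocity, 0-127
    breath_after_noteon: breath controller (CC2) or aftertouch, 0-127
    gap_ms:              silence since the previous note's release

    Returns a 0.0-1.0 "attack hardness": near 1.0 is a crisp tongued
    attack, near 0.0 is a slurred/legato entrance.
    """
    vel = velocity / 127.0
    breath = breath_after_noteon / 127.0
    # Notes played back-to-back (tiny gap) lean legato regardless of
    # how hard the key was struck -- this is why you sometimes have to
    # play extremely legato to get the effect you want.
    separation = min(gap_ms / 100.0, 1.0)
    return separation * (0.6 * vel + 0.4 * breath)
```

So the same velocity and breath produce a hard attack after a rest but a soft, connected one when the previous note just ended, which matches the playing adjustment described above.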
While it's true you can't easily control ALL parameters at once, it's highly unusual to need growl, flutter tongue, and multiphonics within an interval so short you can't step on a pedal to choose between them. For studio work, hand-editing those parameters after the main recording is dead simple (I don't even attempt my pitch bends in real time for recordings, though obviously I do live). Brighter/darker is a matter of changing equalization or switching sample sets on the fly, which can be done with a single keystroke or MIDI command in any DAW I can think of.

While emulating instruments accurately is a rather different skill set from actually playing them, it's quite a doable one. It helps to know how they work, but you don't have to be particularly good at them. For example, I can play trumpet and horn, but low brass is completely beyond my abilities. That doesn't stop me from accurately emulating their actual response. If I do a rip across an octave, the notes in between are going to be actual partials of at least one valid valve/slide position for the starting and ending notes, because at a fundamental level all brass instruments behave the same. If you play one, you grasp them all.
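The "single MIDI command" for switching sample sets is typically a Program Change, which on the wire is just two bytes. A minimal sketch (how a given DAW or sampler maps program numbers to patches is configuration, not shown here):

```python
def program_change(channel, program):
    """Build a raw two-byte MIDI Program Change message.

    channel: 0-15, program: 0-127. A sampler that maps, say, a brighter
    sample set to a program number can switch timbres mid-performance
    on receipt of these two bytes.
    """
    if not (0 <= channel <= 15 and 0 <= program <= 127):
        raise ValueError("channel must be 0-15, program 0-127")
    # Status byte 0xC0-0xCF (Program Change on channels 1-16), then the
    # program number as the single data byte.
    return bytes([0xC0 | channel, program])
```

A pedal or keystroke bound to send `program_change(0, 5)` is all "changing sample sets on the fly" amounts to at the protocol level.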
You don't have to take my word for it. You can decide for yourself.