There is a lot of good advice in the other posts, but much of it is other people's baggage filling in the gaps in your question. Let me condense it for you into a concrete solution.
I have set up high-availability systems that currently handle 18 TB of traffic a month and many millions of page views - systems where you can literally unplug the server handling the load and see a hiccup of less than a second. And I have done this with two servers.
Your 1000 visitors a day is traffic one server could handle, as long as we aren't talking about something boutique like streaming live HD video. But that is only half your problem - you also want to survive a catastrophe on that machine (someone accidentally kicking the power cord, etc.).
First, I would suggest you do not host this hardware yourself. I have worked with ServerBeach and RimuHosting, and would gladly recommend either for this setup. Everything else you can handle on your own.
Second, you want two machines, pretty much anything in ServerBeach's category 3 will handle what you need.
Third, you need them in a particular configuration:
1) You want each machine to have its own publicly available IP (that one references the box itself), plus a floating IP shared between them (that will be the IP your web address resolves to). More about that IP later.
2) You want the two machines to each have a second network card, with a private network between them (used for heartbeat and disk replication - see below).
3) You want to set up Linux-HA (the Heartbeat daemon) and DRBD.
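To make the floating-IP takeover in item 1 concrete, here is roughly what the standby box does at the moment of failover (the cluster software automates all of this; the interface name eth0 and the floating IP 192.0.2.10 are hypothetical examples):

```
# Claim the floating IP on the public interface (hypothetical values)
ip addr add 192.0.2.10/24 dev eth0
# Send gratuitous ARP replies so the router learns the IP's new location
arping -U -c 3 -I eth0 192.0.2.10
```

This is why both boxes must sit behind the same router: the takeover is an ARP-level trick, not a DNS change, which is what keeps the outage under a second.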
Heartbeat (from the Linux-HA project) is software that runs on both boxes. One box is your 'primary' and the other is your 'secondary'. The secondary box watches the primary, and if the primary fails for any reason, the secondary takes over for it. It does this by pinging the primary as often as you specify (perhaps several times a second); if the primary stops answering, the secondary takes over its IP address. That floating IP I mentioned earlier normally resolves to the first machine, but the second machine can claim it (for this to work, they have to be on the same router). The downtime here is less than a second.
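For reference, a classic Heartbeat v1 configuration for this two-node layout looks roughly like the following - a sketch, assuming hypothetical hostnames web1/web2, a private network on eth1 (10.0.0.x), and a floating IP of 192.0.2.10:

```
# /etc/ha.d/ha.cf (on web1; on web2 the ucast peer address differs)
keepalive 500ms       # heartbeat interval
deadtime 2            # declare the peer dead after 2 seconds of silence
ucast eth1 10.0.0.2   # send heartbeats over the private NIC to the peer
auto_failback on      # primary reclaims its resources when it returns
node web1
node web2

# /etc/ha.d/haresources (identical on both nodes)
# web1 is the preferred owner of the floating IP, the DRBD disk,
# its filesystem mount, and the web server.
web1 192.0.2.10 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 apache
```

The resource names after the hostname (drbddisk, Filesystem, apache) are standard Heartbeat resource scripts; the DRBD resource name r0, device, and mount point are placeholder examples.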
So that is all well and good, but the second machine needs to be able to run just like the first one. This is where DRBD comes in.
DRBD is like RAID 1 mirroring, but across two hard drives in separate machines: a write is not successful until it has been written to both drives. Over a private gig-E network, in my testing, the drives suffer about a 22-25% performance hit. All data - the database, the deployed applications, even the config files for all my services - sits on this replicated drive. If the first machine fails, the second machine has all the data it needs to take over the job.
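A matching DRBD resource definition would look something like this - again a sketch, with hypothetical hostnames, backing disks, and private-network addresses:

```
# /etc/drbd.conf (identical on both nodes)
resource r0 {
  protocol C;            # fully synchronous: a write succeeds only
                         # once it has landed on both disks
  on web1 {
    device    /dev/drbd0;
    disk      /dev/sda3;        # backing partition (hypothetical)
    address   10.0.0.1:7788;    # replicate over the private network
    meta-disk internal;
  }
  on web2 {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

Protocol C is what gives the "write must hit both disks" guarantee described above; it is also where the roughly 22-25% write penalty comes from.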
I have built exactly this setup more than once. And despite everyone here laughing at your "1000 users" figure, high availability isn't about scalability - those 1000 users may be doing something so important that this setup is peanuts compared to the time lost if you have to spend 15 minutes jerking around with a server problem. I enjoy working on these systems because I can fix problems outside of crisis mode, since there is always a machine ready to go.
If you'd like help with this, or if you'd even like someone to set it up and host it for you, I'd be happy to help. (dbock at codesherpas dot com)
Don't spend your money purchasing 2-6 servers... seriously - look into what two decent machines in this setup will cost at ServerBeach, and think how much easier this will be if they handle all the physical stuff for you. The configuration details are something you can handle yourself, and it is not that hard if you are comfortable at a command-line prompt.