Think about the complexity of duplication (Score 4, Insightful)
there's hardly any fallback if any of the services dies or an office is disconnected. Now, as the hardware must be replaced, I'd like to buff things up a bit: distributed instances of services (at least one instance per office) and a fallback/load-balancing scheme (either to an instance in another office or a duplicated one within the same).
Is that really necessary? I know that we all would like to have bullet-proof services. However, is the network service to the various offices so unreliable that it justifies the added complexity of instantiating services at every location? Or even introducing redundancy at each location? If you were talking about thousands or tens of thousands of users at each location, it might make sense just because you would have to distribute the load in some way.
What you need to do is evaluate your connectivity and its reliability. For example:
- How reliable is the current connectivity?
- If it is not reliable enough, how much would it cost over the long run to upgrade to a sufficiently reliable service?
- If the connection goes down, how does it affect that office? (I.e., if the Internet is completely inaccessible, will having all those duplicated services at the remote office let people keep working as though nothing were wrong? If an outage is so disruptive that local duplicates don't actually help, then why bother?)
- How much will it cost over the long run to add all that extra hardware, along with the burden of maintaining it and all the services running on it?
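To make that last comparison concrete, a quick back-of-the-envelope calculation helps: expected annual cost of outages versus the annual cost of the extra redundancy. Every number below is a made-up placeholder (outage rate, hourly downtime cost, hardware and admin costs) -- plug in your own figures.

```python
# Back-of-the-envelope: expected annual outage cost vs. cost of redundancy.
# All figures are hypothetical placeholders -- substitute your own numbers.

outages_per_year = 6          # observed WAN outages at the office
avg_outage_hours = 2.0        # average duration of an outage
cost_per_hour_down = 500.0    # lost productivity per hour of downtime

expected_downtime_cost = outages_per_year * avg_outage_hours * cost_per_hour_down

extra_hardware_per_year = 3000.0   # amortized cost of local servers
extra_admin_per_year = 4000.0      # ongoing maintenance/patching burden
redundancy_cost = extra_hardware_per_year + extra_admin_per_year

print(f"Expected annual outage cost: ${expected_downtime_cost:,.0f}")
print(f"Annual cost of redundancy:   ${redundancy_cost:,.0f}")

if redundancy_cost < expected_downtime_cost:
    print("Local redundancy may pay for itself.")
else:
    print("Upgrading connectivity (or doing nothing) may be cheaper.")
```

With these particular numbers the redundancy costs more than the downtime it would prevent, which is exactly the kind of result that should give you pause before adding complexity.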
Once you have answered at least those questions, you will have the information you need to make a sensible decision.