Learning High-Availability Server-Side Development?
fmoidu writes "I am a developer for a mid-size company, and I work primarily on internal applications. The users of our apps are business professionals who are forced to use them, so they are more tolerant of access times being a second or two slower than they could be. Our apps' total potential user base is about 60,000 people, although we normally see only 60-90 concurrent users during peak usage. The work being done is generally straightforward reads or updates that typically hit two or three DB tables per transaction. So this isn't a complicated site, and the usage is pretty low. The problems we address are typically related to maintainability and dealing with fickle users. From what I have read in industry papers and from conversations with friends, the apps I have worked on just don't address scaling issues. Our maximum load during typical usage is far below the maximum potential load of the system, so we never spend time considering what would happen under extreme load. What papers or projects are available for an engineer who wants to learn to work in a high-availability environment but isn't in one?"
2 words (Score:2, Informative)
map reduce
http://labs.google.com/papers/mapreduce.html [google.com]
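For anyone wondering what the two words mean: a map step emits key/value pairs, and a reduce step folds the values per key. A toy single-machine analogy in Java — nothing like Google's actual distributed implementation, just the shape of the computation:

    import java.util.*;
    import java.util.stream.*;

    // Word count, the canonical MapReduce example, shrunk to one machine:
    // "map" each line to words, then "reduce" by grouping and counting per key.
    public class WordCount {
        public static void main(String[] args) {
            List<String> lines = Arrays.asList("the quick brown fox", "the lazy dog");
            Map<String, Long> counts = lines.stream()
                    .flatMap(line -> Arrays.stream(line.split("\\s+")))               // map phase
                    .collect(Collectors.groupingBy(w -> w, Collectors.counting()));   // reduce phase
            System.out.println(counts);  // {brown=1, dog=1, fox=1, lazy=1, quick=1, the=2}
        }
    }

The distributed version in the paper shards the map phase across thousands of machines and shuffles the intermediate pairs to reducers by key; the programming model is the same.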
Re: (Score:3, Interesting)
I did, however, find this sentence disturbing:
However, given that there is only a single master, its failure is unlikely; therefore our current implementation aborts the MapReduce computation if the master fails.
Huh? So, because there is only one master it is unlikely to fail? This job takes hours to run. This is similar to saying that if you have one web server, it is unlikely to fail.
Re: (Score:2, Funny)
Re:2 words (Score:5, Funny)
Re: (Score:2)
Oh, and they also use it all the time on one of the world's largest data warehouses.
Re:They use it to make money, so no criticism allo (Score:3, Interesting)
Google seems to have taken this elementary technique and turned it into something that can kick the crap out of an over-engineered solution under the right circumstances. I've read the paper, and assuming this is really used the way they say it is, I can say that it does a fantastic job of both performance AND HA, based on my personal experiences with gmail, google, groups, adwords, maps,
Re: (Score:3, Informative)
But who knows... you could be right. I'm just playing devil's advocate.
Re: (Score:2)
RAID 0+1 is intended to be fast and reliable.
A single master is a single point of failure. However, if that server's failure doesn't disrupt production, then they can ignore that single point of failure, because it doesn't matter in practice. I would imagine that the code on the master is well tested by now (and may be very simple anyway), which just means that they now have to worry about hardware failures
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Insightful)
Probability of failure with a single drive: 1 in 1000
Probability of failure with ten drives: equals the probability of drive 1 failing, or drive 2 failing, or drive 3 failing ... or drive 10 failing.
The easier way to solve that equation is to reverse it — it equals one minus the probability of drive 1 not failing, and drive 2 not failing ... and drive 10 not failing.
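In code, for concreteness (the 1-in-1000 figure is the post's hypothetical number, and this assumes drives fail independently):

    // Minimal sketch of the calculation above.
    public class DriveFailure {
        public static void main(String[] args) {
            double pFail = 1.0 / 1000.0;   // P(a single drive fails) -- hypothetical
            int drives = 10;
            // P(at least one of n drives fails) = 1 - P(no drive fails)
            double pAnyFail = 1.0 - Math.pow(1.0 - pFail, drives);
            System.out.printf("P(any of %d drives fails) = %.5f%n", drives, pAnyFail);
            // Prints ~0.00996, i.e. roughly 1 in 100 -- ten times the single-drive risk.
        }
    }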
Re: (Score:3, Insightful)
No fallacy.... (Score:5, Insightful)
Yes. If you take that sentence in context, the answer is "Yes." Compared to the likelihood that one of the thousands of worker-machines will fail during any given job, it IS unlikely that the single Master will fail. Moreover, while any given job may take hours to run, it also seems that many take just moments. Furthermore, just because a job may take hours to run doesn't mean it's CRITICAL that it be completed in hours. And, at times when a job IS critical, that scenario is addressed in the preceding sentence: It is easy for a caller to make the master write periodic checkpoints that the caller can use to restart a job on a different cluster on the off-chance that a Master fails.
If a job is NOT critical, the master fails, the caller determines the failure by checking for the abort-condition, and then restarts the job on a new cluster.
It's not a logical fallacy, nor is it a bad design.
For the benefit of anyone reading through, here is the paragraph in question. It follows a detailed section on how the MapReduce library copes with failures in the worker machines.
Re: (Score:2, Insightful)
Well, it's easy to say something isn't 'A', and then not spend a word on what IS 'A'.
If I'm so wrong on scalability maybe you can explain it here to me. Thanks.
Re:2 words (Score:5, Informative)
The Macy's Door Problem is a great example of a Scalability 101 problem that map_reduce has no way to address. In the early 30s, when most department stores were making big, flashy front entrances to their stores with big glass walls and paths for 12 groups of people at a time, doormen, signage, the whole lot, Macy's elected to take a different approach. They set up a small door with a sign above it. The idea was simple: if there was just the one door, it would be a hassle to get in and out of the store; thus, it would always look like there was a crowd struggling to get in - as if the store was just so popular that they couldn't keep up with customer foot traffic. The idea worked famously well.
In server design, we use that as a metaphor for near-redline usage. There's a problem that's common in naïve server design, where the server will perform just fine right up to 99% load. Then there'll be a tiny usage spike, and it'll hit 101% very briefly. However, the act of queueing and dequeueing withheld users is more expensive than processing a user, meaning that even though the usage drops back to 99%, by the time that 2% overqueue has been processed, a new 3% overqueue has formed, and performance progressively drops through the floor on a load the application ought to be able to handle. I should point out that Apache has this problem, and that until six years ago, so did the Linux TCP stack. It's a much more common scalability flaw than most people expect.
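The usual fix, sketched in Java (the handler and names are mine, not Apache's): bound the queue and make the overload path nearly free, so a spike costs one cheap rejection instead of a snowballing backlog.

    import java.util.concurrent.*;

    public class ShedLoad {
        // Fixed pool, bounded queue: when the queue is full, the request is
        // rejected in O(1) instead of joining a backlog whose bookkeeping
        // costs more than serving the request would have.
        private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
                8, 8, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1000),
                new ThreadPoolExecutor.AbortPolicy());

        public void handle(Runnable request) {
            try {
                pool.execute(request);
            } catch (RejectedExecutionException overloaded) {
                sendBusyResponse();   // e.g. HTTP 503 -- cheap, and the queue never snowballs
            }
        }

        private void sendBusyResponse() { /* tell the client to retry later */ }
    }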
Now, that's just one issue in scalability; there are dozens of others. However, map_reduce has literally nothing to say to that problem. Do I need to rattle off others too, or maybe is that good enough? I mean, we have the quadratic growth of client interconnections (Metcalfe's Law), which is easily solved with a hub process; we have making sure that per-client processing stays constant time (that is, O(1) as opposed to O(lg n) or worse), which means no std::map, no std::set, no std::list, only pre-sized std::vector and very careful use of hash tables; we have packet fragmentation throttling; we have making sure that you process all clients in order, to prevent response-time clustering (like when you load an apache site and it sits there for five seconds, so you hit reload and it comes up instantly); all sorts of stuff. Most scalability issues are hard to explain, but maybe that brief list will give you the idea that scalability is a whole lot bigger an issue than some silly little google library.
Talk to someone who's tried to write an IRC server. Those things hit lots of scalability problems very early on. That community knows the basics very, very well.
Well... (Score:4, Informative)
Zawodny is pretty good...
Re:Well... (Score:4, Informative)
Re: (Score:2)
The unfortunate thing about databases (Score:4, Insightful)
Is that most of them have poor native APIs when it comes to scalability. Some of them have something like
But that is far from optimal. When will they be smart and release an async API that notifies you via callback when complete? This would be very useful for apps that need maximum scalability.
Microsoft's .NET framework is actually a great example of doing the right thing - it has these types of async methods all over the place. But then you have to deal with cross-platform issues and problems inherent with a GC.
It's not that much different for web frameworks either. None that I've tried (RoR, PHP, ASP.NET) have support for async responses — they all expect you to block execution should you want to query a db/file/etc., and just launch boatloads of threads to deal with concurrent users. I guess right now, with hardware being cheap, it is easier to support rapid development and scale an app out to multiple servers.
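For illustration, here is roughly the calling convention being asked for, sketched in Java. queryAsync and the stub are invented names, and this merely wraps a blocking call in a thread pool — a real async driver would avoid burning a thread per in-flight query — but it shows the callback-on-completion shape:

    import java.util.concurrent.*;
    import java.util.function.Consumer;

    // Sketch: the caller registers what to do on completion and immediately
    // goes back to serving other requests instead of blocking.
    public class AsyncDb {
        private final ExecutorService io = Executors.newFixedThreadPool(16);

        public void queryAsync(String sql, Consumer<String> onDone, Consumer<Exception> onError) {
            io.submit(() -> {
                try {
                    String result = runBlockingQuery(sql);  // stand-in for the real driver call
                    onDone.accept(result);
                } catch (Exception e) {
                    onError.accept(e);
                }
            });
        }

        private String runBlockingQuery(String sql) { return "42"; }  // stub
    }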
Re:The unfortunate thing about databases (Score:5, Informative)
Most databases have async APIs. Postgresql and mysql have them in the C client libraries. Most web development languages, though, do not expose this feature in the language API, and for good reason. Async calls can, in rare cases, be useful for maximizing the throughput of the server. Unfortunately, they're more difficult to program, and much more difficult to test.
High-scale web applications have thousands of simultaneous clients, so the server will never run out of stuff to do. Async calls have zero gain in terms of server throughput (requests/s). They may reduce a single request's execution time, but the gain does not compensate for the added complexity.
Re: (Score:2)
The only async apis they have are like the example I gave before. These are sub-optimal!
It's true that with a single server handling 10, 100, 200 RPS, the stupid threaded model will likely not make a big difference in _throughput_. It will make a MASSIVE difference in CPU/RAM usage though, and let you easily scale up to 10000 RPS on commodity hardware using just a single thread. Some people like to maximize their hardware usage.
And async is certainly not much more difficult - it's a new way of thinking
Re: (Score:2)
I actually find async programming with a good API to be easier, because everything's an event, and you don't have to design the flow of control of everything else around constantly returning to poll for results, or deal with the locking and race conditions if you
Re: (Score:2)
Do you think that non-blocking IO really offers enough performance gains to compensate for the resulting spaghetti code? This isn't a rhetorical question, I'm really curious.
Re: (Score:2)
Why the spaghetti?
Troubling blocking IO code in C++:ish pseudo:
So, add this object
Re: (Score:2)
I don't understand what is wrong with that.
Are you unhappy about the /*do something*/ part, because you'd want the handle released early to minimize t
Re: (Score:2)
see this comment [slashdot.org] for more along the lines of what I'm talking about.
I'm unhappy about the wait() call because it doesn't lend itself to fully async coding - if you've got nothing to do in that context, you're stuck blocking the thread when it could be doing other things. So now you have to waste CPU on context switches and waste RAM on state for a new thread.
A good callback-based API doesn't have these deficiencies. You just call a function to dequeue completion callbacks, from however many threads yo
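A bare-bones sketch of that design in Java (names invented; the point is that completions land in one queue and any number of threads can drain it without blocking on any single operation):

    import java.util.concurrent.*;

    public class CompletionQueue {
        private final BlockingQueue<Runnable> completions = new LinkedBlockingQueue<>();

        // Called by whatever finishes the I/O: enqueue the completion callback.
        public void complete(Runnable callback) {
            completions.add(callback);
        }

        // Called by however many worker threads you like: dequeue and run one.
        public void drainOne() throws InterruptedException {
            completions.take().run();
        }
    }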
Re: (Score:2)
I'm unhappy about the wait() call because it doesn't lend itself to fully async coding - if you've got nothing to do in that context, you're stuck blocking the thread when it could be doing other things. So now you have to waste CPU on context switches and waste RAM on state for a new thread.
So? You can spend a little bit of CPU time and run more threads or, since the load on the DB is likely to be the bottleneck, get more boxes. Hardware is cheap. Debug time is not, and async programming is harder than
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
What I meant to say was of course:
Since you seem to maybe be talking circumspectly about Java:
Re: (Score:3, Interesting)
I'm afraid the parent post is an example of not seeing the forest for the trees...
Application code should never, ever be aware of deployment issues. Making it aware of such things is a sure way to ensure nightmares when your environment changes. For example, let's say you have to send mail. You could take the option of always talking to localhost under the assumption that your app will always be deployed on a machine with a mail server. But consider the case when the app is taken and deployed in
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
look at Saas development (Score:2)
Here goes... (Score:3, Informative)
1. Check all your SQL and run it through whatever profiler you have. Move things into views or functions if possible.
2. CHECK YOUR INDEXES!! If you have SQL statements running that slow, the likely cause is not having proper indexes for the statements. Either make an index or change your SQL.
3. Consider using caching. For whatever platform you're on there's bound to be decent caching.
That's just the beginning... but the likely cause of most of your problems. We could go on for a month about optimizing.. but in the end if you just stuck with what you have and checked your design for bottlenecks you could get by just fine.
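To make points 1 and 2 concrete, a small JDBC sketch assuming a hypothetical orders table and MySQL-style EXPLAIN output — adapt to whatever profiler and schema you actually have:

    import java.sql.*;

    // Sketch: check what the database actually does with a hot query, then
    // add the index the plan says is missing. Table and column names invented.
    public class IndexCheck {
        public static void main(String[] args) throws SQLException {
            try (Connection c = DriverManager.getConnection("jdbc:mysql://localhost/appdb", "app", "secret");
                 Statement s = c.createStatement()) {

                // 1. Profile the statement: a full table scan shows up here.
                try (ResultSet rs = s.executeQuery(
                        "EXPLAIN SELECT * FROM orders WHERE customer_id = 42")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("type") + " / rows=" + rs.getString("rows"));
                    }
                }

                // 2. Add the index the query needs, so the scan becomes a lookup.
                s.executeUpdate("CREATE INDEX idx_orders_customer ON orders (customer_id)");
            }
        }
    }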
Re: (Score:2)
Re: (Score:2)
High availability!=high performance (Score:5, Insightful)
Re: (Score:2)
Exactly right. I'd like to add that if you want to write really scalable code then use an asynchronous approach as much as possible. Some programming languages and toolkits make this easy, some make it hard, but it's possible in any. If your database server is slow responding to your application server, make sure your app server can do useful work while it's waiting. The same is true of communication between parts of the server.
I'd thoroughly recommend that you learn Erlang, if you haven't already. Th
Erlang is very cool, CouchDb uses it (Score:2)
Re: (Score:2)
High performance = short response times. In your case you can think about caching more, and about tuning the system and database access. Maybe you can make the application more scalable, but once you move the database to a different server than the application, you first get some extra (network) overhead instead of performance, especially in low-load situations. And more iron/servers means more money.
High availability [wikipedia.org] is about 24x7 operation and no single point of failure. One method for this is clustering [wikipedia.org] (more application
Re: (Score:2)
Luckily, we now have a good Erlang book in print (again): Programming Erlang by Joe Armstrong. Learn it, live it, love it.
I disagree. Erlang is perfectly good for general programming tasks and particularly well-suited for the sorts of demands placed on public web applications (which is sort of the undercurrent of the requester's question, I think). And while it's true that messaging, lightweight para
Re: (Score:2)
Calling C code from Erlang is not a task for the faint-hearted either. You have to write a port driver to do it safely, which is a lot
Re: (Score:3, Insightful)
Disk reads and writes were the least of our troubles once we scaled well past a small-enterprise volume of data. The sheer number of moving parts in our environment (not just physical parts, but bits flowing too) killed productivity, and we wound up with the complete inability to cache anything.
There's simply too much data flowing back and forth to make caching pay for itself and too often will a hard drive fail requiring even more b
I don't code for it directly (Score:4, Interesting)
Many of the code guidelines we have established are there to help with this: use transactions, don't lock tables, use stored procedures and views for anything complicated, things like that.
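For example, the "use transactions" guideline in JDBC terms, sketched against a hypothetical accounts table — the point is that related updates commit or roll back together:

    import java.sql.*;

    // Sketch: wrap related updates in one transaction so a failure can't
    // leave the tables half-updated. Table and column names invented.
    public class TransferDao {
        public void transfer(Connection c, int fromId, int toId, int amount) throws SQLException {
            boolean oldAutoCommit = c.getAutoCommit();
            c.setAutoCommit(false);
            try (PreparedStatement debit = c.prepareStatement(
                     "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                 PreparedStatement credit = c.prepareStatement(
                     "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                debit.setInt(1, amount);  debit.setInt(2, fromId);  debit.executeUpdate();
                credit.setInt(1, amount); credit.setInt(2, toId);   credit.executeUpdate();
                c.commit();                       // both rows change, or neither does
            } catch (SQLException e) {
                c.rollback();                     // undo the partial work
                throw e;
            } finally {
                c.setAutoCommit(oldAutoCommit);
            }
        }
    }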
I guess my answer is that we delegate it to the server group or the dba group and let them deal with it. I guess this means the admins there are pretty good at what they're doing.
Re: (Score:2)
I'm in a small shop and we do this too. Truth is, we don't even have a real DBA, but a few of us know SQL Server really well. The reason we actually do it this way is cost. On small projects, Dev time is really expensive, server resources are not. If you can support 30 more clients with one $5k server, then it's simply not worth Dev time to stress over performance.
Truth is, if performance is becoming an issue, then the project should be generating enough revenue to justify the Dev time spent on performan
Watch videos/presentations. (Score:3, Informative)
check these out... (Score:4, Informative)
http://highscalability.com/ [highscalability.com]
http://www.allthingsdistributed.com/ [allthingsdistributed.com]
Not sure what you want to test. (Score:4, Insightful)
If you are using Java on Tomcat, BEA, or WebSphere, use a product like PerformaSure to see a call tree of where your Java program is spending its time. It sorts out how long each SQL statement takes too, and shows you what you actually sent. If you have external data sources, like SiteMinder, it will show that too.
If you mean "What happens if we lose a bit of hardware" simulate the whole thing on VMware on a single machine and kill/suspend VMs to see how it reacts.
Most importantly, MAKE SURE YOU MODEL WHAT YOU ARE TESTING. If you are not testing a scaled-up version of what users actually do, you have a bad test.
Another option (Score:2)
Erlang is dif
Slightly off topic (Score:3, Insightful)
Re: (Score:2)
Re:Slightly off topic (Score:4, Informative)
Basically, processes are primitives, there's no shared memory, communication is through message passing, fault tolerance is ridiculously simple to put together, it's soft realtime, and since it was originally designed for network stuff, not only is network stuff trivially simple to write, but the syntax (once you get used to it) is basically a godsend. Throw pattern matching a la Prolog on top of that, dust with massive soft-realtime scalability which makes a joke of well-thought-of major applications (that YAWS vs Apache [www.sics.se] image comes to mind,) a soft-realtime clustered database and processes with 300 bytes of overhead and no CPU overhead when inactive (literally none,) and you have a language with such a tremendously different set of tools that any attempt to explain it without the listener actually trying the language is doomed to fall flat on its face.
In Erlang, you can run millions of processes concurrently without problems. (Linux is proud of tens of thousands, and rightfully so.) Having extra processes that are essentially free has a radical impact on design; things like work loops are no longer necessary, since you just spin off a new process. In many ways it's akin to the unix daemon concept, except at the efficiency level you'd expect from a single compiled application. Every client gets a process. Every application feature gets a process. Every subsystem gets a process. Suddenly, applications become trees of processes pitching data back and forth in messages. Suddenly, if one goes down, its owner just restarts it, and everything is kosher.
It's not the greatest thing since sliced bread; there are a lot of things that Erlang isn't good for. However, what you're asking for is Erlang's original problem domain. This is what Erlang is for. I know, it's a pretty big time investment to pick up a new language. Trust me: you will make all that time back by writing far shorter, far more obvious code than you could before. You can pick up the basics in 20 hours. It's a good gamble.
Developing servers becomes *really* different when you can start thinking of them as swarms.
Impressive (Score:2)
Suddenly, applications become trees of processes pitching data back and forth in messages.
We aren't talking a win32 style message pump kind of message passing mechanism, are we? I truly can't stand the message pump in win32 - it always feels like such a 'hack'; I don't have a better solution, though, so I've be
Re:Impressive (Score:4, Informative)
The problem is, it's hard to explain why. The overhead of using things like that is tremendous; Erlang's message system is used for quite literally all communication between processes, and a system like Windows Events or MSMQ would reduce Erlang applications to a crawl. Erlang uses an ordered, staged mailbox model, much like Smalltalk's. If you haven't used Smalltalk, then frankly I'm not aware of another parallel.
It's important to understand just how fundamental message passing is in Erlang. Send and receive are fundamental operators, and this is a language that doesn't have for loops, because it considers them too high-level and unspecific (you can make them yourself; I know, that must sound crazy, but once you get it, it makes perfect sense.) You're about to see a completely different approach. I'm not saying it's the best, or the most flexible, but I really like it, and it genuinely is very different. What Erlang does can relatively straightforwardly be imitated with blocking and callbacks in C, but that involves system threads, and then you start getting locking and imperative behavior back, which is one of the things it's so awesome to get rid of (imagine - no more locks, mutexes, spinlocks and so forth. Completely unnecessary, in workload, in debugging, and in CPU time spent. It's a huge change.)
Really, it's a whole different approach. You've just got to learn it to get it (yes, I know I keep saying that). I wrote some code to help explain it to you, though of course slashdot's lameness filter wouldn't pass it, so I put it behind this link [tri-bit.com]. Sorry it's not inline.
Hopefully that will help. Sorry about the lack of whitespace; SlashDot's amazingly lame lameness filter is triggering on clean, readable code.
Re: (Score:2)
Re: (Score:2)
As far as erlang is concerned, they are. And if you want to glue erlang to the outside world without using an FFI, you can run erlang nodes in Java or Emacs Lisp (no kidding), and you can communicate with a node from the shell or even run erlang code on an ad-hoc node on the fly with erl_call.
Re: (Score:2)
Re:Slightly off topic (Score:4, Informative)
Re: (Score:2)
Whereas the current state of things leaves a lot to be desired, it's nowhere near as bad as C; that's why ETS and DETS are effectively polymorphisms of one another.
I see no reason that synchronous message passing needs any cut and pasting; why not just make a module to implement it? As far as futures, frankly I think they're misgu
Similar Question - Interviews (Score:2, Offtopic)
Languages, Libraries, Abstraction, Audience (Score:2, Interesting)
Languages - I recently saw a multi-million dollar product fail because of performance problems. A large part of it was that they wanted to build performance-critical enterprise server software, but wrote it mostly in a language that emphasized abstraction over performance, and was designed for portability, not performance. The language, of course
Re:Languages, Libraries, Abstraction, Audience (Score:5, Insightful)
Libraries - Bingo, let's throw out nice blocks of tested and working code because it's always better to write it yourself. You pretty much have to use libraries to get anything done anymore. And are you suggesting someone should write their own DB software when building a web app? Um, yeah - that web app is never getting done.
Abstractions - While most are leaky at some point, abstractions make it easier for you to focus on the architecture (which is what you should be focusing on anyway when building scalable systems).
I see these types of arguments all the time and they rarely make sense. It's like arguing about C vs. Java over a 1ms difference in running time when changing your algorithm could make seconds of difference, and changing your architecture could make minutes of difference...
Re: (Score:2)
Or the GP was completely wrong... or maybe he just has tighter resources than you.
All the things he mentioned are useful for improving performance, and all can lead to errors that will decrease said performance if you are not careful enough. Of course, if your performance hits are due to gross architectural errors, you shouldn't even think about looking into them.
Re: (Score:3, Insightful)
Re: (Score:2)
When has abstraction ever been a strength of java? It has one fucking abstraction, and there are programmers out there who say that sun didn't even get that one right.
Availability Isn't Scalability (Score:3, Insightful)
Is it just me, or is the question hopelessly confused? He's using the term "availability" but it sounds like he's talking about "scalability."
Availability is basically percentage of uptime - 99.9% availability allows roughly 8.8 hours of downtime a year, while "five nines" (99.999%) allows about five minutes. You achieve it with hot spares, mirroring, redundancy, etc. Scalability is the ability to perform well as workloads increase. Some things (adding load-balanced webservers to a webserver farm) address both issues, of course, but they're largely separate issues.
The first thing this poster needs to do is get a firm handle on exactly WHAT he's trying to accomplish, before he can even think about finding resources to help him do it.
Define your thread's purposes (Score:5, Informative)
The biggest problem I have seen is that people don't know how to properly define their threads' purposes and requirements, and don't know how to decouple tasks that have built-in latency or how to avoid thread blocking (and locking).
For example, often in a high-performance network app, you will have some kind of multiplexor (or more than one) for your connections, so you don't have a thread per connection. But people often make the mistake of doing too much in the multiplexor's thread. The multiplexor should ideally only exist to be able to pull data off the socket, chop it up into packets that make sense, and hand it off to some kind of thread pool to do actual processing. Anything more and your multiplexor can't get back to retrieving the next bit of data fast enough.
Similarly, when moving data from a multiplexor to a thread pool, you should be a) moving in bulk (lock the queue once, not once per message), AND you should be using the Least Loaded pattern - where each thread in the pool has its OWN queue, and you move the entire batch of messages to the thread that is least loaded, and next time the multiplexor has another batch, it will move it to a different thread because IT is least loaded. Assuming your processing takes longer than the data takes to be split into packets (IT SHOULD!), then all your threads will still be busy, but there will be no lock contention between them, and occasional lock contention ONCE when they get a new batch of messages to process.
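A rough Java sketch of that handoff (the message type and names are invented; the point is one queue per worker and one bulk move per batch):

    import java.util.*;
    import java.util.concurrent.*;

    // Sketch: each worker owns its own queue; the multiplexor hands a whole
    // batch to whichever worker currently has the least work queued. One
    // queue operation per batch, and workers never contend with each other.
    public class LeastLoadedDispatcher<M> {
        private final List<BlockingQueue<M>> workerQueues = new ArrayList<>();

        public LeastLoadedDispatcher(int workers) {
            for (int i = 0; i < workers; i++) {
                workerQueues.add(new LinkedBlockingQueue<>());
            }
        }

        public void dispatchBatch(Collection<M> batch) {
            BlockingQueue<M> leastLoaded = workerQueues.get(0);
            for (BlockingQueue<M> q : workerQueues) {
                if (q.size() < leastLoaded.size()) leastLoaded = q;
            }
            leastLoaded.addAll(batch);   // move the whole batch in one go
        }

        // Each worker thread drains only its own queue.
        public BlockingQueue<M> queueFor(int worker) {
            return workerQueues.get(worker);
        }
    }

Assuming processing takes longer than packetizing, every worker stays busy, and the only contention is the single bulk move per batch.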
Finally, decouple your I/O-bound processes. Make your I/O-bound things (e.g. reporting via socket back to some kind of stats/reporting system) happen in their own thread if they are allowed to block. And make sure your worker threads aren't waiting to hand the I/O-bound thread data - here, a similar pattern to the above works well in reverse: each thread PUSHING to the I/O-bound thread has its own queue, the I/O-bound thread has its own queue, and when its queue is empty, it just swaps in the contents of the worker queues (or just the next one in round-robin fashion), so the workers can put data onto their queues at their leisure again, without lock contention with each other.
Never underestimate the value of your memory - if you are doing something like reporting to a stats/reporting server via socket, you should implement some kind of Store and Forward system. This is both for integrity (if your app crashes, you still have the data to send) and so you don't blow your memory. This is also true if you are doing SQL inserts to an off-system database server - spool it out to local disk (local solid-state is even better!) and then just have a thread continually reading from disk and doing the inserts - in a thread not touched by anything else. And make sure your SAF uses *CYCLING FILES* that cycle on max size AND time - you don't want to keep appending to a file that can never be erased - and preferably, make that file a memory-mapped file. Similarly, when sending data to your end users, make sure you can overflow the data to disk so you don't have 3 MB of data sitting in memory for a single client who happens to be too slow to take it fast enough.
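A heavily reduced sketch of the store-and-forward idea in Java — no cycling files or memory mapping here, just the decoupling; the spool path and record format are invented:

    import java.io.*;
    import java.nio.file.*;

    // Sketch: workers append records to a local spool file and move on; a
    // single forwarder thread drains the spool and does the slow I/O (socket
    // write, remote DB insert). If the app dies, the spool still holds the data.
    public class StoreAndForward {
        private final Path spool = Paths.get("/var/spool/myapp/stats.saf");  // hypothetical path

        // Called from worker threads: cheap local append, never blocks on the network.
        public synchronized void record(String line) throws IOException {
            Files.write(spool, (line + "\n").getBytes(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }

        // Run in a dedicated thread: forward everything, then truncate.
        public synchronized void forwardAll() throws IOException {
            for (String line : Files.readAllLines(spool)) {
                sendToStatsServer(line);             // the slow, blocking part
            }
            Files.write(spool, new byte[0]);         // truncate after a successful send
        }

        private void sendToStatsServer(String line) { /* socket write / SQL insert */ }
    }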
And last thing, make sure you have architected things in a way that you can simply start up a new instance on another machine, and both machines can work IN TANDEM, allowing you to just throw hardware at the problem once you reach your hardware's limit. I've personally scaled up an app from about 20 machines to over 650 by ensuring the collector could handle multiple collections - and even making sure I could run multiple collectors side-by-side for when the data is too much for one collector to crunch.
I don't know of any papers on this, but this is my experience writing extremely high-performance network apps.
Has anyone actually answered the question? (Score:3, Insightful)
What you are asking about, of course, is enterprise-grade software. This typically involves an n-tier solution with massive attention to the following:
- Redundancy.
- Scalability.
- Manageability.
- Flexibility.
- Securability.
- and about ten other "...abilities."
The classic n-tier solution, from top to bottom is:
- Presentation Tier.
- Business Tier.
- Data Tier.
All of these tiers can be made up of internal tiers. (For example, the Data Tier might have a Database and a Data Access / Caching Tier. Or the Presentation Tier can have a Presentation Logic Tier, then the Presentation GUI, etc.)
Anyway, my point is simply that there is a LOT to learn in each tier. I'd recommend hitting up good ol' Amazon with the search term "enterprise software" and buy a handful of well-received books that look interesting to you (and it will require a handful):
http://www.amazon.com/s/ref=nb_ss_gw/002-8545839-
Hope this helps.
Re: (Score:3, Informative)
Re: (Score:2)
Actually, I'm going to completely agree with you; bad original search term. Amazon usually does better (and I should have checked).
The search term "enterprise architecture" seems to produce better general results.
http://www.amazon.com/s/ref=nb_ss_gw/102-6220372-7109710?initialSearch=1&url=search-alias%3Daps&field-keywords=enterprise+architecture [amazon.com]
Statelessness (Score:4, Interesting)
At least that's my opinion.
Re: (Score:2)
It is a greater headache to convert, for example, your webapp from file-based sessions to sessions persisted in the DB. And of course, be careful not to serialize too much data into cookies either.
my $0.02
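For example, a minimal sketch of DB-backed sessions in Java (table and column names invented): the cookie carries only an opaque ID, so any app server behind the load balancer can serve the next request.

    import java.sql.*;
    import java.util.UUID;

    // Sketch: the cookie stores only a random session ID; the actual state
    // lives in a sessions table that every app server can reach.
    public class DbSessions {
        public String create(Connection c, String userId) throws SQLException {
            String sid = UUID.randomUUID().toString();
            try (PreparedStatement ps = c.prepareStatement(
                    "INSERT INTO sessions (id, user_id, created_at) VALUES (?, ?, NOW())")) {
                ps.setString(1, sid);
                ps.setString(2, userId);
                ps.executeUpdate();
            }
            return sid;   // goes into the Set-Cookie header
        }

        public String lookupUser(Connection c, String sid) throws SQLException {
            try (PreparedStatement ps = c.prepareStatement(
                    "SELECT user_id FROM sessions WHERE id = ?")) {
                ps.setString(1, sid);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }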
Re: (Score:2)
Sort of, but this is a naïve understanding. State is required or you're not doing anything interesting. You're just pushing it around. If you're the guy in charge of the web servers, it might seem like having sessions on the app server and plugin based request routing is a good idea, but it just pushes the problem to the app server guy. If you're the app server guy, it might seem like a good idea to put sessions in a database but that just pushes the state there. You're not solving any fundamental prob
stateless component pools (Score:2)
HA is an IT thing (Score:2)
Re: (Score:2, Insightful)
The first two sentences here are one of my biggest pet peeves... if application developers don't start becoming more network-aware, and vice versa, I think you're dead meat. Hint: there are very few applications these days that aren't accessed over the network. I see so many "silos" like this when I'm consulting. The network guys and the app guys have no idea what the "other side" does. If they actually worked together on these types of issues instead of talking past each other, something actually migh
Re: (Score:2)
You are talking about two things (Score:3, Informative)
You can address reliability and throughput by investing a LOT of money in hardware and using things like round-robin load balancing, clusters, mirrored DBMSes, RAID 5, and so on. Then losing a power supply or a disk drive means only degraded performance.
Latency is hard to address. You have to profile and collect good data. You may have to write test tools to measure parts of the system in isolation. You need to account for every millisecond before you can start shaving them off
Of course you could take a quick look for obvious stuff like poorly designed SQL databases, lack of indexes on joined tables, and cgi-bin scripts that require a process to be started each time they are called.
Lots of Options (Score:3, Interesting)
First of all, excellent question.
Second: ignore the ass above who said to dump Java. Modern HotSpot VMs have made Java as fast as or faster than C/C++. The guy is not up to date.
Third: Since this is a web app, are you using an HttpSession/sendRedirect or just a page-to-page RequestDispatcher/forward? As much as it's a pain in the ass - use the RequestDispatcher.
Fourth: see what your queries are really doing by looking at the explain plan.
Fifth: add indexes wherever practical.
Sixth: Use AJAX wherever you can. The response time for an AJAX function is amazing and it is really not that hard to do Basic AJAX [googlepages.com].
Seventh: Use JProbe to see where your application is spending its time. You should be bound by the database. Anything else is not appropriate.
Eighth: Based on your findings using JProbe, make code changes to, perhaps, put a frequently-used object from the database into a class variable (static).
These are several ideas that you could try. The main thing that experience teaches is this: DON'T optimize and change your code UNTIL you have PROOF of where the slow parts are.
Re: (Score:3, Informative)
A good JMS provider is nice to have for HA though. Nothing like durable message storage to help you sleep well.
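For instance, a persistent send looks roughly like this (assuming ActiveMQ as the provider and a made-up queue name, but the API is standard JMS 1.1):

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    // Sketch: a PERSISTENT send means the broker writes the message to its
    // store before acknowledging, so a broker restart doesn't lose it.
    public class DurableSend {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection conn = factory.createConnection();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("orders"));
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);  // survive broker restarts
            producer.send(session.createTextMessage("order #1234 placed"));
            conn.close();
        }
    }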
Re:Lots of Options (Score:4, Insightful)
AJAX can be a performance win. It can also be a nightmare if done poorly. I've seen far too many "web 2.0" applications that flood servers with tons of AJAX calls that return far too little data without a consideration for the cost (TCP connections aren't free, logging requests isn't free).
Response time is also variable. What feels 'amazing' local to the server can be annoyingly slow over an internet connection, especially if the design is particularly interactive.
Couple things I'd suggest:
1) Don't do usability testing on a LAN. An EV-DO card wouldn't be a bad choice for an individual. For a larger scale development environment a secondary internet connection works well.
2) Remember that a page can be dynamic without AJAX. Response time toggling the display property of an object is far more impressive than establishing a new network connection and fetching the data.
3) Isolate AJAX interfaces in their own virtual host so that you can use less verbose logging for API calls. This is a good idea for images as well.
Re:You get Performance, Ease of Use, or Cost - pick (Score:2)
Performance bottlenecks often lie in the disks and network, not in the application.
Re: (Score:2)
When people think scalability has much to do with what language an application is written in, I start suspecting they've never worked in a real data center before.
Re: (Score:2)
Admit it.... (Score:5, Funny)
Disaster planning (Score:2)
Like you lose your data center? How good is your backup? Is it off-site? Do you have a tested plan for restoring the data and the system, on an interim basis, on someone else's hardware?
Then you can look at some more specific things: what happens if I lose this server, this connection, this router, and specific services - DNS, email, etc.
The big question - $$$ - depends on how much you have to lose. If you can afford a day of
High availability and scalability... (Score:2)
As for where to read from a developer perspective (which is the actual question, and one a lot of the replies seem to have missed): there are TONNES.
But split the question in two. Where can I read about HA?
Start here -> http://en.wikipedia.org/wiki/High_availability [wikipedia.org] There are also many books on the subject (I remember one of the few I happened to like came out of the Sun Blueprints boo
How about your local paper? (Score:2)
SEIA (Score:2)
A second or two? (Score:2)
Prioritize. You have statistics already about typical usage, and typical wait and service times. Fix the problem that exists, instead of the problem that doesn't, but might someday.
Re: (Score:2)
Re: (Score:2)
"The type of work being done is generally straightforward reads or updates that typically hit two or three DB tables per transaction. So this isn't a complicated site and the usage is pretty low."
There is no reason this should be taking multiple seconds. He has a basic problem there. Now is not the time to be thinking about multiple distributed tiers, as much as he wishes he were working on a really complex, cool system. He's not.
All of the people chiming in with their own details
Re: (Score:2)
Cache what you can (Score:2)
1. Client-side cache. Most developers shudder when they think of a web page being cached on the browser. However, some pages (like help pages and news articles) do not change in real time and can be stored in the client's browser for a few minutes. Learn how to use the HTTP caching directives to reduce the number of unique pages requested by each user (see the sketch below).
2. HTML Output
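A sketch of point 1 above in servlet terms (the max-age value is an example - tune it to how stale the page may safely be):

    import java.io.IOException;
    import javax.servlet.http.*;

    // Sketch: tell the browser it may reuse this response for 5 minutes,
    // so repeat visitors don't re-request rarely-changing pages at all.
    public class HelpPageServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.setHeader("Cache-Control", "public, max-age=300"); // cacheable for 300 s
            resp.setDateHeader("Expires", System.currentTimeMillis() + 300_000L);
            resp.setContentType("text/html");
            resp.getWriter().println("<html><body>Help contents...</body></html>");
        }
    }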
Good book to read (Score:2)
Deals with the subject of high availability from the IT side rather than programming, but anyone dealing with HA systems needs to understand these issues.
Actual answers (Score:2)
While there were some (very) good points about both scalability and HA, they didn't tell you how to go about learning them; HA and HS are two areas where, by reading the books or following case studies, you can understand the basic problems but not see how you actually go about building a particularly scalable or HA system (because it's usually a system, not a single server).
I've worked in maint
Tolerance of delays? (Score:2)
Actually, being forced to use your app doesn't make them more tolerant of delays. It makes *you* more tolerant because your users can't go away. They still hate the delays.
Really only 2 things to think about at the base (Score:3, Informative)
The first is handled by just about any messaging/queue system. J2EE has had one for ages; Microsoft has MSMQ, which recently (better late than never...
Then horizontal scaling. Why horizontal? Because just taking a random new box and plugging it into the network is easier and faster (especially in case of emergency) than having to take servers down to upgrade them (vertical scaling). It also adds to redundancy, so the more servers you add to your farm, the less likely your system will go down. There are documents on it all over (Microsoft Patterns & Practices has some on their web sites, non-MS documentation is hard to miss if you google for it, and many third parties will be more than happy to spam you with their solutions), but it really just comes down to: "Use an RDBMS that handles clustering and table partitioning, use distributed caching solutions, push as much stuff as possible to the client side (stuff that doesn't need to be trusted only!), and make sure that nothing ever depends on resources that can only be accessed from a single machine (think local flat files, in-process session management, etc)."
With that, no matter what goes down, things keep purring, and if anyone ever bitches that the system is slow, you just buy a $1,000 server, stick a standard pre-made image on the disk, plug it in, have fun.
Oh, and fast network switches are a must