I ask that you go into it with an open mind. Agile stuff has been treated as an overhyped bogeyman (look at all the posts here), and understandably many of us geeks are wary of approaching it. However, if you pull away the layers of hype and bullshit, you'll recognize that it's just a different mindset guiding the work. That mindset happens to work a lot better with people and software (I am now convinced of this).
I'm hoping you have a good instructor for the training. But I'm also hoping you have access to that instructor for questions afterwards, because that's really the most useful part. Every kind of training I've seen presents idealized examples to some degree, but a smart instructor will be able to tell you how to make it work for your situation.
For example, for varied skill sets: once you form your team, try to take on a mix of stories (simplified use cases) that *approximately* reflects the skills breakdown. If only two people can touch your matrix code, take on only one or two matrix-work-intensive stories per sprint. After all, you have to be realistic.
(However, maybe a third person on your team would like to learn that matrix code. By sitting that person down with the matrix code expert and working through several dev tasks together, they will start picking up those skills. Sure, it will feel slower at first, but now you have greater versatility on your team.)
If you are primarily doing fixes, then just reflect that in the wording of your stories. For example, we had to do some work to improve the startup performance of our product. We came up with stories that said, "As an administrator, I want my server to restart in less than X seconds." When we did some loose estimates, we realized this was too much work to fit into a one-week sprint. So we split it into "As an administrator, I want my server to restart in less than X seconds, on Windows" and "... on Linux," because some investigation had shown that the Windows and Linux work was largely different. This is not a perfect split of user stories (you want to keep platform details out as much as possible), but that's OK. Do what works for you.
I have resolved that, if I can help it, I am never again going to work at a place that does old-school waterfall. My team right now is in the middle of a transition to a Scrum-esque approach. We are running into plenty of problems: our team members have varied skills, some are much more versatile than others, and so on. However, it's already a better situation than what we had before.
So far, I am enjoying working off a prioritized product backlog, concentrating on getting smaller features DONE DONE, working in short iterations (2 weeks), and having a team that is fairly open to "let's change it if it doesn't work."
For one, many times before, we'd spend months nailing down a long list of requirements, estimating the whole freaking thing, arguing between product management and engineering, and cutting things to try to fit a set of features into some deadline (based on start-of-project estimates). Our product management would fight to keep certain things in that were important, but not *that* important, because they didn't want them dropped until the next release, many months away. The developers (including me) would sit there, pulling estimates out of our collective ass for a ton of features, with barely any information for some of them.
Then we'd write extensive software requirement specifications (SRSs) based on what was agreed to be implemented. QA would go off and write test cases based on those SRSs. Of course, the use cases in these requirements were too detailed in some respects, not detailed enough in others, and completely missing the boat in yet other aspects. But since they were signed off as "complete," we spent the rest of the project following what was agreed upon at the beginning instead of adjusting as new information came to light.
NOW, with the new process, we only estimate maybe 2-3 months of work, and even that we do loosely, using relative sizing (this thing is way harder than that thing, that other thing is much easier, etc.), without doing detailed task breakdowns. We only do detailed task breakdowns for the stories (think of them as simplified use cases) slated for the next two weeks. Those breakdowns are pretty detailed (we try to split tasks until they can be done in a day or less), but, again, we only do them for two weeks' worth of work at a time. This spreads out the pain.
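If it helps to see the backlog-plus-capacity idea concretely, here's a toy sketch. The story names, point values, and capacity number are all made up for illustration; real relative sizes come from the team's judgment, not a script.

```python
# Hypothetical sketch: pull stories off the top of a priority-ordered
# backlog until the sprint's (invented) point capacity is filled.
# This is deliberately simplified -- a real team would discuss each
# story rather than skip one mechanically when it doesn't fit.

def plan_sprint(backlog, capacity):
    """Select stories in priority order until capacity is used up."""
    sprint, used = [], 0
    for story, points in backlog:
        if used + points <= capacity:
            sprint.append(story)
            used += points
    return sprint

# Backlog is already ordered by product-management priority;
# points are relative sizes, not hours.
backlog = [
    ("server restarts in < X seconds, on Windows", 8),
    ("server restarts in < X seconds, on Linux", 5),
    ("admin can export logs", 3),
    ("matrix-intensive reporting story", 13),
]

print(plan_sprint(backlog, capacity=16))
```

The point of the sketch is only that detailed planning stops at the capacity line: everything below it stays loosely sized until it floats near the top.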
So the benefit is that we take on the most important work first. We can only do so much in the next two weeks (or two months), so all this detailed planning happens only for the things product management cares about most. Every two weeks, product management can reshuffle priorities, and that's OK! No problem; we'll take on whatever is needed next.
Developers like this because they don't spend weeks and months estimating everything under the sun. Product management likes it because they can shuffle things around as market conditions change or new information comes to light.
ALSO, focusing on done, done, done. We used to have this "code complete" (or "feature complete") date: the date by which all features needed to be finished, after which we'd test and fix any bugs. Some features took way longer to get done (often due to unexpected technical problems... sounds familiar?) and some were quicker than expected. One or two people would be working on each feature over a period of six weeks or longer, and even then they only had to report rough progress along the way.
So, of course, what happens is that if a feature hit technical issues, some coders would rush to check in code without writing enough tests, or really even testing it enough. Often, they would compromise the design just to get it in by the date. So, often, we'd have poor, half-baked code checked in by this "code complete" date. Then testing afterwards starts revealing issues. Sometimes the issues require a design adjustment, but now it's too late to make it, because we are in this "endgame."
Of course, you can argue that a better developer would not call a feature complete without sufficient testing. This better developer would have gone to the manager, said "this can't be done by the deadline," and let management deal with it. The reality is that in a typical larger corporation, there are a lot of not-so-better developers, content to just crank out something that qualifies as "coded" by this "code complete" date.
SO, focusing the team on having smaller features be really "done" AND demonstrable by the end of each iteration forces any issues to the surface. The feature has to be fully tested and working, or it's not done. Oh shit, we have technical issues that add work and mean we can't finish functional tests this sprint? Well, no functional tests, then it's not done. So that comes up right away (or at the latest by the end of the sprint), and we can then deal with it by readjusting stories, tasks, etc. (and everyone realizes that this is OK).
By talking every two weeks about the process and how everything went, people get a regular chance to complain. Because all agile processes encourage feedback and adjustment, you can try things one way for a sprint and another way the next.
Of course, all of this requires management and coworkers who are open to change. You will find that most are.