Linux Software

Administration on Systems w/ Lots of Users?

kidlinux asks: "Since I started using Linux I've relied mostly on documentation to learn how to use any given aspect of the system. Up until now, I've been used to setting up systems for myself and a few of my friends. I have recently been hired to set up a system which will have 100+ users. Some will have shell access, some email only, some web access, etc. When setting up a system for vast numbers of users, are things done differently? What kinds of things do I need to consider when configuring the system? Is there any documentation available for setting up large-scale systems?"

  • Clarifications... (Score:4, Insightful)

    by ameoba ( 173803 ) on Sunday September 09, 2001 @10:11AM (#2270580)
    A few questions need to be answered:

    Do you want to make email-only users (i.e., not allow shell access), or are you just expecting some users to limit their use to just that?

    What is the intended purpose of this? Is it a money-making venture, or more of a communal-access project?

    What do you plan on people using their shell accounts for? Interactive access involving compilers etc. can create a whole world of headaches.

    A few ideas for you to consider...

    Disk quotas & process caps are useful in preventing users from using all available system resources (not to mention how a single fork-bomb can ruin an admin's day).
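
    Something along these lines, for example (a rough sketch; the group and user names are made up, and it assumes the quota tools are installed and /home is mounted with the usrquota option):

        # append to /etc/security/limits.conf (needs pam_limits enabled):
        @users   hard   nproc   100        # cap processes per user (blunts fork bombs)

        # set a disk quota for one (made-up) user: soft/hard block limits in KB,
        # no inode limits; repquota prints a usage report
        setquota -u alice 500000 600000 0 0 /home
        repquota /home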

    If you plan on having different types of accounts, write scripts to automate the account-creation process (knowing/learning a scripting language will pay off big).
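
    For example, something like this (a minimal sketch; the account classes, group names and shells here are just illustrations, adjust to taste):

        #!/bin/sh
        # newuser.sh: create an account of a given class (hypothetical classes)
        # usage: ./newuser.sh <username> <mail|web|shell>
        NAME="$1"; CLASS="$2"
        case "$CLASS" in
          mail)  USERSHELL=/sbin/nologin; GRP=mailonly ;;
          web)   USERSHELL=/sbin/nologin; GRP=webusers ;;
          shell) USERSHELL=/bin/bash;     GRP=users    ;;
          *)     echo "usage: $0 <username> <mail|web|shell>" >&2; exit 1 ;;
        esac
        useradd -m -s "$USERSHELL" -G "$GRP" "$NAME" && passwd "$NAME"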

    Resist the urge to run bleeding-edge software on the machine. OTOH, you have to keep up with security patches.

    If you have any say in the hardware the machine is on, spring the extra cash for good hardware. SCSI drives pay off bigtime when you have many different users trying to access the disk at once. ECC RAM is another good idea. "Server-grade" hardware will usually last longer and be less prone to failures.

    Logs, Logs, Logs. Make sure to keep good logs and have a good log-rotation process in place. Not only will they help you identify security problems, but they can be useful for debugging the system (and, if you feel like putting the work in, for identifying the actual usage patterns of the system so you can streamline/optimize it for what it's really being used for).
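
    On most Linux systems logrotate handles the rotation side; a minimal sketch of a drop-in file (the service name and paths are made up):

        # /etc/logrotate.d/myservice (hypothetical)
        /var/log/myservice/*.log {
            weekly
            rotate 8        # keep about two months of history
            compress
            missingok
            notifempty
        }
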
    • Erp, one thing I forgot:

      Plan for growth. There are places you can cut corners when you only have 100 users that are going to kill you when you have 1000. I'm not saying that you should necessarily design the system around the growth, but try to avoid doing things that will need to be undone at a later date.
  • Normally once you reach a big number of users (100 is not that big ;) you want to think about different password backends, as plain text files do not scale very well :)

    I would recommend looking into LDAP and the various PAM modules...
    (there are even MySQL modules for authentication)
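
    As a rough illustration, a PAM stack that tries LDAP first and falls back to the local passwd/shadow files (assumes pam_ldap is installed; file names and options vary between distributions):

        # sketch of entries in, e.g., /etc/pam.d/login
        auth      sufficient  pam_ldap.so
        auth      required    pam_unix.so try_first_pass
        account   sufficient  pam_ldap.so
        account   required    pam_unix.so
        password  sufficient  pam_ldap.so
        password  required    pam_unix.so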

    • LDAP and PAM are great for other operating systems like Solaris and AIX.

      Unfortunately, the current Linux PAM codebase is a big, ugly, bug-ridden mass of spaghetti code.
      • Unfortunately, the current Linux PAM codebase is a big, ugly, bug-ridden mass of spaghetti code.

        I keep hearing people say this - yet I personally use PAM to do Kerberos authentication on a good number of Linux workstations, and I have very few problems with it. I'm not trying to flame you here, but I am curious to find out just what you mean when you say that Linux's PAM implementation is big, ugly, and bug-ridden. Could you give some sort of example of something that needs to be fixed?

        Inquisitive minds want to know.
  • Some tips ... (Score:5, Interesting)

    by dustpuppy ( 5260 ) on Sunday September 09, 2001 @11:25AM (#2270699)
    First off, 100 users really isn't what I would call 'vast'. Try 6000 users spread around the nation - that is vast (yes, I personally look after such a system).

    Anyway, some tips:
    • map out uid ranges for specific functions, e.g. give the users who only have email access a uid in the range 1000-2000, etc. This isn't always possible, but if it can be done, it does help keep things neat and in order
    • have a variety of scripts (be they command line or GUI driven) to help you automate and simplify user administration, e.g. you might have a script to bulk-create users who only need web access
    • work out, document and follow standards - e.g. passwords must be changed every 90 days, 3 unsuccessful login attempts will disable your account, full name in the comment field. Make sure these rules are adhered to.
    • have monitoring scripts in place so you know your system isn't buckling under the load if all the users decide to log on at once
    • if you can, put CPU utilisation caps on each user - this will stop one user's runaway process from chewing 100% of your CPU and slowing the system down for the other 5999 users
    • set up your system so that you can control the number of sessions each user can have. This will prevent someone sharing their account, or logging in multiple times and consuming resources. Of course, if you've got ample resources, then this won't be a problem ... but it's something to check.
    • Periodically check for duplicate uids or null passwords etc. - these can be big security holes and they are easy to miss when you have a lot of users (a quick check along these lines is sketched just below this list).
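
    For that last point, a quick cron-able sketch (it reads /etc/shadow, so it has to run as root):

        #!/bin/sh
        # print any uid that appears more than once in /etc/passwd
        cut -d: -f3 /etc/passwd | sort | uniq -d
        # print any account with an empty password field in /etc/shadow
        awk -F: '$2 == "" { print $1 " has no password set" }' /etc/shadow
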
    • Re:Some tips ... (Score:3, Interesting)

      by Tet ( 2721 )
      passwords must be changed every 90 days


      Fine, but enforce this through policy, not by technological means (e.g., password aging). There have been a couple of studies that show password aging actually reduces security. If people are forced to think of a new password on the spot, they tend to either pick a ridiculously easy-to-guess one, or to write their new password down so they don't forget it. If, on the other hand, you have a script that tells you who hasn't changed their password in the last 3 months, you can ask them to change their password manually. That gives them time to think of a suitable replacement. Does anyone have pointers to the studies I'm referring to? I saw them a few years ago now, but I've lost the details...
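
      A sketch of such a report (it reads the last-change field in /etc/shadow, so it needs to run as root; the 90-day threshold is just the example figure from above):

          #!/bin/sh
          # list users whose password was last changed more than 90 days ago
          NOW=$(( $(date +%s) / 86400 ))   # days since the epoch
          awk -F: -v now="$NOW" '$3 != "" && now - $3 > 90 { print $1 }' /etc/shadow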

  • by unitron ( 5733 ) on Sunday September 09, 2001 @11:34AM (#2270716) Homepage Journal
    "When setting up a system for vast amounts of users, are things done differently?"

    Yes, you will need a megaphone and a bigger bullwhip.

  • by Soft ( 266615 ) on Sunday September 09, 2001 @11:56AM (#2270766)
    Do you mean just a single server to which users connect remotely, or are all the client machines included? Are the latter homogeneous? Are you starting from scratch or is there another infrastructure in place?

    If your only worry is a single machine, IMO there is no fundamental difference from a home system, except that it has to be (even more) solid (think redundant power supply, UPS, RAID, backups...), scalable (think RAM, SCSI disks), and you have to pay (even more) attention to potential local root holes. And use a system that works; don't get fancy with the latest 2.5.1pre6 Linux kernel or 5.x-CURRENT BSD!

    Think about how it will be accessed and don't cut corners on security (use SSL for POP/IMAP if possible, favor SSH/SCP/SFTP over telnet/ftp, use encrypted passwords for SAMBA). You may want to set up restrictions on local users - quotas, limits on CPU/RAM usage, etc. You will want to automate account creation: define different classes of users and standard configurations, but also groups and mailing lists - manual maintenance of those can be a major PITA.

    OTOH, if you're also responsible for all the clients, then there's a must-read: Bootstrapping an architecture [infrastructures.org]. Resist any and all temptation, from yourself or others, akin to "100 users is not enough to bother with automating everything, we'll just handle it by hand", etc. I've been through this myself and regret all the time lost installing and reinstalling systems, spending hours opening batches of accounts, cleaning up old ones, and so on... Computers are good at repetitive tasks, and this one can and should be automated. Of course, keep solidity in mind; you don't want your whole network to halt because your upgrade server is down for maintenance...

    Finally, if you aren't starting from scratch, if you've just been "promoted" to sysadmin for 100 users with an existing network, then good luck. Your best bet is to maintain the old infrastructure, set up a new one in parallel, and migrate users and machines one by one. But make sure to interview many users and upset as few old habits as possible, otherwise I hope your asbestos suit is ready!

  • Name services (Score:2, Informative)

    Learn DNS, NIS (or, better, NIS+ if you dare) or LDAP.

    You don't want to be maintaining plain text files on each machine...
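
    The switch away from flat files mostly comes down to /etc/nsswitch.conf once the backend is set up; a sketch (assumes the LDAP or NIS client libraries are installed):

        # /etc/nsswitch.conf (sketch)
        passwd:  files ldap     # or: files nis
        group:   files ldap
        shadow:  files ldap
        hosts:   files dns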

  • by randombit ( 87792 ) on Monday September 10, 2001 @09:44PM (#2276059) Homepage
    Perl and shell are your friends. Script like it was going out of style. Never do anything manually more than twice (three times, tops). This rule will save you a lot of time in the long run.

    If you have multiple machines (more my area of experience), NFS, NIS, DNS, LDAP, etc. are super-important. Make an NFS'ed /usr/local/etc that contains all the important config files, and have a cron job copy the files over every N minutes or hours. Red Hat and Mandrake have a kickstart system that allows you to re-install a system with minimal effort (other Linux distros probably have something similar). It's useful. Damn useful.
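
    A sketch of that cron job (the mount point and the half-hour schedule are made up; rsync just makes the copy cheap when nothing has changed):

        # e.g. in /etc/crontab on each client machine
        */30 * * * *   root   rsync -a /net/configmaster/usr/local/etc/ /usr/local/etc/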

    The high-end hardware is a waste of money with only 100 users (unless they're constantly hitting the machine hard). One machine we've got here is a 1 GHz AMD Tbird, Abit KT7A-RAID, 768 MB PC133, big software-striped RAID IDE disks, pretty vanilla (cost us like $1000 in June, it would be $750 today). It handles web (tons of PHP and Perl, also SSL), mail (SMTP, IMAPS, SPOP3), database, rsync, and SSH for 65 people with no problem (plus NIS, NFS'ing out /home, NTP, etc). I'm sure it could take at least 200-300 users before running into any problems (the old server was a 233 MHz K6 with 128 MB of memory - it held up surprisingly well for 3 full years with the same load). SCSI, especially, is not worth the money; it's nice, but the price/performance isn't there unless you need the fastest possible stuff.
  • Some will only have web access? Then I've got a *lot* of users on my systems!

    100 users is nothing. It's no big deal. The only worry is whether you've got enough CPU cycles and disk space.

    Just keep an eye on what they're doing. Look out for disk/CPU/bandwidth hogs.
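
    A couple of one-liners for that kind of spot check (nothing fancy, just the usual tools):

        du -sk /home/* 2>/dev/null | sort -rn | head -10          # biggest home directories
        ps -eo user,pid,pcpu,comm --sort=-pcpu | head -15         # busiest processes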
