Logs are kept because you need them. I wouldn't expect that to be apparent to someone who has never had to manage a real network, but logs with a reasonable retention period are essential. There is a basic tension at work, though: from a management perspective you need logs, the more the better, but the more you keep the greater your liability.
For something basic like netflow (which any sane network administrator is going to collect) you might have months of data. Places vary: some insist they need years, others go with less, and some do without. But there's more than just netflow (which is just essential metadata about network traffic): you might use Bro to log web requests or copy out executables, or even just dump the whole stream to disk. The latter takes an enormous amount of disk space and *significantly* increases liability, so places range from not doing it at all to keeping an incredible amount (12+ months).
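To get a feel for why full packet capture eats disk, a back-of-the-envelope estimate helps. The 500 Mbps average rate and 30-day window below are illustrative assumptions, not figures from any particular network:

```python
# Rough storage estimate for full packet capture.
# Both inputs are illustrative assumptions, not real measurements.
avg_rate_mbps = 500      # assumed average link utilization, megabits/s
retention_days = 30      # assumed retention window

bytes_per_day = avg_rate_mbps / 8 * 1_000_000 * 86_400
total_tb = bytes_per_day * retention_days / 1e12

print(f"{total_tb:.0f} TB for {retention_days} days")  # 162 TB for 30 days
```

Even a modest link adds up to a serious storage bill, which is one reason retention windows for pcaps vary so widely.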
How does it help network administrators? Netflow data is pretty essential to almost any troubleshooting task on the network. A complaint about traffic being dropped can be confirmed or denied with a netflow lookup. Need to know what hosts an IP talked to? On certain ports? Doing a basic plausibility check for data exfiltration? URL logging gives a trace for a compromise and can then be used to construct indicators of compromise. Capturing executables on the fly is helpful in the post mortem: what executable was downloaded to a compromised host? Do AV companies know about it yet? Full packet captures are extremely helpful in retrospection and can fill in the rest of the blanks, especially if you are into the questionable practice of MITMing SSL connections.
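The netflow lookups described above are essentially filters over flow records. A minimal sketch in Python; the record layout and the sample data are made up for illustration, real flow records carry far more fields:

```python
# Hypothetical flow records: (src_ip, dst_ip, dst_port, bytes_sent).
flows = [
    ("10.1.2.3", "203.0.113.7", 443, 12_000),
    ("10.1.2.3", "198.51.100.9", 22, 4_800_000_000),
    ("10.1.2.4", "203.0.113.7", 80, 9_500),
]

def peers(flows, src_ip, port=None):
    """Which hosts did src_ip talk to, optionally on a given port?"""
    return sorted({dst for s, dst, p, _ in flows
                   if s == src_ip and (port is None or p == port)})

def bytes_out(flows, src_ip):
    """Total bytes sent: a first plausibility check for exfiltration."""
    return sum(b for s, _, _, b in flows if s == src_ip)

print(peers(flows, "10.1.2.3"))      # both peers the host talked to
print(peers(flows, "10.1.2.3", 22))  # just the SSH peer
print(bytes_out(flows, "10.1.2.3"))  # ~4.8 GB out: worth a closer look
```

Real tools (nfdump, SiLK, etc.) do the same kind of filtering and aggregation at scale, but the logic of "confirm or deny" is exactly this.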
How does it increase liability? When hit with ediscovery, if you've got it you have to produce it. That can get expensive, very expensive if you are doing full pcaps.
Setting retention is a matter of finding a balance between what you need for troubleshooting and what you can afford to store and maintain, without dropping below a certain minimum that is not really defined anywhere but can hit you in court (a while back Slashdot covered a company that got in trouble because it didn't log anything to disk, which was sufficiently out of line with the norms for its line of business). It matters what your peers are doing.
We have varying retention even for essentially the same data, depending on where and how it is being logged and stored. Sometimes these differences are bureaucratic/political; other times they reflect the capacity of a particular data store. Retention might be defined as a volume of data (10GB), a fraction of capacity (90%) or a span of time (30 days).
Access control logging (I assume you are referring to logging authentication events) very likely has a considerable lifetime at any facility, but the ability to map specific traffic to a user might be considerably shorter-lived. For example, many universities employ NAT and, depending on the specifics of the implementation, may or may not be able to map traffic to a user in any given circumstance regardless of retention.
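Mapping external traffic back through NAT to a user means joining two logs: the NAT translation log (public port plus time range to internal IP) and the authentication log (internal IP plus time range to user). A sketch with hypothetical entries; real logs would also key on public IP and protocol, and timestamps would be real datetimes rather than the small integers used here:

```python
# Hypothetical NAT translation log: (public_port, t_start, t_end, internal_ip).
nat_log = [
    (40001, 100, 200, "10.0.0.5"),
    (40001, 250, 300, "10.0.0.9"),   # same public port, reused later
]
# Hypothetical auth log: (internal_ip, t_start, t_end, user).
auth_log = [
    ("10.0.0.5", 0, 500, "alice"),
    ("10.0.0.9", 0, 500, "bob"),
]

def user_for(public_port, t):
    """Resolve external traffic at time t to a user, if the logs allow it."""
    for port, t0, t1, internal_ip in nat_log:
        if port == public_port and t0 <= t <= t1:
            for ip, a0, a1, user in auth_log:
                if ip == internal_ip and a0 <= t <= a1:
                    return user
    return None   # translation expired, never logged, or no auth record

print(user_for(40001, 150))   # alice
print(user_for(40001, 275))   # bob: same port, different translation
print(user_for(40001, 400))   # None: port not translated at that time
```

If either log is missing or has rolled over, the join fails, which is exactly why retention of the shortest-lived log bounds your ability to attribute traffic.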
As to the point of ensuring provision of service to users, QoS doesn't cut it, at least not on a subscriber network. If it were just QoS rules, access controls wouldn't even be relevant. But doing meaningful traffic shaping (which QoS is not) does require *some* form of user mapping. It could be done anonymously, though in practice I don't see how that would work well (for reasons having nothing to do with fair queuing).
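To make the user-mapping point concrete: a per-user shaper has to key its state on the user, so every packet must first be attributed to one. A minimal token-bucket sketch, with the class name, rates and timestamps all invented for illustration:

```python
# Minimal per-user token bucket. The point is the dictionary key:
# shaping per user only works if traffic can be attributed to a user.
class UserShaper:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst   # tokens/s refill, bucket size
        self.buckets = {}                     # user -> (tokens, last_seen)

    def allow(self, user, now, cost=1):
        tokens, last = self.buckets.get(user, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        ok = tokens >= cost
        if ok:
            tokens -= cost
        self.buckets[user] = (tokens, now)
        return ok

shaper = UserShaper(rate=1, burst=2)
print([shaper.allow("alice", t) for t in (0, 0, 0)])  # [True, True, False]
print(shaper.allow("bob", 0))                         # True: separate bucket
```

An anonymous variant would key on some opaque per-subscriber token instead of an identity, which is the "could be done anonymously" case; the bucket logic is unchanged, only the attribution step differs.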
I think I've answered the question of why to keep logs. If not, talk to an administrator, whether server or network. Once you gain an understanding of what the job requires, keeping logs makes sense, and the risk becomes going whole hog and keeping too much, which is when the legal liability aspect needs to be considered.