Ask Slashdot: MRTG and IP Accounting
Webdude asks:
"I run a server that has many IP aliases and have found a
very strange thing: all the aliases receive data but all
data is sent out through eth0. I have
MRTG up and running but it doesn't help me because all
traffic is going out eth0. I set up IP Accounting and found
that it records the packets traveling properly but now my
big question is how do I get MRTG (or something similar)
to graph the stats that are in the IP Accounting tables???"
Related question: Any IP quota software for Linux? (Score:1)
Re:Related question: Any IP quota software for Lin (Score:1)
Use IP accounting under Linux 2.2, so you can check the counters from a script or the like and then change behaviour. E.g. mail you at 4, again at 6, and shut down at 7.7.
(Don't go to 8: your cable company probably charges you
for MAC headers and anything else they can scam.)
With little scripts... (Score:1)
#!/usr/local/bin/perl
($In,$Out)=(split(" ",`/usr/bin/netstat -b -I $ARGV[0]`))[10,11];
$_=`uptime`;
/^.*up (.+),[^,]+user/;
$Uptime=$1;
$Host=`hostname`;chop($Host);
print <<ENDE;
$In
$Out
$Uptime
$Host
ENDE
Not that elegant though, but a quick hack that works.
(It's written for NetBSD's 1.4 netstat, uptime etc. output.)
In your mrtg.conf you can then use something like:
Target[some_name]: `path-to-script interface`
MRTG 2.X and MRTG 3.x (Score:1)
Active development on MRTG 2.x (currently 2.7.4) has essentially stopped. There have been occasional patches and slight feature enhancements over the past year or two, but little active development.
The reason active development stopped is that MRTG uses a very simplistic data storage mechanism. Whenever MRTG 2.x runs, it must read in its entire data file and write it back out. While this works for small to medium numbers of interfaces (up to a few hundred), beyond that it slows down dramatically and becomes unusable. The workaround has been to divide the load across multiple instances of MRTG.
To resolve this, Tobi started working on a data storage tool he called the RRD Tool, the Round Robin Database [ee-staff.ethz.ch]. Using this tool, you can support several thousand interfaces. It is also distributed under the GPL, as is everything he distributes. You can find more details about it at the above noted site or in the USENIX presentation [ee-staff.ethz.ch] he made.
While there is technically no "MRTG 3.0," several data collecting frontends [ee-staff.ethz.ch] are already in production use for Tobi's RRD Tool backend. The above mentioned cricket is one of them.
your SOLUTION (Score:2)
mydata.pl:
#!/usr/bin/perl
# mydata.pl
#
# parse linux 2.2.x ip-accounting file
# return data for use by mrtg
#
# line 1: data in
# line 2: data out
# line 3: uptime (left blank)
# line 4: hostname
use strict;
#modify these
my $hostname="www.break.org";
my $ipaccfile="/proc/net/ip_fwchains";
if($ARGV[0] eq "") { exit(1); }
my $linenr=$ARGV[0];
#read and parse correct line of ip_fwchains
sub get
{
my $find=shift;
my $return=0;
my $count=0;
open(FL, $ipaccfile) or exit(1);
while(<FL>) {
  if(/^\s*$find\s/) {
    $count++;
    if($count==$linenr) {
      #grab the byte counter from the matched ip_fwchains line
      #(assumed field order: chain src->dst iface flg invflg proto pkts bytes ...;
      # adjust the index if your 2.2.x layout differs)
      my @fields = split(' ', $_);
      $return = $fields[7];
      last;
    }
  }
}
close(FL);
return $return;
}
my $in=&get("input");
my $out=&get("output");
print("$in\n$out\n\n$hostname\n");
and for your mrtg.cfg:
WorkDir:
Interval: 5
Icondir:
Target[all]: `/root/mrtg/mrtg/mydata.pl 1`
MaxBytes[all]: 1250000
Title[all]: Total TCP/IP Traffic
PageTop[all]: Total TCP/IP Traffic
Here's why... (Routing tables) (Score:3)
The packet receiver listens for packets on eth0. If it finds a packet with a destination address matching one of the host's addresses, it accepts the packet, logs the address it came to, and passes it to the application layer.
When an application (web server in this case) sends data out, the kernel looks at the destination IP address, looks at the routing table, sees that the default route is eth0, so all packets go to eth0. So when using ip accounting, all outgoing packets are logged with a destination of eth0.
What you want to do is to log the SOURCE address, not the destination address. In order to do this you must use source-routing, so that your routing table routes based on the packet's source address instead of just the destination address. Add a route for packets with each source IP and a destination of the corresponding eth0 alias, and then your packets will be logged the way you wanted.
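Under Linux 2.2 this kind of source-based routing can be set up with the iproute2 tools (the kernel needs policy routing enabled). A sketch, where the addresses, the table number, and the gateway are made-up examples standing in for one of your eth0 aliases:

```shell
# Route on the packet's SOURCE address: give the alias its own
# routing table so its outgoing packets are attributable to it.
# 10.0.0.2 = hypothetical eth0 alias, 10.0.0.1 = hypothetical gateway.
ip rule add from 10.0.0.2 table 102
ip route add default via 10.0.0.1 dev eth0 table 102
```

One rule/table pair per alias; IP accounting can then match the per-alias source addresses on the way out.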
What about the real issue (Score:1)
--Aaron
Precisely !! (Score:1)
--Aaron
Check out IPAC. (Score:2)
---Vitaliy.
MRTG uses SNMP (Score:1)
Outgoing follows incoming (Score:1)
With TCP/IP streams, there are about as many outgoing packets as there are incoming packets. So the graphs aren't going to look much different.
(BTW, anyone know of any inaccuracies in the data from /proc/net/dev when there are hundreds of virtual IPs? Does Linux always keep those statistics accurately?)
Forrest J. Cavalier III, Mib Software Voice 570-992-8824
MRTG can do more than SNMP (Score:1)
--
Ive had the same problem (Score:1)
I never got an answer, so I just gave up and assumed it couldn't be done.
IP Chains and MRTG (Score:1)
-Robert Gash
I have something working but.... (Score:1)
ipfwadm or ipchains (Score:1)
Example here: (Score:1)
Re:Related question: Any IP quota software for Lin (Score:1)
bandmin might be useful (Score:1)
check this page out (Score:1)
SNMP Howto's? (Score:1)
I'm fairly knowledgeable in TCP/IP, but have found few good HOWTOs, docs, or books on SNMP. Can anyone help? Thanks.
jay2@home.com
Re:Nop, sorry, no cigar ;) (Score:1)
Load balancing over different cards would be really useful for some people - even if it wouldn't be useful for the original question-asker-guy
Re:MRTG and ip accounting (Score:1)
In order to test it (and make sure it worked when I was setting it up) I had to make sure to have traffic going through the interfaces I was monitoring in order to get anything on the graphs.
I didn't get anything on the graphs otherwise.
MRTG and ip accounting (Score:2)
Once you've got the snmp stuff installed you need to find out how many interfaces it sees:
# snmpwalk localhost public interfaces
You should see something like this:
A lot of text scrolling by real fast - look specifically for this:
interfaces.ifTable.ifEntry.ifOperStatus.1 = INTEGER: up (1)
interfaces.ifTable.ifEntry.ifOperStatus.2 = INTEGER: up (1)
interfaces.ifTable.ifEntry.ifOperStatus.3 = INTEGER: up (1)
interfaces.ifTable.ifEntry.ifOperStatus.4 = INTEGER: down(2)
I have four interfaces (lo, eth0, eth1 and eth2 [three are up and eth2 is down])
Look farther down the list for the statistics on that port (look for interfaces.ifTable.ifEntry.ifInOctets.1 = COUNTER: some-big-number-here - this line counts the bytes (octets) that come in over interface 1) and choose which number (1, 2, 3, or whatever you have) to put in your mrtg.conf file.
My mrtg.conf file looks like this:
Target[domainname]: 3:public@domainname.here
I've set it to monitor interface 3 in this config line. You can have multiple configs so that you can monitor multiple interfaces. I have both my main ethernet interfaces being monitored.
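For example, a hypothetical mrtg.conf fragment with one Target block per SNMP interface index (the hostname, indexes, and labels are illustrative, not from the post):

```
# one block per interface index found via snmpwalk
Target[eth0]: 2:public@myhost.example.com
MaxBytes[eth0]: 1250000
Title[eth0]: eth0 Traffic
PageTop[eth0]: eth0 Traffic

Target[eth1]: 3:public@myhost.example.com
MaxBytes[eth1]: 1250000
Title[eth1]: eth1 Traffic
PageTop[eth1]: eth1 Traffic
```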
Something else you may want to look at to accomplish accounting for ip stuff is ipac (look at freshmeat for a url). It doesn't use snmp but instead uses the proc filesystem and counters that you define [you can watch any sort of traffic you want: nntp, smtp, www, pop3, imap - in any direction that you specify] to create graphs that show you how much traffic you've had pass through that machine.
MRTG just counts the traffic currently going by the interface when your cron job kicks in and tells it to look at the interface you specify - it doesn't count all the traffic that went by during the time period between cron jobs. MRTG creates nicer graphs though.
ipac actually graphs the amount of packets that went by - it doesn't matter if there's no traffic going by when you run the stats-fetching tool (fetchipac).
Hope that helped.
Cricket & MRTG (Score:2)
Firstly, you should probably upgrade to Cricket, as it is more flexible, easier to manage and under active development unlike MRTG.
( http://www.munitions.com/~jra/cricket/ [munitions.com] )
As one of the previous posters mentioned, MRTG does indeed use SNMP to get its data. Now I'm assuming you use the CMU SNMP agent (or the UCD.. doesn't matter). You probably only have the MIB-II SNMP definitions supported by your agent.
What is probably happening is that your agent doesn't know anything about the data you are trying to collect. Now with Cricket or MRTG you can configure it to collect from a script. So you will probably need to write a script to ssh (or rsh) into the machine you are monitoring, collect the data and print it to stdout. Then it will happily graph that for you.
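A sketch of such a collector in shell. The interface name (eth0), the remote host, and the /proc/net/dev field layout are assumptions; the parsing is split into a function so the ssh part stays a one-liner:

```shell
#!/bin/sh
# Print the four lines MRTG expects from an external script:
# bytes in, bytes out, uptime (left blank here), hostname.
# mrtg_lines parses a Linux /proc/net/dev snapshot given on stdin;
# rx bytes are the 1st and tx bytes the 9th field after "eth0:".
mrtg_lines() {
    awk -F: -v host="$1" '
        /eth0/ { split($2, f, " "); print f[1]; print f[9] }
        END    { print ""; print host }'
}

# typical (untested here) use over ssh:
#   ssh monitored.host cat /proc/net/dev | mrtg_lines monitored.host
```

The script's path then goes in backticks as the Target of the Cricket/MRTG config, as shown elsewhere in this thread.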
HTH HAND.
Joe
--
BSD or Linux? More Info (Score:1)
Re:What about the real issue (Score:1)
A little off topic from the original, but hopefully someone will read this and find it useful as a quick and dirty trick for interface balancing. =>
Re:Source IP for outgoing packets and IP routing (Score:1)
You'll have to use a different routing tool (iproute2, I think):
route add src virtual.address dest default dev eth0:2
Re:MRTG uses SNMP (Score:2)
However, some versions of snmpd (CMU) do not produce accurate byte counts from the interface statistics: it uses a kludge that averages all packet sizes to 308 bytes. So what you see with SNMP may not be accurate. We sent a modified snmp_vars.c that correctly reported byte counts to CMU, and I think they rolled it into versions > 3.5. Version 3.3 didn't even bother reading the real counters.
So beware of what you think is valid data reported by CMU's snmpd. It's probably wrong.
Just an FYI.
Routing of IP aliases on RedHat (Score:1)
Webdude asked: "I run a server that has many IP aliases and have found a very strange thing: all the aliases receive data but all data is sent out through eth0."
Maybe you're using a RedHat distrib?
They are especially designed not to set routing on aliases (don't know for other distribs) :
If you consider this as a problem, you can return to a more normal operation by commenting those lines in /sbin/ifup (in RedHat 5.2):
I haven't yet tried RedHat 6.0, but I think you have to remove [ "$ISALIAS" = no ] && from this line in /sbin/ifup (ifup-aliases is the same):
Note the way the init scripts rely on configuration information that Linuxconf stores nobody knows where... Since I saw that, I removed the thing.
After all, if I wanted such crap, I would use Windows or Solaris...
One of the major design choices of Unix was to use simple text files for configuration, and that's a feature I especially care about
Well (Score:1)
The problem with this, of course, is that to actually account for traffic on a per-ethercard basis you would need to somehow dynamically add a route whenever an incoming packet is detected. Tricky, I'd say.
G'luck tho
Nop, sorry, no cigar ;) (Score:1)
And, since the question-asker-guy says he has ip _aliases_, i.e. eth0:0, eth0:1, etc., that all map to the same ethercard, that has no effect; it all goes through the same net connection, the kernel just labels it differently. It would help if you had 4 physical cards and plugged them all into 4 ports on a switch, say.
Re:Source IP for outgoing packets and IP routing (Score:1)
'route add the cardalias it came in on'
Will fix it.
Ack (Score:1)
route add *ip-address* *the_card_used*
Re:Well (Score:1)