
Comment P.S. (Score 1) 349

The entire table is available for adding logging and whatnot. The (empty) gap between the ACCEPT line and the final DROP in the sub-table can also be decorated with logging, or with whatever else you might want to do with the failed packet; then you get one log entry per blacklisting event instead of one per attempt. That logging wasn't germane to the example mechanic.
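For instance (a sketch; the --log-prefix string is just illustrative): since rules are appended in order, issuing this append after the hashlimit ACCEPT line and before the final --set/DROP line lands the LOG rule in that gap, so only packets that are about to be blacklisted get logged.

iptables --append SSHTHROTTLE --jump LOG --log-prefix "ssh-throttle: "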

Comment Re:Better than that... (Score 1) 349

Why would the filter be on the NAT interface anyway?

If you don't know that no solution is one-size-fits-all, you don't deserve to have that job.

If you cannot suss out that a generalized solution is a starting point for adaptation, then you should never have managed to get the job in the first place. /doh.

Comment Better for me... (Score 1) 349

I use the rules below. They have several benefits. They create a blacklist of sorts containing only the bad actors, which can be shared with other rule groups. The test happens only once, instead of in separate rules for reject, log, etc. I don't bother logging when I can check the bad_actors table directly, but any fun things can happen after the ACCEPT (or RETURN), as long as you make sure the failing path reaches the pivotal --set operation.

Most importantly, this allows 5 attempts per hour from any given IP address; exceed that and the address has to go away for a day (obviously adjustable) before it can try again. This means you get deep resistance (60 seconds is too short to truly deter some probes), but if you accidentally invalidate a host via over-use you only have to wait a day for things to be good again.

This plays well with the SSH ControlMaster option if you frequently have to connect to the host from a particular source. I also allow only two password attempts per connection.

Note that you can replace ACCEPT with RETURN if you want to run other tests after this before accepting the packet.

# Create the throttle chain.
iptables --new-chain SSHTHROTTLE
# Already-blacklisted sources get dropped here; --update also refreshes their timestamp, so the 24-hour (86400 s) clock restarts on every new attempt.
iptables --append SSHTHROTTLE --match recent --name bad_actors --update --seconds 86400 --jump DROP
# Allow up to 5 new connections per hour per source IP, with a burst of 2.
iptables --append SSHTHROTTLE --match hashlimit --hashlimit-name ssh_throttle --hashlimit-upto 5/hour --hashlimit-mode srcip --hashlimit-burst 2 --jump ACCEPT
# Anything over the limit lands here: record it in bad_actors, then drop it.
iptables --append SSHTHROTTLE --match recent --name bad_actors --set --jump DROP
# Send new inbound SSH connection attempts on the external interface(s) through the chain.
iptables --append INPUT --in-interface ext+ --proto tcp --match conntrack --ctstate NEW --dport 22 --syn --jump SSHTHROTTLE

Comment Auto-blacklisting throttle. (Score 1) 349

iptables --new-chain SSHTHROTTLE
iptables --append SSHTHROTTLE --match recent --name bad_actors --update --seconds 86400 --jump DROP
iptables --append SSHTHROTTLE --match hashlimit --hashlimit-name ssh_throttle --hashlimit-upto 5/hour --hashlimit-mode srcip --hashlimit-burst 2 --jump ACCEPT
iptables --append SSHTHROTTLE --match recent --name bad_actors --set --jump DROP
iptables --append INPUT --in-interface ext+ --proto tcp --match conntrack --ctstate NEW --dport 22 --syn --jump SSHTHROTTLE

Comment Better than that... (Score 3, Informative) 349

The rules below create a dynamic blacklist. Any IP address that connects more than five times in an hour (pass or fail) goes into a blacklist that persists until they stop trying for at least a day.

This will record your bad actors _and_ the entry will "expire" in case you invalidate a system by accident (e.g. through over-use).

iptables --new-chain SSHTHROTTLE
iptables --append SSHTHROTTLE --match recent --name bad_actors --update --seconds 86400 --jump DROP
iptables --append SSHTHROTTLE --match hashlimit --hashlimit-name ssh_throttle --hashlimit-upto 5/hour --hashlimit-mode srcip --hashlimit-burst 2 --jump ACCEPT
iptables --append SSHTHROTTLE --match recent --name bad_actors --set --jump DROP
iptables --append INPUT --in-interface ext+ --proto tcp --match conntrack --ctstate NEW --dport 22 --syn --jump SSHTHROTTLE

Comment You don't want to _know_ about the broken stuff (Score 1) 430

I did try to get the coding standard fixed.

Meanwhile, elsewhere in the code, in full compliance with the coding standards, I found:

(1) Unconditional static down-casts from base types to derived classes despite the possibility of error-event classes being the return value (e.g. classes A, A_failed, and B, where B and A_failed were both derived from A, and then a key static_cast from A* to B* without any check for A_failed at all; see the sketch after this list).

(2) Shaving down (bit-shifting and masking) pointers, passed through a void* argument, into four bytes (chars) that were pushed into a byte queue, then later popped off as four bytes and shifted back into a pointer of some type. (The "real-time programmer," who came from a VxWorks background, didn't believe in just making a queue of void* and moving each pointer in one piece, for whatever reason.) [Also broken because the A*-to-void*-to-B* three-way conversion isn't necessarily safe: to be safe and symmetric it should be cast to A*, reinterpret_cast to void*, reinterpret_cast back to A*, then dynamic_cast to B*.]

(3) So many unsafe operations in the module call prototypes that I eventually just made my code "correct" (e.g. call-safe), then put a conversion layer that used the unsafe API in both directions into its own translation unit "", full of forwarding functions whose comments spelled out why the calling convention was flirting with disaster, so that all the unsafe calls and unsafe casts were in one pile and in one place.
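As a minimal sketch of item (1), using the class names from above but with every other detail assumed: the static_cast compiles and "works" right up until the factory hands back an A_failed, at which point the B* is garbage.

#include <iostream>

// Hypothetical reconstruction of the shape of the problem.
class A { public: virtual ~A() {} };
class A_failed : public A {};  // the error-event type, also derived from A
class B : public A { public: void frob() { std::cout << "frob\n"; } };

A* make_thing(bool ok) { return ok ? static_cast<A*>(new B) : new A_failed; }

int main() {
    A* a = make_thing(false);
    // What the code did: an unconditional down-cast, no check for A_failed.
    //   B* b = static_cast<B*>(a); b->frob();   // undefined behavior here
    // What it needed: a checked down-cast.
    if (B* b = dynamic_cast<B*>(a)) b->frob();
    else std::cout << "got an A_failed (or some other non-B)\n";
    delete a;
}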

Item 3 was somewhat insurrectionist, because I couldn't get any of my criticisms acknowledged by, much less past, my boss, whose "it worked when we tested the prototype code that one time" attitude kept things tightly broken.

So we had nicely regimented coding standards, but I always look at the brand name of any medical equipment I see sitting next to a bed now, because I know what the code base for one particular brand really looks like, and how little they gave a rat's ass about doing things right (as opposed to doing things the way someone decided they should be done, based on single test runs).

That guy who noticed that if we built buildings the way we build software, the first woodpecker to come along would destroy civilization? Yeah, he wasn't exactly wrong.

Comment Re:Ya to me sounds like "I'm special" syndrome (Score 1) 430

Treating all programmers as interchangeable morons who cannot be trusted to write code is a sign of managerial immaturity.

An outstanding programmer often knows when rules must be broken, just as an outstanding jazz musician knows when to use discord.

Now, just because the Dunning-Kruger effect causes programming noobs to assume they are masters deserving of liberty doesn't mean the actual masters are a priori being immature.

Fault: there is too much baby in this bathwater. Get a sieve before proceeding. Session closed... 8-)

Comment And yet, you are wrong to "find it impossible..." (Score 1) 430

I have worked on projects that lost hundreds of millions of CPU cycles because the coding standards encoded individual ideals into runtime performance killers. The example I have placed elsewhere: "must use getters/setters" plus "may not put function definitions inside class definitions" turns "class foo { int X; ... int getX() const { return X; } };" (which can be optimized down to a register load) into a (not optimizable at all) far call from each point of use into foo.o (the object file), possibly after a PIC (position-independent code) fixup for a shared library.
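A minimal sketch of the difference (file and class names are mine, not from any real project): the only change between the two versions is where the getter is defined, and only the first is visible to the compiler at the point of use, so only it can be inlined down to a register load.

// foo_inline.h -- getter defined inside the class definition; the body
// is visible everywhere the header is included, so foo::getX() can
// inline away to a single load instruction.
class foo {
    int X;
public:
    foo() : X(0) {}
    int getX() const { return X; }
};

// foo_outline.h -- declaration only, as the standard demanded.
class foo2 {
    int X;
public:
    foo2();
    int getX() const;  // body lives in foo_outline.cpp
};

// foo_outline.cpp -- without link-time optimization, every call to
// foo2::getX() from another translation unit is a genuine call:
// set up, call, return, instead of one load instruction.
foo2::foo2() : X(0) {}
int foo2::getX() const { return X; }

(Modern link-time optimization can claw some of this back where the toolchain offers it, but going by the results, that wasn't on the table there.)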

And this stupidity can waste a _lot_ of man-hours. In order to get my part of the medical device that _that_ coding-standards bug was written under to run in acceptable time (e.g. to not overuse my CPU-time budget in a freaking real-time heart-monitoring app), I had to break the coding standard and put the getters/setters (or occasionally a plain "public" variable) into the class definition anyway, then check it into the version control system, then go through the "naughty programmer" output list and create a bug report for each such optimization and assign that bug to each "naughty naughty" message. Then the bug review team had to deal with those bugs. Then the code review team had to approve those optimizations.

Even with this only costing me a couple of hours on the one set of modules, when you consider the ten or twelve people the automation system then had to nag, and the hours _they_ lost, you get into a lot of wasted time overall.

Now add the part where every _other_ programmer silently followed the automatically enforced rules and ran over the time budget for their code (so the system was too slow), and all _their_ code had to be fixed once everybody noticed that _mine_ was not so plagued.

Then the cost of the project running late and eventually being determined to be "not worth the effort being expended" and getting canceled outright...

Well, truly hundreds of man-hours and _many_ thousands of dollars were wasted on a project that was largely killed because all the programmers were muzzled into paralysis by a huge steaming pile of these sorts of pointless restrictions (many of which would have been good for a _class_ in programming, but most of which were _toxic_ to a real project).

Well, you know, there are reasons that failed projects fail, and sometimes those reasons involve over-regimentation of the otherwise creative process of finding solutions and expressing those in code.

Comment Re:Standards are (_Not_ Usually) Good (nor bad) (Score 3, Insightful) 430

Standards, and enforcement of same, are (usually) a symptom of the "interchangeable morons" school of management. It presumes that all problems have an (Ayn Rand-ish) uniform solution that all _programmers_ will process identically.

A small number of "do not do"s with an "unless you have good cause" escape hatch make reasonable _guidelines_, but something as firm as a "standard" is a great way to make your great programmers no better than your worst over time.

Standards often contain bugs themselves: things that impose a hidden cost on the programmer and the program, and that can bog both down.


Even Microsoft eventually abandoned its "Standard" of encoding the variable's type as dirt on the front of the variable name, such as "lpintThingy", having plagued its code with Thingies that are no longer long pointers to integers and that cannot be globally searched and replaced, because that hazards destroying other code.

Combined rule failure: "use getters and setters" + "don't put member function definitions inside class definitions" => what would be a register load operation becomes an optimization-resistant far call across translation units, every dang time you set or read a scalar.

If you cannot trust your programmers to write good code then making them format it so it _looks_ like good code is a wasted effort.

If you cannot trust your great programmers to write great code, eventually they will stop even trying, and you will be left with either a hassle-avoiding idiot or someone looking for a new job.

If you cannot trust your new programmers to understand your previous code then your new programmers are probably inferior to your older coders.

If you are not winnowing out the _bad_ programmers via rational code review then your management is useless.

All but the most rudimentary coding guidelines are productivity and creativity and performance murderers.

Every company eventually realizes this, on and off, for a while, each time a management team ages into a job, and then forgets it again when they hire new managers.

Comment (In support) (Score 1) 430

Most "coding standard bugs" are hidden in a meta-level of reasoning that is harder to find than solving actual crap code.

True story: I was working at a medical equipment manufacturer, writing C++. These two atomic rules, placed far away from one another in the standard, made a mess. See if you can spot it.

(A) No member function may be defined within the class definition, and instead must be defined in the translation unit for that class. [e.g. you have to put the member definitions in the .cpp file, not the .h file, so "class X { ... void foo() { /* implementation */ } };" is not legal.]

(B) Access to member data may only take place via "getter" and "setter" functions. [as opposed to putting the variables in the "public:" part of the class.]

Both are harmless enough by themselves. But I opened a crap-ton of bugs on this issue, because the two rules taken together turned simple register load/store operations into unoptimizable far calls between translation units for each get/set operation. So I put my getters and setters in the class definitions like a sort-of-sane person (I didn't try to force sanity on them completely by just making some of the trivial values public; I don't think they could have taken the strain) and, as required by the version-control integration with the coding-standards enforcement and bug-tracking tools, I filed a request for exception for every single damn such usage and let them choke on their procedure.
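To make the combination concrete, a sketch with invented names: under rules (A) and (B) together you get the header below, and the hot loop in the comment makes two cross-translation-unit calls per iteration; with the bodies moved into the class definition, the same loop compiles down to register arithmetic.

// sample.h -- what rules (A) + (B) force you to write:
class Sample {
    int value_;
public:
    Sample();
    int getValue() const;   // body in sample.cpp, per rule (A)
    void setValue(int v);   // rule (B): no public data allowed
};

// In some hot path elsewhere:
//   for (int i = 0; i < n; ++i)
//       s.setValue(s.getValue() + buf[i]);  // two far calls per pass
//
// With "int getValue() const { return value_; }" and the setter
// written inside the class definition instead, the compiler inlines
// both and the loop body becomes a plain add on a register.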

But there was a reason that only _my_ code didn't run over its CPU time budget.

  A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. -- Emerson.

Comment Non-Whitespace standards can be very harmful. (Score 1) 430

True Story: Consider these two rules...

(1) Getters and setters must be used for all member variable access.

(2) No function may be defined within the class definition body, and must instead be in the corresponding translation unit. (In C++ terms, you have to put all your member functions in the .cpp file and not the .h file, etc.)

So now every get/set operation (e.g. a register load or store) is turned into a far (inter-module, cross-translation-unit, unoptimizable) call, with arguments, stack frame, etc., to a remote function that does the register load/store.

Create a variable with that Microsoft horror where you prefix the variable name with its type, as in "lpszFileName" (long pointer to a zero-terminated string, named FileName), and then change the underlying type after you have written all the code: some long pointer to int is now a long pointer to long but the name still says int, or it is now an opaque 16-bit value rather than a pointer at all, but the names in countless blocks of code still lie.
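A tiny sketch of that drift (identifiers invented): nothing stops the declaration from changing while every use site keeps the stale name.

// Version 1: the name encodes the type, Hungarian style.
// "lpsz" = long pointer to a zero-terminated string.
//   const char* lpszFileName = "readings.dat";

// Version 2, after a refactor to an opaque handle type. The
// declaration changed; the hundreds of places using the name did not:
typedef unsigned short FileHandle;   // no longer a pointer at all
FileHandle lpszFileName = 7;         // the name now lies

// Every later mention of lpszFileName still claims "string pointer",
// and a global search-and-replace can't fix it without hazarding other
// identifiers that legitimately contain the same substring.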

And as far as the whitespace thing goes, I have a Unicode non-breaking space with your name on it, particularly if you write in Python.

Coding standards that are _dumb_ can be _incredibly_ _dumb_ in many hidden ways.

Comment Google should then provide signed certs (Score 3, Insightful) 299

This cut at the free flow of information, and the parent poster's allegation that the cost is trivial, together suggest that if it really were such a nothing, then Google should offer a means to comply without forcing people to go out and pay a third party.

If it's so cheap and such a nothing, then what's the problem with them providing what is needed to interact with their own service?

Comment In which world do preferences not matter again? (Score 1) 599

You will note that I said "warmer", not "better". Preferences vary, and people can tell the difference no matter what you choose to believe.

You know why artificial hiss ("comfort noise") is added to VoIP? Because perfectly accurate digital silence is "not as good" as fake analog hiss when it comes to working with human perception.

See, we are analog beasts. We evolved in an analog world. And we _like_ analog. Part of analog is signal _loss_ through smoothing. How much of which features of sound an individual _likes_ is an _individual_ taste.

Accuracy is not always king, and "better" isn't a universal place. You keep using "better" to mean "more accurate", so you have a religious-grade opinion about someone else's subjective experience. That kind of makes you the dick pissing on other people's preferences in the name of an absolute.

So you say accuracy is better, and they say warmer signal is better. Why do you think you are the one who gets to choose for everyone?

Hubris, my young man, is its own punishment. That a subjective opinion in others bothers you to the degree that it is rant-worthy means you are suffering your own little mania.
