Exactly. For content consumption, small and mobile devices are very convenient.
Who exactly spends all of their time simply "consuming" on these devices? It's virtually impossible to spend more than a day online without feeling the urge to add to the conversation, and all iDink devices and touchscreen interfaces do is get in the way of that (2-way) conversation with the outside world.
As to the consumption itself, as far as I can see, everything is clunkier on touch devices. Everything. Designers are having to make buttons and icons cartoon sized in order to accommodate simple viewing on these "computers".
I simply cannot accept the proposition that people are -- willingly -- going to accept a future of either creation or consumption on these restricted devices. Even if the whole industry collectively decides to abandon PCs, in a decade or so the current infants playing with iDinks will manage to "rediscover tactile touch based text input devices once called 'keyboards' " as a faster, better method of interfacing with their computers.
Eventually, some of them will even rediscover the command line as well.
When you talk to managers, you need to talk business. Throw every reason you think important into the trashcan. Then build your case from the ground up as a business case. Show that it saves the company money or increases productivity. Basically, make the case that your proposal == more $$$.
Essentially, you must dance the corporate Dance of the Seven Veils, in order to entice managers in the only language they are able to speak.
If I only have a small pool of money to pay tenured professors, why wouldn't I want to select the ones that have proven themselves?
How will you know when they've proved themselves?
How feasible would it be to split the internet right down the middle but share the same lines?
So on one half you could keep the wild wild west net and on the other all the cry babies and censor-happy types can have their walled wide web.
Then just onion-up the wild wild west side.
This wouldn't work because you're forgetting the censor-happy people's mentality: they aren't trying to censor the internet so that they can't get to certain material, they are trying to censor it so that _you_ can't get to certain material because the _idea_ of you looking at certain stuff in private offends them. So this kind of split couldn't happen because the censor-happy people still don't want to allow you to get to the "wild wild west" net.
Wide-scale censoring is all about "I find what you do in private to be offensive so you should be locked up for offending me!" and almost never to do with "I find this content offensive so don't want to see it myself". Much the same way as various activities happening between consenting adults in private are illegal - this isn't about protecting anyone from anything other than offense caused by their own narrow-mindedness.
Note, I do think there is a place for local-scale censorship, such as preventing kids/teachers at school from accidentally stumbling across stuff they shouldn't. However, where kids are *actively* trying to get at porn, et al., censorship is never going to work and it is far better to spot kids doing this so someone can have a talk with them. That's not to say that I necessarily think kids looking at porn is a bad thing (indeed, it's completely normal), but talking to them about it to put it into context is probably a good plan.
....and there are people who will hire them.
This is absurd.
The NSA is an organisation of bureaucratic code monkeys. It employs more mathematicians than security staff. The NSA does not do black bag operations.
An organisation like the CIA, yes, would be expected to perform such activities. But the CIA would have a lot more discretion/sense in how it went about such things.
If the NSA does actually start running "black bag" operations, I am confident they will do as poor a job of keeping it secret as they have with the rest of their Austin Powers arsenal of projects.
There are narrow trailers available that hitch to the seat tube: http://www.amazon.co.uk/Veelar-Bicycle-Trailer-Shopping-Capacity-20315/dp/B006JRR1HS/
They double as a hand cart.
Some more googling suggests that the CBL tells you the honeypot IP after listing. If this is true, could you not look in your proxy logs to see what the URLs to the C&C servers look like and block them based on a pattern that matches the part after the domain name?
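For illustration, the kind of log-scanning being suggested here might look something like this. The log format is Squid-style, and `CNC_PATTERN` is an invented placeholder pattern, not a real Zbot signature (as noted elsewhere in the thread, no obvious fingerprint was actually found):

```python
# Sketch: scan proxy logs for requests whose URL path matches a suspected
# C&C pattern, so matching paths could be fed into a proxy blocklist.
# The log format (Squid native) and the regex are illustrative assumptions.
import re

# Hypothetical signature: a long hex-looking path component ending in .bin
CNC_PATTERN = re.compile(r"/[a-f0-9]{16,}\.bin$")

def suspicious_paths(log_lines):
    """Yield URL paths from proxy log lines that match the C&C pattern."""
    for line in log_lines:
        fields = line.split()
        if len(fields) < 7:
            continue
        url = fields[6]                      # URL field in Squid native logs
        rest = url.split("://", 1)[-1]       # strip scheme
        path = "/" + rest.split("/", 1)[1] if "/" in rest else "/"
        if CNC_PATTERN.search(path):
            yield path

logs = [
    "1358208000.000 120 10.0.0.5 TCP_MISS/200 512 GET http://example.com/a1b2c3d4e5f60718.bin - DIRECT/x -",
    "1358208001.000 80 10.0.0.6 TCP_MISS/200 1024 GET http://example.com/index.html - DIRECT/x -",
]
assert list(suspicious_paths(logs)) == ["/a1b2c3d4e5f60718.bin"]
```

Whether this helps in practice depends entirely on the C&C requests actually sharing a recognisable pattern.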
There wasn't an especially obvious fingerprint I could derive from the requests when I looked (i.e. each time I've seen this, the request has been considerably different)
The issue is purely that the smarthost shares the same IP address as the web proxy and the CBL honeypot looks for *HTTP* traffic (which was leaving the network) rather than *SMTP* traffic.
It wasn't clear to me from the article that this was the problem. However, it's still not clear to me that this is the case. You assert that fetching some "spammy" URLs causes the listing, but the folks at CBL don't say what their listing criteria are, so I assume you have some hard evidence, and not just suspicions, that fetching honeypot URLs causes a listing?
When you get listed, you can look up the reason why and it tells you.
From my reading about Zbot, the only URLs it fetches are from C&C servers, so the CBL operators would have to have taken over a Zbot C&C server (or have access to the logs from someone who has gained control of a C&C server).
I believe (and I'm not altogether clear whether this is accurate) that Zbot uses C&C domains that are generated programmatically based on the time of day, so CBL have managed to register some of those domains before the real bot owners and therefore set up a honeypot of C&C servers.
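To illustrate the scheme (this is a hypothetical sketch of a time-seeded domain generation algorithm, not Zbot's actual one): because the list is derived deterministically from the date, both the bot and anyone who has reverse-engineered the generator compute the same domains, which is what would let the CBL register some of them ahead of the botnet operators and run honeypots on them.

```python
# Hypothetical sketch of a time-seeded domain generation algorithm (DGA).
# Not Zbot's real algorithm; the hash, count, and TLD are assumptions.
import hashlib
from datetime import date

def generate_domains(day, count=5):
    """Derive a deterministic list of candidate C&C domains for a given day.

    Anyone who knows the algorithm can pre-compute tomorrow's list and
    register the domains first (e.g. to set up a honeypot).
    """
    domains = []
    for i in range(count):
        seed = "{}-{}".format(day.isoformat(), i).encode()
        digest = hashlib.md5(seed).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

# Bot and honeypot operator independently derive identical lists:
assert generate_domains(date(2013, 1, 15)) == generate_domains(date(2013, 1, 15))
```

The defender's race is simply to register enough of each day's candidates before the bot owners do.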
We've been having significant problems with the CBL's ill-thought-out policies
I am not sure what is ill-thought-out about their policies. In both scenarios, an IP address is sending SPAM, so that IP address gets blocked.
The ill-thought-out bit is that the CBL is a *spam email* blocklist, but their heuristics cause networks that aren't sending spam email to get listed and therefore blocked. Whilst there is no argument that the networks were infected with malware, listing them on the CBL serves no useful purpose since they were of no threat to the systems that would be using the CBL (mail servers).
Previously, sharing an IP address between multiple services was a reasonable idea - there was never a reason not to do this and it conserves IP addresses. However, with the advent of the CBL using an HTTP honeypot to populate an SMTP blocklist, there simply isn't any sensible way to run a network in this configuration - it just takes one person to connect an infected laptop to the network for a short period of time, and all the email starts getting blocked.
Because of this, we are now having to standardise on running mail servers on a separate IP address - this does nothing to decrease the incidence of malware, it simply stops an infected network being listed on the CBL.
The author (you?) asks for a list of honeypot addresses, but you could be a spammer, who could use that list to delay blocking of the SPAM.
I could be a spammer, but I'm not.
The idea was that as the malware was always connecting through the transparent proxy servers, having a list of honeypot addresses or some other way of fingerprinting the request we could (1) automatically isolate the affected system, and (2) automatically inform the sysadmin so (s)he could clean up the mess. This would be a Good Thing for everyone.
As it turns out, the CBL maintainers were not cooperative (for whatever reason), so we're stuck with the aforementioned interim measure of separating services onto different IPs rather than actually resolving the root problem.
People in the business of securing networks really do need to trust each other to some extent - if they refuse to cooperate out of paranoia then the spammers have basically won already since there's no way anyone can effectively defend against spam and malware in isolation.
Also, I have not seen a SPAM bot that uses the smarthost. This doesn't mean that they don't exist, but I think that they are rare.
Indeed. That was the point I was making: the only way to send email out of the affected networks was via authenticated smarthosts. Yes, it's possible that some malware could extract the authentication credentials out of a user's mail client (if they have one configured) and use those to send spam, but that's a lot of effort to go to and I've never seen any malware do that (and if malware does do that then *everyone*'s screwed because it'll start sending spam through corporate email servers, gmail, etc.). So the networks in question were essentially immune to sending spam email, yet were still being blocked by the CBL from sending email because they had a client making spammy web requests - this makes no sense.
Hence blocking direct access to port 25 through the firewall stops most spambots from actually sending spam.
And this is exactly how the networks in question are set up, yet this does nothing to prevent the network from being listed on the CBL since the CBL's honeypot is checking for suspicious HTTP connections rather than SMTP traffic.
If the spams are relayed through your own smarthosts, then how about some kind of rate-limiting mechanism with alerts to the administrator? Quick action by the admin would prevent listing.
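A minimal sketch of such a rate limiter, assuming a process tailing the smarthost's logs feeds it (client IP, timestamp) events. The threshold, window, and log-feeding mechanism are all assumptions for illustration:

```python
# Sketch: per-client sliding-window rate limiting on an SMTP smarthost.
# When a client exceeds the limit, the caller would alert the admin
# (and could temporarily refuse relay for that client).
from collections import defaultdict, deque

LIMIT = 100    # max messages per client per window (illustrative)
WINDOW = 3600  # window length in seconds (illustrative)

class RateLimiter:
    def __init__(self):
        self.events = defaultdict(deque)  # client_ip -> message timestamps

    def record(self, client_ip, now):
        """Record one message; return True if the client is over the limit."""
        q = self.events[client_ip]
        q.append(now)
        # Drop timestamps that have aged out of the window.
        while q and q[0] < now - WINDOW:
            q.popleft()
        return len(q) > LIMIT
```

A caller would do something like `if limiter.record(ip, time.time()): alert_admin(ip)`. Of course, as noted below, this only helps if the spam is actually leaving via the smarthost in the first place.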
To reiterate, in case it wasn't clear from the blog article, there was no spam email leaving the network - port 25 is blocked, the only way out of the network for mail is via an authenticated SMTP smarthost. The issue is purely that the smarthost shares the same IP address as the web proxy and the CBL honeypot looks for *HTTP* traffic (which was leaving the network) rather than *SMTP* traffic.
That depends on how far you're willing to let Spamhaus validate actual positives. It has to go both ways.
We've been having significant problems with the CBL's ill-thought-out policies (and Spamhaus imports data from the CBL)...