McAfee Blames Open Source for Botnets
v3xt0r writes "It seems that 'the Open Source Development Model' is to be blamed for the recent increase in botnet development. 'We're not taking aim at the open-source movement; we're talking about the full-disclosure model and how that effectively serves malware development,' the spokesman for McAfee says. Why not just blame the IRC Protocol? Or simply admit that Proprietary vendors cannot keep pace with the Open Source Model?"
What? (Score:5, Insightful)
Wow, I've seen a lot of commercial vendors doing that in recent years too; maybe they're all suspect.
Full Disclosure Vs Secrets (Score:5, Insightful)
But what model would you blame for the hundreds of PC viruses that devastated home and corporate computers from the 90's up to today? I think the exploits they relied upon were simple coding flaws, insecure type checking, or buffer overflows: plain poor coding kept as a secret.
So, in light of what causes the malware, would I rather the code be fully disclosed or instead guess that there's probably no major exploit possible? I'd probably go with the former considering the sheer number of viruses based on the latter and the fact that it's the exploits based on proprietary code that often do the most severe damage to society.
I would like to ask McAfee what they would think if a competitor found a virus and figured out how to fix it but couldn't tell McAfee that information because it would be considered disclosure. That would be the real irony here. Sites that host viruses and describe/publish them are often very useful sources for people looking to rid them from their computers or even how to avoid exploits in the future.
This article is entitled "Hackers Learn from Open Source," but they only learn as much as the researchers and patchers do. I would rather the community be progressing towards solid, impenetrable code than have guarded secrets that keep everyone under a thin veil of security. Because if those secrets are ever discovered by the wrong people, we will not know about them and we'll essentially be caught with our pants down. I'd rather have every programmer know the pitfalls of coding than have thousands of applications deployed worldwide, all waiting for one hacker to stumble upon a secret.
You really have to question McAfee's motives here in their Sage magazine.
Re:What? (Score:2, Insightful)
They don't explain how the alternative is better (Score:5, Insightful)
Isn't it better to release info so people can do something about it? Network admins can use it to help block the attacks, or disable the vulnerable software. Users can stop using it. And people can even make their own patches, or use the shared knowledge to look for similar flaws in other software.
We have seen this happen. Can anyone provide a good alternative? McAfee certainly can't.
Schools and colleges are evil! (Score:5, Insightful)
Well... (Score:4, Insightful)
Because McAfee has an ulterior motive and wants to discredit the competition.
Will there be anything else?
Full disclosure != open source (Score:5, Insightful)
1. The open source part. Which doesn't contain any kind of anti-OSS slant. It just says that people now have a lot of F/OSS tools to manage their files and whatnot.
2. The part about full disclosure. Where they basically whine that they'd like to have what we all call "security by obscurity." Basically McAfee would like a world where researchers keep a lot more stuff secret, because supposedly being public about it helps evil hackers. Which is as stupid as it gets, yes, but it also has nothing to do with OSS at this point.
So why the fanboy slant in the summary?
Re:They're missing the real culprit. (Score:1, Insightful)
Most IT workers blame McAfee for Current Viruses (Score:5, Insightful)
Headline is a Troll (Score:5, Insightful)
Given that the summary itself says that this is not about the open-source development model, I've got to conclude that the headline is a troll. You can apply the full-disclosure model of security notification to any software, open or closed.
This is about whether the finders of security vulnerabilities give the vendor a grace period to fix the problem before disclosing the vulnerability to the general public. It has nothing to do with open source.
What he said. (Score:2, Insightful)
Linux is evil, Windows is good, proprietary blah blah blah. The biggest shock to me is that anyone has the balls to point to open source and say "YOUR development model is responsible for this mess," especially considering the way Windows ships by default (making all initial users members of Administrators). I'm still reeling from hearing McAfee (or someone officially affiliated) say something to the effect of "Your open code and development is killing us!"
You have to consider the fact that some tools, while they can aid those with ill will, serve mostly to benefit. Take nmap, for example. Some script kiddie can use it to scope out their target. On the other hand, a tech can use it to check for open ports on their own systems to prevent those kinds of things. These are useful tools, but because of their power, they could also be misused in the wrong hands. You could say the same thing for guns. Innocent people are killed with guns (among other things, such as knives and harsh language). Should a bullet-proof vest manufacturer come out and say, "We're not taking aim at the gun manufacturers; we're talking about the ability to propel small things really fast and how that effectively serves criminals"?
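To illustrate the defensive use described above, here is a minimal sketch of checking your own host for open TCP ports; a toy stand-in for what nmap does far better. The host address and port list are assumptions for illustration only, and this should only ever be pointed at machines you own.

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds,
            # i.e. something is listening on that port.
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

if __name__ == "__main__":
    # Check a few well-known service ports on the local machine only.
    print(open_ports("127.0.0.1", [22, 80, 443, 3306]))
```

A tech running this periodically against their own boxes would spot an unexpected listening service (say, a bot's control port) the same way a script kiddie would spot a target, which is exactly the dual-use point being made.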
From the sounds of it, they're blaming the OSS model simply because malware authors use it. Although I could have completely missed what TFA was saying; I'm really tired and I keep reading each paragraph over and over and I just can't grok it.
Re:What? (Score:5, Insightful)
And of course, we have to suffer another dig at the full disclosure doctrine. But the part they left out was how they plan to get the black hats not to share information with each other. Full disclosure just assures that the white hats all have the same information and that the battle is fought on pure technology lines and not on who is better at hiding things (a battle the good guys would lose).
McAfee Afraid of Open Dialog? (Score:3, Insightful)
If enough developers 'pool' into working on it, and an open dialog of faults and vulnerabilities continues, could they find themselves out of a job from an Open Source solution?
(especially as they are about to be challenged by MS Defender, which could also benefit from open dialogue to augment a shallower background in the field?)
Improves all development (Score:3, Insightful)
The open source, full-disclosure model improves the pace of ALL software development. All means all, including software development for "bad" purposes.
Full Disclosure Lowers the Barriers to Entry (Score:4, Insightful)
Re:They do have a point (Score:3, Insightful)
In an ideal world, a security researcher will discover a flaw and do the following:
1) Create code that reliably exercises the flaw that can be used to verify that the problem really exists and that the fix (when it is finished by the vendor/OSS group) works. You can call this the "exploit code" if you want; it is necessary for someone to create it so that the fix in step 3 below can be tested.
2) Notify the vendor/group of the hole and pass along the exploit code.
3) The vendor/group evaluates the problem, assigns a reasonable fix schedule to it, and eventually a fix is produced, verified to work against the exploit code, and distributed to the world.
4) The hole is then announced on a security bulletin *along with the exploit code* to notify customers/users that might not have updated already that they should do so at their earliest convenience, and to provide customers/users (many of whom are knowledgeable programmers) the same tool given to the vendors to verify that the hole is plugged in their systems.
This is a reasonable system. The whitehats try to do it all the time, and for many OSS projects it works out just this way. Blackhats OTOH do only #1 and then distribute the exploit code only to other blackhats, so that when they use a flaw both vendors and customers/users are taken unawares.
Unfortunately, many closed-source vendors break the whitehat process between steps #2 and #3. They are given notification and exploit code, but rather than prioritize a fix they decide that no fix is necessary, because their local astrologer told them that only whitehats find flaws. After enough time with no action, the whitehats MUST move on to #4 so that users can isolate the systems with the hole in order to preserve the rest of their network.
In your house analogy, this is equivalent to notifying a neighborhood that the developer who built many of their houses made a serious mistake in the wiring such that any house at any time might burn to the ground, and that their insurance will not cover it, and the developer has decided not to pay for a fix, and the local fire department has announced that they will not intervene to stop any fires that start due to a wiring fault.
A device is available that can quickly determine which houses are at risk. The developer is spending twice as much money as needed to fix the wiring on ads in the local newspaper exhorting those citizens who have these "bad house detector" devices to destroy them rather than share them with their neighbors so that they can hire their own electricians.
The process YOU want is already being followed by the majority of legitimate whitehats. The process McAfee wants leaves everyone screwed.
Re:What? (Score:1, Insightful)
Malware has a fairly small but very dedicated, extremely technical audience, as well as a larger following of much more amateur users who may (to greater or lesser degrees) still be able to compile the source or merge others' patches together into new combinations. And, of course, their adversaries, the AV companies, who are also keenly interested in the code (to recognise it, detect it, and defend against it).
The open-source model, due to its somewhat decentralised nature, the extreme ease of forks, and everyone having the source who wants it, is rather good at rescuing projects with many active users which have been shut down for legal reasons. (Someone else just forks it and takes over development.)
It's also good for someone who wants to spin off an experimental fork with a new feature for development - which constantly allows new, interested people to innovate wild, bluesky features and improve on existing ones - and also for someone who wants to merge together those forks to create new hybrids.
It's those two things that have made it so desirable for many controversial projects. The authors found that by publishing the source, there's less point tracking the original authors down and, uh, convincing them to stop, because if there is enough interest, someone else will just pick it up where they left off.
Malware just happens to be one of the fields benefitting in this particular way from open-source development models - so also are other (legally and/or morally) controversial projects that have small, loosely-linked development teams and are at high risk of being shut down by threats (for example, DRM circumvention software, and peer-to-peer communication and publishing software).
Traditionally, malware authors were extremely secretive about their code, the better to hide it from AV signatures for as long as possible. This hampered cooperation and research, so they formed small trusted groups with little contact between them. But for those outside those groups, with wild ideas of their own, the task of creating an entire bot was daunting.
The idea that seems to have won out in the end is to publish the code widely and encourage forks, so that the bot project can't be killed just by tracking down the one principal author; to let those with new ideas experiment freely with forks of the bot code, producing a burst of rapid mutations the AVs find hard to keep up with; and to combine that with metamorphic wrappers to make an AV's job as difficult as possible. Because there is now an abundance of bot code to build from, plus easily available shellcode and exploits in forms that can be virtually plugged in, creating a mutation has become much less daunting, so there are, quite naturally, more of them.
It seems to be working for them, you must admit; the number and frequency of different worms you get in your email box daily is a rather visible metric of their success. *sigh*