
MS Security VP Mike Nash Replies 464

Posted by Roblimo
from the we're-doing-the-best-we-can-no-matter-what-others-say dept.
You posted a lot of great questions for Mike Nash last week, and he put a lot of time into answering them. As promised, his answers were not laundered by PR people, a practice that is all too common in "executive" interviews at any company. Still, he boosts Microsoft, as you'd expect, since he's a VP there. And, going along with that, he says he likes Microsoft products better than competing ones. But this is still a great look into the way Microsoft views security problems with its products, and what the company is trying to do about them.
(1)
What has changed?
by suso


Besides the same old PR-scripted answers that corporations like to give in order to obscure or downplay what is really going on, what assurance can you give us that Microsoft is more focused on security and that Vista is going to be any different from the previous incarnations of Windows? What proof can you give us? Information like "we have a new team doing X" or "our process for reviewing changes has gone to X" would be helpful in answering this question. What else have you seen in the way MS is developing Vista that is different from how you've developed previous products?

Nash: We have been thinking about security at Microsoft for some time. I would say it started back when we decided to do Windows NT back in the early 90s. There has been a big change in the way we approach security from a quality point of view that started in much more depth when Bill wrote the Trustworthy Computing Memo back in 2002.

What happened then was that we decided we were going to get much more focused on security since it was such a huge issue for customers. Remember, we were right on the heels of Code Red and Nimda and we had to do something. For the .NET Framework 1.0, Visual Studio 2002, ASP .NET and for Windows Server 2003, it started with a security push where we took the teams offline relatively late in the product cycle, taught the teams what it meant to write secure code, had them do threat models and code reviews, etc.

What is interesting is how much of this had to do with educating our engineers on what it means to write secure code and changing the culture. I will give you examples of both.

Two or three years ago, we had a vulnerability in Windows Media Player where an attacker could send out a piece of media content with a malformed copyright field and, because of a flaw in the code that parsed the copyright, overrun a buffer and run arbitrary code on the machine. So the question was, should the developer of Windows Media Player have thought about that kind of attack and taken steps to prevent it? Remember, we want the people writing the Media Player to make the world's best media player. The answer has to be YES! While you could have a tiger team working its way around the organization reviewing all of the code in every product that we ship, that doesn't scale. You could never have enough dedicated security expertise, and if those reviewers made changes they might break something, since they couldn't really understand the details of the code they were making more secure. This works for final reviews, but a final review needs to be like the guard rails on the side of the road -- they are a great last resort, but we need better drivers! So we trained everyone. A key thing here is that we also learn new things over time (better tools, new threat vectors, and new scenarios), so the training has to be continuously updated.
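
The overrun Nash describes is the classic unchecked-copy bug. The sketch below is hypothetical (the actual Media Player code is not public): a fixed-size buffer receives an attacker-controlled field, and the fix is simply to verify the field fits before copying it.

```c
#include <string.h>
#include <stdbool.h>

#define COPYRIGHT_MAX 64

/* Hypothetical sketch of the bug class described above.
 * Unsafe pattern (what a flawed parser effectively does):
 *     char buf[COPYRIGHT_MAX];
 *     strcpy(buf, field);      // no bound: a long field overruns buf
 *
 * Safe variant: confirm the field terminates within bounds before copying,
 * and reject oversized (malformed) input outright. */
bool parse_copyright(const char *field, char *out, size_t out_size)
{
    /* look for the terminator within the first out_size bytes only */
    const char *nul = memchr(field, '\0', out_size);
    if (nul == NULL)
        return false;                      /* too long: refuse to parse */
    size_t len = (size_t)(nul - field);
    memcpy(out, field, len + 1);           /* copy including the NUL */
    return true;
}
```

The point is the one Nash makes: the parser's author, not a separate tiger team, has to treat a malformed field as an expected input.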

Culture is a huge issue as well. Microsoft is a company that is very focused on technology, very focused on business, and very focused on the competition. Getting groups to put security high in their list of priorities was a super hard thing to change at Microsoft. Four years ago, I frequently had conversations with teams who would tell me that they couldn't go through the security review process because they had competitive pressures or had made a commitment to partners to ship at a certain time. Today, generally, people get it. It's now clear to us that security is a competitive and business priority. While I still see escalations from people who want exceptions, the numbers are pretty low. A big change from four years ago is that when I say no, I get great support from above me in the organization.

A key thing that came out of our experience with Blaster in 2003 was something called the Security Development Lifecycle (SDL). Really the SDL is the formalization of work we were doing previously. Remember Blaster exploited a vulnerability in Windows Server 2003 -- a product that had been through a security push (it also affected Windows XP). When we did the post mortem on how the vulnerability happened, what we realized was that while there were huge improvements in the quality of our code between Windows 2000 and Windows Server 2003, there was still more work to do. In particular, we needed to have: 1) a documented, repeatable process, 2) internal education so that everyone involved in the product release process knew what to do, and 3) a checkpoint in the release process to make sure that this process was followed.

A key thing about the SDL is that we basically have to update it every six months, because the threat landscape changes, the scenarios we support grow, and we learn more.

For Windows Vista, the key things that will make it great are a combination of the most rigorous execution of the SDL to date -- more training, newer tools, threat modeling, more comprehensive review of file parsers, review of code to identify and remove use of banned (risky) APIs and a whole lot of penetration testing.
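
The "banned API" review Nash mentions targets calls like strcpy and sprintf that have no length bound. Microsoft's actual replacements (the strsafe.h/StringCch family) are Windows-specific; this portable sketch shows the same idea using standard snprintf, with truncation treated as an error rather than silently overrunning or clipping data.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for a banned-API substitution: an unbounded
 *     sprintf(out, "%s - %s", artist, track);
 * becomes a bounded snprintf whose return value is checked.
 * Returns the number of characters written, or -1 if the result
 * would not fit in out_size. */
int format_title(char *out, size_t out_size, const char *artist, const char *track)
{
    int n = snprintf(out, out_size, "%s - %s", artist, track);
    if (n < 0 || (size_t)n >= out_size)
        return -1;      /* output would have been truncated: treat as error */
    return n;
}
```

Mechanical rules like this are what make a review scale: a tool can flag every banned call, and the replacement pattern is the same everywhere.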

As a part of this, a lot of work is also being done to change the default configuration to make it safer and more secure. We have done a lot of work to make the system work well for standard users (so that not everyone has to be an admin), but for users who still need or want to be logged on as an admin on their system, we make it clear to them when they are about to do something that requires administrator privilege. The user can configure their system to either ask them if they want to elevate, or ask for a password when the system tries to elevate them. We have also gone through all of the system services in Vista to see which ones have admin privilege, verify which ones really need it, and for the ones that don't, remove it.
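
The elevation flow described above can be modeled as a small state machine (all names here are illustrative, not the real Windows token APIs): being a member of the administrators group is not enough by itself; an admin operation also requires that the user explicitly approved an elevation step, the analogue of the consent or password prompt.

```c
#include <stdbool.h>

/* Toy model of admin elevation. */
typedef struct {
    bool is_admin;   /* account is in the administrators group */
    bool elevated;   /* user approved the elevation prompt     */
} token_t;

/* Group membership alone does not authorize an admin operation. */
bool can_do_admin_op(const token_t *t)
{
    return t->is_admin && t->elevated;
}

/* Models the consent/credential prompt: elevation is granted only to an
 * admin account whose user explicitly consented. */
bool request_elevation(token_t *t, bool user_consented)
{
    if (t->is_admin && user_consented)
        t->elevated = true;
    return t->elevated;
}
```

The design choice worth noting is that the default path runs without the elevated bit set, so everyday activity happens at standard-user privilege even on admin accounts.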

For Windows Vista we enhanced the engineering process with some new checkpoints in the engineering cycle. One such checkpoint requires that every team developing a system service in Vista go through the process of using a new Vista least-privilege operational model. A team of internal experts had to sign-off on the plan for each service, and in a significant number of cases, teams avoided creating a service altogether when an alternate approach was possible.

While quality is an important approach to improving security and safety, it's just one part of it. There are also some key features we have added to Windows Vista to make it safer and more secure. For example, we have taken the anti-spyware technology that we acquired from GIANT Company Software, improved it and integrated it into the operating system in something called Windows Defender. While the anti-malware technology will also be available to users who have licensed copies of Windows 2000 and Windows XP, for Vista the integration is pretty slick, which makes it much easier for customers to be protected. For Vista, we also improved the firewall built into the operating system. It's bi-directional and is designed to work well with IPSec.

Given the changing landscape on the Internet, and the continued focus on the Windows platform, sadly I know there will be vulnerabilities and exploits that target Windows Vista. Invariably, as we make it much harder for people to find and exploit vulnerabilities in Windows Vista, I am certain of two things: 1) the number and severity of both vulnerabilities and exploits on Windows Vista will be reduced, making the switch to Vista compelling if ONLY for security reasons, and 2) we will continue to focus on security even after we ship Windows Vista so that the work that comes after Vista will be even better.

(2)
Security/user friendly tradeoff
by qwijibo


Is there a general policy within Microsoft to help product teams make consistent security decisions? There are frequently issues where the decision has to be made between being more secure or more user friendly.

For example, file and printer sharing defaulting to off prevents people from unknowingly sharing their resources, but requires non-technical users who do wish to set up a small network to know more about the process than in previous versions.

Nash: This is an old issue that we have made quite a bit of progress on. At Microsoft we had a long history of turning things on by default in the spirit of making users' lives easier and showing off our key features. I have to admit that, in my past, I have actually been part of the problem. As the director of product management back in 1995, I was part of the team that drove the decision to turn our web server, Internet Information Server (IIS), on by default in Windows NT Server 4.0.

What the events of the last 5-10 years have taught us (or at least taught me) is that the more you have turned on, the more attack surface area the system has and therefore the more vulnerable it is. If you assume near perfect quality or that there is no one out there trying to attack you, it might even be an ok decision. But since you can't, we need to be more selective about what things we turn on by default.

Consider the case of Code Red. That worm attacked a vulnerability in the ISAPI filter of the index server of IIS. Let's assume for a minute that you don't know or care what the ISAPI filter of the Index Server of IIS is. Even in that case, it turns out that if you turned off the Index Server in Windows 2000 SP3, that ISAPI filter was still installed. So while you might have thought that shutting down the index service made you less vulnerable, it turned out that it did not.

So coming out of the whole Code Red experience, we created the Trustworthy Computing Initiative (TwC). One of the key principles of TwC that drives the Security Development Lifecycle is the principle of Secure by Design, Secure by Default and Secure in Deployment (or what we call SD3).

The principle of Secure by Default says that unless most users are using a feature, it should be turned off by default. What we have also learned along the way (and my Code Red example shows this) is that you can't just look at the user visible features, but also need to look at the underlying services. So if the customer feature is off by default (or turned off by the user) then the underlying components that support them should also be turned off when the high level feature isn't using the service.
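
The rule in the preceding paragraph can be stated as an invariant: a supporting service must never outlive the user-visible feature it exists to serve. This minimal sketch (component names are illustrative, not real Windows pieces) encodes the Code Red lesson, where the ISAPI filter stayed active after the Index Server was switched off.

```c
#include <stdbool.h>

/* "Secure by Default" as an invariant: the underlying service tracks the
 * high-level feature, and both start disabled. */
typedef struct {
    bool feature_enabled;   /* user-visible feature, e.g. indexing     */
    bool service_enabled;   /* underlying component, e.g. an ISAPI filter */
} feature_t;

feature_t feature_default(void)
{
    return (feature_t){ false, false };   /* everything off out of the box */
}

void feature_set(feature_t *f, bool on)
{
    f->feature_enabled = on;
    f->service_enabled = on;   /* service can never outlive the feature */
}
```

The Code Red failure mode was exactly a violation of this invariant: feature_enabled false, service_enabled still true.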

But you make a great point about complexity. If we turn more things off by default, we need to make it easier for users to turn things on when they want to use them. For example, in Windows Server 2003 SP1, we added something called the Security Configuration Wizard that is designed to help users configure their systems with as much turned off as possible. The benefit of turning things off by default is twofold: 1) it protects the individual system from being attacked if a vulnerability exists in the feature, because the feature is turned off by default, and 2) it also protects the population of systems, because a worm or virus can't assume that the feature is on, and therefore the systems aren't broadly exploitable through the vulnerability.

I should note that while we usually think about what features to turn off, Secure by Default is also about what features to turn on. A great example of this is the firewall in Windows XP. Back when we first shipped Windows XP in 2001, we included a firewall, but turned it off by default. Why? Because many of the influential users we spoke to said that they had a firewall and didn't want ours turned on. They also said that they had too many apps that would be negatively affected by having a firewall on by default. That was a good answer for the small percentage of users who had their own firewall, but for most customers it was a mistake. In hindsight, consider that if we had had the firewall turned on between October 2001 and August 2004 (when we shipped Windows XP SP2 with the firewall on by default), Slammer and Blaster might not have been the issues for Windows XP customers that they were. The same was true with Zotob. By the way, customers who have a third-party firewall, or OEMs that install a third-party firewall, can always turn ours off.

The Windows Security Center, first introduced in Windows XP SP2, is designed to make it easy for end-users to verify that the right security features are turned on and configured properly. We're going to make it even better in Windows Vista.

This is as much about culture (reminding people of the goal of safety and security being job #1) as it is about process (making sure that the default state of the feature is considered in the context of what most people need).

(3)
Top priority for security in 2006
by Anonymous Coward


Given that security is a major topic on IT managers' minds these days, with security flaws and patches practically making the front page of some publications, what do you feel is going to be the main focus for security in 2006, for yourself and the industry as a whole?

Nash: The answer for me and for Microsoft is simple. The main focus for security in 2006 is nailing the security quality and features for Windows Vista and Windows Longhorn Server. Don't get me wrong, this doesn't mean that we don't care about the security of older products or products besides Windows, but given that Windows Vista and Windows Longhorn Server are going to be the most significant releases of Windows in the last five years or so, we know that they are going to be used broadly by a large set of users for some time--so getting it right is critical.

As I noted above, we have the opportunity to apply the best practices in secure design, threat models, code quality, default configuration and penetration testing and more rigor than we have ever had in the past. We have also added some new features like a bi-directional firewall and Windows Defender to make the system safer and more secure. As the project becomes feature complete, we must verify that the system is secure and addresses the issues that are raised in testing.

There is also real work here for the industry as well. Some of this has to do with making sure that applications and security products work with Windows Vista. New applications need to work well for users who have standard (non-admin) user accounts. At the same time, we need to make sure that security products work well on Windows Vista. For example, no one is going to move to Windows Vista unless they have great anti-virus software that works well on it.

My other goal for the industry is that third-party applications and internally developed applications adopt our Security Development Lifecycle. Here's why: As we improve the quality of Windows, we're making it harder for people to find vulnerabilities and therefore harder to write exploits. As a result, there will be a natural tendency for security researchers and exploit writers to move up the stack. We are already seeing this. As we have learned, the only approach that scales here starts with a well-defined process, taught through broad education and verified prior to shipping to drive accountability. The good news here is that we have documented our process pretty clearly and made it easy to learn. Check out http://msdn.microsoft.com/security to learn more about it.

For customers, the top priority has to be defining and executing their security plan. I spend a ton of time with customers, many of whom have done a threat analysis of their environment and built a security plan. I am still surprised by the number of customers who have a plan but have not had a chance to execute it. The good news is that most have executed their security plan -- so the top goal for them is to reassess their environment and make sure that they are responding to new threats. We've also created a great set of tools to help customers (Developers, IT Administrators and End-Users) be more secure on our platform.

While we want customers to be evaluating Windows Vista, it's super important that business customers in particular, who have NOT yet deployed Windows XP SP2, think seriously about deploying it. While a large number of enterprise customers have deployed Windows XP SP2, many still haven't. While I get that not every desktop will get upgraded to Windows XP SP2 between now and Windows Vista, I think it's critical that laptops and Internet facing desktops move to SP2.

(4)
Outside influences on security
by kalpol


Has open-source software such as Linux influenced the way you think about security in Windows, and if so, how?

Nash: The open source approach has influenced the way I think about security, but I am not sure it's in the way you would have expected. The theory that more eyes make software more secure is a premise that drove some anti-Microsoft PR back in late 2002, which caused my team and me to respond. My first step was to dig in and try to understand the open source process to see what I was missing.

I learned a few things. The first thing I learned was that while having lots of people look at code sometimes found issues, none of this mattered if there wasn't a good process to close issues. I spent some time reading Linux websites that contained reviews of Linux code. I was surprised by two things: 1) the lack of consistency in the way that software was reviewed, and 2) the lack of accountability to verify that things that were found actually got resolved. Then Blaster hit 10 months later in 2003, and I realized that, like Linux, we could also suffer from a lack of closure. So we created the Security Development Lifecycle, of which the key feature was that it drove consistency and accountability. Here is the background story...

After Blaster happened, I wanted to find out who was responsible for the buffer overflow that was exploited and hold the individual accountable. But once we looked into it, we realized that there was not a documented process that the developer was supposed to follow that would have prevented the mistake, nor did we have a set of procedures for our developers to verify that a secure development process was utilized. The Security Development Lifecycle is basically the institutionalization of these very things: a documented repeatable process, clear education and accountability. What I learned here was that because we have the ability to establish processes and reinforce them at every level of management, we had an opportunity to make our software do something that the open source approach couldn't replicate.

The second thing I learned about security from the open source approach was about serviceability. One of the things that proponents of the open source approach always talk about is the fact that with open source you don't have to wait for an official patch, since you can download the code, recompile it and create your own fix. I can't imagine this working at scale, since most users could never do this. For the customers who can manage to knit their own patches, the problem is that some distributions sometimes update a component with new fixes but don't always include fixes that more sophisticated users may have made on their own. This effectively undoes the home-built patch.

The key learning for me was fourfold. First, it is super important that we have our updates available on all supported versions and all supported languages at the same time. Second, we need to do whatever we can to make sure that our updates are available when vulnerabilities are publicly disclosed. Responsible disclosure helps us a lot, since people can confidentially report things to us in return for acknowledgement when we do issue the update. Third, we must have great quality when we do issue the updates. If our updates break things, then people won't trust them. In my mind, the definition of our products is the product that we ship PLUS the latest service pack PLUS any security updates we shipped after the latest service pack. If we don't test our security updates in a broad set of scenarios, then we are likely to break something.

Finally (fourth), it's important that we have tools to simplify the process of deploying updates, since that reduces the barriers to deploying the updates and increases the likelihood that customers are up to date. That is why we have invested in tools that make patch deployment much more straightforward, like Windows Update, Microsoft Update, Windows Server Update Services and Systems Management Server.

(5)
What is the basic approach to Microsoft security?
by kickabear


Does Microsoft lean more towards rigidly enforced coding standards as a way to prevent exploitable bugs, or does the company focus more on brute-force bug detection during testing?

I know the easy answer is to say "both, of course" but a 50/50 split is unlikely. So, does testing take the backseat, or does the code?

Nash: My short answer is actually a third choice, which is better design. This starts with really understanding the security threat that a feature might introduce to the system and making sure that the feature or component is designed to reduce the risk. Then we go to implementation which, as you note, is partially about better standards, which must be taught through education but reinforced with tools to verify code quality wherever possible.

We also spend a lot of time using a combination of ethical penetration testing and interface testing. While bug detection is critical, it really is a last resort -- in some sense, the guard rails on the road of software engineering. Just like driving your car on a winding road, safety starts with better driver (in this case, developer) education.

All of that said, if there is one thing I have learned in the last four years in this job, it is that there are no silver bullets in security. Instead, we make progress through a combination of investments.

(6)
Why add DRM? Also, why not decouple IE?
by Bob_Villa


Why are you adding in DRM controls to Vista that regular users are not going to want? It may come in handy for corporations wanting to control their documents, but I can't see how regular users would knowingly want a product that restricts their access to their documents or files.

Also, I think you could dramatically improve security by decoupling Internet Explorer from Windows. Have it be a separate program similar to Opera, FireFox, Safari, etc... Is there really a valid reason that Windows Explorer has to be driven by Internet Explorer?

Nash: First, a point of clarification. I assume in this case, you are talking about the Rights Management Services (RMS) client that is now integrated into Windows Vista and not the DRM technology that is used to protect media content that has been built into Windows for some time. In the case of RMS, you are right that corporations see value in protecting their information and controlling the usage of that information. A key piece of feedback we got from customers using the current version of RMS was that setting it up was hard, so we integrated the RMS client into Windows Vista. That said, some customers may not use it. You would only use it if an RMS-enabled application such as Office was installed and a user opted in to use that feature in Office.

We also believe that, over time, regular users will want to protect their own information. For example, in the future, home users may want to protect and control the usage of information such as lists of their friends, photos, banking account information and other personal data.

In terms of your question around Internet Explorer, there are two real aspects of this: 1) the platform implications of having IE in Windows, and 2) the user experiences that are possible with having IE in Windows.

From a platform point of view, decoupling IE would break a lot of things. There are many applications that depend on IE for rendering HTML and for accessing the Internet. Think about email applications, Internet-aware clients like the AOL Explorer or even Microsoft Money that use IE to render HTML in the application. Not only would this break a lot of applications, but it would also put a huge burden on developers who would now have to write their own HTML rendering capability.

From an experience point of view, a key goal for Windows has been to integrate the local experience and the remote (Internet) experience from a user interface perspective. Integrating the web browser into the operating system was a key part of delivering that experience for customers. The area where we can do much better is making sure that the kinds of things that can be done by a remote site are less than what can be done locally--this is especially true for sites that you don't know or don't trust. A key enhancement to the browser for Windows Vista is something called Protected Mode IE. The browser starts with minimal access to system and user resources. For example, when a remote site is accessed, the site will not have privileges to install software, copy files to the user's Startup folder, or hijack the settings for the browser's homepage or search provider. Of course, users can always choose to use other browsers and even have other browsers set as the default on the machine.
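
The low-rights idea behind Protected Mode can be sketched as a permission set that shrinks when content is remote and untrusted. The permission names below are illustrative, not the actual Protected Mode design: the point is simply that remote content keeps rendering rights but loses the ability to touch the Startup folder or browser settings.

```c
#include <stdbool.h>

/* Illustrative permission bits for a low-rights browsing model. */
enum {
    PERM_RENDER          = 1u << 0,   /* display content                  */
    PERM_WRITE_STARTUP   = 1u << 1,   /* write to the Startup folder      */
    PERM_CHANGE_HOMEPAGE = 1u << 2    /* change homepage/search settings  */
};

/* Local, trusted content keeps full rights; remote untrusted content is
 * reduced to render-only -- the "less than what can be done locally" rule. */
unsigned perms_for(bool remote_untrusted)
{
    return remote_untrusted
        ? PERM_RENDER
        : (PERM_RENDER | PERM_WRITE_STARTUP | PERM_CHANGE_HOMEPAGE);
}

bool allowed(unsigned perms, unsigned op)
{
    return (perms & op) != 0;
}
```

In the real design the enforcement comes from the operating system (the page can't simply ask for more bits), but the deny-by-default shape is the same.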

I do believe that the progress we are making with IE in Windows Vista will address many of the concerns people have with IE security today.

(7)
Do you ever spend time with "average users"?
by Caspian


Time and again, I've seen average end-users-- grandmothers, "soccer mom" types, businessmen-- whose computers are positively clogged to the gills with spyware, viruses, and other sorts of malware, the overwhelming majority of which they were infected with via the exploitation of security flaws in Microsoft software. I'm often tasked with disinfecting their computers.

How often do you (and the members of your team) spend time with average end-users-- not just in large corporate settings but in small businesses and (just as importantly) in real-world home settings? I believe that if you would spend time with Joe Average and see just how badly his computer's performance (not to mention his personal privacy and the integrity of his data) is suffering from the exploitation of certain bugs and design decisions (e.g. the fact that most end-users run with Administrator privileges) in Microsoft software, it would cause a significant shift in Microsoft's security strategy.

No matter how often $LATEST_WINDOWS_VERSION is touted as more secure than its predecessors, I still keep getting called to average homes to remove countless items of spyware which infected Windows systems via holes (and/or poor design decisions, e.g. the handling of ActiveX controls and the abilities they can have to alter files on the system) in Internet Explorer, and to this day (despite the wide use of antivirus software) most end-user systems I examine do contain at least a few viruses (which entered the system via Microsoft Outlook).

What are you doing to secure Joe Average's PC? Do you have any interaction with average end-users? And if not, why not?

Nash: I personally spend a ton of time with end-users -- often friends and family, but also people that I meet through my job at Microsoft. I have a wife, three brothers, a sister, five sisters-in-law, three brothers-in-law, two parents, one mother-in-law, a father-in-law, one uncle, two aunts, one living grandmother, three kids (although they are all too young to use a PC), five nephews and seven nieces, so I get a lot of calls from family members asking for tech support. It's actually amazing how much their feedback has driven decisions in our security strategy. I will give you two examples:

Right after Blaster happened, my uncle Ken called me to see how I was doing with everything going on with the event. My uncle is a little strange (although he is my only uncle, so I really don't have anything to compare him to) and he sometimes calls me "nephew." He said, "Nephew, what should I do about this latest Blaster thing?" I told him that he should turn on Automatic Updates and turn on his firewall. When he asked me how to do it, I talked him through the dialog boxes and we got him set up. In this process, I learned two important things. The first was that the process of making these changes was a pain in the neck. The second was that we really should have changed the default configuration for Windows Update.

When we shipped Windows XP Gold in 2001, we introduced Windows Update for the first time. At the time there were two options that the user had to choose from when they installed Windows: 1) tell me when updates are available, or 2) download the updates and tell me that they are ready to install (the default). When we shipped Windows XP SP1 about a year later, we added a third option which was to download the updates and install them. The problem was that when we added this third option (the best choice for most people), we left the second option (download and tell me) as the default. I am not sure why we did this, but my guess is that no one thought it through. So what did my experience with uncle Ken influence? A few things. First, we created a webpage at www.microsoft.com/pypc that included a little program that turned on your firewall, and helped you turn on the third option for Automatic Updates. We also changed the default setting for Automatic Updates in Windows XP SP2.
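
The three update options Nash walks through reduce to a small model; the fix he describes amounts to changing which option is the default. The enum names below are illustrative, not Microsoft's actual configuration values.

```c
#include <stdbool.h>

/* The three Automatic Updates choices described above. */
typedef enum {
    AU_NOTIFY_ONLY,        /* tell me when updates are available            */
    AU_DOWNLOAD_NOTIFY,    /* download, then tell me (pre-SP2 default)      */
    AU_DOWNLOAD_INSTALL    /* download and install (best for most people)   */
} au_option_t;

/* The SP2 change: the safest option, which already existed, finally
 * became the default instead of an opt-in. */
au_option_t au_default(bool xp_sp2_or_later)
{
    return xp_sp2_or_later ? AU_DOWNLOAD_INSTALL : AU_DOWNLOAD_NOTIFY;
}
```

As Nash notes, the bug wasn't a missing feature; it was a default that nobody had thought through when the better option was added.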

My second story is about my grandmother, Estelle (I am 42 years old and not too proud to tell you that I call her Nanny). Nanny got her first PC in 1992 soon after I came to Microsoft. In 1995 she got her second PC -- I was excited about Windows 95 and so was she. In late 2001, I sent a mail to all of my family members telling them that I would only help them with their PC if they were running Windows XP, so my grandmother ran out and bought an XP machine.

In February of 2004 I was down visiting Nanny in Florida. I was on my way home from a business trip, so I was only there for about a day. When I got to her house she fed me breakfast, looked at the latest pictures of her great-grandsons and then said to me that she needed some help with her PC. When I powered the thing on, it was clear that something was wrong. The machine was very slow and you could see the icons on her desk being drawn pixel by pixel.

It turns out that her machine was massively infected by spyware. She had gotten some mail offering her $10 to take an online survey which she had taken seven times. Without realizing it, each time she completed the survey and tried to claim her $10, she had agreed to the terms of a software license and downloaded spyware on her machine. She had effectively sold her $900 PC for 70 bucks. It took me about three hours to get her machine running again. I went back about a month later and installed Windows XP SP2 (beta at the time) on her machine, but what I realized was that we had a much bigger problem with spyware.

With that visit came the vision for Microsoft's anti-spyware strategy and our focus on delivering an anti-spyware solution.

Today, I travel a bit more prepared for situations like the one I encountered at Nanny's house. I have a 512MB memory stick in my briefcase that includes a copy of Service Pack 2 for Windows XP, the latest beta of Windows AntiSpyware and the current month's release of the Malicious Software Removal Tool.

(8)
Windows updates to unregistered machines?
by Spy der Mann


Dear Microsoft Security VP:

I know a person who doesn't have his copy of Windows registered. His PC got infested by spyware, so my deduction is that his computer was probably used to send SPAM, spread viruses and whatnot. When he called me for tech support, I told him to download the Microsoft Anti-Spyware from Windows Update, but his answer was that it required a registered copy.

My question is this: If Windows updates make the Internet SAFER from hackers, spyware and viruses, why limit them to registered copies of Windows? (IMHO this is analogous to not giving the vaccine of the bird flu to illegal aliens)

What do you plan to do about this?

Nash: This is a great question and one that we struggled with as we established the policy. First, I should clarify one thing. While the Windows AntiSpyware offering is only available to users of licensed copies of Windows, we do make our high-priority security updates available to unlicensed users of Windows, primarily to prevent unlicensed Windows systems from posing a threat to the Internet if they get infected. We do, however, remind unlicensed users of Windows to get genuine.

At the end of the day, Microsoft's first commitment is to protect our paying customers. We made a decision last January to make Windows AntiSpyware technology available to licensed Windows customers at no charge. When we first acquired GIANT Company Software, the plan was to make scanning for spyware a free service on Microsoft.com, but charge for the technology that blocks spyware. The theory was that frequent scanning was a good substitute for people who didn't want to pay for the blocking capabilities. Within a few weeks of running the beta of the anti-spyware technology we realized that this premise wasn't valid: while it's easy to detect and remove the primary spyware infection, spyware often brings more spyware with it, and detecting and removing the secondary and tertiary infections is much harder. So we made the decision to include this blocking capability in all licensed copies of Windows.

So the question is, why not protect non-licensed users from spyware? The short answer is that spyware primarily affects the machine that has the infection. Part of the value of owning a licensed copy of Windows is that you are protected from spyware. If you don't pay for your copy of Windows, you aren't protected.

It's hard for me to feel too bad for the person who you know who doesn't have a licensed copy of Windows and is infected. They are using stolen software. I have heard the arguments that Microsoft has lots of money and shouldn't care if people are using our software illegally. I don't buy it (no pun intended). You could make this argument in many other cases, but we don't tolerate people eating a meal at a restaurant and then not paying, or stealing a candy bar from a convenience store or taking a TV from an electronics store. In this case, your acquaintance wants the free meal, but can't understand why we don't throw in dessert.

If your acquaintance installed their own pirated copy of Windows, I recommend that they get a valid copy and install it. If they got their pirated copy of Windows preinstalled on a PC, then they should report the company that sold them their PC and we will use the information to get the vendor to make things right, and will get your acquaintance a valid license in return for the information.

(9)
MSFT employee here
by Anonymous Coward


Hi, Mike,

I have just one question for you. Why do we STILL ship products with KNOWN security issues?

I'll even tell you how it works in the trenches. Folks build the product. At the end of it all a "Security Push" gets declared. For two to three weeks people pretend they care about security by coming up with potential security issues and assigning DREAD+VR scores to them. Then management arbitrarily sets the "bar" below which we don't fix potential and real security issues. This bar is usually very high, sometimes at around 8, because hardly anyone has time in the schedule to fix all issues found. Now, a DREAD score of 8 means that the flaw will affect a ton of customers and could cost Microsoft significant litigation. Some very severe bugs slip under the bar just because they don't affect more than 10% of customers. Now, even this exercise is a joke, because most developers don't know what a DFD (data flow diagram) is or how to put one together.
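The triage the poster describes can be sketched in a few lines. The category names below are the standard DREAD ones (Damage, Reproducibility, Exploitability, Affected users, Discoverability), but the 0-10 scale, the averaging, the sample findings, and the bar of 8 are illustrative assumptions, not Microsoft's actual process:

```python
# Toy sketch of DREAD-style triage. Scale, averaging, and the bar value
# are assumptions for illustration only.
from statistics import mean

def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five DREAD categories (each rated 0-10 here)."""
    return mean([damage, reproducibility, exploitability,
                 affected_users, discoverability])

def triage(findings, bar=8.0):
    """Split findings into those above the bar (must fix) and the rest."""
    must_fix = [f for f in findings if dread_score(**f["scores"]) >= bar]
    deferred = [f for f in findings if dread_score(**f["scores"]) < bar]
    return must_fix, deferred

# Hypothetical findings, invented for the example.
findings = [
    {"name": "buffer overflow in parser",
     "scores": dict(damage=10, reproducibility=9, exploitability=8,
                    affected_users=9, discoverability=8)},   # score 8.8
    {"name": "info leak in error page",
     "scores": dict(damage=5, reproducibility=8, exploitability=6,
                    affected_users=3, discoverability=7)},   # score 5.8
]
must_fix, deferred = triage(findings)
```

With a bar of 8, only the first finding gets fixed, which is exactly the "severe bugs slip under the bar" effect the poster complains about.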

This wasn't even the most ridiculous part of the exercise. The most ridiculous part is the security "code review": feature owners walk into a room with a huge stack of printouts and pretend it can all be reviewed in the couple of hours they've allocated for this. You can barely glance through that much code in that much time; 90% of security issues remain unnoticed during this "code review".

After all is said and done, the product is only slightly more secure (SOME of the most ridiculous things have been fixed), and management deludes itself into declaring that the product is now Fort Knox secure.

If you ask me, that's an abomination, not a proper security process. Are there any plans to change it?

Nash: Wow, this is a great, yet difficult, question. First, I should say that there is a great process for security quality called the Security Development Lifecycle (SDL) that is designed to make sure that we act consistently as a company. This means having a well-documented, repeatable process, great education that teaches people how to follow the process, and the accountability to make sure that the process is being followed consistently. A part of this accountability is something called the final security review (FSR) that my team executes on behalf of the company to make sure that the process is actually being followed. At the end of the day, the product group that ships the product is accountable to make sure that the process is followed.

I often get asked the question, "who has been fired for shipping insecure code at Microsoft?" My usual answer here is that we are still learning a lot about security at Microsoft and that most of the security issues that we deal with don't come as a result of carelessness or disregard for the process, but rather new vectors of attack that we didn't understand at the time.

One of the key things that will make this work is consistent execution across the company. I won't say that we have or should have the same level of rigor across all of our products (Windows deserves more scrutiny than say, a game), but we must apply the process appropriately. Generally speaking, Microsoft product groups are following the process consistently. That said, Microsoft has over 60,000 employees, so it's not a huge surprise that we have some people who just don't get it. While it's not a huge surprise, it's also not acceptable. If we have a group that is not aware of the process, then we have an education issue. If we have a group that is knowingly ignoring the SDL or deprioritizing it, at best we have an accountability problem and at worst an HR problem. The only way that I can help is to know about it so I can have it addressed appropriately. While I see that you posted this question anonymously, I encourage you to contact me directly through email and we can meet to discuss this. I assure you that I will protect your identity. If you are not comfortable with this, call my direct line at Microsoft (using an outside line--so that caller ID is blocked or from a conference room) and I promise not to ask your name.

As I have said many times, the Trustworthy Computing Initiative is a journey that we started in 2002 with measurable improvements along the way. In this case we clearly have a problem that needs to be fixed so that we can improve.

(10)
Why no AES in SSL yet?
by jonathan_lampe


Why hasn't Microsoft added AES to its SSL stack yet? As a Microsoft developer, it's annoying to get beaten over the head when facing competing solutions that can use the AES (128-, 192- and 256-bit) encryption algorithm in their SSL implementations.

(OpenSSL - including the Mozilla browsers - and Java SSL have all had AES support for a while. Most SSH implementations have also had it for a while.)

Nash: This is a great question. AES was approved as a FIPS algorithm after Windows XP was released in 2001, so adding it to Windows XP RTM was basically not possible. Our approach for cryptography was and is to support a pluggable model and enable replacement in our platform in a broad sense. IE and IIS depend on the platform (OS) cryptography capabilities, so adding this capability was an operating system change vs. a change in the browser, as was the case with Mozilla.

While it's fair to say that we could have just dropped AES support into the platform, the approach for pluggable crypto enables a lot more flexibility for customers. For Windows Vista, we added support for pluggable cryptography, which we refer to as CAPI Next Generation, or CNG. With CNG we not only add support for AES, but also add support for Elliptic Curve Cryptography (ECC) and the SHA-2 family of hash algorithms.
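As a rough illustration of what a pluggable model buys you, the sketch below registers hash algorithms by name so that new ones (say, the SHA-2 family) can be plugged in without changing the applications that consume them. This is a toy in Python, not the actual CAPI/CNG interfaces; the class and method names are invented:

```python
# Toy illustration of a pluggable cryptography registry. The structure
# and names are assumptions; the real CNG API is very different.
import hashlib

class CryptoRegistry:
    """Maps algorithm names to factories, so providers can be swapped."""
    def __init__(self):
        self._hashes = {}

    def register_hash(self, name, factory):
        self._hashes[name] = factory

    def new_hash(self, name):
        return self._hashes[name]()

registry = CryptoRegistry()
# Plug in the SHA-2 family; a vendor could register further algorithms
# here without touching any calling code.
registry.register_hash("sha256", hashlib.sha256)
registry.register_hash("sha512", hashlib.sha512)

# An application asks for an algorithm by name instead of hard-coding it.
h = registry.new_hash("sha256")
h.update(b"hello")
digest = h.hexdigest()
```

The point of the design is that the caller only depends on the registry, so shipping a new algorithm is a platform update rather than a change to every application.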

We are currently looking at the feasibility and benefits of making this capability available down-level. I should also note that in contrast to the existing AES implementations that have not been through an evaluation, we plan to get our implementation evaluated to meet FIPS guidelines and requirements.

(11)
VISTA users must still be administrators?
by arminw


In current Windows systems, many programs will only work correctly if the user is granted administrator rights. Will MS lean on developers to write their software such that normal user status is sufficient? Much malware today silently installs itself without so much as a warning to the user. Will VISTA incorporate some sort of warning and ask for a password before ANY executable file can run for the first time or install itself deep in the system? Will users be told NOT to type the password unless they are SURE the file comes from a trusted source?

Nash: One of the key enhancements in Windows Vista is something called User Account Control, which in my mind is a fancy name for standard user that works. There are really two parts of User Account Control. The first is a significant set of changes to Windows Vista so that the system doesn't require admin rights in places where it shouldn't, while still protecting the system in cases that should require admin. I will give you a simple example that illustrates what I mean. In Windows XP today, you need to be an administrator to run the clock applet in the control panel, but as it turns out there are cases where the user shouldn't need to be an admin to run this applet. For example, a standard user should be able to LOOK at the clock. In addition, while changing the time on the system should require admin privilege (to maintain the integrity of system logs, etc.), when I travel from Seattle to Boston, I should be able to change the time zone of the system so that I know the local time and show up for meetings on time, etc.

So in Vista we separated these functions so that standard users can do the things that standard users need to do, but still require admin for the things that need protection.

The other thing added is something we call protected admin. This is a mode that administrators run in by default. If someone is configured as an admin, their basic execution happens as a standard user. When they try to do something that requires the administrator privilege, the system prompts them to see if they want to elevate to admin to complete the task, and if they consent, just that task is elevated (this is more secure than su in Unix, which elevates the entire session). When the task completes, the high-privilege process is torn down. The system can also be configured to require a password on elevation.
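The per-task elevation model Nash describes can be sketched as follows. This is purely a conceptual toy; the real mechanism lives in the Windows consent UI and kernel, not in application code, and every name here is invented:

```python
# Conceptual sketch of per-task elevation (hypothetical names throughout).
def consent_prompt(task_name):
    """Stand-in for the UAC consent dialog; here we assume the user agrees."""
    print(f"Elevation requested for: {task_name}")
    return True

def run_task(task, requires_admin=False):
    if not requires_admin:
        return task()                 # runs with standard-user rights
    if consent_prompt(task.__name__):
        result = task()               # only this one task runs elevated
        # ...elevated process is torn down here; the session stays standard
        return result
    raise PermissionError("elevation declined")

def view_clock():
    return "10:30"                    # looking at the clock needs no admin

def set_system_time():
    return "time changed"             # changing the time does need admin

viewed = run_task(view_clock)                             # no prompt
changed = run_task(set_system_time, requires_admin=True)  # prompts first
```

The contrast with session-wide elevation is the key design point: only the privileged task runs elevated, so a compromise of any other running task gains nothing.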

As you note, this also has a lot of implications around application compatibility and a ton of work is being done to help ISVs building solutions for Vista to make sure that their applications run as standard user if appropriate.

For existing (legacy) applications, we find that most applications fall into one of four categories: 1) applications that already run well as standard user, 2) applications that really do require admin privilege (system utilities, for example), 3) applications that check for admin privilege but don't really need it, and 4) applications that require admin privilege for some portion of their functionality.

For applications that run as standard user, we are set. Similarly, applications that really should require admin privilege run as they should. If a standard user encounters such an application in the home (i.e., non-domain-joined) scenario, the standard user is prompted to have someone who has admin privilege type in a password to elevate the system to run the application as appropriate. We call this the "over the shoulder" elevation case.

For applications that check for admin, but don't really need it, the situation is usually that the developer of the application didn't want to take the time to test the application in both the standard and admin user modes, so they put a check in at initialization. We have a pretty good list of these applications, so for the ones we know about, we put a little compatibility shim in the software so that when one of these known applications checks to see if the user is running at admin level, the system reports back that they are, even though they are a standard user. This preserves application compatibility, but poses no risk of unauthorized escalation since the user really is just a standard user.
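The shim idea reduces to intercepting the admin check for a known list of legacy applications and lying to them, while the process keeps only standard-user rights. The sketch below is a toy model of that concept; all names are illustrative, and the real shim operates at the Windows API layer, not in Python:

```python
# Toy model of the UAC compatibility shim concept (invented names).
KNOWN_SHIMMED_APPS = {"legacy_app.exe"}   # hypothetical shim database

def real_is_admin(user):
    """The genuine check: is this user actually an administrator?"""
    return user.get("is_admin", False)

def shimmed_is_admin(user, app_name):
    # For known legacy apps, report "admin" so the app starts. The user
    # still holds only standard-user rights, so nothing is actually
    # escalated; the lie only defeats the app's unnecessary startup check.
    if app_name in KNOWN_SHIMMED_APPS:
        return True
    return real_is_admin(user)

standard_user = {"name": "nanny", "is_admin": False}
```

A shimmed legacy app sees "admin" and launches; every other app gets the honest answer.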

For applications that require admin for some part of their execution, we are providing guidance to the ISVs on how to refactor their applications so that the components that the end user sees don't need the privilege, and the ones that do can be isolated and componentized so that most users don't encounter the escalation.

(12)
OpenBSD
by hahiss


How is it that OpenBSD is able to be so secure by design with so few resources and yet all of Microsoft's resources cannot stem the tide of security problems that impact everyone, including those of us who do not use Microsoft programs?

Nash: First, I should say that OpenBSD includes a relatively small subset of the functionality that is included in Windows. You could argue that Microsoft should follow the same model for Windows that the OpenBSD Org follows for their OS. The problem is that users really want an OS that includes support for rich media content and for hardware devices, etc. So while OpenBSD has done a good job of hardening their kernel, they don't seem to also audit important software that customers commonly use, such as PHP, Perl, etc., for security vulnerabilities. At Microsoft we're focusing on the entire software stack, from the Hardware Abstraction Layer in Windows, all the way through the memory manager, network stack, file systems, UI and shell, Internet Explorer, Internet Information Services, compilers (C/C++, .NET), Microsoft Exchange, Microsoft Office, Microsoft SQL Server and much, much more. If a software company's goal is to secure customers, you have to secure the entire stack. Simply hardening one component, regardless of how important it is, does not solve real customer problems.

Second, it is not completely accurate to say that OpenBSD is more secure. If you compare vulnerability counts just from the last 3 months, OpenBSD had 79 for November, December and January compared to 11 for Microsoft (and that includes one each for Office and Exchange - so really 9 for all versions of Windows). I encourage you to look at the numbers reported at the OpenBSD site to verify that this is true.

("Bonus" question added by Mike Nash)

Differences Between Windows & Other Employers?
by eldavojohn


Mr. Nash, what are the greatest differences and similarities between Microsoft Corp. and Data General Corp., your two most recent employers? Most importantly, how drastic were the changes you saw (not necessarily changes due to job function but changes in general)? What do you like the most and what do you hate the most?

Nash: Great question. First, it's been a while since I worked at DG (I left DG for business school in 1989). That said, I would say that the biggest difference between the two companies is that while DG was fundamentally a hardware company, Microsoft is first and foremost a software company. DG was primarily focused on driving a business based on selling hardware; software was a necessary component of that business, but not something that was valued on its own. In contrast, Microsoft's basic premise is that the hardest problems can best be solved with software, and as a part of that, the power of hardware can be realized best through great software.

The second biggest difference is that while DG always measured itself in terms of other companies (Digital was the big deal back when I was at Data General), Microsoft is a company that is constantly trying to reinvent itself. As a result, Microsoft is much more self-critical, but at the same time willing to make long-term investments to address both new opportunities and shortcomings. The Trustworthy Computing Initiative is a great example. Soon after Blaster happened, a lot of people I spoke to (inside and outside Microsoft) asked me if Blaster was evidence that the Trustworthy Computing Initiative was a failure. My response was just the opposite. I was super glad that we had taken the time to focus on and improve our security. If we hadn't, things would have been much worse. At the same time, Blaster did provide some pretty clear guidance on some changes we had to make around Trustworthy Computing (TwC). More than that, it reminded us all that we would have to continue to make some major changes in TwC as we continued to learn, so we should just plan for it. That approach is mostly a matter of culture, and frankly, if the leadership of DG had had a similar point of view, there might be a DG today. For sure it's why there is great change and innovation at Microsoft more than 30 years in. And yeah, it's hard work.
  • by backslashdot (95548) on Thursday January 26, 2006 @10:11AM (#14566772)
It's nice they have all this process in place. But I have noticed that just about every "security update" that Microsoft has produced thanks someone outside Microsoft for finding the issue. This is commendable for sure, but it's also a sign that Microsoft internally isn't finding these issues.

On Microsoft giving credit to third parties, I'd definitely say that is commendable... from personal experience I can tell you there are other major companies that don't even acknowledge help.
    • by backslashdot (95548) on Thursday January 26, 2006 @10:16AM (#14566803)
I just checked eEye's upcoming vulnerabilities page... and it looks like Microsoft has at least 3 serious unpatched vulnerabilities, including one that they have known about for over 206 days.

http://www.eeye.com/html/research/upcoming/index.html [eeye.com]

What's that about?
But I have noticed that just about every "security update" that Microsoft has produced thanks someone outside Microsoft for finding the issue. This is commendable for sure, but it's also a sign that Microsoft internally isn't finding these issues.

      This is in no way unique to Microsoft or technology companies in general. There is a corporate mentality at most companies that lets you question and doubt but only to a small extent. That is why outside consultants exist. What Microsoft has to do is embrace thes
    • In 2005, Microsoft released 55 security bulletins. Let's assume that all of them were found by external parties.

      None of us has any idea how many security vulnerabilities were found and fixed internally by MSFT employees before their products shipped. I suspect it's quite a bit higher than 55 bugs.

It's simply asinine to conclude that MSFT can't find and fix security issues just because 55 of them got past Microsoft's developers.
    • My guess is that Microsoft doesn't release hotfixes for undisclosed vulnerabilities, and rolls them into other security updates and service packs. They only issue security notices for publicly disclosed flaws (those found by third parties).

      While you might argue that this is simply PR motivated, and you'd probably be right, there is also another issue. It's clear that attackers have been reverse engineering patches to figure out how they work, and then exploit the vulnerabilities on unpatched machines. If
  • by Roj Blake (931541)
    It is now RMS (Rights Management Services).

    By changing the name they made it less evil. Yea Microsoft!
    • By changing the name they made it less evil.

      RMS==DRM. The layers of irony here are astounding. Stallman is going to be p/o'ed

It's still simple enough to make the propaganda change, though. RMS=="Restrictions Management System". Wait! "wRongs Management System".

Perhaps their next acronym will be "Gag Nurturing Utility"
      • RMS==DRM. The layers of irony here are astounding. Stallman is going to be p/o'ed

        It's an outrage. On the one hand, DRM could be argued as good for everyone in what it's trying to achieve, but the real-life implementation is an annoying thing that keeps popping up and telling you that you can't do things, and castigating you for allegedly suspect intentions and/or morals, and tries to stop you using your property in a way that you'd reasonably want to, all because of rights issues.

        Whereas RMS...oh, wai

      • Microsoft's name for DRM is WMRM.
        Microsoft's name for lawyers trading secure files is RMS.

        From what I understand, RMS will have both the encryption and decryption built into Vista, whereas WMRM will remain the same, WMP handles DRM playback, and either a service provider will encrypt the files, or offer a plugin for Windows Media Encoder to do it on your PC.

        How is this renaming DRM again?
    • Um, he doesn't say there isn't DRM anymore, but he assumes he's asking about RMS which is an umbrella API that uses DRM for document/data protection in corporate environments (Office products, etc, could be that it's documented too for general Windows app usage). Again, no name changes here. Two different things though.
    • IMO the "R" in both terms more accurately stands for "restrictions".

      None of this stuff has anything to do with enforcing "rights", its all about managing restrictions.
  • Wow (Score:5, Insightful)

    by tgd (2822) on Thursday January 26, 2006 @10:14AM (#14566783)
That was a shockingly good interview. Kudos, Slashdot. That's the kind of quality we had around here five years ago. Real solid questions, excellent answers. Keep up the good work.

    (And they'll be just as good when posted again this afternoon, Zonk) ;-)
    • This was modded funny?

I agree with the (serious interpretation) of the parent comment. It was a shockingly good interview, and while I appreciate my Debian box, this helps me feel a little bit better as a developer working for a Windows-only company.

      Thanks, Mr. Nash! I hope that you do get to follow through with that man-from-the-trenches -- if you're really a VP like that, then I'm seriously impressed. Props!

      --clint
  • by putko (753330) on Thursday January 26, 2006 @10:17AM (#14566809) Homepage Journal
    A guy asks why not decouple IE from the OS -- an obvious security problem, given that users typically run as Admin (aka root), so any buffer overflow becomes a flaw that threatens the entire box.

Mac OS, Linux and the BSDs manage to decouple the browser. I'm assuming with Mac OS, it is somehow possible to share the browser's code. Microsoft has a technology called (originally) OLE. The point is, one app can embed another app in it. The apps don't have to run with root rights: folks couple together Word and Excel when both run as user, and they do it all the time. Here's the answer the Microsoft guy gave:

    "In terms of your question around Internet Explorer, there are two real aspects of this: 1) the platform implications of having IE in Windows, and 2) the user experiences that are possible with having IE in Windows.

    From a platform point of view, decoupling IE would break a lot of things. There are many applications that depend on IE for rendering HTML and for accessing the Internet. Think about email applications, Internet-aware clients like the AOL Explorer or even Microsoft Money that use IE to render HTML in the application. Not only would this break a lot of applications, but it would also put a huge burden on developers who would now have to write their own HTML rendering capability."

    That seems to imply that the OLE-like features require the stuff to be part of the OS, but that just isn't true (in my experience). Perhaps there are some extra features that come from having the browser in the OS, but in general, that just isn't necessary -- and given the security problems, just isn't worth it.

    At that point, it is hard to believe the guy -- either he's trying to tell a lie, or he's not informed, or he is informed, but the story is very complicated and he doesn't manage to tell it.

    Of course, others have said Microsoft put the browser into the OS in order to kill Netscape.
    • by Rezonant (775417) on Thursday January 26, 2006 @10:44AM (#14567022)
      Well, the IE engine is just that: an OCX control. Internet Explorer is just a little window that contains an address bar, some buttons, and this OCX control. It's not in the kernel. It's not really part of the OS other than that the shell (which is a user mode app) uses it for some things, and that it's included in the installation. You already got what you asked for. The reason IE has so many exploits is simply bad design and crappy code, not that it's somehow "part of the OS". Unfortunately the MS guy didn't clarify this enough for obvious reasons.
    • by Bogtha (906264) on Thursday January 26, 2006 @10:47AM (#14567060)

      Internet Explorer is at least as decoupled as Konqueror in KDE and Safari in OS X.

      That is to say, it's just a normal application. The thing that is "coupled" to the OS is Internet Explorer's rendering engine, Trident. And when I say coupled, I mean it's just a standard system library that many applications use. Just like KDE applications can embed KHTML and OS X applications can embed WebKit.

The complaint about Internet Explorer being "coupled" to the OS is, from a security standpoint, nothing but FUD parroted by the ignorati, and no different from the competition.

      That seems to imply that the OLE-like features require the stuff to be part of the OS

      Huh? No. What he is saying is that many applications use the HTML rendering functionality, so either that functionality is shipped as a standard component with Windows, or all the application vendors implement their own. From most perspectives (security, memory use, developer workload), it's better to have the work done once for the entire platform instead of once for every application.

    • Mac OS, Linux and the BSDs manage to decouple the browser.

      The browser is not decoupled in either Mac OS X or KDE. Both share a very similar architecture to Windows.

Saying "It's in the OS" really depends on your definition of "Operating System". From the traditional marketing/common-user definition, KDE is just as much part of the OS/Operating Environment as the higher-level libraries (such as MSHTML) included in Windows. If you want to run certain software, you get a coupled browser as part of the full-meal
    • by _xeno_ (155264) on Thursday January 26, 2006 @11:21AM (#14567557) Homepage Journal

      IE is decoupled from the OS in the same way Mozilla is decoupled from the OS, assuming you define OS as "kernel." IE is part of the shell, though. Removing IE would break a lot of the existing shell.

      IE is part of Windows in the same way Konqueror is part of KDE. (Wow, a lot of other people came up with that while I wrote this! :)) If you removed Konqueror from KDE - actually, I'm not really sure how that would ripple, but the concept is the same. I think Konqueror handles the desktop in the same way Nautilus runs the GNOME desktop and IE runs the Windows desktop. (That is, it is the application that draws the desktop background and all the pretty icons on the desktop.) Removing it would cause problems with Windows applications because it's assumed to be part of the platform.

In the case of the Linux desktops, you could probably hack something together that would work without those components. Arguably you could in Windows too, I guess, by having the Task Manager open (since it allows you to run programs by filename). But Windows is designed as a distribution to use IE as the main shell program. If you kill IE in Windows (go to Task Manager, find "explorer.exe", and kill it - or just crash it, there are plenty of ways to do it), you lose the desktop, the Start menu, and the taskbar. IE is the shell that most people interact with. (It's worth pointing out that "iexplore.exe" is essentially a thin stub that hosts the same shared browser components.)

      However, even though IE is the shell, it's not MSHTML. (Confused yet?) IE actually hosts MSHTML as an ActiveX control. (Yes, OLE is still around - it's now ActiveX.) So in that sense, the HTML component is decoupled from the shell as you'd expect. However, MSHTML currently gets used to draw the desktop (remember Active Desktop?) and the file view in Windows Explorer. (Google "desktop.ini" for information on how to muck with the HTML displayed in folder views [microsoft.com].) Arguably they could separate the two, and recreate the file browser without the HTML rendering capabilities.

      However, most of this is really a moot point. The majority of times IE is used as an infection vector is when IE is being used as an Internet browser. (The others have to do with the folder view "previewing" certain files, an annoying habit that Nautilus shares. At one point there was a buffer overflow in the ID3 handler, allowing a malicious MP3 to infect you simply by selecting it.) Removing it from the shell wouldn't help much, since it's the use of it as a browser that gets most people. In that respect, switching to Firefox is usually enough to protect you from IE's flaws.

    • I can actually understand the need to integrate IE from a business point of view...

      My employer started a "virtual school" that operated differently than existing online classes. We invited other schools to join us as a virtual school board and expand everyone's customer base. My employer then had a sudden panic attack because they thought they had "lost control" of the virtual school and all the branding advantage that came with being the first or only. I shrugged because we train all of the other teache
      • Very good point -- Microsoft even stated that their goal with IE was to make it "transparent" to the Windows user experience. In those days, starting Nutscrape was a very jarring experience -- it started slow, the icons and colors were ugly and nonstandard, etc. They successfully defined Nutscrape as their own "box", with IE everywhere around it. Netscape/Mozilla didn't really figure this out until they created a semi-IE-clone with Firefox. And Apple has taken a very similar tactic with Safari.

        But that's th
    • Just FYI, Apple's Safari uses the KHTML library [apple.com], which is what other applications can use to render HTML as well.
    • What's the difference between running IE as Administrator, and running any other application as Administrator? There's no difference, so the point is moot. Last I checked IE doesn't run in the kernel. Everything that IE does, IE does from the confines of the process. The real problem is that everyone is Administrator.
    • For me, an important issue is that it is difficult (but not impossible, see the Mozilla Control project [www.iol.ie]) to substitute a different rendering engine in place of IE's. Microsoft's real "crime" was making it relatively simple to include their browser engine in other applications, and making it relatively difficult to have a different engine be included in its place.

      I was developing a Windows application that required an embedded web control. I looked at the Mozilla Control but the control is tied in to sp

  • by digitaldc (879047) * on Thursday January 26, 2006 @10:27AM (#14566878)
    Today, I travel a bit more prepared for situations like the one I encountered at Nanny's house. I have a 512MB memory stick with me in my briefcase that includes a copy of Service Pack 2 for Windows XP, the latest beta of Windows AntiSpyware and the current month's release of the Malicious Software Removal Tool.

    Sounds like a good recommendation - how about shipping Vista with a flash drive with the latest security software on it, with a short guide on how to use it and how to disinfect your PC?

    Interesting (possibly useless) mentions from questions & answers:
    'Firewall' mentioned 13 times
    'Blaster' mentioned 10 times
    'Focus/ed' mentioned 14 times
    'Trust/worthy' mentioned 10 times
    'Key' mentioned 15 times
    'XP' mentioned 17 times
    'Explorer' mentioned 6 times
    'Vista' mentioned 35 times
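    Tallies like the ones above are easy to reproduce. A rough sketch in Python (the helper name and the sample text are mine, not from the interview; run it against the full transcript to check the counts):

```python
import re
from collections import Counter

def tally_mentions(text, terms):
    """Count case-insensitive whole-word mentions of each term."""
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    return {term: words[term.lower()] for term in terms}

# Stand-in text; substitute the interview transcript to reproduce the list.
sample = "Vista ships with a firewall. Vista's firewall is on by default in XP SP2."
print(tally_mentions(sample, ["Vista", "firewall", "XP"]))
# -> {'Vista': 2, 'firewall': 2, 'XP': 1}
```

    Note that possessives like "Vista's" still count toward "Vista", which matches the loose counting style used above.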
  • by Danathar (267989) on Thursday January 26, 2006 @10:29AM (#14566890) Journal
    Now as a followup I'd REALLY like to see the same interview (possibly even the same questions) put to the guy in charge of security at Apple.

    That would really put things in perspective.
    • Yeah, then the BSD and Linux security teams. If there is such a thing.
    • It would just degenerate into the typical circlejerk of "Macs don't have viruses therefore they are secure!!" "What about nVIR in 1986?" "Norton made my P4 slower than my G3!" etc etc. You can have that flamewar any single day on Slashdot without an official Apple rep present.

      The truth is that OS X Server doesn't have the greatest track record; Apple often lags other vendors for cross-platform OSS and Java patches by months; and there have been a few real boneheaded 'ease-of-use' security flaws in OS X. But you
  • It's obvious (Score:5, Insightful)

    by Billosaur (927319) * <wgrother@NOSPAM.optonline.net> on Thursday January 26, 2006 @10:30AM (#14566894) Journal
    If these responses are genuine then it's clear to see that MS is taking security more seriously. However, their methodology leaves a lot to be desired. The Security Development Lifecycle can't be a separate function but needs to be an integrated part of the normal Software Development Lifecycle (notice they're both SDL). It starts at the level of the code jockey; I get the sense that they don't really know the competence level of the people they have writing code and they certainly haven't drummed the idea of secure code-writing into their heads. If that's true, all the rest of it doesn't matter. Security review has to start at the code writing level and work its way up slowly; given the market pressure, I don't see that happening.
  • by djupedal (584558) on Thursday January 26, 2006 @10:30AM (#14566896)
    All those talented (?) professionals, and all those plans and schemes and "... documented, repeatable processes & checkpoints in the release process to make sure that this process was followed"...and it all comes down to Uncle Kenny. The building is full to the rafters with brains, yet one simple conversation with a user and the entire project meets an otherwise delayed milestone. Un-be-lieveable...

    Uncle Ken, if you're reading this, give Nephew a swirley if he doesn't cut you a fat bonus, 'cause your instincts are top notch. Except, of course, that you run Windows, but I'm sure the 'XP family pack' gets a workout, so at least the price is right.
  • What do you like the most and what do you hate the most?
    Did he fail to include what his dislikes about Microsoft were? Surely everyone has some problem with their current employer. I wonder if things are run around Microsoft the same way Hitler ran things: Any voice of dissent--no matter how minor--results in termination.

    If so, I hope the employee who asked the question above succeeds in maintaining his anonymity.
  • by Silver Sloth (770927) on Thursday January 26, 2006 @10:33AM (#14566917)
    From TFI

    In late 2001, I sent a mail to all of my family members telling them that I would only help them with their PC if they were running Windows XP, so my grandmother ran out and bought an XP machine.

    So M$ even forces close family to upgrade! Win2K wasn't that out of date in late 2001.

  • by Caspian (99221) on Thursday January 26, 2006 @10:34AM (#14566920)
    "We also believe that over time, that regular users will also want to protect their own information. For example in the future, home users may want to protect and control the usage of information such as lists of their friends, photos, banking account information and other personal data."
    I find this to be wishful thinking at best and completely laughable at worst. End-users will embrace DRM? I think this dude needs to talk to more college students. To end-users, DRM is stuff to get around so they can play their illegal music. Period. That's all it will ever be. End-users won't ever see a need to encrypt data on their computer, since they still go by the "I don't do anything important, so no one would want to break into my computer" school of thought vis-a-vis computer security. (They haven't yet grasped, of course, that the overwhelming majority of attacks are automated.)
    • I think he fails to realize the best way to protect my information is not to have MS wrap it in another layer of security, but to strengthen the layers of security that are already there. If nobody can get a rootkit, keylogger, whatever on my computer without my knowledge to begin with, my data is pretty darn safe -- even if it's in plain text.
  • Bad apples (Score:3, Interesting)

    by Bogtha (906264) on Thursday January 26, 2006 @10:37AM (#14566944)

    A good interview for the most part, but I have to take issue with this bit:

    It's hard for me to feel too bad for the person who you know who doesn't have a licensed copy of Windows and is infected.

    How about feeling bad for everybody that gets spammed by people using these machines as zombies? It's not just the person using an illegal copy that is negatively affected by their infection.

    • by Narcissus (310552)
      How about feeling bad for everybody that gets spammed by people using these machines as zombies?
      Well, to be honest, we only need to get Bill Gates to decide on a date that will see spam ended, then we wouldn't have that problem, either :)
    • Re:Bad apples (Score:2, Interesting)

      by VoxCombo (782935)

      While the Windows AntiSpyware offering is only available to users of licensed copies of Windows, we do make our high priority security updates available to unlicensed users of Windows, primarily in order to prevent unlicensed Windows systems from posing a threat to the Internet if they get infected.


      So the question is, why not protect non-licensed users from spyware? The short answer is that spyware primarily affects the machine that has the infection.

      Sounds like he has an answer to that.

      While I'm sure ma

  • by cpugeniusmv (828846) on Thursday January 26, 2006 @10:39AM (#14566960) Homepage
    [...] but given that Windows Vista and Windows Longhorn Server are going to be the most significant releases of Windows in the last five years or so [...]

    By the time they are released, they will have been the only releases of Windows in the last five years.
  • What a knob! (Score:2, Insightful)

    by debest (471937)
    In late 2001, I sent a mail to all of my family members telling them that I would only help them with their PC if they were running Windows XP, so my grandmother ran out and bought an XP machine.

    He's a VP at Microsoft, and treats his family like the BOFH! I would think that if I didn't want to be in a "forced upgrade" situation, having this guy in my family would be perfect. No such luck. He must be really popular at family reunions.
    • You don't mandate requirements for your extended support network. I've got neighbors, family, friends I support - probably a dozen machines in all. I don't say it like "my way or the highway" but I absolutely mandate upgrades for those folks.

      For example, last week a family friend had her Win98 box spywared again (even though she was on dialup). I didn't offer to fix the machine. I just gave her an old XP box I wasn't using, but told her I wanted her to get DSL in return for a new free computer. Seemed
    • He's a VP at Microsoft, and treats his family like the BOFH!

      No, the BOFH would tell his "friends" to keep using the unstable, insecure, and crappy products that were Windows95, Windows98, and WindowsME rather than moving to XP.

      Or are you honestly going to try and say that XP isn't both more stable and more secure than those three products? It's not perfect, but it's a helluva lot better.
    • Re:What a knob! (Score:3, Interesting)

      by WhiteWolf666 (145211)
      Whenever a family member or friend now asks for purchasing advice, I tell them, "Get a Mac".

      If they don't "get a mac", then they are on their own in terms of computing help. I suggest getting a comprehensive service plan from the retailer.

      I no longer have the nerves or patience to fix people's computers on a regular basis, and these people are just not interested in safe computing practices, but I can't blame them either; even my Windows boxen used to get infected every now and then.

      People who purchased sys
  • Is this guy really the MS security VP? I find some of his answers amazing. About his uncle he says "I told him that he should turn on Automatic Updates and turn on his firewall. When he asked me how to do it, I talked him through the dialog boxes and we got him setup. In this process, I learned two important things. The first was that that the process of making these changes was a pain in the neck. The second was that when we really should have changed the default configuration for Windows Update."

    It seems
  • by Coppit (2441) on Thursday January 26, 2006 @10:45AM (#14567031) Homepage
    Over and over his answer to many questions was "we've implemented much better security processes". Then the question comes from the Microsoft employee that basically says these processes are a joke, and that "in the trenches" they are just going through the motions. Why didn't he answer the question?

    If Microsoft is serious about security, they need to treat it like they treated reliability. Eventually about 50% of their resources were spent on testing. (One tester for each developer.) I'm sure that this was a battle, but eventually the developers saw the benefit and bought into it. Hopefully Microsoft will eventually devote developers exclusively to security, and in nontrivial numbers.

    Asking developers to do a security review at the end of the development cycle is about as effective as asking them to do some testing at the end.

    • He *did* answer the question. He can't hand-hold all 60,000 employees at Microsoft (and God knows how many project managers), so he told the employee with the problem to contact him confidentially so that he can look into the issue. How is that not an answer? What exactly did you expect him to say?

      Besides, there's a decent chance that the question was a fake anyway. Who knows with Slashdot?
  • It's interesting how he doesn't address the fact that MS is putting the Internet community at a higher risk because of their own philosophy that you shouldn't pirate. :-p Definitely a stance of "taking care of our company's profits is more important than helping against profit losses caused by problems from our community in general".

    It's also, from having used Windows, interesting that he doesn't say that critical security updates still are sent despite Windows copies not having been activated. Isn't this
  • by databyss (586137) on Thursday January 26, 2006 @10:52AM (#14567114) Homepage Journal
    "We know our old stuff is filled with security holes but that's because we didn't really care before.

    The new stuff will rock! GO BUY IT NOW!

    Oh yeah... open source sucks too!"

    The guy even blew off valid questions from MS Developers.

    That's talent... this guy should run for President.
  • The other thing added is something we call protected admin. This is a mode that administrators run in by default. If someone is configured as an admin, their basic execution happens as a standard user. When they try to do something that requires the administrator privilege, the system prompts them to see if they want to elevate to admin to complete the task, and if they consent, just that task is elevated (this is more secure than SUPERUSR ON in Unix that elevates the entire session). When the task complete
  • Wow. (Score:4, Insightful)

    by earthbound kid (859282) on Thursday January 26, 2006 @10:56AM (#14567182) Homepage
    According to Google, no one has ever said "SUPERUSR ON" [google.com] before this guy.

    I mean, I know it's his job to use MS stuff, but hasn't he tried the competition enough to know that the command in question is called "su" and that most people just use "sudo" to do superuser commands one at a time? I mean, I know I'm being picky by calling out his semantics, but this is pretty basic stuff for anyone who has ever used a *nix, and as a security guru it seems like he should have at least dabbled until he got the gist of using OpenBSD, or whatever.
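    For reference, the Unix mechanism in question scopes elevation to a single command rather than a whole session, which is the distinction Vista's per-task prompt is borrowing. A minimal sketch of the idea (the helper name is mine, not a real API):

```python
def elevated(cmd):
    """Wrap one command for per-task elevation, sudo-style: only this
    command runs with raised privileges, unlike `su`, which opens an
    entire elevated session."""
    return ["sudo", "--"] + list(cmd)

# Hand the result to subprocess.run() (or similar) to actually execute it.
print(elevated(["apt-get", "update"]))
# -> ['sudo', '--', 'apt-get', 'update']
```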
  • by Benanov (583592) <brian...kemp@@@member...fsf...org> on Thursday January 26, 2006 @10:57AM (#14567189) Journal
    "In late 2001, I sent a mail to all of my family members telling them that I would only help them with their PC if they were running Windows XP, so my grandmother ran out and bought an XP machine."

    Wow. A Microsoft Employee forced his own grandmother to upgrade...
    • by steve_l (109732)
      yeah, but he didn't say whether or not she bought a legit XP PC or a white-box that came with office and powerpoint for $400.

      Funnily enough, I have recently told my near family members that I don't support windows problems any more. While I used to spyware purge and firefox them, it's just a losing battle. From now on they get a choice of Suse or Ubuntu Linux, which I will set up with SSH for remote maintenance if ever needed. Harsh but fair.

      -steve
  • by Anonymous Coward on Thursday January 26, 2006 @10:59AM (#14567214)
    "OpenBSD had 79 for November, December and January"
    "I encourage you to look at the numbers reported at the OpenBSD site to verify that this is true."

    Am I missing something?

    http://openbsd.org/security.html [openbsd.org]

    I count 2:
    - Jan 5, 2006: Do not allow users to trick suid programs into re-opening files via /dev/fd.
    - Jan 5, 2006: A buffer overflow has been found in the Perl interpreter with the sprintf function which may be exploitable under certain conditions.

    Neither of these are remote vulnerabilities, either.
  • One of the first and primary rules of Software Design is simply this:

    All features are designed

    Although it is clear from the response that Microsoft is serious about improving their security, their methodology is seriously flawed. Creating a product by rigorous design is good but inserting a separate security check at some later time is tantamount to writing software that is "hopefully secure". If security is a feature, you design it as a feature from the start. Anything else and you will get something less than desira

  • I have a wife, three brothers, a sister, five sisters-in-law, three brothers-in-law, two parents, one mother-in-law, a father-in-law, one uncle, two aunts, one living grandmother, three kids (although they are all too young to use a PC), five nephews and seven nieces, so I get a lot of calls from family members asking for tech support. It's actually amazing how much their feedback has driven decisions in our security strategy.

    So, if not for this guy's extended family, Windows would be a fundamentally le

  • In the trenches (Score:4, Interesting)

    by Hairy1 (180056) on Thursday January 26, 2006 @11:15AM (#14567454) Homepage
    He didn't seem to answer the actual Microsoftie in the trenches who was saying that the processes that are in place are not working. His comments about repeatable processes remind me of the production line school of thought: that if you can work out how to do something right once, you need only document it and the factory worker can do it over and over again like a robot.

    This has been applied to software development for a long time, and certainly not only by Microsoft. Sadly, software development isn't a factory job; it is creative, and so you must treat it differently. Quality Assurance isn't something you test in at the end; it has to be a consequence of the entire process. When you are designing something new you have to think from the very start about the security model.

    I don't believe code review will help security -- in my experience, code review only deals with issues of syntax and adherence to coding standards. One way to do it is not to use a language which permits so many potential issues such as buffer overruns that can result in a system being owned.

    Dr. Phil talks about setting yourself up for success, and I don't think Microsoft has learned this yet. They are still coding the same way as always, only with some 'processes' added on, rather than giving the developers the ability to deal with security as a priority higher than shipping.
  • "for users who still need or want to be logged on as an admin on their system we make it clear to them when they are about to do something that requires administrator privilege. The user can configure their system to either ask them if they want to escalate, or ask for a password when the system tries to elevate them. We have also gone through all of the system services in Vista to see which ones have admin privilege, verify which ones really need it, and for the ones that don't, remove it. "

    Why does thi
  • RMS (Score:2, Funny)

    by spacemky (236551) *
    Am I the only one who finds it hilarious that Microsoft uses RMS to mean Rights Management Services? (DRM)
  • by HerculesMO (693085) on Thursday January 26, 2006 @11:26AM (#14567613)
    And it isn't too bad.

    I'll give them credit where it's due... I think XP is a great piece of software, and *knock on wood*, I haven't had any real problems with it. I think the worst of Microsoft's reputation comes from the Grannies and Grandpas who don't know how to use a PC properly -- and their problem is really twofold -- they have the largest operating system in the world, and they have also got the biggest percentage of neophytes who use it. It's really just a breeding ground for virii, spyware, and the like. For a reasonably seasoned computer professional, Windows XP works flawlessly.

    I will however, complain on a number of points. First, I had a friend who was a developer for the new version of SQL Server. I say *was*, because he quit. There is a *lot* of bureaucracy in Microsoft, and my friend hated it. Every time work was done, there was a meeting on the 'milestone' or whatever... and people would take turns ratting each other out to say that "So and so didn't do this" or whatever -- it was an extremely competitive, hostile environment. He now works for Yahoo, where he says the attitude is much more lax and people are encouraged to take it easy and work together. I think this attitude is also why Google has amongst the happiest employees and most production coming from its offices in the shortest amount of time. The layers of bureaucracy aren't as thick as they are at Microsoft, because Management and Employees aren't so clearly defined as they are in MS. There's a definite separation of powers there, and it causes a lot of friction and causes a lot less to get done.

    As I mentioned, yes, I've drunk the Kool-Aid. I think, however, I can still keep an open mind. I recently attended a Red Hat systems administration class. I think I was the only "windows only" user there -- most of the people were Unix admins of some sort. I managed throughout to keep my mouth shut, because some of the distinct hatred of Microsoft was so reminiscent of Ballmer throwing chairs. I felt out of place at a very snobby party, because every few moments the instructor was there criticizing Microsoft and its products and I always was tempted to ask -- "So what does Open Source have to offer that can compete with Microsoft's products?" This is true in a lot of areas -- Exchange, BizTalk, .NET (Developer tools are laughable in Open Source), etc. I'm not saying any of those products are even close to perfect... but they are currently the best. The instructor was convinced that Exchange can't support the email volume that companies need, yet I had just come off a build of Exchange that supports 19,000 users across thousands of geographic sites, all managed from a single location. Is it sheer hatred, or is it totally just idiocy on the part of those guys? I'm not trying to stereotype... I'm trying to understand. I would say 90% of the problems that the Unix/Linux guys laughed about with Microsoft, I could have fixed easily because it was an error on THEIR part, not Microsoft's.

    I know I've said enough already to get modded troll -- supporting Microsoft -- the horror! But look folks, I'm a Windows administrator with great admiration for Linux and Open Source. I run Ubuntu at home, my web site is served off of Red Hat Enterprise 4, and Firefox is the default browser on all my machines, Windows or not. But I know where Linux has strong points, and I know where it has weak points. After taking the class, and passing the test... I can honestly say that in any network *I* set up, I'd never use Linux as a domain controller. I'd use it for web serving, databasing, maybe a handful of other things. But it's not that Microsoft's solution is necessarily the best in itself... it is the best in CONJUNCTION with other products. Those products, not surprisingly, are also Microsoft products. So I can create my Windows domain, set up users, set up a file server, set up shadow copies, and then all administrative tasks become idiot-proof. My users can automatically restore prior copies of files that they delete or simply screw up.
  • by pimpimpim (811140) on Thursday January 26, 2006 @11:28AM (#14567650)
    is... "Only one remote hole in the default install, in more than 8 years!"

    Notice the word "default". You can be sure that when you install BSD on your PC and connect it to the net, it will be running without problems for a long, long time. Try that with a Windows install. Instead, he just uses the infamous 'count vulnerabilities' argument, which just doesn't hold because you cannot compare a vulnerability that already requires an account on the system with one that gives root permissions from just any external connection.

    Furthermore my last OpenBSD install supported all my media hardware, and I could use xmms, mplayer, my tv card etc etc without problems. I would actually say that OpenBSD could be a very good candidate for people that just want to use their pc for multimedia without going through much pain.

  • by belrick (31159) on Thursday January 26, 2006 @11:31AM (#14567684)
    The question from the employee described a situation I've seen all too often, the "Emperor has no clothes" syndrome.

    Management on high sets a policy and directs lower level management to develop a process, and perhaps the process gets developed and perhaps it is even a good one, but the implementation of the process properly requires more resources and less management pressures to get other priorities met (like a ship date).

    The interactions between levels of management then almost invariably lead to the situation where the people at the bottom learn that people above them don't want to hear bad news, no matter whose fault it is, and soon learn that telling the truth leads to whacks on the head while telling half-truths, or putting a spin on the truth, results in "atta-boy"s.

    Multiply that by two, three, or four layers of management and you get this guy's response. He doesn't even realize after hearing the question that his policy is considered a joke by the lowest layers.

    The funny part is, there are smart people in the lower layers and they can compare the corporate public communications of each layer of management and see how things get distorted; since they already know the lowest layer, it is ironic that they get one of the best views of the company!
  • by Stan Vassilev (939229) on Thursday January 26, 2006 @11:35AM (#14567739)
    The message from Microsoft: Never visit your grandma without your 512MB flash stick full of patches and antispyware progs.

    From the answers it's obvious things are moving in the right direction, but there's also a lot of "I'm making it sound as if security is important, but it's really just everyone trying to save their ass".

    You can't expect any company to be honest and just say "ok what the heck: yes we're not superhumans, the code base is huge, lots of bad decisions in the past, & we have lots of smart coders, but some less smart ones, and trying to improve on this whole bunch of stuff while remaining compatible is HELLA hard. But we're trying".

    Non-technical users would assume MS is just being monopolistically-lazy about it.
  • Automatic Updates once installed an update and restarted a PC in my lab without asking for permission to restart. It interrupted a long term test of a digital system, and required me to redo two days worth of testing. Take your Automatic Updates and stick it.
  • by RibRdb (243719) on Thursday January 26, 2006 @11:51AM (#14567958)
    SDL sounds nice, but it misses the point. Bugs are going to happen. When are OS designers going to recognize this and provide decent protection? One of the main purposes of an OS is to protect applications from each other, but I don't really see this happening. Why should running code inside IE allow the attacker to do anything other than access the internet or view the user's cache and cookies? Why should running code in WMF be able to do anything other than drawing?
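    The containment being asked for can at least be approximated at the application level today, by pushing risky parsing into a separate throwaway process so a bug there cannot corrupt the caller. A minimal sketch (the child's "parsing" body is a stand-in, not any real decoder):

```python
import subprocess
import sys

# Stand-in for untrusted parsing work (e.g. decoding an image): run it in a
# child process so a crash or exploit there cannot touch the parent's memory
# or the files the parent holds open.
CHILD = "import sys; sys.stdout.write(str(len(sys.stdin.buffer.read())))"

def parse_isolated(data):
    result = subprocess.run(
        [sys.executable, "-c", CHILD],
        input=data, capture_output=True, timeout=5, check=True,
    )
    return int(result.stdout)

print(parse_isolated(b"malicious-looking input"))
```

    A real sandbox would additionally drop privileges and restrict system calls; a child process alone only buys address-space separation.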
  • by yamla (136560) <chris@@@hypocrite...org> on Thursday January 26, 2006 @12:31PM (#14568548)
    Grrrr. I have three licenses for Windows XP for my two desktop machines, neither of which even run Windows XP as their primary operating system. So my copies of Windows are licensed and up-to-date.

    It ticks me off that Microsoft won't help unregistered users of Windows. As a direct result of Microsoft not offering antispyware, etc. etc., I suffer the consequences. My computers are hit by spam. My computers are hit by viruses. But my computers are fully licensed!

    Microsoft seems to be missing the point. By providing full updates, antivirus, and antispyware even to unregistered users, they would be directly benefitting registered, licensed customers.
  • by Anonymous Coward on Thursday January 26, 2006 @12:33PM (#14568576)
    The only way he could get 79 vulnerabilities for OpenBSD would be to count the applications in the ports tree. And that's not "counting the whole stack", that's more like counting all the 3rd-party software available for installation.

    So let's do that for Windows: every application that can be installed. All the thousands of them. And with the glory that is cygwin, that will pretty much include everything in the OpenBSD ports tree too!

    No, this doesn't make sense.

    Let's use another criterion: effort to secure, *before* deploying any apps.

    So I follow NSA/NIST/CIS/CERT/MS guidelines and procedures for installing a Windows server that I intend to expose to the Internet. I install, patch, configure, etc. This takes hours of actual effort.

    For OpenBSD I install it and plug it in. If there are any relevant errata I may patch it. This takes wall time, but about 5 minutes effort time.

    And when I'm done both, which one do I have a hope of actually being able to trust? With which one am I pretty sure I didn't miss anything?

    There's spin and there's outright lying. This fellow is crossing over into lying.

  • by drew (2081) on Thursday January 26, 2006 @01:33PM (#14569489) Homepage
    given that Windows Vista and Windows Longhorn Server are going to be the most significant releases of Windows in the last five years or so, we know that they are going to be used broadly by a large set of users for sometime--so getting it right is critical.

    At the rate they are going now, they will be the only releases of Windows in the last five years by the time they are ready.
  • I think Mr. Nash didn't understand the point i was trying to make.

    An infected illegal copy of Windows can infect LEGAL copies, and spread spyware, malware, etc. Very few people I know are aware of the existence of Windows AntiSpyware. Heck, most people are not even aware of antispyware at all.

    In other words, what Mr. Nash is saying is: If you come to us and are registered, we can protect you. But you better do it before someone with an unregistered copy of windows infects your machine.

    I wouldn't give a cent if unregistered machines were the ONLY ones affected. But they also affect registered machines, and don't stop there: They affect DNS servers, e-mail servers, and web servers -- whether they're using registered copies of Windows or not. Botnets are NOT A MYTH. They are a reality, and I'm sick and tired of getting SPAM spread by those.

    Last week one of our webservers (it was a shared host, NOT owned by us) began spreading javascript viruses which exploited the Windows() vulnerability. Have you considered that this infection could have come from an unregistered machine? If that machine had automatically downloaded (yes, for free) Windows Antispyware, we wouldn't have to worry about viruses spreading to the office's network.

    My point was being proactive, but apparently Microsoft is more interested in getting money than in providing a good product in the first place.
  • by rcw-work (30090) on Thursday January 26, 2006 @03:48PM (#14571401)
    For applications that require admin for some part of their execution, we are providing guidance to the ISVs on how to re-factor their applications so that the components that the end sees don't need the privilege and the ones that do need to can be isolated and componentized so that most users don't encounter the escalation.

    Many admins, including myself, are currently supporting third-party software they know to be designed incorrectly in this aspect, and have had no luck applying the little political leverage they have to the ISV to get it fixed.

    In my example the people who coded the application in question did not know about the difference between HKEY_LOCAL_MACHINE and HKEY_CURRENT_USER, or between %TEMP% and Program Files. There have been several major version upgrades since we first noticed this, none of which have addressed the problem.

    How can we help Microsoft help the ISV help us?
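    The fix the parent describes is simple in principle: per-user state belongs in per-user, user-writable locations (HKEY_CURRENT_USER, %APPDATA%, %TEMP%), never in the machine-wide install area (HKEY_LOCAL_MACHINE, Program Files), which standard users cannot write to. A cross-platform sketch of the same idea (the app name and helper are hypothetical):

```python
import os
import tempfile

def per_user_dirs(app_name):
    """Return (config_dir, scratch_dir) in locations a non-admin user can
    write: the user's home profile and the per-user temp area -- the rough
    analogue of HKEY_CURRENT_USER and %TEMP%, rather than
    HKEY_LOCAL_MACHINE and Program Files, which require admin rights."""
    config_dir = os.path.join(os.path.expanduser("~"), "." + app_name.lower())
    scratch_dir = os.path.join(tempfile.gettempdir(), app_name.lower())
    return config_dir, scratch_dir

cfg, tmp = per_user_dirs("ExampleApp")
print(cfg, tmp)
```

    Applications laid out this way never need the elevation prompt for ordinary use, which is exactly the re-factoring the quoted guidance asks ISVs to do.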
