Comment I'm going to be elitist (Score 1) 83

and say anyone who doesn't understand EBNF probably shouldn't be granted superuser privilege. If there are specific actions that should be permitted for trusted but unsophisticated users, set up scripts to do only those actions.
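For instance (a minimal sketch; the group name and script path are hypothetical), a sudoers entry installed via visudo can grant exactly one vetted script and nothing else:

    # Hypothetical entry: members of group 'webops' may run one vetted
    # script as root, and nothing else -- no wildcards, no shell escapes.
    %webops ALL = (root) NOPASSWD: /usr/local/sbin/restart-webapp

And the sudoers man page specifies its grammar in EBNF, which is rather the point: if you can't read that grammar, you're in no position to know how much you've actually granted.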

And I'll demonstrate my age by saying that Unix derivatives, including Linux, BSD, etc., -have a long way to go- to match VMS for a truly useful, administrator-friendly privilege model.

Submission + - Target's internal security team warned management

david.emery writes: According to this story, Target's own IA/computer-security staff raised concerns months before the attack: http://www.theverge.com/2014/2... (quoting a story in the Wall Street Journal).
But management allegedly "brushed them off."

This raises a more general question for the Slashdot community: How many of you have identified vulnerabilities in your company's or client's systems, only to be "brushed off"? And if the company took no action, did it ultimately suffer a breach?

Comment Re:*sigh* (Score 1) 312

True, but if your company's product is, for example, software, and that software company is being run by someone with a legal, financial, hardware, operations, or non-software-engineering background, the problem is much more difficult. And that's what I'm seeing. First, the engineers need to be able to think in terms of business objectives (one of the best courses I ever had was a grad course in "engineering economics"). But second, the management community (starting with the business schools) needs to figure out how to train CxOs who actually -understand the business they're in-.

For the last 30+ years, I've been in the large-scale systems business. Most, but not all, of that has been on projects for the US DoD. I've been appalled by the number of senior executives - military/government, large industry, small industry - who fundamentally don't understand software-intensive systems. As my earlier post said, their software experience is encapsulated in some small-scale programming task, rather than in large-scale software engineering. On the one hand, they expect software to perform miracles because "it's software, you can change it"; on the other hand, they refuse to invest in software. On the former, the best quote is from a former co-worker: "The software engineer is the system engineer of last resort."

I'm reminded of a system I once reviewed where they had a 'software problem'. It turned out they had a -networking problem-. They were trying to move large volumes of images over a 10BaseT Ethernet connection and wondered why they weren't getting the expected system throughput. Their Ethernet was usually well over 50% loaded and couldn't handle the data, but they expected the software to 'fix' this.
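To put rough numbers on it (illustrative figures, not the actual ones from that review):

    10BaseT raw capacity:    10 Mbit/s ~= 1.25 MB/s
    at >50% existing load:  < 5 Mbit/s ~= 0.6 MB/s left over
    one 25 MB image:        25 / 0.6   ~= 40 seconds minimum

A steady stream of large images saturates a link like that no matter how clever the code on either end is; the fix is capacity, not software.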

Comment I've worked for good engineering managers (Score 2) 312

I've had the good fortune to work for several good managers, either as direct supervisors or as senior managers, up to the Corporate VP level. That includes people in small companies, people in Fortune 500 companies, and even active-duty Army officers.

What I've observed is that the top levels of management DO NOT want to listen to what the good engineering managers try to tell them about topics like staff training and retention, schedules, or resources (e.g., hardware/capital expenditures). Instead, the CxO-level people promote those who tell them what they want to hear. It's not universal, but many of the good managers I've had were products of deliberate leadership/management training, rather than being promoted from 'nerd' to 'boss' and left to figure it out on their own. Part of that training is how to talk to the CxO level and how to make arguments in terms of the corporate business case, objectives, etc.

The only good news is that, at least in this millennium, the number of top managers/CxOs who actually know something about software has increased. They're still a minority, but you may well find a VP who understands that software isn't "that crappy stuff that always makes our systems late, so we'll 'fix' it by throwing more cheap bodies at it." (I got really tired of engineering VPs whose experience was in hardware and whose idea of software systems engineering was framed by "that FORTRAN course I took in college...")

One interesting model that was popular in the early '90s may deserve another look. Some research labs* split managerial duties, separating technical leadership from administration. Where some organizations got into trouble with that model was in not treating both classes of managers as equals. The technical leaders too often got marginalized, because the administrators were the ones who talked about the kinds of things the CEO/CFO wanted to discuss. It takes a tremendous investment at the CxO level to institute a program that recognizes and grows technical leadership as distinct from, frankly, beancounting.

* It runs in my mind that DEC's Western Research Labs was one of the organizations that implemented this approach successfully.

Comment No RSS feed? (Score 2) 213

Am I the last person in the world who uses RSS readers to browse news sites for stories I actually want to read? After all, 90% of everything is crap, and I'm looking for efficient ways to find the other 10%.

The visual clutter on that site is appalling; I thought Pogue had more taste than that.

Comment Re:Very different code (Score 1) 225

"having source code wouldn't have changed anything" Disagree. Particularly at interfaces (e.g. cross-language/cross-compiler calls, APIs to COTS products, etc), sometimes you need to see inside the product to figure out what's failing at the interface. It's nice to talk about omniscient knowledge on the part of product developers, API specifiers (who might not be the same as the API implementers), and customers/users of that API for that product. Our specification techniques are by no means rigorous enough to prevent these kinds of problems.

"Risk" means just that. It's not a guarantee of failure. Rather it's the possibility of failure, coupled with (multiplied by :-) the consequences of the failure occurring.

Here's a real-world example, not related to compiler problems per se, but relevant to your comment:

We had developed Unix RPC code that worked just fine on several systems, including the commercial product the hardware vendor was proposing. When we got to the installation for test, the code didn't work. After going up to the General Officer/SES level, and across to the vendor's senior VP, I was given a copy of the vendor's source code for their RPC library. I was able to step through their code to the actual kernel syscall and, most importantly, was able to see the failure value returned by the syscall. (The RPC library took that value and converted it into an errno value of EACCES, which didn't tell us -why- the error actually happened.)

When I saw the return value of the syscall, which indicated that a non-privileged (non-root) process was trying to do something that required root privilege, I was able to go back to the vendor and ask, "Did you implement extra security features for RPC?" After some hemming and hawing, they came back and said, "Yes, you have to be root to install an RPC service." More importantly, that security feature WAS NOT DOCUMENTED. (I can understand the rationale for saying that only privileged/root accounts can install public RPC services, so this was not an unreasonable restriction, but it was a change from the standard practice in Unix systems of the time.)

This is an example of (a) a situation where the interface failed even though all of the -code- was correct, and (b) a problem, -missing documentation-, that I could only find by stepping through the vendor's library.
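For anyone who hasn't used classic ONC/Sun RPC, here's a minimal sketch of the call path involved (my reconstruction, not the vendor's code; the program/version numbers and dispatch stub are hypothetical):

    /* Minimal ONC RPC server registration, sketched from memory. */
    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <netinet/in.h>   /* IPPROTO_UDP */
    #include <rpc/rpc.h>

    #define PROGNUM 0x20000099UL  /* placeholder program number */
    #define VERSNUM 1UL

    static void dispatch(struct svc_req *rqst, SVCXPRT *xprt)
    {
        (void)rqst; (void)xprt;  /* service procedures would go here */
    }

    int main(void)
    {
        SVCXPRT *xprt = svcudp_create(RPC_ANYSOCK);
        if (xprt == NULL) {
            fprintf(stderr, "svcudp_create failed\n");
            return 1;
        }

        /* svc_register() registers the service with the portmapper via
         * pmap_set(). On the vendor's system this path required root; a
         * non-root caller saw only a collapsed EACCES, with the real
         * syscall result hidden inside the library. */
        if (!svc_register(xprt, PROGNUM, VERSNUM, dispatch, IPPROTO_UDP)) {
            fprintf(stderr, "svc_register failed: %s\n", strerror(errno));
            return 1;
        }

        svc_run();  /* never returns */
        return 0;
    }

Stepping -into- svc_register() with the vendor's source was what exposed the real syscall return; without the source, all we had was the generic error.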
