
Comment Re:Vague criticism (Score 1) 361

This, and the meta point: the fact that the poster of this "Ask Slashdot" left the conversation WITHOUT having an answer to those questions is itself indicative of poor communication skills. A good communicator will convey that sort of information regardless of how poorly his report listens; a merely average manager will convey merely average general principles, and it's up to the report to pull out more information. (And a poor manager will give non-committal evals, then fire somebody without warning.)

I'm reading the OP as "my manager told me I had poor communication skills. I didn't understand what he meant, so I nodded my head, said I'd work on it, and walked out." Thus proving the point. (Though OP gets some points for at least asking somebody, a.k.a. Slashdot. It's the wrong somebody - his manager or his peers would be better choices - but Slashdot is better than nothing.) If the manager can't explain to your satisfaction, go to the next level up the chain and say "I got this feedback from my manager, we talked about it, and I didn't understand - can you help me understand?" (But no further, and don't blame your manager.)

I'm reading between the lines some here, but what I'm seeing is more conflict avoidance than anything else ... OP is more comfortable asking online / anonymously than face-to-face. I'm an introvert; I feel that too. I've just spent many years breaking that habit, because I've realized I'm much more effective face-to-face (read: we all get what we want faster) and I've found online conversations less effective (written conversation has a tendency to include every argument, and ends up coming across as very antagonistic).

But let me put a positive spin on things: poor communication is expected of an average, very junior person. This managerial feedback should be viewed as "improve this to get promoted", not "improve this or get fired". (Well, except at a start-up, where being merely average is cause for firing.)

Comment Re:At what scope of time or size of output data? (Score 3) 240

VMs do have good sources of entropy ... while a server indeed has no audio / fan / keyboard / mouse inputs (whether physical or virtual), a server most certainly does have a clock (several clocks: on x86, TSC + APIC + HPET). You can't use clock skew (as low-res clocks are implemented in terms of high-res clocks), but you can still use the absolute value at each interrupt (and servers take a lot of NIC interrupts) for a few bits of entropy. Time is a pretty good entropy source, even in VMs: non-jittery time is just too expensive to emulate; the only people who would try are the blue-pill hide-that-you-are-in-a-VM security researchers.
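A toy sketch of the idea (illustrative only, NOT a production RNG - the SHA-256 pool and the event loop here are my stand-ins for a kernel mixing interrupt arrival times into its entropy pool):

```python
# Illustrative sketch: harvest a high-resolution timestamp per "event"
# and fold it into a hash-based pool, roughly analogous to what kernels
# do with interrupt arrival times. Only the low-order bits are hard to
# predict; the hash whitens whatever unpredictability is present.
import hashlib
import time

class TimestampEntropyPool:
    def __init__(self):
        self._pool = b"\x00" * 32

    def add_event(self):
        # Mix the full 64-bit timestamp; the hash absorbs it all.
        stamp = time.perf_counter_ns().to_bytes(8, "little")
        self._pool = hashlib.sha256(self._pool + stamp).digest()

    def read(self, n):
        # Stretch the pool state into n output bytes (sketch only).
        out = hashlib.sha256(self._pool + b"out").digest()
        return out[:n]

pool = TimestampEntropyPool()
for _ in range(1000):   # stand-in for 1000 NIC interrupts
    pool.add_event()
print(pool.read(16).hex())
```

The point of the sketch is only that per-interrupt timestamps accumulate: each event contributes a few unpredictable bits, and 1000 interrupts is plenty to seed a pool.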

The real security concern with VMs is duplication ... if you clone a bunch of VMs that all start with the same entropy pool, and each generates an SSL cert after cloning, any one of those certs is easily predicted from another. (Feeling good about your EC2 instances, eh?) This isn't a new concern - cloning via Ghost has the same problem - but it's easier to get yourself into trouble with virtualization.
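The cloning hazard is easy to demonstrate with a toy model, where random.Random stands in for the guest's PRNG state captured in the VM image:

```python
# Sketch of the duplication hazard: two "VMs" forked from the same
# entropy-pool state derive identical "keys". random.Random is a
# stand-in for the guest PRNG state frozen into the cloned image.
import random

snapshot_seed = 1234            # entropy-pool state baked into the image
clone_a = random.Random(snapshot_seed)
clone_b = random.Random(snapshot_seed)

key_a = clone_a.getrandbits(256)
key_b = clone_b.getrandbits(256)
print(key_a == key_b)           # True: both clones "generate" the same key
```

The fix, of course, is to reseed each clone from fresh entropy (interrupt timing, a host-provided entropy source) before generating anything secret.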

Comment Re:So answer the question (Score 1) 252

If you are a developer and "badgering" to promote ideas, then you aren't doing it right

Teams I see where the lead developer promotes ideas to the team and the manager supports those ideas end up being conspicuously stronger ... the teams I see where the manager tries to lead tend to result in everything looking good until the one key leader/manager departs and leaves a team unable to direct itself and too timid to interact with the outside world. Which actually indicates a failure in mentorship and inferior performance on the part of that "superstar" leader/manager.

Comment Re:The same advice in every profession (Score 5, Insightful) 473

Same message but with a more positive spin. (And yes, I'm in that $200K+ category.)

Do it better
Make sure your code works right, on the first try. When you have to pick between a band-aid and a permanent fix, choose the permanent fix - and deliver it just as fast as the person proposing the band-aid - because you know the system well and can deliver a fix faster than the outsider who is trying to be conservative. Design defensively and plan for debugging - make sure that when something goes wrong, it's very obvious where the problem started. Don't fear bugs in your code - optimize the process of them being assigned back to you and fixed.
Do it faster
When you know in the back of your head that something you wrote isn't up to par, invest the work immediately. Don't wait for somebody to tell you to do it better. Be a generalist: if you are waiting for another team to deliver a feature, learn how to add it to their area, add it, and unblock yourself - what you are doing will get done 10x faster that way. (It's a lot more work for you, but see: $200K+).
Work longer hours
Longer productive hours need not mean longer total hours. Don't goof off during the workday - no reading Slashdot, no kitten videos on YouTube, etc.; do all that after you leave work. Most people only work 2-4 productive hours a day, and have a raft of excuses (meetings, interruptions...). If what your company does cannot keep you interested in your job for 6+ hours a day, you aren't going to be a $200K+ developer working for them.
Bullshit. Kiss ass
Invest in and maintain social relationships. That group you depend on for your new feature? It takes them 2 weeks to deliver when you are a person with a face they see every week, but 2 months to deliver when you are just an e-mail address. And talking to them frequently helps ensure that dependency works right when it arrives, instead of being a technically-correct-but-useless mess. You say it's your manager's job ... I say your manager would be delighted to see you take care of yourself so he can focus on the people less qualified than you who need his help to be productive.
Draw a firm line when everyone is so dependent on you that they can't survive without you
Establish a reputation for reliability; when the company needs an issue fixed right and cannot afford to have an average person fix, re-fix, and re-fix again, be the person who gets called. This requires working in an area where some problems must be fixed "at any cost" (e.g. a $10M contract is on the line) - hence why most $200K+ developers are at big companies where mistakes in certain areas are disproportionately expensive.
Keep your friends close and your enemies closer. Sabotage the competition. Don't ever settle. Sue
This requires some mental jujitsu: you don't have enemies. Your job is to make them succeed just as much as your job is to succeed. When somebody tries to get in your way, don't focus on crushing them. Get around them, co-opt them, give them chances to succeed on your coattails (they may fail due to the incompetence you suspected, but at least you tried). This is a particularly hard skill, found only in CxOs and 95%+ salary range professionals. If you think "win-win" is just a figure of speech, then you aren't mentally ready to be in that crowd.
Keep your heart on your sleeve at home. Nail people to the cross in business.
Your co-workers want problems solved, not drama. (They get plenty of drama from their SOs at home / their sports team / their WoW clan / Glee / Game of Thrones / whatever.) Drama is a distraction from your job; see the above point about fully utilizing your hours of work. This doesn't mean you cannot have drama - only that it cannot distract from your job.
Cheat. Lie when you can get away with it. Bend the rules until they break
More importantly, know which rules to bend / lie / cheat and which to respect. When a new feature shows up on your plate, dig deeply into it to figure out which parts matter and throw away the rest - and convince whomever asked for that feature that you are giving them everything they need. Know when to break a promise because the world has changed and what was important 6 months ago no longer matters - and take the initiative both in breaking the promise and (don't forget!) educating / convincing everybody else the new thing is more important. Which also implies that being a $200K+ developer means being able to convince everybody around you that your path is the right one to follow - which takes charisma and even more effort.

Comment Re:My goodness (Score 2) 417

It's hiding in plain sight, in the part of the Fifth Amendment most armchair lawyers don't bother reading:

No person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a Grand Jury, except in cases arising in the land or naval forces, or in the Militia, when in actual service in time of War or public danger; nor shall any person be subject for the same offense to be twice put in jeopardy of life or limb; nor shall be compelled in any criminal case to be a witness against himself, nor be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation.

Due process of law HAS been observed. The current state of law is that if the government can prove you knew something in the past, they can compel you to disclose what you knew. In this case, if the government can prove you used to know the password (which, in this case, they could not originally but could after the FBI decrypted one drive), the government can compel you to reveal the password.

The Fifth Amendment does not protect the password (it's just a sequence of characters); the amendment protects the "testimonial" aspect that you knew that particular sequence of characters was significant. Once that fact is entered into evidence through some other means, the Fifth Amendment's "due process" requirements have been satisfied.

Comment Re:Forgive my ignorance... (Score 1) 322

Does the 5th amendment right to avoid self-incrimination apply only to the particular charges being brought in a given case, or does it cover any statement that could be incriminating, even if it were in a different proceeding, or if the record from Case A were to be used as evidence in Case B?

No, it applies to all cases ... if you state something in case A, it can be used in case B. Which is the point of the protection: if you are on trial for jaywalking, and can "prove" that you were not jaywalking because instead you were robbing a bank across town, the law cannot compel you to state where you were (thereby confessing to some other crime). You can volunteer it / waive your right (and be an idiot), but the law cannot force you to confess.

If somebody were being charged for one crime that probably left evidence on the HDD (kiddie porn, say), would the fact that they know there is evidence of CC-skimming (but, unlike the kiddie porn, the feds have no circumstantial evidence or other grounds for belief) justify a 5th-amendment refusal to decrypt the volume? Would the other potentially-incriminating stuff be irrelevant because it isn't among the charges (even if the court record could be used as evidence to bring future charges)? Would the suspect be compelled to divulge the key, but the prosecution only have access to material relevant to the charges being filed, with some 3rd-party forensics person 'firewalling' to exclude all irrelevant material?

I didn't see the warrant specifically mentioned, but "normally" the search warrant has to specify exactly what is being searched, and is thus ONLY valid for what is being searched. For example, the search warrant would say "the file named kiddie_porn.jpeg", and thus only that file (and not ccfraud.txt) becomes evidence. That said, warrants can also be broad - the hard drives themselves were presumably seized because the search warrant said "any computers and electronic storage devices located at 123 Perpetrator Street". Fishing expedition warrants saying "all files showing evidence of kiddie porn" tend to get thrown out, but a warrant saying "all files under C:\kiddie_porn" backed up by evidence (a P2P log) showing that files in fact were placed within C:\kiddie_porn is probably valid - and a warrant backed up by a P2P log is almost certainly what the search warrant this judge is ruling about says.

Not being a lawyer, I can't tell you what happens if the person examining the encrypted contents happens to see evidence of some other crime. But the physical analogy is this: if the police show up with a warrant to search your house for "computers", they are obviously entitled to seize all computers. And if they walk through your house and see illegal drugs sitting on the table, that's admissible evidence ("in plain sight"). (Interestingly, the drugs cannot be seized, because the warrant does not specify "drugs". But what happens is the cop calls the judge and says "I'm executing warrant A for computers and see drugs on the table, can I get warrant B to seize the drugs?" and the judge faxes over a warrant right away.) But they are not allowed to rifle through all your drawers and closets - drugs found there are not admissible evidence because they are not "in plain sight". (Unless you give the police permission - and they WILL ask. Which is why lawyers always advise saying "I do not consent" - you cannot stop the search / seizure, but not consenting makes any evidence found without a warrant inadmissible and the police potentially liable for misconduct.) It's difficult to guess how courts would apply this standard to searching a HDD, but they would do it by starting with the physical analogy and figuring out how it applies to electronics.

What's happening in this case is that the prosecution knows files with kiddie porn names were downloaded. But they still cannot prove the files contain actual kiddie porn. (Maybe this guy is sick and thinks naming his legal porn files with kiddie porn names is funny). So the prosecutor was hoping to compel this guy to hand over the encrypted files (whose names they knew), under a warrant that compels him to be truthful about their contents (by having a neutral 3rd party do the work). The judge decided that the prosecutor does not have enough evidence to prove this guy actually knew what was in the files (maybe he operates a repository with files stored on an encrypted disk, but does not himself have access to the files). The judge also implied that if the prosecutors DID have evidence of what was in the files (maybe 1 or 2 got left on unencrypted drives by the P2P program as intermediate files and the filenames matched?), he probably would authorize the warrant and require this guy to decrypt his drives.

Comment Re:"Stole" or "confiscated"? (Score 0) 812

No, you really would be better off "voluntarily surrendering" your toothpaste to the TSA agent.

Attempting to pass through security with a prohibited item is a crime - think about it: the same laws apply to passing through security with toothpaste ("might be an explosive in disguise") as apply to passing through security with a stick of dynamite, and they're definitely going to arrest you (and not "voluntarily confiscate") for carrying dynamite. Airport security isn't a game where getting caught just means going to the back of the line. You get one chance: once the agent starts looking you over, you either pass or get arrested. You do have a choice: "voluntarily surrender" the stupid stuff, or refuse and be arrested for taking a prohibited item through security. That toothpaste is still "yours" despite your arrest - it's just locked up in an evidence locker until you convince a judge/jury that calling toothpaste a prohibited item is stupid ("see? no explosion, your honor"). "Voluntarily surrendering" is indeed a lawyer's trick - but it's a lawyer's trick the TSA is employing for YOUR benefit, because their only other option is to arrest you.

(Which does suggest a really entertaining protest. Get a hundred people to show up at the airport with a tube of toothpaste, go through security, and all refuse to surrender that tube of toothpaste. Watch the TSA have to deal with a hundred arrestees - I doubt they have the holding space, and press headlines "TSA ARRESTS 100 OVER TOOTHPASTE" would be hilarious. And much more likely to affect the law than whining on Slashdot.)

Comment Re:Not exactly a revelation (Score 1) 417

There's a nice spin in there. At any given time, all important apps will be present in all markets (or at least the top three markets). What really happens here is that markets are actually forced to compete with each other a) for developers b) for users (markets that demanded exclusivity would simply die, even if anyone were stupid enough to pull something like that). This is good news for everyone, and the antithesis of everything Apple stands for. No matter how much SJ tries to spin it, fragmentation is not a problem.

It's not spin - this is because Steve Jobs knows what he is talking about through experience. If the market were absolutely perfect, then indeed all important apps would be present in perfect competition with each other on each platform, and the best apps would come out winners. The problem is that this "perfect markets" idea is universally known to be an inaccurate approximation (the current favorite for adding accuracy is Search Theory). And search theory justifies exactly what Steve Jobs says, that it is easier to find things in an integrated market than in a wildly fragmented market.

Apple is not - and has never been - about providing the best technology. Apple is about providing the best customer experience (to separate consumers from their money as painlessly as possible); sometimes that requires brilliant technological innovation, sometimes that requires competition, and sometimes that requires enough control to make everybody do things one way even when the alternative may be technologically superior.

Comment Re:Kernel shared memory (Score 2, Interesting) 129

Disclaimer: I'm a VMware engineer, but I do like your blog post. It's one of the more accessible descriptions of CPU and memory overcommit that I have seen.

The SnowFlock approach and VMware's approach end up making slightly different assumptions that make each one's techniques inapplicable to the other. In a cluster, it is advantageous to have one root VM because startup costs outweigh customization overhead; in a datacenter, each VM is different enough that the customization overhead outweighs the cost of starting a whole new VM. Particularly with Windows: a Windows VM essentially needs to be rebooted to be customized (and thus the memory server stops being useful), whereas Linux can more easily be customized on-the-fly. Different niches of the market.

The second big difference is architectural. VMware handles more in the virtual machine monitor; KVM and Xen use simpler virtual machine monitors that offload the complex tasks to a parent partition. This means that for VMware, each additional VM instance takes ~100MB of hypervisor overhead - small relative to non-idle VMs, but large relative to idle VMs. It's purely an engineering tradeoff: a design like VMware's vmm will always be (a little bit) quicker per-VM; a design like KVM/Xen's vmm will always scale (a little bit) better with idle VMs.

These combine to make it easy to show KVM/Xen hypervisors more deeply overcommitted than VMware hypervisors by using only idle Linux VMs. VMware doesn't care about such numbers, because the difference disappears or favors VMware as load increases. If GridCentric has found a business for deeply overcommitted VMs, more power to you!
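To put rough numbers on the per-VM overhead point above (all figures here are illustrative assumptions of mine, not VMware measurements):

```python
# Back-of-the-envelope sketch of why fixed per-VM hypervisor overhead
# dominates for idle VMs but vanishes into the noise for busy VMs.
# All constants are illustrative assumptions, not measured values.
HOST_RAM_MB = 64 * 1024       # a hypothetical 64 GB host
PER_VM_OVERHEAD_MB = 100      # assumed monitor overhead per VM
IDLE_VM_RESIDENT_MB = 50      # assumed footprint of an idle, shared guest
BUSY_VM_RESIDENT_MB = 4096    # assumed working set of an active guest

def max_vms(per_vm_mb, overhead_mb=PER_VM_OVERHEAD_MB):
    """How many VMs of a given footprint fit in host RAM."""
    return HOST_RAM_MB // (per_vm_mb + overhead_mb)

print(max_vms(IDLE_VM_RESIDENT_MB))  # idle: overhead is 2/3 of each VM's cost
print(max_vms(BUSY_VM_RESIDENT_MB))  # busy: overhead is ~2% noise
```

With these made-up numbers, the overhead caps idle-VM density hard (it is twice the guest footprint), while for busy VMs it barely registers - which is exactly why idle-only benchmarks flatter the low-overhead design.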

Comment Re:Putting vulnerabilities in escrow? (Score 1) 134

It's a really good idea.

Part of the idea would have to be having a REPUTABLE escrow service disinterested in publicity - a service that can work with both the vendor and the security researcher and balance the competing interests.

Every security researcher wants to maximize the severity rating of the bug, an instantaneous commitment to a fix timeline, and an absurdly tight deadline (expecting the vendor to drop everything to analyze the bug, fix it perfectly on the first try, and release immediately). Responsible security researchers know that instantaneous is not going to happen, so they are willing to wait a few months.

Every vendor wants to minimize the severity rating of a bug, and to push out the fix as long as possible. Delaying is less about saving face and more about saving money: regression testing eventually becomes free (next major release) or at least cheaper (batch several security updates and regression-test simultaneously). I know my employer prefers batching, as does my web browser's vendor (Mozilla, for Firefox) - non-critical non-public vulnerabilities get queued and are only fixed with the next critical or public vulnerability or other minor update.

So a vulnerability escrow service needs to mediate these two competing interests. They need to keep pressure on for a deadline, but also need to be reasonable about the deadline in the first place (60 days sounds pretty good) and be flexible enough to move it back if the vendor can demonstrate good-faith efforts to fix that, for reasons outside the vendor's control, won't be able to make the deadline. (Example: the fix breaks Adobe, and they now need another 60-day window to get Adobe to release an update.) For a vendor, every deadline is too short, but for a security researcher, every deadline is too long; only an escrow service with a serious reputation for integrity and serious clout will be able to force both sides to accept a compromise, especially when a security researcher who doesn't like the compromise can so easily throw an adolescent temper-tantrum and go public prematurely.

Comment Re:60 days = upper bound, not average (Score 1) 134

Windows is an OS kernel, a very large set of system libraries, plus a few hundred applications (everything from Calculator to Internet Explorer). Linux source is just the kernel. If you want a real comparison, compare a Linux distro (say, Ubuntu) to Windows. Wikipedia already did it for you.

XP is 40M LOC, its contemporary Debian 3.0 is 104M LOC. I don't have a source for the size of the Windows kernel source code, but Windows 7's compiled kernel is ~5.4MB; Linux's compiled (core) kernel tends to run about twice that.

Most Windows (and for that matter, Linux) security vulnerabilities are not in the kernel.

Comment Re:The bad guys thank you Tavis. (Score 1) 497

Well, you need to be faster. Much faster. As fast as open-source software. Don't say you can't do it: we can

If this had been reported in open-source software, there wouldn't even be a fix, just a snarky e-mail (about as snarky as your post, actually) saying this was fixed four years ago and telling the user to upgrade. And woohoo, the latest (open-source) version is free! - when you don't count your time to do the upgrade.

Open-source software doesn't support 9-year-old codebases; most open-source projects (core developers) only support top-of-trunk, and even most open-source vendors (read: those who sell support contracts) only go 3-5 years out.

I've interacted with Microsoft security before. They are quite serious about fixing things, they have standards for what gets fixed on what timeline and they really do follow them, and get back in a REASONABLE amount of time (usually, ~1 week, not 2.5 business days). Generally, they ask whether a bug is being exploited in the wild. If it is, they react fast; if not, they take their time (a thorough investigation, not a rushed investigation), and not the refusal you naively claim.

The problem in the parent's logic (and that of many other self-styled security experts) is assuming that their personal security issue is the single most important issue on the planet and applying scorched-earth tactics to escalate its priority - a sign of megalomania, not of responsible security research. Is a not-in-the-wild exploit more important than an in-the-wild exploit? Is a not-in-the-wild exploit more important than Joe's long-awaited vacation with his kids? Is a not-in-the-wild exploit worth risking breakage due to an unexpected conflict? Your personal answer to all these may be "yes"; it is plain arrogance to force that answer upon everyone else. That's the difference between responsible disclosure and (this Google idiot's) irresponsible disclosure.

Comment Re:Pfff... (Score 4, Insightful) 1213

Every time someone talks about how great XP is working, I have this odd compulsion to point out the Linux equivalent.

If you ran Linux systems that old, you would be using a 2.4.18 kernel (remember LinuxThreads?). You would be using OSS, because ALSA was still incomplete and PulseAudio hadn't come around yet. Your system's compiler would be gcc-2.95, your Python implementation would be 1.5.x and run none of today's code, and you would still be on an XFree86 server that doesn't support any graphics card made after ~2004. Your web browser would be Mozilla, because Firefox hadn't come around yet (and today's Firefox doesn't support kernels that old). Your OpenSSL libraries would have started at version 0.9.6b, and been patched roughly twice a year since release.

The odd thing is, were this Linux you would be flamed for trying to get modern things running with such old versions. But as this is Windows, you feel entitled to complain about having to re-learn something new and brag about the "effort" you save.

As somebody who programs for both Linux and Windows for a living - your "saved effort" comes at a significant cost to me. It is increasingly hard to write Windows software that works on both XP and Win7; every new feature has to be written twice, once using the right Vista+ API and once to degrade gracefully on XP. Linux is marginally better - there's a new trendy library-of-choice every few years, but at least old ones disappear before too long. Hardware tends to be less than 5 years old, Linux installs tend to be less than 5 years old; yet tech-savvy XP users somehow feel entitled to stay with a 9-year-old OS. Most people don't keep cars that long; why expect an operating system to last?

Comment Re:Nothing to see here.... (Score 3, Informative) 252

I should perhaps note that I do implement low-level libraries for an extremely reputable company as a day job; I'm familiar with low-level lock implementations, both in the kernel and in userspace, on Linux, Windows, and MacOS, and with exactly how those implementations balance spinning versus blocking.

The Linux kernel's preference for spinlocks dates from years ago, when the whole kernel ran under the BKL and was non-preemptable anyway, so you couldn't use blocking locks. When the BKL was removed, all locks were made spinlocks to maintain correctness (and the -rt patchset started up, doing a conversion). The default implementation (still in use today by anything except the -rt patchset!) disables interrupts while any spinlock is held, and thus assumes the only thing holding the lock is another core.

In contrast, Solaris and Windows (and I think MacOSX, though I would have to check my references) use a mix of spinlocks and adaptive locks - spinlocks for use within interrupt handlers, and adaptive locks for everywhere else. Good pthread implementations (glibc included) use adaptive locks - which means the pthread implementation this paper declared too slow to use ALREADY spins ~1000 cycles before blocking. The canonical rule here is that an adaptive lock spins for the same amount of time it would take for a block/wakeup cycle, then blocks; this is guaranteed to be within a factor of 2 of optimal in all cases, which is the best overall lower bound you can possibly get. (Yes, Linux kernel is behind the times; they are slowly getting better, and when eventually the -rt patchset gets merged, Linux will have finally caught up. Sorry, Linux fanboys.)

Spinning versus blocking is a tradeoff. The research paper manages to extract all the gains from the "spin forever" side of the tradeoff without ever admitting the drawbacks (one full CPU core wasted).
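The canonical spin-then-block rule above can be sketched as a toy adaptive lock (the spin budget here is a made-up constant standing in for a measured block/wakeup cost, and threading.Lock stands in for a futex-style primitive):

```python
# Sketch of an adaptive mutex: spin for roughly the cost of one
# block/wakeup round trip, then fall back to a blocking acquire.
# SPIN_NS is an assumed constant; a real implementation measures it.
import threading
import time

class AdaptiveLock:
    SPIN_NS = 20_000  # assumed cost of one block/wakeup cycle

    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self):
        deadline = time.perf_counter_ns() + self.SPIN_NS
        # Spin phase: cheap non-blocking attempts while the holder is
        # (hopefully) still running on another core.
        while time.perf_counter_ns() < deadline:
            if self._lock.acquire(blocking=False):
                return
        # Block phase: give up the CPU and let the scheduler wake us.
        self._lock.acquire()

    def release(self):
        self._lock.release()

lock = AdaptiveLock()
counter = 0

def work():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000
```

Spending the spin budget equal to the block/wakeup cost before blocking is what yields the factor-of-2 bound: whichever choice turns out to have been right, you paid at most twice its cost.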

Comment Re:Nothing to see here.... (Score 5, Interesting) 252

And now I've read their paper. Quick summary: (1) they do indeed speculatively pre-allocate heap blocks, and cache pre-allocated blocks per client thread. (2) They run free() asynchronously, and batch up blocks of ~200 frees for bulk processing. (3) They busy-loop the malloc() thread, because pthread semaphore wakeups are too slow for them to see a performance gain (section 2.E.2-3).

In other words, it's a cute trick for making one thread go faster, at the expense of burning 100% of another core by busy-looping. If you are on a laptop, congrats, your battery life just went from 4 hours to 1 hour. On a server, your CPU utilization just went up by 1 core per process using this library. This trick absolutely cannot be used in real life - it's useful only when the operating system runs exactly one process, a scenario that occurs only in research papers. Idea (2) is interesting (though not innovative); idea (3) makes this whole proposal a non-starter for anything except an academic paper.
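A toy model of ideas (1) and (2) - per-thread caching plus batched frees - deliberately using a blocking queue instead of the paper's busy-loop, since the busy-loop is precisely the part that burns a core (all names and sizes here are my own illustration, not the paper's code):

```python
# Sketch: a per-thread cache of pre-allocated blocks, with frees
# batched ~200 at a time and handed off to a background thread to
# recycle. A blocking queue replaces the paper's busy-loop on purpose.
import queue
import threading

BATCH = 200
free_batches = queue.Queue()
recycled = []

def recycler():
    # Background "allocator" thread: drains whole batches at once,
    # so the hand-off cost is paid once per ~200 frees, not per free.
    while True:
        batch = free_batches.get()
        if batch is None:
            return
        recycled.extend(batch)

class ThreadCache:
    def __init__(self):
        self._cache = [bytearray(64) for _ in range(BATCH)]  # pre-allocated
        self._pending_free = []

    def alloc(self):
        return self._cache.pop() if self._cache else bytearray(64)

    def free(self, block):
        self._pending_free.append(block)
        if len(self._pending_free) >= BATCH:
            free_batches.put(self._pending_free)  # one hand-off per batch
            self._pending_free = []

t = threading.Thread(target=recycler)
t.start()
cache = ThreadCache()
for _ in range(1000):
    cache.free(cache.alloc())
free_batches.put(None)   # shut the recycler down
t.join()
print(len(recycled))  # 1000
```

The batching in (2) survives the swap just fine; it is only (3), replacing the blocking wakeup with a spin, that buys the last bit of latency at the cost of a whole core.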
