
Comment Re:Colo? (Score 3, Interesting) 285

One other note: this works as long as you have a semi-private place at work where you can put the drive. It could be a desk drawer or something else. I don't see any reason why it needs to be stored at a site you "own and control". Just put heavy AES encryption on the backups, as I do, in case the drive falls into other hands. Then your only real risk is the financial loss of the disk itself. I know several other people at my workplace who do the same thing. And if you want heavier security and don't mind paying for it and taking extra time, a safe deposit box at a local bank is a good fallback, and certainly much cheaper than a colo. You'd have to have pretty deep pockets for colo space and the bandwidth to back up to and from that location, making it impractical for most people.
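If you go the encryption route, something like this minimal sketch is all it takes (Python with the third-party cryptography package; the file names and key handling are placeholders, and a real tool would encrypt in a streaming fashion rather than reading the whole image into memory):

# Sketch: encrypt a backup image with AES-256-GCM before it leaves the house.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # keep this somewhere safe, NOT on the drive
nonce = os.urandom(12)                     # 96-bit nonce, unique per encryption

with open("backup.img", "rb") as f:        # toy example: reads the whole file
    plaintext = f.read()

ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

with open("backup.img.enc", "wb") as f:
    f.write(nonce + ciphertext)            # prepend the nonce so you can decrypt later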

Comment Re:Colo? (Score 5, Interesting) 285

A colocation center? Do the initial backup locally then use something to replicate changes in the future?

Too painful and expensive. This can be made much simpler. I keep two sets of backups: an internal 2 TB hard drive for local backups, and a pair of 1 TB external drives for offsite backups. Every Monday, I unplug the external drive at my house as I head out the door for work. At work, I put it into my locker and retrieve the other drive, which I bring home with me when I leave for the day. When I get home, I plug it into the waiting USB and power cables, and presto: it's online and ready for backup! My software (I use ShadowProtect Desktop) does a full backup of the machine every Sunday night, so on Monday morning it is always ready for the swap again. It's a very quick and painless way to have offsite backups without spending a fortune on comparatively slow Internet bandwidth.
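If your backup software can't follow the swap on its own, a tiny script can pick whichever drive happens to be plugged in. This is just a sketch, and the mount points are hypothetical:

# Sketch: find whichever of the two rotation drives is currently mounted
# and use it as the backup target.
import os

CANDIDATES = ["/mnt/offsite-a", "/mnt/offsite-b"]

def current_target():
    for mount in CANDIDATES:
        if os.path.ismount(mount):
            return mount
    raise RuntimeError("no offsite drive plugged in; skipping backup")

print("Backing up to", current_target())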

Comment Re:Probably Obama. Or the Tea Party. (Score 5, Insightful) 569

The communications market is so "deregulated" that monopolism takes over, with deliberate barriers to entry placed by noncompete agreements and dirty tactics. And yet so many people think anarcho-libertarian, "laissez faire" deregulation will somehow make their lives better in every aspect.

That's not true at all. Try opening up a new cable company in your local town, or opening a new power plant and running new wires to all the houses. Oh, that's right, you can't, because the government has decided that it would be inefficient to have more than one set of power lines, or water lines, or cable lines, or telephone lines, etc., going into a single home. So they allow one provider to service the whole town and be a government-sanctioned monopoly. That's hardly "deregulation"... in fact, it's the epitome of the government regulating and controlling everything.

Comment That's overly simplistic - population density key (Score 0) 569

It's always a bad idea to compare the US to Europe or Asia; these comparisons always end up being overly simplistic. The US is a VERY decentralized nation in terms of population, with a far lower population density than either. Compare Houston to Tokyo, for example... Tokyo is tightly packed and Houston sprawls everywhere. It's much easier to bring cheap, high-speed broadband to a bunch of tiny, densely packed apartments than it is to bring it to every country lane. Asian and European cities are much more like LANs, and US cities are like WANs, to put it another way. If you want LAN speeds in the much less densely populated US, it is going to be very costly.

Comment Re:Gov't project (Score 1) 516

They're built by lowest bidders Serco and QSS Inc. Neither is an American company. If they had decided to hire Americans to do this job, they would have had a very large pool of qualified and skilled workers from which to choose.

I disagree. I've worked with American, Indian, and Chinese developers, and you know what the number one issue is? It's not lack of qualification, it's lack of testing! Most developers HATE testing no matter which country they are from, and therefore don't do it. And you know what kind of testing they really, really, really hate? Load testing! It is especially hated because you can't just generate large loads from your laptop... you actually have to set up dedicated load-generation agents to really simulate a large load. Setting up the load testing environment takes quite a bit more effort than most other kinds of testing, such as unit testing. So it gets skipped all the time, and bites project after project. And I guarantee you load testing was never done on this site, and it probably would have been skipped even if Americans were doing it, unless they were especially conscientious and hard-working developers, which most aren't.
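Even a toy load generator shows the shape of it. The real thing needs those dedicated distributed agents, but here's a sketch (the URL and the numbers are made up):

# Toy load generator: hit an endpoint with N concurrent workers and
# report success rate and average latency. Only a sketch; a real load
# test needs distributed agents, ramp-up, and percentile reporting.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://staging.example.com/healthcheck"  # hypothetical endpoint
WORKERS, REQUESTS = 50, 1000

def hit(_):
    start = time.time()
    try:
        urllib.request.urlopen(URL, timeout=10).read()
        return time.time() - start
    except Exception:
        return None  # a real harness would classify failures

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(hit, range(REQUESTS)))

ok = [r for r in results if r is not None]
avg = sum(ok) / len(ok) if ok else float("nan")
print(f"{len(ok)}/{REQUESTS} succeeded, avg latency {avg:.3f}s")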

Comment Re:I can think of one that Steve Jobs disagreed wi (Score 1) 598

I'd be interested to know what line of work you do, programming-wise. My experience tells me that a lot of the programming being done today is meant to be powerful and meant to be built quickly. Running quickly and with a low tolerance for faults is a little less important, because very few things are mission critical. While anathema to the academic, it demands a certain skill set: the ability to very quickly assimilate new, arbitrary knowledge about libraries, software, and code that the programmer hasn't seen before. The result is a fragile sort of knowledge that often lacks formality and granularity but is sufficient to accomplish a task very quickly.

There is definitely some truth to what you say. I started out as a programmer in 2007, but after my first programming job I quit and went into infrastructure/systems engineering, precisely because of what you just described. I got into computers and programming in the first place out of a desire to cultivate a deep understanding of the computer and how it worked, and I discovered very quickly that modern programming is all about the latest "shiny new framework" and slapping something together as quickly as you can (the politically correct term for this is, of course, agile). That's all well and good, and there is a definite place for it, because speed to market is very important in a competitive environment. And a lot of people really seem to enjoy that style of rapid development at the expense of truly understanding what is going on. But it wasn't for me.

That said, there is also truth to what the grandparent said when he posted this:

I wasn't saying a programmer should write everything from scratch every day. But if you don't know how to, you're SOL and at everybody else's mercy when something goes wrong. You're costing your company money. Because things inevitably go wrong.

Those people with the "fragile sort of knowledge" are at everyone's mercy when things go wrong. They literally have no clue where to go next or how to troubleshoot if things don't work exactly right. And in my company, it's me and the others at the heart of the infrastructure/devops teams they come to when things go down, because we are the ones who actually understand how it all really works underneath the high-level frameworks and scaffolding. We understand the networking, the HTTP, the authentication protocols, the languages, and everything else at the bottom. The best programmers, at least at this company, are the ones who did those rapid development jobs for just a few years and then moved as quickly as they could into backend "Developer Center of Excellence" type teams, where their job is to support other developers, create standards, write programs designed for the infrastructure as a whole, and in the process learn the deeper points of the technologies.

Conclusion: The grandparent is right when he says that the best programmers understand how things work under the hood and could write these objects from scratch if they had to. But you are right when you say that not every programmer has to be at that level to do something effective. Both types are essential to an organization, because you have to have people fast enough to outmaneuver the competition, but also really solid people in reserve to back up the quick-and-dirty developers when things go wrong.

Comment Re:Yet Another Einstein Article (Score 1) 195

No, sorry, but this is a fractally wrong statement to make. With sufficient curiosity, you will be dedicated to learning as much as you can. The drive to learn will push you where you need to go. Intelligence merely sets the speed by which you'll arrive. Your over-emphasis on intelligence is elitism; it's suggesting that if you can't be "smart enough", you shouldn't be in science.

This is a nice thought, but patently untrue. It's like saying that anyone can be an NFL football player, and that your level of physical talent and ability merely dictates the speed by which you'll arrive. But that isn't actually true. Without sufficient "speed" you'll never arrive at all, and it's the same in science. You may have the curiosity, but without the mental talent and aptitude you'll be forever beaten to new discoveries by all the other scientists who have not only your curiosity but also the mental aptitude you lack.

Sure, anyone can play football and throw the ball around, but most don't have what it takes to play in the NFL. Similarly, anyone can learn the basics of the scientific method and learn to think empirically, but not everyone has what it takes to be a professional scientist, or to make major scientific discoveries like Einstein did.

Comment Re: Hiring and admission decisions (Score 5, Interesting) 195

Yet I would argue that true geniuses need the support structure that the Steve Jobses/Edisons/etc. of the world can provide to realize their potential.

I think this is right on, but it extends much farther than just "true geniuses". Personally, I'm one of those highly technical people who are really good at the nitty-gritty details of making technology work, but as I've learned more about myself over the years I've realized that I need to make sure I stay in the technical arena, rather than going into management or some of the purely "visionary" roles, because the high level of technical talent I have doesn't mean I have a commensurately high level of visionary talent. I've learned that a good idea for me is to seek out the visionary types in my organization and try to get myself onto their projects, because they can supply overall direction and I can provide a really good technical implementation. I'm not trying to compare myself to Woz, Einstein, Tesla, or these other geniuses, because I'm not nearly that smart, but I do think the principle extends to me and many others. There is an almost symbiotic relationship that can be had when technical people realize they need visionaries, and visionaries respect and treat the technical people well. I think it applies to much of industry, not just super geniuses and super visionaries.

Comment Re:you have the source (Score 1) 566

We had some issues with not adding enough randomness on embedded devices, but that problem was largely fixed a year ago. At this point, I think urandom should be fine for session keys. It's not the best choice for long-lived keys on those embedded devices, but those devices (a) don't have RDRAND, since they tend to use MIPS or ARM CPUs, and (b) don't have any peripherals other than the flash drive and the networking cards, so there isn't that much entropy they can draw upon. There are things you can do to improve matters in userspace, such as holding off on generating the host keys and the RSA keys for certificates as long as possible, instead of right after boot. But that's much more of a specialized problem for a specific class of system.
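The "hold off as long as possible" part is just lazy key generation. A minimal sketch of the idea (the os.urandom call is a stand-in for real RSA key generation):

# Sketch: generate the long-lived host key on first use instead of
# right after boot, when the entropy pool is at its weakest.
import os

_host_key = None

def get_host_key():
    global _host_key
    if _host_key is None:
        # By the time a client actually connects, interrupts from the
        # network card have fed the pool far more than it had at boot.
        _host_key = os.urandom(32)  # stand-in for real key generation
    return _host_key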

Comment Re:you have the source (Score 1) 566

How would they detect any shared properties? The point is that they are providing a random number generator (not a stream of random numbers) which is supposedly "secure". Secure means that no one, including the person providing the RNG, can predict the stream of numbers coming from the RNG. If the RNG coming from the US source is not honest, that means that presumably the NSA can predict the stream of numbers coming out of it. But the NSA (assuming that it distrusts the KGB and the MSS) wouldn't want the KGB and the MSS to be able to carry out the same feat. The same is true for each of the other devices. So there's no way that any one of the actors should be able to detect any shared properties; that's the point of the proposal.

Now, if the NSA is able to gimmick the RNG coming from China, then that's a different story. And to the extent that a lot of electronics are designed in the US and then manufactured in China, that's certainly a concern. For a scheme like this to work, the parts would have to be designed and built in such a way that an outsider would believe the NSA couldn't possibly have gimmicked one RNG, even if it could have been gimmicked by another spy agency. Then combine this with a device that you're sure couldn't have been gimmicked by the MSS, but may have been subject to pressure from the NSA, and so on.

Comment Re:you have the source (Score 5, Insightful) 566

The random driver has changed significantly since July 2012, which is when we were given a heads-up about the paper described at http://factorable.net/ and when I took back maintainership of the /dev/random driver. We gather entropy at every single interrupt and mix it into the entropy pool. This is done unconditionally; you can't disable it, unlike what happened with the SA_SAMPLE_RANDOM flag.

The thing about entropy pools is that when you combine entropy sources, the result gets better, not worse. So the best thing would be if we had hardware random number generators sourced from China, Russia, and the USA. Since presumably the MSS, the KGB, and the NSA mutually distrust each other, if we combine the entropy from those three sources, the result will be stronger than any one alone.
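A sketch of why the combination can only help: mix the sources through a hash, and as long as even one of them is honest (unpredictable), the digest is unpredictable too. The three "sources" here are simulated stand-ins:

# Sketch: mix three mutually-distrusted entropy sources through a hash.
# A dishonest source cannot cancel out an honest one; the attacker would
# have to predict the honest source's output to predict the digest.
import hashlib
import os

def read_source_us():     return os.urandom(32)  # simulated US HWRNG
def read_source_china():  return os.urandom(32)  # simulated Chinese HWRNG
def read_source_russia(): return os.urandom(32)  # simulated Russian HWRNG

def mixed_entropy():
    h = hashlib.sha256()
    for chunk in (read_source_us(), read_source_china(), read_source_russia()):
        h.update(chunk)
    return h.digest()

print(mixed_entropy().hex())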

This is why I don't recommend using RDRAND directly. Sure, an honest (emphasis on honest) hardware random number generator will always be able to source higher quality entropy than anything we can do by sampling OS events, such as interrupts. But the problem is that it's hard to guarantee a HWRNG is really honest, especially given the Snowden revelations, which seem to indicate the NSA has successfully leaned on at least one chip manufacturer. If you must use RDRAND, I'd recommend generating a random key via some other means, and then encrypting the output of RDRAND with that random key before using the resulting randomness for session keys, etc. Or better yet, do what we do in /dev/random, which is to mix RDRAND with other sources of entropy.
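The encrypt-before-use trick looks roughly like this sketch (Python with the third-party cryptography package; the rdrand_bytes value is simulated, since reading RDRAND for real takes inline assembly or a kernel interface):

# Sketch: don't use RDRAND output directly; encrypt it under a key drawn
# from an independent source first. If RDRAND is honest you lose nothing;
# if it's backdoored, the attacker still has to get through AES keyed by
# entropy they never saw.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

rdrand_bytes = os.urandom(32)     # stand-in for actual RDRAND output
independent_key = os.urandom(32)  # from a source the CPU vendor can't see
nonce = os.urandom(16)            # AES-CTR counter block

enc = Cipher(algorithms.AES(independent_key), modes.CTR(nonce)).encryptor()
session_key_material = enc.update(rdrand_bytes) + enc.finalize()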
