
Comment Re:It's GIT for OSS, SVN for Enterprise. (Score 4, Interesting) 378

The idea of Git eludes you. You don't structure Git projects in a giant directory tree.

The first problem here is that you need to decide, up front, what your structure should be. For pretty much every large project that I've worked on, the correct structure only becomes apparent 5 years after starting, and 5 years after that it's different again. With git, you have to maintain submodules (its rough equivalent of svn externals) if you want a single clone to fetch the whole project. Atomic commits (you know, the feature we moved from cvs to svn for) then become very difficult, because you must commit and push in leaf-to-root order across your nest of git repositories.
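That leaf-to-root ordering is just a post-order walk of the submodule tree: every submodule must be committed and pushed before the superproject that records its revision. A toy sketch (the repository names are made up for illustration, and this models the ordering only, not real git tooling):

```python
# Model a nest of git repositories as a tree: superproject -> submodules.
# A superproject can only be committed once every submodule it references
# has been committed, i.e. repos must be visited in post-order.

def commit_order(tree, root):
    """Return repos in the order they must be committed and pushed:
    every submodule before the superproject that references it."""
    order = []

    def walk(repo):
        for sub in tree.get(repo, []):
            walk(sub)
        order.append(repo)  # parent only after all of its children

    walk(root)
    return order

# Hypothetical nest: 'app' references 'libs' and 'docs';
# 'libs' in turn references 'libfoo' and 'libbar'.
nest = {
    "app": ["libs", "docs"],
    "libs": ["libfoo", "libbar"],
}

print(commit_order(nest, "app"))
# → ['libfoo', 'libbar', 'libs', 'docs', 'app']
```

A change that touches all five repositories therefore needs five separate commit-and-push steps in exactly this order, which is why a single atomic commit across the nest is so hard.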

Comment Re:Different strokes for different folks (Score 2) 378

If you're a one-man operation, Fossil gives you all of that without the braindead interface and in a much easier-to-deploy bundle (a single, small, statically linked binary, including a [versioned] wiki and bug tracker). But it's a strawman comparison, because you can also run svnadmin create as a non-root user and then use file:// URLs to access the repository.

Comment Re:GIT sucks on windows (Score 1) 378

Mercurial is nice for larger projects. For smaller ones, I prefer Fossil, which integrates a bug tracker and wiki into a single, small, statically linked executable. It has scalability problems with large numbers of concurrent users of the same repository, but if you only have a few tens of collaborators it works fine and is trivial to deploy.

Comment Re:Wow, just wow. (Score 3, Insightful) 406

There's no hypocrisy if your distinction is one of scale. I regard censorship as only being bad when it has an impact on an individual's ability to speak freely. There is no problem with a single newspaper refusing to carry something, as long as there are other newspapers that are willing to run it or some other (relatively easy) mechanism for publication. There is a problem if a government or an industry body says 'no one may run this story'. There's a difference between saying 'you may not post this opinion on my blog' and saying 'you may not post this opinion on any blog'. The latter is dangerous censorship, the former is exercising free speech - the thing that rules about censorship are supposed to protect. It only becomes a problem when everyone with the infrastructure to host blogs says 'you may not post this on a blog that I run', at which point there should be government intervention.

Comment Re:Translation: (Score 1) 111

There are lots of IP companies that no one has a problem with. There are basically two business models for IP companies:
  • File or buy a load of patents and then, the next time someone independently invents something you've patented, ask for royalties and sue them if you don't get them.
  • Design things of value and sell the rights to use the designs to companies that would end up paying more if they developed something in house.

There are a load of companies in the second category that are very profitable and usually respected. It's the ones in the first category that give them all a bad name.

Comment Re:Endurance (Score 1) 71

That's a bit closer to what I was expecting. Last time I did these calculations, I came up with a figure of something like 100 years at my usage pattern. Seeing this drop to 5 years is somewhat alarming, but not too far off the trends we've seen with decreasing numbers of rewrites per cell in modern flash.

Comment Re:It's... OK. (Score 1) 161

As for PCs you can program your decoder in CUDA or OpenCL so "hardware support" is not very important.

Mobile GPUs are also programmable, but without knowing the details of the algorithms involved it's hard to say what kind of speedup you'll get from a GPU. In general, later generations of video CODECs require inferring more from larger areas and so are less amenable to the kind of access that a GPU's memory controller is optimised for. Just doing something on the GPU isn't an automatic speedup, and until we see real implementations it's hard to say exactly how much better it will be.

Comment Re:Endurance (Score 1) 71

Every time someone gives numbers like these, I look at my laptop's uptime and disk write counters and see what they say. Apparently I've written an average of about 13GB/day since my last reboot. This machine has a 256GB SSD, so if write endurance scales linearly with drive size as your numbers imply (assuming near-perfect wear levelling), this would give it a 24TB total-write limit. I'd reach that limit in just over 5 years, which is a bit longer than the typical time that I use a laptop as my primary machine. It's probably adequate, although I feel very nervous using hard disks that are over that age. I'd feel a lot happier with something a bit further away from the 5-year mark, though...
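The arithmetic above is easy to reproduce with the comment's figures (13GB/day of writes, a 256GB drive, and endurance scaling linearly to a 24TB total-write limit):

```python
# Back-of-the-envelope SSD endurance estimate using the figures above.
# The linear-scaling assumption means 24TB of total writes for a 256GB
# drive, i.e. roughly 94 full-drive writes over its lifetime.

writes_per_day_gb = 13          # observed average daily write volume
drive_size_gb = 256
endurance_limit_gb = 24 * 1000  # 24TB expressed in GB

full_drive_writes = endurance_limit_gb / drive_size_gb
days = endurance_limit_gb / writes_per_day_gb
years = days / 365.25

print(f"{full_drive_writes:.0f} full-drive writes")
print(f"{years:.1f} years at {writes_per_day_gb}GB/day")
# → 94 full-drive writes, 5.1 years
```

Note that the 5-year figure is an average-case estimate; uneven wear levelling or a heavier write workload would shorten it.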

Comment Re:Incredible (Score 3, Interesting) 297

Oracle is expensive, but if it were really overpriced then you'd see lots of cheaper alternatives. For a lot of workloads, something like PostgreSQL will get the job done for a fraction of the price. When you really need something at the high end, however, Oracle or a small handful of other companies will charge you similar amounts. The real problem for a company like Oracle is the same as the problem for SGI. In the '90s, a database with a few GBs of data was something you needed Oracle (or similar) and a lot of hardware for. Now, a cheap commodity machine can keep the whole thing in RAM for read-only queries and can write to an SSD (or a few in RAID-1) for a few thousand dollars, including the time it takes someone to set it up. The number of companies whose data sits in the range where only something like Oracle will do is shrinking: at the very high end, you have companies like Google and Facebook that can't use any off-the-shelf solution, and at the other end you have companies that can get away with cheap commodity hardware and an open source RDBMS.

This is why companies like IBM and Oracle are focussing heavily on business applications and vertical integration. They may be expensive, but there's a whole class of medium-sized enterprises for whom it's a lot cheaper to periodically give a huge pile of money to Oracle than it is to maintain a large in-house IT staff.

Comment Re:impossible (Score 1) 297

Companies have no incentive to invest in infrastructure if most of the benefits will be reaped by other companies. If one company owns an entire campus, town, or island, then they are generally good at improving the infrastructure. If such an area is owned by a diverse set of companies and individuals, then good infrastructure is rarely an emergent phenomenon, unless some organisation is responsible for collecting money to pay for it and for providing it. This organisation is traditionally referred to as a government...

Comment Re:Price (Score 1) 172

Even for sequential reads, SSDs can be an improvement. My laptop's SSD can easily handle 200MB/s sequential reads, and you'd need more than one spinning disk to handle that. And a lot of things that seem like sequential reads at a high level turn out not to be. Netflix's streaming boxes, for example, sound like a poster child for sequential reads, but once you factor in the number of clients connected to each one, you end up with a large number of 1MB random reads, which means your IOPS numbers translate directly to throughput.
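The IOPS-to-throughput relationship in the streaming example can be sketched numerically. The drive figures below are illustrative order-of-magnitude assumptions, not measurements:

```python
# With many concurrent clients each streaming sequentially, the drive sees
# a stream of ~1MB random reads, so sustained throughput is roughly
# IOPS x request size, capped by the interface/media bandwidth.

request_size_mb = 1  # per-client read granularity in the example above

def throughput_mb_s(random_iops, bandwidth_mb_s):
    # A spinning disk pays a seek per 1MB request, so it is IOPS-bound;
    # an SSD pays almost no seek penalty, so it hits its bandwidth cap.
    return min(random_iops * request_size_mb, bandwidth_mb_s)

# Illustrative figures (assumptions, not measurements):
disk = throughput_mb_s(random_iops=100, bandwidth_mb_s=150)  # 7200rpm drive
ssd = throughput_mb_s(random_iops=5000, bandwidth_mb_s=500)  # SATA SSD

print(f"spinning disk: ~{disk}MB/s")
print(f"SSD:           ~{ssd}MB/s")
# → disk is seek-limited to ~100MB/s; the SSD saturates its interface
```

The point is that once the workload fragments into random 1MB requests, the spinning disk delivers well below its sequential rate, while the SSD barely notices the randomness.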

Spinning disks are still best where capacity is more important than access times. For example, hosting a lot of VMs where each one is typically accessing a small amount of live data (which can be cached in RAM or SSD) but has several GBs of inactive data.

Comment Re:SAS SSD (Score 2) 172

SAS doesn't really get you anything useful with an SSD. The extra chaining isn't that important, because it's easy to get enough SATA sockets to put one in each drive bay. SAS has no mSATA equivalent for denser storage, and if you really need the extra speed then why not go all the way and get something like FusionIO cards that hang directly off the PCIe bus?

Comment Re:Summary of your post (Score 2) 372

I compile large projects on a regular basis. We have one machine with 12 cores (24 threads) and 256GB of RAM, so I tried running builds entirely to and from a RAM drive. The speed difference between that and using a mid-range SSD was too small to measure (at anything from -j12 up to -j64), and since the SSD-to-RAM-drive gap bounds the gap between any two SSDs, the choice of SSD matters even less. In contrast, the difference between using a hard disk and an SSD is easily a factor of 2 in build speed, and often more.
