Comment Re:so is Microsoft the good guy now? (Score 1) 157

I'm glad to see MS in this market. I fondly remember the competition in the late '80s and early '90s, with half a dozen serious players in the home computer market and many smaller ones. I'd love to see 5-6 decent operating systems for mobile phones and tablets. I'm also glad to see that they're not doing especially well, but I'd be very happy to see them with 10-20% of the market and none of their competitors with more than 40%.

Comment Re:So what? (Score 1) 157

I think the only feature from that list that I don't have on my ASUS TransformerPad (Android) is multiple user accounts (which is something I'd like, but it's difficult to hack into Android given the crappy way it handles sandboxing). Oh, and I have a nice keyboard and a tolerable trackpad built in when the device is in clamshell mode and easily detachable when I want to use it as a tablet.

Comment Re:Microsoft can do whatever they want to it... (Score 1) 157

Apple had an advantage with their CPU migrations: the new CPU was much faster than the old one. The PowerPC was introduced at 60MHz, whereas the fastest 68040 that they sold was 40MHz, and clock-for-clock the PowerPC was faster. When they switched to Intel, their fastest laptops had a 1.67GHz G4 and were replaced by Core Duos starting at 1.83GHz; the G4 was largely limited by memory bandwidth at high speeds. In both cases, emulated code on the new machines ran slightly slower than it had on the fastest Mac with the older architecture (except for the G5 desktops, but they were very expensive). If you skipped the generation immediately before the switch, your apps all ran faster, even if they were all emulated. Once you switched to native code for the new architecture, things got even faster.

For Microsoft moving to ARM, .NET binaries are fine, because they are JIT-compiled for the current target, although they're still likely to be slower than on x86; emulated programs will be slower still. The advantage of ARM is power efficiency, not raw speed, and if you run emulated code then you lose that benefit. They could ship an x86 emulator (they acquired one when they bought VirtualPC), but Win32 apps would run very slowly under it and people would complain.

Comment Re:What a problem (Score 5, Insightful) 311

Most of the time, even that isn't enough. C compilers tend to embed build-time information as well. For Verilog, the tools often use a random seed for the genetic algorithm that does place-and-route. Most compilers have a flag to set a specified value for these kinds of parameters, but you have to know what they were set to for the original run.
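
As a rough illustration of the timestamp case (assuming the source actually uses __DATE__/__TIME__ and a reasonably recent gcc or clang, which honour SOURCE_DATE_EPOCH):

    # two builds of the same source end up different because the build time is embedded
    cc -o build1 main.c
    sleep 2
    cc -o build2 main.c
    cmp -s build1 build2 || echo "binaries differ"

    # pinning the timestamp makes the build repeatable: gcc and clang take the value
    # of __DATE__/__TIME__/__TIMESTAMP__ from SOURCE_DATE_EPOCH if it is set
    # (assuming nothing else non-deterministic is being embedded)
    SOURCE_DATE_EPOCH=1 cc -o build3 main.c
    SOURCE_DATE_EPOCH=1 cc -o build4 main.c
    cmp -s build3 build4 && echo "binaries match"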

Of course, in this case you're solving a non-problem. If you don't trust the source or the binary, then don't run the code. If you trust the source but not the binary, build your own and run that.

Comment Re:It's GIT for OSS, SVN for Enterprise. (Score 1) 378

On some large projects, the entire project is in one large repository, which git supports.

In that case, you have to clone the entire repo, which is fine if you're working on everything or need to build everything, but it is irritating if the project is composed of lots of smaller parts and you want to do some fixing on just a small part. For example, a collection of libraries and applications that use them (and possibly invoke each other).

On other projects, it is broken up into modules with separate repositories, but the artifacts from each module are deployed to an enterprise-wide maven repository. The modules can depend on each other, and on specific versions of the other modules. With loose coupling between the modules like that, you don't need atomic commits between the modules.

If you modify a library and a consumer of the library, then you want an atomic commit for the changes, especially if it's an internal API that you don't expose outside of the project and so don't need to maintain ABI compatibility for out-of-tree consumers. With svn, you can do this with a single svn commit, even if you've checked out the two components separately. With git, you must commit the library change and push it, then commit the consumer change, update the git revision used for the external, and commit and push those. Or, if neither is a child of the other, you must first commit and push the changes to both, then update the external references in the repository that includes them both and push that. Either way, you have to move from leaf to root, bumping the externals version at each level, to ensure consistency.
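
To spell the dance out, suppose (purely for illustration) a superproject 'product' tracks 'libfoo' and 'app' as submodules; the git side looks something like this, assuming each submodule has a real branch checked out rather than a detached HEAD:

    # leaf first: commit and push the library change
    cd product/libfoo
    git commit -am "change internal API"
    git push

    # then the consumer of the new API
    cd ../app
    git commit -am "adapt to new libfoo API"
    git push

    # finally the root: record the new submodule revisions in the superproject
    cd ..
    git add libfoo app
    git commit -m "bump libfoo and app submodules"
    git push

    # the rough svn equivalent, from a single working copy, is one atomic commit:
    #   svn commit libfoo app -m "change internal API and adapt consumer"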

If you don't jump through these hoops, then your published repositories can be in an inconsistent, unbuildable state. It's not a problem if all of the things in separate repositories are sufficiently loosely coupled that you never need an atomic commit spanning multiple repositories, but it's a pain if they aren't. With git, you are always forced to choose between easy atomic commits and sparse checkouts. With svn, you can always do a commit that is atomic across the entire project and you can always check out as small a part as you want.

Comment Re:That's just cruel (Score 1) 336

You think they're deploying PDP-11s now? They were installed in the '70s, so have already seen about 40 years and are scheduled to run for another 37, so they'll see at least 70 years of active service, which is over two generations.

> That is 60 years, not 37 years. TFS, if not TFA, which I didn't read, is officially stupid.

Glass house, stones, etc.

Comment Re:It's GIT for OSS, SVN for Enterprise. (Score 4, Interesting) 378

> The idea of Git eludes you. You don't structure Git projects in a giant directory tree.

The first problem here is that you need to decide, up front, what your structure should be. For pretty much any large project that I've worked on, the correct structure only becomes apparent about five years after starting, and five years after that it's different again. With git, you have to maintain externals if you want to be able to do a single clone and get the whole project. Atomic commits (you know, the feature that we moved to svn from cvs for) are then very difficult, because you must commit and push in leaf-to-root order through your nest of git repositories.

Comment Re:Different strokes for different folks (Score 2) 378

If you're a one-man operation, Fossil gives you all of that without the braindead interface and in a much easier-to-deploy bundle (a single, small, statically linked binary, including a [versioned] wiki and bug tracker). But it's a strawman comparison, because you can also do svnadmin create as a non-root user and then use file:// URLs for accessing the repository.
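
For anyone who hasn't tried either, the two single-user, no-root setups look roughly like this (paths invented for illustration):

    # svn without a server: create a local repository and use file:// URLs
    svnadmin create ~/repos/project
    svn checkout file://$HOME/repos/project ~/work/project

    # fossil: the whole repository (history, wiki, tickets) is one file
    fossil init ~/repos/project.fossil
    mkdir -p ~/work/project && cd ~/work/project
    fossil open ~/repos/project.fossil
    fossil ui    # local web interface for the wiki and bug tracker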

Comment Re:GIT sucks on windows (Score 1) 378

Mercurial is nice for larger projects. For smaller ones, I prefer Fossil, which integrates a bug tracker and wiki into a single, small, statically linked executable. It has scalability problems with large numbers of concurrent users of the same repository, but if you only have a few tens of collaborators it works fine and is trivial to deploy.

Comment Re:Wow, just wow. (Score 3, Insightful) 406

There's no hypocrisy if your distinction is one of scale. I regard censorship as only being bad when it has an impact on an individual's ability to speak freely. There is no problem with a single newspaper refusing to carry something, as long as there are other newspapers that are willing to run it or some other (relatively easy) mechanism for publication. There is a problem if a government or an industry body says 'no one may run this story'. There's a difference between saying 'you may not post this opinion on my blog' and saying 'you may not post this opinion on any blog'. The latter is dangerous censorship; the former is exercising free speech - the thing that rules about censorship are supposed to protect. It only becomes a problem when everyone with the infrastructure to host blogs says 'you may not post this on a blog that I run', at which point there should be government intervention.

Comment Re:Translation: (Score 1) 111

There are lots of IP companies that no one has a problem with. There are basically two business models for IP companies:
  • File or buy a load of patents and then, the next time someone independently invents something you've patented, ask for royalties and sue them if you don't get them.
  • Design things of value and sell the rights to use the designs to companies that would end up paying more if they developed something in house.

There are a load of companies in the second category that are very profitable and usually respected. It's the ones in the first category that give them all a bad name.

Comment Re:Endurance (Score 1) 71

That's a bit closer to what I was expecting. Last time I did these calculations, I came up with a figure of something like 100 years for my usage pattern. Seeing this drop to 5 years is somewhat alarming, but not too far off the trend we've seen of decreasing numbers of rewrites per cell in modern flash.
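
The back-of-the-envelope shape of that calculation, with numbers invented for illustration since I don't remember the originals, is capacity times rated erase cycles divided by daily writes:

    # e.g. 256 GB drive, 3000 P/E cycles, 20 GB written per day
    # (all figures hypothetical; ignores write amplification)
    echo $(( 256 * 3000 / 20 / 365 ))   # roughly 105 years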

Comment Re:It's... OK. (Score 1) 161

> As for PCs you can program your decoder in CUDA or OpenCL so "hardware support" is not very important.

Mobile GPUs are also programmable, but without knowing the details of the algorithms involved it's hard to say what kind of speedup you'll get from a GPU. In general, later generations of video CODECs require inferring more from larger areas and so are less amenable to the kind of access that a GPU's memory controller is optimised for. Just doing something on the GPU isn't an automatic speedup, and until we see real implementations it's hard to say exactly how much better it will be.
