
Comment Re:The license (Score 1) 220

No. The GPL does not say that you have to give back to the community; it says that you have to pass the source on to anyone you give the binary to. When Google extended Linux to run on their cluster infrastructure, most of those changes remained private. Given that around 90% of all software development is in-house and not for public release, only about 10% of developers could ever be compelled to release code by the GPL. The remaining 90% are not affected, but they often avoid GPL'd code anyway because of possible future problems.

Giving back to the community is a lot more about culture than license. Companies like Juniper, Yahoo! and Netflix contribute a lot to FreeBSD, because keeping their private forks small reduces their maintenance costs and because the more they give back, the more other developers are likely to care about the bugs that they report.

Comment Re:pkgng (Score 1) 220

Recovering from the security incident is not just a matter of reformatting the machines. You don't just turn things back on and hope. All of the code that is being used to build the packages is being audited (this is now basically done). The FreeBSD cluster is now running auditdistd, so that the audit logs of all of the build machines are preserved even in the case of a compromise. The goal is to ensure that a compromise like this can't happen again, not to rush out packages and then have to do the whole thing again the next time there's an incident.

For what it's worth, with Poudriere, you can build the entire ports tree into pkgng packages in about 48 hours on a reasonably powerful machine. If you don't want all 20+K packages, then you can do it a lot faster. If you trust iX Systems, you can just point pkgng at the PC-BSD repository...

Comment Re:Opportunity missed (Score 5, Insightful) 103

The UK had a thriving computer industry well into the '80s. Companies like Sinclair did well in the home computer market, and Acorn was selling cheap desktops that ran a multitasking GUI, with a lot of success in the home and schools markets. The decline started as the IBM PC gained prominence. The UK tech companies found it hard to export to the US and didn't have as large a domestic market, and selling to mainland Europe required translations, so US companies were able to ramp up economies of scale that the UK firms couldn't match. The ones that were successful, such as ARM (an Acorn spin-off) and Symbian (a Psion spin-off), did so by selling through existing large companies that had an established supply chain.

One of the big problems with growing large multinational companies in the UK is that it's much harder for tech companies to do well on the LSE. A startup in the US wants to get to be worth a few hundred million and then IPO and continue to grow. A startup in the UK wants to get to be worth a few hundred million and then sell out to a big company. There are a lot of startups in the UK that make it to a market cap of a few million, but almost none that make it past a billion. A lot of this is down to a different investor culture, rather than anything to do with the people running the companies.

Comment Re:What's the difference with Linux ? (Score 3, Interesting) 220

The reason I switched to FreeBSD around 2001 is that in-kernel sound mixing Just Worked. Two apps could write to /dev/dsp and both get working sound. I'm now writing this while watching a DVD on the FreeBSD box connected to my projector and 5.1 surround sound system, and I didn't need to do any configuration.
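
For anyone who hasn't poked at it, the OSS interface is about as simple as device interfaces get. This is a minimal sketch (not from the original post; error handling trimmed, compile with -lm) that plays a second of a sine wave; run two copies at once and the kernel mixes the streams:

    /*
     * Minimal OSS sketch: open /dev/dsp, set a format, write samples.
     * On FreeBSD several processes can do this at once and the kernel
     * mixes the streams.
     */
    #include <sys/soundcard.h>
    #include <sys/ioctl.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <math.h>

    int main(void)
    {
        int fd = open("/dev/dsp", O_WRONLY);
        if (fd < 0)
            return 1;

        int fmt = AFMT_S16_LE, channels = 1, rate = 44100;
        ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
        ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
        ioctl(fd, SNDCTL_DSP_SPEED, &rate);

        /* One second of a 440Hz tone. */
        short buf[44100];
        for (int i = 0; i < 44100; i++)
            buf[i] = (short)(10000 * sin(2 * M_PI * 440 * i / 44100.0));
        write(fd, buf, sizeof(buf));
        close(fd);
        return 0;
    }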

The other big difference is ZFS, which makes a huge difference to how you manage storage. Creating a new volume is as easy (and fast) as creating a new directory. You get compression, deduplication, constant-time snapshots, and a load of other things via a very easy administration interface.

If you're doing development work or running servers, jails give you a way of deploying a complete system that's got almost the same isolation as a VM but with much lower overhead. With ZFS, you can create one stock install and then clone it into a new jail in a few seconds.
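
To give an idea of how light-weight jails are, this is roughly what creating one boils down to at the system call level. It's a hedged sketch: the path, jail name and hostname are made up, the root filesystem at that path is assumed to already exist (e.g. a ZFS clone of a stock install), and in practice you'd normally drive this through jail(8) and jail.conf rather than calling jail(2) yourself:

    /*
     * Sketch of creating a jail with the jail(2) system call. Needs root.
     * The calling process ends up inside the jail.
     */
    #include <sys/param.h>
    #include <sys/jail.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        struct jail j = {
            .version  = JAIL_API_VERSION,
            .path     = "/jails/www",        /* e.g. a clone of zroot/jails/base@stock */
            .hostname = "www.example.com",   /* made-up values for illustration */
            .jailname = "www",
        };

        int jid = jail(&j);
        if (jid < 0) {
            perror("jail");
            return 1;
        }
        printf("created jail %d\n", jid);

        /* We're now confined to /jails/www; run something useful. */
        execl("/bin/sh", "sh", (char *)NULL);
        return 1;
    }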

The base system is maintained as a whole and the developers take the principle of least astonishment (POLA) seriously. User-visible changes are minimised and new configuration utilities are expected to follow the pattern of existing ones.

For firewalling, there are a number of choices, but the most sensible is probably the fork of OpenBSD's pf, modified to have better SMP scalability.

For security, there's the MAC framework, which is roughly equivalent to SELinux and is what the sandboxing on OS X and iOS is based on, and there's also Capsicum, which provides a capability model that is better suited to application compartmentalisation. An increasing number of the system daemons use Capsicum for privilege separation.
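
The Capsicum model is easy to see in a few lines. A minimal sketch (the file name is made up): open the descriptors you need, restrict the rights on them, then enter capability mode, after which the process can no longer open files or reach other global namespaces:

    #include <sys/capsicum.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("input.dat", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        cap_rights_t rights;
        cap_rights_init(&rights, CAP_READ, CAP_FSTAT);
        if (cap_rights_limit(fd, &rights) < 0) {
            perror("cap_rights_limit");
            return 1;
        }

        if (cap_enter() < 0) {   /* enter capability mode */
            perror("cap_enter");
            return 1;
        }

        /* From here on, only the descriptors we already hold are usable. */
        char buf[512];
        ssize_t n = read(fd, buf, sizeof(buf));
        printf("read %zd bytes inside the sandbox\n", n);
        return 0;
    }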

That's probably most of the user-facing things. You'll notice that GPU drivers (except for the nVidia blobs) tend to lag Linux somewhat. For Intel it's not so bad, for AMD it's quite a way behind (catching up, but not there yet).

Comment Re:at an OPTIMISITC writing speed of 1GB/sec (Score 5, Insightful) 182

If I had an optical disk that had that kind of write speed and sufficiently cheap media, I'd use it with a log-structured filesystem. The real data would be on some other media, and the optical disk would record every transaction. When the disk filled up, I'd pop a new one in, have it write a complete snapshot (about 40 minutes for a 2TB NAS, and I could probably buffer any changes in that period to disk / flash) and then go back to log mode. Each disk would then be a backup that would be able to restore my filesystem to any point in the period. Actually, given my average disk writes, one of these disks would store everything I write to disk for about 200 years, so it would probably want more regular snapshots or the restore time of playing back the entire journal would be too long. Effectively, the append-only storage system becomes your authoritative data store and the hard disks and flash just become caches for better random access.
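
The mechanism is just a journal: every write to the real store is also appended to the write-once medium, and replaying the journal (optionally on top of a snapshot) reconstructs the state at any point in time. A toy sketch of the idea, with an invented record format and ordinary files standing in for the optical disk and the restored image:

    #include <sys/types.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct log_record {
        uint64_t offset;   /* where the data landed on the real disk */
        uint32_t length;   /* number of data bytes that follow */
    };

    /* Append one write to the journal. */
    static void log_write(FILE *log, uint64_t offset, const void *data, uint32_t length)
    {
        struct log_record rec = { offset, length };
        fwrite(&rec, sizeof(rec), 1, log);
        fwrite(data, 1, length, log);
        fflush(log);
    }

    /* Replay the whole journal into an image, oldest record first. */
    static void log_replay(FILE *log, FILE *image)
    {
        struct log_record rec;
        while (fread(&rec, sizeof(rec), 1, log) == 1) {
            void *buf = malloc(rec.length);
            if (buf == NULL || fread(buf, 1, rec.length, log) != rec.length) {
                free(buf);
                break;
            }
            fseeko(image, (off_t)rec.offset, SEEK_SET);
            fwrite(buf, 1, rec.length, image);
            free(buf);
        }
    }

    int main(void)
    {
        FILE *log = fopen("journal.log", "w+b");
        FILE *image = fopen("restored.img", "w+b");
        if (log == NULL || image == NULL)
            return 1;

        log_write(log, 0, "hello", 5);      /* pretend block writes */
        log_write(log, 4096, "world", 5);

        rewind(log);
        log_replay(log, image);             /* rebuild the image from the log */

        fclose(log);
        fclose(image);
        return 0;
    }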

The problem, of course, is the 'sufficiently cheap media' part. When CDs were introduced, I had a 40MB hard drive and the 650MB disc was enough for every conceivable backup. When CD-Rs were cheap, I had a 5GB hard drive and a CD was just about big enough for my home directory, if I trimmed it a bit. When DVDs were introduced, I had a 20GB hard drive and a 4.5GB layer was just about enough for my home directory. When DVD-Rs were cheap, I had an 80GB hard drive in my laptop, and 4.5GB was nowhere near enough. Now, the 25GB on an affordable BD-R is under 10% of my laptop's flash and laughable compared to the 4TB in my NAS.

If they can get it to market when personal storage is still in the tens of TBs range, then it's interesting.

Comment Re:so is Microsoft the good guy now? (Score 1) 157

I'm glad to see MS in this market. I fondly remember the competition of the late '80s and early '90s, with half a dozen serious players in the home computer market and a lot of smaller ones besides. I'd love to see 5-6 decent operating systems for mobile phones and tablets. I'm also glad to see that they're not doing especially well, but I'd be very happy to see them with 10-20% of the market and none of their competitors with more than 40%.

Comment Re:So what? (Score 1) 157

I think the only feature I don't have from that list on my ASUS TransformerPad (Android) is multiple user accounts (which is something I'd like, but is difficult to hack into the crappy way Android handles sandboxing). Oh, and I have a nice keyboard and a tolerable trackpad built in when the device is in clamshell mode and easily detachable when I want to use it as a tablet.

Comment Re:Microsoft can do whatever they want to it... (Score 1) 157

Apple had an advantage with their CPU migration: the new CPU was much faster than the old one. The PowerPC was introduced at 60MHz, whereas the fastest 68040 that they sold was 40MHz and clock-for-clock the PowerPC was faster. When they switched to Intel, their fastest laptops had a 1.67GHz G4 and were replaced by Core Duos starting at 1.84GHz. The G4 was largely limited by memory bandwidth at high speeds. In both cases, emulated code on the new machines ran slightly slower than it had on the fastest Mac with the older architecture (except for the G5 desktops, but they were very expensive). If you skipped the generation immediately before the switch, your apps all ran faster, even if they were all emulated. Once you switched to native code for the new architecture, things got even faster.

For Microsoft moving to ARM, .NET binaries are fine, because they're JIT compiled for whatever the current target is (though they're likely to be slower than on x86), but emulated native programs will be slower still. The advantage of ARM is power efficiency, not raw speed, and if you run emulated code then you lose that benefit. They could ship an x86 emulator (they acquired one when they bought VirtualPC), but they'd end up running the Win32 apps very slowly and people would complain.

Comment Re:What a problem (Score 5, Insightful) 311

Most of the time, even that isn't enough. C compilers tend to embed build-time information as well, and for Verilog the tools often use a random number seed for the genetic algorithm that does place-and-route. Most compilers have a flag to set a specific value for these kinds of parameters, but you have to know what they were set to for the original run.
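
On the C side, the classic culprits are the standard __DATE__ and __TIME__ macros: two builds of identical source produce different binaries unless those values are pinned (newer compilers can take them from the SOURCE_DATE_EPOCH environment variable, but then you need to know what it was set to for the original build). A trivial illustration:

    #include <stdio.h>

    /* Rebuilding this tomorrow gives a different binary, even though the
     * source is byte-for-byte identical. */
    const char build_info[] = "built on " __DATE__ " at " __TIME__;

    int main(void)
    {
        printf("%s\n", build_info);
        return 0;
    }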

Of course, in this case you're solving a non-problem. If you don't trust the source or the binary, then don't run the code. If you trust the source but not the binary, build your own and run that.

Comment Re:It's GIT for OSS, SVN for Enterprise. (Score 1) 378

"On some large projects, the entire project is in one large repository, which git supports."

In that case, you have to clone the entire repo, which is fine if you're working on everything or need to build everything, but it is irritating if the project is composed of lots of smaller parts and you want to do some fixing on just a small part. For example, a collection of libraries and applications that use them (and possibly invoke each other).

"On other projects, it is broken up into modules with separate repositories, but the artifacts from each module are deployed to an enterprise-wide Maven repository. The modules can depend on each other and depend on certain versions of the other modules. With loose coupling between the modules like that, you don't need atomic commits between the modules."

If you modify a library and a consumer of the library, then you want an atomic commit for the changes, especially if it's an internal API that you don't expose outside of the project and so don't need to maintain ABI compatibility for out-of-tree consumers. With svn, you can do this with a single svn commit, even if you've checked out the two components separately. With git, you have to commit and push the library change, then commit the change to the consumer, then update the revision that the external reference points to, commit that, and push again. Or, if neither is a child of the other, you first commit and push the changes to each of them, then update the external references in the repository that includes both and push that. Either way, you have to move from leaf to root, bumping the external versions, to ensure consistency.

If you don't jump through these hoops, then your published repositories can be in an inconsistent, unbuildable state. It's not a problem if all of the things in separate repositories are sufficiently loosely coupled that you never need an atomic commit spanning multiple repositories, but it's a pain if they aren't. With git, you are always forced to choose between easy atomic commits and sparse checkouts. With svn, you can always do a commit that is atomic across the entire project and you can always check out as small a part of it as you want.

Comment Re:That's just cruel (Score 1) 336

You think they're deploying PDP-11s now? They were installed in the '70s, so have already seen about 40 years and are scheduled to run for another 37, so they'll see at least 70 years of active service, which is over two generations.

"That is 60 years, not 37 years. TFS, if not TFA, which I didn't read, is officially stupid."

Glass house, stones, etc.

Comment Re:It's GIT for OSS, SVN for Enterprise. (Score 4, Interesting) 378

"The idea of Git eludes you. You don't structure Git projects in a giant directory tree."

The first problem here is that you need to decide, up front, what your structure should be. For pretty much any large project that I've worked on, the correct structure is something that's only apparent 5 years after starting, and 5 years later is different. With git, you have to maintain externals if you want to be able to do a single clone and get the whole repository. Atomic commits (you know, the feature that we moved to svn from cvs for) are then very difficult, because you must commit and push in leaf-to-root order in your git repository nest.
