Comment Re:Opportunity missed (Score 1) 103

The UK tech companies found it hard to export to the US

Why?

Because, at the time, the US government would only buy from US tech companies, and most big businesses had their purchasing decisions strongly influenced by what the government bought (often for interoperability reasons), which in turn influenced small businesses (for the same reason). Marketing in the USA required a big budget to get national penetration, and there wasn't an obvious place to start.

In contrast, a tech company in California could start selling locally and then just expand slowly into more states. Their existing supply chain didn't need many modifications to sell things one or two states over. A British company trying to sell in the USA needed to establish a foothold somewhere. They needed to ship either components for assembly or completed devices to the USA.

Selling to mainland Europe required translations

Is that a big deal? Especially if you went for a few major languages, like German, first. I would think that European manufacturers would have been more used to the need for translations than American companies.

P.S. Wish I had mod points to bump up your post.

For a small company, the cost of translation can be the difference between making a profit and making a loss. You need a big investment to sell enough in France or Germany to recoup the cost of localisation. In contrast, a US company had an English-speaking audience on its doorstep and so could ramp up to economies of scale in the tens of millions of units before it needed to consider localisation. At that point, the incremental cost is low enough that it makes economic sense.

Comment Re:Why didn't 'Andriod' use BSD codebase? (Score 1) 220

It did. Android's libc uses a lot of FreeBSD code. They've recently been talking to us about syncing some of their changes and treating us as they do other upstream projects that they pull code from, rather than maintaining a complete fork. They picked the Linux kernel for a very simple reason: Android was created by a small team, and they had experience with the Linux kernel.

Comment Re:The license (Score 1) 220

No. The GPL does not say that you have to give back to the community; it says that you have to pass the source to anyone that you give the binary to. When Google extended Linux to run on their cluster infrastructure, most of those changes remained private. Given that around 90% of all software development is in-house and not for public release, the GPL would only compel the other 10% of developers to release code. The remaining 90% are not affected, but often avoid GPL'd code because of possible future problems.

Giving back to the community is a lot more about culture than license. Companies like Juniper, Yahoo! and Netflix contribute a lot to FreeBSD, both because upstreaming their changes leaves them a smaller fork to maintain, which reduces costs, and because the more they give back, the more other developers are likely to care about the bugs that they report.

Comment Re:pkgng (Score 1) 220

Recovering from the security incident is not just a matter of reformatting the machines. You don't just turn things back on and hope. All of the code that is used to build the packages is being audited (this is now basically done). The FreeBSD cluster is now running auditdistd, so the audit logs of all of the build machines are preserved even in the case of a compromise. The goal is to ensure that a compromise like this can't happen again, not to rush out packages and then have to do the whole thing again the next time there's an incident.

For what it's worth, with poudriere you can build the entire ports tree into pkgng packages in about 48 hours on a reasonably powerful machine. If you don't want all 20,000+ packages, you can do it a lot faster. And if you trust iXsystems, you can just point pkgng at the PC-BSD repository...

Comment Re:Opportunity missed (Score 5, Insightful) 103

The UK had a thriving computer industry even into the '80s. Companies like Sinclair did well in the home computer market, and Acorn was selling desktops that ran a multitasking GUI very cheaply, with a lot of success in the home and schools markets. The decline started as the IBM PC gained prominence. The UK tech companies found it hard to export to the US and didn't have as large a domestic market. Selling to mainland Europe required translations, so US companies were able to ramp up economies of scale that left UK firms unable to compete. The ones that were successful, such as ARM (an Acorn spin-off) and Symbian (a Psion spin-off), did so by selling through existing large companies that had established supply chains.

One of the big problems with getting large multinational companies in the UK is that it's much harder for tech companies to do well on the LSE. A startup in the US wants to get to be worth a few hundred million and then IPO and continue to grow. A startup in the UK wants to get to be worth a few hundred million and then sell out to a big company. There are a lot of startups in the UK that make it to the few-million market-cap mark, but almost none that make it past a billion. A lot of this is down to a different investor culture, rather than anything related to the people running the companies.

Comment Re:What's the difference with Linux ? (Score 3, Interesting) 220

The reason I switched to FreeBSD around 2001 is that in-kernel sound mixing Just Worked. Two apps could write to /dev/dsp and get working sound. I'm now writing this while watching a DVD on the FreeBSD box connected to my projector and 5.1 surround sound system, and I didn't need to do any configuration.
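
For anyone who hasn't used OSS: the whole interface is just open / ioctl / write, which is why doing the mixing in the kernel makes everything so painless. Here's a minimal playback sketch (error handling omitted for brevity); run two copies at once and the kernel mixes the streams:

    /* Write one second of stereo silence to /dev/dsp via the OSS API. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/soundcard.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/dsp", O_WRONLY);
        if (fd < 0)
            return 1;

        int fmt = AFMT_S16_LE, channels = 2, rate = 44100;
        ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);        /* 16-bit signed samples */
        ioctl(fd, SNDCTL_DSP_CHANNELS, &channels); /* stereo */
        ioctl(fd, SNDCTL_DSP_SPEED, &rate);        /* 44.1kHz */

        int16_t buf[44100 * 2];                    /* one second, two channels */
        memset(buf, 0, sizeof(buf));
        write(fd, buf, sizeof(buf));               /* blocks until played */

        close(fd);
        return 0;
    }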

The other big difference is ZFS, which makes a huge difference to how you manage storage. Creating a new volume is as easy (and fast) as creating a new directory. You get compression, deduplication, constant-time snapshots, and a load of other things via a very easy administration interface.

If you're doing development work or running servers, jails give you a way of deploying a complete system that's got almost the same isolation as a VM but with much lower overhead. With ZFS, you can create one stock install and then clone it into a new jail in a few seconds.

The base system is maintained as a whole and the developers take the principle of least astonishment (POLA) seriously. User-visible changes are minimised and new configuration utilities are expected to follow the pattern of existing ones.

For firewalling, there are a number of choices, but the most sensible is probably the fork of OpenBSD's pf, modified to have better SMP scalability.

For security, there's the MAC framework, which is roughly equivalent to SELinux and is what the sandboxing on OS X and iOS is based on, and also Capsicum, which provides a capability model that is better suited to application compartmentalisation. An increasing number of the system daemons use Capsicum for privilege separation.
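
To give a flavour of Capsicum, here's a rough sketch of the usual idiom, assuming the FreeBSD 10-era API from sys/capsicum.h (in 9.x the header was sys/capability.h): open the resources you need, limit their rights, then enter capability mode, after which the process can't acquire any new global resources:

    /* Capsicum sketch: after cap_enter(), this process can do nothing
     * except read from the one descriptor it already holds. */
    #include <sys/capsicum.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        char buf[512];
        int fd = open("/etc/motd", O_RDONLY);
        if (fd < 0)
            return 1;

        cap_rights_t rights;
        cap_rights_init(&rights, CAP_READ);
        cap_rights_limit(fd, &rights);   /* fd may now only be read */

        cap_enter();                     /* enter capability mode */

        (void)read(fd, buf, sizeof(buf)); /* still allowed */
        /* open("/etc/passwd", O_RDONLY) would now fail with ECAPMODE */
        return 0;
    }

The appeal is that a compromised process can, at worst, misuse the descriptors it already holds.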

That's probably most of the user-facing things. You'll notice that GPU drivers (except for the nVidia blobs) tend to lag Linux somewhat. For Intel it's not so bad; for AMD it's quite a way behind (catching up, but not there yet).

Comment Re:at an OPTIMISITC writing speed of 1GB/sec (Score 5, Insightful) 182

If I had an optical disk that had that kind of write speed and sufficiently cheap media, I'd use it with a log-structured filesystem. The real data would be on some other media, and the optical disk would record every transaction. When the disk filled up, I'd pop a new one in, have it write a complete snapshot (about 40 minutes for a 2TB NAS, and I could probably buffer any changes in that period to disk / flash) and then go back to log mode. Each disk would then be a backup that would be able to restore my filesystem to any point in the period. Actually, given my average disk writes, one of these disks would store everything I write to disk for about 200 years, so it would probably want more regular snapshots or the restore time of playing back the entire journal would be too long. Effectively, the append-only storage system becomes your authoritative data store and the hard disks and flash just become caches for better random access.
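
To make that concrete, here's a toy sketch of what the journal format might look like; this is purely illustrative and not any real filesystem's on-disc layout. Every write to the primary store gets mirrored as an appended, checksummed record, and restoring to time T is just replaying every record stamped at or before T:

    /* Toy append-only journal record for the optical disc. Each write to
     * the real filesystem is mirrored as one record; the checksum guards
     * against a torn final record after a power failure. */
    #include <stdint.h>
    #include <stdio.h>

    struct journal_record {
        uint64_t timestamp;  /* when the write happened */
        uint64_t offset;     /* where on the primary store it landed */
        uint32_t length;     /* payload bytes following this header */
        uint32_t checksum;   /* over the payload */
    };

    /* Append one record to the (write-once) journal stream. */
    static int journal_append(FILE *fp, uint64_t now, uint64_t off,
                              const void *data, uint32_t len) {
        struct journal_record rec = { now, off, len, 0 };
        const uint8_t *p = data;
        for (uint32_t i = 0; i < len; i++)  /* placeholder checksum */
            rec.checksum = rec.checksum * 31 + p[i];
        if (fwrite(&rec, sizeof(rec), 1, fp) != 1)
            return -1;
        if (fwrite(data, 1, len, fp) != (size_t)len)
            return -1;
        return 0;
    }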

The problem, of course, is the 'sufficiently cheap media' part. When CDs were introduced, I had a 40MB hard drive and the 650MB disc was enough for every conceivable backup. When CD-Rs were cheap, I had a 5GB hard drive and a CD was just about big enough for my home directory, if I trimmed it a bit. When DVDs were introduced, I had a 20GB hard drive and a 4.5GB layer was just about enough for my home directory. When DVD-Rs were cheap, I had an 80GB hard drive in my laptop, and 4.5GB was nowhere near enough. Now, the 25GB on an affordable BD-R is under 10% of my laptop's flash and laughable compared to the 4TB in my NAS.

If they can get it to market when personal storage is still in the tens of TBs range, then it's interesting.

Comment Re:so is Microsoft the good guy now? (Score 1) 157

I'm glad to see MS in this market. I fondly remember the competition in the late '80s and early '90s, with half a dozen serious players in the home computer market and a lot more smaller ones. I'd love to see 5-6 decent operating systems for mobile phones and tablets. I'm also glad to see that they're not doing especially well, but I'd be very happy to see them with 10-20% of the market and none of their competitors with more than 40%.

Comment Re:So what? (Score 1) 157

I think the only feature I don't have from that list on my ASUS TransformerPad (Android) is multiple user accounts (which is something I'd like, but is difficult to hack into the crappy way Android handles sandboxing). Oh, and I have a nice keyboard and a tolerable trackpad built in when the device is in clamshell mode and easily detachable when I want to use it as a tablet.

Comment Re:Microsoft can do whatever they want to it... (Score 1) 157

Apple had an advantage with their CPU migrations: the new CPU was much faster than the old one. The PowerPC was introduced at 60MHz, whereas the fastest 68040 that they sold was 40MHz, and clock-for-clock the PowerPC was faster. When they switched to Intel, their fastest laptops had a 1.67GHz G4 and were replaced by Core Duos starting at 1.83GHz; the G4 was largely limited by memory bandwidth at high clock speeds. In both cases, emulated code on the new machines ran only slightly slower than it had on the fastest Mac with the older architecture (except for the G5 desktops, but those were very expensive). If you skipped the generation immediately before the switch, your apps all ran faster, even if they were all emulated. Once you switched to native code for the new architecture, things got even faster.

For Microsoft moving to ARM, .NET binaries are fine, because they're JIT-compiled for the current target (although they're likely to be slower than on x86); everything else has to be emulated and will be slower still. The advantage of ARM is power efficiency, not raw speed, and if you run emulated code then you lose that benefit. They could ship an x86 emulator (they acquired one when they bought Virtual PC), but they'd end up running Win32 apps very slowly and people would complain.

Comment Re:What a problem (Score 5, Insightful) 311

Most of the time, even that isn't enough. C compilers tend to embed build-time information as well. For Verilog, the tools often use a random seed for the genetic algorithm that does place-and-route. Most compilers have a flag to set a specified value for these kinds of parameters, but you have to know what they were set to for the original run.
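
The classic C culprits are the __DATE__ and __TIME__ macros: two builds of byte-identical source differ simply because they happened at different moments. A trivial demonstration:

    /* Rebuilding this file a minute later yields a different binary,
     * even though the source hasn't changed: the preprocessor expands
     * __DATE__ and __TIME__ to the moment of compilation. */
    #include <stdio.h>

    static const char build_stamp[] = __DATE__ " " __TIME__;

    int main(void) {
        printf("built on %s\n", build_stamp);
        return 0;
    }

Compile it twice, a minute apart, and the checksums differ; GCC's -frandom-seed=<string> is an example of the kind of flag you'd need to pin (and record) to get the same binary twice.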

Of course, in this case you're solving a non-problem. If you don't trust the source or the binary, then don't run the code. If you trust the source but not the binary, build your own and run that.
