
Comment Re:And it was very close to becoming a mess. (Score 2) 36

I will never understand how someone so apparently disconnected from the reality that normal people face can actually manage to get elected, but whatever the reason, it seems a sad indictment of our "representative" democracy.

The problem is that we don't vote nationally for cabinet posts. Someone may be a perfectly competent local MP, in touch with local issues and understanding their constituents' interests, but have absolutely no idea about whatever department they're put in charge of.

Comment Re:Virgin Media? (Score 1) 36

Virgin "Throttle you back to dial-up speeds" Media

No, Virgin 'throttle you to 25% for a few hours when you go over the caps' Media. On their cheapest plan, 25% is still fast enough to stream iPlayer HD, and the maximum amount that you can download within the caps is several TBs/month, so it's not really something I've felt the need to worry about.

Virgin "What is infrastructure investment" Media

It's presumably what has let them bump the speeds that they offer every few years. I was an early adopter for their 1Mb/s and 10Mb/s services and stayed on 10Mb/s as it moved from their most expensive tier to the cheapest. It then moved to 30Mb/s and is now I think 50Mb/s (might be 60Mb/s, not sure).

Comment Re:UFS vs ZFS (Score 1) 75

Like, for example, UFS actually has a repairing FSCK. ZFS fanatics will argue to the ends of the earth that ZFS doesn't need fsck repair because it has built-in raid. Riiiggght.

That's not the argument. fsck is not magic: it is designed around a number of possible kinds of error. It verifies on-disk structures and will attempt to reconstruct metadata if it finds that it is corrupted. Equivalent logic is built into ZFS. Every ZFS I/O operation runs a small part of it, validating metadata and attempting to repair it if it is damaged. You could pull this logic out into a separate tool, but why bother? zpool scrub will do the same thing, forcing the pool to read (and therefore validate and, where necessary, repair) everything using the fsck-like logic in the kernel.
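As a rough illustration (a sketch of the idea, not actual ZFS code; checksum64, read_copy and write_copy are hypothetical stand-ins), the per-read logic looks something like this:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* The parent block stores the checksum and the addresses of up to
     * two redundant ("ditto") copies of each child block. */
    struct block_ptr {
        uint64_t addr[2];
        uint64_t checksum;
    };

    /* Stand-in checksum (ZFS really uses fletcher variants or SHA-256). */
    static uint64_t checksum64(const uint8_t *buf, size_t len)
    {
        uint64_t h = 1469598103934665603ULL;            /* FNV-1a */
        for (size_t i = 0; i < len; i++)
            h = (h ^ buf[i]) * 1099511628211ULL;
        return h;
    }

    /* Hypothetical raw device I/O. */
    bool read_copy(uint64_t addr, uint8_t *buf, size_t len);
    void write_copy(uint64_t addr, const uint8_t *buf, size_t len);

    /* Read a block, validating it against the checksum held in the
     * parent and repairing any damaged copy from a good one. */
    bool read_and_repair(const struct block_ptr *bp, uint8_t *buf, size_t len)
    {
        for (int i = 0; i < 2; i++) {
            if (read_copy(bp->addr[i], buf, len) &&
                checksum64(buf, len) == bp->checksum) {
                for (int j = 0; j < i; j++)
                    write_copy(bp->addr[j], buf, len); /* heal bad copies */
                return true;
            }
        }
        return false;   /* every copy damaged: fsck could do no better */
    }

A scrub just walks the whole pool through this path for every block, which is why it subsumes a separate fsck pass.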

Bottom line is, ZFS is groovy and all (though no speed daemon) until it breaks

The same is true of any filesystem. If a filesystem is corrupted to the extent that the self-healing logic in ZFS can't recover it, then it's also corrupted to the extent that an fsck tool would not be able to recover it.

Comment Re:UFS vs ZFS (Score 1) 75

All of the CDDL stuff in our tree lives in separate cddl directories. You can get a copy of the tree that does not include them and you can build the system without them. This is a requirement for a number of downstream consumers of FreeBSD.
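If I remember the spelling correctly, the build knob for this is WITHOUT_CDDL; check src.conf(5) on your release for the authoritative list:

    # /etc/src.conf
    # Build the system without CDDL-licensed code (ZFS, DTrace, etc.).
    WITHOUT_CDDL=yes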

That said, we do enable DTrace by default and the installer lets you choose UFS or ZFS. I'd recommend ZFS over UFS for anything with a reasonable amount of RAM (not your Raspberry Pi, but anything with a few GBs).

Comment Re:NTFS (Score 1) 75

No, NTFS has continued to evolve. The original design team were given changing requirements right up until Windows NT shipped. The on-disk format is basically an efficient way of storing large blobs of data and an efficient way of storing small blobs of data (much like BFS, though with a different approach). Everything else is policy that is layered on top.
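Roughly (a simplified sketch, not the real on-disk layout; the field names here are made up), each MFT attribute is a typed blob that is either stored inline when small or described by a run list of extents when large:

    #include <stdint.h>

    struct extent {
        uint64_t start_cluster;
        uint64_t cluster_count;
    };

    /* One attribute in an MFT record. Everything in NTFS - file data,
     * names, security descriptors, indexes - is some typed blob like
     * this; the semantics are policy layered on top. */
    struct attribute {
        uint32_t type;          /* e.g. $DATA, $FILE_NAME */
        uint8_t  non_resident;  /* 0: bytes live inside the MFT record */
        union {
            struct {            /* small blob, stored inline */
                uint32_t length;
                uint8_t  data[64];
            } resident;
            struct {            /* large blob, a run list of extents */
                uint64_t size;
                struct extent runs[4];
            } non_res;
        } u;
    };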

Comment Re:A guess (Score 3, Interesting) 243

Symbian, particularly EKA2, was a very well designed system. It was let down by its slow adaptation to changing requirements. The userspace APIs were designed for a world where 4MB of RAM meant a high-end device, and the difficulty of programming against them was tolerated because it was the only way to make sure things fitted in so little space. When 128MB started to mean a low-end device, this became a problem: the cost was no longer worth paying just to use 10% of the device's RAM instead of 15%. It wasn't helped by the in-fighting at Nokia that produced a load of different potential replacements.

Comment Re:There's nothing wrong now... (Score 1) 489

It's called "memory combining".

It's called 'memory deduplication' in other operating systems that implemented the feature (earlier) and in the research literature, but I can understand why Microsoft would not want to use a term that indicates that they're one of the last OS vendors to implement a feature.

It's not always a clear win. Memory deduplication increases the number of CoW pages, which increases the number of TLB faults. It also requires periodically scanning memory that hasn't been recently referenced, which introduces a lot of cache churn (you need to hash every page, build a hash table of the results, look for collisions, and then compare colliding pages in full, because a matching hash doesn't guarantee matching contents). The scanning and merging also require a lot of locking in the VM subsystem, which can harm performance on SMP machines. Unless you're so memory-constrained that you're about to start swapping, you'll generally get better performance by turning it off. The only place where you get a really big win is where you have a load of VMs, each with a few hundred MBs of identical OS code that can be deduplicated and is read-only so will never cause a fault.
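To make the costs concrete, here's a minimal sketch of one scan pass (hypothetical helpers, not Windows' or any real kernel's code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct page;                            /* one physical page frame */

    /* Hypothetical helpers. */
    uint64_t hash_page(const struct page *p);        /* reads the page */
    struct page *find_by_hash(uint64_t h);        /* hash-table lookup */
    void insert_by_hash(uint64_t h, struct page *p);
    bool pages_equal(const struct page *a, const struct page *b);
    void merge_cow(struct page *dup, struct page *canon); /* remap all
            mappings of dup to canon, read-only CoW, then free dup */

    void dedup_pass(struct page **candidates, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            /* Hashing reads every candidate page: the cache churn. */
            uint64_t h = hash_page(candidates[i]);
            struct page *match = find_by_hash(h);
            /* A colliding hash doesn't mean equal contents: compare. */
            if (match != NULL && pages_equal(match, candidates[i]))
                /* Future writes to the shared page now fault and copy,
                 * the CoW cost above.  Needs VM locks held throughout. */
                merge_cow(candidates[i], match);
            else
                insert_by_hash(h, candidates[i]);
        }
    }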

You also don't reduce TLB pressure (one of the top causes of performance degradation on modern systems), because even systems with tagged TLBs don't usually have a way of specifying a bitmask of ASIDs that a page belongs to, so even if it's mapped at the same address you can't share TLB entries.

Comment Re: There's nothing wrong now... (Score 1) 489

Where are your numbers that show that PIC code is slower?

Macrobenchmarks that I've run show about a 10% slowdown for PIC on i386, tested on Sandy Bridge and Haswell. Feel free to run your own. It used to be more significant, but 10% is still quite noticeable on jobs that take an hour or two...

Ever since CPU manufacturers have started throwing around the word "pipeline" this hasn't been true. On an AVR an RCALL costs 3-4 cycles and CALL costs 4-5, I doubt a deeper pipeline like an Intel reverses this.

The big cost is the loss of a GPR to the register allocator. You have to store %eip (or whatever %eip was when you did your one-instruction-forward branch followed by a pop) in a GPR. The call-pop sequence itself is usually subject to micro-op fusion on modern x86 CPUs and so is transformed into a single get-eip operation that doesn't screw up store forwarding for the top few stack slots the way a normal pop would, so it's almost free.

On x86-64, you do not have this issue for two reasons. The first is that you have a lot more registers, so losing one doesn't hurt the register allocator so much. The second is that %rip can be used as an operand directly, so you can compute the target address without needing to copy it to another register.
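To see the difference for yourself, compile something like this with gcc -S -O2, with and without -fPIC, on both i386 and x86-64; the comments sketch the typical sequences (exact register choice varies by compiler):

    /* A global with default visibility, so PIC code must go via the GOT. */
    int counter;

    int get_counter(void)
    {
        /* i386, no PIC:  movl  counter, %eax
         *
         * i386, PIC:     call  __x86.get_pc_thunk.bx        ; the fused
         *                addl  $_GLOBAL_OFFSET_TABLE_, %ebx ; call/pop
         *                movl  counter@GOT(%ebx), %eax      ; via the GOT
         *                movl  (%eax), %eax
         *                ; ...and %ebx stays pinned for the function,
         *                ; costing the allocator one of only eight GPRs.
         *
         * x86-64, PIC:   movq  counter@GOTPCREL(%rip), %rax
         *                movl  (%rax), %eax
         *                ; %rip is directly usable as an operand, so no
         *                ; register is lost.                            */
        return counter;
    }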

Saving a few hundred KB here and there with pagetable punning is worth fuck-all when the user is staring at a crashdump

Saving a few hundred KB of i-cache is often a very large performance win on modern CPUs.

Comment Re:Don't forget stats & much has changed since (Score 1) 77

Logic (at least up to first-order logic), set theory (some is covered, but not its connection to logic), game theory (essential to so many things, yet not covered at school at all), and graph theory (the basis for pretty much anything involving computers) would be at the top of my list. Anything where proofs dominate, rather than rote application of rules (we have machines for that now!), would be nice to see. Probability is already well covered in the UK; I'm not sure about the USA. Statistics would be helpful to pretty much anyone.

Don't drop the calculus, but you can teach people to understand calculus in a couple of months. Having them spend a couple of years going from being a thousand times slower than a computer at solving differential equations to being 500 times slower isn't worth it. It's not like simple arithmetic, where getting a calculator out and typing the problem in can be a bottleneck.
