
Comment Re:Plug can't support ZFS (Score 1) 87

ZFS RAM usage depends on the size of your pool, whether you enable deduplication, and your performance requirements. The rule of thumb for ZFS is 1GB of RAM per TB of storage, or 2GB per TB if you want deduplication. This, however, assumes that you are using mechanical disks and either using the pool locally or via something like iSCSI over GigE. If you're using it over WiFi, then you can get away with a really small ARC, because a cache miss won't slow things down that much, especially if the miss is filled by something that can do random reads quickly.
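As a back-of-the-envelope sketch, the rule of thumb works out like this (the pool size and dedup flag are made-up example inputs; treat the output as a starting point for tuning, not a guarantee):

#include <stdio.h>

int main(void)
{
    double pool_tb = 8.0; /* hypothetical: an 8TB pool */
    int    dedup   = 1;   /* deduplication enabled? */

    /* 1GB of RAM per TB of storage, 2GB per TB with dedup. */
    double gb_per_tb = dedup ? 2.0 : 1.0;
    printf("Suggested RAM: %.0fGB for a %.0fTB pool\n",
           pool_tb * gb_per_tb, pool_tb);
    return 0;
}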

Comment Re: corporate policy (Score 1) 196

Price per TB is often the wrong metric. Price per used TB is more important. If you have a rack of machines each with disks at the price / size sweet spot of around 1TB, it doesn't actually buy you anything extra if each one is only using 20-100GB. Far more important for most workloads, however, is price per IOPS. Consolidating storage behind something with a couple of big SSD caches can be a huge win here. Getting data from RAM on a machine on a local network is faster than getting it from a local disk, and getting it from an SSD over the network is in the same ballpark.
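To put numbers on that, a toy calculation (all of the prices here are invented for the example):

#include <stdio.h>

int main(void)
{
    double disk_price = 60.0;  /* hypothetical: $60 per 1TB disk */
    double raw_tb     = 1.0;
    double used_tb    = 0.05;  /* only 50GB of it actually used */

    printf("Price per raw TB:  $%.2f\n", disk_price / raw_tb);
    printf("Price per used TB: $%.2f\n", disk_price / used_tb);
    return 0;
}

At 50GB used, the $60/TB sticker price is really $1,200 per used TB, and that is the number consolidation lets you attack.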

Comment Re:I never surf the internet from the base OS anym (Score 1) 196

The same is true of Xen, for example; it boots as pure Linux and then Xen takes over Ring-0

No it doesn't. Xen boots first and then launches a PV guest as dom0. The PV guest starts in ring 1 on x86 or ring 3 on x86-64, with the CPU already in protected mode. The kernel entry point is also different for PV guests, so that you can have a single kernel binary that boots either as a PV guest or as a bare-metal OS. With newer Xen, the dom0 guest can be PVH, so it runs in ring 0 with Xen in hypervisor mode ('ring -1'). It still starts via the Xen entry point, however, not the normal boot process.
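For the curious, the separate PV entry point is advertised through an ELF note in the kernel image, which the Xen domain builder reads instead of following the firmware/bootloader path. A rough, hand-rolled C sketch of the idea (real kernels emit this from assembly with an ELFNOTE macro, and the structure here is simplified):

#include <stdint.h>

void startup_bare_metal(void) { /* normal boot path lands here */ }
void startup_xen_pv(void)     { /* PV guests start here instead */ }

/* An ELF note is namesz, descsz, type, then the name and the
   descriptor. XEN_ELFNOTE_ENTRY (type 1 in Xen's public elfnote.h)
   tells the domain builder where to jump. */
struct xen_note {
    uint32_t namesz, descsz, type;
    char     name[4];       /* "Xen", NUL-padded */
    void   (*entry)(void);  /* descriptor: the PV entry point */
};

__attribute__((section(".note.Xen"), aligned(4), used))
static const struct xen_note pv_entry = {
    .namesz = 4, .descsz = sizeof(void (*)(void)), .type = 1,
    .name   = "Xen",
    .entry  = startup_xen_pv,
};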

Comment Re:It costs the government NOTHING. (Score 4, Interesting) 174

Wow, that's the most oversimplified attempt at discussing economics I've seen for a long time. Hint: economics is complicated - if it were easy we'd have a simple working system that everyone could agree on (or, at least, the people who worked it out would get very rich by betting on the economy 100% correctly and other people would notice) - and any explanation that simple is likely to be wrong.

Consider the following counter example:

The government builds and maintains a road between two places. This employs people, taking them out of the labour pool where they could be doing other things, so it's a cost. But it also allows the two places to trade more cheaply, which increases wealth production. Similarly, employers at either end of the road would have access to a wider pool of employees, and potential employees to a wider pool of employers, so people would end up in more productive employment.

Now, would the same apply if private industry built the road? This is where it starts to get more complicated. First, who would build the road? It might be some consortium of businesses at both ends who wanted to use it. If so, then they might charge money to anyone not part of the consortium to use it, which would give them a competitive advantage, but be less healthy for the economy as a whole by producing a barrier to competition (and, most specifically, a barrier to entry for new companies).

It might be a third party that thought the road would be profitable, who would run it as a 'common carrier' toll road. The toll, however, would be a disincentive to use it. If they priced it too low, then they'd go out of business (which would discourage future road-building companies). If they priced it too high, then they'd make the road unprofitable for some users to use at all, even though, once the construction cost is sunk, the economy as a whole benefits most when everyone who would gain any benefit from the road uses it.

In some cases, the benefit to the economy may be significantly lower than the cost of the road, so it would not make sense for the government to make the investment. It's often difficult to make that call, however. In the UK, the Beeching Report identified a large number of unprofitable railway lines and, to save taxpayer money, the nationalised railway service closed them. Unfortunately, it turned out that a lot of the unprofitable lines were the ones that got people from near where they lived to a more profitable line. When they were closed, people at the edges ended up having to buy cars, which meant that they no longer used the larger lines either, and so pushed those into unprofitability (and so there was a second Beeching Report some years later that repeated the entire mistake). The cost to the economy of no longer having a widespread, cheap railway network is widely agreed by economists to be significantly greater than the savings from closing the lines.

A privately owned national rail operator may have seen further ahead, but most likely it would have had shareholders making the same demands: sell off the unprofitable lines and concentrate on the profitable ones. A larger number of smaller railway operators would have had similar problems, with the ones operating the unprofitable lines going out of business and reducing demand on the profitable ones.

Comment Re:a disgrace to humanity (Score 1) 199

The problem is mostly one of marketing. If you're a top-tier university, students will trust that teaching them Scheme or whatever is sensible, because you've got a good reputation. If you're a second-tier university, then students (and parents) have heard of Java and C# and know that industry likes them, and if another university uses them as teaching languages and you don't, then it's hard to convince applicants that your choice really does make sense. And your reputation depends on how well your former students do, which depends on the quality of your intake.

Comment Re:Farts in their general direction. (Score 1) 445

Assuming that you can trust Dropbox, there's also the problem of compartmentalisation. My OS provides primitives that allow it to restrict which applications have access to which files. All major mobile operating systems use these primitives to do varying degrees of sandboxing. If I grant an application access to Dropbox, how do I prevent it from accessing every single file I have there? How do I audit the access that each application has? How do I ensure that this corresponds with the global policy that I have for sharing data between applications?
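As a concrete example of the kind of primitive I mean, here's a minimal sketch using FreeBSD's Capsicum (the file paths are placeholders):

#include <sys/capsicum.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/home/me/Dropbox/report.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* This descriptor may now only be read, not written or mmap'd. */
    cap_rights_t rights;
    cap_rights_init(&rights, CAP_READ, CAP_SEEK);
    cap_rights_limit(fd, &rights);

    /* Enter capability mode: no new access to any global namespace
       (file system, PIDs, sockets) from here on. */
    cap_enter();

    /* Fails with ECAPMODE: the sandboxed process can no longer open
       arbitrary files, only use what it was already given. */
    if (open("/home/me/Dropbox/passwords.txt", O_RDONLY) < 0)
        perror("open after cap_enter");

    char buf[64];
    read(fd, buf, sizeof(buf)); /* still allowed: CAP_READ */
    close(fd);
    return 0;
}

A per-application Dropbox grant would need the same shape: hand the application descriptors (or tokens) for exactly the folders it was granted, and nothing else.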

Comment Re:Memory is cheap (Score 2) 407

Volatile memory is defined as memory that requires power to preserve its contents. All of the 'cheap' memory that you can buy at the moment is volatile (and the non-volatile alternatives are either really expensive, really slow, or both). If you double the amount of RAM, then you double the number of capacitors that must be refreshed every few milliseconds, and so you double the refresh power consumption. You also double the amount of heat that's generated and must be dissipated.

Now, what's the number one complaint that people have about most smartphones? Is it performance, or is it battery life? If it's performance, then doubling the RAM makes sense. If it's battery life, then doubling the RAM will make people complain more.

Comment Re:CPython uses reference counting, like objective (Score 1) 407

I implemented GC for the GNUstep Objective-C runtime using the Boehm collector (which, amusingly, performed better than Apple's 'highly optimised' collector in my tests, but then there's a reason that the AutoZone team doesn't work at Apple anymore...). I didn't notice any problems with pausing in typical applications, and did see a noticeable decrease in total memory usage, largely because programs no longer left objects sitting in autorelease pools for as long.

GC support was deprecated for two reasons. The first is that lots of Objective-C idioms rely on deterministic destruction. Although people are discouraged from using objects to manage scarce resources, you still have classes like NSFileHandle that own a file descriptor and close it when the object is destroyed. Nondeterminism of destruction made these problems really hard to debug.
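Here's a toy version of that idiom (a hypothetical class; NSFileHandle does something similar internally):

#import <Foundation/Foundation.h>
#include <fcntl.h>
#include <unistd.h>

@interface FileOwner : NSObject
{
    int fd;
}
- (id)initWithPath: (NSString*)path;
@end

@implementation FileOwner
- (id)initWithPath: (NSString*)path
{
    if ((self = [super init]))
    {
        fd = open([path fileSystemRepresentation], O_RDONLY);
    }
    return self;
}
- (void)dealloc
{
    // With retain/release (or ARC) this runs at a predictable point.
    // In GC mode, -dealloc is never sent; -finalize runs whenever
    // (and if) the collector gets around to the object, so the
    // descriptor is held for an unpredictable amount of time.
    close(fd);
    [super dealloc]; // omit under ARC
}
@end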

The second problem was that the memory model for ObjC in GC mode sucked beyond belief. Or, rather, the lack of a memory model. It allowed mixing GC'd and non-GC'd buffers at will, but didn't expose this information in the type system. Consider this toy example:

void makeSomeObjects(id *buffer, int count)
{
    // Store freshly allocated objects into caller-provided memory.
    // Whether this is safe depends entirely on where 'buffer' lives.
    for (int i = 0 ; i < count ; i++)
    {
        buffer[i] = [SomeClass new];
    }
}

Will this work? Who knows. If the caller passes a buffer created by NSAllocateCollectable() with NSScannedOption, or passes a buffer on the stack, then yes. If they pass the address of a global that's declared in an Objective-C source file, then yes. If they pass the address of a global that's declared in a C/C++/whatever source file, then no. If they pass a buffer created with malloc(), then no. If they pass a buffer created with NSAllocateCollectable() but without NSScannedOption, then no.
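Spelled out as code, the lottery looks like this (a sketch using the toy function above; it's only meaningful when compiled in GC mode, and the buffer sizes are arbitrary):

#import <Foundation/Foundation.h>
#include <stdlib.h>

void makeSomeObjects(id *buffer, int count);

void callerLottery(void)
{
    // Scanned GC heap: the collector sees the stores. Safe.
    id *scanned = NSAllocateCollectable(10 * sizeof(id), NSScannedOption);
    makeSomeObjects(scanned, 10);

    // Unscanned GC heap: the stored objects are invisible to the
    // collector and can be freed out from under you. Broken.
    id *unscanned = NSAllocateCollectable(10 * sizeof(id), 0);
    makeSomeObjects(unscanned, 10);

    // malloc()'d memory: also invisible to the collector. Broken.
    id *heap = malloc(10 * sizeof(id));
    makeSomeObjects(heap, 10);

    // The stack is scanned conservatively. Safe.
    id onStack[10];
    makeSomeObjects(onStack, 10);
}

Nothing in the type of 'id *' tells you, the compiler, or the collector which of these cases you're in.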

In short, the whole idea was a clusterfuck from the start. On the other hand, it may be one that's worth revisiting now that Objective-C (with ARC) has a notion of ownership for memory. Source code that uses ARC could be compiled to use GC without a huge amount of pain (although the ARC memory barrier functions seem to have been created with the express intention of not allowing GC to be added later).
