Comment Re:Blank Media (Score 3, Insightful) 477
Optical drives always seem to be introduced at a capacity that sounds great for backup, but by the time the media are affordable they're no longer enough.
An IR doesn't buy you much. The time taken for clang to compile OpenCL C to SPIR is about 10% of the time required for LLVM to optimise and codegen the resulting SPIR into native code. The driving force behind SPIR comes from developers who don't want their shader source code shipped inside their binaries.
The early 1990's were dark days. Linux was/is a big deal. Where would we be without Linux?
1991, when Linux was released, did indeed fall in those dark days. The i386 port of BSD was delayed by legal uncertainty over the AT&T vs UCB lawsuit. When UCB resoundingly won, 386BSD was released and was a vastly more mature system than Linux. Today, Linux and FreeBSD aren't that different in terms of performance or support. Debian kFreeBSD works quite happily with a glibc ported to FreeBSD and runs most of the same applications as Debian Linux. Linux still lacks some things (kernel sound mixing, ZFS, DTrace, Capsicum, jails, and so on) that FreeBSD has had for a long time, and there are things on the other side that Linux has and FreeBSD lacks, but by and large they're pretty comparable.
If we hadn't had Linux, we'd most likely be using a BSD derivative now. On the other hand, if Linux hadn't taken the momentum away from HURD, maybe some of the microkernel operating systems would have seen a lot more attention and we'd now not be in a world where you have 5-10MB of object code compiled from a language with no memory safety running in ring 0...
Why? He's got the name recognition, but he wasn't the first to develop an open source kernel. The UCB team had been doing that before and so had the FSF (although with less success). He was the first to release an open source UNIX kernel for i386, but only by a few months. I may be wrong, but I believe OpenBSD was first to use a public CVS repository, rather than exchanging diffs on a mailing list.
This award feels like pandering. He gets the award for being the figure who is well known for doing things that lots of other people were doing.
download files without notification: dictionary updates
Could be bundled as part of the application, updated via the normal mechanism, without requiring it to have a permission that allows it to send data remotely ('download' can mean an HTTP GET with a really long query).
read contacts: suggestions
Most of the time I'm not typing a contact's name, so this sounds like it would lead to a lot of false positives. I've never seen it suggest a name that isn't a common English name, though, so it doesn't seem to actually need this permission.
modify or delete contents of USB storage: I don't know why it needs this one, store dictionary outside private app directory?
If that is the case, it's bad design.
view accounts on the device: suggest your email address
It doesn't seem to ever do that for me...
Seriously people, think about it, giving your services away for free makes no sense.
Since when does open source mean giving away my services? I get paid quite well to write open source software. My service is writing code, not copying code. I'm happy for people to copy the code for free, because copying it doesn't require any effort on my part. Having a body of open source code available expands my potential set of customers quite a lot, because a lot more companies can afford to pay for a single feature to be added to an open source codebase than can afford to pay for something to be written from scratch that has that feature.
Or are you seriously arguing that the model that makes the most economic sense is to write software for free but charge people for copying it?
I think the meme here is 'obvious troll is obvious'. Open source doesn't mean that the software is free, it means that copying the software is free. Writing it in the first place, fixing bugs, and adding features are all things that someone has to be paid to do (although sometimes people will do it simply in exchange for being able to use the resulting combined work, effectively doing it for free because it's something they need or want).
The problem in the case of OpenSSL is that everyone needs bug fixes and security auditing, but no one was making a coherent effort to sell such a service.
A workstation ought to switch from suspend to RAM to suspend to disk when it receives a signal from your UPS that mains power has been lost.
If you're putting your business in a building where power is sufficiently unreliable that it's worth the cost to have a UPS for every workstation, then you're probably doing something badly wrong. On a server, where downtime can cost serious money, it can be worth it. On a workstation, the extra cost for something that happens once every few years in a country that has vaguely modern infrastructure simply isn't worth it most of the time. The extra writes to the SSD from having to suspend to disk once every few years as a result of power failure are in the noise.
And often it isn't. Satellite and cellular Internet service providers in the United States tend to charge on the order of 1/2 cent to 1 cent per megabyte of transfer
Satellite internet connections don't qualify as 'often' - they're a statistically insignificant share of the user base. Mobile connections do count, but:
If you've downloaded a 100 MB document from the network, it would cost the end user $0.50 to $1.00 to retrieve it again
We're talking about browser in-memory caches here. A 100MB document will be saved to disk or opened by another application when it's downloaded. It won't sit in the browser's cache.
And this is because when a workstation (a laptop or desktop) hibernates, it writes all allocated RAM to the swap file
Not really: this policy predates hibernation by about three decades. It exists so that swapping never needs to allocate new data structures when the machine is already in a memory-constrained state.
This can be as large as RAM, though for speed, it may be smaller in operating systems that store some of their swap file in a compressed RAM disk (such as RAM Doubler on classic Mac OS or zram on Linux). But an operating system still has to provide for the worst case of memory that can't be compressed.
When Linux is using zram, it doesn't follow this policy (actually, Linux doesn't in general). It's impossible to do so sensibly if you're using compression, because you don't know exactly how much space is going to be needed until you start swapping. RAM compression generally works by the same mechanisms as the swap pager, but putting pages that compress well into wired RAM rather than on disk. You can also often compress swap, but that's an unrelated mechanism.
Until you actually use hibernation. How often does that happen on a particular work day?
Generally, never. OS X does 'safe sleep', where it only bothers writing out the contents of RAM to disk when the battery gets low, so my laptop never hibernates unless I leave it unplugged for a long time. My servers don't sleep, because if you've got a server that's so idle it would make sense for it to hibernate, then it's better to just turn it off completely. My workstation doesn't hibernate, because the difference in power consumption between suspend to RAM and suspend to disk is so minimal that it's not worth the extra inconvenience.
Some of RAM is used as a cache for the file system, but operating systems should be smart enough to purge this disk cache when hibernating.
Most post-mid-'90s operating systems use a unified buffer cache, so there's no difference between pages that are backed by swap and pages that are backed by other filesystem objects. Indeed, allocating swap when you allocate a page made this even easier, which is why this policy stayed around for so long: you could get away with just having a single pager that would send things back to disk without ever having to allocate on-disk storage for them or care about whether the underlying disk object was a swap file or a regular file.
Applications, on the other hand, might not be so smart. Ideally an operating system could send "memory pressure" events to processes, causing them to purge their own caches and rewrite deallocated memory with zeroes so that it can be compressed. The OS would broadcast such an event before hibernation or any other sort of heavy swapping. Do both POSIX and Windows support this sort of event?
POSIX doesn't. Windows has something like this, as does XNU. Mach had it originally: it delegated swapping entirely to userspace pagers and allowed applications to control their own swapping policies. It's not really related to hibernation, but to memory pressure in general. It's often cheaper to recalculate data or refetch it from the network than to swap it out and back in again, so it makes sense, for example, to have the web browser purge its caches when you get low on RAM, because re-fetching things from the network is likely almost as fast as getting them from disk. On a mobile device with no swap, it's better to let the applications reduce their RAM footprint than to pick one to kill. This works best with languages that support GC, as they can use this event to trigger collection.
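For concreteness, here is roughly what the XNU-side mechanism looks like from userspace via libdispatch. This is a macOS-only sketch: DISPATCH_SOURCE_TYPE_MEMORYPRESSURE is the real source type, while purge_caches() is a hypothetical handler standing in for whatever cache-dropping an application wants to do.

```c
/* macOS-only sketch: register for kernel memory-pressure notifications.
 * DISPATCH_SOURCE_TYPE_MEMORYPRESSURE is the libdispatch source type;
 * purge_caches() is a hypothetical application callback. */
#include <dispatch/dispatch.h>
#include <stdio.h>

static void purge_caches(void) {
    /* Drop recomputable data (decoded images, parsed files, network
     * caches) so there is less for the pager to write out. */
    printf("memory pressure: purging caches\n");
}

int main(void) {
    dispatch_source_t src = dispatch_source_create(
        DISPATCH_SOURCE_TYPE_MEMORYPRESSURE,
        0, /* handle is unused for this source type */
        DISPATCH_MEMORYPRESSURE_WARN | DISPATCH_MEMORYPRESSURE_CRITICAL,
        dispatch_get_main_queue());
    dispatch_source_set_event_handler(src, ^{ purge_caches(); });
    dispatch_resume(src);
    dispatch_main(); /* park the main thread and service events */
    return 0;
}
```

The Windows equivalent (CreateMemoryResourceNotification and friends) is a polling-style variant of the same idea: the OS tells you pressure exists, and the application decides what is cheapest to throw away.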
"Gravitation cannot be held responsible for people falling in love." -- Albert Einstein