Comment Re:Why? (Score 1) 333

Apple doesn't directly fund FreeBSD, but they do sometimes contribute code. For example, most of the MAC framework (SEBSD) was funded by Apple and developed for both FreeBSD and OS X (the pluggable policy support in it is what makes the sandboxing for iOS and Mac App Store apps possible). When Apple wants kernel code written, they often contract the work to FreeBSD devs, and they're sometimes willing to fund the FreeBSD version at the same time, because code that's written generically enough works on both kernels.

Comment Re:Why? (Score 1) 333

Lots of our downstream consumers (especially embedded systems companies) have a policy of 'no GPLv3 on the premises'. They don't want the potential liability, and they especially don't want to make it easy to accidentally distribute GPLv3 code. Even if they're not planning on shipping GCC in their product (and, by the way, a lot of them do ship toolchains - embedded also includes companies like NetApp and Juniper - since it makes servicing easier), there's the potential to do it accidentally if the code is sitting on developers' machines.

Comment Re:security (Score 2) 143

Someone else has already pointed you at the report on the compromise. One of our developers had a VM that turned out not to be as secure as he thought, and it held his ssh keys (with no passphrase), which gave access to the FreeBSD cluster machines. As soon as the attack was noticed (very quickly, owing to one particularly paranoid developer), the affected machines were taken offline. Bringing things back online took a long time, for several reasons:
  • All of the code running on the FreeBSD.org machines was audited.
  • Some of it turned out to be a little bit scary (e.g. build machines having access to the FTP servers so they could push packages), so the architecture needed redesigning in places.
  • We rolled out auditdistd on all of the hosted machines, so they all now have audit logs stored in multiple places.
  • We redesigned the network layout at all of our sites to reduce interconnectivity of unrelated services.

As to the codebase needing auditing, we had both svn and git mirrors that allowed the entire history to be checked. We also had copies of the checksums of releases, so all of these things were verified. Bringing CVS back online took a bit longer: CVS let us easily verify the top of the tree, but not the history. I think we ended up regenerating the entire CVS history from svn, and took the opportunity to officially drop support for CVS.

Are there still vulnerabilities? Almost certainly. Any codebase more than a few dozen lines long will contain bugs, and some of them will be exploitable if you're sufficiently clever. That's why a lot of the focus in 10.0 has been on mitigation techniques. The auditdistd framework lets you easily deploy auditing for an entire site. Capsicum makes it relatively easy to compartmentalise applications, and several system daemons use Capsicum out of the box. So do some of the standard filter utilities: for example, even if you run uniq as root, once it has finished parsing its command-line arguments it can't access any files on your system except the ones you told it to read.
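
To make the uniq example concrete, here's a minimal sketch of the Capsicum pattern on recent FreeBSD (this is not the actual uniq source; it just shows the shape: open everything you need up front, then drop into capability mode):

```c
/* Minimal Capsicum sketch (recent FreeBSD): acquire resources,
 * then sandbox. Not the real uniq(1) source, just the pattern. */
#include <sys/capsicum.h>
#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
	if (argc != 2)
		errx(1, "usage: %s file", argv[0]);

	/* Open the one file the user asked for, before sandboxing. */
	int fd = open(argv[1], O_RDONLY);
	if (fd < 0)
		err(1, "open %s", argv[1]);

	/* Restrict the descriptor to reads and seeks only. */
	cap_rights_t rights;
	cap_rights_init(&rights, CAP_READ, CAP_SEEK);
	if (cap_rights_limit(fd, &rights) < 0)
		err(1, "cap_rights_limit");

	/* Enter capability mode: from here on, open(), socket(), and
	 * other global-namespace operations fail with ECAPMODE. */
	if (cap_enter() < 0)
		err(1, "cap_enter");

	char buf[4096];
	ssize_t n;
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		write(STDOUT_FILENO, buf, n);
	return (0);
}
```

Once cap_enter() has succeeded, a compromised process can only use the descriptors it already holds, which is exactly the property the uniq example relies on.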

Comment Re:The real problem with BSD (Score 1) 143

There are several things that make your post look like a troll:
  • Which BSD are you referring to? There's some overlap between the BSD communities, but only about as much as there is between the communities of various Linux distributions.
  • Where did you ask? Was it one of the places that FreeBSD recommends (mailing lists, IRC channels, or forums), or did you just pick a random place somewhere on the Internet?
  • Did you actually read the FreeBSD Handbook, which contains fairly detailed instructions on installation, or is that one of the documents that wasn't 'written in some semblance of English'?

Comment Re:Hurrah? (Score 2) 143

They've literally deprecated fork, because they can't be bothered to make it work reliably with Core Foundation.

fork() deserves to be deprecated. The API originates with old machines that could hold a single process in core at a time: when you wanted to switch processes, you wrote the current process out and read the new one in. In that context, fork was the cheapest possible way of creating a new process, because you just wrote out the current process, tweaked the process control block, and continued executing. On a modern machine, it causes a lot of TLB churn as you mark the entire process as copy-on-write (including TLB shootdowns, which require IPIs when a multithreaded program is using multiple cores). And then, in most cases, it's immediately followed by exec(), so the process that you've just created is replaced by another one and you have to go through the whole sequence again to stop its memory being CoW.
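
To be clear about the pattern being criticised, here's the classic fork()-and-exec() idiom (a minimal sketch; /bin/ls is just a stand-in command). Everything the kernel marked copy-on-write at the fork() is thrown away a moment later by the exec():

```c
/* The classic fork()+exec() idiom: the entire address space is
 * marked copy-on-write at fork(), then immediately discarded by
 * exec(). /bin/ls is just a stand-in command. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
	pid_t pid = fork();	/* whole process marked CoW here */
	if (pid < 0) {
		perror("fork");
		return 1;
	}
	if (pid == 0) {
		/* Child: replace the freshly copied image. */
		execl("/bin/ls", "ls", "-l", (char *)NULL);
		perror("execl");	/* only reached on failure */
		_exit(127);
	}
	/* Parent: wait for the child to finish. */
	int status;
	waitpid(pid, &status, 0);
	return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}
```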

Not only is fork() a ludicrously inefficient way of creating a process on a modern machine, it's also incredibly difficult to use correctly. When you fork(), all of your file descriptors are copied, but only the calling thread survives in the child, so any locks held by other threads stay locked forever. You need pthread_atfork() handlers to ensure that no thread is holding a lock or doing I/O between the fork() and the exec(). You also need to ensure that you close any file descriptors that you don't want propagated to the child, which is nontrivial if you have other threads opening and closing files in the background (O_CLOEXEC helps here, but do you remember to use it everywhere?).
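
A sketch of the defensive boilerplate this forces on you (the lock_all/unlock_all handlers here are hypothetical stand-ins for whatever state your library needs to keep consistent across a fork()):

```c
/* Sketch of the defensive boilerplate fork() forces on threaded
 * code. lock_all()/unlock_all() are hypothetical helpers standing
 * in for whatever state must stay consistent across a fork(). */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;

/* Take every lock before the fork() so the child's copy is
 * consistent... */
static void lock_all(void)   { pthread_mutex_lock(&log_lock); }
/* ...and release it again in both parent and child afterwards. */
static void unlock_all(void) { pthread_mutex_unlock(&log_lock); }

int
main(void)
{
	pthread_atfork(lock_all, unlock_all, unlock_all);

	/* O_CLOEXEC: this descriptor silently disappears at exec(),
	 * so it can't leak into children you spawn later. */
	int fd = open("/etc/hostname", O_RDONLY | O_CLOEXEC);
	if (fd < 0)
		perror("open");
	/* ...any threads created from here on are covered by the
	 * atfork handlers registered above. */
	return 0;
}
```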

Oh, and posix_spawn() isn't much better. It's designed to be possible to implement on top of existing APIs and so ends up being largely useless without non-standard additions. It doesn't, for example, provide a mechanism to say 'close all file descriptors in the child, except for these ones'.
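
For comparison, a minimal posix_spawn() sketch (again with /bin/ls as a stand-in command, writing to an arbitrary output path). Note that the file-actions list only operates on descriptors you name one at a time, which is exactly the limitation described above:

```c
/* Minimal posix_spawn() sketch. File actions can open, close, or
 * dup2 descriptors you name individually; there is no portable
 * "close everything except these" action. */
#include <fcntl.h>
#include <spawn.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

extern char **environ;

int
main(void)
{
	posix_spawn_file_actions_t fa;
	posix_spawn_file_actions_init(&fa);
	/* Redirect the child's stdout to a file, one fd at a time. */
	posix_spawn_file_actions_addopen(&fa, STDOUT_FILENO,
	    "/tmp/ls.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);

	char *argv[] = { "ls", "-l", NULL };
	pid_t pid;
	int rc = posix_spawn(&pid, "/bin/ls", &fa, NULL, argv, environ);
	posix_spawn_file_actions_destroy(&fa);
	if (rc != 0) {
		fprintf(stderr, "posix_spawn: %s\n", strerror(rc));
		return 1;
	}
	int status;
	waitpid(pid, &status, 0);
	return 0;
}
```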

Comment Re:AI and robotics and jobs (Score 5, Interesting) 625

Sounds like the Brave New World solution is required. Stop demonising drugs, and make freely available drugs that give a sense of euphoria and lethargy. If people don't want to do anything with their lives other than take drugs, then let them get on with it in a non-destructive (to people other than themselves) way and remove themselves painlessly from the gene pool.

Comment Re:Almost Exclusive (Score 1) 232

When an HDD fails, you can still get the data off of it. It's expensive, but it can be done.

At current prices, you can buy several TB of flash for the cost of recovery on a single HDD (which may or may not succeed, depending on the failure mode). If your data is important enough to you to even consider that, then you should probably be backing it up regularly...

Comment Re:Hybrid drives anyone..? (Score 1) 232

Depends on how it's used. Ideally, you want the read cache to be managed by your OS and held in RAM; on the write side, it's quite rare in consumer use to write more than a few hundred MB in a burst. Spinning disks suck at this when the writes are random, but a few GB of non-volatile storage used to buffer writes can give a big performance win. The drive can report that it has committed the data to persistent storage immediately, and the data can be reordered into a pattern that minimises seeks (by the controller, which actually knows about the disk topology) and written out to the platters over the next few minutes. It also helps with the (fairly common) workloads where you interleave reads and writes: the disk does the reads, the flash buffers the writes, and the writes go out to disk once the bursty I/O has finished.
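
As a toy model of that write-back scheme (nothing like real drive firmware, just the idea): acknowledge a burst of random writes immediately, then flush them sorted by disk offset so the head sweeps mostly in one direction.

```c
/* Toy model of the write-back scheme described above: absorb a
 * burst of random writes into buffer entries, then flush them in
 * LBA order so the platter sees mostly sequential I/O. */
#include <stdio.h>
#include <stdlib.h>

struct pending_write {
	long long lba;	/* logical block address; payload omitted */
};

static int
by_lba(const void *a, const void *b)
{
	long long x = ((const struct pending_write *)a)->lba;
	long long y = ((const struct pending_write *)b)->lba;
	return (x > y) - (x < y);
}

int
main(void)
{
	/* A burst of "random" writes lands in the flash buffer and
	 * is acknowledged to the host immediately... */
	struct pending_write buf[8];
	srandom(42);
	for (int i = 0; i < 8; i++)
		buf[i].lba = random() % 1000000;

	/* ...then, at leisure, it is flushed in seek-friendly order. */
	qsort(buf, 8, sizeof(buf[0]), by_lba);
	for (int i = 0; i < 8; i++)
		printf("flush LBA %lld\n", buf[i].lba);
	return 0;
}
```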

Comment Re:Storage. (Score 1) 232

I never understand people who advocate this. For data that you mostly access read-only (i.e. the OS and applications), there's very little benefit from an SSD, because that's data that will live in the OS's RAM cache. The big benefit of SSDs is that they can do lots of random writes: I can easily get 10-30MB/s of random 4KB writes from the SSD in my laptop, whereas a mechanical disk would be closer to 200KB/s...
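
If you want to measure that yourself, here's a rough sketch of the kind of microbenchmark that produces those numbers (the test file path is arbitrary; O_SYNC stops the OS from just absorbing the writes into its RAM cache):

```c
/* Rough random-4KB-write microbenchmark sketch. The test file path
 * is arbitrary; O_SYNC forces each write to persistent storage. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define FILE_SIZE (1024LL * 1024 * 1024)	/* 1 GiB test file */
#define BLOCK	  4096
#define WRITES	  4096

int
main(void)
{
	int fd = open("/tmp/randwrite.bin",
	    O_RDWR | O_CREAT | O_SYNC, 0600);
	if (fd < 0) { perror("open"); return 1; }
	if (ftruncate(fd, FILE_SIZE) < 0) { perror("ftruncate"); return 1; }

	static char buf[BLOCK];		/* zero-filled payload */
	srandom(time(NULL));

	struct timespec start, end;
	clock_gettime(CLOCK_MONOTONIC, &start);
	for (int i = 0; i < WRITES; i++) {
		/* Pick a random block-aligned offset and write 4KB. */
		off_t off = (random() % (FILE_SIZE / BLOCK)) * BLOCK;
		if (pwrite(fd, buf, BLOCK, off) != BLOCK) {
			perror("pwrite");
			return 1;
		}
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	double secs = (end.tv_sec - start.tv_sec) +
	    (end.tv_nsec - start.tv_nsec) / 1e9;
	printf("%.2f MB/s random 4KB writes\n",
	    WRITES * BLOCK / secs / 1e6);
	close(fd);
	return 0;
}
```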

Comment Re:Wrong Summary (Score 1) 166

2: Flash will always be 10x the cost of hard drives. In other words, Flash won't overtake hard drives on price.

That's assuming that hard drives keep getting bigger and cheaper. The amount of R&D money required for each generation of improvement (in most technology) goes up, but the spending for HDDs has gone down as manufacturers see that they're hitting diminishing returns. The number of people who will pay for 4TB disks is lower than the number that will pay for 2TB, which is lower than the number that will pay for 1TB disks and so on. For a lot of users, even 500GB is more than they will need for the lifetime of a disk.

The minimum cost of an SSD is lower than the minimum cost of an HDD: currently, the smallest hard disks I can find are about 300GB, and they cost about as much as a 64GB SSD. If you bring an 8TB disk to market now, you're betting that enough people will buy it at a premium price to recoup your R&D expenses before SSDs (Flash or some other technology) pass it in capacity. But now a lot of the people who traditionally bought the high-end disks are buying SSDs, and they care more about latency and throughput than about capacity. If you show them an insanely expensive disk with 10x the capacity of the current best, most of them will say 'meh', but if you show them a disk that can do 10x the IOPS then they'll ask how much it costs and how soon you can deliver. That gives a big incentive to concentrate R&D spending on SSDs.

Comment Re:10X my white and flabby ass (Score 1) 166

It's not unproductive. There are lots of advantages of SSDs over HDDs, and only one advantage of HDDs over SSDs: price per unit capacity. Saying that comparing price per GB adds nothing to the conversation is like saying that comparing IOPS adds nothing to the conversation. If you remove price comparisons, then there is no reason to think about HDDs at all. The only reason that I have any HDDs today is that SSDs of a similar capacity are not cost effective. Price per GB is the reason that my laptop uses an SSD but my NAS uses three HDDs: the SSD added about 10% to the price of my laptop; doing the same for my NAS would have added over 1000%. If SSDs were within a factor of 2 of the price of HDDs, I'd have bought SSDs instead.

Comment Re:RISC (iPhone) vs. CISC (OSX) (Score 1) 512

Depends on their market segment. Apple already has the infrastructure in place for shipping fat binaries (sorry, 'universal binaries'). If they can persuade developers to ship x86-64 and ARMv8 fat binaries then a MacBook Air with a quad ARMv8 SoC and a touchscreen (and the ability to run iOS and OS X apps) might be a very interesting device. It would be seriously underpowered in comparison to the MacBook Pro, but the current MacBook Air is anyway.
