
Comment Re:Wait a mintue (Score 1) 278

No, but that's not really the point (actually, all of the others have added additional security features, but they all had sandboxing last year). The point is that Firefox does not implement the core security mechanisms that the others all had last year (and, mostly, the year before and the year before that too). This makes it an uninteresting target.

Comment Re:Wait a mintue (Score 1) 278

This is a reliability measure, not a security measure. The process that plugins run with is not sandboxed and runs with ambient authority. It can read every file in the user's home directory and can open arbitrary network connections. If Flash crashes, then it won't crash Firefox (which is a good thing), but if Flash is compromised then it's exactly the same as if Firefox were compromised. In contrast, if Flash is compromised in Safari or Chrome, the attacker has access to a process running with very restricted privileges and an IPC channel to the browser. To do anything useful, the attacker must use the IPC channel to compromise the sandboxed renderer process, then do the same thing again (though likely with a different vulnerability) to compromise the main browser process (the one that runs with ambient authority). You need, at a minimum, three exploits: one in Flash and two in the browser, to get from a malicious Flash app to a user-level compromise in Chrome or Safari. With Firefox, you need just the first one to do the same amount of damage.

Comment Re:What? (Score 1) 278

Now look at the entitlements for that process. It runs without any sandboxing. A crash in the plugin won't crash the browser, but a compromise of that plugin gives the attacker enough privileges to attach a debugger to the main process (on OS X the system will prompt for this, because it looks suspicious, but the compromised process can still open arbitrary network connections and read every file in your home directory). Reliability and security often use similar mechanisms, but don't confuse one for the other.
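The reliability half of that distinction is easy to demonstrate: moving risky code into a separate OS process contains crashes, but it does not contain authority. Here is a minimal Python sketch (nothing to do with any real browser's implementation) in which a "plugin" process segfaults without taking the host process down:

```python
# Illustrative only: a child process standing in for a plugin crashes
# hard, and the parent survives. Process separation alone buys
# reliability, not security - the child still runs with the parent's
# ambient authority (same user, same files, same network access).
import multiprocessing
import os
import signal


def plugin():
    """Stand-in for a plugin that crashes with a segfault."""
    os.kill(os.getpid(), signal.SIGSEGV)


def main():
    proc = multiprocessing.Process(target=plugin)
    proc.start()
    proc.join()
    # A negative exitcode means the child was killed by that signal.
    print(f"plugin exit code: {proc.exitcode}")
    print("browser still alive")


if __name__ == "__main__":
    main()
```

The child dies with a signal and the parent keeps running - but before crashing, the child could have read any of the user's files, which is exactly the gap described above.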

Comment Re:Wait a mintue (Score 4, Informative) 278

The former. All modern browsers except Firefox have decomposed their browser into multiple processes, so that a compromise from one site will only gain control over an unprivileged (i.e. isolated from other stuff the user cares about) process. They also run plugins in separate processes and have fairly narrow communication paths between them. Firefox is still a massive monolithic process, including all add-ons, plugins, and so on.

This basically means that you just need one arbitrary code execution vulnerability in Firefox and it's game over. In contrast, if you have the same in Chrome, Edge, or Safari, then it's just the first step - you now have an environment where you can run arbitrary exploit code, but you can't make (most) system calls and you have to find another exploit to escape from the sandbox. Typical Chrome compromises are the result of chaining half a dozen vulnerabilities together.

Comment Re:This is a big bitchslap to Mozilla (Score 4, Interesting) 278

It also scales based on processor resources. Chrome hits serious TLB scalability issues at around 17 processes (the number varies a bit between CPUs, and on some systems - particularly mobile - you'll hit RAM limits sooner), so if you have more tabs open than that, multiple independent sites will start sharing the same renderer process.
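A toy model of that sharing policy: each new site gets its own renderer process until a cap is reached, after which sites are assigned to existing processes. The cap of 17 and the round-robin reuse are illustrative assumptions here, not Chrome's actual allocation algorithm:

```python
# Sketch of capped process allocation: fresh process per site up to a
# limit, then reuse. Purely illustrative, not Chrome's real policy.
from itertools import cycle

MAX_RENDERERS = 17  # illustrative; the real limit varies with hardware


def assign_renderers(sites, max_renderers=MAX_RENDERERS):
    """Map each site to a renderer process id, reusing ids once the cap is hit."""
    assignment = {}
    reuse = cycle(range(max_renderers))
    for i, site in enumerate(sites):
        if i < max_renderers:
            assignment[site] = i            # fresh process
        else:
            assignment[site] = next(reuse)  # share an existing one
    return assignment


tabs = [f"site{n}.example" for n in range(20)]
mapping = assign_renderers(tabs)
print(len(set(mapping.values())), "processes for", len(tabs), "tabs")
```

With 20 tabs and a cap of 17, the last three sites end up sharing processes that other sites already occupy, which is the security trade-off the parent comment is pointing at.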

Comment Re:tom (Score 1) 119

Typically not to end users though. Microsoft sold the BASIC that computer vendors (including Apple) burned into ROM. Microsoft QuickBASIC for DOS contained a compiler that could produce stand-alone .exe or .com binaries, though the free QBASIC that they bundled with DOS 5 and later was a cut-down version that only included the interpreter.

Comment Re:Turing Evolved (Score 2) 213

Robots don't feel those emotions, and have committed no massacres on that scale. I trust robots more than I trust humans.

Do you trust a gun? Do you trust a bomb? Of course not, because the concept is meaningless: neither will cause harm without instructions from a human, but both can magnify the amount of harm that a human can do. Autonomous weapons, of which landmines are the simplest possible case, expand both the amount of harm a person can do and the time over which they can do it.

During the Cold War, there were at least two incidents where humans refused to follow legitimate orders to launch nuclear weapons - in both cases, the likely outcome of following the orders would have been the deaths of many millions. The worst atrocities of the Second World War were committed by people 'just following orders'. And you think that it's a good idea to remove the part of the chain of command capable of disobeying orders.

Comment Re:Uh... let me think about it (Score 1) 574

The person in your story was relying on his ability to read a map, which sounds pretty reasonable, and his ability to read a compass (which was not such a good plan, if he didn't sanity check it with the direction of the sun). The people in TFA, however, are carrying a device that tells them their precise position in the world to within a few metres. If you're not periodically checking and saying 'hmm, I want to get from here to here and I'm nowhere between the two points' then I think that counts as a bit stupid.
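That periodic check is cheap to express: if the route via your current GPS fix is much longer than the direct origin-to-destination distance, you are "nowhere between the two points". A sketch with illustrative coordinates and an arbitrary 1.2 slack factor:

```python
# Sanity-check a GPS fix against an intended route. The tolerance
# factor and the example coordinates are illustrative assumptions.
from math import radians, sin, cos, asin, sqrt


def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))


def roughly_between(origin, dest, here, slack=1.2):
    """True if `here` lies within `slack` times the direct route length."""
    direct = haversine_km(origin, dest)
    detour = haversine_km(origin, here) + haversine_km(here, dest)
    return detour <= slack * direct


london = (51.5074, -0.1278)
paris = (48.8566, 2.3522)
calais = (50.9513, 1.8587)      # roughly on the way
edinburgh = (55.9533, -3.1883)  # clearly not

print(roughly_between(london, paris, calais))     # True
print(roughly_between(london, paris, edinburgh))  # False
```

A device that already knows your position to within a few metres makes this check essentially free.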

Comment Re:Cores Schmores (Score 1) 136

They didn't: the fastest P4 Xeon outperformed the fastest Athlons, but for any given Athlon the equivalent-speed P4 was a lot more expensive. Once the Opterons came out, that changed: if you wanted the fastest x86 chip you could buy, you bought from AMD, especially in multi-socket configurations (quad-processor Opterons wiped the floor with memory-starved quad Xeons until Intel integrated the memory controller on die). Worse (for Intel), if you were willing to recompile your code, you could get another 20+% out of the Opterons using the x86-64 ISA: more GPRs and cheaper PIC made a big difference, and a floating-point ABI that used SSE exclusively rather than x87 could give a 100% speedup in float-heavy code. Even where the x86-32 compiler was using SSE registers for computation, it was still losing performance moving values to and from the x87 register stack for function calls and returns.

Comment Re:Cores Schmores (Score 3, Informative) 136

The Thunderbird was nice, but it was more of a price/performance winner than an outright performance winner. A 1GHz Thunderbird ran stable at 1.3GHz and performed similarly to a 2GHz Pentium 4 at a fraction of the cost (particularly as the P4 required RAMBUS DRAM, so you could put twice as much DDR in an Athlon system for the same money). It wasn't until the Opteron that AMD really started winning on performance. The integrated DRAM controller was a big win, and being first to 64 bits (which, on x86, means more GPRs, a sane floating-point ISA, and PC-relative addressing) gave them a huge advantage. Unfortunately, they haven't really been competitive since the Core 2, except in market segments where Intel intentionally cripples its offerings (e.g. no more than 2 SATA ports on the Atom Mini-ITX boards to avoid competition with the i3 boards, making AMD the only viable option).

Comment Re: All I know is that this: (Score 2) 273

It's about both cost and risk analysis. If you've got a lot of infrastructure, then you've probably already got a team of decent admins, and adding another server has a very small marginal cost. If you haven't, then the cost is basically the cost of hiring a sysadmin, and even the cheapest full-time sysadmin costs a lot more than you can easily spend with GitHub. Alternatively, you get one of your devs to run it. Now you have a service that is well understood by only one person, where installing security updates (let alone testing them first) is nowhere near the top priority in that person's professional life, and where even at one hour a week spent on sysadmin tasks you're spending a lot more than an equivalent service from GitHub would cost.
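A back-of-the-envelope version of that comparison, with purely illustrative numbers (a hypothetical developer salary and per-seat price, not real GitHub pricing):

```python
# Rough monthly cost of a developer doing part-time sysadmin work plus
# a server, versus a hosted per-seat service. All figures are
# illustrative assumptions for the sake of the arithmetic.
def inhouse_monthly_cost(dev_salary_per_year, hours_per_week,
                         server_per_month=50):
    """Pro-rated developer time plus hosting for a self-run service."""
    hourly = dev_salary_per_year / (52 * 40)
    return hourly * hours_per_week * 52 / 12 + server_per_month


def hosted_monthly_cost(seats, price_per_seat=9):
    """Flat per-seat subscription for a hosted service."""
    return seats * price_per_seat


diy = inhouse_monthly_cost(dev_salary_per_year=100_000, hours_per_week=1)
hosted = hosted_monthly_cost(seats=10)
print(f"DIY: ${diy:,.0f}/month vs hosted: ${hosted:,.0f}/month")
```

Even at a single hour a week, the pro-rated developer time alone exceeds the hosted bill for a small team, before counting the risk of badly maintained infrastructure.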

In both of the latter cases, the competition for GitHub isn't a competent and motivated in-house team. It is almost certainly better to run your own infrastructure well, but the competition for GitHub is running your own infrastructure badly and they're a very attractive proposition in that comparison.

Outsourcing things that are not your core competency is not intrinsically bad. The problem is when people outsource things that are their core competency (e.g. software companies deciding to outsource all of their development - it's not a huge step from there for the people at the outsourcing company to decide to handle the management as well and start up a competitor, with all of the expertise that should be yours), or outsource without doing a proper cost-benefit analysis (beyond 'oh, look, it's cheaper this quarter!').

If you think outsourcing document storage is bad, remember that law firms, hospitals, and so on have been doing it for decades without issues - storing large quantities of paper / microfiche is not their core competency, and there are companies that can, thanks to economies of scale, do it much more cheaply. Oh, and if that still scares you, remember that most companies outsource storing all of their money as well...

Comment Re:The gun is pointing at the foot (Score 1) 428

Something of a biased sample. I've been using Firefox on Android for over a year, and I am very happy with it. I wasn't aware until your post that Mozilla was collecting satisfaction stats, and even now I can't really be bothered to post there - but I probably would if I were unhappy with it. Firefox with the self-destructing cookies add-on is the only mobile browser I've found that gives me the cookie management policy I want.
