
Comment Re:Nice summary, a bit misleading (Score 1) 256

Will it still work when you want to play it on another device? Or when you (or someone who compromises your account) do something that annoys Microsoft and your XBox Live account is deleted? Can you watch it when you're bored because your Internet connection is down? Can you watch it while you're travelling on a train or a plane on your tablet or laptop? What about on the next tablet or laptop you buy, from a different vendor?

Comment Re:Sorry, no. (Score 1) 232

So, step back from the idea of BitCoin for a second - would you use software that used your idle CPU / GPU cycles for some form of distributed computation? How about your bandwidth? For example, would you play a game that ran a BitTorrent client for distributing updates? What about if they also sold space on their tracker for other distributed things, but allowed you to limit to, say, 5KB/s upstream usage? What if they sold GPU time for embarrassingly parallel large-scale scientific computing jobs?
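For concreteness, the "limit to 5KB/s upstream" idea is just a rate limiter; a token bucket is the usual way to do it. A minimal sketch (the class and parameter names are made up for illustration, not from any real client):

```python
import time

class TokenBucket:
    """Toy token-bucket rate limiter for capping upstream bandwidth."""
    def __init__(self, rate_bytes_per_sec, capacity):
        self.rate = rate_bytes_per_sec   # refill rate
        self.capacity = capacity         # max burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def consume(self, nbytes):
        """Return True if nbytes may be sent now, else False (caller waits)."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

# A 5KB/s cap with a 5KB burst allowance:
bucket = TokenBucket(rate_bytes_per_sec=5 * 1024, capacity=5 * 1024)
```

The client would call consume() before each upload chunk and back off when it returns False, so the user-visible knob is just the rate.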

Comment Re:remember when this was for developers? (Score 1) 162

I remember watching one of the WWDC keynotes streamed live when I was still an undergrad, so that would be 2002 / 2003. Mostly it was done as a tech demo of their QuickTime Streaming Server (open sourced as Darwin Streaming Server). So, they almost certainly weren't the first to do live streams like this, but they were doing it a long time before 2010.

Comment Re:No proof. (Score 1) 162

Would that be such a bad thing? People go to these conferences because there's a real benefit in the tutorials and so on that are there. The cost is intended to reflect this, but apparently it failed: the people attending believe they get a lot more value from it than the cost of attending. The only problem with this approach is that it skews the market in favour of established companies, and Apple wants to encourage new developers. This could be fixed by reserving, say, 10% of the tickets for prizes for some competitions along the lines of best independent developer in the App Store, best new app, and so on.

Comment Re:Bias (Score 1) 447

There are a few problems with that. First, it only takes one person to circumvent the DRM and then average customers don't need to do anything: the de-DRM'd copy can be trivially redistributed. Second, it only takes one person to write the software to circumvent the DRM and then it can be packaged up into a one-click UI for the average customer. If you search for 'copy DVD' in any web search engine, you'll find loads of easy-to-use tools. There's no reason that the UI for ripping a DVD has to be more complicated than iTunes' UI for ripping a CD - it's only because of the CSS licensing requirements that iTunes doesn't have this functionality out of the box.

Comment Re:What's Actually Wrong With DRM...? (Score 1) 447

HTML5 is not a political action committee

No, it's a (draft) standard. It is being standardised by the W3C, which is a technical standards committee with a mission statement. The goals of the W3C are inherently incompatible with any standard that defines portions that cannot be implemented in a compatible way by any vendor that wishes to participate in the market, and that includes DRM.

Comment Re:What's Actually Wrong With DRM...? (Score 1) 447

And that means that they will always be unwilling to reach all of their potential market. And that means that there is a market opportunity for companies that are willing to release without DRM.

As TFA says, the lack of DRM on CDs made it possible to sell portable music players like the iPod along with software to rip all of your CDs. And that increased the demand for recorded music. Now, a significant fraction of the big record labels' income derives from selling DRM-free music downloads. The weak DRM on DVDs was thoroughly circumvented, but it was still strong enough legally to block the sale of set-top boxes that ripped your entire DVD collection and exposed it via a nice menu. It was still strong enough that applications like iTunes and Windows Media Player never got the ability to rip DVDs to play back on your computer. It strangled technological innovation in an entire sector.

Most depressingly, it actually acted against the interests of the people pushing it. I'd love to subscribe to a service like Netflix that let me download films in a DRM-free format (it could even limit it to, say, 30 hours a month). I'd love to have an easy source of films to stick on my laptop or my tablet to watch while travelling. I'd even have liked to be able to buy a DVD at the airport to watch on the plane, but half the time the region locking on my laptop's DVD drive would prevent me from being able to play it back, so I don't even bother trying.

Comment Re:What's Actually Wrong With DRM...? (Score 1) 447

Heck, Google is trying to add native code support to HTML5 which is actually PROCESSOR specific.

Actually, the PNaCl stuff that Google is pushing defines an ABI for LLVM IR, which is platform independent. It defines a 32-bit address space and a set of calling conventions that don't match any host platform, so you need some adaptors for calling into native libraries, but that's fine because you should only be able to call into native libraries via well-defined interfaces anyway for security.

Comment Re:wtf, mate? (Score 3, Insightful) 390

There's a difference between being opposed to porn in public and being in favour of government-mandated censorship. No one has yet produced a porn filter that restricts access to all porn but doesn't restrict access to anything else, so we'll either end up with a system that has false negatives and still allows porn through (in which case, why bother?) or one that has false positives and blocks things that should be completely acceptable (in which case it's very easy to abuse). Worse, this will likely end up with the same lack of accountability that the IWF ended up with: the government didn't legislate the block list, they just threatened the ISPs with stricter legislation if they didn't 'voluntarily' comply, so you have a private organisation with no public oversight responsible for censoring almost every UK web connection.
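The false-positive / false-negative tradeoff is easy to quantify once you have labelled data; a toy calculation (the example lists are made up):

```python
def filter_rates(predictions, labels):
    """False-positive and false-negative rates for a binary filter.

    predictions/labels: lists of booleans (True = 'should be blocked').
    """
    fp = sum(p and not l for p, l in zip(predictions, labels))  # over-blocked
    fn = sum(not p and l for p, l in zip(predictions, labels))  # slipped through
    negatives = sum(not l for l in labels)
    positives = sum(l for l in labels)
    return fp / negatives, fn / positives

# Hypothetical filter output vs ground truth:
fp_rate, fn_rate = filter_rates([True, True, False, False],
                                [True, False, True, False])
```

Tuning a real filter just slides weight between these two numbers; it can't drive both to zero, which is the point made above.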

And, as the other poster pointed out, there is a difference between a public hotspot and being in a public place.

Comment Re:Whatever! PowerPC been doing 64-bit (Score 1) 332

Being strong on floating point is an aspect of the implementation more than the ISA. SGI's MIPS chips devoted a lot of designer effort and silicon to floating point, because that's what their customers wanted. In MIPS, however, there is a generic coprocessor interface supporting 4 coprocessors. CP0 is the system management coprocessor, which does all of the things like TLB management. CP1 is traditionally the FPU, and CP3 is sometimes the SIMD unit. CP2 is usually some manufacturer-specific extension. Cavium's Octeons, for example, put some network processing acceleration functions into CP2, but I don't think they implement CP1, or if they do it's likely a single floating point pipeline shared between cores. With a multithreaded CPU and a well-designed memory controller, you can have enough threads blocking on reads that you can handle one read and one write every cycle and completely saturate the bus, which is exactly what you want for network processing.

Comment Re:Cheap at half the price! (Score 4, Interesting) 218

The whole "being uninhabitable for hundreds of thousands of years" means that you have extremely large amounts of something that is stable enough to be safe to handle.

It depends a lot on the contaminant. Something like tritium is a problem, for example, because it has a relatively short half life, but it will bond to oxygen and form water and if you drink it then it can cause serious problems. Radon gas is also a problem (present in a lot of places with granite) because it is heavier than air and so accumulates in any enclosed space: if you breathe it in then it is quite dangerous.
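The arithmetic behind this is just exponential decay: short half-life means intense activity now but nothing left soon, long half-life means the reverse. A quick illustration (tritium's half-life is roughly 12.3 years, plutonium-239's roughly 24,000):

```python
def remaining_fraction(elapsed_years, half_life_years):
    """Fraction of a radioactive sample remaining after elapsed_years."""
    return 0.5 ** (elapsed_years / half_life_years)

# After a century, a tritium contaminant is essentially gone...
tritium_left = remaining_fraction(100, 12.3)      # well under 1%
# ...while Pu-239 has barely decayed at all (hence "stable enough to handle",
# but also why it stays a chemical-toxicity problem for millennia).
plutonium_left = remaining_fraction(100, 24_000)
```

So "dangerous for hundreds of thousands of years" and "intensely radioactive" are nearly opposite properties of the same decay curve.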

There's also the problem that a lot of the byproducts of a nuclear reactor are only mildly radioactive, but highly toxic for other reasons. The low decay rate means that they remain toxic chemicals for a long time. On the other hand, this isn't too different from any other chemical plant if there's an accident.

Comment Re:Experiment (Score 1) 297

One of the things I've learned from open source development is that it's really helpful to have some deadlines and milestones. If your group can set these itself, it works well. If not, then you end up with the Valve Time phenomenon, where things keep slipping because you want to get one more feature in, or one more bug fixed.

In a commercial setting, it's also often not enough to just get the work done. You have to also get it to your customers. This means that you need sales people to be aware of when you can deliver it and what it will do. You need to pay developers, which means that you need to ensure that the money going out is not more than the money coming in, except over short periods when you can cover it from reserves. This also requires that you be aware of when things will be finished and how much you are charging.

I did a lot of freelance work where the customer set a problem, I gave them a time estimate, and by the end I presented them with a solution. That works well, but it's hard to scale.

Comment Re:Whatever! PowerPC been doing 64-bit (Score 1) 332

So anybody could implement any of these CPUs independently of the companies that started them, and not worry about any patent violations?

Well, not quite. It's definitely possible to implement a CPU that is compatible with a 20-year-old CPU without infringing any patents, because any patents that were used in the original have now expired. That doesn't mean that the implementation will be patent free, however. For example, our branch predictor is cleverer than any shipping CPU 20 years ago, but I've not done a patent search so I don't know if it's covered by more recent patents. I intentionally avoided some techniques I know to be patented, but some of the older ones might have been patented around 1996-2000ish. That said, if we have to change the branch predictor to avoid patents then it's not a big deal - it doesn't change the ISA (and our CPU runs in an FPGA, so deploying a new version takes a few hours).
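For a sense of what a safely-unpatented predictor from that era looks like, here's the textbook 2-bit saturating-counter scheme (this is the classic Smith predictor, not the design referred to above):

```python
class TwoBitPredictor:
    """Classic 2-bit saturating-counter branch predictor.

    Each table entry holds a counter 0-3: 0-1 predict not-taken,
    2-3 predict taken. Two mispredictions are needed to flip a
    strongly-biased entry, which tolerates loop-exit branches well.
    """
    def __init__(self, table_size=1024):
        self.table = [1] * table_size  # start weakly not-taken
        self.mask = table_size - 1     # table_size must be a power of two

    def predict(self, pc):
        return self.table[pc & self.mask] >= 2

    def update(self, pc, taken):
        i = pc & self.mask
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)
```

Anything this old is clearly patent-free; it's the newer tricks (perceptron predictors, TAGE-style tagged histories) where a patent search would matter.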

By MIPS IV, you mean the R8000, right, or is it R5000?

R8000 was the first MIPS IV chip, yes.

I'm curious about whether Oracle would then come out w/ a SPARC v10 in that case, although it's not that there are others aside from them who would be interested in such a CPU.

Well, Sun did something similar with the UltraSPARC spec. SPARCv9 doesn't specify the privileged mode instruction set, and this used to vary. The UltraSPARC spec that they released along with the T1 specified this and was intended to provide a stable interface for operating systems. Some parts of that may have been difficult to implement without trampling on patents. There are also likely to be issues with things like SIMD extensions.

I'd love to see some of these CPUs, such as the MIPS, get resurrected and used in newer IPv6 routers and other networking gear.

All Juniper routers run a tweaked FreeBSD on 64-bit MIPS and a lot of low-end routers use 32-bit MIPS chips.

With more open hardware, it would be easier to manage

The problem is always competing with companies that have large volumes. Intel is basically a process generation ahead of everyone else, and they have economies of scale that only the big ARM SoC vendors can match. There isn't really a business case for using a reimplementation of an old CPU architecture when you can get a Cortex A15 that will outperform it for less money. The only reason you'd want to is so you can add some custom instruction set extensions that massively speed up your workload, and even then they'd need to give an order of magnitude speedup for it to be a net win. In Juniper's case, they go with in-order MIPS cores with large SMP and SMT support, so that they can do a lot of relatively simple packet processing tasks in parallel. There isn't much branching or floating point in packet filtering - it's mostly integer arithmetic, a large amount of data shuffling, and a lot of data dependencies that completely kill performance on superscalar or out-of-order chips - so general-purpose CPUs (and GPUs) are optimised in all of the wrong places. Two simple in-order cores can be faster for them than one complex out-of-order superscalar core.
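To see why packet filtering suits simple cores, note that a rule match is nothing but integer compares on header fields, with each packet independent of the others (so it parallelises trivially across cores/threads). A toy sketch, with made-up field names:

```python
# Each rule is (protocol number, destination port, action); first match wins.
# Every operation here is an integer comparison - no floating point,
# little branching structure - which is why many simple in-order
# cores beat one big out-of-order core on this workload.
def match(packet, rules):
    """Return the action of the first matching rule, or 'accept' by default."""
    for proto, dst_port, action in rules:
        if packet['proto'] == proto and packet['dst_port'] == dst_port:
            return action
    return 'accept'

rules = [(6, 23, 'drop'),     # TCP to telnet: drop
         (17, 53, 'accept')]  # UDP DNS: accept
```

Real filters compile rule sets into tries or TCAM lookups, but the per-packet work stays integer-only and embarrassingly parallel.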
