Comment Re:remember when this was for developers? (Score 1) 162

I remember watching one of the WWDC keynotes streamed live when I was still an undergrad, so that would be 2002 / 2003. Mostly it was done as a tech demo of their QuickTime Streaming Server (open sourced as Darwin Streaming Server). So, they almost certainly weren't the first to do live streams like this, but they were doing it a long time before 2010.

Comment Re:No proof. (Score 1) 162

Would that be such a bad thing? People go to these conferences because there's a real benefit in the tutorials and other sessions. The cost is intended to reflect this, but apparently the pricing failed to do so: the people attending believe they get far more value from it than the cost of attending. The only problem with this approach is that it skews the market in favour of established companies, and Apple wants to encourage new developers. This could be fixed by reserving, say, 10% of the tickets as prizes for competitions along the lines of best independent developer in the App Store, best new app, and so on.

Comment Re:Bias (Score 1) 447

There are a few problems with that. First, it only takes one person to circumvent the DRM and then average customers don't need to do anything: the de-DRM'd copy can be trivially redistributed. Second, it only takes one person to write the software to circumvent the DRM, and then it can be packaged up behind a one-click UI for the average customer. If you search for 'copy DVD' in any web search engine, you'll find loads of easy-to-use tools. There's no reason that the UI for ripping a DVD has to be more complicated than iTunes' UI for ripping a CD - it's only because of the CSS licensing requirements that iTunes doesn't have this functionality out of the box.

Comment Re:What's Actually Wrong With DRM...? (Score 1) 447

HTML5 is not a political action committee

No, it's a (draft) standard. It is being standardised by the W3C, which is a technical standards committee with a mission statement. The goals of the W3C are inherently incompatible with any standard that defines portions that cannot be implemented in a compatible way by any vendor that wishes to participate in the market, and that includes DRM.

Comment Re:What's Actually Wrong With DRM...? (Score 1) 447

And that means that they will always be unwilling to reach all of their potential market. And that means that there is a market opportunity for companies that are willing to release without DRM.

As TFA says, the lack of DRM on CDs made it possible to sell portable music players like the iPod along with software to rip all of your CDs. And that increased the demand for recorded music. Now, a significant fraction of the big record labels' income derives from selling DRM-free music downloads. The weak DRM on DVDs was thoroughly circumvented, but it was still legally strong enough to block the sale of set-top boxes that ripped your entire DVD collection and exposed it via a nice menu. It was still strong enough that applications like iTunes and Windows Media Player never got the ability to rip DVDs to play back on your computer. It strangled technological innovation in an entire sector.

Most depressingly, it actually acted against the interests of the people pushing it. I'd love to subscribe to a service like Netflix that let me download films in a DRM-free format (it could even limit it to, say, 30 hours a month). I'd love to have an easy source of films to stick on my laptop or my tablet to watch while travelling. I'd even have liked to be able to buy a DVD at the airport to watch on the plane, but half the time the region locking on my laptop's DVD drive would prevent me from being able to play it back, so I don't even bother trying.

Comment Re:What's Actually Wrong With DRM...? (Score 1) 447

Heck, Google is trying to add native code support to HTML5 which is actually PROCESSOR specific.

Actually, the PNaCl stuff that Google is pushing defines an ABI for LLVM IR, which is platform independent. It defines a 32-bit address space and a set of calling conventions that don't match any host platform, so you need some adaptors for calling into native libraries, but that's fine because you should only be able to call into native libraries via well-defined interfaces anyway for security.
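
To make the fixed address space concrete, here's a minimal sketch in C. The 32-bit pointer claim follows from the ABI described above; treat the toolchain details (e.g. pnacl-clang from the SDK) as an assumption rather than a recipe:

    #include <stdio.h>

    /* Under the PNaCl ABI, pointers are 32-bit even on a 64-bit host,
     * because the portable bitcode targets a fixed 32-bit address
     * space. Compiled natively, the same code would print 8 on a
     * 64-bit machine. */
    int main(void)
    {
        printf("sizeof(void *) = %u\n", (unsigned)sizeof(void *));
        return 0;
    }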

Comment Re:wtf, mate? (Score 3, Insightful) 390

There's a difference between being opposed to porn in public and being in favour of government-mandated censorship. No one has yet produced a porn filter that restricts access to all porn but doesn't restrict access to anything else, so we'll either end up with a system that has false negatives and still lets porn through (in which case, why bother?) or one that has false positives and blocks things that should be completely acceptable (in which case it's very easy to abuse). Worse, this will likely end up with the same lack of accountability that the IWF ended up with: the government didn't legislate the block list, it just threatened the ISPs with stricter legislation if they didn't 'voluntarily' comply, so you have a private organisation with no public oversight responsible for censoring almost every UK web connection.

And, as the other poster pointed out, there is a difference between a public hotspot and being in a public place.

Comment Re:Whatever! PowerPC been doing 64-bit (Score 1) 332

Being strong on floating point is an aspect of the implementation more than the ISA. SGI's MIPS chips devoted a lot of designer effort and silicon to floating point, because that's what their customers wanted. In MIPS, however, there is a generic coprocessor interface supporting 4 coprocessors. CP0 is the system management coprocessor, which does all of the things like TLB management. CP1 is traditionally the FPU, and CP3 is sometimes the SIMD unit. CP2 is usually some manufacturer-specific extension. Cavium's Octeons, for example, put some network processing acceleration functions into CP2, but I don't think they implement CP1, or if they do it's likely a single floating point pipeline shared between cores. With a multithreaded CPU and a well-designed memory controller, you can have enough threads blocking on reads that you can handle one read and one write every cycle and completely saturate the bus, which is exactly what you want for network processing.
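
As a concrete example of the CP0 interface, here's a sketch (assuming GCC-style inline assembly on a MIPS target) of reading the free-running Count register:

    #include <stdint.h>

    /* Read the CP0 Count register (coprocessor 0, register 9), a
     * free-running counter on most MIPS implementations. mfc0 is the
     * 'move from coprocessor 0' instruction; on real hardware this
     * needs kernel mode unless the kernel enables user access. */
    static inline uint32_t read_cp0_count(void)
    {
        uint32_t count;
        __asm__ __volatile__("mfc0 %0, $9" : "=r"(count));
        return count;
    }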

Comment Re:Cheap at half the price! (Score 4, Interesting) 218

The whole "being inhabitable for hundreds of thousands of years" means that you have extremely large amounts of something that is stable enough to be safe to handle.

It depends a lot on the contaminant. Something like tritium is a problem, for example: it has a relatively short half-life (which means a high decay rate while it lasts), and it bonds with oxygen to form water, so if you drink it then it can cause serious problems. Radon gas is also a problem (it's present in a lot of places with granite) because it is heavier than air and so accumulates in any enclosed space, where breathing it in is quite dangerous.

There's also the problem that a lot of the byproducts of a nuclear reactor are only mildly radioactive, but highly toxic for other reasons. The low decay rate means that they remain toxic chemicals for a long time. On the other hand, this isn't too different from any other chemical plant if there's an accident.

Comment Re:Experiment (Score 1) 297

One of the things I've learned from open source development is that it's really helpful to have some deadlines and milestones. If your group can set these itself, it works well. If not, then you end up with the Valve Time phenomenon, where things keep slipping because you want to get one more feature in, or one more bug fixed.

In a commercial setting, it's also often not enough to just get the work done: you also have to get it to your customers. This means that sales people need to be aware of when you can deliver it and what it will do. You need to pay developers, which means ensuring that the money going out is not more than the money coming in, except over short periods when you can cover the difference from reserves. That, in turn, requires knowing when things will be finished and how much you are charging.

I did a lot of freelance work where the customer set a problem, I gave them a time estimate, and by the end I presented them with a solution. That works well, but it's hard to scale.

Comment Re:Whatever! PowerPC been doing 64-bit (Score 1) 332

So anybody could implement any of these CPUs independently of the companies that started them, and not worry about any patent violations?

Well, not quite. It's definitely possible to implement a CPU that is compatible with a 20-year-old CPU without infringing any patents, because any patents covering the original have now expired. That doesn't mean that the implementation will be patent free, however. For example, our branch predictor is cleverer than the one in any CPU shipping 20 years ago, but I've not done a patent search so I don't know if it's covered by more recent patents. I intentionally avoided some techniques I know to be patented, but some of the older ones might have been patented around 1996-2000ish. That said, if we have to change the branch predictor to avoid patents then it's not a big deal - it doesn't change the ISA (and our CPU runs in an FPGA, so deploying a new version takes a few hours).
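
For a sense of what a safely patent-expired 20-year-old technique looks like, here's an illustrative sketch of the classic two-bit saturating-counter predictor (not the predictor described above, just the textbook early-1990s design):

    #include <stdbool.h>
    #include <stdint.h>

    /* Two-bit saturating-counter predictor: one counter per table
     * entry, values 2-3 predict taken, 0-1 predict not taken. Each
     * outcome nudges the counter one step, so a single anomalous
     * branch doesn't flip the prediction. */
    #define PHT_SIZE 1024
    static uint8_t pht[PHT_SIZE]; /* pattern history table */

    static bool predict_taken(uint32_t pc)
    {
        return pht[(pc >> 2) % PHT_SIZE] >= 2;
    }

    static void train(uint32_t pc, bool taken)
    {
        uint8_t *ctr = &pht[(pc >> 2) % PHT_SIZE];
        if (taken && *ctr < 3)
            (*ctr)++;
        else if (!taken && *ctr > 0)
            (*ctr)--;
    }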

By MIPS IV, you mean the R8000, right, or is it R5000?

R8000 was the first MIPS IV chip, yes.

I'm curious about whether Oracle would then come out w/ a SPARC v10 in that case, although it's not that there are others aside from them who would be interested in such a CPU.

Well, Sun did something similar with the UltraSPARC spec. SPARCv9 doesn't specify the privileged mode instruction set, and this used to vary. The UltraSPARC spec that they released along with the T1 specified this and was intended to provide a stable interface for operating systems. Some parts of that may have been difficult to implement without trampling on patents. There are also likely to be issues with things like SIMD extensions.

I'd love to see some of these CPUs, such as the MIPS, get resurrected and used in newer IPv6 routers and other networking gear.

All Juniper routers run a tweaked FreeBSD on 64-bit MIPS and a lot of low-end routers use 32-bit MIPS chips.

With more open hardware, it would be easier to manage

The problem is always competing with companies that have large volumes. Intel is basically a process generation ahead of everyone else, and they have economies of scale that only the big ARM SoC vendors can match. There isn't really a business case for using a reimplementation of an old CPU architecture when you can get a Cortex A15 that will outperform it for less money. The only reason you'd want to is so you can add some custom instruction set extensions that massively speed up your workload, and even then they'd need to give an order of magnitude speedup for it to be a net win. In Juniper's case, they go with in-order MIPS cores with large SMP and SMT support, so that they can do a lot of relatively simple packet processing tasks in parallel. There isn't much branching or floating point in packet filtering (it's mostly integer arithmetic, a lot of data shuffling, and a lot of data dependencies that completely kill performance on superscalar or out-of-order chips), so general-purpose CPUs (and GPUs) are optimised in all of the wrong places. Two simple in-order cores for them can be faster than one complex out-of-order superscalar core.
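
To illustrate the workload, here's a sketch of the standard RFC 1071 Internet checksum, the kind of inner loop packet filtering is made of: all integer adds, with a serial dependency through the accumulator, so a wide out-of-order core has little to extract while many simple cores can each run one flow like this in parallel:

    #include <stddef.h>
    #include <stdint.h>

    /* RFC 1071 Internet checksum: pure integer work, almost no
     * branching, and every addition depends on the previous one - the
     * profile that favours many simple in-order cores. */
    static uint16_t inet_checksum(const uint8_t *data, size_t len)
    {
        uint32_t sum = 0;
        while (len > 1) {
            sum += ((uint32_t)data[0] << 8) | data[1];
            data += 2;
            len -= 2;
        }
        if (len)              /* odd trailing byte */
            sum += (uint32_t)data[0] << 8;
        while (sum >> 16)     /* fold carries back into the low 16 bits */
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
    }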

Comment Re:Already done (Score 1) 242

The incentive for manufacturers to ship USB peripherals was the existence of a critical mass of devices supporting said standard, which the iMac most certainly was not, and the appearance of better use cases for said interface

No, it was a critical mass of users who wanted those peripherals. USB interfaces are more complex than PS/2, so USB mice and keyboards were more expensive than their PS/2 equivalents. It took economies of scale to get the prices down to similar levels, and these wouldn't have happened without something like the iMac selling in large quantities, because PC users (including myself) in 1998-2000 looked at buying a new mouse or keyboard, saw the USB version was more expensive than the PS/2 version, and went with the cheap one.

also USB was better, technically, and enabled for example scanners that did not need their own expensive interface card, or cheaper Winprinters because USB had the bandwidth to push rendering in the driver instead of the device

I'll give you scanners: I owned a parallel-port scanner and it took ages to transfer the images. For decent scanners the alternative was SCSI, and that was a lot more expensive. On the other hand, scanners were (and still are) a relatively small subset of the peripherals market. For printers, there was little advantage to USB. A parallel port could push data at the rate of the print head of a decent inkjet printer. We had a number of inkjets that did all of the rasterisation on the host computer and pushed the data. Laser winprinters never really caught on to any great degree, and at the higher end, Ethernet interfaces let you avoid the bandwidth limitations.

What I was getting at is that USB was an established standard that Apple adopted and promoted but did not create it

And Thunderbolt, which was the topic of the original post, is also an Intel standard that Apple adopted and promoted but did not create.

and wasn't either the first or the largest manufacturer to use it

It was the first manufacturer to force users to use it. As I said, I bought a PC in 1996 with USB ports, but there were no USB peripherals for it. When they were introduced, they were more expensive than the alternatives, and driver support sucked. I had absolutely no incentive to use USB. iMac users at the same time had no alternative (the iMac also shipped with the least usable mouse ever, so almost everyone who bought one went out looking for a less-crap USB mouse after a few weeks of using it).

If we want to see what happens when Apple creates and is the first and largest adopter of a new technology, Firewire is a better example. Was it better then alternative technologies?

FireWire was only better than USB for devices that needed isochronous transfer or really high bandwidth. That basically meant video cameras and external hard drives, neither of which was a big enough market to drive a new standard. Video cameras were almost exclusively FireWire until USB 2.0, which was almost as good (supported isochronous transfer, ran almost as fast) and cheaper to implement. FireWire replaced SCSI on external hard drives, but most also added USB because it was cheap.

5% is not enough

5% of the market for mice and keyboards is a huge market. 5% of the market for printers is still pretty big. 5% of the market for external disk drives and video cameras is a tiny market and not worth focussing on, unless it's the ultra-high end where every sale rakes in a huge profit (and FireWire still has a reasonable presence there).

Comment Re:Certification (Score 1) 953

Obviously it will cost more than $10K, but look at the numbers I actually wrote in my post. If it sells to opticians and you need one license per office, then that's $40M if the three biggest opticians in the UK alone were to buy it. $40M is a hell of a lot to spend on an in-house application, even including certification.
