Comment Re:Cheap at half the price! (Score 4, Interesting) 218

The whole "being inhabitable for hundreds of thousands of years" means that you have extremely large amounts of something that is stable enough to be safe to handle.

It depends a lot on the contaminant. Something like tritium is a problem, for example, because it has a relatively short half life (so it is quite active), and it bonds with oxygen to form water, so if you drink that water it can cause serious problems. Radon gas is also a problem (it's present in a lot of places with granite) because it is heavier than air and so accumulates in any enclosed space; if you breathe it in, it is quite dangerous.

There's also the problem that a lot of the byproducts of a nuclear reactor are only mildly radioactive, but highly toxic for other reasons. The low decay rate means that they remain toxic chemicals for a long time. On the other hand, this isn't too different from any other chemical plant if there's an accident.

Comment Re:Experiment (Score 1) 297

One of the things I've learned from open source development is that it's really helpful to have some deadlines and milestones. If your group can set these itself, it works well. If not, then you end up with the Valve Time phenomenon, where things keep slipping because you want to get one more feature in, or one more bug fixed.

In a commercial setting, it's also often not enough to just get the work done. You also have to get it to your customers. This means that your sales people need to know when you can deliver it and what it will do. You need to pay developers, which means that you need to ensure that the money going out is not more than the money coming in, except over short periods when you can cover the difference from reserves. That, too, requires knowing when things will be finished and how much you are charging.

I did a lot of freelance work where the customer set a problem, I gave them a time estimate, and by the end I presented them with a solution. That works well, but it's hard to scale.

Comment Re:Whatever! PowerPC been doing 64-bit (Score 1) 332

So anybody could implement any of these CPUs independently of the companies that started them, and not worry about any patent violations?

Well, not quite. It's definitely possible to implement a CPU that is compatible with a 20-year-old CPU without infringing any patents, because any patents that were used in the original have now expired. That doesn't mean that the implementation will be patent free, however. For example, our branch predictor is cleverer than the one in any CPU that was shipping 20 years ago, but I've not done a patent search, so I don't know whether it's covered by more recent patents. I intentionally avoided some techniques I know to be patented, but some of the older ones might have been patented around 1996-2000. That said, if we have to change the branch predictor to avoid patents then it's not a big deal: it doesn't change the ISA (and our CPU runs in an FPGA, so deploying a new version takes a few hours).

By MIPS IV, you mean the R8000, right, or is it R5000?

R8000 was the first MIPS IV chip, yes.

I'm curious about whether Oracle would then come out w/ a SPARC v10 in that case, although it's not as if there are others aside from them who would be interested in such a CPU.

Well, Sun did something similar with the UltraSPARC spec. SPARCv9 doesn't specify the privileged mode instruction set, and this used to vary. The UltraSPARC spec that they released along with the T1 specified this and was intended to provide a stable interface for operating systems. Some parts of that may have been difficult to implement without trampling on patents. There are also likely to be issues with things like SIMD extensions.

I'd love to see some of these CPUs, such as the MIPS, get resurrected and used in newer IPv6 routers and other networking gear.

All Juniper routers run a tweaked FreeBSD on 64-bit MIPS and a lot of low-end routers use 32-bit MIPS chips.

With more open hardware, it would be easier to manage

The problem is always competing with companies that have large volumes. Intel is basically a process generation ahead of everyone else, and they have economies of scale that only the big ARM SoC vendors can match. There isn't really a business case for using a reimplementation of an old CPU architecture when you can get a Cortex A15 that will outperform it for less money. The only reason you'd want to is so that you can add custom instruction set extensions that massively speed up your workload, and even then they'd need to give an order of magnitude speedup for it to be a net win. In Juniper's case, they go with in-order MIPS cores with lots of SMP and SMT support, so that they can do a lot of relatively simple packet processing tasks in parallel. There isn't much branching or floating point in packet filtering (it's mostly integer arithmetic, a large amount of data shuffling, and long chains of data dependencies that completely kill performance on superscalar or out-of-order chips), so general-purpose CPUs (and GPUs) are optimised in all of the wrong places. Two simple in-order cores can be faster for them than one complex out-of-order superscalar core.
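
To make the data-dependency point concrete, here's a minimal C sketch of an IP-style checksum loop. It's purely illustrative (the function name and layout are mine, not anything from a real router): every addition depends on the running sum, so a wide out-of-order core finds almost no instruction-level parallelism to exploit, and the parallelism that does exist is across packets, which is exactly what lots of simple in-order cores with SMT are good at.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative ones'-complement checksum over a packet buffer.
 * The serial dependency on 'sum' limits single-core speedup. */
static uint16_t checksum16(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;

    for (size_t i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)((buf[i] << 8) | buf[i + 1]); /* depends on previous sum */
    if (len & 1)
        sum += (uint32_t)buf[len - 1] << 8;            /* trailing odd byte */
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);            /* fold carries back in */
    return (uint16_t)~sum;
}
```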

Comment Re:Already done (Score 1) 242

The incentive for manufacturers to ship USB peripherals was the existence of a critical mass of devices supporting said standard, which the iMac most certainly was not, and the appearance of better use cases for said interface

No, it was a critical mass of users who wanted those peripherals. USB interfaces are more complex than PS/2, so mice and keyboards were more expensive when they had USB interfaces. It took economies of scale to get the prices down to similar levels, and those wouldn't have happened without something like the iMac selling in large quantities, because PC users (including myself) in 1998-2000 looked at buying a new mouse or keyboard, saw that the USB version was more expensive than the PS/2 version, and went with the cheaper one.

also USB was better, technically, and enabled for example scanners that did not need their own expensive interface card, or cheaper Winprinters because USB had the bandwidth to push rendering in the driver instead of the device

I'll give you scanners, because I owned a parallel port scanner and it took ages for the images to transfer. For decent scanners the alternative was SCSI, and that was a lot more expensive. On the other hand, scanners were (and still are) a relatively small subset of the peripherals market. For printers, there was little advantage to USB. A parallel port could push data at the rate of the print head of a decent inkjet printer. We had a number of inkjets that did all of the rasterisation on the host computer and just pushed the data. Laser winprinters never really caught on to any great degree, and at the higher end Ethernet interfaces let you avoid the bandwidth limitations.

What I was getting at is that USB was an established standard that Apple adopted and promoted but did not create it

And Thunderbolt, which was the topic of the original post, is also an Intel standard that Apple adopted and promoted but did not create.

and wasn't either the first or the largest manufacturer to use it

It was the first manufacturer to force users to use it. As I said, I bought a PC in 1996 with USB ports, but there were no USB peripherals for it. When they were introduced, they were more expensive than the alternatives, and driver support sucked. I had absolutely no incentive to use USB. iMac users at the same time had no alternative (the iMac also shipped with the least usable mouse ever, so almost everyone who bought one went out looking for a less-crap USB mouse after a few weeks of using it).

If we want to see what happens when Apple creates and is the first and largest adopter of a new technology, FireWire is a better example. Was it better than alternative technologies?

FireWire was only better than USB for devices that needed isochronous transfer or really high bandwidth. That basically meant video cameras and external hard drives, neither of which was a big enough market to drive a new standard. Video cameras were almost exclusively FireWire until USB 2.0, which was almost as good (supported isochronous transfer, ran almost as fast) and cheaper to implement. FireWire replaced SCSI on external hard drives, but most also added USB because it was cheap.

5% is not enough

5% of the market for mice and keyboards is a huge market. 5% of the market for printers is still pretty big. 5% of the market for external disk drives and video cameras is a tiny market and not worth focussing on, unless it's the ultra-high end where every sale rakes in a huge profit (and FireWire still has a reasonable presence there).

Comment Re:Certification (Score 1) 953

Obviously it will cost more than $10K, but look at the numbers I actually wrote in my post. If it sells to opticians and you need one license per office, then that's $40M if the three biggest opticians in the UK alone were to buy it. $40M is a hell of a lot to spend on an in-house application, even including certification.

Comment Re:Did it really work? (Score 1) 332

I'm not sure what part of my post you think you're referring to. My point about SSE is that all x86-64 chips support it, while only some x86-32 chips do. It is part of the basic ISA, not an extension. This means that ABIs use it (for example, the SysV ABI uses SSE registers for parameter passing and value returning). And, because it's always there, the compiler can use it. It is vastly easier to generate code for a machine that has 16 registers, any of which can be used as operands for any instruction, than for one where you have 8 registers, most operations can only use two of them, and many have the side effect of moving every value up or down one register. You often end up with a lot of spills in x87 calculations because register allocation and instruction selection are really hard.
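
For a rough picture of the difference, here's a trivial function with the kind of code a compiler tends to emit for it in the comments. Treat the assembly as illustrative rather than exact; real output varies with compiler and flags, and the function name is mine.

```c
/* Roughly what a compiler emits for this function (illustrative only):
 *
 * x86-32 with x87 (stack machine; operands are pushed and popped):
 *     flds   4(%esp)      # push a onto the FP stack
 *     fmuls  8(%esp)      # st(0) = a * b
 *     fadds  12(%esp)     # st(0) = a * b + c
 *                         # result returned in st(0)
 *
 * x86-64 with SSE (flat register file; a, b, c arrive in %xmm0-%xmm2
 * under the SysV ABI, and any of the 16 xmm registers can be used):
 *     mulss  %xmm1, %xmm0   # xmm0 = a * b
 *     addss  %xmm2, %xmm0   # xmm0 = a * b + c
 *                           # result returned in %xmm0
 */
float muladd(float a, float b, float c)
{
    return a * b + c;
}
```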

Comment Re:Already done (Score 2) 242

Do you remember the computing landscape then? I had a PC back in 1996 (possibly 1997) with two USB ports. Windows NT 4 and Windows 95 didn't support them, there were no peripherals available for them, and they went unused. Windows 95 OEM Service Release 2.1, which added support for USB, wasn't released until August 1997. Every PC came with a PS/2 keyboard and mouse (occasionally an RS-232 mouse, but those were mostly phased out by then; 5-pin DIN keyboards were still common though). Every Mac came with ADB input devices. PC printers were all parallel. USB mice were rare and expensive. Ditto keyboards. I didn't even see a USB printer.

Then, in 1998, Apple introduced the iMac with no ADB ports. In fact, no legacy interfaces at all. Any peripheral maker that wanted to sell to Apple customers (and ride on the back of the massive ad campaign for the iMac) sold USB versions of their products. If you walked around a computer store around 1999 or 2000, every USB peripheral came in translucent plastic to go with the iMac. When I bought a new computer in 2000, it still came with PS/2 ports, and it still came with a PS/2 keyboard and mouse, but when I bought a replacement it came with both PS/2 and USB support.

Without the critical mass of people who could only use USB devices, there would have been far less incentive for manufacturers to start shipping USB peripherals. Why add USB circuitry when everyone has PS/2 anyway? Because that gives you access to the 5% of the market that's buying a Mac for a relatively small extra cost.

Comment Re:Certification (Score 1) 953

Before you start talking open source, blah blah, let's not forget that this is a highly specialized application with very little general appeal, and no geek factor

Open source does not mean community developed, it means customer owned. If it were open source software, then the developers would be paid by the users. My optician is part of a chain that employs 30,000 people and has 1,390 branches. It's the largest optician chain in the UK. The next two employ about 5K each and have about 1,000 branches between them. Now, between them, do you think that they could afford to employ 3 developers full time? Do you think that would cost them more or less than paying $10K per branch every few years for the software? Assuming that the software costs $10K per site, not per computer, that's $40M per release. That much will keep 3 developers employed full time for quite a while. Oh, and because the software would be developed by the company using it for exactly the purpose that they need, it will likely serve their purposes better.

Comment Re:Twice as big as it needs to be? (Score 1) 332

Windows is LLP64 (they didn't even make long 64-bit, unlike most UN*Xes)

And this caused a lot of pain because a huge number of programmers believed (some still do) that the C standard guaranteed that sizeof(long) >= sizeof(void*) and so used long instead of intptr_t. They did this because a lot of their headers used packed structures for things like file headers and used a type that was typedef'd to long, and it was easier for them than fixing all of their headers.
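
A minimal sketch of the pattern that breaks (the function and variable names are just illustrative): on LLP64 Windows, long is 32 bits while pointers are 64 bits, so round-tripping a pointer through long silently truncates it, whereas intptr_t and uintptr_t are defined to be wide enough to hold a pointer value.

```c
#include <stdint.h>

void roundtrip(void *p)
{
    /* Broken on LLP64 (64-bit Windows): long is 32 bits but the pointer
     * is 64 bits, so the top half is silently thrown away. */
    long bad = (long)p;
    void *q = (void *)bad;        /* q != p whenever p doesn't fit in 32 bits */

    /* Fine on both LP64 and LLP64: [u]intptr_t can hold a pointer value. */
    uintptr_t good = (uintptr_t)p;
    void *r = (void *)good;       /* r == p */

    (void)q;
    (void)r;
}
```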

Comment Re:Whatever! PowerPC been doing 64-bit (Score 1) 332

All the patents required to implement MIPS IV (64-bit) have expired. The ones on SPARCv9 expire this year. Alpha's expired last year. There's a reason for caring about the 20-year mark: it makes implementing the architecture a lot safer. We have a research processor that implements the MIPS IV instruction set for this exact reason: we may accidentally infringe some more recent patents, but it is definitely possible to work around them by implementing things the way the R4K did.

Comment Re:Did it really work? (Score 1) 332

There is no logical reason that an x86-64 processor in 64 bit mode would perform faster than 32 bit mode unless you are memory constrained

Or you benefit from more registers. Or you benefit from vastly more 64-bit registers. Or you're doing floating point and benefit from the compiler being able to assume SSE is present and never use x87 arithmetic. Or you're using shared libraries and so benefit from faster position-independent code. But, apart from that, no logical reason at all...

Comment Re:Did it really work? (Score 1) 332

The 64-bit address space doesn't give much advantage in typical desktop applications. Even my web browser with a silly number of tabs open at the moment is using

PC-relative addressing makes position-independent code significantly faster. This is useful for shared libraries, but also for position-independent executables which, in combination with address space randomisation, add some security.

SSE is guaranteed to exist. This alone accounts for most of the speedup, because compiling for x87 is really hard (crazy hybrid of a stack- and a register-based architecture), so generating SSE ops for floating point, even if you're only doing scalar arithmetic, is a lot more efficient.

More GPRs. x86-32 code ends up with a lot of stack spills because it only has a tiny number of general-purpose registers. x86-64 has 16, which makes it a lot easier to work with.

64-bit registers. On x86-32, 64-bit arithmetic is painful, because you need two registers for each of the operands, and you only have 6 registers to use (two of which must be used for the destination in a lot of ops). On x86-64, it's a lot easier to do sequences of 64-bit arithmetic without spills.
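
As a rough illustration of the register-pair point, here's a trivial 64-bit add with the kind of code a compiler typically generates in the comments. It's a hedged sketch (the function name is mine and real output varies), but it shows why 64-bit arithmetic eats registers on x86-32 and is a single instruction on x86-64.

```c
#include <stdint.h>

/* Illustrative only; real compiler output varies.
 *
 * x86-32: each uint64_t needs a register pair, so even one addition
 * looks something like
 *     movl   4(%esp), %eax    # low half of a
 *     movl   8(%esp), %edx    # high half of a
 *     addl   12(%esp), %eax   # add low half of b
 *     adcl   16(%esp), %edx   # add high half of b, plus the carry
 * and anything more complicated starts spilling to the stack.
 *
 * x86-64: each value fits in one register, so it is just
 *     movq   %rdi, %rax
 *     addq   %rsi, %rax       # single 64-bit add, no pairs, no carries
 */
uint64_t add64(uint64_t a, uint64_t b)
{
    return a + b;
}
```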
