
Comment: 100% would be interesting (Score 3, Insightful) 241

One, if there were a 100% failure rate, dowsing would have been abandoned years ago.

Actually, if the failure rate were exactly 100%, it would be a valuable tool:
it would very reliably show where NOT to look for water, and by deduction you'd know to look for water in the remaining, un-dowsed places.

The real failure rate is more likely something very high, but not quite 100%.
By random chance, you're bound to find water eventually.

The whole point of a scientific statistical test would be to see whether the few successes occur only as often as random chance predicts, or whether dowsing has a slightly higher success rate that could NOT be explained by chance alone.
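A minimal sketch of such a test in pure Python (the 40% base rate and the 12-out-of-20 score below are made-up numbers, purely for illustration):

```python
from math import comb

def p_value_at_least(k, n, p):
    """Probability of seeing k or more successes in n trials if each
    trial succeeds with probability p purely by chance
    (a one-sided binomial test)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Say random digging hits water 40% of the time, and a dowser scores 12/20.
# Is that surprising, or within what chance alone would produce?
print(p_value_at_least(12, 20, 0.4))
```

If the printed p-value is small, the dowser did better than chance alone would plausibly explain; if it's large, the "successes" look like luck.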

Comment: Again, what's the problem? (Score 2) 149

by DrYak (#47778011) Attached to: Research Shows RISC vs. CISC Doesn't Matter

All the reasons you list could be "fixed in software".

The quotes around "software" mean that I'm referring to the firmware/microcode as a piece of software designed to run on top of the actual execution units of a CPU.

No, they cannot. Or the software will be terribly slow, like a 2-10x slowdown.

Slow: yes, indeed. But not impossible to do.

What matters are the differences in the semantics of the instructions.
X86 instructions update flags. This adds dependencies between instructions. Most RISC processors do not have flags at all.
This is semantics of instructions, and they differ between ISA's.

Yeah, I know pretty well that RISCs don't (all) have flags.
Now, again, how is that preventing the micro-code swap that dinkypoo refers to (and that was actually done on Transmeta's Crusoe)?
You'll just end up with a bigger, clunkier firmware that, for a given front-end instruction from the same ISA, will translate into a bigger bunch of back-end micro-ops.
Yup. A RISC's ALU won't update flags. But what's preventing the firmware from dispatching *SEVERAL* micro-ops? First one to do the base operation, and then additional instructions to update some register emulating the flags?
Yes, it's slower. But no, that doesn't make a micro-code-based change of supported ISA impossible, only not as efficient.
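As a toy illustration of that expansion (the micro-op names and registers here are invented, not any real ISA): one flag-setting x86-style ADD emulated as several flagless RISC-style steps that keep the flags in an ordinary register.

```python
MASK32 = 0xFFFFFFFF

def expand_add(dst, a, b, regs, flags):
    """'ADD dst, a, b' plus x86-style CF/ZF/SF updates, on a back-end
    whose ALU never touches flags -- the flags live in a plain dict."""
    full = regs[a] + regs[b]          # micro-op 1: the base addition
    result = full & MASK32
    flags['CF'] = int(full > MASK32)  # micro-op 2: emulate the carry flag
    flags['ZF'] = int(result == 0)    # micro-op 3: emulate the zero flag
    flags['SF'] = (result >> 31) & 1  # micro-op 4: emulate the sign flag
    regs[dst] = result

regs = {'r0': MASK32, 'r1': 1, 'r2': 7}
flags = {}
expand_add('r2', 'r0', 'r1', regs, flags)
# One front-end instruction became four back-end steps: slower, not impossible.
```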

The back-end micro-instructions in x86 CPUs are different from the instructions in RISC CPUs. They differ in the small details I tried to explain.

Yes, and please explain how that makes it *definitely impossible* to run x86 instructions, and not merely *somewhat slower*?

Intel did this; they added an x86 decoder to their first Itanium chips. {...} But the performance was still so terrible that nobody ever used it to run x86 code, so then they created a software translator that translated x86 code into Itanium code, and that was faster, though still too slow.

Slow, but still doable and done.

Now, keep in mind that:
- Itanium is a VLIW processor. That's an entirely different beast, with an entirely different approach to optimisation, and back during Itanium development the logic was "the compiler will handle the optimising". But back then such a magical compiler didn't exist, and anyway it didn't have the necessary information at compile time (some types of optimisation require information only available at run time; hence doable in microcode, not in a compiler).
Given the compilers available back then, VLIW sucked for almost anything except highly repetitive tasks. Thus it was somewhat popular for cluster nodes running massively parallel algorithms (and at some point in time VLIW was also popular in Radeon GFX cards). But VLIW sucks for pretty much anything else.
(Remember that, for example, GCC has only recently gained auto-vectorisation and well-performing Profile-Guided Optimisation.)
So "supporting the alternate x86 instruction set on Itanium was slow" has as much to do with "supporting an instruction set on a back-end that's not tailored for the front-end is slow" as it has to do with "Itanic sucks at pretty much everything which isn't a highly optimised kernel function in HPC".

But still, it proves that running a different ISA on a completely alien back-end is doable.
The weirdness of the back-end won't prevent it, only slow it down.

Luckily, by the time the Transmeta Crusoe arrived:
- knowledge of how to handle VLIW had advanced a bit; Crusoe had a back-end better tuned to run a CISC ISA.

Then, by the time Radeon arrived:
- compilers had gotten even better; GPUs are used for exactly the (only) class of tasks at which VLIW excels.

The back-end of Crusoe was designed completely with x86 in mind; all the execution units contained the small quirks in a manner which made it easy to emulate x86 with it. The back-end of Crusoe contains things like {...} All these were made to make binary translation from x86 easy and reasonably fast.

Yeah, sure, of course they are going to optimise the back-end for what the CPU is most likely to run.
- Itanium was designed to run mainly IA-64 code, with a bit of support for older IA-32 just in case, in order to offer a little backward compatibility to help the transition. It was never thought of as an IA-32 workhorse. Thus the back-end was designed to run mostly IA-64 (although even that wasn't stellar, due to the weirdness of the VLIW architecture). But that didn't prevent the front-end from accepting IA-32 instructions.
- Crusoe was designed to run mainly IA-32 code. Thus the back-end was designed to run IA-32 code a bit better.


To go back to what dinkypoo says: the back-end doesn't directly eat IA-32 instructions in any of the mentioned processors (neither recent Intels, nor AMDs, nor Itaniums, nor Crusoes, etc.); they all have back-ends that only consume the special micro-ops that the front-end feeds them after expanding the ISA (or that the software front-end feeds them in the case of Crusoe, or the compiler running on the CPU in the case of Radeon). That's true, and you haven't disproved it (of course).

Also, according to dinkypoo, you should be able to replace the front-end without touching the back-end, and get a chip that supports a different instruction set (because the actual execution units never see it directly).
The Crusoe is a chip that did exactly that (thanks to the fact that its front-end was pure software), and in fact HAD its microcode swapped, either as an in-lab experiment to run PowerPC instructions, or as a tool to help test x86_64 instructions.

So no, you're wrong: "you can't simply change the decoder" is false. YOU CAN simply change the decoder; it has been done.
Just don't expect stellar performance, depending on how far your back-end's way of working is from the target instruction set (x86_64 on a Crusoe vs. IA-32 on an Itanium).
It might be slow, but it works. You're wrong, dinkypoo is right, accept it.

Comment: EU and Japan (Score 1) 163

The analog TV standard in Japan was NTSC with a different black level. This is why the Famicom and NTSC NES use the same 2C02 PPU, while PAL regions need a different 2C07 PPU.

Except that, in the eighties, virtually any TV sold in Europe had a SCART connector (mandatory on any TV sold after 1980; I don't remember ever having seen a TV without it), and TVs sold in Japan had an RGB-21 connector (technically similar; physically, the connectors have the same shape but use slightly different pin-outs). That was simply the standard interconnect for plugging *any* consumer electronics into a TV outside of the US.

So starting with the Master System and Megadrive (again, that's my first-hand experience; I'm not sure, but from what I've read around it's also the case with the Super Famicom), everybody else in the world got an easy way to have nice graphics.

Only you, the consumer in the US, were stuck with no RGB on your TV set, combined with a video standard that completely breaks colours. Everybody else got to play the games without any composite artifacts.

The Genesis's pixel clock, on the other hand, was 15/8 times the color burst. At that rate, patterns of thin vertical lines resulted in semi-transparent rainbow effects, which weren't quite as predictable but still fooled the TV into making more colors.

But those were only visible on TVs in the US. The other big markets (Japan, EU) got the actual colours directly fed into the screen over the local default video interconnect (SCART or RGB-21).
Given that most of the games were produced in Japan, it's very likely that very few of the game developers designed their games specifically with the US's NTSC composite artifacts in mind.

So, to go back to the beginning of this thread's discussion:

The graphics simply aren't meant to be seen in super clarity. You see all of the pixels, and the colors are overly bright and flat. It's just... wrong.

Nope. That's what you saw as a child because you grew up in the US and your local display standard was bad (only RF or composite available, and an NTSC standard that's bad for colours).
The rest of the users elsewhere mostly got the same image as emulators display. It was just a tiny bit more blurry, but we had the same colours (thanks to RGB being available nearly everywhere in the other major markets outside the US).

Comment: Different situation (Score 1) 163

The Amiga used a co-processor to display cool stuff on the screen. But it is displaying things that it has an actual internal representation of. And it works on any display connected to the machine (or even an emulator, if the emulator can handle the internal copper chip).

CGA/composite is a hack abusing the way the NTSC signal works. The machine outputs a monochrome signal, but the software abuses the way an NTSC display works so that it appears as a coloured picture. (But these colours don't exist in the display buffer. It doesn't work on any other display. It doesn't work in an emulator unless the emulator not only emulates the chips of the machine, but also emulates the quirks inside an actual screen.)

Comment: In Europe: Base+Use (Score 2) 349

by DrYak (#47776289) Attached to: Ask Slashdot: What To Do About Repeated Internet Overbilling?

Well, actually, NOT *MY* gas billing.
I happen to live on the opposite side of the Atlantic pond (if my b0rked English grammar wasn't already a telling sign).

Around here, the utilities are billed in 2 separate parts: capacity and consumption.
- You pay a fixed base that covers the infrastructure, no matter how much you use (i.e. you pay a fixed base because you live in a 4-person house and the city has made certain that the water-distribution infrastructure has enough capacity to support the 4 of you).
- In addition, you pay for the used volume (you pay X per cubic meter of water).
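With made-up numbers (the fees below are invented, purely for illustration), the two-part tariff is just:

```python
# Hypothetical tariff, for illustration only:
BASE_FEE = 20.00       # fixed part: pays for the infrastructure capacity
RATE_PER_M3 = 1.50     # variable part: pays for actual consumption

def water_bill(cubic_meters):
    """Monthly bill: fixed capacity charge plus metered usage."""
    return BASE_FEE + RATE_PER_M3 * cubic_meters

# A 4-person household using 12 cubic meters this month:
print(water_bill(12))  # 20.00 + 1.50 * 12 = 38.00
```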

The municipality's utilities don't need to fudge the price with factors pulled out of their arses; their fixed costs are covered by a separate entry on the bill.

Comment: Microcode switching (Score 2) 149

by DrYak (#47775413) Attached to: Research Shows RISC vs. CISC Doesn't Matter

This same myth keeps being repeated by people who don't really understand the details on how processors internally work.

Actually, YOU are wrong.

You cannot just change the decoder; the instruction set affects the internals a lot:

All the reasons you list could be "fixed in software". The fact that silicon designed by Intel handles opcodes in a way slightly better optimised for being fed from an x86-compatible front-end is just specific optimisation. Doing the same stuff with another RISCy back-end, i.e. interpreting the same ISA fed to the front-end, will simply require each x86 instruction to be executed as a different set of micro-instructions. (Some that are handled as a single ALU opcode on Intel's silicon might require a few more instructions, but that's about the only difference.)

You could switch the front-end and speak a completely different instruction set. Simply put, if the two ISAs are radically different, the result won't be as efficient as a chip designed with that ISA in mind. (You would need a much bigger and less efficient microcode, because of all the reasons you list. But they won't STOP Intel from making a chip that speaks something else. Intel will simply produce a chip where the front-end is much more clunky and inefficient, wastes 3x more micro-ops per instruction, and wastes much time waiting for some bus to get free or copying values around, etc.)
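The idea of swapping front-ends over an unchanged back-end can be sketched with a toy model (the instruction formats below are invented, nothing like real silicon): one back-end of generic micro-ops, two interchangeable decoders.

```python
def backend_execute(micro_ops, regs):
    """The 'silicon': it only ever sees micro-ops, never the original ISA."""
    for op, dst, a, b in micro_ops:
        if op == 'add':
            regs[dst] = regs[a] + regs[b]

def decode_isa_a(insn):
    """Front-end #1: a two-operand, CISC-flavoured 'add dst, src'."""
    _, dst, src = insn.split()
    return [('add', dst, dst, src)]      # one instruction -> one micro-op

def decode_isa_b(insn):
    """Front-end #2: a three-operand, RISC-flavoured 'add dst, a, b'."""
    _, dst, a, b = insn.split()
    return [('add', dst, a, b)]

# Same back-end, two 'ISAs' -- only the decoder changes:
regs = {'r0': 1, 'r1': 2, 'r2': 0}
backend_execute(decode_isa_a('add r0 r1'), regs)     # r0 += r1
backend_execute(decode_isa_b('add r2 r0 r1'), regs)  # r2 = r0 + r1
```

A real decoder obviously handles a full ISA and emits longer micro-op sequences; the point is only that the execution core is indifferent to which decoder fed it.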

And to go back to the parent...

You could switch out the decoder, leave the internal core completely unchanged, and have a CPU which speaks a different instruction set. The instruction set is not the architecture. That's why the architectures themselves have names.

Not only is this possible, but this was INDEED done.

There was an entire company called "Transmeta" whose business was centered around exactly that.
Their chip, the "Crusoe", was compatible with x86:
- the chip itself was actually a VLIW chip, absolutely as remote from a pure x86 core as possible;
- the front-end was entirely 100% pure software.

The advantage touted by Transmeta was that, although their chip was a bit slower and less efficient, it consumed a tiny fraction of the power and was field-upgradeable (in theory, just issue a firmware upgrade to support newer instructions). Transmeta had demos of Crusoe playing back MPEG video on a few watts, whereas a Pentium 3 (then Intel's lower-power chip) would consume way more.

Sadly, it all happened in an era where pure raw performance was king, and where using a small nuclear plant to power a Pentium IV (the then high-performance flagship) and needing a small lake nearby for cooling was considered perfectly acceptable. So Crusoe didn't see that much success.

Still, Crusoe was successfully used as a test bed for a few experimental CPUs to test their ISAs before actual hardware was available. (If I remember correctly, Crusoes were used to test running x86_64 code before actual Athlon 64s were available to developers.) And there were a few experimental proofs-of-concept running the PowerPC ISA.

In a way, this isn't that dissimilar from how Radeon handles compiled shaders, except that the front-end is now a piece of software which runs inside the OpenGL driver on the main CPU: intermediate instructions are compiled to either VLIW or GCN opcodes, which are 2 entirely different back-ends.
(Except that, due to the highly repetitive nature of a shader, instead of decoding instructions on the fly as they come, you optimise it once into opcodes, store the result in a cache, and you're good.)

In a similar way again, ARM can switch between 2 different types of instruction sets (normal and Thumb mode): 2 different encodings, one back-end.

Comment: Neither should internet (Score 1) 349

by DrYak (#47773007) Attached to: Ask Slashdot: What To Do About Repeated Internet Overbilling?

Civilized countries don't allow you to do that

Hence the complaint of some users.
If you don't weigh the container when buying goods (the glass bottle isn't counted as milk),
why should ISPs do it on their network? They count the overhead of whatever particular technology they happen to use between your modem and their servers, instead of only counting the bandwidth to/from the internet (the thing that they themselves have to pay for, and for which they need the money).
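As an illustration of the kind of overhead in question: classic ADSL commonly carried IP over ATM, whose 53-byte cells hold only 48 bytes of payload each. A rough sketch of how that inflates the "counted" bytes (AAL5 trailer and padding details deliberately ignored for simplicity):

```python
import math

ATM_CELL = 53      # bytes on the wire per ATM cell
ATM_PAYLOAD = 48   # bytes of actual payload per cell

def atm_wire_bytes(ip_bytes):
    """Bytes actually transmitted for one IP packet carried over ATM cells
    (simplified: whole cells, no AAL5 trailer accounting)."""
    cells = math.ceil(ip_bytes / ATM_PAYLOAD)
    return cells * ATM_CELL

packet = 1500                       # a typical full-size IP packet
wire = atm_wire_bytes(packet)
print(wire, wire / packet)          # ~13% more bytes than you downloaded
```

So a meter placed on the modem-side link counts noticeably more than a meter placed on the internet side, which is exactly the "weighing the bottle" complaint.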

Comment: RGB on Scart (Score 2) 163

The Megadrive (sold as the Genesis in the US) supported RGB out of the box (all the signals are there on the DIN/miniDIN connector); no need to mod the console, just buy the appropriate cable (SCART in the EU, or the Japanese equivalent).

(I have no first-hand experience, but I'd guess that the situation is similar with the Super Famicom vs. the US SNES.)

That the US market had crappier output options, combined with a worse video standard (nicknamed Never The Same Color :-P ), doesn't change the fact that everybody else around the world had better quality, including the developers back in Japan.

(Dithered patterns on anything but NTSC-over-composite appear as separate pixels.)

(The situation is completely different from the first home computers doing "composite synthesis" and achieving more colours on screen than supported by the GFX hardware, i.e. a normally 320x200 4-colour or 640x200 monochrome CGA card in a PC outputting 160x200 16 colours on a composite monitor.
That *indeed* was using composite output artifacts. But usually that was software with a distinct, separate "composite" video mode. And it only works on NTSC.)

Comment: Wrong generation label (Score 3, Interesting) 75

by DrYak (#47772651) Attached to: Fake NVIDIA Graphics Cards Show Up In Germany

Yup, historically there have always been official cards where the manufacturer tries to pass off an older chip as the "low end" of the newer generation.
(Like the GeForce 4 MX, which was basically a variant of the GeForce 2 family and thus lacked the programmable shaders of the GeForce 4 Ti family, but got quite successful due to brand-name recognition.)

Comment: Overhead (Score 5, Insightful) 349

by DrYak (#47770321) Attached to: Ask Slashdot: What To Do About Repeated Internet Overbilling?

Imagine if you went to buy milk and bought a gallon but were charged for 1.25 gallons because of spillage in the bottling plant.

Or, to be more similar: you got charged for 1.25 because they determine the price by weighing it, and thus are also weighing the glass milk bottles and the hard plastic crate carrying them.
And when you ask them why the number of gallons you measure in your kitchen doesn't match their bill, they just answer "No, everything is okay, our bill is 100% right." Without ever mentioning that you need to take that overhead into account. Without you having any way to check or audit the bottle+crate weighing process either.

Comment: Fail-safe (Score 1) 501

by DrYak (#47758755) Attached to: California DMV Told Google Cars Still Need Steering Wheels

If I'm ultimately responsible for the vehicle, I'll stay in control of the vehicle. Because if there's a 10 second lag between when the computer throws up its hands and says "I have no idea" and when the user is actually aware enough and in control, that is the window where Really Bad Things will happen.

Have a look at how the collision-avoidance systems that are on the streets nowadays work:
- the car will sound an alarm signalling a probable impending collision and asking the user to intervene;
- the car will also autonomously start to slow down, and eventually brake and stop, nevertheless.

The system is designed in such a way that, although human override is possible, the car will also try autonomously to follow the best course of action unless overridden. You could take control and do something, or you could let the car follow its normal program (in traffic jams, typically).

The same should one day be applied to fully autonomous cars:
in an "I have no idea" situation, the user should be able to take over control, but lacking any intervention, the car should also react in a sane way ("I have no idea what to do, so instead I'm gonna park on the side of the road and wait safely there for further instructions").
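The policy described above boils down to something like this (a made-up toy function, not any real autonomous-driving stack):

```python
def autopilot_step(situation_understood, driver_intervened):
    """Fail-safe decision rule: human override first, safe default second."""
    if situation_understood:
        return 'keep driving autonomously'
    if driver_intervened:
        return 'hand control to the driver'
    # No idea what to do, and nobody took over: fail SAFE rather than just fail.
    return 'pull over, stop, and wait for instructions'
```

The key property is the last branch: the absence of human input never leaves the car without a sane default action.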

Comment: Hostile environment. (Score 1) 501

by DrYak (#47758649) Attached to: California DMV Told Google Cars Still Need Steering Wheels

Can a human wirelessly communicate with a car 5 miles ahead to know of a road condition and adjust its speed in tandem with all the other cars in between to mitigate any and all danger in advance?

Do not assume that the source of wireless coordination is always 100% trustworthy.
The wireless coordination information might be of hostile origin, i.e. some idiot with a hacked emitter that systematically asks all the other cars to slow down and move aside to let him through. In theory such a function has practical uses (ambulances, for example); in practice such a function WILL GET abused (an idiot wanting to arrive faster, or a criminal trying to run away through heavy traffic).

Can a human react in sub-millisecond time to avoid obstacles thrown in their way?

Yup, that's what I consider the main reason why we should have robotic drivers.
Except for the occasional false positive, the current collision-avoidance systems that are already street-legal nowadays, and that are already travelling in some cars around us, are already much better than humans at reacting in an emergency.

The only current drawback is false positives.

But even in that situation, most of the false positives are safe:
they just cause the car to slow down or stop when it shouldn't need to.


Examples from our car:
- auto-cruise control which chooses the wrong target: with our car, a large truck that is almost as wide as the lane can be mis-targeted, and our car slows down to yield, even if the truck is actually in a different lane and we're not actually on a collision course with it if we stay in the middle of our current lane.
- mis-identified target: the current logic inside the car is "if there's an object in the way and the car is on a trajectory intersecting it, then hit the brakes (unless overridden by the driver)". The car has no concept of *what* the object is, and might brake on useless occasions. Nearby automatic RFID-based toll booths are non-stop drive-throughs: you don't need to stop, just drive through at a slow, steady pace. The RFID transponder beeps in advance to alert you that the transaction with the booth was successful and the barrier will open shortly, so you know the barrier is opening and you don't need to brake. But the car only sees an object that is still currently inside your lane (it's not able to notice that the object is moving vertically and that by the time you reach it, it will be safely out of the way), and will auto-brake unless you keep your foot on the gas pedal.
- very simplified hit-box: the car's hitbox is exactly that: a box. The car will panic and hit the brakes if you try to park under a low-hanging balcony. You can see that there's enough room under the balcony for the car's engine compartment, but the car will react as if it were a solid wall and brake if your foot is on the brake instead of the gas pedal (which will be the case during slow manoeuvres).
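The naive rule behind all three examples can be sketched as follows (a toy model; the real logic is certainly far more elaborate):

```python
def auto_brake(object_in_path, on_collision_course, driver_on_gas):
    """Naive collision-avoidance rule: the car knows *that* something
    intersects its trajectory, not *what* it is -- so toll barriers
    and balconies trigger it just like real obstacles."""
    if object_in_path and on_collision_course and not driver_on_gas:
        return 'brake'
    return 'no action'
```

All three false positives above are cases where `object_in_path` and `on_collision_course` are technically true, and only keeping a foot on the gas suppresses the braking.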

"Any excuse will serve a tyrant." -- Aesop