Comment Fail *safe* (Score 1) 185

Regular cruise control is sedating enough. You don't need more reasons to not pay attention to the
road unless it's 100% completely autonomous. This is just an accident waiting to happen.

The failure mode is entirely different.

- Regular cruise control holds the set speed while completely ignoring what is in front of the car: it will *blindly* keep the accelerator down. If the driver gets distracted, the car will continue straight ahead no matter what, and can hit something and cause an accident. Left unattended, a regular cruise control will certainly lead to accidents.
In the most extreme case, if you fall asleep behind the wheel, the car will hit whatever ends up in front of it; an accident is guaranteed.

- Adaptive cruise control, collision avoidance systems, etc. are designed differently. The car is more or less aware (within the capabilities of its sensors) of whether there is something in front of it. If there is something it could collide with, the car automatically slows down and brakes if needed. It is not blind: it only holds the set speed when there is no obstacle. The *default* failure mode is to slow down and stop to avoid a collision, unless the driver takes back control and does something different. Leaving an adaptive cruise control / collision avoidance system unattended will lead to the car eventually stopping once it meets something.
In the most extreme case, if you fall asleep behind the wheel, the car will eventually stop on its own once something shows up in front of it.
You'll be woken up by the car beeping to tell you it has stopped to avoid something, and by the horns of other drivers, angry that you've stopped in the middle of nowhere.

The whole difference is that the older technology *ALWAYS NEEDED* to rely on a driver, otherwise bad things would happen for sure,
whereas the newer technologies default to the safer action (usually slowing down / stopping).
(BTW, some of these *newer technologies* are already street legal and already around you, inside some of today's vehicles.)

GM is simply adding steering control to the mix (the car will also follow lanes).

In short, there's a huge difference between a car that stupidly keeps its speed no matter what, and a car that will only drive onward if there's nothing in front and will otherwise slow down and halt. GM's new technology is of the second kind (like the other assistance systems in most modern cars).
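Here's a minimal toy sketch of the two failure modes described above (not any real vehicle's control logic, just the shape of the argument): one loop blindly holds the set speed, while the other consults a distance sensor and defaults to braking.

```python
# Toy illustration of the two failure modes; thresholds and units are made up.
def regular_cruise_step(set_speed, obstacle_distance):
    # Classic cruise control: the obstacle distance is simply never consulted.
    return set_speed

def adaptive_cruise_step(set_speed, obstacle_distance, safe_distance=50.0):
    # Adaptive cruise control: default to slowing down / stopping.
    if obstacle_distance < safe_distance:
        return 0.0          # brake toward a stop
    return set_speed        # road is clear, hold the set speed

# Driver asleep, obstacle 20 m ahead:
print(regular_cruise_step(120, 20))   # 120 -> accident waiting to happen
print(adaptive_cruise_step(120, 20))  # 0.0 -> the car stops on its own
```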

Comment Nitrogen effects (Score 1) 182

so then you get to learn about the narcotic effects of nitrogen. As that's what the most pronounced effect is, it's an anesthetic, it puts you to sleep while working and standing up.

It's basically due to the fact that we're much better at detecting an increase in CO2 than a decrease in O2.

If you start to lack oxygen in a badly ventilated room (e.g. from 80% N2 / 20% O2 you drift to 80% N2 / 15% O2 / 5% CO2), your body will notice the rise in CO2, you'll feel like you're suffocating, and you'll get out before it becomes dangerous for you.

If you start to lack oxygen because it is being displaced by nitrogen (e.g. from 80% N2 / 20% O2 you reach 85% N2 / 15% O2) because the room is being flushed with nitrogen, your body is much less likely to register the drop in oxygen: no alarm goes off (no "asphyxiation" sensation) and you don't run away. But the oxygen level is *still* just as low, which is *still* dangerous, and you risk getting drowsy and passing out.
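A minimal sketch of the asymmetry described above (the thresholds are illustrative, not physiological data): the body's "suffocation" alarm keys on CO2 rising, not on O2 falling, so both scenarios end at the same low O2 level but only one triggers the alarm.

```python
# Illustrative only: the urge to escape is driven by CO2, not by low O2.
def feels_like_suffocating(o2_fraction, co2_fraction, co2_alarm=0.03):
    return co2_fraction > co2_alarm

# Badly ventilated room: O2 drops because CO2 accumulates -> alarm fires.
print(feels_like_suffocating(o2_fraction=0.15, co2_fraction=0.05))    # True
# Nitrogen-flushed room: O2 is just as low, but CO2 stays normal -> no alarm.
print(feels_like_suffocating(o2_fraction=0.15, co2_fraction=0.0004))  # False
```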

To go back to the factory example: if a worker becomes sleepy on the job because of low O2 levels (because the oxygen is being washed out by a higher N2 concentration in the air), THAT'S A FUCKING DANGEROUS WORKPLACE WITH DANGEROUS WORKING CONDITIONS.

Comment Hypertransport (Score 1) 294

ASROCK did have a period where they were doing some weird stuff. Remember the adaptable CPU slot things that they did for 939/AM2 IIRC but most of that weirdness was for AMD mobos

Of course, that was for AMD only. At that time, only AMD had the memory controller embedded in the CPU.
The CPU itself communicated with the chipset over a very standard HyperTransport bus.

So you could easily have either:
- swappable CPU+RAM boards on an HT backbone (common in the server & cluster world)
- swappable CPU boards, as long as the memory connection matched (939 used dual-channel DDR, while AM2 moved to dual-channel DDR2)

And for the record, Intel started this whole business with the "Slot" form factor on its Pentium II/III and Celeron (all of which connect the same way to the 440BX chipset).

Comment Too much firmware ? (Score 1) 294

That's nothing: Intel "Active Management Technology" (AMT) has an embedded CPU in the chipset that runs a small web server (letting you remotely control a few settings) and a VNC server (so you can get remote screen/mouse without needing either a KVM or any cooperation from the OS).
It's a technology available on most enterprise-oriented servers, workstations and desktops. (It makes sysadmins' lives easier.)

Comment in-platform vs. standard abuse (Score 1) 167

I'm just a pedantic fool nitpicking the difference between:

- an in-hardware solution: the platform can be asked to generate a different signal, creating a different output.
Yeah, there are also hardware limitations (RAM is expensive, so video RAM comes in small quantities) and thus tricks are required: HAM stores small 'deltas' between adjacent pixels instead of coding a full RGB triplet, so it can cram a hi-colour picture into the limited video RAM (see the sketch after this comparison); the Copper can be used to change the palette at each line refresh, so you get a nice gradient while using only a single colour entry from the limited 32-colour palette.

In terms of sound, that's similar to playing digital audio on a PC speaker (by programming the timer that drives it to act as a PWM).

- a solution that cleverly abuses an external piece of hardware:
a CGA card in composite mode always outputs the same signal. But when the computer happens to be connected to the right piece of equipment (an NTSC monitor, over a composite cable), magic happens and something completely new appears. It only works if you connect it to that particular type of equipment; in any other circumstance (in my case: a PAL/PAL60/NTSC monitor over an RGB SCART cable) you get the same picture as usual. You're not depending on the hardware in the computer producing a *new* signal, you're depending on how some external piece of hardware reacts to the same signal as always.

In terms of sound, that's similar to using the disk drive's motion as the percussion track of your music. It's a neat creative trick, but it only works when the right floppy drive is attached. It won't work if you upgrade your computer to a hard drive; it won't work if you plug earphones into the audio out; etc.
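As a rough sketch of the HAM idea mentioned above (illustrative only, not the exact Amiga HAM6 bit layout): each pixel either picks a full colour from a small palette or modifies a single channel of the previous pixel, which is how many colours fit in little video RAM.

```python
# Illustrative HAM-style decoder: "set" takes a palette entry, "mod_*" reuses
# the previous pixel and replaces only one colour channel.
PALETTE = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255)]

def decode_ham(encoded, palette=PALETTE):
    pixels, prev = [], (0, 0, 0)
    for mode, value in encoded:
        if mode == "set":
            prev = palette[value]
        elif mode == "mod_r":
            prev = (value, prev[1], prev[2])
        elif mode == "mod_g":
            prev = (prev[0], value, prev[2])
        elif mode == "mod_b":
            prev = (prev[0], prev[1], value)
        pixels.append(prev)
    return pixels

# Three pixels, yet only the first needs a full palette entry:
print(decode_ham([("set", 1), ("mod_g", 128), ("mod_b", 64)]))
```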

The premise of this thread is that, because of the second type of hack, you need to perfectly emulate the bad picture quality of TV sets to make games look on screen the way developers intended, i.e. that every developer assumed the visuals would look a particular way.
What I'm saying is that the rest of the world actually got near-perfect picture quality, because the rest of the world had RGB output (we had SCART here in Europe, Japan had RGB-21), and that includes the home of most developers (Japan). Only US kids remember their childhood games looking different, because those poor kids were stuck with a bad TV standard.

Comment 100% would be interesting (Score 3, Insightful) 266

One, if there were a 100% failure rate dousing would have been abandoned years ago.

Actually, if the failure rate were exactly 100%, it would be a valuable tool:
it would very reliably show where NOT to look for water, and by deduction you'd know to look for water in the remaining, un-dowsed places.

The real failure rate is probably very high, but not exactly 100%:
by random chance alone, you're bound to find water eventually.

The whole point of a proper statistical test would be to see whether the few successes occur about as often as random chance would predict, or whether dowsing has a slightly higher success rate that could NOT be explained by chance alone.
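A minimal sketch of such a test, assuming SciPy is available and using made-up numbers: a one-sided binomial test asking whether the dowser's hit rate beats the chance rate.

```python
# Hypothetical numbers, purely for illustration.
from scipy.stats import binomtest

hits = 12          # wells that actually found water
trials = 100       # dowsing attempts
chance_rate = 0.1  # assumed probability of hitting water by luck alone

result = binomtest(hits, trials, chance_rate, alternative="greater")
print(result.pvalue)  # a large p-value means the hits are consistent with luck
```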

Comment Again, what's the problem ? (Score 3, Interesting) 161

All the reasons you list could be "fixed in software".

The quotes around "software" mean that I'm referring to the firmware/microcode as a piece of software designed to run on top of the actual execution units of a CPU.

No, they cannot. Or the software will be terribly slow, like a 2-10x slowdown.

Slow: yes, indeed. But not impossible to do.

What matters are the differences in the semantics of the instructions.
X86 instructions update flags. This adds dependencies between instructions. Most RISC processors do not have flags at all.
These are the semantics of the instructions, and they differ between ISAs.

Yeah, I know perfectly well that RISCs don't (all) have flags.
Now, again, how does that prevent the microcode swap that dinkypoo refers to (and that was actually done on Transmeta's Crusoe)?
You'll just end up with a bigger, clunkier firmware that translates a given front-end instruction into a bigger bunch of back-end micro-ops.
Yup, a RISC ALU won't update flags. But what prevents the firmware from dispatching *SEVERAL* micro-ops: first one for the base operation, then additional ones to update some register emulating the flags?
Yes, it's slower. But no, that doesn't make a microcode-based change of the supported ISA impossible, only less efficient.
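A toy sketch of that expansion (not any real ISA or microcode format, just the shape of the idea): one flag-setting x86-style ADD becomes several micro-ops on a flagless back-end, with the flags emulated in an ordinary register.

```python
# Toy translator: expand a flag-updating ADD into flagless micro-ops.
def expand_add(dst, src):
    return [
        f"add  {dst}, {dst}, {src}",   # 1. the base operation
        f"eqz  t0, {dst}",             # 2. compute the Zero flag into a temp
        f"ltz  t1, {dst}",             # 3. compute the Sign flag into a temp
        f"sll  t1, t1, 1",             # 4. shift Sign into its bit position
        f"or   flags, t0, t1",         # 5. fold both into the emulated FLAGS register
    ]

# One front-end instruction, five back-end micro-ops:
print("\n".join(expand_add("r1", "r2")))
```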

The backend micro-instructions in x86 CPUs are different from the instructions in RISC CPUs. They differ in the small details I tried to explain.

Yes, so please explain how that makes it *definitely impossible* to run x86 instructions, rather than merely *somewhat slower*?

Intel did this; they added an x86 decoder to their first Itanium chips. {...} But the performance was still so terrible that nobody ever used it to run x86 code, and then they created a software translator that translated x86 code into Itanium code, and that was faster, though still too slow.

Slow, but still doable and done.

Now, keep in mind that:
- Itanium is a VLIW processor. That's an entirely different beast, with an entirely different approach to optimisation, and back during Itanium's development the logic was "the compiler will handle the optimising". But back then such a magical compiler didn't exist, and it wouldn't have had the necessary information at compile time anyway (some kinds of optimisation need information that is only available at run time, hence doable in microcode but not in a compiler).
Given the compilers available back then, VLIW sucked for almost anything except highly repetitive tasks. Thus it was somewhat popular for cluster nodes running massively parallel algorithms (and at some point VLIW was also popular in Radeon GFX cards), but it sucked for pretty much everything else.
(Remember that, for example, GCC has only recently gained auto-vectorisation and well-performing profile-guided optimisation.)
So "supporting the alternate x86 instruction set on Itanium was slow" has as much to do with "supporting an instruction set on a back-end that isn't tailored for that front-end is slow" as it has to do with "the Itanic sucks at pretty much everything that isn't a highly optimised kernel function in HPC".

But it still proves that running a different ISA on a completely alien back-end is doable.
The weirdness of the back-end won't prevent it, only slow it down.

Luckily, by the time the Transmeta Crusoe arrived:
- knowledge of how to handle VLIW had advanced a bit, and the Crusoe had a back-end better tuned to run a CISC ISA.

Then, by the time Radeon arrived:
- compilers had gotten even better, and GPUs are used for exactly the (only) class of task at which VLIW excels.

The backend of Crusoe was designed completely with x86 in mind; all the execution units contained the small quirks in a manner which made it easy to emulate x86 with it. The backend of Crusoe contains things like {...} All of these were made to make binary translation from x86 easy and reasonably fast.

Yeah, sure, of course they're going to optimise the back-end for what the CPU is most likely to run.
- Itanium was designed to run mainly IA-64 code, with a bit of IA-32 support just in case, to offer a little backward compatibility and help the transition. It was never thought of as an IA-32 workhorse. Thus the back-end was designed to run mostly IA-64 (and even that wasn't stellar, due to the weirdness of the VLIW architecture). But that didn't prevent the front-end from accepting IA-32 instructions.
- Crusoe was designed to run mainly IA-32 code. Thus the back-end was tuned somewhat to run IA-32 code better.

BUT

To go back to what dinkypoo says: the back-end doesn't directly eat IA-32 instructions in any of the processors mentioned (neither recent Intels, nor AMDs, nor Itaniums, nor the Crusoe, etc.); they all have back-ends that only consume the special micro-ops that the front-end feeds them after expanding the ISA (or that the software front-end feeds them, in the case of the Crusoe, or the shader compiler running on the CPU, in the case of Radeon). That's true, and you haven't disproved it (of course).

Also, according to dinkypoo, you should be able to replace the front-end without touching the back-end and get a chip that supports a different instruction set (because the actual execution units never see that instruction set directly).
The Crusoe is a chip that did exactly that (thanks to its front-end being pure software), and it in fact HAD its microcode swapped, both as an in-lab experiment to run PowerPC instructions and as a tool to help test x86_64 instructions.

So no, you're wrong: "you can't simply change the decoder" is false. YOU CAN simply change the decoder; it has been done.
Just don't expect stellar performance, depending on how far your back-end's way of working is from the target instruction set (x86_64 on a Crusoe vs. IA-32 on an Itanium).
It might be slow, but it works. You're wrong, dinkypoo is right; accept it.

Comment EU and Japan (Score 1) 167

The analog TV standard in Japan was NTSC with a different black level. This is why the Famicom and NTSC NES use the same 2C02 PPU, while PAL regions need a different 2C07 PPU.

Except that, in the eighties, virtually any TV sold in Europe had a SCART connector (mandatory on any TV sold after 1980; I don't remember ever seeing one without it), and TVs sold in Japan had an RGB-21 connector (technically similar: the connectors have the same physical shape but a slightly different pin-out). That was simply the standard interconnect for plugging *any* consumer electronics into a TV outside of the US.

So starting with the Master System and Mega Drive (again, that's my first-hand experience; I'm not sure, but from what I've read it's also the case with the Super Famicom), everybody else in the world got an easy way to have nice graphics.

Only you, the consumer in the US, were stuck with no RGB on your TV set combined with a video standard that completely mangles colours. Everybody else got to play the games without any composite artifacts.

The Genesis's pixel clock, on the other hand, was 15/8 times color burst. At that rate, patterns of thin vertical lines resulted in semi-transparent rainbow effects, which weren't quite as predictable as the but still fooled the TV into making more colors.

But those were only visible on TVs in the US. The other big markets (Japan, the EU) got the actual colours fed directly to the screen over the local default video interconnect (SCART or RGB-21).
Given that most of the games were produced in Japan, it's very likely that very few developers designed their games specifically with the US's NTSC composite artifacts in mind.

So, to go back to the beginning of this thread's discussion:

The graphics simply aren't meant to be seen in super clarity. You see all of the pixels, and the colors are overly bright and flat. It's just... wrong.

Nope. That's what you saw as a child because you grew up in the US and your local display options were bad (only RF or composite available, plus an NTSC standard that's bad at colours).
Users elsewhere mostly got the same image that emulators display. It was just a tiny bit blurrier, but we had the same colours (thanks to RGB being available nearly everywhere in the other major markets outside the US).

Comment Different situation (Score 1) 167

The Amiga used a co-processor to display cool stuff on the screen. But it's displaying things for which it has an actual internal representation, and that works on any display connected to the machine (or even in an emulator, as long as the emulator handles the internal Copper chip).

CGA over composite is a hack abusing the way the NTSC signal works. The machine outputs a monochrome signal, but the software abuses the way an NTSC display works so that it appears as a coloured picture. (These colours don't exist in the display buffer. It doesn't work on any other display. It doesn't work in an emulator unless the emulator not only emulates the chips of the machine, but also emulates the quirks inside an actual screen.)

Comment in Europe: Base+Use (Score 2) 355

Well actually NOT *MY* gas billing.
I happen to live on the opposite side of the Atlantic pond (if my b0rked English grammar wasn't already a telling sign).

Around here, utilities are billed in two separate parts: capacity and consumption.
- You pay a fixed base that covers the infrastructure, no matter how much you use (i.e. you pay it because you live in a 4-person house and the city has made sure the water-distribution infrastructure has enough capacity to support the four of you).
- On top of that, you pay for the volume used (X per cubic metre of water).

The municipal utilities don't need to fudge the price with factors pulled out of their arses; their fixed costs are covered by a separate line on the bill.
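A minimal sketch of that two-part bill (the rates below are made up, not any real utility's tariff):

```python
# Hypothetical tariff: a fixed capacity charge plus a per-unit consumption charge.
FIXED_BASE_PER_MONTH = 20.0   # pays for the infrastructure capacity
PRICE_PER_CUBIC_METER = 2.5   # pays for what you actually consume

def monthly_bill(cubic_meters_used):
    return FIXED_BASE_PER_MONTH + PRICE_PER_CUBIC_METER * cubic_meters_used

print(monthly_bill(12))  # 20.0 + 2.5 * 12 = 50.0
```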

Comment Microcode switching (Score 2) 161

This same myth keeps being repeated by people who don't really understand the details on how processors internally work.

Actually, YOU are wrong.

You cannot just change the decoder; the instruction set affects the internals a lot:

All the reasons you list could be "fixed in software". The fact that silicon designed by Intel handles opcodes in a way slightly better optimised for being fed from an x86-compatible front-end is just a specific optimisation. Doing the same thing with another RISCy back-end, i.e. interpreting the same ISA fed to the front-end, would simply require each x86 instruction to be executed as a different set of micro-instructions (some that are handled as a single ALU opcode on Intel's silicon might require a few more instructions, but that's about the only difference).

You could switch the front-end and speak a completely different instruction set. It's just that, if the two ISAs are radically different, the result wouldn't be as efficient as a chip designed with that ISA in mind. (You would need a much bigger and less efficient microcode, because of all the reasons you list. But those reasons won't STOP Intel from making a chip that speaks something else; Intel would simply produce a chip whose front-end is much more clunky and inefficient, wastes 3x more micro-ops per instruction, and wastes time waiting for some bus to free up or copying values around, etc.)

  And to go back to the parent...

You could switch out the decoder, leave the internal core completely unchanged, and have a CPU which speaks a different instruction set. It is not an instruction set architecture. That's why the architectures themselves have names.

Not only is this possible, but this was INDEED done.

There was an entire company called "Transmeta" whose business was centred around exactly that:
their chip, the "Crusoe", was compatible with x86.
- But the chip itself was actually a VLIW design, about as remote from a pure x86 core as you can get.
- The front-end was entirely 100% pure software.

The advantage touted by Transmeta was that, although the chip was a bit slower, it consumed a tiny fraction of the power and was field-upgradeable (in theory, just issue a firmware upgrade to support newer instructions). Transmeta had demos of the Crusoe playing back MPEG video on a few watts, whereas a Pentium III (Intel's lower-power chip at the time) consumed far more.

Sadly, it all happened in an era where raw performance was king, and where using a small nuclear plant to power a Pentium 4 (the high-performance flagship of the day) and needing a small lake nearby for cooling was considered perfectly acceptable. So the Crusoe didn't see much success.

Still, the Crusoe was successfully used as a test bed for a few experimental ISAs before actual hardware was available (if I remember correctly, Crusoes were used to test running x86_64 code before actual Athlon 64s were available to developers), and there were a few experimental proofs of concept running the PowerPC ISA.

In a more modern vein, this isn't that dissimilar from how Radeon handles compiled shaders, except that the front-end is now a piece of software running inside the OpenGL driver on the main CPU: intermediate instructions are compiled to either VLIW or GCN opcodes, two entirely different back-ends.
(Except that, due to the highly repetitive nature of a shader, instead of decoding instructions on the fly as they come, you optimise it once into opcodes, store the result in a cache, and you're good.)

In a similar way, ARM can switch between two different instruction encodings (normal ARM and Thumb mode): two different sets, one back-end.
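A toy sketch of the "swap the front-end, keep the back-end" idea (two made-up mini-ISAs, not real ones): both decoders emit the same kind of micro-ops, and a single back-end executes them without knowing which ISA they came from.

```python
# Two hypothetical decoders targeting one shared micro-op back-end.
def decode_isa_a(instr):
    op, dst, src = instr.split()
    return [("alu_" + op, dst, src)]

def decode_isa_b(instr):
    # This made-up ISA requires every ALU op to also update a flags register.
    op, dst, src = instr.split()
    return [("alu_" + op, dst, src), ("update_flags", dst, None)]

def backend_execute(micro_ops, regs):
    # The back-end only ever sees micro-ops, never the original ISA.
    for op, dst, src in micro_ops:
        if op == "alu_add":
            regs[dst] += regs[src]
        elif op == "update_flags":
            regs["flags"] = int(regs[dst] == 0)

regs = {"r1": 3, "r2": 4, "flags": 0}
backend_execute(decode_isa_b("add r1 r2"), regs)  # swap in decode_isa_a and it still runs
print(regs)  # {'r1': 7, 'r2': 4, 'flags': 0}
```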

Comment Neither should internet (Score 1) 355

Civilized countries don't allow you to do that

Hence the complaints of some users.
If the container isn't weighed when you buy goods (the glass bottle isn't counted as milk),
why should ISPs do the equivalent on their network? They count the overhead of whatever technology they happen to use between your modem and their servers, instead of counting only the bandwidth to/from the internet (the thing they themselves have to pay for, and the reason they need the money).

Comment RGB on Scart (Score 2) 167

The Mega Drive (as the Genesis is called in the EU and Japan) supported RGB out of the box (all the signals are there on the DIN / mini-DIN connector); no need to mod the console, just buy the appropriate cable (SCART in the EU, or the Japanese equivalent).

(I have no first-hand experience, but I'd guess the situation is similar with the Super Famicom vs. the US SNES.)

That the US market had crappier output options, combined with a worse video standard (nicknamed Never The Same Color :-P ), doesn't change the fact that everybody else around the world got better quality, including the developers back in Japan.

(Dithered patterns appear as separate pixels on anything but NTSC over composite.)

(The situation is completely different from the early home computers doing "composite synthesis" and achieving more colours on screen than the GFX hardware supported, e.g. a normally 320x200 4-colour or 640x200 monochrome CGA card in a PC outputting 160x200 in 16 colours on a composite monitor.
That *indeed* exploited composite output artifacts, but usually such software has a distinct, separate "composite" video mode, and it only works on NTSC.)

Comment Wrong generation label (Score 3, Interesting) 76

Yup, historically there have always been official cards where the manufacturer tries to pass off an older chip as the "entry level" of a newer generation
(like the GeForce4 MX, which was basically a variant of the GeForce2 family and thus lacked the programmable shaders of the GeForce4 Ti family, but became quite successful thanks to brand-name recognition).
