
Comment Precision (Score 1) 264

A microphone will pick up *a* bang, and thus tells you that *some* gun was fired in the vicinity of the police officer.
It could be any gun on the scene; it might be the officer's own gun just as easily as the gun of the suspect/criminal.
(Though with multiple officers, each with a microphone, maybe one could triangulate a probable point of origin for the shot.)
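As a rough illustration of that triangulation idea, here is a brute-force time-difference-of-arrival sketch. All the positions, timings and grid parameters are invented for illustration; real acoustic gunshot locators are far more sophisticated.

```python
# Toy sketch: locating a bang from arrival-time differences at several
# microphones. With the speed of sound known, each microphone pair
# constrains the source to a hyperbola; a grid search finds the best fit.
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def locate_shot(mics, arrival_times, area=100.0, step=0.5):
    """Grid-search the point whose predicted arrival-time differences
    best match the measured ones. mics: list of (x, y) in metres;
    arrival_times: seconds, same order as mics."""
    measured = [t - arrival_times[0] for t in arrival_times]
    best, best_err = None, float("inf")
    steps = int(area / step)
    for i in range(steps):
        for j in range(steps):
            x, y = i * step, j * step
            d = [math.hypot(x - mx, y - my) for mx, my in mics]
            predicted = [(di - d[0]) / SPEED_OF_SOUND for di in d]
            err = sum((p - m) ** 2 for p, m in zip(predicted, measured))
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Simulated shot at (30, 40) heard by three microphones:
mics = [(0.0, 0.0), (80.0, 0.0), (0.0, 80.0)]
shot = (30.0, 40.0)
times = [math.hypot(shot[0] - mx, shot[1] - my) / SPEED_OF_SOUND
         for mx, my in mics]
print(locate_shot(mics, times))  # close to (30.0, 40.0)
```

Note that with only one microphone you get no position at all, just "a bang happened"; the location only falls out of comparing several microphones.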

These wrist bands pick up vibration, and thus tell you when *the gun held in the hand wearing the wrist band* has recoiled (and thus fired).
It's the exact gun in the hand of that officer, with very little doubt about it.

Comment Luck (Score 1) 134

I made 10,000% interest last year

And depending on the days you pick, an investor might lose 4x the money invested.
You got lucky; others won't necessarily.

(I happen to have been lucky, too, only with a very small initial investment)

Comment Following the trail. (Score 1) 134

Regarding 1, 2 and 3:
I wasn't referring to matching single transactions or single keys to IP addresses, etc.

I was more referring to the fact that, if you want to use bitcoin in a meaningful way, you'll have to interact with the real world.
At some point, a real bitcoin user who isn't just playing with bitcoin for its own sake will buy an actual good.
That means the seller will need to ship the goods to an actual address.
At the other end of the chain, a would-be bitcoin customer will need actual BTCs to transact. Nowadays it's not practical to mine any significant amount of BTC using hardware available to the average customer. That means a future bitcoin customer will need to acquire BTC, usually by buying them with money (on an exchange, in a face-to-face meeting, etc.).

So no matter how many public keys the BTCs hop through, a motivated enough investigator can always track identities at both ends of the chain:
- initial acquisition
- final spending.
Sometimes it's the same identity (because it's the same person buying the BTCs and spending them after a few public-key hops in between); sometimes it's a different identity (because somewhere along the chain the BTCs 'changed hands' in a way that wasn't registered and matched to any address: for example, two random people sending direct donations to each other's addresses without any real-world interaction. That might happen several times along a chain).

If a really motivated investigator has enough resources (now we're talking government-level), it is possible to follow tons of such "money trails". By comparing all of them together, it is possible to build whole networks of interactions and match real identities. Even when a single money trail is uncertain (money might have switched hands along the track between known end-points), taking lots of other such money trails into account helps lift the uncertainty.
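The chain-following described above is, at its core, just a graph walk. Here is a minimal sketch with entirely made-up addresses and off-chain knowledge; real chain analysis works on millions of transactions, but the principle is the same:

```python
# Toy illustration of following a "money trail" through pseudonymous hops:
# addresses are nodes, transactions are edges, and the investigator only
# knows real identities at the two ends of the chain (invented data).
from collections import deque

# (from_address, to_address) pairs read off the public block chain
txs = [
    ("exchange_hot_wallet", "addrA"),   # initial acquisition (KYC'd buyer)
    ("addrA", "addrB"),                 # intermediate hop
    ("addrB", "addrC"),                 # intermediate hop
    ("addrC", "merchant_wallet"),       # final spending (shipping address)
]

known_identities = {                    # off-chain knowledge
    "exchange_hot_wallet": "Exchange (knows buyer: Alice)",
    "merchant_wallet": "Merchant (ships to: Alice's home)",
}

def trace(start, txs):
    """Breadth-first walk of the transaction graph from `start`,
    reporting every address whose real identity is known off-chain."""
    graph = {}
    for src, dst in txs:
        graph.setdefault(src, []).append(dst)
    seen, queue, hits = {start}, deque([start]), []
    while queue:
        addr = queue.popleft()
        if addr in known_identities:
            hits.append((addr, known_identities[addr]))
        for nxt in graph.get(addr, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return hits

for addr, ident in trace("exchange_hot_wallet", txs):
    print(addr, "->", ident)
```

The intermediate hops stay anonymous in isolation, but both end-points resolve to a real-world identity, which is exactly the point made above.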

#4) You can use tumblers and coin exchanges to disconnect a given key from you and a transaction.

That's a valid way to blur the trail.
In the block chain, what you'll see is thousands of users pouring money into the exchange (users funding their accounts) and thousands of users getting money back (users withdrawing from their exchange accounts). Everything in between happens "behind closed doors": the actual buy/sell actions aren't recorded in the blockchain, they happen in the exchange software's database. In the block chain there's just the exchange, whose officially held amount of BTC corresponds to the amount of BTC currently being traded by all its users. More or less (see MtGox's heist for what happens when those numbers don't match anymore).

So trying to make sense of that complex network of interactions is *hard*, *really hard*. Well beyond the effort needed to simply "lift uncertainty due to invisible switches of owners". Probably only a few people in Russia's FSB or the US' NSA might have a slim chance of tracking a suspect that way. (And the tracking is more likely to rely on backdoors and trojans.)

Comment Freeze (Score 2) 134

Joking aside:

At least bitcoin has been designed, on purpose, in such a way that it is impossible for a central authority to freeze an account, because by design there are no central authorities.
So that's a relatively small advantage over paypal.
(Although, in both cases, these systems should be used EXCLUSIVELY for payments. You should only use them to push money around; you should NOT use them to store money. Neither Paypal nor the bitcoin network is a bank. So if you got a big amount of money frozen, that's on you.)

Comment IS *NOT* ANONYMOUS (Score 5, Insightful) 134

The entire security scheme of bitcoin is actually based on the exact opposite:
Not only is it not anonymous, it's public knowledge *BY DESIGN*.

Every single bitcoin transaction (or alt-coin transaction, for that matter) is publicly broadcast on the network.
Every single full node on the network is aware of every transaction.

The point is that, thanks to this broadcast, every single bitcoin user can independently verify the transactions, and (based on those checks) all the nodes together can agree on who has how many coins left.
Unlike traditional banking (or a web payment service like paypal, for example), there is no central authority acting as the official referee for account balances. (With banks, the bank is the official authority on the contents of its users' accounts. With web payments, paypal is the official authority on the contents of paypal accounts, etc. But with bitcoin, everyone can check the history of transactions by looking up the blockchain; there is no official central "Bitcoin, Inc." in charge.)

Due to this design (security by public broadcast), no transaction is secret.

At best, it could be called "pseudonymous": the transactions are hidden behind public key hashes. (The civil/legal identities of the parties to a transaction aren't directly written into the block chain; instead, the public key hashes are.)
So there's a low risk that an identity is immediately leaked just from a casual look at the blockchain.

It at least takes a conscious effort to track public keys across the blockchain and follow the money trail until an actual identity can be matched.
But that's completely possible and well within the capabilities of governments.
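To make the "pseudonymous" point concrete, here is a minimal sketch of what actually appears on chain. It is deliberately simplified: real Bitcoin addresses additionally apply RIPEMD-160 and Base58Check encoding on top of the SHA-256 hash, and the key bytes here are invented.

```python
# What the block chain publishes is a hash of key material, not a name.
import hashlib

def pseudonym(public_key_bytes):
    """The on-chain 'identity' is just a hash of the public key
    (simplified: real Bitcoin also applies RIPEMD-160 + Base58Check)."""
    return hashlib.sha256(public_key_bytes).hexdigest()

alice_key = b"\x04" + b"\x11" * 64     # made-up uncompressed public key
print(pseudonym(alice_key)[:16], "...")
# Nothing in that hash says "Alice" -- but every transaction using it is
# publicly and permanently linkable to every other transaction using it.
```

That linkability is exactly what makes the conscious-effort tracking described above possible.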

Comment Fail *safe* (Score 1) 185

Regular cruise control is sedating enough. You don't need more reasons to not pay attention to the
road unless it's 100% completely autonomous. This is just an accident waiting to happen.

The failure mode is entirely different.

- Regular cruise control keeps the speed, completely ignoring what is in front of the car: it will *blindly* keep the accelerator down and stay at the same speed. If the driver gets distracted, the car will continue straight ahead no matter what, and can hit something and cause an accident. Leaving a regular cruise control unattended will certainly lead to accidents.
In the most extreme situation, if you fall asleep behind the wheel, the car will hit whatever ends up in front, and you'll have an accident for sure.

- Adaptive cruise control / collision avoidance systems, etc. are designed differently. The car is more or less (within the capabilities of its sensors) aware of whether there's something in front. If there's something the car could collide with, the car will automatically slow down and eventually brake if needed. It is not blind; it only keeps the speed when there are no obstacles. The *default* failure mode is to slow down and stop to avoid a collision, unless the driver takes back command and does something different. Leaving an adaptive cruise control / collision avoidance system unattended will lead to the car eventually stopping when it eventually meets something.
In the most extreme situation, if you fall asleep behind the wheel, the car will eventually stop on its own once something turns up in front.
You'll be awakened by the car beeping to tell you that it has stopped to avoid something ahead, and by the horns of other drivers, angry that you've stopped in the middle of nowhere.

The whole difference is that the older technology *ALWAYS NEEDED* to rely on a driver; otherwise bad things would happen for sure.
Whereas the newer technologies are able, by default, to take the safer option (usually slowing down / stopping).
(BTW, some of these *newer technologies* are already street legal and already around you inside some vehicles.)

GM is simply adding steering control to the mix (the car will also follow lanes).

In short, there's a huge difference between a car that stupidly keeps its speed no matter what, and a car that will only drive onward if there's nothing in front, and will otherwise slow down and halt. GM's new technology is of the second kind (like the assistance systems in most modern cars).
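The two failure modes can be boiled down to a few lines. This is a deliberately naive sketch (fixed braking rate, boolean obstacle flag), not any real vendor's control law:

```python
# Toy model of the two cruise-control failure modes discussed above.
def cruise_step(mode, obstacle_ahead, speed, set_speed):
    """One control tick. 'regular' holds set_speed blindly;
    'adaptive' yields to an obstacle by braking toward zero."""
    if mode == "regular":
        return set_speed                 # blind: hold speed no matter what
    if obstacle_ahead:
        return max(0.0, speed - 10.0)    # adaptive: brake toward a stop
    return set_speed                     # clear road: hold speed

# Driver asleep, obstacle appears and stays in front:
speed = 100.0
for _ in range(12):
    speed = cruise_step("adaptive", True, speed, 100.0)
print(speed)                                       # 0.0 -- adaptive car stopped
print(cruise_step("regular", True, 100.0, 100.0))  # 100.0 -- blind car did not
```

The unattended default is the whole point: one mode converges to a stop, the other ploughs on at full speed.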

Comment Nitrogen effects (Score 1) 182

so then you get to learn about the narcotic effects of nitrogen. As that's what the most pronounced effect is, it's an anesthetic, it puts you to sleep while working and standing up.

It's basically due to the fact that we're much better at detecting an increase of CO2 than at detecting a decrease of O2.

If you start to lack oxygen in a badly ventilated room (e.g. from 80% N2 / 20% O2 you reach 80% N2 / 15% O2 / 5% CO2), your body will notice the increase of CO2, you'll feel like you're asphyxiating, and you'll run away before it gets dangerous for you.

If you start to lack oxygen because it is replaced by nitrogen (e.g. from 80% N2 / 20% O2 you reach 85% N2 / 15% O2) because you're dosing the room with nitrogen, your body is much less likely to register the drop in oxygen; you won't feel the alarm (no "asphyxiation" sensation), and you won't run away. But the oxygen level is *still* low, which is *still* dangerous, and you're at risk of getting drowsy and passing out.
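The asymmetry between the two scenarios can be sketched as a toy model. The thresholds below are illustrative round numbers, not physiological constants:

```python
# Toy model of why the two scenarios feel different: the body's alarm
# tracks rising CO2, not falling O2 (thresholds invented for illustration).
def body_alarm(o2_pct, co2_pct):
    """Breathing distress is driven by CO2 buildup; hypoxia from
    inert-gas dilution can stay below the alarm entirely."""
    feels_suffocating = co2_pct > 1.0      # CO2 alarm fires early
    actually_dangerous = o2_pct < 18.0     # illustrative hypoxia threshold
    return feels_suffocating, actually_dangerous

# Badly ventilated room: O2 consumed, CO2 accumulates
print(body_alarm(o2_pct=15.0, co2_pct=5.0))   # alarm fires AND it's dangerous
# Nitrogen-flooded room: O2 diluted away, CO2 stays normal
print(body_alarm(o2_pct=15.0, co2_pct=0.04))  # dangerous, but NO alarm
```

Same oxygen level in both cases; only one of them warns you.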

To go back to the factory example: if a worker becomes sleepy on the job because of low O2 levels (because the O2 is washed out by increasing the N2 concentration in the air), THAT'S A FUCKING DANGEROUS WORKPLACE WITH DANGEROUS WORKING CONDITIONS.

Comment Hypertransport (Score 1) 294

ASROCK did have a period where they were doing some weird stuff. Remember the adaptable CPU slot things that they did for 939/AM2 IIRC but most of that weirdness was for AMD mobos

Of course, that was for AMD only. At the time, only AMD had the memory controller embedded in the CPU.
The CPU itself communicated with the chipset over a very standard HyperTransport bus.

So you could easily have either:
- swappable CPU+RAM boards on a HT backbone (common in the server & cluster world)
- a swappable CPU board, provided the boards used the same type of memory connection (in fact 939 used DDR while AM2 moved to DDR2, which is why ASRock's upgrade boards carried their own RAM slots)

And for the record, Intel started this whole business with the "Slot" form factor on its Pentium 2/3/Celeron (all of which connect the same way to the 440BX chipset).

Comment Too much firmware ? (Score 1) 294

That's nothing: Intel's "Active Management Technology" (AMT) has an embedded CPU in the chipset that runs a small webserver (enabling you to remotely control a few settings) and a VNC server (so you can have remote screen/mouse without needing either a KVM or OS cooperation).
It's available on most enterprise-oriented servers, workstations and desktops. (It makes sysadmins' lives easier.)

Comment In-platform vs. standard abuse (Score 1) 167

I'm just a pedantic fool nitpicking between:

- an in-hardware solution: the platform can be asked to generate a different signal, creating a different output.
Yeah, there are also hardware limitations (RAM is expensive, so video RAM comes in small quantities) and thus tricks are required (HAM stores small 'deltas' between adjacent pixels instead of coding a full RGB triplet, so it can cram a hi-colour picture inside the limited video RAM; the copper can be used to change the palette at each line refresh, so that you get a nice gradient while only using a single colour entry from the limited 32-colour palette).

In terms of sound, that's similar to playing digital sound on a PC speaker (by programming the driving timer to act as a PWM).

- a solution that cleverly abuses an external piece of hardware:
A CGA in composite mode always outputs the same signal. But when the computer happens to be connected to the correct piece of equipment (an NTSC monitor, over a composite cable), magic happens and something completely new appears. It only works if you connect it to this peculiar type of equipment; you get the same picture as usual in any other circumstance (in my case: a PAL/PAL60/NTSC monitor, over an RGB scart cable). You're not depending on the hardware in the computer producing a *new* signal, you're depending on how some external piece of hardware reacts to the same signal as usual.

In terms of sound, that's similar to using the disk drive's motion for the percussion track of your music. It's a neat creative trick, but it only works when the correct floppy drive is attached. It won't work if you upgrade your computer to a hard drive, and it won't work if you plug earphones into the audio-out, etc.

The premise of this thread is that, because of the second type of hack, you need to perfectly emulate the bad picture quality of TV sets to get the on-screen look the developers intended; that every developer assumed the visuals would look some particular way.
What I'm saying is that actually the rest of the world got near-perfect picture quality, because the rest of the world had RGB output (we had Scart here in Europe, Japan had RGB-21), and that includes the home of most developers (Japan). Only US kids remember their childhood games looking different, because those poor kids were stuck with a bad TV standard.

Comment 100% would be interesting (Score 3, Insightful) 266

One, if there were a 100% failure rate dousing would have been abandoned years ago.

Actually, if the failure rate were exactly 100%, dowsing would be a valuable tool:
it would very reliably show where NOT to look for water, and by deduction you'd know to look for water in the remaining, un-dowsed places.

The real failure rate is presumably something very high, but not exactly 100%.
By random chance, you're bound to find water eventually.

The whole point of a scientific statistical test would be to see if the few successes occur as frequently as random chance, or if dowsing has a slightly higher success rate that could NOT be explained purely by random chance.
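That statistical test is just an exact binomial tail. A minimal sketch, with invented numbers for the base rate and the dowser's score:

```python
# Given n trials and k hits, how likely is at-least-k hits by pure chance?
from math import comb

def p_at_least(k, n, p_chance):
    """Exact binomial tail probability of >= k successes in n trials."""
    return sum(comb(n, i) * p_chance**i * (1 - p_chance)**(n - i)
               for i in range(k, n + 1))

# Say water is findable at 10% of random spots, and a dowser scores
# 4 hits out of 20 attempts:
p = p_at_least(4, 20, 0.10)
print(round(p, 3))   # ~0.133 -- not nearly rare enough to rule out luck
```

Only when that tail probability is tiny (conventionally below 0.05 or much less) would the successes count as evidence that dowsing beats random chance.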

Comment Again, what's the problem ? (Score 3, Interesting) 161

All the reasons you list could be "fixed in software".

The quotes around "software" mean that I'm referring to the firmware/microcode as a piece of software designed to run on top of the actual execution units of a CPU.

No, they cannot. OR the software will be terible slow , like 2-10 times slowdown.

Slow: yes, indeed. But not impossible.

What matters are the differences in the semantics of the instructions.
X86 instructions update flags. This adds dependencies between instructions. Most RISC processoers do not have flags at all.
This is semantics of instructions, and they differ between ISA's.

Yeah, I know pretty well that RISCs don't (all) have flags.
Now, again, how does that prevent the micro-code swap that dinkypoo refers to (and that was actually done on Transmeta's Crusoe)?
You'll just end up with a bigger, clunkier firmware that, for a given front-end instruction from the same ISA, translates into a bigger bunch of back-end micro-ops.
Yup, a RISC's ALU won't update flags. But what prevents the firmware from dispatching *SEVERAL* micro-ops? First one to do the base operation, and then additional instructions to update some register emulating the flags?
Yes, it's slower. But no, that doesn't make a micro-code-based change of supported ISA impossible, only less efficient.
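To make the several-micro-ops point concrete, here is a sketch of a flag-less back-end running an x86-style flag-setting ADD. The micro-op set and register names are a toy model invented for illustration, not any real microarchitecture:

```python
# A flag-less back-end can run a flag-setting ISA if the front-end expands
# one ADD into extra micro-ops that compute the flags into plain registers.
MASK32 = 0xFFFFFFFF

def expand_x86_add(dst, src):
    """Translate one 'ADD dst, src' into flag-less micro-ops."""
    return [
        ("add32",  dst, dst, src),        # the base operation
        ("carry",  "flags_c", dst),       # extra micro-op: recover carry
        ("iszero", "flags_z", dst),       # extra micro-op: zero flag
    ]

def run(uops, regs):
    """Execute the micro-ops on a register file (a plain dict)."""
    for op, d, *srcs in uops:
        if op == "add32":
            a, b = regs[srcs[0]], regs[srcs[1]]
            regs["_wide"] = a + b             # keep the unmasked sum around
            regs[d] = (a + b) & MASK32
        elif op == "carry":
            regs[d] = 1 if regs["_wide"] > MASK32 else 0
        elif op == "iszero":
            regs[d] = 1 if regs[srcs[0]] == 0 else 0
    return regs

regs = {"eax": 0xFFFFFFFF, "ebx": 1, "flags_c": 0, "flags_z": 0}
run(expand_x86_add("eax", "ebx"), regs)
print(regs["eax"], regs["flags_c"], regs["flags_z"])  # 0 1 1
```

One front-end instruction became three back-end micro-ops: slower, bigger, but perfectly possible, which is the whole argument.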

The backend, the micro-instrucions in x86 CPUs are different than the instructions in RISC CPU's. They differ in the small details I tried to explain.

Yes, and please explain how that makes it *definitely impossible* to run x86 instructions, and not merely *somewhat slower*?

Intel did this, they added x86 decoder to their first itanium chips. {...} But the perfromance was still so terrible that nobody ever used it to run x86 code, and then they created a software translator that translated x86 code into itanium code, and that was faster, though still too slow.

Slow, but still doable and done.

Now, keep in mind that:
- Itanium is a VLIW processor. That's an entirely different beast, with an entirely different approach to optimisation, and back during Itanium development the logic was "the compiler will handle the optimising". But back then such a magical compiler didn't exist, and anyway it didn't have the necessary information at compile time (some types of optimisation require information only available at run time; hence doable in microcode, not in a compiler).
Given the compilers available back then, VLIW sucked for almost anything except highly repetitive tasks. Thus it was somewhat popular for cluster nodes running massively parallel algorithms (and at some point VLIW was also popular in Radeon GFX cards). But VLIW sucked for pretty much anything else.
(Remember that, for example, GCC has only recently gained auto-vectorisation and well-performing Profile-Guided Optimisation.)
So "supporting the alternate x86 instruction set on Itanium was slow" has as much to do with "supporting an instruction set on a back-end that's not tailored for that front-end is slow" as it has to do with "Itanic sucks for pretty much everything which isn't a highly optimized kernel function in HPC".

But still, it proves that running a different ISA on a completely alien back-end is doable.
The weirdness of the back-end won't prevent it, only slow it down.

Luckily, by the time the Transmeta Crusoe arrived:
- knowledge of how to handle VLIW had advanced a bit; Crusoe had a back-end better tuned to run a CISC ISA.

Then, by the time Radeon arrived:
- compilers had gotten even better; GPUs are used for exactly the (only) class of task at which VLIW excels.

The backend of Crusoe was designed completely x86 on mind, all the execution units contained the small quirks in a manner which made it easy to emulate x86 with it. The backend of Crusoe contains things like {...} All these were made to make binary translation from x86 easy and reasonable fast.

Yeah, sure, of course they're going to optimise the back-end for what the CPU is most likely to run.
- Itanium was designed to run mainly IA-64 code, with a bit of support for older IA-32 just in case, in order to offer a little backward compatibility to help the transition. It was never thought of as an IA-32 workhorse. Thus the back-end was designed to run mostly IA-64 (and even that wasn't stellar, due to the weirdness of the VLIW architecture). But that didn't prevent the front-end from accepting IA-32 instructions.
- Crusoe was designed to run mainly IA-32 code. Thus the back-end was designed somewhat to run IA-32 code better.

BUT

To go back to what dinkypoo said: the back-end doesn't directly eat IA-32 instructions in any of the mentioned processors (neither recent Intels, nor AMDs, nor Itaniums, nor Crusoe, etc.); they all have back-ends that only consume the special micro-ops that the front-end feeds them after expanding the ISA (or the software front-end in the case of Crusoe, or the compiler-in-the-CPU in the case of Radeon). That's true, and you haven't disproved it (of course).

Also, according to dinkypoo, you should be able to replace the front-end without touching the back-end, and get a chip that supports a different instruction set (because the actual execution units never see it directly).
The Crusoe is a chip that did exactly that (thanks to the fact that its front-end was pure software), and it in fact HAD its microcode swapped: either as an in-lab experiment to run PowerPC instructions, or as a tool to help test x86_64 instructions.

So no, you're wrong: "you can't simply change the decoder" is false. You CAN simply change the decoder; it has been done.
Just don't expect stellar performance, depending on how far your back-end's way of working is from the target instruction set (x86_64 on a Crusoe vs. IA-32 on an Itanium).
It might be slow, but it works. You're wrong, dinkypoo is right; accept it.

Comment EU and Japan (Score 1) 167

The analog TV standard in Japan was NTSC with a different black level. This is why the Famicom and NTSC NES use the same 2C02 PPU, while PAL regions need a different 2C07 PPU.

Except that, in the eighties, virtually every TV sold in Europe had a Scart connector (mandatory on any TV sold after 1980; I don't remember ever having seen a TV without it), and TVs sold in Japan had an RGB-21 connector (technically similar; the connectors have the same physical shape but use slightly different pin-outs). That was simply the standard interconnect for plugging *any* consumer electronics into a TV outside of the US.

So starting with the Master System and Mega Drive (again, that's my first-hand experience; I'm not sure, but from what I've read around, it's also the case with the Super Famicom), everybody else in the world got an easy way to have nice graphics.

Only you, the consumer in the US, were stuck with no RGB on your TV set, combined with a video standard that completely mangles colours. Everybody else got to play the games without any composite artifacts.

The Genesis's pixel clock, on the other hand, was 15/8 times color burst. At that rate, patterns of thin vertical lines resulted in semi-transparent rainbow effects, which weren't quite as predictable as the but still fooled the TV into making more colors.

But those were only visible on TVs in the US. The other big markets (Japan, the EU) got the actual colours fed directly to the screen over the local default video interconnect (Scart or RGB-21).
Given that most of the games were produced in Japan, it's very likely that very few of the game developers designed their games specifically with the US' NTSC composite artifacts in mind.

So to go back to the beginning of this thread's discussion:

The graphics simply aren't meant to be seen in super clarity. You see all of the pixels, and the colors are overly bright and flat. It's just... wrong.

Nope. That's what you saw as a child because you grew up in the US and your local display standard was bad (only RF or composite available, and an NTSC standard that's bad for colours).
Users elsewhere mostly got the same image as emulators display. It was just a tiny bit blurrier, but we had the same colours (thanks to RGB being available nearly everywhere in the other major markets outside the US).
