Comment Re:Spies are sneaky (Score 1) 202

"Oddly no. One reason so few people die to terrorist incidents is that there's a lot of time and effort expended on preventing and mitigating against them."

There is absolutely no evidence to support that claim. As anti-terrorism efforts have heightened since 9/11, terrorism has increased relative to pre-9/11 levels, not decreased.

Comment Re:Why is the government funding this? (Score 1) 53

"Tor has done nothing but enable criminals ...read silk road trial?"

While technically Silk Road enabled people to break laws, it was formed as an act of civil disobedience to protest unjust laws. The fix for Silk Road is not to shut down privacy and Tor; the fix is to repeal the laws that protect the existence of a black market. The federal government has no right to tell individuals what they can and can't do with their own bodies. What is in the interest of the general welfare is to regulate manufacture and distribution the same way we do with food, and that can't happen so long as the substances are outlawed, thereby protecting the black market.

Next you'll point to insurance costs due to negative health effects. Then I'll point to knee and hip replacements that are the result of years of excess physical exercise and cost dramatically more, and suggest we outlaw jogging and shut down gyms, since that makes far more fiscal sense than outlawing drugs. Then I'll point to search and rescue workers, firemen, high-rise construction workers, and high-rise window washers, and suggest we outlaw the reckless decision to enter those jobs as well, since we are calling the cumulative result of preventing individuals from taking individual risks "the general welfare," and we are bound to do so if we are to remain logically consistent.

Comment Re:Conspiracies (Score 2) 53

"Am I the only one who thinks "the government" is actually made up of lots of independent minds, each with their own idealism and morality?"

That is the exact argument always levied against government conspiracies, and it has been debunked thanks to the work of WikiLeaks, Mr. Snowden, and the CIA revelation dumps. Anyone who didn't buy that argument, was sane and intelligent, and worked from the assumption that you could only trust those individuals as far as any random stranger from the private sector showing up at your door at night, would have come to the conclusion that these organizations were likely guilty of just about everything they turned out to be guilty of.

Government agencies might be populated with lots of independent minds, but they are all basically structured like militaries or corporations, with an executive structure. Every single day, every for-profit corporation in the US is strategizing to save costs, which translates into saving every penny at the expense of their workers that they think they can spin without pushback. Year after year, companies manage to slash health benefits, for example. You don't need a conspiracy: the organization has interests which conflict with those of its individual members, and it takes a significant portion of the population generally distrusting the organization and its spin to push back against those interests.

So, get back to me when it actually takes a significant portion of the American population conspiring to keep the illegal and unconstitutional spying revealed by Snowden going. At this point the government is so corrupt that even the vast majority of the American population finding out about its conspiracies and being pissed about it has had ZERO impact beyond blowing some wind.

Also, the Navy and the NSA are parts of one government organization. There is a reason the NSA director is always a high-ranking military officer. How exactly does it take a significant portion of the American population to undermine a project built by a small group of people and paid for by an organization whose interests conflict with those of the rest of us?

Comment Re:Uh, what? (Score 1) 91

"Well... whatever. How things work in the real world is that the graphics driver generates code in the ISA of the GPU, which the GPU then executes.

We won't see LLVM-in-hardware, for the same reason we don't see Java-in-hardware. Software compilers work well, and allow for hardware that's aimed at being really fast, not at accepting some inappropriate ISA. Also, that hardware wouldn't play nice with other APIs like Direct3D."

If the ISA of the GPU is bytecode, what stops the Direct3D SDK from generating it? It is the bytecode, not LLVM, that I'm suggesting be implemented in hardware: a common instruction set, similar to x86 in CPU land, which, just as in that space, exists only as a public interface that is translated almost immediately into optimized Intel or AMD underlying magic on the chip.

Or rather, an API-agnostic, open-specification bytecode that all APIs can target. The benefit is obvious: if Intel, AMD, and Nvidia all implement native hardware support for a common instruction set, it not only provides a compatibility blanket that dramatically simplifies building better APIs, it also dramatically simplifies producing drivers for their hardware.

Comment Re:Uh, what? (Score 1) 91

"Especially since the bytecode is supposed to be hardware neutral, it is the compilation from bytecode that will have to do the aggresive optimizations to adapt to the target architecture."

This is a confusion of terms. Personally I blame Sun. An interpreter IS a form of compiler; it is the term used to refer to compilation at run time, which is exactly what happens here. That bytecode won't be interpreted or compiled before you open the game; therefore it is runtime compilation; therefore it is interpreting. JIT compilation == interpretation. The only difference between a compiler and an interpreter is when they perform compilation. What is happening here is one step in the SDK (compilation to bytecode) and one step in the graphics driver (bytecode compiled at runtime, aka interpreted, and then executed). Although both steps are technically compilation, classically you'd call the first compilation, because all the work is done in advance, and the second interpretation, because the work happens just before execution.
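To make the point concrete, here's a toy sketch (my own made-up two-operand stack machine, not any real driver's bytecode): the translation step is identical whether you run it well in advance or just before execution; only the timing differs.

```python
# Toy illustration: the same bytecode-to-native translation step,
# run either ahead of time ("compilation") or at runtime ("interpretation").
# The bytecode targets a made-up stack machine.

def translate(bytecode):
    """Translate bytecode into a native (here, Python) callable -- the 'compiler'."""
    ops = {
        "LOAD": lambda stack, arg: stack.append(arg),
        "ADD":  lambda stack, _: stack.append(stack.pop() + stack.pop()),
        "MUL":  lambda stack, _: stack.append(stack.pop() * stack.pop()),
    }
    def run():
        stack = []
        for op, arg in bytecode:
            ops[op](stack, arg)
        return stack.pop()
    return run

program = [("LOAD", 6), ("LOAD", 7), ("MUL", None)]

# "Compiled": translated well in advance, executed later.
native = translate(program)
print(native())              # 42

# "Interpreted": translated just before execution -- same step, later timing.
print(translate(program)())  # 42
```

Either way the translation is the same work; "compiled vs interpreted" only names when it happens.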

"building native execution of the bytecode would be fastest

Why not call this what it is? It's compilation."

I suppose if you count translating the machine code of the interpreter into logic gates, and then physically building those gates on a wafer, as "compiling," then yes, that would be compilation.

As for hardware neutral: the API is a standard, hardware-neutral interface for developers. What difference does it make whether the step which interprets the bytecode is executed in bare hardware or in slower software? All software can be converted 1:1 to hardware. There are no optimizations that could be done here which wouldn't be comparable, or even more efficient, implemented directly in hardware. Initially it'd be a kludgy add-on stealing chip die space (although not much in today's terms), but later the cards would be specifically designed to optimize the execution of that simple bytecode through the entire pipeline.

Comment Re:Uh, what? (Score 1) 91

"I can't tell if you're just being obtuse, but: the developer compiles shader language to bytecode, and the graphics driver compiles bytecode to GPU native-code. Both of these stages qualify as compilation. (They're both level-reducing language-transformations.)"

Let me put this another way. Bytecode is machine code for an imaginary machine; GPU native code is machine code for an actual machine. There is no level reduction occurring when interpreting bytecode: both are already machine code, and what happens is a translation from one instruction set to another compatible instruction set. Interpreters are a form of compiler designed to run at runtime rather than well in advance; modern interpreters are JIT compilers. The JVM, for instance, is an interpreter.

If you start confusing the typical convention of referring to "compiled" vs "interpreted" with the fact that, technically, all the things being referred to are compilers, it gets confusing. There is greater specificity in saying that the bytecode in this case is run through an interpreter, and even more specificity in saying that the design of that interpreter is one of JIT compilation (although the term mostly exists as a form of geek marketing, to avoid the negative stigma of the word "interpret").
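CPython is a handy concrete example of "machine code for an imaginary machine": `compile()` produces bytecode for CPython's imaginary stack machine, and the interpreter executes it at runtime. (The exact opcodes printed vary between CPython versions.)

```python
import dis

# Compile source ahead of time to bytecode: machine code for
# CPython's imaginary stack machine.
code = compile("x * y + 1", "<example>", "eval")

# The raw bytecode is just bytes...
print(list(code.co_code)[:8])

# ...and dis shows them as instructions of that imaginary machine
# (opcode names differ across CPython versions).
dis.dis(code)

# The interpreter then executes the bytecode at runtime.
print(eval(code, {"x": 6, "y": 7}))  # 43
```

No "level reduction" happens at `eval` time beyond translating one instruction set (CPython's) into the host machine's; the source-to-bytecode step already happened up front.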

"building native execution of the bytecode would be fastest

Why not call this what it is? It's compilation."

I'm not avoiding calling the translation compilation; as I clarified above, this is runtime compilation, aka interpretation. I'm proposing that it would be faster to make the imaginary machine's instruction set the instruction set physically implemented on the chip. As an intermediate, but still ridiculously fast, step, they could add a handful of gates and perform the translation on the chip. The compiler would then be part of the SDK rather than part of the driver, and you'd have compile-once-run-everywhere shader code, with the ability to hand-optimize available to every developer.

It represents an excellent bit of bait to eventually get all GPUs to implement a standards-based shader instruction set, much like Intel and AMD both target the same CPU instruction set.

Comment Re:Uh, what? (Score 0) 91

"No. There's no way in hell that anyone's seriously suggesting running graphics code in an interpreter. Again, it will be compiled by the graphics driver. (We could call this 'JIT compilation', but this term doesn't seem to have caught on in the context of graphics.)"

JIT compilation: an interpreter is a runtime compiler, nothing more, nothing less, and JIT compilation is a form of interpretation. No modern interpreter sits and converts to native code line by line during execution; they COMPILE to native code at runtime and then execute that. The only performance benefit of compilation over interpretation is start-up time, since executing compiled code is not necessarily faster.

The Perl interpreter is a good example. People tend to suck at writing fast Perl code, but someone who actually understands the language can write a Perl solution that will rival or beat a C implementation for most problems. You can compile Perl implementations to native binaries, and the interpreter typically compiles to an intermediate bytecode; you can compile to that bytecode in advance as well and run it with the interpreter, the same way you run Java bytecode on its interpreter, aka the Java virtual machine. The only reason we do the bytecode thing at all is that it's machine code for an imaginary machine that is extremely efficient to interpret.
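That "compile to bytecode in advance, execute later with the interpreter" split has a stdlib Python equivalent (a sketch using `marshal`, which is how `.pyc` files serialize code objects):

```python
import marshal

# Ahead of time: compile source to bytecode and serialize it,
# the way a .pyc file does.
frozen = marshal.dumps(compile("result = 6 * 7", "<mod>", "exec"))

# Later, at runtime: deserialize the bytecode and hand it to the
# interpreter, which executes it without ever seeing the source.
code = marshal.loads(frozen)
ns = {}
exec(code, ns)
print(ns["result"])  # 42
```

The interpreter here never parses source at runtime; it only executes pre-built bytecode, which is exactly the division of labor being proposed for shaders.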

"Why not call this what it is? It's compilation."

Bytecode is the native machine-code language of an imaginary machine. When I say native execution, I mean: alter the GPU to speak that machine code as its native instruction set... in the silicon.

An interpreter is a form of compiler that runs at runtime rather than in advance; a JIT compiler is a form of interpreter design; and cloud computing is just the current evolution of clustered computing plugged into an internet connection, while clusters in turn are nothing more than the distributed computing platforms built before them. You can make up new words all day long, but let's stop pretending these things are NEW and not just the progressive realization of computing concepts that were invented in the '50s. Now get off my lawn.

Comment Re:Uh, what? (Score -1) 91

" an LLVM-based bytecode for its shading language to remove the need for a compiler from the graphics drivers

This removes the need for a shader language parser in the graphics driver. It still needs a compiler, unless you think the GPU is going to natively execute the bytecode."

This would do exactly the opposite. You don't compile bytecode, you compile to bytecode; the entire point is that bytecode is interpreted at runtime. GPU vendors could put this in the driver, but yes, building native execution of the bytecode would be fastest. Vulkan would be the one to provide a compiler.

" If you remove the compiler from a modern GPU driver, then there's very little left..."

How is that a bad thing?
