
Comment expensive but worth it (Score 1) 95

Any of the Sennheiser or Shure wireless packs.

Countryman E6-type headset. Best ever.

For $700-$800+ you've solved the microphone problem. Look for used gear and you might save half.

Maybe a Shure BLX14, $300

An Audio-Technica System 8 might satisfy your needs, $200 +/-

Then go fix your speaker placement and EQ the room. The Countryman likes a slight cut at 600 Hz for vocals; choose the capsule cover carefully. The A-T mic I don't know well.
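To make the "slight cut at 600 Hz" advice concrete, here is a rough sketch of what such a cut looks like across the spectrum, using a simple bell curve on a log-frequency axis. This is only an illustration of the shape, not a real biquad filter design, and the depth/width numbers are assumptions.

```python
import math

def bell_cut_db(freq_hz, center_hz=600.0, depth_db=-3.0, width_octaves=1.0):
    """Illustrative EQ bell: gain in dB at freq_hz for a cut centered
    at center_hz. A sketch of the curve shape, not a filter design."""
    octaves = math.log2(freq_hz / center_hz)
    return depth_db * math.exp(-(octaves / width_octaves) ** 2)

# Deepest at the center frequency, fading back to ~0 dB within a couple
# of octaves on either side.
for f in (300, 600, 1200, 2400):
    print(f"{f:5d} Hz: {bell_cut_db(f):+.2f} dB")
```

On a real mixer you would dial this in with a parametric EQ band: center frequency, gain, and Q (bandwidth) map directly onto the three parameters above.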

Submission + - Scientists Create Artificial Sunlight Real Enough To Trick the Brain 1

HughPickens.com writes: Navanshu Agarwal writes that Italian scientists have developed an artificial LED sunlight system that looks just like real daylight streaming through a skylight. The LED skylight uses a thin coating of nanoparticles to recreate the effect that makes the sky blue, known as Rayleigh scattering; it doesn't just light up a room but produces the texture and feel of sunlight. Paolo Di Trapani, one of the scientists who worked on the device, believes that the skylight will allow developers of the future not just to build up, but also to build far below the ground, without any of the dinginess that currently keeps us above ground.

CoeLux hopes to treat seasonal affective disorder, or SAD. Each year, some 10 million Americans, mostly women, find themselves sinking into a heavy malaise during the wintertime. CoeLux hopes its LED bulbs, which create the illusion of infinitely tall, bright blue skies, will help trick the brains of people with SAD, ridding them of their blues.
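The Rayleigh-scattering effect that the nanoparticle coating reproduces has a simple numerical signature: scattered intensity scales as 1/wavelength^4, which is why short blue wavelengths dominate the sky. A quick sketch (the representative wavelengths are assumptions):

```python
# Rayleigh scattering intensity scales as 1/wavelength^4.
# Representative wavelengths: blue ~450 nm, red ~650 nm.
blue_nm, red_nm = 450.0, 650.0

# Ratio of blue to red scattering: (650/450)^4
ratio = (red_nm / blue_nm) ** 4
print(f"Blue light scatters ~{ratio:.1f}x more strongly than red")  # ~4.4x
```

That factor of roughly four is what tilts diffusely scattered daylight toward blue, and it is the effect a coating of suitably sized nanoparticles can mimic.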

Comment To which I say, "duh?" (Score 2) 247

from my blog on this, just now:

Proponents of refactoring have never ever said otherwise (unless they themselves are confused on the matter). Code is only readable if it is simple, or clearly follows design patterns, or is clearly commented with the comments kept up to date with the current version of the code. Code is only easy to change when it is readable and when all external dependencies are well known. That last part is a key thing that metrics aren't necessarily able to capture.

A refactoring project that does not refactor to the right design patterns, the ones that address what was wrong with the structure in the first place, is not going to improve it. One must know clearly why the current structure makes a bug fix or a new feature difficult to implement.

And while some refactorings are 'good' in that they reduce a lot of copy-paste code, others are good because they add code, or add classes (an alternative increase in complexity). Different refactorings have different effects, and are used in different situations.

And as always, if you don't need to refactor, don't. A refactoring is to improve the design, not to rewrite for its own sake.

And therein lies the great flaw of the whole idea of such a study: you can't measure the quality of a software design. Some things you just have to judge for yourself, based on experience and attention, and no arbitrary metrics number will ever differentiate between a good design and a rubbish heap.

Disclaimer: I hate software metrics.

Submission + - White House issues veto threat as House prepares to vote on EPA 'secret science' (sciencemag.org)

sciencehabit writes: The U.S. House of Representatives could vote as early as this week to approve two controversial, Republican-backed bills that would change how the U.S. Environmental Protection Agency (EPA) uses science and scientific advice to inform its policies. Many Democrats, scientific organizations, and environmental groups are pushing back, calling the bills thinly veiled attempts to weaken future regulations and favor industry. White House advisors today announced that they will recommend that President Barack Obama veto the bills if they reach his desk in their current form.

Comment Re:Uh, what? (Score 1) 91

"Especially since the bytecode is supposed to be hardware neutral, it is the compilation from bytecode that will have to do the aggresive optimizations to adapt to the target architecture."

This is a confusion of terms. Personally I blame Sun. An interpreter IS a form of compiler; the term refers to compilation at run time, which is exactly what happens here. That bytecode won't be interpreted or compiled until you open the game; therefore it is runtime compilation, therefore it is interpreting. JIT compilation == interpreted. The only difference between a compiler and an interpreter is when they perform compilation. What is happening here is one step in the SDK (compilation to bytecode) and one step in the graphics driver (bytecode compiled at runtime, aka interpreted, and then executed). Although both are technically compilation, classically you'd call one compilation, because all the work is done in advance, and the other interpretation, because the work happens just before execution.
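The comment's claim, that an interpreter and a JIT do the same translation at different times, can be sketched with a toy two-op stack bytecode. Everything here (the opcodes, the program) is invented for illustration; it is not SPIR-V or any real shader IR.

```python
# Made-up bytecode: (opcode, argument) pairs for a tiny stack machine.
PUSH, ADD = 0, 1

def interpret(bytecode):
    """Execute the bytecode directly at run time (classic interpretation)."""
    stack = []
    for op, arg in bytecode:
        if op == PUSH:
            stack.append(arg)
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

def jit_compile(bytecode):
    """Translate the same bytecode once, at run time, into host code
    (here: a Python lambda), then execute that -- the JIT flavour."""
    exprs = []
    for op, arg in bytecode:
        if op == PUSH:
            exprs.append(str(arg))
        elif op == ADD:
            b, a = exprs.pop(), exprs.pop()
            exprs.append(f"({a} + {b})")
    return eval(f"lambda: {exprs.pop()}")

program = [(PUSH, 2), (PUSH, 3), (ADD, None), (PUSH, 5), (ADD, None)]
print(interpret(program))       # 10
print(jit_compile(program)())   # 10 -- same answer, translated first
```

Both paths perform the same instruction-set translation; the only difference is whether it happens per-instruction during execution or in one batch just before.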

"building native execution of the bytecode would be fastest

Why not call this what it is? It's compilation."

I suppose if you count translating the machine code of the interpreter into logic gates and then physically building those gates on a wafer as "compiling", then that would be compilation.

As for hardware neutral, the API is a standard hardware-neutral interface for developers. What difference does it make whether the step that interprets the bytecode is executed as bare hardware or as slower software? All software can be converted 1:1 to hardware. There are no optimizations possible here that wouldn't be comparable, or even more efficient, implemented directly in hardware. Initially it'd be a kludgy add-on stealing chip die space (although not much in today's terms), but later the cards would be specifically designed to optimize the execution of that simple bytecode through the entire pipeline.

Snowden Reportedly In Talks To Return To US To Face Trial 671

HughPickens.com writes: The Globe and Mail reports that Edward Snowden's Russian lawyer, Anatoly Kucherena, says the fugitive former U.S. spy agency contractor is working with American and German lawyers to return home. "I won't keep it secret that he wants to return back home. And we are doing everything possible now to solve this issue. There is a group of U.S. lawyers, there is also a group of German lawyers and I'm dealing with it on the Russian side." Kucherena added that Snowden is ready to return to the States, but on the condition that he is given a guarantee of a legal and impartial trial. The lawyer said Snowden had so far only received a guarantee from the U.S. Attorney General that he will not face the death penalty. Kucherena says Snowden is able to travel outside Russia since he has a three-year Russian residency permit, but "I suspect that as soon as he leaves Russia, he will be taken to the U.S. embassy."

Comment Re:Uh, what? (Score 1) 91

"I can't tell if you're just being obtuse, but: the developer compiles shader language to bytecode, and the graphics driver compiles bytecode to GPU native-code. Both of these stages qualify as compilation. (They're both level-reducing language-transformations.)"

Let me put this another way. Bytecode is machine code for an imaginary machine; GPU native code is machine code for an actual machine. There is no level reduction occurring when interpreting bytecode: both are already machine code, and there is a translation from one instruction set to another compatible instruction set. Interpreters are a form of compiler designed to run at runtime rather than well in advance; modern interpreters are JIT compilers. The JVM, for instance, is an interpreter.

If you confuse the conventional compiled-vs-interpreted distinction with the fact that, technically, all of the things you are referring to are compilers, it gets confusing. There is greater specificity in saying that the bytecode in this case is run through an interpreter, and even more in saying that the design of that interpreter is one of JIT compilation (although the term mostly exists as a form of geek marketing, to avoid the negative stigma of the word "interpret").

"building native execution of the bytecode would be fastest

Why not call this what it is? It's compilation."

I'm not avoiding calling the translation compilation; as I clarified above, this is runtime compilation, aka interpretation. I'm proposing that it would be faster to make the imaginary machine's instruction set the instruction set physically implemented on the chip. As an intermediate, but still ridiculously fast, step they could add a handful of gates and perform the translation on the chip. The compiler would then be part of the SDK rather than part of the driver, and you'd have compile-once-run-everywhere shader code with the ability to hand-optimize available to every developer.
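The "handful of gates" translation step the comment proposes amounts to a fixed one-for-one remapping from a standard bytecode instruction set to a vendor's native opcodes. A minimal sketch; every opcode name and encoding here is invented:

```python
# Hypothetical fixed lookup from a standard shader bytecode to one
# vendor's native opcodes -- the kind of job a small decode table in
# silicon (or a thin driver shim) could do. All names are made up.
STANDARD_TO_NATIVE = {
    "V_ADD": 0x01,   # vector add
    "V_MUL": 0x02,   # vector multiply
    "V_DOT": 0x07,   # dot product
}

def translate(standard_ops):
    """One-for-one opcode remapping: no parsing, no optimization,
    just a table walk per instruction."""
    return [STANDARD_TO_NATIVE[op] for op in standard_ops]

print(translate(["V_ADD", "V_DOT"]))  # [1, 7]
```

Because the mapping is stateless and per-instruction, it is exactly the sort of thing that can migrate from software into a decode stage on the chip, which is the comment's point.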

It represents an excellent bit of bait to eventually get all GPUs to implement a standards-based shader instruction set, much as Intel and AMD both target the same CPU instruction set.

Comment Re:Uh, what? (Score 0) 91

"No. There's no way in hell that anyone's seriously suggesting running graphics code in an interpreter. Again, it will be compiled by the graphics driver. (We could call this 'JIT compilation', but this term doesn't seem to have caught on in the context of graphics.)"

JIT compilation: an interpreter is a runtime compiler, nothing more, nothing less, and JIT compilation is a form of interpretation. No modern interpreter sits and converts to native code line by line during execution; they COMPILE to native code at runtime and then execute that. The only performance benefit of compilation over interpretation is start-up time; once running, compiled code is not necessarily faster.

The Perl interpreter is a good example. People tend to suck at writing fast Perl code, but someone who actually understands the language can write a Perl solution that will rival or beat a C implementation for most problems. You can compile Perl implementations to native binaries, and the interpreter typically compiles to an intermediate bytecode; you can compile to that bytecode in advance as well and run it with the interpreter, the same way you run Java bytecode on its interpreter, aka the Java virtual machine. The only reason we do the bytecode thing at all is that it's machine code for an imaginary machine that is extremely efficient to interpret.
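Python itself illustrates the compile-in-advance-then-interpret pipeline the comment describes for Perl and Java: source is compiled to bytecode for an imaginary stack machine, and that code object can be produced ahead of time and executed later. A small sketch using only the built-in `compile`, `eval`, and `dis`:

```python
import dis

# Compile step: source text -> bytecode for CPython's stack machine.
code = compile("x * 2 + 1", "<expr>", "eval")

# Execute step: the interpreter runs the pre-compiled bytecode.
print(eval(code, {"x": 20}))  # 41

# The intermediate bytecode is a real artifact you can inspect.
dis.dis(code)
```

The `code` object here plays the same role as Java's `.class` file or Perl's internal op tree: a machine code for an imaginary machine, cheap to produce once and cheap to interpret.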

"Why not call this what it is? It's compilation."

Bytecode is the native machine-code language of an imaginary machine; when I say native execution I mean altering the GPU to speak that machine code as its native instruction set... in the silicon.

An interpreter is a form of compiler that runs at runtime rather than in advance; a JIT compiler is a form of interpreter design; and cloud computing is just the current evolution of clustered computing plugged into an internet connection, while clusters in turn are nothing more than the distributed computing platforms built before them. You can make up new words all day long, but let's stop pretending these things are NEW and not just the progressive realization of computing concepts that were invented in the '50s. Now get off my lawn.

Comment Re:Are we looking through the center... (Score 1) 157

We don't know that the speed of light was always as it is now.

For the speed of light to vary, either photons must have mass (so they no longer need to move at c), or the constant of nature c would need to vary over spacetime. Either of these would have massive effects on pretty much everything: photons mediate the electromagnetic force, which not only underlies all of chemistry but combines with the strong interaction to define which elements are stable and how much energy nuclear reactions release, while c defines the very structure of causality itself.

So frankly, in this case "it's an alien movie" is a more likely explanation.

Comment Re:FDE on Android doesn't work as of yet (Score 1) 124

Latency and bandwidth are distinct measurements.

But they aren't independent. A device with high bandwidth and high latency must be massively parallel (for a sequential device, bandwidth is simply the inverse of latency) and must have large internal buffers to hold all the data being processed. That seems pretty unlikely for an implementation of such a simple algorithm, unless of course the implementation is purposefully broken.
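The relationship the comment leans on can be sketched numerically: a strictly sequential device's bandwidth is the inverse of its per-operation latency, and sustaining higher bandwidth at that latency forces in-flight parallelism (Little's law: operations in flight = bandwidth x latency). The numbers below are invented for illustration:

```python
# Invented figures: a device with 10 ms per-operation latency.
latency_s = 0.010

# Sequential ceiling: one operation at a time.
sequential_bw = 1 / latency_s           # 100 ops/s

# To claim 10,000 ops/s at the same latency, the device must keep
# bandwidth * latency operations in flight simultaneously.
target_bw = 10_000
in_flight = target_bw * latency_s       # 100 concurrent operations

print(f"sequential ceiling: {sequential_bw:.0f} ops/s")
print(f"required parallelism: {in_flight:.0f} ops in flight")
```

So a benchmark showing high throughput alongside high latency is implicitly claiming a wide, deeply buffered implementation, which is the commenter's reason for suspicion.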

Comment Re:Uh, what? (Score -1) 91

" an LLVM-based bytecode for its shading language to remove the need for a compiler from the graphics drivers

This removes the need for a shader language parser in the graphics driver. It still needs a compiler, unless you think the GPU is going to natively execute the bytecode."

This would do exactly the opposite. You don't compile bytecode; you compile to bytecode. The entire point is that bytecode is interpreted at runtime. GPU vendors could put this in the driver, but yes, building native execution of the bytecode would be fastest. Vulkan would be the one to provide a compiler.

" If you remove the compiler from a modern GPU driver, then there's very little left..."

How is that a bad thing?
