Comment Re:Cronies, thugs, and dictators, oh my! (Score 1) 70

It's like they have faith in their government taking their money and using it for the public good rather than allowing them to hoard it for themselves. I think your statement is meant to be an indictment of communism. But all you've demonstrated is that individuals who hoard money are bad and that the people need to be empowered to enforce redistribution of wealth, with force as a deterrent for this type of behavior.

Comment Re:Hours and hours (Score 1) 91

Bear in mind that this is not ray tracing. NVidia's backend server is obviously using a path tracing algorithm, based on the videos; the images start "grainy" and then clear up as they are streamed. Path tracing works like ray tracing with a huge sampling rate, shooting perhaps 30 rays per pixel. Moreover, whereas ray tracers only have to compute rays recursively when they strike a reflective/refractive surface, path tracers always recurse, usually around 5-10 times, for each of the 30 rays per pixel. (The "graininess" occurs because not enough samples have been taken yet; as more samples are taken, it goes away.)
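To make the cost concrete, here is a minimal sketch of the per-pixel loop a path tracer runs (Python, with a hypothetical scene API; the sample count and bounce depth are just the figures from above, not anything NVidia has published):

SAMPLES_PER_PIXEL = 30   # the "huge sampling rate"
MAX_BOUNCES = 5          # every sample is followed this deep

def shade_pixel(x, y, scene):
    # Average many independent samples; too few samples shows up as grain.
    total = 0.0
    for _ in range(SAMPLES_PER_PIXEL):
        ray = scene.camera_ray(x, y)         # hypothetical scene API
        color, throughput = 0.0, 1.0
        for _ in range(MAX_BOUNCES):
            hit = scene.intersect(ray)
            if hit is None:
                break
            color += throughput * hit.emitted
            throughput *= hit.reflectance
            ray = hit.random_bounce()        # always recurses, unlike a ray tracer
        total += color
    return total / SAMPLES_PER_PIXEL         # more samples, less grain

A classic Whitted-style ray tracer effectively runs that inner loop once per pixel and only spawns secondary rays at mirrors and glass, which is where the rough multipliers below come from.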

There's also a good chance this is a bidirectional path tracer. There's not enough footage of caustics to tell for sure, but most rendering engines these days use this technique as well. In that case there is an entirely separate phase consisting of mapping light onto surfaces. This sampling is done before the path tracer actually renders, and is about as computationally intensive.

So a path tracer is around 150x more computationally intensive than a ray tracer, and possibly up to 300x for bidirectional path tracers. While "neat, I can make a translucent cube and change its refractive index" is certainly computationally easy enough for a cell phone, the hardware simply isn't appropriate for path tracing algorithms, especially with scenes of any degree of complexity. NVidia seems to be specifically marketing this at the photorealistic rendering market (although I'm not sure how big that is). POV-Ray in its DOS days (a simple raytracer at the time, although now it supports more advanced rendering features) isn't really in this league.

Upgrades

Theora 1.1 (Thusnelda) Is Released 184

SD-Arcadia writes to tell us that Theora 1.1 has officially been released. It features improved encoding, providing better video quality for a given file size, a faster decoder, bitrate controls to help with streaming, and two-pass encoding. "The new rate control module hits its target much more accurately and obeys strict buffer constraints, including dropping frames if necessary. The latter is needed to enable live streaming without disconnecting users or pausing to buffer during sudden motion. Obeying these constraints can yield substantially worse quality than the 1.0 encoder, whose rate control did not obey any such constraints, and often landed only in the vague neighborhood of the desired rate target. The new --soft-target option can relax a few of these constraints, but the new two-pass rate control mode gives quality approaching full 'constant quality' mode with a predictable output size. This should be the preferred encoding method when not doing live streaming. Two-pass may also be used with finite buffer constraints, for non-live streaming." A detailed writeup on the new release has been posted at Mozilla.
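To illustrate what obeying strict buffer constraints, including dropping frames, means in practice, here is a toy leaky-bucket rate controller in Python. This is only an illustration of the general technique, not Theora 1.1's actual rate-control module:

def rate_control(frame_sizes, bitrate, fps, buffer_bits):
    # Toy leaky bucket: decide which frames to send and which to drop.
    budget = bitrate / fps       # bits drained from the buffer per frame slot
    fill = 0.0
    sent = []
    for i, size in enumerate(frame_sizes):
        fill = max(0.0, fill - budget)
        if fill + size > buffer_bits:
            continue             # drop the frame rather than stall the stream
        fill += size
        sent.append(i)
    return sent

# A sudden-motion spike of oversized frames gets dropped, not buffered:
print(rate_control([4000, 4000, 20000, 20000, 4000],
                   bitrate=120000, fps=30, buffer_bits=12000))   # -> [0, 1, 4]

A two-pass encoder avoids most of those drops by measuring the whole file on the first pass and spreading bits accordingly on the second, which is why it approaches constant-quality output at a predictable size.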
Input Devices

Nintendo Working On Football Controller 123

Siliconera found patent filings from Nintendo for a football controller add-on that will work with the Wii. After tucking the Wii Remote into a lateral slot on the football, you slip your hand through a strap so that your fingers touch the Remote's buttons. Then you mimic running and throwing motions, which are interpreted via the accelerometer. 'The pitch angle and force of the throw determines the trajectory arc of the throw. Side to side motion determines the yaw angle. Pressing buttons on the Wii remote can adjust other options.' The device is described as 'squishy,' so your TV is probably safe, but I'd try it at a friend's house first.
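As a rough sketch of how those readings could map to a throw (my guess at the math, not anything from the patent filing), pitch and force fix the arc while lateral motion fixes the yaw:

import math

def throw_trajectory(pitch_deg, force, yaw_deg, g=9.8):
    # Toy mapping from swing readings to a ball's flight; all names and
    # units here are made up, not Nintendo's.
    v = force                                   # pretend force ~ launch speed
    pitch, yaw = math.radians(pitch_deg), math.radians(yaw_deg)
    vx = v * math.cos(pitch) * math.cos(yaw)    # downfield speed
    vz = v * math.cos(pitch) * math.sin(yaw)    # sideways drift from yaw
    vy = v * math.sin(pitch)                    # height of the arc
    hang_time = 2 * vy / g
    return vx * hang_time, vz * hang_time, hang_time  # distance, drift, airtime

print(throw_trajectory(pitch_deg=35, force=20, yaw_deg=5))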

Comment Once, with a build system... (Score 1) 683

At a previous job of mine, I was working with the SCons build system; it's basically Make, but written in Python. It's actually really nice if you know Python, but also fairly slow. In it, every filesystem object (file or directory) is maintained as a "Node" in a big dependency graph.

Anyway, the project was using an old version of SCons along with lots of legacy code, and with this version, for some reason, adding my build script produced a conflict: a Node in the build system representing one file was initialized twice, once as a directory and once as a file (it was actually a file).

Nowhere in my build script was this file even referenced; it wasn't even a dependency of any of the stuff being generated by my code. After hours of trying to find what was causing the conflict, I eventually figured out I could call File("theFile") to (sort of) "cast" the Node to a File in the build system, and it would work. To this day I couldn't tell you how that's implemented, and I have no idea why it worked. :)
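For anyone who hasn't used SCons, the workaround looked roughly like this (reconstructed from memory, with placeholder names; File() and Dir() are SCons's real node factories):

# SConstruct (sketch; "theFile" stands in for the real path)
env = Environment()

# SCons tracks every path as a Node in its graph; File() returns the
# Node for a path, typed as a file. Merely requesting the path this way
# stopped the legacy code from initializing it as a directory Node:
File("theFile")

env.Program("myprog", ["main.c"])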

Comment Re:Wha...? (Score 1) 251

After doing some fishing around in the OS X version, I've noticed the main problem here:

[~ : jlatane]% ps -x | grep Chrome
571 ?? 0:00.90 /Applications/Google Chrome.app/Contents/MacOS/Google Chrome -psn_0_315469
573 ?? 0:01.09 /Applications/Google Chrome.app/Contents/MacOS/Google Chrome --lang=en --type=renderer --channel=571.1a638f0.1327077787
589 ttys000 0:00.00 grep Chrome

The renderer is the same binary as the main process, but with some different flags used. I don't quite see why they're doing it this way, as having a separate image for the renderers would be much more efficient. In fact, the only reason not to use a separate image is so that they can just fork() rather than fork()/exec(), but the fact that the command line arguments are different for each process indicates that's not happening anyway. They could definitely reduce the time to create tabs even further, as the image size of a simple renderer would be much smaller than that of the full application. Also, they wouldn't have to link the renderers against Frameworks that expect UI events (although, depending on the layout of their code, this could potentially be resolved with lazy linking). Speaking of which, I think you meant "Carbon" when you said "Cocoa":

[~ : jlatane]% otool -L /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome | grep Cocoa
[~ : jlatane]% otool -L /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome | grep Carbon
/System/Library/Frameworks/Carbon.framework/Versions/A/Carbon (compatibility version 2.0.0, current version 136.0.0)

Of course, this is just me taking a quick look at their linking setup. The fact that they've got a 26MB image that they're essentially just duplicating for each new tab is a little troubling; why didn't they, at the very least, separate WebKit into its own library/private Framework rather than statically linking it in? The only possible performance benefit of this design is not waiting for dyld to resolve runtime search path information (on OS X), but that's certainly outweighed by the delay of copying such large images. It all seems far too amateurish for Google to me.
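For reference, this is the fork()-only vs. fork()/exec() distinction I mean, sketched in Python (os.fork and os.execv map directly onto the BSD calls; /usr/bin/true is just a stand-in for a slim renderer binary):

import os

pid = os.fork()
if pid == 0:
    # Child after a bare fork(): a copy-on-write clone of the parent's
    # entire image -- this is where all 26MB of Chrome gets duplicated.
    # An exec() instead replaces that image with a (smaller) new one:
    os.execv("/usr/bin/true", ["true", "--type=renderer"])  # stand-in
else:
    os.waitpid(pid, 0)
    print("renderer stand-in ran and exited")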

Comment Re:10 gigs? (Score 2, Informative) 81

Well, first off, dependencies live, much more often than not, in the "Frameworks" directories rather than loose in the "Library" directories. Check /System/Library/Frameworks for the important core Mac OS X frameworks and /Library/Frameworks for your basic system frameworks. You've probably also got a ~/Library/Frameworks directory, but there's probably nothing interesting in there unless you're a developer. The rest of the "Library" directories consist more of non-reusable stuff.

However, plenty of applications do just bundle their own versions of dependencies. Just taking a glance around my system, 26.9MB of Adium's 60.2MB consists of the "Frameworks" directory inside Adium. 122.2MB of iWeb is Frameworks, many of which would probably be useful if they were universally available to developers (FTPKit?). Open source (and open-source-based) applications tend to be the worst about this, since they have a habit of packaging large parts of the Linux ecosystem; minor incompatibilities with OS X's BSD-grounded system make proper ports less convenient. Having both Crossover and Crossover Games take up so much space with so many identical dependencies is just silly. Other notable applications on this front include Battle for Wesnoth and OOo.
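Those numbers are easy to reproduce yourself; here's a quick Python sketch of the sort of scan I did (nothing app-specific about it):

import os

def dir_size(path):
    # Sum the sizes of every file under path.
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass
    return total

for app in sorted(os.listdir("/Applications")):
    fw = os.path.join("/Applications", app, "Contents", "Frameworks")
    if os.path.isdir(fw):
        print("%-30s %6.1f MB of bundled frameworks" % (app, dir_size(fw) / 1e6))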

Across all applications, localizations are a bit more of a problem, as you said. An even bigger problem is that binaries are often larger simply because they're written in Obj-C; Obj-C supports some very, very cool runtime features not available in any other compiled language, but they add considerably to the binary size.

In general, though, you're right - OS X is far better than Windows about sharing dependencies properly, but there's pretty much no way to get the tight dependency management Ubuntu/Fedora/openSUSE have without a repository-based package manager, which is an entirely different software management philosophy. (Although the idealist in me likes to hope otherwise, that model doesn't really foster the develop-something-good-and-make-money-quickly environment that I like about Mac OS X, since it puts such a big barrier between you and your users.)

Debian

Debian Gets FreeBSD Kernel Support 425

mu22le writes "Today Debian gets one step closer to really becoming 'the universal operating system' by adding two architectures based on the FreeBSD kernel to the unstable archive. This does not mean that the Debian project is ditching the Linux kernel; Debian users will be able to choose which kernel they want to install (at least on the i386 and amd64 architectures) and get more or less the same Debian operating system they are used to. This makes Debian the first distribution, and probably the first large OS, to support two completely different kernels at the same time."

Comment Futurama! (Score 1) 1397

Not just my servers, but all of my hardware is named after Futurama characters. Hermes Conrad, a 320GB storage server, was recently replaced by Dwight Conrad, a new 1TB unit. My Palm is named Cubert Farnsworth, and my main system is Philip J. Fry (with the boot volume named Bender). My Mighty Mouse's Bluetooth profile is named Nibbler. My old flash drive is named Morbo, the new one Calculon... my Wii is named Lrrr... honestly, it's gone a bit far. I'm going to have to recycle characters within a few years.
Earth

RITI Printer Uses Your Coffee Grounds For Eco Ink 184

Jason S. writes to tell us that for those seeking to "go green," or those just wishing to try something different, the RITI printer design uses coffee grounds instead of ink. In addition to recycling your grounds, the printer also uses good old-fashioned elbow grease to move the grounds cartridge back and forth, saving power. Sounds like a novelty that will die quickly as human sloth reasserts itself. "Hosted by Core77 and Inhabitat, this year's Greener Gadgets Design Competition resulted in an incredible crop of innovative consumer electronics designs, and we're excited to offer you the first scoop on some of our favorite designs! Jeon Hwan Ju's RITI printer works by replacing environmentally un-friendly inkjet cartridges with the dregs from your daily coffee. Simply place used grounds in the ink case, insert a piece of paper, and move the ink case left and right to print text."

Comment Re:Very nice of them. (Score 1) 307

While I agree with most of your points, a bunch of extra graphics cards won't really be helpful for ray tracing because of the amount of recursion required. It can be done iteratively with some modifications so that it would actually run on GPU hardware (see the sketch below), but the overhead of doing this is greater than the performance boost the parallelism grants the user.
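By "iteratively" I mean something like the following toy Python sketch (with a hypothetical scene API; real GPU code flattens the recursion the same way, just without a growable stack):

def trace_iterative(primary_ray, scene, max_depth=8):
    # The call stack becomes an explicit work list of (ray, weight, depth);
    # GPUs cope badly with recursion, so this is the usual transformation.
    work = [(primary_ray, 1.0, 0)]
    color = 0.0
    while work:
        ray, weight, depth = work.pop()
        hit = scene.intersect(ray)            # hypothetical API
        if hit is None or depth >= max_depth:
            continue
        color += weight * hit.local_shade()   # direct lighting at the hit
        if hit.reflective:                    # queue secondary rays instead
            work.append((hit.reflect(ray), weight * hit.reflectance, depth + 1))
        if hit.refractive:                    # of making recursive calls
            work.append((hit.refract(ray), weight * hit.transmittance, depth + 1))
    return color

Managing that work list, and keeping thousands of GPU threads coherent while each one's list grows and shrinks differently, is exactly the overhead I'm talking about.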

Besides, ray tracing pretty much sucks compared to modern rasterization techniques until you add in radiosity, caustics, distribution, and other extensions. And if all that's added in, I don't care if you have 12 cores, it will not run in real time.

Microsoft

First Look At Windows 7 Beta 1 898

The other A. N. Other writes "It seems that Microsoft couldn't keep the lid on Windows 7 beta 1 until the new year. By now, several news outlets have their hands on the beta 1 code and have posted screenshots and information about this build. ZDNet's Hardware 2.0 column says: 'This beta is of excellent quality. This is the kind of code that you could roll out and live with. Even the pre-betas were solid, but finally this beta feels like it's "done." This beta exceeds the quality of any other Microsoft OS beta that I've handled.' ITWire points out that this copy has landed on various torrent sites, and while it appears to be genuine, there are no guarantees. Neowin has a post confirming that it's the real thing, and saying Microsoft will be announcing the build's official availability at CES in January."
Novell

Novell Cancels BrainShare Conference 102

A.B. VerHausen writes "While OSCON and SCALE organizers ramp up plans for their events, Novell shuts down BrainShare after 20 years, citing travel costs and budget tightening as main concerns. 'Instead of the traditional in-person conference, Novell plans to offer online classes and virtual conferences to make education and training available to more people at a lower per-head cost to companies,' says the news story on OStatic.com."
