
Comment Re:Maybe (Score 5, Informative) 225

I've done some work with both PhysX and the things that AMD is pushing. I try to stick with the Physics Abstraction Layer, which lets me plug in whatever physics engine I want as the backend, which gives a pretty damn good apples-to-apples performance metric. Personally, my ultimate choice of physics engine is whichever exhibits the best performance. My experience may differ from others', but I generally get the best performance from PhysX with an nVidia GPU and from BulletPhysics with an AMD GPU. Sometimes the software version of PhysX outstrips the competition, but I have never seen anything beat PhysX with GPU acceleration turned on. And with PAL, it is easy to check whether the machine has GPU support and swap in the physics engine with the best performance (PAL is awesome).
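The backend-swapping idea is easy to sketch. To be clear, this is not PAL's actual API (PAL uses a factory that loads engine plugins); the class and function names below are invented purely to illustrate the pattern of coding against one interface and picking the fastest available backend at startup:

```cpp
#include <cassert>
#include <memory>
#include <string>

// Hypothetical sketch of a PAL-style abstraction layer. The game codes
// against PhysicsEngine only; the concrete backend is chosen at runtime.
struct PhysicsEngine {
    virtual ~PhysicsEngine() = default;
    virtual std::string name() const = 0;
};

struct PhysXEngine : PhysicsEngine {
    std::string name() const override { return "PhysX"; }
};

struct BulletEngine : PhysicsEngine {
    std::string name() const override { return "Bullet"; }
};

// Pick whichever backend benchmarks fastest on the detected hardware.
std::unique_ptr<PhysicsEngine> selectEngine(bool hasNvidiaGpu) {
    if (hasNvidiaGpu)
        return std::make_unique<PhysXEngine>();  // GPU PhysX wins on nVidia
    return std::make_unique<BulletEngine>();     // Bullet tends to win otherwise
}
```

The point of the layer is exactly the apples-to-apples comparison described above: the rest of the code never changes when the backend does.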

Here's the thing: GPU-accelerated physics are just plain faster. Why? Because collision detection is a highly parallelizable problem. Guess what hardware we have that can help? The GPU. Another great part of using the GPU is that it frees the CPU to do more random crap (like AI or parsing the horribly slow scripting language).

AMD is working on GPU acceleration for both BulletPhysics and Havok. But I have a feeling that PhysX performance will remain faster for a while: PhysX was designed to run natively on the GPU (technically, a GPU-like device), while these other libraries were not. Furthermore, nVidia has quite a head start in performance tuning, optimization and simple experience. In five years that shouldn't matter, but I'm just saying it will take a while.

So here is my message to AMD: if you want people to use your stuff, make something that works and let me test it in my applications. You've released a demo of Havok with GPU acceleration. PhysX has worked, and continues to work, with GPU acceleration on nVidia GPUs, and it frequently outperforms the software implementation. I'm all for open alternatives, but in this case the open alternatives aren't good enough.

Comment Re:i'm a little clueless here (Score 3, Interesting) 224

One idea would be to use the many available cloud services like EC2, Google App Engine and Azure. The IP blocks those services come from are going to remain fairly regular, but they are so common that it might not be acceptable for a site to block everything from those ranges (and whatever EC2 and Azure live on). It is still blockable, though, so it probably would have been better for them (from a technical standpoint) if they hadn't announced their existence and these sites had been slowly indexed by their service before anybody knew what was happening.
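Blocking a whole provider really is just one prefix test per published range, which is why this countermeasure is so cheap for a site admin. A minimal sketch (the addresses and the /16 range are made up for illustration):

```cpp
#include <cassert>
#include <cstdint>

// Does addr fall inside the CIDR range network/prefixLen?
bool inRange(uint32_t addr, uint32_t network, int prefixLen) {
    // Build the netmask; a /0 prefix matches everything.
    uint32_t mask = prefixLen == 0 ? 0 : ~uint32_t(0) << (32 - prefixLen);
    return (addr & mask) == (network & mask);
}

// Pack dotted-quad octets into a 32-bit address.
constexpr uint32_t ip(int a, int b, int c, int d) {
    return (uint32_t(a) << 24) | (uint32_t(b) << 16) |
           (uint32_t(c) << 8) | uint32_t(d);
}
```

A firewall rule list is just a loop over calls like these, one per published cloud-provider range.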

Another (better) idea would be to use a service like Tor. Sure, their latency is going to skyrocket, but that's not a big deal since interactivity isn't a primary concern of an indexing service. It's still blockable, if infringing site admins block Tor nodes. This may or may not be doable, as I would imagine many users of said infringing sites use anonymizing networks for their normal traffic.

Sure, either of the solutions I've come up with in five minutes can be circumvented, but the idea isn't to totally eliminate piracy; it's to make piracy inconvenient enough that getting the legitimate version is easier.

Comment Re:Advantages? (Score 1) 255

These guys make a decent point: 10GUI. The keyboard is a pretty nice thing -- we can express quite a wide range of things to the computer. The mouse, however, sucks. An entire hand, and we're limited to a position on the screen and binary "clicks." I use an 11-button mouse to help with this, but most applications are not built to support such interaction, so I'm limited to mapping the buttons to global commands (back center is reveal, left swipe is change desktop), save for some special cases, which is pretty lame. The overall theory is that once you establish a better system of interaction with the computer, your everyday tasks become easier. Ultimately, it would be best if we just wired our brains into the computer, but until that is practical, we'll have to work with what we have.

Comment Re:New to open GL (Score 1) 46

Don't learn OpenGL; learn graphics and software engineering first. Assuming you want to learn OpenGL for games, I would recommend David Eberly's 3D Game Engine Design. It is extremely comprehensive and presents an incredibly well-designed engine, WildMagic (which has inspired many other engines, like jMonkey), for which you are given the full source on CD. If you're not looking to do games, then you probably don't need to know the latest OpenGL features, because scientific visualization usually doesn't require them. And if you DO need the latest features from OpenGL, you're probably not actually doing graphics and you probably shouldn't be using OpenGL, but CUDA or some other platform (CUDA = awesome).

In any case, you need to know that OpenGL is just a specification, so for window creation and input you rely on cross-platform libraries like GLUT, GLFW or SDL. I would personally recommend SDL, since it is awesome. GLFW is nice and easier to use than SDL, but it's harder to tweak the small things for performance. GLUT development died many years ago, so don't use it.


Submission + - Google not losing $1.65M/day on YouTube after all

secmartin writes: "A report by Credit Suisse released earlier this year claimed that Google was losing up to $1.65M per day on YouTube. This was widely considered to be a huge overestimate; now a new report by research firm RampRate provides a better estimate that takes into account that 73% of Google's traffic flows via peering agreements, leading to a more realistic figure of $477k/day.

What both analysts appear to be missing is the fact that Google is working hard to create a completely transit-free IPv6 network; as Google puts it in their IPv6 FAQ:

To qualify for Google over IPv6, your network must have good IPv6 connectivity to Google. Multiple direct interconnections are preferred, but a direct peering with multiple backup routes through transit or multiple reliable transit connections may be acceptable.

What do you think? Do these new figures sound more realistic, and would it be a good or a bad thing if Google didn't have to pay for their internet bandwidth at all?"

Comment Document Locator from ColumbiaSoft (Score 1) 438

The company I work for uses a system called Document Locator. It is a Windows-shell-integrated document management system: basically, it's as if you took Subversion and gave yourself extremely fine-grained control of repositories, folders and the like. It scales decently, too -- we have millions of documents spread across 25 major repositories, many of which include AutoCAD, Bentley MicroStation, SmartPlant 3D and other sizable files. The system is also fairly extensible: we've built quite a few internal applications on top of the DL system, and there are plenty of third-party plug-ins available (a notable one being Brava, an application that allows adding QC and other markup to repository files). And if you don't want to be constrained to Windows, there is a web client available, which works decently. While it is not without its problems, the overall experience has been pretty good.

Full disclosure: My company is ColumbiaSoft's largest customer and, as such, we know a good deal of the development team.

Comment Re:Just read through the PDF (Score 5, Insightful) 88

Karma be damned, but the use of Windows in a secure system is nowhere near as bad as not sanitizing your inputs on any system. No platform can just make up for bad practice. FreeBSD will happily allow someone to guess "PASSWORD" as the login password (from TFA: "Software configuration involves setting up a software system for one's particular uses, such as changing a factory-set default password of "PASSWORD" to one less easily guessed."). Whether you're using Oracle DB, MS SQL or MySQL, if you store passwords as plaintext instead of hashes and leave secure data in plaintext, you will run into problems (TFA: "...hackers had the ability to obtain more than 40,000 FAA user IDs, passwords, and other information used to control a portion of the FAA mission-support network."). Microsoft may not patch in a timely manner, but it doesn't matter what platform you're running if you don't apply the patches (TFA: "...with known vulnerabilities was not corrected in a timely manner by installing readily available security software patches released to the public by software vendors."). PHP, JSP, ASP, ASP.NET, Ruby, Perl or whatever: if you program poorly, you're going to have problems.
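The plaintext-vs-hash point is worth a tiny sketch. Note the hedge: std::hash is NOT a cryptographic hash (a real system would use bcrypt or scrypt with a per-user salt), but the structure is the same on any platform -- store only the salted digest, never the password:

```cpp
#include <cassert>
#include <functional>
#include <string>

// Illustrative only: std::hash is not cryptographically secure.
// The point is the shape of the design, not the hash function.
std::size_t digest(const std::string& password, const std::string& salt) {
    return std::hash<std::string>{}(salt + password);
}

// At login, re-derive the digest from the attempt and compare; the
// stored credential never contains the password itself.
bool checkLogin(std::size_t storedDigest, const std::string& salt,
                const std::string& attempt) {
    return digest(attempt, salt) == storedDigest;
}
```

If an attacker dumps the credential table, they get digests and salts, not 40,000 usable passwords.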

PC Games (Games)

Submission + - Duke Nukem ForNever?

Burdell writes: GameSpy is among the sites reporting that 3D Realms is shutting its doors. Apparently, pre-orders of Duke Nukem Forever were not enough to pay the bills.

Comment Re:Because when I think graphics, I think intel (Score 2, Interesting) 288

Larrabee is expected to at least be competitive with nVidia/AMD's stuff, although it might not be until the second generation product before they're on equal footing.

Competitiveness is not a quality of generation number. Still: what statistics have you seen that compare Larrabee with something people use right now (ATI/nVidia)? There is this presentation (PDF) they gave at SIGGRAPH, which shows that performance increases as you add more Larrabee cores. Here's a graph which may mean something. The y-axis is "scaled performance." What might that mean?

Graphs show how many 1 GHz Larrabee cores are required to maintain 60 FPS at 1600x1200 resolution in several popular games. Roughly 25 cores are required for Gears of War with no antialiasing, 25 cores for F.E.A.R with 4x antialiasing, and 10 cores for Half-Life 2: Episode 2 with 4x antialiasing.

Sounds neat. I guess that's why they're going to promote the 32-core Larrabee. How much will something running these cores cost, and how much power will it consume? They're still developing this thing, so why do I keep hearing that it will BLOW MY MIND? I have no doubt that Intel has an army of capable engineers who could build something that renders graphics beautifully, but if it costs more than the consumer can possibly pay, there's no real point. Intel is gunning for 2 TFLOPS. I'm pretty sure the Radeon HD 4870 passes that mark already (and you can purchase it for less than $500). Sure, it's a cool technology, but I'd like to see some more facts and figures.

What have I heard? Power usage/heat: a 300W TDP. That's pretty horrific. Cost: a 12-layer PCB. That's twice as many layers as a typical graphics card and four more than the high-end Radeon and nForce cards. That doesn't directly translate into cost, but generally more complicated means more expensive.

But back to the PS4 -- Sony's real mistake with the PS3 was expecting the Cell processor to be the most incredible computing device ever. Original plans for the PS3 included two Cell processors, but they switched to the RSX when they realized the Cell wasn't capable of rendering graphics the way they wanted (whereas the Xbox 360's architecture was designed with the GPU and CPU co-existing from the start). You can't build a bunch of fast parts and stick them together; you have to build a fast system. Perhaps Sony has learned their lesson.

Comment Re:Perfection Has a Price (Score 1) 726

Not to throw down, but do we even have a definition of "perfect" software? I could quite easily argue not only that the definition doesn't exist, but that it is impossible to create a definition consistent for all potential users. The field of MP3 players is a good example: the iPod dominates the market for some strange reason, but many people view other MP3 players as "worse." More buttons would break the iPod's "perfect simplicity," but its inability to arbitrarily build and shuffle playlists is quite obnoxious.

Internet browsers are another interesting domain. My company uses a web-based MIS designed with Internet Explorer 6 in mind. One could blame the system's poor design for its reliance on proprietary ActiveX controls, but we're stuck with it. That said, IE6 is the "best" browser we can offer our employees, because Firefox, Opera and Chrome simply will not function with it. We're not even touching "perfect" yet.

But I haven't really addressed the question of the cost of developing "perfect" software, so let's make some assumptions. Let's view "perfect" from a security standpoint (accessibility, confidentiality and integrity) and use the Common Criteria as the metric of software goodness.

The first EALs of the Common Criteria are pretty easy to come by, since the basic requirement is that the software works and doesn't crash. The next levels require discretionary (optional) protection, followed by mandatory object protection. That's nothing too interesting, as software of any importance should be doing this stuff anyway. Top-tier perfect software development involves formal verification of the software -- as in, developing a formal description and proving that the software will always work. More work is involved in proving that your software actually matches the mathematical description of it, which costs money. If you've ever tried to formally verify software, you will quickly realize that it takes AT LEAST as much time to verify the software as it did to develop it in the first place. There are shortcuts such as automated theorem proving, but even the propositional case is NP-complete, and you're going to need to pay (at least) a person who understands it.
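To make "developing a description and proving that it will always work" concrete, here is the flavor of the artifact formal verification produces, sketched in Lean 4 with a toy function and property (both invented for illustration): the specification is a theorem, and the proof is machine-checked.

```lean
-- A toy program and a toy spec: clamping to [0, 255] never returns
-- a value above 255, for every possible input. Real verification
-- efforts state and prove properties like this for every critical
-- behavior of the system.
def clamp (v : Int) : Int :=
  if v < 0 then 0 else if v > 255 then 255 else v

theorem clamp_le (v : Int) : clamp v ≤ 255 := by
  unfold clamp
  split <;> try split
  all_goals omega
```

Even this three-line proof took deliberate effort to state; scaling that to a whole codebase is where the cost estimate above comes from.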

So what if we don't want to formally verify our software, but just check it until it's "good enough"? You could hire some hackers to independently test your software, but that costs money. You could check it internally, but that takes time (read: money).

What we really need is a testing facility that integrates well into development and speeds it up, which is pretty much what unit testing is for. I can say a lot of good things about unit testing, but unit tests are certainly far from perfect. They take time to develop, but if they help catch glitches, then you're actually speeding development along. Honestly, I couldn't tell you if the net result is time savings, but I can tell you that you can only catch the errors that you are looking for.
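The "you can only catch the errors you are looking for" point is easy to demonstrate. The function and suite below are invented for illustration: every assertion passes, but nothing in the suite exercises the extremes, so a bug hiding near INT_MIN or INT_MAX would sail straight through.

```cpp
#include <cassert>
#include <climits>

// Clamp an int into the byte range [0, 255].
int clampToByte(int v) {
    if (v < 0) return 0;
    if (v > 255) return 255;
    return v;
}

// A typical unit-test suite: it covers the cases the author thought of.
void testClampToByte() {
    assert(clampToByte(-5)  == 0);    // below range
    assert(clampToByte(300) == 255);  // above range
    assert(clampToByte(42)  == 42);   // in range
    // Note what is missing: no test at INT_MIN or INT_MAX. If the
    // implementation did arithmetic that overflowed there, this suite
    // would still be green.
}
```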

In conclusion: I disagree with your statement that developing "zero defect" software costs the same as developing and shipping software with defects. Formal verification is nice, but completely unreasonable to ask for. Independent testing will always cost more money, and internal unit testing lacks the independent thought that really finds errors. I would love for software to be perfect, but it is simply too much to ask of developers today.

Comment Re:Why use PS3s? (Score 5, Informative) 211

since CUDA is roughly C

Not quite. CUDA looks a lot like C in that it has C-family syntax, but the biggest limitation is that there is no application stack, which means no recursion. CUDA also lacks the idea of a pointer, although you can work around this by doing number-to-address translation (as in, the number 78 means look up tex2D(tex, 0.7, 0.8)). The GPU also has other shortcomings, in that most architectures like to have all their shaders running the same instruction at the same time. For this code:

if (pixel.r < pixel.g){
    //do stuff A
} else if (pixel.g < pixel.b){
    //do stuff B
} else {
    //do stuff C
}

The GPU will slow down a ton if the pixel color causes different pixels to branch in different directions. Basically, the three sets of shaders following the three branches of that code will each be inactive 2/3 of the time.
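One common workaround for that divergence is to replace the branch with an arithmetic select, so every thread executes the identical instruction stream. A scalar C++ stand-in for shader code (the shading formulas are made up for illustration):

```cpp
#include <cassert>

// Branchy version: on SIMT hardware, threads taking different paths
// serialize, idling the threads on the other path.
float shadeBranchy(float r, float g) {
    if (r < g) return r * 2.0f;  // "stuff A"
    return g * 0.5f;             // "stuff B"
}

// Branchless version: compute both results, then select arithmetically.
// GPUs turn this into predication, so there is no divergence at all.
float shadeBranchless(float r, float g) {
    float a = r * 2.0f;                   // both paths always run...
    float b = g * 0.5f;
    float takeA = (r < g) ? 1.0f : 0.0f;  // ...and one result is selected
    return takeA * a + (1.0f - takeA) * b;
}
```

The trade-off is doing the work of both branches; that's still a win when the branches are short, as they usually are in shader code.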

In the Cell, you really do just program in C with a number of extensions added on, like the SPE SIMD intrinsics and the DMA transfer commands (check it out). The Cell really is 9 (10 logical) processors all working together in a single chip (except in the PS3, where only 7 SPEs are enabled). Furthermore, your 8 SPEs can be running completely different programs -- they're just little processors. Granted, you have to be smart when you program them to deal with race conditions and all the other crap that comes with multithreaded programming. The Cell takes about 14 times longer to calculate a double-precision floating-point result than a single-precision one (and there aren't SPE commands to do four at once like you can with singles).

So which is more powerful? It really depends on what you're doing. If your task is ridiculously parallelizable and doesn't require recursion, pointers or divergent branches, the GPU is most likely your best bet. If your program needs any of those things, use a Cell.

PlayStation (Games)

Submission + - Accelerated X drivers coming for PS3 Linux

t0qer writes: Over at the PS2dev forums, a hacker named Ironpeter has successfully managed to bypass the PS3's Hypervisor to gain direct access to its Nvidia RSX GPU.

This is a first step and far from a complete working driver, but it seems that as word of this spreads, more people are helping with the effort to hunt down the Hypervisor's FIFO/push buffer. It won't be long before we're playing Tux Racer on the PS3 in its full OpenGL glory.


Submission + - Scanning railroad tracks in 3D

An anonymous reader writes: Control Engineering has an article about how railroad companies are using high-speed 3D scanning at 30 mph to detect defects in railroad track. Up until now, amazingly, people have been walking the track to judge its condition.

Submission + - How Gateway tech support tortured an editor

An anonymous reader writes: A tech editor tells the story of his trip to the Gateway of Hell -- that is, dealing with Gateway's tech support for the past year over a home desktop computer. He relates one anecdote: "If you call us up because you forgot your password, we'd have you do a system restore," the support person told me. I wasn't sure if he was joking. He said the company's "Our Answers by Gateway" people would figure out a less extreme way to solve a problem, but you need to pay extra for their advice.
