
Comment Really?!?! (Score 1) 503

This BS *again*?!?!

GPU shaders != running code on the CPU.

WebGL allowing shader usage is pretty much a non-issue security-wise. GLSL shaders are *extremely* limited in scope. They can't access anything besides model data and textures, and even then only the model data and textures provided to them by the host program. GLSL is very domain-specific and doesn't support pointers or any other way to access things outside the purview of the GPU.
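To give a sense of how narrow that scope is, here's a minimal sketch of a WebGL 1.0 fragment shader, written out as the kind of string a page hands to WebGL (the names are made up for illustration):

    // Minimal GLSL ES 1.00 fragment shader -- illustrative only.
    // Its entire world is uniforms, varyings, and bound texture units.
    const fragmentShaderSource = `
      precision mediump float;
      uniform sampler2D u_texture;   // a texture the host app bound for us
      varying vec2 v_texCoord;       // interpolated from the vertex shader
      void main() {
        // No pointers, no file or memory access -- just sample the
        // texture the application provided and write out a color.
        gl_FragColor = texture2D(u_texture, v_texCoord);
      }
    `;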

Furthermore, they aren't pre-compiled (aside from some vendor-specific methods on *OpenGL ES*, and even those only compile to bytecode IIRC), so WebGL can at least attempt to do some shader validation. OpenGL and WebGL programs literally hand the GLSL source code to the driver, which is then responsible for compiling it. This actually turns out to be good for performance, since future compiler improvements in the driver can result in the same shader on the same hardware running faster. It also means WebGL could do validation on the shaders before handing them off to the driver, to keep an eye out for any obvious attempts to do something bad.
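As a concrete sketch of that hand-off, these are the standard WebGL calls a page makes to get a shader compiled -- the driver only ever sees GLSL source text, and the compile-status check at the end is exactly where a browser could bolt on its own validation first:

    // Sketch: compiling a shader in WebGL. The driver receives raw GLSL
    // source, never a binary blob. (gl is a WebGLRenderingContext.)
    function compileShader(gl: WebGLRenderingContext,
                           type: number, source: string): WebGLShader {
      const shader = gl.createShader(type)!;  // gl.VERTEX_SHADER or gl.FRAGMENT_SHADER
      gl.shaderSource(shader, source);        // hand the GLSL source text to the driver
      gl.compileShader(shader);               // driver compiles it for the actual GPU
      if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
        const log = gl.getShaderInfoLog(shader);
        gl.deleteShader(shader);
        throw new Error("Shader failed to compile: " + log);
      }
      return shader;
    }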

And when it comes to malicious shaders, only two attacks can be executed: try to crash the GPU by running a very intensive shader, or try to peek at other web pages via what seems to be an implementation flaw in WebGL/HTML5 Canvas.

The first attack can be easily mitigated. In fact, it *shouldn't* be possible at all on Windows, which (since Vista) is supposed to detect a hung GPU and reset it; when it can't, that's a *Windows* bug.
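For illustration, the "very intensive shader" attack is nothing more exotic than a shader that takes far too long per pixel. A hypothetical example, which is exactly the sort of thing a GPU watchdog timeout (or up-front validation in the browser) is supposed to catch:

    // Illustrative only: a fragment shader that burns GPU time on every pixel.
    // A GPU watchdog should kill work like this rather than let the machine hang.
    const hostileFragmentShader = `
      precision mediump float;
      void main() {
        float acc = 0.0;
        for (int i = 0; i < 100000; i++) {   // enormous per-pixel loop
          acc += sin(float(i)) * cos(float(i));
        }
        gl_FragColor = vec4(vec3(fract(abs(acc))), 1.0);
      }
    `;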

The second is a little harder but, again, looks to be an *implementation* flaw, not a fundamental flaw in WebGL or shaders or anything like that.

Face facts: modern GPUs don't offer any of the old fixed-function pipeline anymore. It's nowhere to be found in the silicon; modern GPU drivers merely emulate it for old OpenGL programs. This means that if WebGL didn't have shader support, it would be completely useless.

Comment Re:Not an accident (Score 2) 309

Raiders of the Lost Ark was good because of people *NOT* named George Lucas. Lucas came up with some good ideas, but also some really bad ones that Spielberg and others shot down.

Basically, once Lucas had the initial idea, it was turned into a good movie by Steven Spielberg, Lawrence Kasdan, and Harrison Ford.

Comment Re:All I can say is (Score 1) 309

I think you're missing the point, which is that Lawrence Kasdan (the writer) and Irvin Kershner (the director) tried to lift ESB above being a one-dimensional movie, and succeeded for the most part. Real character development, good dialog, good acting & directing, etc.

Of course, if you're the kind of person who just wants to "turn their brain off" when they watch a movie, well...

Comment Article is wrong (Score 1) 453

The built-in alarm clock app in iOS works nothing like the article describes.

The use of a "+" button to mean "add [something]" is used throughout iOS. You don't use the "+" button to adjust an existing alarm, BTW. The alarm clock app initially has no alarms set, so you use the "+" button to add one (which then automatically takes you to a screen where you can set the alarm). If you want to change an alarm, you press the clearly labeled "Edit" button.

Comment Re:Still Much Ado About Nothing (Score 1) 120

Also, Windows (the most likely target of any attack) has had the ability since Vista to restart the GPU if it hangs, which addresses the only real attack possible when it comes to shaders: a shader so computationally intensive that the GPU becomes unresponsive. This isn't bulletproof, of course, but if Windows isn't able to restart the GPU after a few seconds of unresponsiveness, then that's a *Windows* bug.
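As a side note, when the OS does reset a hung GPU, a WebGL page doesn't just silently keep going -- it gets a context-lost event and has to rebuild its resources. A minimal sketch using the standard WebGL events (the recovery step here is just a placeholder):

    // Sketch: how a page reacts when the GPU/driver gets reset underneath it.
    const canvas = document.querySelector("canvas")!;
    canvas.addEventListener("webglcontextlost", (event) => {
      event.preventDefault();   // tell the browser we intend to handle restoration
      console.warn("WebGL context lost (possibly a GPU/driver reset)");
    });
    canvas.addEventListener("webglcontextrestored", () => {
      // All shaders, buffers, and textures are gone -- recreate them here.
      console.info("WebGL context restored; reinitializing resources");
    });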

Comment Still Much Ado About Nothing (Score 2) 120

As with the previous article, this is much ado about nothing.

The GPU can only run "arbitrary code" in the loosest possible sense. What happens is that an OpenGL or WebGL application gives the shader source code to the driver, which then compiles it into native GPU instructions. You *can* pre-compile your shaders in OpenGL ES 2.0, but even then it's just intermediate bytecode, and the bytecode is vendor-specific.

Furthermore, GLSL, the language used for OpenGL and WebGL shaders, is *very* domain-specific. It has no pointers, and no language support for accessing anything outside the GPU other than model geometry and texture data. *AND* it can only access the model geometry and texture data that the application has provided to it, and for GPUs that don't have any on-board VRAM it's up to the *driver* to determine where in shared system memory the texture will be located.

And you can't get around using shaders on modern GPUs. Modern GPUs don't have a fixed-function pipeline; it's not in the silicon at all. For apps that try to use the old OpenGL fixed-function pipeline, the driver generates shaders that do what the fixed-function pipeline *would* have done based on the current state. Drivers won't keep emulating the old fixed-function pipeline forever, though.
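To make that concrete, here's a rough, purely illustrative sketch (not any vendor's actual generated code) of the kind of vertex shader a driver might synthesize to stand in for the old fixed-function transform-and-color path:

    // Hypothetical driver-generated stand-in for the fixed-function vertex path:
    // transform by the app's current matrix state and pass the vertex color through.
    const fixedFunctionEquivalentVS = `
      uniform mat4 u_modelViewProjection;  // built from the app's matrix stack
      attribute vec4 a_position;
      attribute vec4 a_color;
      varying vec4 v_color;
      void main() {
        gl_Position = u_modelViewProjection * a_position;
        v_color = a_color;   // what glColor() "would" have fed in
      }
    `;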

Comment Re:Much ado about nothing (Score 0) 178

Yes, something similar was mentioned in the article, and it *should* be fixed. But beyond that, shaders themselves don't expose anything particularly dangerous. GLSL, the language WebGL and OpenGL shaders are written in, doesn't have language features to access anything beyond the GPU. You can't access the user's hard disk from within a shader.

You can't get rid of shaders. Modern GPUs don't have a fixed-function pipeline; it's totally gone from the silicon. Instead, for apps that try to use the old fixed-function pipeline, the driver generates a shader that does what the fixed-function pipeline *would* have done given the current state. Sooner or later the drivers are going to stop even emulating it.

Which is part of the reason WebGL has shader support in the first place: it wouldn't do anyone any good if it were obsolete right out of the gate.

Shaders aren't the problem, crappy web browsers are.

Comment Re:Much ado about nothing (Score 1) 178

Shaders themselves are pretty limited in scope, though. You can't really access anything beyond the GPU, textures, and model geometry.

GLSL (the language WebGL and OpenGL shaders are written in) doesn't have pointers and is most definitely NOT a general purpose language.

Even without shader support in WebGL you'd have the potential for intentionally bad model geometry crashing a really poorly written driver.

Comment Much ado about nothing (Score 2) 178

For the most part this is a lot of security handwaving.

While the GPU itself can do DMA and whatnot, shaders don't have access to any of that. If a shader can access texture memory that hasn't been assigned to it *in certain browsers*, then it sounds like a bug in the browser or the browser's WebGL implementation. Being able to "overload" the GPU and blue-screen the computer sounds like Yet Another Windows Bug.

A shader isn't just some arbitrary binary blob that gets executed directly by the GPU. Even native programs can't do that. You provide the driver your shader source code, and the driver does the rest. It's intentionally a black-box process so that the driver can optimize the shader for the GPU without forcing a specific instruction set or architecture onto GPU designers, thus allowing the underlying GPU design to evolve, possibly radically or in unforeseen ways, without breaking compatibility.

Furthermore, a shader can only access memory via the texture sampler units, which must be set up by the application. If the WebGL application (which is just JavaScript) can set things up to access texture memory it isn't supposed to be able to, the problem is with the WebGL and/or HTML5 implementation, not the concept of WebGL or the GPU driver.
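To illustrate that last point, the only texture data a shader can ever sample is whatever the application explicitly uploaded and bound to a texture unit. A minimal WebGL sketch (standard API calls; the uniform name is made up):

    // Sketch: the application decides exactly what the shader's sampler can read.
    // Without these calls, the sampler has nothing bound to it.
    function bindUserTexture(gl: WebGLRenderingContext,
                             program: WebGLProgram, image: HTMLImageElement) {
      const texture = gl.createTexture();
      gl.activeTexture(gl.TEXTURE0);            // select texture unit 0
      gl.bindTexture(gl.TEXTURE_2D, texture);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA,  // upload *this* image, nothing else
                    gl.RGBA, gl.UNSIGNED_BYTE, image);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
      gl.useProgram(program);
      // Point the shader's sampler uniform (here assumed to be "u_texture") at unit 0.
      gl.uniform1i(gl.getUniformLocation(program, "u_texture"), 0);
    }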

Comment Re:Arrogant Ignorance? (Score 1) 2288

I find it interesting that people who grew up using the metric system always try to use the yard as a comparable imperial unit, yet people who grew up using the imperial system don't use the yard to measure things that often. Few people in the US know how many yards are in a mile because nobody cares; it's like someone in a metric-using country measuring things in decameters.

The imperial system, for all its warts, has two advantages that metric does not:

1) Human-sized measurements. The foot is about the length of an average adult's foot. A yard is about the length of the average person's arm, and about the length of the average person's stride. Etc., etc., etc.

2) Evenly divisible units. A foot will evenly divide into halves, thirds, and fourths (12 inches splits cleanly into 6, 4, or 3). A gallon will evenly divide into halves and fourths, ditto for the quart and cup. These are pretty common things to do, especially for basic home carpentry and cooking & baking, so it's nice that they divide up evenly.

Granted, converting between imperial units isn't always easy, but the basics aren't that hard. Nobody actually uses rods & hogsheads to measure things. Most gripes about imperial units being too byzantine seem to come from people who grew up using the metric system, so metric is naturally what's more familiar and comfortable to them.


Submission + - How to really bury a mainframe (networkworld.com)

coondoggie writes: "Some users have gone to great lengths to dispose of their mainframe, but few have gone this far. On November 21, 2007, the University of Manitoba said goodbye to its beloved 47-year-old IBM 650 mainframe Betelgeuse by holding a New Orleans-style jazz funeral. In case you were wondering what an IBM 650's specifications were, according to this Columbia University site, the 650's CPU was 5ft by 3ft by 6ft, weighed 1,966 lbs, and rented for $3,200 per month. The power unit was also 5ft by 3ft by 6ft and weighed 2,972 lbs. The card reader/punch weighed 1,295 lbs and rented for $550/month. The memory was a rotating magnetic drum with a 2,000-word (10 digits and sign) capacity and a random access time of 2.496 ms. For an additional $1,500/month you could add magnetic core memory of 60 words with an access time of 0.096 ms. Big Blue sold some 2,000 of the mainframes, making it one of the first successfully mass-produced computers. http://www.networkworld.com/community/node/23123"

Feed Did A Prank Domain Registration Open Up The Administration's Email Secrets? (techdirt.com)

Not being a political blog, we're not so interested in the latest political scandal brewing around the administration, which supposedly involves a leaked email that highlights some more potentially illegal actions out of our Justice Department. However, what is interesting is what David Cassel pointed us to about how those emails were leaked. The details aren't entirely clear from this interview with the reporter who has the emails, but it looks as though some web pranksters who registered domains like whitehouse.org and georgewbush.org have been collecting emails that are sent to those domains. Occasionally, people who think they're emailing someone at whitehouse.gov or georgewbush.com address their emails to the .orgs instead. The latest example involves an email from Tim Griffin on the subject of "caging" (which apparently is the practice of kicking voters off the voting rolls through questionable and illegal means). Unfortunately for Griffin, it appears he sent a somewhat incriminating email to a bunch of people -- including one email address at georgewbush.org. Oops. When the reporter later asked administration officials about the emails, the administration didn't deny the content -- it just took issue with the interpretation, effectively admitting that the emails were real. Again, we have no clue about the political issues involved -- but we find it amusing that the email leaks were due to someone typing in the wrong top-level domain and having the emails go to some pranksters. The reporter in question claims to have about 500 such emails.
