
Comment This Discussion Proves It (Score 2) 311

The fact that, years later, _WE_ are still arguing about this proves that the case has merit.

If WE can't come to a consensus about this... then how is Joe Schmoe supposed to figure it out?

The fact is: this was _misleading_ advertising. They could have easily come up with another name for it (like Intel did with Hyper-Threading)... instead they consciously chose to call the extra ALUs _cores_... a word that does have a meaning to the typical consumer. They did this, on purpose, to muddy the waters... and it worked.

Should people be more careful about what they buy? Sure. But that doesn't absolve AMD of putting out misleading advertising.

Comment Re:Massive Scientific Visualization (Score 1) 111

Like I mentioned... the actual drawing is NOT the bottleneck (but every little bit helps).

Those images you see on the screens are backed by TB of data that has to be read in and distilled down before being renderable. That's what the thousands of cores are doing.

Also: those rotatable ones you see in the beginning are small. If you skip forward to the 2:30 mark you can see some of the larger stuff (note that we're not interactively rotating it). That movie at 2:30 took 24 hours to render on about 1,000 cores.

Again: The bottleneck was NOT rendering time. It was the time to read TB from disk and crunch the data down to the point where you had a renderable image.
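To make the read-and-reduce bottleneck concrete, here's a minimal sketch of what "crunching the data down" might look like (this is an illustration, not our actual pipeline; the function names and reduction factor are made up):

```python
import numpy as np

def reduce_chunk(chunk, factor=8):
    """Downsample a 1-D field by averaging blocks of `factor` samples."""
    usable = (len(chunk) // factor) * factor
    return chunk[:usable].reshape(-1, factor).mean(axis=1)

def distill(chunks, factor=8):
    """Crunch a stream of raw chunks (e.g. read from disk) down to
    something small enough to hand off to the renderer."""
    return np.concatenate([reduce_chunk(c, factor) for c in chunks])
```

In the real setup each of the thousands of cores would run something like this on its own slice of the terabytes on disk; the renderer only ever sees the reduced result.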

Comment Re:Massive Scientific Visualization (Score 1) 111

Like I said: raw rasterizing isn't the main bottleneck... reading and transforming the data is... both things better done on the CPU. Drawing frames takes up a very small amount of the overall runtime... but it's always nice to speed it up!

GPUs wouldn't help much in this scenario... and our CPU clusters are used for many things other than visualization.

Yes, we do have some dedicated "viz" clusters as well... but we typically don't use them because they are too small for loading many TB of data.

Comment Massive Scientific Visualization (Score 3, Informative) 111

This is seriously useful for massive scientific visualization... where raw rendering speed isn't always the bottleneck (but of course, faster is better).

We do simulations on supercomputers that generate terabytes of output. You then have to use a smaller cluster (typically 1,000-2,000 processors) to read that data and generate visualizations of it (using software like ParaView). Those smaller clusters often don't have any graphics cards on the compute nodes at all... and we currently fall back to Mesa for rendering frames.
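For reference, forcing Mesa's software rasterizer on a GPU-less node is typically done with a standard Mesa environment variable (the render-script name below is made up; `pvbatch` is ParaView's offscreen batch driver):

```python
import os

# Ask Mesa to use its software rasterizer (llvmpipe) even if a
# hardware driver happens to be present -- this is what GPU-less
# compute nodes end up relying on.
env = dict(os.environ, LIBGL_ALWAYS_SOFTWARE="1")

# Hypothetical offscreen render launch:
# subprocess.run(["pvbatch", "render_frames.py"], env=env)
```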

If you're interested in what some of these visualizations look like... here's a video of some of our stuff:

Comment Re:Versioning and Releases Are Old School (Score 1) 86

Look at my reply to the guy above you for a little more detail.

We _do_ actually test every code change _before_ it enters a "devel" branch (everything flows through tested pull requests). Part of that testing is actually testing against _downstream_ applications (users' applications). Once it's deemed fit to go into "devel" the PR is merged and a new set of automated testing is kicked off... with more extensive testing against all downstream applications. If that passes then a "release" is automatically done by merging to our "stable" branch (which is actually master).

The entire system (other than pressing "Merge" on the PR) is completely automated... allowing us to roll out several releases per day that are guaranteed to work with all of our users' applications. This gives them the freedom to use the very most up-to-date version of the library (the master branch) without fear.
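The flow just described could be sketched roughly like this (a toy illustration only; the function names are invented, and the real promotion is driven by our CI infrastructure, not a script like this):

```python
import subprocess

def current_version():
    # The "version" is simply the current Git commit hash.
    return subprocess.run(["git", "rev-parse", "HEAD"],
                          capture_output=True, text=True,
                          check=True).stdout.strip()

def promote_to_stable(run_tests, downstream_apps, merge_to):
    """Re-test every downstream application against "devel"; if all
    pass, the merge to "stable" *is* the release.
    run_tests(app) -> bool; merge_to(branch) performs the merge."""
    if all(run_tests(app) for app in downstream_apps):
        merge_to("stable")
        return True   # released
    return False      # "devel" stays where it is
```

The point is that there's no version-numbering step anywhere: a release is just a merge, and its identifier is whatever commit hash lands on stable.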

As for the version "number" it actually doesn't matter... no one talks about "I'm on version 4ce653b488b3f1a3f929caa136f63f7846303967"... the only question is: "Did you update today?"...

BTW: If we make an incompatible API change (which is rare), _WE_ actually put in PRs to all of our downstream users' applications (currently numbering in the hundreds) to fix their applications.

The purpose of all of this (which is talked about in the papers) is to keep the entire community running on the newest code... all the time. It trades support burden (of old versions) for a burden of always passing extremely rigorous tests. We've found it to be a really positive thing and our userbase is continuing to grow because of it.

About that userbase. This is massively-parallel, scientific and engineering software (the kind that runs on the largest supercomputers in the world). Several hundred users is actually pretty great ;-)

Comment Re:Versioning and Releases Are Old School (Score 1) 86

Quick to criticize and call names I see.

You obviously didn't even bother a cursory glance at the two papers I pointed to. We spend a LOT of time doing release engineering... we just build it all into an automated system so that we can do multiple "releases" per day.

We actually test every code change against a large set of our downstream users' codes... automatically. Yes, every change... and yes, against _users'_ applications that are built using our library (even users located on the other side of the world). Our continuous integration isn't just internal... it spans all our users' applications. Every time a code change passes against the dozens of downstream applications the system automatically rolls out a "release" (by merging to our "stable" branch). Our users can use the stable branch (and update daily if they like) without fear of it breaking their codes.

We are proud of using a Git hash for our "version". We do it this way because we want the entire community to move forward together... all the time. It's a major issue in computational science when people use old versions of libraries forever. Again, go read the papers.

Comment Versioning and Releases Are Old School (Score 1) 86

We run with a full continuous integration cycle... with continuous release. Our software version is whatever the Git hash is. This is for a large computational science library that's in use by hundreds of researchers around the globe.

You can read about some of our software development methodology here:

Although, that's a bit dated now and a newer article has already been accepted by the Journal of Open Research Software and should be out "real soon now". You can see an early draft of the new paper here:

Comment Hashing System Libraries (Score 1) 246

I wonder if it would be possible for Xcode to compute a hash of system libraries / executables that is then embedded into the app binary. Apple could then check this hash against what it should be and reject any app that was compiled with a bogus version of Xcode or system libraries.

Might not stop everything... but it could be a start.
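The idea above could be sketched as follows (purely hypothetical; Apple's actual toolchain and code-signing checks work differently, and the file paths here are made up):

```python
import hashlib

def hash_file(path):
    """SHA-256 of a library or executable, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def toolchain_is_clean(manifest):
    """manifest maps file path -> hash recorded at build time.
    Returns False if any system library/executable was altered,
    i.e. the app was built with a tampered toolchain."""
    return all(hash_file(path) == expected
               for path, expected in manifest.items())
```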

Comment Re:And that's why you don't trust apps initially (Score 1) 246

In this case the malware was reporting back to a server with lots of details about where it was running... including what app it was buried inside of and then awaited further instruction.

It's definitely conceivable that the authors would send back specific instructions for what to do _in that particular app_... like steal bank account numbers or mess with email...

Comment Re:Wouldn't it be nice (Score 2) 80

Use an iPhone. This is the whole reason why Apple disallowed multitasking in the first place (relying instead on external notifications)... then they brought out APIs to allow apps to do very specific things in the background (finish a download, play music, etc.). For a long time apps had huge restrictions on what they could do in the background so that they didn't soak up battery.

Only recently were iPhone apps allowed "free rein" in the background through a mechanism called "Background App Refresh". And you know what? The ability to do that is directly selectable _per app_ right in the Settings for the phone. No extra "Battery Saver" app needed.

It's funny how many techy people react to this type of thing as Apple being overly restrictive... when in reality the majority of users are appreciative of these restrictions as it gives them an overall better experience.

Comment Re:Battery Life on Phones (Score 1) 80

Or the system is working properly. 80% of people are happy with their thin phones and 20% of people can add battery cases if their needs call for it (numbers pulled out of ass based on seeing people with huge cases that might be battery cases).

Why do the rest of us need to carry around a 4 pound brick when our current thin phone gets us through the day?

Comment Re:Dog Poop Stations (Score 1) 177

Bags are cheap, and there are MANY biodegradable dog bags available.

Quit being an ass and pick up after your dog!

My apartment complex has many stations that are always fully stocked with bags... BUT I actually carry and use my own because it's more convenient than using the stations.

Because of the stations it is VERY rare for there to be poop lying around... even though there are TONS of dogs here.
