Comment Re:Cost vs HDD Solution (Score 4, Informative) 268

Note that the $1,800 is just for the tape drive. An 8-tape library with drive and media will be more like $4k, and that still only gets you 12TB (given the file types you mentioned, don't plan on getting any capacity boost from the LTO compression). You will have to go to one really big library before tapes win on price, unless of course you are willing to change tapes manually, or build your own robot/library out of Lego. But even then, that 24TB figure is only a lower bound on the crossover point.
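Rough arithmetic from those numbers: $4,000 for 12TB works out to roughly $330 per TB at that size, and that's before the drive cost gets amortized over additional tapes as the library grows.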
Music

Submission + - "Open Source Bach" project completed; score and recording now online (opengoldbergvariations.org) 1

rDouglass writes: "MuseScore, the open source music notation editor, and pianist Kimiko Ishizaka have released a new recording and digital edition of Bach's Goldberg Variations. The works are released under the Creative Commons Zero license to promote the broadest possible free use of the works. The score underwent two rounds of public peer review, drawing on processes normally applied to open source software. Furthermore, the demands of Bach's notational style drove significant advancements in the MuseScore open source project. The recording was made on a Bösendorfer 290 Imperial piano in the Teldex Studio of Berlin. Anne-Marie Sylvestre, a Canadian record producer, was inspired by the project and volunteered her time to edit and produce the recording. The project was funded by a successful Kickstarter campaign that was featured on Slashdot in March 2011."

Comment dump, snapshots, rsync batch mode (Score 1) 153

The first problem to consider is how you determine which files to back up. Filesystems like xfs, zfs, and btrfs have convenient ways to get a list of changed files (and for xfs and zfs, the contents of those files as well). For ext2/3/4 (and other older unixy filesystems) look at "dump". And of course, if you're working with a completely dumb filesystem, you can always use rsync (if your backup disk is remotely accessible) or some external/manual indexing to figure out which files to back up.
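As a rough sketch of that "what changed" step (the dataset names, paths, and marker file below are just placeholders):

# zfs: list files changed since the last snapshot
zfs diff tank/data@last-backup tank/data
# dumb filesystem: one crude option is anything newer than a marker file touched on the previous run
find /data -type f -newer /var/backups/last-run -print
touch /var/backups/last-run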

If your filesystem supports some form of dump (send for zfs), you can use that to create your incremental changes. If you only have a list of files, use tar or rsync. If you want to keep a full backup on the same drive, you can use rsync's batch mode (see the manpage) to efficiently generate incremental backups on filesystems that don't handle that well themselves.
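Minimal sketches of both, with made-up dataset names and paths:

# zfs: incremental stream containing only what changed between two snapshots
zfs send -i tank/data@sun tank/data@mon > /backups/data-sun-to-mon.zfs
# rsync batch mode: update a local mirror while recording the delta in a batch file,
# then replay that same delta against another copy of the tree (which must start out
# identical to what the batch was generated against)
rsync -a --write-batch=/backups/batch-mon /data/ /mirror/data/
rsync -a --read-batch=/backups/batch-mon /drive/mirror/data/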

You don't want to hard link between your live tree and a backup tree. That will result in changes showing up in both trees, obscuring the changes when you run a backup. Hard linking is a technique used with rsync for snapshotting, where two backup trees represent the state of the original filesystem at different times; to make that work, the links are broken for the files that differ between the two snapshots.
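For reference, that hard-link snapshot pattern looks roughly like this (directory names made up):

# unchanged files in the new snapshot are hard links into the previous one;
# files that changed get fresh copies, so the two trees diverge only where needed
rsync -a --link-dest=/backups/2012-05-31 /data/ /backups/2012-06-01/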

Comment Luminosity (Score 1) 125

You might want to look into luminosity-based research. The brightness at each pixel may contain some information about the angle of the surface with respect to the camera and a light source. At some point that looked potentially promising, but of course the technique can fail pretty easily. Much of the work I've seen is based on trying to figure out how our brains do this all the time. Try closing one eye and see how 3D the world still looks (better than most 3D movies). You are going to have a tough challenge to beat that, but that doesn't mean it's not worth trying.
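As a rough sketch of the idea (the Lambertian model is a big simplification, but it's the usual starting point):

I(x, y) ≈ albedo(x, y) * max(0, N(x, y) · L)

where N is the surface normal at a pixel and L is the direction to the light source. Shape-from-shading tries to recover N from I, and it's badly underdetermined without extra assumptions, which is part of why the technique fails so easily.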

Comment Re:Can an FPGA multitask? (Score 1) 499

I'm not really familiar with AMBA. However, if for whatever reason AMBA does not scale, one could simply architect a system where the reconfigurable fabric interfaces with multiple AMBA islands :) I hope that scalability is being carefully considered and it does not come to that.

It wouldn't be the first architecture to use FPGAs to support cooperation of processors. The Cray XD1 is one example: it had a mix of Opterons and Virtex FPGAs, some of which were available for compute, others solely for interconnect. On a side note, the Intel Paragon used Xilinx FPGAs to control the LEDs on the doors, back when supercomputers were more fun to watch.

I agree: Moore's law hasn't been focused on sequential or even single-threaded performance for quite a while. I do think things are getting more interesting. Clock and voltage scaling seem to have slowed to a crawl, if not stalled entirely, while density and die size still seem to be scaling nicely.

I'd also like to point out the trend toward lower-power devices. I wonder how the balance of compute and energy is shifting for things like laptops and cell phones. The laptop trend seems to be a slow decrease in power consumption, with whatever compute fits in that power budget. Cell phones seem a bit more confusing. I would expect to see battery life of smartphones increase with each new generation, but there seems to be an obsession with computational power. On the upside, at least cell battery life doesn't seem to be getting much worse. I suppose people might actually reject a phone that fails to survive a normal day of use.

Comment Re:Evolving to FPGA (Score 1) 499

All the early MPEG-4 accelerators I saw were implemented in FPGAs, and much of that was encoders rather than decoders, since encoding is the harder problem. Now you can buy cheap MPEG-4 ASIC/IP-core accelerators, and those are still going to be much more energy efficient than using the array of general-purpose cores on a GPU.

As for implementing GPU pipelines on FPGAs, it has been done: http://hackaday.com/2008/05/21/open-graphics-card-available-for-preorder/ I'm sure I've seen other research projects, or maybe just people screwing around and implementing GPU pipelines "because we can". It's also a convenient solution for educational purposes. But no, if you want to make an efficient GPU for general use, it does not make sense to map GPU logic onto the FPGA fabric. You would lose roughly an order of magnitude in clock speed, and doing it that way you completely toss away the benefits of the FPGA architecture.

I think you might have a skewed impression of how complex MPEG-4 encoding and decoding is, and how much area it consumes. Also, the comparison of FPGA logic cells to "gates" in a GPU is a bit faulty. In terms of raw transistor count the largest FPGAs tend to be a little ahead, but the "million" or so logic elements in an FPGA do not translate to simple logic gates or transistors. The logic cells are multiple-input lookup tables that are used to evaluate arbitrary boolean functions. How many traditional gates can you replace with a single 4-input lookup table? What about an 8-input LUT? The answer does depend on the logic you are mapping, but it's almost never a 1:1 mapping.
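To put rough numbers on that rhetorical question: a k-input LUT is just a 2^k-entry truth table, so it can be programmed to implement any boolean function of its inputs.

4-input LUT: 2^(2^4) = 65,536 possible functions
8-input LUT: 2^(2^8) = 2^256, on the order of 10^77 possible functions

The practical point is that one LUT absorbs an entire logic cone of up to k inputs, whether that cone would have been a single gate or a dozen, so there is no fixed gates-per-cell conversion.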

Also, FPGAs do have RAM, fixed logic cores (DSP blocks/multipliers, etc.), and even conventional processor cores. While it's true that however big the array, someone will have a problem that won't fit, you can put an awful lot on a modern FPGA.

As for your final thought about fixed silicon: not necessarily. Look at this fellow's research: http://cas.ee.ic.ac.uk/people/nachiket/ He goes into why CPUs and GPUs are slow for running SPICE circuit simulations. Despite running at a fraction of the clock speed, his FPGA implementation completes the simulations faster and consumes much less power than the CPU or GPU. True, a fixed-logic accelerator specifically designed to implement the algorithm would be faster, but how many special-purpose fixed accelerators do you want to put on your chip? What if the implementation can benefit from dynamically adapting to the current problem? Sometimes it really is more efficient to provide reconfigurable logic and load in the best implementation you have for each problem. Dynamic hardware acceleration is likely one of the reasons Intel is producing Atom-FPGA combos. There are ongoing research projects examining the benefits for mobile computing devices. Transistors are cheap, but people want to use cell phones for all sorts of strange things, and there's always something new on the horizon.

Comment Re:Can an FPGA multitask? (Score 1) 499

Reconfigurable logic can be virtualized to get around the area limitations. Have a look at the SCORE publications for research on that topic: http://www.seas.upenn.edu/~andre/compute_models.html

Tabula is a new FPGA company that uses time-multiplexed logic to extend the effective size of the computation you can fit in a given piece of silicon: http://www.tabula.com/ Their products are still statically scheduled and not really amenable to the full virtualization of the SCORE model, but it's a real product and you can buy one today.

There's a big space between the fully spatial FPGA and the fully temporal CPU, and we've been watching that space fill in slowly over time. From the CPU side, we've seen cores handle more operations per cycle, hyperthreading, and now multi-core as the default configuration. GPUs are now composed of hundreds or thousands of execution units that are simpler than CPU cores, but more complex than the logic blocks in FPGAs.

There are problems best suited to each of these architectures. When you play a graphics-intensive game, you expect the GPU to handle the stuff it's good at and the CPU to handle the bits it's good at. FPGAs are just a little bit more obscure. But hybridization does make sense. That's why we've seen PowerPC and ARM cores embedded in reconfigurable fabrics, and now Intel putting an FPGA die in the same package as its CPU. We're long past the point of saying that any of these are irrelevant because they are not the optimal solution for all problems.

To add to that, regarding your comment on idle resources: we're also hitting thermal limits. Yes, we can still put more and more transistors on a chip, but we can't switch them all simultaneously at full speed without frying it. Increasing cache size and core count helps. But if you're going to have more area than can be used simultaneously, it makes sense to add different resources that handle different tasks more efficiently (in energy and latency). That's part of why Intel, AMD, and Nvidia are all mixing GPU and CPU cores on die. If the Atom+FPGA combo works out well, I would expect to see regions of reconfigurable fabric directly on die in the not too distant future.

Comment Re:Or you can use Excel (Score 2) 64

a spreadsheet application will deliver results a lot faster.

Not really, particularly if you have the data already entered. Running:
R
data=read.csv("data.csv")
hist(data[[1]])  # hist() needs a numeric vector, so pick a column rather than the whole data frame

takes far less time than selecting your columns, dragging the mouse over to the graph button, selecting the region for your plot, and then trudging through a multi-stage wizard. Even if you actually want to type some data into a spreadsheet, it's frequently faster to save the table and load it up in R or gnuplot to graph it. And if you want something like a histogram or a boxplot, Excel doesn't stand a chance (Gnumeric at least supports boxplots).

I'll accept that creating a slightly prettied-up graph might be a little quicker in a GUI spreadsheet. But for quick-and-dirty graphs and for higher-quality ones, spreadsheets are slower, if they work at all. Once you start encoding your style preferences in little scripts that you load before graphing, you'll find that even high-quality graphs take less time than mediocre graphs from a spreadsheet. And really, there's something satisfying about tweaking one line in a single file and having that automatically update the style of 20 graphs in an article.
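As a sketch of that workflow (file names made up): if each figure script sources a shared style file and writes its own image, a style tweak just means rebuilding everything:

# every fig_*.R starts with source("style.R"); rerun them all after editing the style
for f in fig_*.R; do Rscript "$f"; done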

I generally find that when plotting something once, it's a coin toss whether to write a script or manipulate the data and plot manually; at twice, scripting definitely breaks even, and beyond that it just gets more and more valuable. R (and many other environments) saves your history, so if you decide a day later that you should have just written a script, it's already there; you just need to copy the commands out of the history file. In Excel, well, at least you learned from experience what to do the next day.

As I see it there are two reasons to graph in a spreadsheet. First, if you're actually working in a spreadsheet and just want a quick look at some data (not debating the merits of that, separate discussion). Second, when you're not sure what you want and are unfamiliar with the tools available, a GUI gives you something to poke at blindly with a mouse. In that second case, I think one should accept the pitfalls of ignorance with an intent to learn more and improve. Stubbornly grasping your spreadsheets, knowing there's a better world out there, will only hurt you in the long run.

Comment Get a grant or make the prof present (Score 1) 244

If the prof is a co-author they get credit where it counts most for them. If you're at a research school, publications may be the primary metric for their performance; teaching and graduating students only count if they are a serious problem or if the research is sub-par. As such, get the prof to pay for the trip. If the prof won't or can't pay, check with the school; many schools have travel grants for students in just this sort of situation. Finally, if all else fails and you really just don't want to go, but you've done the research, make the prof present it. You still get the author credit.
Medicine

Autism Diagnosed With a Fifteen Minute Brain Scan 190

kkleiner writes "A new technique developed at King's College London uses a fifteen minute MRI scan to diagnose autism spectrum disorder (ASD). The scan is used to analyze the structure of grey matter in the brain, and tests have shown that it can identify individuals already diagnosed with autism with 90% accuracy. The research could change the way that autism is diagnosed – including screening children for the disorder at a young age."
Science

Why the First Cowboy To Draw Always Gets Shot 398

cremeglace writes "Have you ever noticed that the first cowboy to draw his gun in a Hollywood Western is invariably the one to get shot? Nobel-winning physicist Niels Bohr did, once arranging mock duels to test the validity of this cinematic curiosity. Researchers have now confirmed that people indeed move faster if they are reacting, rather than acting first."
Space

Astronomers Discover 33 Pairs of Waltzing Black Holes 101

Astronomers from UC Berkeley have identified 33 pairs of waltzing black holes, closing the gap somewhat between the observed population of super-massive black hole pairs and what had been predicted by theory. "Astronomical observations have shown that 1) nearly every galaxy has a central super-massive black hole (with a mass of a million to a billion times the mass of the Sun), and 2) galaxies commonly collide and merge to form new, more massive galaxies. As a consequence of these two observations, a merger between two galaxies should bring two super-massive black holes to the new, more massive galaxy formed from the merger. The two black holes gradually in-spiral toward the center of this galaxy, engaging in a gravitational tug-of-war with the surrounding stars. The result is a black hole dance, choreographed by Newton himself. Such a dance is expected to occur in our own Milky Way Galaxy in about 3 billion years, when it collides with the Andromeda Galaxy."

Comment Dell Latitude XT and XT2 (Score 1) 176

The Dell XT and XT2 and the HP TX all use roughly the same digitizer, though apparently a different revision in each. I have used an XT and an XT2, though mostly in Linux. I can say that I've seen multitouch function in Windows, though I can't comment on stability beyond a few seconds of play. Between the two, the XT2 is an incremental improvement: it's considerably lighter, a bit faster, and the hard drive moved from an obscure "standard" to a SATA connector (which may be more convenient for long-term maintenance). The one advantage of the XT is that it's now considerably cheaper. If your primary usage is as a tablet, note that the Dell models do not have a rigid latch to hold the lid in place, just magnets and rubber bumps/guides. It stays put pretty well, but the sturdy latch on the ThinkPad convertibles is better for prolonged use as a tablet.
