
 




Researchers Make Mount Etna Sing

The Interfacer writes "Predicting eruptions will become easier now that scientists are using technology to translate the patterns in a volcano's behaviour into sound waves. The research project, which brings together experts from Europe and Latin America, digitally collects geophysical information on seismic movements, then uses data sonification to transform it into audible sound waves, which can be 'scored' as melodies. The resulting 'music' is then analysed for patterns of behaviour and used to identify similarities in eruption dynamics and so predict future activity."
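As a rough illustration of what "data sonification" means here, a minimal sketch in Python that maps a normalised seismic trace onto musical pitches. The note range, the normalisation, and the sample data are all invented for the example; the article does not describe the researchers' actual mapping.

```python
def sonify(samples, base_midi=48, span=24):
    """Map normalised seismic samples onto MIDI note numbers.

    base_midi and span are arbitrary illustrative choices: here the
    quietest sample becomes C3 (MIDI 48) and the loudest lands two
    octaves higher.
    """
    lo, hi = min(samples), max(samples)
    notes = []
    for s in samples:
        # scale the sample into [0, 1], then onto a span of semitones
        t = (s - lo) / (hi - lo) if hi != lo else 0.5
        notes.append(base_midi + round(t * span))
    return notes

def midi_to_hz(note):
    # standard equal-temperament conversion: A4 = MIDI 69 = 440 Hz
    return 440.0 * 2 ** ((note - 69) / 12)

# a made-up tremor trace, normalised to [-1, 1]
tremor = [0.0, 0.2, -0.1, 0.9, -0.8, 0.3]
melody_hz = [midi_to_hz(n) for n in sonify(tremor)]
```

Playing those frequencies back in sequence gives a crude "melody" whose contour follows the seismic trace, which can then be inspected for recurring motifs.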
This discussion has been archived. No new comments can be posted.


  • [Insert obligatory American Idol joke here.]
  • But will the volcano run linux?
  • Big Etna? BIG ETNA? You sound like a broken record. Why are you so afraid of that pathetic tub of lard?

    oh right... it's a volcano.
  • Pattern analysis? Revolutionary.
    • Pattern analysis of audible sound waves created from a data set of seismic movements... I don't know exactly what benefit it provides, but it could be that the translation helps to eliminate bad data better than just analyzing the raw data and predicting based on the assumed importance of certain data. I do know, however, that a lot of experts, whose entire lives revolve around studying volcanoes, are very interested and excited about this... which leads me to two conclusions:

      1) It is likely that this COULD be
      • I don't know why they have to turn it into music first. Why can't they just do pattern analysis on the original sound itself?
  • by FlyByPC ( 841016 ) on Wednesday August 09, 2006 @11:54PM (#15878427) Homepage
    "I feel the earth... move... under my feet..."
  • Dirk Gently (Score:4, Interesting)

    by SmellsLike ( 911771 ) on Wednesday August 09, 2006 @11:59PM (#15878435) Homepage Journal
    These 'making music out of nature' studies always remind me of the spaceship in Dirk Gently's Holistic Detective Agency by Douglas Adams, where the ship analysed all the maths of the planet and turned it into beautiful music, which was then given to one of the classical composers.

    Having listened to the Etna sounds, though, it's not quite Mozart. Both audio clips are at the bottom of the article and not slashdotted yet. It'd be cool if they could explain what was happening at which points in the melodies. It also sounds a little like a 3-year-old smashing a keyboard.
    • by kfg ( 145172 ) *
      It'd be cool if they could explain what was happening at what points in the melodies.

      Let's just say that when you hear Asus you'd better C# and run or you'll Bb.

      KFG
    • by Troy ( 3118 )
      It'd be cool if they could explain what was happening at what points in the melodies


      Uncovered by the research: When the volcano is about to explode, it sounds like Yoko Ono.
    • sounds a little like a 3-year old smashing a keyboard.

      Pele is sensitive about Her musical abilities, and she is going to kick your ass.

  • I imagine the vibration described here [pbs.org] would sound like a large gong.

    Or maybe a bell. Ask not for whom the bell tolls...

  • by MikeWasHere05 ( 900478 ) on Thursday August 10, 2006 @12:22AM (#15878504)
    I'm thinking the "convert raw data to music and then extract valuable data from music" step is just in there to ooh and ahh the grant boards. How can that be more efficient than just looking at the raw data?
    • by semiotec ( 948062 ) on Thursday August 10, 2006 @12:53AM (#15878578)
      The point is to change the data to a format that is easier to process.

      For example, if you just look at the waveform or frequency spectrum of a piece of music, it's difficult to tell who the composer was. However, if you re-package the information as sound, then it becomes much easier to analyse or identify, at least for humans.

      Of course, this is the reverse of what they are doing, i.e. their original data is not sound-based, but the idea is similar, they are hoping that the volcano's data (which is a wave form of sorts) is easier to process in the form of sounds by human ears than by looking at the graphs.
      • The problem is that when converting the data into music, there is no... what's the word... personality for the specific mountain that created that data. It's similar to running rand() and creating music from that. Sure, it might be cooler than a bunch of random numbers, but if I'm looking for a pattern I'd rather see numbers and graphs than have to say "Ahh, it crescendoed from B# to Fb, this must be related to the position of the plates... blah blah"
        • The problem is that most pattern analysis algorithms are computationally expensive. Usually on the order of 2^n computations, unless the researcher is particularly clever and managed to use domain specific knowledge to speed the algorithm up. Reducing your data set by a few orders of magnitude can be the difference between running an algorithm in a day and running it until you're dead.

          The up-shot is that instead of making the scientist interpret musical patterns for insights into volcanos (or whatever th

        • Ahh it crescendoed from B# to Fb

          Obviously not a well-tempered mountain. Fb is normally rendered as "E" and has been since Bach's time.

        • You don't seem to be self consistent.

          If the mountain CAN be predicted, the output of that mountain by necessity cannot be random and there has to be a "personality". There has to be something PREDICTABLE otherwise this entire exercise is for naught.

          So converting to sound may seem silly, but what if it happens to provide the insight we need to determine how to make valid predictions?

          I repeat: Converting to sound seems silly, but it is merely transforming the data from one difficult to understand space to one
    • It can be less efficient. It's not like they care, they're fucking around all day and collecting a paycheck while the rest of us pay to support them. Although they did invent the term "sonification technology" so at least they're providing us with a little entertainment. I know I laughed when I read that line. What a bunch of pure bullshit.
      • What if they happen to make a breakthrough discovery because they hear something they can't see?

        Our ears happen to be immense parallel processors, with millions of hairs tuned to different frequencies, all operating simultaneously.

        Our eyes are similar, the problem is that the graphs/data is not presented in a way amenable to using our eyes rather than our brains. Perhaps if you take the data and transform it into a 2D false color animated movie...

        Again, if it works, it works. Save the vilification for later.
    • This is pure speculation, but I'd guess that this would be advantageous because there has been a ton of research done recently in the area of patterns in sound, whether for searching for specific clips of music, or for identifying similarities for other purposes. I realize that vibrations engineering is also a big field, but maybe they aren't looking for patterns? I don't know, just an idea.
    • Yes, this was my first thought on reading the summary. Those people who are arguing that converting it to sound somehow makes it easier to analyze should explain why it wouldn't be even MORE efficient to convert it to a trippy video with shifting colors which you could watch...after all, most folks' eyes are a good deal more sensitive to data than their ears. I mean, are they going to get well-trained professional musicians with perfect pitch to analyze the data for them or what? In the end, there is jus
      • '... why it wouldn't be even MORE efficient to convert it to a trippy video with ....'

        Because it's linear data in the time domain.

        '... after all, most folks' eyes are a good deal more sensitive to data than their ears.'

        Not so in the time domain; the eye can barely discriminate down to 1/16th of a second (e.g.: movies run (or did) at 24 frames per second).

  • Pattern recognition can be done without translating it into something audible. The pattern is there, regardless of the frequency range. This sounds like BS to me...

    • No kidding. Real scientists looking for patterns in the unpredictable convert to spin art.

      KFG
    • Pattern recognition can be done without translating it into something audible. The pattern is there, regardless of the frequency range.

      Would you say the same about a histogram or a scatterplot? Visualisation is widely accepted as a way of discovering and demonstrating patterns in data - the patterns are still "there" if you don't visualise the data, but you might never know it. The same applies to sonification; the only difference is that visualisation is universally accepted by the scientific community,

    • Let us see you try to "recognize" Mozart using sheet music only. Transform said sheet music into music, and he's instantly recognizable.

      Or let us see you try to "recognize" the Mona Lisa using a 2 dimensional grid of hex values. It's still the Mona Lisa, but I bet you couldn't see it if you tried.
  • Mad Scientist: "Now, repeat after me..."
  • "Please, no more virgins. They give me indigestion. Especially the blonds."
  • Now let's find out when that underground volcanic chamber beneath Yellowstone will erupt! That sucker is one of the two largest chambers of lava in the world! It's a time bomb, so why not study that, too?!
    • volcanic chamber beneath Yellowstone

      Every time his aides present Bush with a funding bill for anything in Yellowstone, he launches into a 45-minute description (with voices!) of his favorite Yogi Bear episodes. Such legislation rarely makes it to his desk anymore.

  • How is what they're talking about not some subset of Fourier analysis? Come on, recasting the data as sound waves? You mean shifting the frequency domain from ELF to human-audible? What in the world is the point?

    They must be using some software package originally written for audio guys, and are unaware that the "conversion" they are talking about is conceptually nothing more than editing the sampling rate constant in the datafile.

    I am never surprised at the dearth of researchers competent in data an
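The claim above, that the conversion is conceptually little more than relabelling the sampling rate, is easy to illustrate with arithmetic. All numbers here are invented for the example, not taken from the article:

```python
def audible_frequency(f_original, rate_recorded, rate_playback):
    """Playing samples back at a higher rate than they were recorded
    at scales every frequency component by the ratio of the two
    rates; the sample values themselves are untouched."""
    return f_original * rate_playback / rate_recorded

# a hypothetical 0.5 Hz tremor recorded at 100 samples/s, played back
# as audio at 44100 samples/s, comes out at 220.5 Hz (just above A3)
shifted = audible_frequency(0.5, 100, 44100)
```

The same relabelling applies to every component of the signal at once, which is why this trick preserves the relative structure of the data while moving it into the audible band.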
    • They presumably mapped the ELF input to musical tones, not just changed the sampling rate. That represents significant data squashing or smoothing, which is a good thing for several reasons.

      In any event, they're doing predictive time-domain analysis. The state of the art in that field is wavelet analysis, though the Kalman filter sees quite a bit of use in applications. These guys are surely aware of what a Fourier analysis is, what it isn't, and why this is different.
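For what the parent calls data squashing, a minimal sketch of a snap-to-nearest-semitone mapping; the researchers' actual tone mapping is not described in the article, so this is only an assumed scheme for illustration:

```python
import math

def quantize_to_semitone(freq_hz, a4=440.0):
    """Snap a frequency to the nearest equal-temperament semitone.

    Every input within about half a semitone of a note collapses to
    the same output note, which is where the lossiness comes from.
    """
    semitones = round(12 * math.log2(freq_hz / a4))
    return a4 * 2 ** (semitones / 12)
```

For instance, quantize_to_semitone(445.0) and quantize_to_semitone(438.0) both come out as 440.0, so small wobbles in the input disappear from the resulting "melody".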

      • I don't mean to be rude, but you shouldn't attempt to rebut someone unless you know what you're talking about. The criticism I was leveling at their work was specifically aimed at comparing their method of "mapping to musical tones," to being conceptually little more than changing the sampling rate.

        Actually, I should have gone further and pointed out how it's actually very destructive to the original data, since it amounts to a convolution with many presumptive kernels (and therefore NOT smoothing).
    • What in the world is the point?

      I suppose it's to be able to better feed it to one of the most powerful processors for pattern recognition on linear data available at this time. That would happen to be the human ear. Which is in fact so surprisingly capable that certain competing systems [slashdot.org] seem pretty laughable in comparison. It remains to be seen whether this conversion will truly turn out to be helpful, but it's quite definitely worth a try.

  • And now, a whole new kind of free-form jazz!
  • Prior art (Score:3, Interesting)

    by mattr ( 78516 ) <mattr&telebody,com> on Thursday August 10, 2006 @01:03AM (#15878602) Homepage Journal
    Ken Goldberg created an art installation called Memento Mori that translated seismic data received over the net in real time into deep bass rumblings driving a surround-sound system. The big bass woofer was under a floor you could lie on to feel it. He didn't need a 622 Mbps connection either.


    And incidentally, DANTE seems oblivious to the fact that the Dante project by NASA was a multi-legged robot that descended by rope into a volcanic crater.


    I don't mean to overshadow their scientific achievements, but the lack of memory by networked PR droids bugs me.

    • There is the distinct possibility that the method used to produce the "music" is significantly different from the art installation, and I'm sure that the artist was not solving any regression problems.
      • Thanks for your note, and sorry for my belated reply.
        Yes, you may be right. The artist was using StudioMAX software and was interested in making earthquake-like sounds based on the data, but I don't have data to compare; obviously the two parties had differing goals, but if the end user is a human, the end results might be closer than one would expect. It would be interesting to see, though.
  • Does this mean the fat lady has sung?
  • Joe Satriani's Mountain Song sounds much better.
  • What benefit does converting one set of data into audible data provide? Can't they find patterns in the original data they've gathered? I personally don't see the scientific benefit of translating existing data into a melody just to find patterns. Were they not able to find patterns in the original data?
    • Probably because it will lead to a generation of seismologists being trained in listening to mountains, with their mentors hoping that their superior intelligence (superior to a computer, that is) will render them capable of making seismological predictions. Much like car mechanics can tell you whatever's wrong with your car by you revving it for a few seconds. It's a tool to generate an interface to a mountain.
  • spent some time searching for samples, here they are:

    WMA SOUND SAMPLE [amazon.com]

    or like this REAL PLAYER SAMPLE [amazon.com]
  • Whatever that might mean...
    My conclusion from the article is that a supercomputer with huge arrays of CPUs still fails compared to our slow and limited brains.

    Assuming the brain is the best DSP around, at least when it comes to pattern recognition, it is a choice that also assures you a job.

    Only problem: how to interface the brain to all the seismic data. Well, the brain has two high-speed inputs: vision (~100 Mb/s) and hearing (~10 Mb/s).

    The seismic data is less than 1 Mb/s, so it is an OK match.

    Now, just create the interface. T
  • I wanted to be a Lumberjack!
  • Alright, sure, I'm willing to buy that representing information in sound can sometimes be more evocative or make for a better presentation, but it hardly counts as an important scientific advancement.

    I mean this is like having a press release for the pie chart talking about how it is going to revolutionize research in economics.
  • "Researchers Make Mount Etna Sing"

    Yes, but can they make it dance?
  • DANTE seems to be Recruiting [dante.net] some network engineers, but they don't mention volcanoes. Maybe it's in the small print of the employment contract.

    Cheers.

  • Few people know Mt. Etna emits some 30,000,000 kilograms of CO2 per day, and is the single largest CO2 source on the planet.
