


Comment Re:Any Cosmologists Here? (Score 5, Informative) 144

I'm not a cosmologist, but I am an astronomer. Most of the questions you ask are answered in the papers associated with Bolshoi, but science writers just leave them out because the numbers are so huge and hard to relate to -- I'm going to use megaparsecs for distances; 1 megaparsec = 1 million parsecs = 3.26 million light years = 200 billion astronomical units. 1 astronomical unit is ~93 million miles, the distance from the Earth to the Sun.
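To keep those unit conversions straight, here's a minimal sketch using the same rounded constants quoted above (the helper names are mine, not from any astronomy library):

```python
# Rounded conversion constants, as quoted in the text above.
LY_PER_PC = 3.26      # light years per parsec
AU_PER_PC = 206265    # astronomical units per parsec (1 AU ~ 93 million miles)

def mpc_to_ly(mpc):
    """Convert megaparsecs to light years."""
    return mpc * 1e6 * LY_PER_PC

def mpc_to_au(mpc):
    """Convert megaparsecs to astronomical units."""
    return mpc * 1e6 * AU_PER_PC

print(mpc_to_ly(1))  # 3.26 million light years
print(mpc_to_au(1))  # ~206 billion AU, the "200 billion" quoted above
```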

First off, "entire evolution of the universe" should obviously be qualified with "on cosmological scales", unless they've built the matrix. That said, how big is the domain? Is it just set to match the observable universe? 2048 grid points across the entire universe (or just the observable universe) seems rather... low-res. The TFA mentions an adaptive grid, but fails to mention what factor that can increase the local resolution by.

As you point out, the 'entire evolution ...' phrase is a bad way of saying that the simulated volume and mass are large enough to be statistically representative of the large scale structure and evolution of the entire universe. It's 2048^3 particles, which is a heck of a lot: 8,589,934,592 in total, each pushing and pulling on every other simultaneously. It's an enormous computational problem. The particles are put into a box ~250 megaparsecs on a side; the Milky Way is ~0.03 megaparsecs in diameter, and it's ~0.8 megaparsecs from here to the Andromeda galaxy, our nearest large galaxy. 250 megaparsecs is a huge slice and more than enough to ensure that local variations (galaxies) won't dominate the statistics. The ART code starts with a grid covering 256^3 points, but can subdivide to higher resolutions wherever a density threshold is passed, up to 10 times if I remember correctly, giving a resolution limit of around 0.001 megaparsecs. My memory is hazy, and the distances are scaled according to the Hubble constant at any given point, but I think they're in the ballpark.
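The numbers above hang together; a quick sketch of the arithmetic, using the values I quoted (each halving of the cell size is my assumption about how the refinement works):

```python
# Bolshoi-scale numbers quoted above; treat these as approximate.
box_mpc = 250.0        # simulation box side, megaparsecs
base_cells = 256       # base ART grid is 256^3 cells
max_refinements = 10   # assumed: each refinement halves the cell size

n_particles = 2048 ** 3
base_cell_mpc = box_mpc / base_cells
finest_cell_mpc = base_cell_mpc / 2 ** max_refinements

print(n_particles)      # 8589934592 particles
print(base_cell_mpc)    # ~0.98 Mpc per base cell
print(finest_cell_mpc)  # ~0.001 Mpc at maximum refinement
```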

Also, how exactly do we model dark matter when we don't really know WTF it is beyond the fact that it has gravitational mass? Does it work because gravitational effects are the only thing that really matters on cosmological scales?

Essentially, yes; gravity absolutely dominates at these scales compared to all other forces considered. The role of stellar and galactic feedback into their environment when forming (and as they evolve) changes lots of important things, but simulations like Bolshoi seek to simulate the largest scale structures in the universe. Smaller subsections of the simulation can be picked out to run detailed N-body simulations of Milky Way type galaxies, or to statistically match the dark matter clumps (which will form galaxies) to huge databases like the Sloan Digital Sky Survey. Both of those are pretty active areas of work in cosmology now.

Comment Re:Not just the GBT (Score 2) 192

Yes, NRAO and NOAO are very different and in charge of different things.

But contrast NRAO's initial response (here) to that of NOAO (here) or even AURA (here, sorry its a PDF) to see the different approaches that are possible.

NRAO essentially criticizes the portfolio review process, rejects the results outright without consideration, and hopes that the NSF figures out a better way: "AUI and NRAO encourage the NSF to work with its other federal agency counterparts to consider a more balanced approach with additional funding scenarios for the entire U.S. federal astronomy portfolio." Compare that to NOAO's response, which creates an online discussion point, lays out specific details about each relevant point, encourages all astronomers to talk to their congresspeople, and observes that the situation between NRAO and ALMA is similar to that between NOAO and LSST.

This isn't the time to complain about losing one or two specific facilities; it's the time to talk about the whole picture of how bad this would really be if divestment goes through and facilities are either closed or put into private (closed) consortiums. NRAO's response honestly comes across as sour grapes, defending their own facilities with little concern for the bigger picture.

Comment Not just the GBT (Score 5, Informative) 192

The GBT isn't the only thing at risk in all of this, and honestly NRAO is being selfish and shortsighted in its responses to the portfolio review. There are 5 optical telescopes at the national observatory at Kitt Peak, AZ that are set to be divested from the NSF as well, and their loss would be far more devastating in terms of the open-access telescope time lost if the facilities are closed or go into closed private partnerships. The closing of the Very Long Baseline Array (VLBA) means the loss of a literally one-of-a-kind setup as well. It's bad across all parts of the electromagnetic spectrum, but the decision to stop spending money on these telescopes preserves the NSF astronomy grants program, which funds a ton of astronomers, engineers, and students of all levels (myself included). The portfolio review didn't come up with any answers that we liked, but at least it's an honest estimate of what we have vs. what we expect funding-wise; things are getting even worse with the upcoming budget sequestration. The big worry among astronomers is that we're returning to a time when only large institutions have access to telescope time, which is the exact reason the US national observatory system was created in the first place. Public-private partnerships will likely come around somehow to keep these facilities operating, but it's still too early to know what those will entail in terms of open-access telescope time.

Comment Re:Kepler's produced great stuff (Score 2) 58

Kepler observes transits of planets. For simplicity's sake, let's just talk about one planet. As the planet passes in front of the star, the shape of the light curve tells you the ratio of the radii of the planet and the star, and gives some good constraints on the inclination of the system; that's it. If you make some assumptions about the underlying star, you can make a good estimate for the radius of the star and then get the radius of the planet. As AC points out, if you assume a density, you can get a "mass" measurement. That's like asking someone on the internet how tall they are and guessing their weight from it; it can get you an OK answer, but the real range of variation is tremendous and interesting.
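A back-of-the-envelope sketch of that radius ratio: to first order the transit depth is the squared ratio of planet to star radius, so an assumed stellar radius turns a measured depth into a planet radius. The helper and constants here are mine, not from the Kepler pipeline:

```python
import math

R_SUN_KM = 696_000    # approximate solar radius
R_EARTH_KM = 6_371    # approximate Earth radius

def planet_radius(depth, r_star):
    """Planet radius implied by a transit depth and an assumed stellar radius."""
    return math.sqrt(depth) * r_star

# An Earth-Sun analog: the transit depth is tiny.
depth_earth = (R_EARTH_KM / R_SUN_KM) ** 2
print(depth_earth)                           # ~8.4e-5, i.e. ~84 parts per million
print(planet_radius(depth_earth, R_SUN_KM))  # recovers ~6371 km
```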

In order to then get a real, measured mass of the planet, you need radial velocity measurements, which tell you the ratio of the mass of the planet and the star. Again, if you know some things about the star, you can then make a good estimate for its mass and then get the mass of the planet. NASA buys a share of time from the Keck telescopes, and the vast majority of that time has been eaten up by followup observations of Kepler candidates ever since it launched. For the smallest planets, you need precision on a scale that most observatories cannot provide at this time; for an Earth-like planet around a Sun-like star, the radial velocity precision required is on the order of cm/s, which is fantastically hard to do. I'm not actually sure anyone has produced anything real along those lines, though there are plenty of ideas and plans.
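Where that cm/s figure comes from: the standard semi-amplitude formula for the stellar reflex velocity, here for a circular edge-on orbit (the constants and function are my sketch, not from the comment):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
YEAR_S = 3.156e7     # seconds in a year

def rv_semi_amplitude(m_planet, m_star, period_s):
    """Stellar reflex velocity (m/s) for a circular, edge-on (sin i = 1) orbit."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet / (m_star + m_planet) ** (2 / 3))

# An Earth-like planet around a Sun-like star in a 1-year orbit:
k = rv_semi_amplitude(M_EARTH, M_SUN, YEAR_S)
print(k)  # ~0.09 m/s, i.e. ~9 cm/s
```

For comparison, a Jupiter analog tugs its star at ~13 m/s, which is why giant planets were found first.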

If you're technically minded, there's a decent review from a few years ago available. If you're looking for something simpler, try this.

As Teancum points out, you can detect and infer some other stuff by looking at variations in the transit times to see if there is something else tugging on the system; that's a whole different ballgame, and David Kipping is the most prominent person I can think of leading that charge.

Comment Re:Kepler's produced great stuff (Score 5, Informative) 58

But I think already we have the important data: thousands of planets! And these are just that tiny fraction that have orbits that take them across the line between their sun and ours. Thousands of times as many planets have orbits that would not cause a transit.

The point is we now have enough data to estimate the density of planets in the galaxy. So you could say the basic goals of Kepler have been accomplished and the rest is gravy.

The review panel agrees with you, and even goes further, politely tapping the Kepler science team on the bottom to try to point them in the right direction. Looking at the "Proposal Weaknesses" section (emphasis is my addition):

Since masses cannot be determined, Kepler can only directly measure an upper limit to [the frequency of Earth-like planets]. The proposal over-emphasizes the capability of Kepler to directly determine [the frequency of Earth-like planets] as compared to the contribution of Kepler determination of exoplanet statistics. The strong focus of the proposal on the detection of a few (e.g. 0 – 20) “Earth-like” bodies leaves the plan subject to criticism for the very high dollar cost of a few new objects, few or none of which can be followed up for mass characterization through Doppler shift measurements.

So basically they are telling the Kepler science team (rightly so) to pipe down about the Earth-like planets we can't do any more science with at this time, and instead to talk about the amazing stuff they can do with the statistics they've gathered. And that's not even touching on what else can be done with these data; Kepler is an outstanding stellar astrophysics mission.

Comment Re:But... (Score 1) 745

Didn't the Earth get hit by another planet, causing it to shoot a ton of crust into orbit..creating the moon?

Clearly, life requires a mars-sized object to hit the planet where life wants to form.

Jury's still out on that one: http://en.wikipedia.org/wiki/Moon_Formation#Difficulties

That's just science at work, and every theory has its "difficulties" answering all of our questions. The fact that this particular wiki article has a "Difficulties" section doesn't disprove the scientific merit of the giant impact hypothesis; it shows that the wiki writer tried to give a complete picture and wanted to list some of the interesting questions still out there. Simply put, the giant impact hypothesis has no rival that provides as many self-consistent lines of reasoning right now.

Comment Re:and what about xerox's stuff? (Score 3, Interesting) 988

People who genuinely care about contributing to society like Newton instead use quotes such as the classic "standing on the shoulders of giants" (or however you believe it was originally phrased). They don't have an easily dented ego, they just care about making things better whether improving existing things or coming up with new. This to me just reaffirms that Jobs was an arrogant selfish dick with no care for anything other than his own ego.

Newton was just as petty and seemed to have a *staggeringly* large ego, despite the famous quote you mention. You can get an idea of his craziness from his Wikipedia page, though to get a better idea just google around for the plenty of fun stories about Newton's interactions with Leibniz (Math), Hooke (Optics), and Flamsteed/Halley (Astronomy). I'm sure there are more I'm forgetting, too.
