Homeland security expert Michael Greenberger told one CBS station that "My best guess is that when this is all said and done we're going to find out that this was done by a contractor, not by an employee of the CIA."
Physics 101 baby
Drag racing is normally quite a good intro topic for 1D kinematics - you can do constant acceleration, look at the real relations between velocity, acceleration and position etc. It's nice because you can push the students to understand when equations are and are not valid, and what they can actually work out with limited information.
I agree - that's why I said throughout that my estimate was conservative. Assuming constant acceleration gives the slowest possible 0-60, hence the max time from these figures is 3.25 seconds.
Interestingly if you look through the pictures in TFA you see that the speed has just about topped out at the 1/8 mile mark. If you run the numbers there, you get an acceleration around the 10.5 m/s^2 mark, which indeed gives about 2.5s for the 0-60 time.
And yes, clearly the car is not designed with cornering in mind.
I would hope one of the requirements to be a "street-legal" car is that it can turn, at the very least...
If we make the horrendous assumption of constant acceleration we can put a maximum on its 0-60 time:
s = (1/2)at^2; with t = 9.87 s and s = 400 m this gives a = 8.21 m/s^2
60 mph = 60*1600/3600 m/s = 26.67 m/s
t60 = 26.67/8.21 = 3.25 seconds.
So we can conservatively conclude this vehicle does 0-60 in under 3.25 seconds.
Someone with better knowledge of the acceleration/velocity curves of cars can probably correct me on this, but I'm assuming that acceleration reduces with velocity rather than increases, due to wind resistance etc. If this is right 3.25 should be considered a maximum - if the acceleration reduces above 60mph, say, then the car must accelerate to this velocity in even less time to get a quarter mile in 9.87s.
From the data given we can only conclude that its top speed is somewhat higher than 400m/9.87s = about 40m/s or 90mph, but of course that would assume instant acceleration to 90; in all likelihood its top speed is far higher.
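The arithmetic above runs in a few lines (a sketch using the figures quoted in this thread, including the same rough 1600 m/mile conversion):

```python
# Constant-acceleration bound from the quarter-mile figures quoted above.
t_quarter = 9.87           # s, quarter-mile time
s = 400.0                  # m, quarter mile (rough)

a = 2 * s / t_quarter**2   # from s = (1/2) a t^2  ->  ~8.21 m/s^2
v60 = 60 * 1600 / 3600     # 60 mph in m/s, same rough conversion as above
t60 = v60 / a              # ~3.25 s, the conservative 0-60 bound

v_avg = s / t_quarter      # ~40.5 m/s (~90 mph): a lower bound on top speed

print(f"a = {a:.2f} m/s^2, 0-60 <= {t60:.2f} s, top speed > {v_avg:.1f} m/s")
```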
Noise. All kinds of noise.
The system is an interferometer - basically two laser beams sent down the arms of a large L shape and bounced back with mirrors (massive simplification). When the lengths of the arms are the same, the beams cancel; when they differ, a signal is recorded.
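That cancellation can be seen in a toy model (a bare idealized Michelson sketch, nothing like the real LIGO readout chain; the 1064 nm Nd:YAG wavelength is my assumption):

```python
import math

lam = 1064e-9  # m, laser wavelength (assumed Nd:YAG value)

def dark_port_power(dL, P0=1.0):
    """Power at the output port of an idealized Michelson interferometer
    for an arm-length difference dL (round-trip phase is 4*pi*dL/lam)."""
    return P0 * math.sin(2 * math.pi * dL / lam)**2

print(dark_port_power(0.0))       # equal arms: the beams cancel, no signal
print(dark_port_power(lam / 4))   # quarter-wave difference: full power out
```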
Now, the difference in length due to a gravitational wave is tiny, and the problem that kept LIGO from a detection is that there are huge numbers of vibration sources at around the same frequencies expected from gravitational waves, with far larger amplitudes. Thermal vibrations, for example, are a killer for experiments like this.
The waves themselves have almost exactly the waveforms that were predicted - the template fits from simulations match amazingly well in terms of amplitudes, frequencies and their evolution. What stopped experiments like this from making the observation was simply a lack of technical skill to make a precise enough instrument. Following the development of LIGO over the last decade, this is precisely what everyone working on the project said - once the noise curve was reduced to form Advanced LIGO (the recent upgrade), the noise would be sufficiently small that an integrated signal against a template would be clearly visible, and now it is.
Think about when the light at the edge of your calculation was emitted, and where that place is now. The definition of the observable universe goes roughly as follows:
Consider a photon emitted from a point at the big bang (really CMB, but we can substitute with a small change) that gets to us today. How far away is an object that was at rest (with respect to the homogeneous cosmological spatial slice) at that position now?
It isn't as simple as multiplying up these numbers, as the Hubble parameter changes over time. What you really want to do is track the world-line of an imaginary stationary object from which the light was emitted, and our own, integrating the Hubble rate given by Friedmann's equation with our best guesses at the types of matter/radiation dominating the evolution. That's where the 28 Gpc (about 90 billion light years) figure comes from.
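That integral can be done numerically in a few lines (a rough sketch of the standard flat-LambdaCDM calculation; the parameter values are my assumed round numbers, not anything quoted here):

```python
import math

# Rough numerical version of the horizon calculation described above.
H0 = 70.0                       # Hubble constant, km/s/Mpc (assumed)
Om, Orad, OL = 0.3, 9e-5, 0.7   # matter, radiation, Lambda fractions (assumed)
c = 299792.458                  # speed of light, km/s

def H(z):
    """Hubble rate H(z) from the Friedmann equation, in km/s/Mpc."""
    return H0 * math.sqrt(Om * (1 + z)**3 + Orad * (1 + z)**4 + OL)

# Comoving distance D = c * integral_0^inf dz / H(z).
# Substitute u = ln(1+z) (so dz = (1+z) du) to sample the huge z range well.
u_max, N = math.log(1e8), 100_000
du = u_max / N
D = 0.0
for i in range(N):
    u = (i + 0.5) * du          # midpoint rule
    z = math.exp(u) - 1.0
    D += (1 + z) / H(z) * du
D *= c                          # Mpc

print(f"comoving radius ~ {D/1e3:.1f} Gpc, diameter ~ {2*D/1e3:.1f} Gpc")
```

The diameter of roughly 28 Gpc (about 90 billion light years) is the figure quoted above.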
The point that emitted the photon is now no longer observable to us, and never will be again if the current models are correct - it has exited our past light cone, as does more and more of the spatial slice every instant. So there's no contradiction between the point moving away from us 'faster than light' and it being in our observable universe. One is a calculation about two spatially separated points at a fixed time; the other is a statement about the content of our past light cone.
Hope that helps!
The Hubble constant which is talked about here is the rate of change of the scale factor divided by the scale factor (H = (1/a) da/dt, or d/dt(log a)). You can think of it as the velocity of log(a) if you like. The matter contribution means that the universe is expanding.
The cosmological constant contributes to the acceleration of expansion (dH/dt ~ -(rho+P)), where rho is the energy density and P the pressure. For a pure cosmological constant, P = -rho and so this is zero. Follow the calculus through and you see that this gives a positive second derivative in a - the universe is accelerating.
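Written out (the standard flat-FRW equations in units with 8*pi*G = c = 1; the sign conventions here are the usual ones, supplied by me):

```latex
\dot H = -\tfrac{1}{2}\,(\rho + P), \qquad \frac{\ddot a}{a} = \dot H + H^2 .
% For a pure cosmological constant P = -\rho, hence \dot H = 0:
% H is constant, a(t) \propto e^{Ht}, and \ddot a = H^2 a > 0, i.e. acceleration.
```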
The point is that Hubble rate and cosmological constant are related, but separate ideas, and give non-degenerate contributions to observations - we can differentiate between the two. So a different Hubble observation would not, of itself, explain the cosmological constant problem.
It's a pleasure. I'm lucky enough to work on my passion, and to be able to talk about stuff like this with the people who work on it.
A side note - both Jesper and Johannes are very open and easy to talk to - I'm sure if they're not overwhelmed they'll respond to questions from the public about their ideas.
TL;DR - yes, it's a bit out there, but no more so than any of the other big attempts.
I talked with Jesper and Johannes at length while I was a PhD student - their ideas are based on applying the techniques of loop quantum gravity to non-commutative geometry. To give a brief summary of each:
LQG regards the basic variables of geometry to be holonomies and fluxes - a holonomy is the transport of a vector around a small loop, coming back to the start to find the vector isn't pointing the same way (think about carrying an arrow around a triangle from the north pole to the equator and back). This measures the curvature of the underlying manifold. The fluxes are like field lines in electromagnetism. It is these variables that are quantized (discretized) on a spin-network in LQG.
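The arrow-around-a-triangle example can be made quantitative (a standard Gauss-Bonnet fact about the sphere; this illustration is mine, not part of their formalism):

```python
import math

# Parallel transport around a closed geodesic triangle on the unit sphere
# rotates a vector by the triangle's angle excess (= the enclosed solid angle).
# The "octant" triangle (north pole -> down to the equator -> along the
# equator -> back to the pole) has three 90-degree interior angles.
interior_angles = [math.pi / 2] * 3
holonomy = sum(interior_angles) - math.pi   # angle excess, by Gauss-Bonnet

print(math.degrees(holonomy))  # 90.0: the arrow returns rotated by 90 degrees
```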
Non-commutative geometry is the idea that geometrical operators care about the order in which they are applied - area(A) length(B) != length(B) area(A) (very loosely). Non-commutativity is at the heart of quantum mechanics, and is the root of Heisenberg's Uncertainty Principle.
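A minimal numerical illustration of order mattering (the standard Pauli-matrix example from quantum mechanics, my own addition, not anything specific to their construction):

```python
def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X = [[0, 1], [1, 0]]    # Pauli matrix sigma_x
Z = [[1, 0], [0, -1]]   # Pauli matrix sigma_z

print(matmul(X, Z))  # [[0, -1], [1, 0]]
print(matmul(Z, X))  # [[0, 1], [-1, 0]]  : XZ != ZX, order matters
```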
What they're hoping to do is build on the work of Connes and Chamseddine, who have shown that the spectral action (a special type of object in non-commutative geometry, coming from its application to the standard model) naturally reproduces the Einstein-Hilbert action (the basis of General Relativity) under certain conditions. They hope that by applying LQG techniques here they'll get a full quantum theory of everything.
It's a long shot, of course, but all such things are - non-commutative geometry is a strange beast, and no-one has shown that LQG is the right way to quantize gravity (though they have had some theoretical success in cosmology and black holes). It's a personal aesthetic as to whether you think this is more or less plausible than extra dimensions, or symmetries, or some altogether new principle. It's not something I choose to spend my time on as I don't think it's the right way to go (I don't like non-commutativity, and LQG involves fundamental discreteness in a way that I think doesn't work) but I would say it's as good an idea as any other on the market and deserves to be explored.
I just got a fairly substantial grant for a project from an external agency. However, as things stand, on this project I will not be the PI (principal investigator) - that will be our head of dept. So why do I call it my grant? Because I wrote the proposal, handled all interactions with the funding agency, wrote the budget and arranged everything. My boss simply signed on a dotted line and shook a few hands. A symptom of the endless cycle of postdocs is that you don't have a permanent post until you're quite far on in your career. Therefore your own institution won't let you be the PI. The way around it is that you get a figurehead to be in charge, but you really end up running things.
This has its advantages and disadvantages. The big advantage is that you tend to have a fairly heavy hitter politically to back you up. He (and it's so often He that it's an insult to my female colleagues to pretend that they are equally represented) should have your back in exchange for drawing a fraction of his salary from your grant. The disadvantages are that you aren't officially PI for the sake of your CV - when you apply for jobs you are asked "Wasn't that X's grant?" when you talk about it - and it doesn't count as much for you. Likewise, the pay is minuscule. One of the things you learn writing a budget is just how much more a senior academic makes than a postdoc. It's depressing both how large the ratio is, and how relatively low the higher figure actually is.
Of course the whole process is a vicious cycle: you can't be PI, so you don't have PI positions on grants on your CV, so you have a hard time getting a permanent job, and so you can't be a PI... You just spend three or four months working on a proposal, sacrifice your dignity to the gods of the funding agency, ask someone else to take 90% of the credit, and prepare for hard work. On the plus side, you might just get paid enough to live and do what you love.
If this is a service economy, why is the service so bad?