
Comment Re:Document first (Score 3, Interesting) 233

No.

It would also be stamped on by management and any competent product owner, unless it was absolutely dripping in tests before he embarked on anything of the sort. If the code is producing the desired numbers but is simply a total and utter mess, no-one is going to thank him for declaring he's going to rebuild it from scratch, and the only way it would be sanctioned at all is if he could absolutely guarantee the same numbers before and after (to within rounding and ordering error). Given the state of the codebase he's talking about, those tests would have to be end-to-end tests since, as others have noted, writing unit tests for legacy code is in general a thankless and time-consuming task. (Then again, so is attempting to build end-to-end tests that exercise every useful codepath.)

I genuinely have no idea how large the codebase at my company is; at a guess we're in the many millions of lines of code (certainly enough to render Intellisense an utterly useless, chugging, unusable piece of shit) and quite possibly more. Some of it is really quite good code with thorough unit test coverage -- that tends to be the more recent stuff. The rest is covered, in principle if not in reality, by a large number of end-to-end tests that at the very least exercise some extremely fragile pieces of code quite effectively. Even with this, rampant refactoring is discouraged, let alone rampant rewriting. It soaks up developer time we can't afford to spend, and the danger of hitting a bug that isn't covered by our end-to-end tests (or, even more infuriatingly, fixing a bug whose results clients have grown to trust) is pretty high. Unless there's a very good reason for an out-and-out rewrite, it's to be very much discouraged. Careful refactoring, rerunning all the unit tests and as many of the end-to-end tests as is practical after every self-contained block of work, is about the only way to proceed.
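To make "the same numbers before and after" concrete, the kind of characterisation (golden-master) check I mean looks roughly like this. A minimal sketch only -- RunPipeline and the golden values are hypothetical stand-ins for the real legacy computation and its captured output:

#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Stand-in for the legacy computation being refactored.
std::vector<double> RunPipeline(const std::vector<double>& input)
{
    std::vector<double> out;
    out.reserve(input.size());
    for (double x : input)
        out.push_back(x * 1.5); // placeholder for the real logic
    return out;
}

int main()
{
    const std::vector<double> input{1.0, 2.0, 3.0};
    // "Golden" output captured from the code *before* any changes were made.
    const std::vector<double> golden{1.5, 3.0, 4.5};

    const std::vector<double> actual = RunPipeline(input);
    assert(actual.size() == golden.size());
    for (std::size_t i = 0; i < actual.size(); ++i)
        assert(std::fabs(actual[i] - golden[i]) < 1e-9); // allow rounding error
    return 0;
}

Rerun something like that after every self-contained step; any drift outside the tolerance is a regression, caught before the clients catch it for you.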

Comment Re:Problems in C++ (Score 1) 386

At least these days every compiler supports #pragma once, which cuts down the noise of the guards to a single line. Not ideal -- I agree we shouldn't need this shit anymore -- but it's a lot less annoying than it used to be. It's still non-standard, of course, but so far as I know at least gcc, clang and MSVC all support it. I don't use any other C++ compilers so I can't speak for anything else.
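For anyone who hasn't had the pleasure, the noise in question (MYLIB_WIDGET_H is just an illustrative guard name):

// widget.h, with a classic include guard -- standard, but three lines of noise:
#ifndef MYLIB_WIDGET_H
#define MYLIB_WIDGET_H

class Widget
{
public:
    void Frob();
};

#endif // MYLIB_WIDGET_H

// widget.h again, with the non-standard but widely-supported alternative:
#pragma once

class Widget
{
public:
    void Frob();
};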

One thing we could very much do with in C++ is an interface class. C++ abstract base classes are fine, but they come with one drawback: there's no standardised way of enforcing a virtual destructor. I'm fine with putting a virtual destructor in every base class but, again, it's just noise. There's no need for it these days. I should be able to declare

interface Interface
{
    void Method();
};

and have that be exactly equivalent to

class Interface
{
public:
    virtual void Method() = 0;

    virtual ~Interface(){}
};

and have

interface Interface
{
    void Method();

protected:
    ~Interface(){}
};

equivalent to

class Interface
{
public:
    virtual void Method() = 0;

protected:
    ~Interface(){}
};

Not difficult to do, but it's not in the standard. It would cut down the noise for those of us who make extensive use of interfaces, and significantly reduce the risk of accidental memory leaks (from deleting a derived object through a base pointer that lacks a virtual destructor).
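In the meantime, the closest approximation I know of is to write the boilerplate once and inherit it -- a rough sketch, assuming C++11, and no substitute for proper language support:

// Helper class carrying the destructor boilerplate exactly once.
class InterfaceBase
{
public:
    virtual ~InterfaceBase() = default;

protected:
    InterfaceBase() = default;
};

// Interfaces inherit the virtual destructor instead of redeclaring it.
class Interface : public InterfaceBase
{
public:
    virtual void Method() = 0;
};

It cuts the noise, though it still can't enforce anything -- nothing stops someone writing an abstract base class that skips InterfaceBase entirely.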

Comment Re:Balloons (Score 1) 174

It's impossible to say for certain at the minute since we can't prove anything, but my guess would be both -- we don't understand what can generate a metric on cosmological scales (in the sense of how it's composed of billions upon billions of metrics that are individually best modelled by Schwarzschild or Kerr-Newman), but if we did understand how to do so, it would most likely still, by necessity, be a phenomenological description. The ideal would be to end up with a situation similar to that of thermodynamics, which is an emergent theory and a phenomenological description, based ultimately, via statistical mechanics, on small-scale physics. Without that, we'll be where thermodynamics was in the 19th century: a phenomenological description with no convincing way of demonstrating its validity.

Comment Re:Balloons (Score 1) 174

A bit of both -- an objection to FLRW itself, and to dust solutions. On a fundamental level, the FLRW solution can never be more than an approximation to the real universe until we find a way of mapping the small-scale physics up to a universal scale; since no well-defined averaging procedure exists (regardless of whether we're doing 3d averaging, 4d averaging, or some kind of statistical averaging) that does this with any rigour and generality, we can't actually ever state that the universe is FLRW "on average". On a more practical level, I don't think anyone would seriously question the applicability of FLRW+perturbations in the radiation-dominated universe, or indeed in the matter-dominated universe up to a pretty late redshift (arbitrarily, somewhere between z=5 and z=1). I certainly wouldn't, myself, and I've done quite a bit of work on perturbation theory in the early (ie pre-CMB) universe without feeling the slightest twinge. Where it gets a bit more dubious is when those perturbations actually begin to grow: in particular when the linear perturbations grow through 1 (ie when delta = delta rho/rho >= 1), at which point the perturbative expansion is very definitely long since dead, and also when the *second-order* perturbations grow through the first-order ones, at which point the expansion is also basically dead. This happens in the relatively recent, and therefore dust-dominated, universe. One reason averaging became so in vogue is that if we could find an effective dark energy, we'd solve both the cosmological constant problem *and* the coincidence problem in one go -- the dark energy is dominant now exactly because we live when there are massive inhomogeneities in the universe.

Of course, it hasn't worked, but that doesn't mean it isn't still an issue - in particular that we simply do not know how to write this theory down, so all we're working with is (extremely successful) phenomenology.

FLRW is unarguably a subcase of LTB; I'm not sure anyone would disagree with that. (LTB is also a subcase of Szekeres, while Minkowski space is a subcase of FLRW with constant scale factor. Minkowski is also a subcase of Schwarzschild with a vanishing mass, etc. There are reams of inhomogeneous solutions that are related in some pretty convoluted ways; "Inhomogeneous Cosmological Models" by Krasinski attempts to at least catalogue them. It's not the most readable of books, but he did try and make it comprehensive.)

Comment Re:Where is the center? (Score 1) 174

I may not have explained it very clearly. The point is that locally the Earth obviously is in a "special place" -- no other planet in the entire universe has exactly the same conditions around it, in a relatively sparse arm relatively distant from the centre of a relatively large spiral galaxy in a relatively small galaxy cluster that's on the outer edge of a supercluster -- but that if you zoom out a bit and look at things on average, on scales of roughly a megaparsec and above (which is the approximate size of a galaxy cluster, something like 100 times larger than our galaxy), it all begins to look eerily similar. On larger scales (let's say around 50Mpc and upwards) it all turns into a similar-looking mush of little bubbles where everything is basically indistinguishable from everything else. Attempts to pin it down properly put this "homogeneity scale" at somewhere between 75 and 250Mpc or so.

That's the point -- the Earth isn't in a special place in the universe, in that where we are isn't marked out as anything special. In an average sense, picking a random spot in the universe will lead to a view indistinguishable from the one we have from Earth -- if you ignore local eccentricities such as stars, voids and mighty blasts of raw radiation from supermassive black holes in galactic centres. That is, the assumption is that from anywhere in the universe, if you ignore everything within perhaps a kiloparsec or so, it's all going to look very much of a muchness. In particular, the CMB is going to look basically the same: a featureless wash of radiation at a constant temperature, with little ripples of about one part in 100,000.

Comment Re:Where is the center? (Score 1) 174

Surprisingly, no. There are suggestions of an anisotropy in the Hubble rate in different halves of the sky, but the errors are too big for this to be significant. That's the main problem with doing anything of the sort -- the error bars on the observations are just too big until we get far enough away (as in, taking velocities from galaxies far enough away that there are loads of them) to beat them down by sheer power of numbers.

But what that could let us do is put a constraint on how far away a black hole of one form or another would have to be, since the level of anisotropy it would introduce would be related to the distance in one way or another. For instance, in Randall-Sundrum braneworld models it would depend on what is known as the "dilaton", which is basically the distance between the branes but manifests on the branes themselves as another scalar field. Any such constraint would obviously be model-dependent, meaning that the results for, say, a universe moving towards a hole in an RS-type model may or may not be very different to those in a more sophisticated model in some other approximation to M theory. About the only thing I'd say in general is that *any* directionality is going to induce an anisotropy. If the directionality is subtle enough that it's drowned in the noise of very local observations, then the influence of the whole is itself going to be correspondingly minor. How minor I obviously can't say, since I've not looked at it in anything more than speculation entirely unbacked by analysis, but I'd be stunned if it introduced more than relatively small effects.

That's not to say it wouldn't be an interesting scenario (if of course it hasn't already been examined -- as I said, I'd be surprised if someone hasn't already looked at it, at least in the context of RS models), nor whether the corresponding impacts would be significant. But it would be a careful balancing act to keep something like a black brane far enough off that the anisotropies it induces stay within the existing error bars, and yet still have it produce significant impacts. Not to say it can't be done, just that it would take a bit of care.

Comment Re:Where is the center? (Score 1) 174

I can't actually imagine a setup that would lead to that in vanilla relativity, but even if we assume it could exist, it would introduce strong anisotropies into the universe. The very nature of falling into a black hole introduces a directionality, which would immediately produce anisotropies that we don't observe. If the hole were on roughly the same order of magnitude in scale as the universe, this would be even more obvious, since rather than just having a general directionality constant throughout the universe, we'd now have a directionality depending on space (and time), focused on the centre of the hole. This would leave characteristic signatures on the CMB that aren't observed.

If you wanted instead to embed this in higher-dimensional theories -- so the universe is, say, falling into a 7+1d hole -- then frankly no-one can give a full answer, since it's not a setup amenable to full analysis (but then, neither is the one I've been discussing in the previous paragraph). I'd imagine it's possible to get a setup that doesn't introduce such anisotropies in the dimensions we're observing. I'm thinking of a setup where, for instance, we're on a 3+1d brane and falling along a 5th dimension into a hole of some higher dimensionality, which extends infinitely (or as near as is sufficient to kill any anisotropies) parallel to the 3+1d brane. You might even be able to get a toy setup along these lines using something like the Randall-Sundrum models that were all the rage 15 years back -- these are composed of two 3+1d branes suspended in a 4+1d spacetime, parallel to one another. If you make one brane entirely "black" then you'd have a setup with one brane on which a universe can live, separated along a 5th dimension from a black brane. I genuinely have no idea if anyone's looked at such a system, nor whether it can be realised in an RS model, but I wouldn't be surprised if someone's actually examined it. If not, it would certainly be interesting to look at.
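For reference, the warped Randall-Sundrum metric takes the standard form (as I remember it from the original papers; y is the coordinate along the fifth dimension and k is a curvature scale set by the bulk cosmological constant)

ds^2 = e^(-2k|y|) eta_{mu nu} dx^mu dx^nu + dy^2

with the two branes sitting at fixed values of y. Making one of them "black" in the sense above would be a modification of this setup, not something the vanilla model already contains.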

Comment Re:Where is the center? (Score 4, Interesting) 174

Yes you are, and it's because you're not educated in the field.

"The article assumes that planet earth is the center of the universe"

No it doesn't. Cosmology does not assume that the Earth is in the centre of the universe. It assumes the exact opposite. It's even known as the "cosmological principle" -- and it's a fundamental axiom in cosmology. Without it we wouldn't have the model that we're talking about. Instead we'd have Lemaitre-Tolman-Bondi models, which are isotropic around the Earth but definitely not homogeneous.

Basically, building the cosmological model goes like this:

1) Observe the CMB. This is all around us, at 2.7K, and is very nearly exactly the same in every direction. It is, in the jargon, isotropic around the Earth.
2) Assume that gravity on large scales is accurately modelled by a geometric theory of gravity (such as, but not restricted to, general relativity). We now know that on average the universe should be described by a metric that is at least isotropic about a point near to Earth.
3) Since this is obviously absurd, as you've picked up on, apply the cosmological principle. If the Earth is not in a special position in the universe, which it would be an astonishing act of hubris to assume it is, but the universe looks isotropic around the Earth, then there are only two choices. We can either dump the cosmological principle and assume the universe is centred on Earth -- which is... untenable, given the vast scale of the observations -- or we can assume that the universe looks isotropic around every point. This implies that it is homogeneous and isotropic: every point is the same in every way.
4) We can now tighten our previous assumption and assume that the universe is modelled by a metric that is isotropic around every point. That means it is composed of what are known in the jargon as "maximally-symmetric" 3d surfaces. This leads us naturally and inevitably to the Friedmann-Lemaitre-Robertson-Walker metrics (written out below), which give rise to the "big bang" theory you dislike so strongly.
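For the curious, the metric in question takes the standard form

ds^2 = -dt^2 + a(t)^2 [ dr^2/(1 - k r^2) + r^2 dOmega^2 ]

where a(t) is the scale factor carrying all the dynamics and k = -1, 0 or +1 picks out the open, flat or closed case respectively. That single function a(t), fed through the Einstein equations, gives you the whole "big bang" dynamics.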

There are obviously problems here. The phrase "on average" is used frequently and without rigour, and that rigour cannot, as yet, be provided. We have assumed the nature of gravity twice over: first that it is geometric in origin, and second that it is described by general relativity, which is basically the simplest geometric theory of gravity. Fitting to observation also leads us, naturally and inevitably -- unpleasantly so, if we're being honest -- to dark energy and dark matter. But there is a need to "create these terms", in that the theory demands them, and the theory is *astonishingly successful*.

One of the main successes of FLRW cosmology is that it first predicted a characteristic wavelength of ripples on the cosmic microwave background, which was then observed (and which can be used to determine how much dark matter there is relative to normal matter), and that that same wavelength should also be imprinted on the large-scale distribution of galaxies. This was *also* observed, and is exactly where it was predicted to be by combining CMB and supernova observations. This is amazing not least because the theory predicts the CMB forming when the universe is around 300,000 years old, while the large-scale distribution of galaxies is observed when the universe is pushing on a bit, probably around 10bn-12bn years old. The wavelength in the galaxy distribution is therefore extremely stretched compared to that seen in the CMB. And, as one might expect, the level to which it is stretched is extraordinarily sensitive to the cosmology -- it doesn't take much of a change in the levels of matter, dark matter and dark energy to put it slap bang in the wrong place entirely.

Doing this unfortunately means we need to put dark energy in the model. Unsurprisingly, this isn't as ad-hoc as it seems, since there are multiple candidates for a dark energy, but it's still a bit unfortunate since not many of them are profoundly appealing. (Perhaps the most appealing is also the original, proposed by Wetterich in 1987, since he derived it with reasonable motivation from relatively rational particle physics. Unfortunately it doesn't, quite, work.)

Thankfully research is ongoing amongst people who understand the need to "create a term", in a variety of directions, from challenging the fundamental assumptions that build up the model, to exploring the potential candidates for a dark energy.

Comment Re:What happens to the photons? (Score 1) 174

As a cosmologist, I can comment that the entire theory being talked about is based on a particular solution of GR -- defining a particular metric living on a manifold and endowing it with dynamics via the Einstein field equations -- so I don't have much argument with what you said.

Comment Re:Balloons (Score 4, Interesting) 174

"doesn't that violate some fairly fundamental laws of physics?"

Do you not think that one of the many thousands of theoretical and observational physicists who've worked on this model for decades would perhaps have spotted such a flaw at some point in the last eighty years...? Of course it doesn't violate fundamental laws of physics. The whole thing is based tightly on general relativity, so regardless of whether you believe that relativity is being applied accurately to cosmology or not (I don't, not entirely), there is no suggestion of it violating any fairly fundamental laws. Conservation of mass/energy is absolutely guaranteed in relativity -- in two tightly-coupled ways: directly, and via the Bianchi identities, which are nothing more than geometric identities along the lines of, but more complicated than, the Pythagoras theorem. Which one you take as more fundamental depends on your philosophy, but in relativity each implies the other.
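Spelled out, since it's short (standard textbook relativity, nothing exotic):

G_{mu nu} = 8 pi G T_{mu nu}    (Einstein field equations)
nabla^mu G_{mu nu} = 0          (contracted Bianchi identities -- pure geometry)
=> nabla^mu T_{mu nu} = 0       (covariant conservation of energy-momentum)

So a violation of local energy-momentum conservation would be a violation of the geometry itself.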

The balloon analogy is fundamentally flawed, because it relies on one imagining (to the extent that one can -- no-one can actually do so, since our brains didn't evolve to imagine 4d, let alone 5d) a 3+1d balloon embedded in a 4+1d spacetime, through analogy with a 2d balloon surface embedded in 3d space. This inevitably leads people to query, understandably, where the centre is, and to wonder if it's in the middle of this 4+1d space. It also leads people to ask, understandably, why the galaxies aren't expanding.

Basically, they're not expanding because the theory doesn't apply within them. There are two ways of viewing this: the simple (but inaccurate), and the headfuck. The simple way of looking at it is that the cosmological expansion is extremely weak and very easily overpowered by other, more local, forces, so galaxies are easily held together because the gravitational pull between stars in a galaxy is overwhelmingly stronger than the pull of the cosmological expansion. This, unfortunately, does suggest there's some kind of balancing of forces and some kind of spatial expansion, which isn't strictly speaking true.

The headfuck is something that's actually almost impossible to model but straightforward to understand in relativity. The theory that the balloon analogy is based on is Friedmann-Lemaitre-Robertson-Walker (FLRW -- we're probably missing a name or two in there, as well) cosmology, based on what's known as the FLRW metric, which does nothing more than give the Pythagoras theorem in a 3+1d universe made up of an inverted pyramid of flat 3d spatial surfaces stacked one on top of the other along some time direction. (They could also be a load of nested spheres, or more bewilderingly a pile of saddles, but the data supports the flat model and there's currently no real reason to favour the so-called closed or open models.)

The FLRW metric applies on scales at which the universe seems to look the same in every direction and wherever you move to -- in the jargon, it's "homogeneous and isotropic". Things like the SDSS surveys demonstrate how this can happen quite well; take a look at http://www.a.phys.nagoya-u.ac.... which is the collection of data from the first SDSS survey (which ended about a decade back, I think; we're on SDSS-III or thereabouts now, but I like this figure). On small scales this is obviously really knotty and far from homogeneous, but if you zoom out and squint slightly (to give a form of smoothing) then everything looks the same. Doing this a bit more rigorously, which is notoriously model-dependent, puts the "homogeneity scale" somewhere in the order of 100Mpc, or about a hundred times larger than a typical galaxy cluster.

That's the scale at which the FLRW model applies -- and that's the scale at which its consequences can be said to hold. Below that, nothing it says should be taken without a massive pinch of salt. This is particularly true in clusters, which are what is known as "virialised" and detached from the cosmological expansion -- they're better described by models such as the Lemaitre-Tolman-Bondi or Szekeres metrics. On smaller, galactic scales, we'd be much better off describing galaxies with some horrifically over-complicated cylindrical metric with a whacking lump in the middle. And no matter how you spin it, that thing isn't going to show an expansion.

A caveat to this is that basically any metric can be patched along its edges to an FLRW. So we get things like the vogue for LTB universes in the last decade, given that we can get the effects of dark energy without any actual dark energy if we tune the lump in the middle enough. (This doesn't work, for various reasons. Basically, you can fit the supernova data, but you don't half fuck up the CMB. And the baryon acoustic oscillations. But it's an interesting toy model that shows the kind of impact inhomogeneities can actually have without any exotic matter whatsoever.) Or you can patch a Schwarzschild, modelling a single star, to an FLRW and see what would happen to planetary orbits in some mythical universe composed of a single star in an expanding spacetime. (They don't like it much.) But if we were able to build an accurate model, we'd actually be patching a foamy structure of LTB and Szekeres metrics together in such a way that they "average out", in some manner we also can't define, to FLRW. Because FLRW itself does not exist -- the universe merely behaves on large scales as if it were FLRW.

Comment Re:Let's do the math (Score 1) 307

OK, I think we've probably run into the same ambiguity that I mentioned in a different comment in this thread. There are basically two definitions that we might have of this "infinite" business. I'm working from the viewpoint of general relativity -- or some other metric-based theory of gravity -- since I want to work with a theory capable of making quantitative statements. Given this, I have (at least) two definitions of "infinite" here:

1) The model we are driven to employ, the Friedmann-Lemaitre-Robertson-Walker model (or some mildly inhomogeneous or mildly anisotropic generalisation of it), says that the universe extends spatially to an extent that is infinite in two of the three subcases and finite in the other. Our data is only good enough to state that none of these is preferred over the others (though given how tightly any non-flatness is constrained, the flat case is *theoretically* preferred, as a theoretical rather than observational bias).
2) The past light cone is finite-volumed, as it obviously is, pending some revolution in our understanding.

So far as I understand your point, you're stating that the past light cone is finite-volumed. If that's what you're saying, excellent, we're in agreement. The problem is you used the word "space", which I interpret to mean "space", and the spatial extent of the universe is basically untestable except by reference to the density of the universe: if it is at the critical density then it is flat (and infinite); if it is less than the critical density then it is hyperboloidal (and infinite); and if it is greater than the critical density then it is spherical (and finite).
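To put a formula behind that (again, standard material): the critical density is

rho_crit = 3 H^2 / (8 pi G)

where H is the Hubble rate. Writing Omega = rho / rho_crit, the three cases are Omega > 1 (spherical, finite), Omega = 1 (flat, infinite) and Omega < 1 (hyperboloidal, infinite).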

If we're not talking about spatial extent but instead talking about the 4-volume of the past light cone then we agree.

(I might also add that in GR the entire manifold is basically set -- the "future" of an event (mapped out by the future light cone), the "present" (the regions connected to it by spacelike geodesics), and the "past" (mapped out by the past light cone). From this point of view, which is one that GR forces on us (although it may very well be in contradiction with quantum mechanics), we can't tell whether the universe is temporally finite or not. We do know that the past light cone appears to be finite, but we cannot know what the future light cone is, since we don't have backwards-propagating photons to bring us information along it. The future light cone could be finite, which would imply that the universe recollapses in the future. The likelihood right now is that the future light cone is infinite. That would mean that if we're going to generalise what we mean by "infinite" to four dimensions, we still end up with an infinite 4-volume. However, that's ultimately supposition and extrapolation.)

(It can also be commented that we don't actually know that the past light cone itself is finite, since the theory cannot be used to propagate the light cone to, let alone beyond, the singularity. This doesn't actually mean that the universe formed at the singularity; it means that our theory cannot be used to propagate light that near to it -- or anything else, for that matter. It may very well be that in a quantum theory of cosmology there is no singularity, and that we have a bouncing universe. In that event, the past light cone is *not* finite and may very well, in fact, be infinite.)

(This part of it, though, is academic in many ways too since, as I commented in another post, we can't see back anything like as far as the singularity anyway. Our only probe is light, and the universe goes inconveniently opaque at the CMB, which is effectively a photo of the universe when it was a bouncing baby of 300,000 years or so. We simply cannot see beyond this, other than indirectly, unless we manage to observe the gravitational wave background or, even less plausibly, the neutrino background.)
