Here is the link to the study.
"This raises the inevitable question. If we ever could clone a prehistoric species...should we?"
This find raises no such question. Proteins have nothing to do with cloning.
For that you need DNA. We can reconstruct the genomes of some ancient animals that died within the last few tens of thousands of years and were preserved in frozen strata. Clever reconstructions are necessary to put the fragments back together, but there are usually errors and gaps that must be filled in from related modern organisms. Older DNA is probably hopeless for organism reconstruction, though the fragments can be used for taxonomic work.
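The fragment-reassembly idea can be illustrated with a toy sketch (my own illustration, not a real assembler): greedily merge reads that share an overlapping end. Real ancient-DNA assembly is vastly more involved, with damage models and guidance from the genomes of related modern species.

```python
# Toy illustration of sequence assembly: merge reads on overlapping ends.
# Real ancient-DNA pipelines are far more complex (this is NOT one).
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that is a prefix of b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def merge_two(a, b):
    """Join b onto a if they overlap, else return None."""
    n = overlap(a, b)
    return a + b[n:] if n else None

reads = ["GATTACA", "TACAGGT", "AGGTCCA"]  # made-up fragments
genome = reads[0]
for read in reads[1:]:
    genome = merge_two(genome, read) or genome
print(genome)  # GATTACAGGTCCA
```

Real pipelines align millions of short, chemically damaged reads, which is exactly where the errors and gaps mentioned above come from.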
Keep in mind that by sending earth microbes we're giving life there a 3.8 billion year head start.
No we aren't. There is chemical evidence that life existed within 300 million years of planet formation (i.e. about as soon as compatible conditions existed). We have actual fossils of life that formed 950 million years after planet formation.
... when it was first published in Science last week, and I was surprised they were devoting any space to it.
The physicist had no insights to offer, just opinions about far off fanciful speculations unconnected with any current real science. The same interview could have been given by most any SF fan, and many SF authors could have offered far more substance and insight.
Here is Gros's original paper, which was the hook on which the interview hung. Not a terrible paper at that, providing some interesting summaries about the evolution of the Earth and about planetary stability. But the "Genesis mission" seeding stuff is just SF hand-waving, even in the full paper. And the whole notion is based on the very questionable premise that organism-ready planets that do not already have their own biology established are common ("The objective of the Genesis mission is after all to give life the chance to prosper in places where it has not yet a foothold..."). Life on Earth may have become established within 300 million years of its formation - i.e. about as soon as compatible conditions existed.
Absolutely. And companies like Uber are recruiting real live people to devote their actual working lives and resources to supporting the company's profit-making business, with specific promises about the terms of work and pay. Then changing them (always in the negative direction) without warning or appeal.
This is why companies everywhere need regulation. Crazy abuse of workers for profit will happen unless standards are imposed and enforced; otherwise it is always a race to the bottom. Uber sounds like it is turning into a sweatshop on the street.
Like "Moon is a Harsh Mistress" it is fiction, written to entertain.
As I commented above, as far as I can tell - based on all the evidence you provide - this project you are part of is just you typing up web pages describing your project concept.
Let us know how much money this project currently is funded for, how many people are on its staff, and its timeline for building the first self-bootstrapping Seed Factory. Can you show us any actual equipment designs or prototypes, or tell us who is preparing same? Anything real?
It's not. That's why we are building the first self-bootstrapping automated factories here on Earth.
We are? When is the first self-bootstrapping automated factory going to be completed? Where is it? Who is funding it?
All you linked to are a few web pages you wrote yourself, which simply describe your very, very high-level concept for a program to do this. Despite the numerous bullet points there are no actual details, just concept verbiage.
Building an actual self-bootstrapping automated factory on Earth is absolutely essential before we can start talking about putting one on the Moon - for reasons that should be obvious to anyone. Until such a thing exists this is just fiction. As far as I can tell, at the moment all plans for building a self-bootstrapping automated factory on Earth, much less the actual operating factory, are fiction also.
There is less to this "self-bootstrapping automated factory" you allude to than even the Mars One fake project (which at least has a staff and collects real money).
The "Great Filter" is a very poor answer, IMO. A Great Filter before where we are now is bad science on the level of thinking we're at the center of the universe: no, sorry, we're not special.
Could there be some future hurdle that many civilizations fail to jump? Sure. But there is no reason to expect an alien civilization to think the way we do about anything, really. To propose that all civilizations would be blind to some danger is absurd.
Remember the uncertainty in the Drake equation is many orders of magnitude, and even so it doesn't much matter for the Fermi Paradox. A Great Filter that takes out 90% or even 99% of civilizations doesn't solve the paradox. It only takes one civilization that builds von Neumann probes.
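To put rough numbers on that point (a quick sketch of my own; the starting figure is an arbitrary assumption, not a measurement):

```python
# A "Great Filter" that kills 99 of every 100 civilizations still leaves
# plenty of survivors if the starting count is large. The starting count
# below is an arbitrary assumption for illustration only.
candidates = 1_000_000         # assumed civilizations arising over a galaxy's history
survivors = candidates // 100  # the filter eliminates 99%

print(survivors)  # 10000 -- far more than the single one needed to launch probes
```

Even if the true count were a thousand times smaller, ten civilizations would survive the filter, and the paradox would stand.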
Good post. You hit the salient points pretty nicely.
Agreed that invoking a purely speculative, it-always-happens-no-matter-what, genuine and permanent extinction event for advanced civilizations is basically a magical solution to the Fermi Problem.
It has been argued that in fact we have the history of many (most?) societies that reached a high level of organization and then collapsed - and so I have seen a "civilization lifetime" parameter calculated from historical societies used in the Drake Equation. And that is a fair point. But none of the collapses were permanent; new civilizations have always arisen afterward, so it does not really support the extinction filter idea at all.
As I wrote on this thread below, extreme improbability of an advanced technological civilization arising in a biosphere is at least a partial explanation, since the historical evidence is consistent with this idea. I did not mention, though, that the industrial revolution itself seems something of a fluke: it was completely unexpected, and even with more than two centuries to study it, and abundant records and evidence, it is still not clear why it happened.
Also I did not point out the research about the habitable zone of the Universe, the region of space and time where the conditions permitting technological civilization could arise. This requires a very benign stable biosphere for half a billion years, since minor perturbations (on a cosmic scale) still bring about great extinction events. What with quasars and other active galactic cores, exploding stars, migrating planets and colliding bodies, necessary concentrations of heavy elements, etc. it turns out that a fairly small volume of cosmic history contains the necessary conditions. When you have enough "extremely unlikely" events in the chain, even the vast Observable Universe is perhaps not vast enough.
Then too, how far do we think a von Neumann probe society would end up sending probes? The two major galaxies in the Local Group are 2.43 million light years apart. The next closest galaxy group is 10 million light years away. At that distance even a 1% c probe takes a billion years. The closest galaxy cluster to ours is 53 million light years away. That's a billion years even at 5% c. And then there are the great voids in the Universe, separating superclusters, which are 200-600 million light years across. Even at substantial fractions of c the Universe is probably not old enough for a civilization to arise and send a probe across those. So at some scale distance does become a true barrier that technology and time cannot cross.
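The arithmetic behind those figures is just time = distance / speed; with distance in light years and speed as a fraction of c, the crossing time in years is the distance divided by that fraction. A quick sketch using the numbers above:

```python
# Crossing time in years: distance (light years) / speed (fraction of c).
def crossing_time_years(distance_ly, fraction_of_c):
    return distance_ly / fraction_of_c

print(crossing_time_years(10e6, 0.01))   # next galaxy group at 1% c: ~1e9 years
print(crossing_time_years(53e6, 0.05))   # nearest cluster at 5% c: ~1.06e9 years
print(crossing_time_years(300e6, 0.10))  # a 300 Mly void even at 10% c: ~3e9 years
```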
Good summary. You read the original paper I see. I was going to prepare a summary myself, but you beat me to it, and I don't think I can improve upon it.
By observing the patterns of evolution of life on Earth one can infer a large part of the solution to the Fermi Problem - there is no general trend in evolution toward human-style intelligence with its complex symbol manipulation, communication, and complex tool making.
Examples of evolutionary trends that show up repeatedly include convergent evolution and the filling of ecological niches, which happen quite predictably. If the specific adaptations leading to human-style intelligence are at all likely, we should see them appearing repeatedly and independently in the evolutionary record.
But in the history of life on Earth the appearance of human-style intelligence appears to be a real fluke, one which only very, very recently seems to have given our species a marked survival advantage.
There are about 60,000 vertebrate species today (let's assume that vertebrates are the only group of organisms that can develop intelligence). If we take the estimate that 99.9% of all species that have ever existed have gone extinct, then this makes a history of 60 million evolutionary experiments over a span of 525 million years. Yet only the Simian branch of the Primates developed the dexterity adaptable to complex tool making - 60 million years ago.
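The back-of-envelope count works like this: if 99.9% of species are extinct, today's species are the surviving 0.1%, so multiply by 1,000:

```python
# If 99.9% of all species that ever existed are extinct,
# living species are the surviving 0.1% (1 in 1000).
living_vertebrates = 60_000
total_species_ever = living_vertebrates * 1000

print(total_species_ever)  # 60000000 -- ~60 million evolutionary "experiments"
```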
Once the simian pre-adaptations were set toward manual dexterity, binocular vision - really all of the evolutionary tool-kit that hominids eventually exploited - do we see any trend within that group toward tool use? Are there multiple independent branches within the simians that start using tools? No there are not; only one branch leads to that, the Apes (Hominoidea), and that superfamily emerged 20 million years ago. Is there a trend within the Hominoidea of multiple branches developing complex tool use? Again there is not. Orangutans for example split off 20 million years ago, but their adaptation pattern appears stable over that time; behaviorally, orangutans today seem similar to their distant ancestors. This pattern is observable in each such branch of the Hominids (Great Apes). The Great Apes have existed for at least 8 million years, but none of the branches that split off from the Homo lineage has shown any tendency to follow the pattern of tool making and brain growth that Homo did. The other Great Apes have been stable in their brain size and propensity for simple tool use, but not tool making, for millions of years.
It is only within the genus Homo, which arose 2.8 million years ago, that we start to see multiple experiments in tool-making species with rapid brain growth appearing; this trend seems a real evolutionary fluke.
And finally, intelligence has not really been shown to provide any marked survival advantage for the species possessing it until very recently. Within the last 70,000 years modern humans (who have existed in their present form for around 250,000 years) appear to have undergone a population bottleneck in which the entire human race shrank to about 2,000 individuals - a close brush with complete extinction.
Humans remained a rare species until about 40,000 years ago, when the first population surge occurred, bringing human numbers up only to levels similar to many other large mammal species (hundreds of thousands to the low millions) by 13,000 years ago. And only then did intelligence help humans start out-performing all other mammal species.
So this whole pattern suggests that the stable pattern of the last half-billion years - many tens of millions of large complex animal species, and no trend toward human-style intelligence - is the norm, and could be expected to continue indefinitely. But a long series of freak events (which we are still in the early stages of revealing and unraveling) seems to have led just one species to civilization, and even there it was a late emergence and might not have happened at all if the species had not made it through that bottleneck.
Industrial Revolution counting is a bit of a problem. The first two Industrial Revolutions are pretty much agreed on. The First (of course) from 1770 to 1850, when factories and steam power revolutionized the textile industry and transportation, and the second with the rise of the chemical industry and assembly line production from 1870 to 1914. Widespread use of electricity and the internal combustion engine after 1920 is often considered the Third Industrial Revolution, but some people consider it an extension of the second.
Then we have the advent of computers, the "Digital Revolution." It seems clear that this was, indeed, revolutionary, but it is not usually called the Fourth Industrial Revolution because, while important, it seems less profound than the first three, which utterly transformed civilization beyond recognition. So just giving it its own name seems appropriate.
Similarly, I think we should call what is happening now the "Cybernetic Revolution", rather than trying to decide whether this is the Third, or Fourth, or maybe even Fifth IR. It is truly revolutionary, harnessing the cumulative power of the digital revolution to bring about truly unprecedented levels of task automation. And it will, I believe, resemble the First IR more than the other two in a very important way - large numbers of jobs are going to be eliminated rapidly, with nothing to replace them. The First IR threw almost 20% of Britain out of work between 1770 and 1800, creating a wave of petty crime, a huge population of paupers, Dickensian slums, and poor houses - essentially prisons for people who had committed no crimes. The new industrial economy did not provide enough jobs to restore full employment until 1840, or even later - 70 years of destitution.
Although there are many doubters, to me it doesn't matter whether this one person reached that age (his relatives may argue either way); what matters is that it won't be uncommon in 70 years (those in their 50s and 60s could go beyond that). A number of technologies are reaching their tipping point.
So far we have not come up with a single intervention of any kind, "technology" or not, that increases the human maximum lifespan by a single day. Nothing that actually slows down aging in humans. Nothing. What we are doing is preventing premature death. We are getting a greater fraction of people living to the same maximum longevities already observed.
So far the only interventions that actually extend observed maximum longevity in rats are regimes of privation that would be considered torture in humans, and could only be maintained by perpetual imprisonment (like lab rats).
Tell us about some of these promising "technologies".
You mean physics isn't finished yet!? Oh the horror!
Physics has always been "in a muddle", from the time before Newton, in the sense you assert, since there has never been a time when we thought we understood it all.
There was a short time at the end of the 1800s when some made the silly claim that physics was complete, around 1888 when electromagnetic radiation was discovered. Except for the Ultraviolet Catastrophe, the classical prediction that all hot objects would radiate unbounded energy at ever higher frequencies, and the photoelectric effect, already discovered in 1887, which could not be explained. And then new unexplained physics started showing up every couple of years in the 1890s: the non-existence of the ether shown in 1892, the discovery of X-rays, radioactivity, the electron, Curie's work showing that an enormous mysterious energy source existed inside the atom...
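For the curious, the Ultraviolet Catastrophe can be shown in a few lines (my own sketch using standard textbook formulas, not anything from the comment): the classical Rayleigh-Jeans law grows without bound as frequency rises, while Planck's 1900 law, which resolved the catastrophe, falls off.

```python
import math

# Spectral radiance of a hot body at temperature T, two ways.
# Rayleigh-Jeans (classical): B = 2*nu^2*k*T/c^2     -- diverges at high nu.
# Planck (1900): B = (2*h*nu^3/c^2)/(e^(h*nu/kT)-1)  -- falls off at high nu.
h = 6.626e-34  # Planck constant, J s
k = 1.381e-23  # Boltzmann constant, J/K
c = 2.998e8    # speed of light, m/s
T = 5000.0     # temperature, K

def rayleigh_jeans(nu):
    return 2 * nu**2 * k * T / c**2

def planck(nu):
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

for nu in (1e13, 1e14, 1e15, 1e16):  # infrared up toward the ultraviolet
    print(f"{nu:.0e} Hz  RJ={rayleigh_jeans(nu):.3e}  Planck={planck(nu):.3e}")
```

The two laws agree at low frequencies, but the classical curve keeps climbing forever, implying infinite total radiated energy; that absurdity is what the "catastrophe" label refers to.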
...As for the latter: we still don't really know how dark matter works, and maybe it has its own forces (some oddball ones have been proposed).
And that is the motivation discussed in the cited paper, that this could be related to dark matter.
Asserting that we "know" that dark energy is a fifth force, in the same sense as the other four forces in the Standard Model, is claiming more than we actually know at this point. Maybe it is, but there are no good theories at this point that make it one, and it could be something quite different from the particle/force models physicists have been working with. Physics derived from the behavior of the Cosmological Constant in General Relativity may be some really new physics.
"But what we need to know is, do people want nasally-insertable computers?"