No, I haven't submitted a paper on it. It's just a concept that I'm still working on at this point, and I want to have a proper simulation to back it up.
Oh, god, space technology is full of brilliant hacks. For example, New Horizons' radio. It has two amps connected to one dish, designed as a primary and a backup. But while it was en route, an engineer hit upon an idea to have them both transmit at the same time through the same dish, doubling the bandwidth. Normally that wouldn't make sense, except that the amplifiers have signals with different polarization, and these can be separated back out on Earth.
Great, except for one problem. The second amplifier was designed as a backup; the two weren't planned for simultaneous operation - so there's not enough power to run them both and everything else at the same time. There's barely enough power to run just the radios - and I mean, it's not like you can just shut off the flight computer to free up more power. Well... actually, that's exactly what they do. When they have a ton of data accumulated that they want to get to Earth and no critical science to do, they spin the craft up like a bullet to keep it stable and the dish pointing at Earth. Then they shut down the whole guidance and control system and pretty much everything else on the craft not essential for reading and transmitting data. It stays in this mode for a week or two, until all of the onboard data is transmitted, then they spin it back down so that they can do things like take pictures once again.
Quite a few of us do have them, yes, and that includes "environmentalists". Sure, they're not without their problems, environmentally, but they have quite a few upsides as well.
What sort of environmentalists have you been hanging around with? Environmentalist opposition to dams is so well known that "blowing up dams" is one of the cliche stereotypes of "eco-terrorists".
The aspect of Price-Anderson that people complain about is that the US government foots the bill for the vast majority of costs in the event of a catastrophic accident.
Sure, but what I was pointing out (in a roundabout way) is that the same is effectively true of any large-scale infrastructure system, especially when it comes to power generation on a massive scale. Doesn't matter if the cost comes from a hydroelectric dam that fails, or a coal ash slurry dam failure, or a major oil spill, or indeed a release of radionuclides.
What on Earth are you talking about? Did the government foot the bill after the Deepwater Horizon incident? After any of the coal ash slurry failures? Of course not, the companies responsible did, and it cost them an utter fortune. The difference here is that unlike with nuclear power, their liability is uncapped. With nuclear power, the liability in the case of catastrophe is a cost borne by taxpayers.
If that much money is at stake there are many ways for those that earn money off of the business to protect themselves from damage. Bankruptcy is always cheaper than insurance.
Which is why BP and the coal mining companies responsible are now bankrupt?
And FYI, industries carrying major risk are effectively required to have what amounts to insurance against those who go bankrupt. It's called Superfund, and it's supported by taxes on polluting industries - a "polluter pays" principle. Price-Anderson is based on a "public pays" principle. The money to clean up in the event of a major nuclear disaster (over $12B) doesn't even come from a levy on the nuclear power industry. In fact, there is no money there for such a cleanup; the government is just supposed to come up with it if it happens. Fukushima, for example, is expected to cost over $100B in direct cleanup costs alone, let alone the much larger potential liability for claims.
So, it doesn't matter if the nuclear industry doesn't have insurance, since many/most other human endeavours on that scale don't either.
Um, yes they are. You mention Deepwater Horizon. Are you unaware that it was insured, with liability coverage?
To wit, the Exxon Valdez spill and the legal aftermath. It didn't seem to hurt Exxon nearly as much as it did Prince William Sound.
To wit, once again, the company didn't go bankrupt. They minimized the cost through a very effective legal campaign, of course. The government did not socialize the damages; it remained their responsibility to pay them. The fact that they managed to weasel out of having to pay a lot of what they should have paid doesn't change who the responsible party was. Nor does it make it logical that the solution to companies like Exxon weaseling out of payments is to have the government assume liability for major disasters and let those who caused them off the hook.
The Soviets were hardly unique in terms of bad reactor design. Have you seen the design used for the British Windscale plant? It makes you want to hit your head against a wall when you read about it. They literally just stuck canisters of fuel into holes in the wall, pushed them in by hand with ram rods, and hoped that the old canisters would fall out the back into a narrow trench of water. The designers got mad and nearly derailed the project when one physicist insisted on putting a really trivial pollution scrubber on the stack; they taunted him over it afterwards for wasting money. Now, saying "canisters" makes them sound fancier than they were; they were basically glorified aluminum cans full of highly flammable uranium, stuck into a hot reactor. Then they changed the fuel mix they put inside without redoing any of the engineering - including filling some of them with even more flammable stuff like lithium and magnesium metal. And then they cut the cooling fins off the canisters. Their monitoring was so poor that when the system inevitably caught fire, they didn't notice it for days. They then went down there and started poking around in a hole with a ramrod; it came out covered in molten uranium.
Chernobyl was a paragon of safety compared to Windscale.
Yeah, it seems thorium has become the nuclear-reactor-hype of the day, the ShinyNewTech replacing PBMRs. The pattern repeats. I wish these people would at least google "ShinyNewTech disadvantages" before spouting off about how ShinyNewTech is the savior of the world.
First off, who's extolling the virtues of hydroelectric dams? Dams usually fall on environmentalists' hate lists at around the same place as coal, give or take a few slots.
Extolling the virtues of wind or solar, yeah. But you better believe a wind farm operator will be sued if a turbine falls on someone's house, or a solar thermal plant if their mirrors misalign and blind a pilot. And for that matter, you better believe that a hydroelectric dam operator will be sued if their dam breaks (at least in the first world). And most companies willingly insure their large projects as a hedge against risk.
The aspect of Price-Anderson that people complain about is that the US government foots the bill for the vast majority of costs in the event of a catastrophic accident. The power plant operators only need to insure enough to foot the bill to insure against minor accidents, something most operators would want to do anyway to protect themselves. Many people find the capped liability to be a highly distorting influence on the market, socializing the risks while keeping the profits private.
Actually, no. But I'll be able to in the future if needed.
The waste issue (as well as inherent safety) is part of the reason that there's so much research on ADSRs right now (note: the article says that an ADSR "would use thorium as a fuel", but it's not actually limited to thorium; it can use any subcritical fissile core). Spallation can rip apart the long-lived actinides that don't have a sufficient (n, gamma) cross section to prevent their accumulation in nuclear waste. And of course, since the core is inherently subcritical by design - simply not enough neutronicity under any condition to sustain a chain reaction on its own - when you shut the beam off, fission ceases instantly (though you still have decay heat, like with any other nuclear power plant). The spallation source provides no more than about 10% or so of the neutronicity, but it's the amount needed to push the core over the edge.
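To make the "shut the beam off and fission stops" point concrete, here's a minimal toy sketch (my own illustration, with made-up numbers, not taken from any actual ADSR design): in a subcritical core, each source neutron's chain of progeny forms a geometric series, so the steady-state power is just the external source rate amplified by 1/(1 - k_eff). Zero source means zero chain-reaction power.

    # Toy illustration of subcritical multiplication in an accelerator-driven
    # system.  All values are made up for illustration only.

    def fission_power(source_neutrons_per_s, k_eff, energy_per_fission_J=3.2e-11,
                      fissions_per_source_neutron=0.4):
        """Steady-state fission power of a subcritical core driven by an
        external neutron source.  Each source neutron and its progeny form a
        geometric series 1 + k + k^2 + ... = 1/(1 - k_eff), so power scales
        linearly with the source and diverges only as k_eff -> 1."""
        if not 0 <= k_eff < 1:
            raise ValueError("core must be subcritical (0 <= k_eff < 1)")
        multiplication = 1.0 / (1.0 - k_eff)
        fission_rate = source_neutrons_per_s * fissions_per_source_neutron * multiplication
        return fission_rate * energy_per_fission_J

    beam_on  = fission_power(source_neutrons_per_s=1e18, k_eff=0.97)
    beam_off = fission_power(source_neutrons_per_s=0.0,  k_eff=0.97)
    print(f"beam on:  {beam_on/1e6:.0f} MW of fission power")
    print(f"beam off: {beam_off:.1f} W (chain reaction cannot self-sustain)")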
I have my own very radical variant on the concept of accelerator-driven fission that I'm working on simulating now in Geant4 (although that was probably a poor choice of software; apparently their thermal scattering codes are really immature... as far as CERN is concerned, once particles get down below the MeV range they're usually not particularly interesting). But anyway, the concept is to have a core with literally zero neutronicity - a lithium-burning reactor. The basic concept is as follows:
1. A planar proton beam is delivered by one or more high power linac beamlines. Commercial-scale linacs - without any improvements in technology - are expected to cost $5-20 per watt. The particular design would call for very high voltage (~16MV) klystrons to drive it - and not simply to reduce size (more on this shortly).
2. The proton beam bombards a fragment-emitting target inside an axial magnetic field in a vacuum. Deceleration efficiency is estimated at over 90% in fragment reactors due to the lack of Carnot losses (according to the published research on the subject). The resulting HVDC will be directly converted to the klystron voltage used in producing the electron beam that drives the linac. About 60% of the energy of spallation goes into fragment production. Fragments will be drawn away from the fragment target en route to the collector via a slightly expanding axial magnetic field. Fragment collection allows for automatic isotope separation.
3. The maximum power output of a fragment reactor is limited by its surface area and its ability to radiate heat. Fragment-emitting targets can be either electrostatically suspended dust or rapidly rotating thin fibers or planes of target material, in order to radiatively cool without melting. Spallation targets, for efficiency, need to be high-Z materials, such as lead, tungsten, mercury, etc. Tungsten is particularly attractive due to its high melting point of 3695K. High-Z metal-rich ceramics are also possible targets, with very high melting points. The temperature of the chamber's beryllium walls being radiated to will be around 1050K. This means heat exchange between a ~3000K emitter (4.6e6 W/m²) and a 1050K receiver (6.9e4 W/m²), or about 4.5MW per square meter (there's a quick sanity check of this radiative balance after the list). In short, this allows for a surprisingly compact core, limited more by the length necessary to ensure a sufficient proton spallation cross section.
4. Neutrons emitted by spallation (at a cost of 30-40 MeV per neutron) are heavily biased by energy level. High energy neutrons are biased in the forward direction, while lower energy neutrons scatter with less of a forward bias. As a result, the high energy neutrons (>8 MeV) predominantly continue forward in the high-Z target, where they receive a better neutron multiplication ratio than they would in beryllium, while the lower energy (2.5-8 MeV) neutrons are predominantly multiplied in the beryllium walls, which is the only effective multiplier for such an energy range. The net neutron yield after multiplication should be around one neutron per 20 MeV of input energy.
5. The beryllium walls, being the recipient of about 2/3rds of the proton beam's energy, are effectively a giant array of cooling channels. As the neutrons are still relatively high energy at this point, the neutron cross sections are not huge and the choice of coolant not critical. The coolant temperature should reach about 1000K and thus allow for around 50% efficient power generation.
6. The neutrons continue scattering outwards past the beryllium and need to be thermalized. This is done in stages to progressively lower their temperature, as well as to insulate each section from the next. The first stage would ideally be loose-fill graphite. This stage would operate at a minimum temperature of 500-600K to prevent the accumulation of Wigner energy. The still fairly high neutron temperature means that there's some degree of choice in wall material, based on whether one wants to keep the neutron losses to a small fraction of a percent (say, carbon-carbon or 90-Zr) or up to a percent or so (steel).
7. The next neutron cooling stage is a little more sensitive to neutron loss. There are many moderator possibilities. I like the possibility of supercritical CO2 at around 100 atm (roughly the same pressure that one would want in the cooling channels in the beryllium), although lower pressure moderators are certainly an option. A sparse fibrous or porous insulation poor in metallic cations and rich in carbon, nitrogen, and/or oxygen would limit thermal heat flow without offering significant neutron capture. (Wigner energy is not as much of a concern in fibrous materials and carbon-carbon.) The outer wall should ideally have a low neutron cross section - plastic, composites, carbon-carbon, and zirconium (natural or 90Zr) are all options to keep the neutron losses at a fraction of 1%. The neutron temperature would be lowered to about 200K.
8. The third neutron cooling stage is rather sensitive to neutron loss. It can be heavy water ice, dry ice, or compressed helium, with sparse carbon- and oxygen-rich insulation as needed. The temperature is lowered down to near the coolant temperature, which may be ~80K in the case of liquid nitrogen, ~55K in the case of isobutane, or lower in the case of compressed helium. This region begins to become somewhat sensitive to incidentally captured radiation, such as gamma, as it takes a dozen-ish joules of energy to remove one joule of heat from it. The outer wall needs to be made of a low cross-section material - materials like aluminum and steel are not options at this point.
9. The fourth and final neutron cooling stage is liquid helium. There are no other options with realistically low neutron cross sections except for extremely expensive materials like 15N and tritium. The temperature should be as low as reasonably possible without requiring expensive 3He-based refrigeration systems - 1.5-2K. However, it should be kept from forming a superfluid, due to the extreme thermal conductivity superfluids present. Helium has a rather low thermal conductivity on its own, so additional insulative material is probably not needed. While conductive inflow of heat to this stage should be minimal due to the great thickness of insulation formed by the prior stages, incident capture of nuclear energy absolutely must be minimized (this is where accurate simulations become critical) - in particular, gamma from a wide range of nuclear processes elsewhere in the reactor. While helium is a very poor gamma absorber, every joule of energy captured in it equates to having to spend hundreds of joules of cooling energy. If gamma capture proves too great, additional periodic gamma absorbers can be placed (more on this shortly).
10. Periodically breaking up this stage is the reason for all of this neutron cooling: the 7Li capture assemblies. We have to cool the neutrons to boost the lithium (n,gamma) cross section sufficiently. Each assembly consists of a thermally resistive wall capable of maintaining a vacuum, with extremely low neutron capture and scattering cross sections and a high gamma cross section. There is unfortunately only a rather short list of candidates meeting this spec - namely, 90-Zr and 208-Pb foams. Fortunately, processes for foaming both metals already exist - foamed lead and zirconium are available on the open market. 90Zr is estimated to cost about $300 per kilogram, which would be affordable in this context. 208-Pb is more uncertain. It can be found naturally enriched up to about 90% in thorium ores, but as far as I have been able to find, there are no estimates as to what it would cost to produce. One paper estimates that 99% pure 208-Pb would cost about $7000 per kilogram to produce from ordinary lead, but probably significantly less from naturally enriched lead. Regardless of the choice of material, the interior surface would contain cooling channels. The coolant would be gaseous helium, possibly in multiple channels at different temperatures to optimize cooling efficiency.
11. Inside each assembly would be a series of thin (100um) 7Li foils. To help maintain structural integrity, increase heat tolerance, and decrease flammability during maintenance, they can be alloyed with 90-Zr (maintenance should not be commonly needed; more on that later). The (n, gamma) capture cross section is about 1/8th of the scattering cross section, so a lot of neutrons will scatter, increase in energy, and many will leave the assembly. However, they will overwhelmingly be re-cooled in the helium (depositing an irrelevantly trivial amount of energy) and return to the foils for repeated passes, as the scattering distance at such low neutron energies is so small. The spacing of the foils will need to be proportional to the axial magnetic field strength - for a 10T field, it works out to about 3 centimeters. Heat captured from incident betas or gammas will be radiated to the assembly walls.
12. Any (n, gamma) reaction in the lithium produces 8Li, which decays in about 0.8 seconds by releasing an incredibly powerful 16 MeV beta. The byproduct is helium. The very thin foils provide a poor beta capture target; the betas are collimated by the magnetic field and drawn away from the foils by the field's gradual expansion. The result is a monoenergetic 16MeV electron beam leaving the reactor. This 16MeV beam is fed straight into the linac klystrons.
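Here's the quick sanity check promised in point 3 - a back-of-envelope Stefan-Boltzmann estimate of the net radiative flux between the hot fragment target and the beryllium walls. The temperatures are the ones in the list above; ideal black-body emissivity is my simplifying assumption.

    # Net radiative heat flux between a ~3000 K fragment target and ~1050 K
    # beryllium walls (point 3 above).  Assumes ideal black-body emissivity.
    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

    def blackbody_flux(T_kelvin):
        """Radiated power per unit area of an ideal black body."""
        return SIGMA * T_kelvin ** 4

    emitter  = blackbody_flux(3000.0)   # ~4.6e6 W/m^2 from the hot target
    receiver = blackbody_flux(1050.0)   # ~6.9e4 W/m^2 radiated back by the walls
    net = emitter - receiver            # ~4.5 MW per square meter of target area

    print(f"emitter {emitter:.2e} W/m^2, walls {receiver:.2e} W/m^2, "
          f"net {net/1e6:.1f} MW/m^2")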
Let's look at the overall energy picture. A good superconducting linac achieves around 85% efficiency, all associated hardware systems included. By starting with HVDC for beam production, and especially an already-produced electron beam of the desired energy, we only stand to improve this number. Let's say an average of 88% efficiency and a beam energy output of 100MW, meaning 113MW consumed. Each 20 MeV of proton energy, after multiplication, gives us a neutron. The vast majority of those neutrons (we'll say 90%) give us 16 MeV of 7Li beta energy, making up all but 6.2 MeV of the initial proton energy. So 113MW in, 69MW out. However, that's only the start of it. 60% of the beam's energy used to make those neutrons goes into fission fragments, and we capture that energy to electricity at 90% efficiency. So that's 54MW more for our accelerator, for a total of 123MW. But we're not done - that's only our non-Carnot energy. Virtually all of the energy of thermalizing these neutrons happened at high temperatures and is captured in the cooling channels in the beryllium and graphite, turning into electricity at around 50% efficiency. And while neutron multiplication is an endothermic process, the resulting tungsten fragments both from spallation and proton capture will yield more energy via decay than they consumed (splitting up nuclei heavier than iron yields a net release of energy), thus adding more heat to the system. Furthermore, energy lost to heat in our ion deceleration grids, our klystrons, and other components of our linac also goes toward electricity production; altogether, the thermal energy production should add perhaps 35MW more electricity generation.
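For readability, here's the same budget as a tiny tally, using only the round numbers from the paragraph above (they're estimates, not measured values):

    # Tally of the energy budget sketched above, using the round figures quoted there.
    wall_plug_MW = 113.0   # electricity the linac draws to make a 100 MW proton beam (~88% eff.)
    beta_MW      = 69.0    # electricity recovered from the 16 MeV 8Li betas
    fragment_MW  = 54.0    # direct-converted spallation-fragment energy (60% of beam at ~90%)
    thermal_MW   = 35.0    # Carnot-cycle electricity from the hot cooling channels

    direct_MW  = beta_MW + fragment_MW       # 123 MW of non-thermal electricity
    surplus_MW = direct_MW - wall_plug_MW    # ~10 MW left over after feeding the accelerator
    print(f"direct {direct_MW:.0f} MW, surplus {surplus_MW:.0f} MW, "
          f"net before cooling costs {surplus_MW + thermal_MW:.0f} MW")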
In short, our 16MeV 7Li betas and our spallation fragments power our accelerator with a little bit to spare, and all of the waste heat from all processes is net yield. The losses in all of our accelerator components are already accounted for in the accelerator's efficiency, although we have to spend a (currently unknown) amount of energy on keeping our third and especially fourth neutron cooling stages cold. And this is where there's no substitute for accurate simulations, unfortunately.
Let us guess that our cooling costs eat up the beta/fragment surplus energy and thus we have the 35MW thermally-generated electricity for sale, at near 100% capacity factor (more on that shortly - for now let's say 95% to be pessimistic). Our 100MW beam, as described previously, will cost $500M-$2B. The plant would produce 290 million kWh per year. Over a nuclear-typical 50 year lifespan, that's 14.6 billion kilowatt hours. It's easy to see scenarios where this could be cost effective, esp. if accelerator technology advances over this time period. The electricity is high availability and carbon free, which commands a premium. Our fragments are isotopically sorted, which means not only simple waste handling, but the ability to sell valuable isotopes, for example for medical needs. Having a very high flux cold neutron source yields many options for production of difficult-to-produce isotopes, allowing one to sacrifice a bit of electricity generation for very valuable side revenue streams. Nuclear waste can be burned in the spallation target - not only adding another revenue stream (waste disposal), but increasing the neutron flux at the same time (fissile materials release more evaporation neutrons upon spallation).
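A quick sketch of that arithmetic - the net power, capacity factor, and lifetime are the assumptions stated above, and the last lines just spread the accelerator's capital cost alone over the lifetime generation:

    # Rough plant economics using the assumptions stated above.
    net_power_MW    = 35.0          # electricity for sale after feeding the accelerator
    capacity_factor = 0.95          # pessimistic availability assumption
    lifetime_years  = 50            # nuclear-typical plant lifetime
    accel_cost_usd  = (500e6, 2e9)  # $5-20 per watt for a 100 MW beam

    hours_per_year = 8760
    annual_kwh   = net_power_MW * 1e3 * capacity_factor * hours_per_year   # ~2.9e8 kWh/yr
    lifetime_kwh = annual_kwh * lifetime_years                             # ~1.46e10 kWh

    print(f"annual output:   {annual_kwh/1e6:.0f} million kWh")
    print(f"lifetime output: {lifetime_kwh/1e9:.1f} billion kWh")
    for cost in accel_cost_usd:
        # accelerator capital cost alone, spread over lifetime generation
        print(f"  ${cost/1e9:.1f}B accelerator -> {cost/lifetime_kwh*100:.1f} cents/kWh")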
The fuel is cheap and abundant. 7Li makes up 93% of natural lithium, versus U235 making up 0.7% of natural uranium. Lithium can be enriched (which may not even be necessary here) by cheap chemical processes, versus uranium, which is much more difficult. Raw lithium is far cheaper than uranium, and many orders of magnitude more abundant, being one of the most common elements in the universe. Earth's oceans contain 2.4e14 kilograms of lithium; scaled by 16 MeV per 7Li, that works out to 9.6e22 watt hours, or about 5 million years of our current electricity consumption; about half that amount (as per the numbers above) would be sellable, or 2.5 million years. It's actually similar to that of D-T fusion, since the tritium for D-T fusion is bred from lithium, yields 17 MeV per fusion, and since that energy is captured thermally, only about half of it is turned into sellable electricity.
As mentioned, while a uranium-fuelled reactor uses a fuel that's found in only 0.7% concentrations, enriched to several times higher, and then burns through only half of that, a lithium reactor uses a fuel found in a mostly pure state and can burn through almost all of it. And while uranium fission releases more energy per reaction - ~200 MeV per fission versus about 16, or 12.5 times as much - a U235 atom is 33.6 times heavier than a 7Li atom. So even versus pure U235 fuel burned up completely, 7Li contains more energy potential for release per unit mass. Compared to real-world situations, the 7Li needs replacing in the ballpark of a hundredth as often per unit of energy sold (and rather than replacement due to burnup, it would need periodic annealing, or melting down and re-forming, for structural reasons). Now, of course the 7Li targets are only part of the reactor that the neutrons affect - but the same is true of conventional fission reactors as well as fusion reactors.
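Put as a quick calculation (using only the 16 MeV and 200 MeV figures and the isotope masses quoted above):

    # Energy released per kilogram of fuel, comparing complete burnup of pure
    # U235 with complete conversion of 7Li, using the figures quoted above.
    MEV_TO_J  = 1.602176634e-13
    AMU_TO_KG = 1.66053906660e-27

    def energy_density_J_per_kg(mev_per_reaction, mass_amu):
        """Energy release per kilogram if every nucleus reacts."""
        return (mev_per_reaction * MEV_TO_J) / (mass_amu * AMU_TO_KG)

    u235 = energy_density_J_per_kg(200.0, 235.0)   # ~8.2e13 J/kg
    li7  = energy_density_J_per_kg(16.0, 7.0)      # ~2.2e14 J/kg

    print(f"U235 (full fission): {u235:.2e} J/kg")
    print(f"7Li  (full burnup):  {li7:.2e} J/kg  ({li7/u235:.1f}x U235)")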
The high energy density of 7Li also raises possibilities for spaceflight applications as an intermediate stage between fission and fusion - although probably in a different form (rather than a spallation neutron source, the ideal would probably be a highly enriched plutonium fission neutron source, with all of the neutrons not needed to keep the chain reaction going being used toward 7Li bombardment).
There would be no more reason for people to NIMBY a 7Li reactor than a D-T fusion reactor. Both produce only incident radiation and shut down instantly (excepting delayed decay heat). Unlike D-T fusion, there's no proliferation risk (no tritium to divert).
So, that's the basic concept. But to pin it down more - where the neutrons actually get absorbed, how much energy gets deposited in which layer and what the generation potential / cooling costs are, what the optimal neutron production geometry is and how efficient it is, and so forth - requires an accurate simulation. With thermal scattering. So hopefully I'll be able to get that working in Geant4 sooner or later.
You can see the parameters - the cost of covering a catastrophic accident beyond the Price-Anderson minimums is not included in them.
Also, people should be careful not to confuse the prices on the calculator with the price of electricity that they pay. Power plant generation costs and consumer purchase rates are not the same thing. Industrial rates are at least closer to generation costs, but even they add a couple cents per kWh to the cost.
Not forever, as black holes don't last forever. They evaporate due to Hawking radiation.
Have you never seen anything about the twin paradox? Even in the most superficial introduction to relativity, you get that time slows down when something is moving relative to the observer.
This is ridiculous. Even with "the most superficial introduction to relativity" you should know that if a person departs Earth moving at nearly c and comes back, far less time will have passed for them than for someone who stayed on Earth the whole time.
From the perspective of the people on the ship, the journey is no longer a distance of 4.3 light years. If the spaceship is going 90% of c relative to Earth, then in the spaceship's frame it will take them 2.08 years to make that trip, and also in their frame the distance from Earth to Alpha Centauri is contracted to just 1.87 light years.
First off, apart from trying to add confusion, why did you change the velocity from the one I gave? Secondly, from a trip travel time perspective, it doesn't matter whether you view it as time dilation or length contraction. The trip at 0.999c takes 70 days from the perspective of the crew. That's the beginning and end of it right there. From their perspective, it's as if they got there moving far faster than the speed of light, as if there were no limits on how fast they could keep accelerating. With an infinite supply of energy, they could travel the 4.3 light years in what they perceive to be 7 days, 7 hours, 7 minutes, or 7 seconds (let's ignore G-forces here, or how to have such vast quantities of energy at their disposal). The crew of a spacecraft experiences no "upper limit" to how fast the universe will allow them to traverse a distance.
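For anyone who wants to check the arithmetic, here's the standard special-relativity calculation for the figures in this thread (4.3 light years at 0.9c and at 0.999c), ignoring acceleration and deceleration:

    # Trip time as seen from Earth vs. as experienced by the crew,
    # for a straight run at constant speed.
    import math

    def trip_times(distance_ly, beta):
        """Return (Earth-frame years, crew proper-time years) for speed beta*c."""
        gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
        earth_years = distance_ly / beta      # coordinate time in Earth's frame
        crew_years = earth_years / gamma      # proper time on the ship
        return earth_years, crew_years

    for beta in (0.9, 0.999):
        earth, crew = trip_times(4.3, beta)
        print(f"v = {beta}c: {earth:.2f} yr on Earth, "
              f"{crew:.2f} yr ({crew*365.25:.0f} days) for the crew")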
What are you talking about? I just did "echo 1,2 > test.csv" then opened test.csv in OpenOffice Calc, then saved it as test2.csv from the save dialog. No complaints. Then I clicked to close it. No complaints about unsaved changes. Did you actually try that out before you commented? I don't have any of the other programs you mention on this computer, so I'll pick another - let's try OpenOffice Writer. Made a text file, opened it, saved it as a
I'm sorry, but GIMP's change is totally broken behavior. The most common workflow for GIMP (as you can see from all of the rage on the forums when these changes occurred) is not long complex workflows, but simple changes to jpegs or pngs. Open, change it, save it, close it. What sort of moron do you take people for to think that you have to "protect" them from choosing a file format that doesn't save layers, and instead try to make them always save whatever they do in a format that no other programs support? As if a dialog warning them that it doesn't save layers and asking them if they want to flatten it, like GIMP used to do, isn't enough? What on earth is the point of *banning* people from typing in a filename with the suffix that they want to use in the save dialog, and instead making them use an entirely different menu? Actually two different menus, depending on context, only one of which has a keyboard shortcut. It's just ridiculous. We're not preschoolers, we don't need the hand-holding.
An interesting side effect of this would be that it would actually be theoretically possible to send a probe into a black hole and get a signal back from it. If you're REALLY, REALLY, REALLY patient, that is
(more realistically, one would likely try to probe the insides by making micro black holes inside colliders and trying to get them to consume particles before they collapse, then looking for traces of information in the aftermath of the collapse)
And from the traveler's perspective the universe is consistent and there's no information loss either. They still see an apparent horizon, a place where time appears to stop, but they never reach it, it always recedes ahead of them. To them, the area beyond that apparent horizon is also not part of spacetime, but nothing ever manages to enter it so no information appears to be lost.
They of course eventually get ripped apart by tidal forces, but their information doesn't disappear into a "no-hair" singularity; it remains to be released when the black hole evaporates. As a black hole evaporates, the history of the particles falling deeper and deeper into it becomes observable to the outside world (albeit incredibly distorted, and with the matter ripped to bits).
Again, that's at least my understanding of Hawking's "black holes don't actually exist" concept, and it makes logical sense to me. From the perspective of a traveler, they're just falling to their deaths in an extreme sort of collapsed star. From the perspective of an outside observer, they've fallen into a spot where the collapsed star has ripped a hole in spacetime that won't start back up (from our perspective) until the "hole" boils off. Nothing ever lost, nothing ever undefined, always part of our universe, just effectively frozen temporarily in time. From our perspective.
Not only do they bundle it with adware, but they've apparently sabotaged GIMP too - for example, they apparently changed the save dialog so that you can only save XCF files and have to click through a "you have unsaved changes" warning when you export to a different format. They added very difficult-to-precisely-adjust sliders for things like brush size. They took out 16 bit color support. Basically, sourceforge has really totalled GIMP.