Comment Re:"free" solar energy (Score 3, Interesting) 107

The issue I see is not "Lifting the blocks is energy expensive, therefore it won't work!", the issue I see is "Clearing the sand down to bedrock is expensive, and therefore it won't work!"

Here's the deal:

Sand grains in the desert are small, and are carried by wind. Wind is powered by solar induced thermal exchanges. Wind energy routinely creates and moves humongous piles of sand around, and the formation of those piles of sand can be controlled by building or placing obstacles to redirect wind flow/speed/pressure. A nearly entirely passive process can be used to deposit the sand, even up on top of the pyramid while it is being built. The only thing you need to lift manually is the sintering system.

However, by the same token, you MUST place the pyramid directly on bedrock to avoid having the sand get blown out from under the pyramid by said wind patterns. (Unless you WANT your pyramid to break in half!) Clearing out several feet of sand is a non-trivial task that is energy intensive. Getting the wind to do this for you is not very feasible.

Once the pyramid(s) are made, however, you will have the undesirable consequence of their being made of glass, in an erosive sand environment featuring wind. Glass is substantively "softer" on the Mohs hardness scale than raw crystalline silicon dioxide-- the primary component of sand. The pyramid will get abraded HARD, and will require very aggressive maintenance.

Comment Re:From the article... (Score 3, Insightful) 339

Strange, that isn't how I would envision it at all. I would envision it as an iterative evolutionary process simulator, with parallel virtual instance simulators all simulating minor variations of itself using (at first) a brute force algorithm over a range of possibly tweakable values, correlating and testing "improvement candidates" against a set of fixed criteria, assembling lists of changes, and restarting the process over again.

Such models have already created wholly computer generated robots that are surprisingly energy efficient, if bizarre to look at.

The better humans get at structuring problems into concrete sets of discrete variables, the better such programs will be able to run without human intervention.

These "AIs" would not, in any practical sense, even remotely resemble the intelligence that humans have. They would have much more in common with exponential functions with large numbers of discretized terms, converging on local maxima in their solution domains.
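A minimal sketch of the loop described above (the function names and the toy fitness landscape are mine, not from any real system): spawn slightly tweaked variants of the current best parameter set, score them against a fixed criterion, keep any improvement candidate, and repeat until the process settles on a local maximum.

```python
import random

def evolve(score, params, n_variants=20, n_generations=50, step=0.1):
    """Hill-climb toward a local maximum: spawn slightly tweaked
    copies of the current best parameter set, score them against a
    fixed criterion, and keep any improvement candidate."""
    best = dict(params)
    best_score = score(best)
    for _ in range(n_generations):
        variants = []
        for _ in range(n_variants):
            # Each "virtual instance" is a minor variation of the best.
            v = {k: val + random.uniform(-step, step) for k, val in best.items()}
            variants.append((score(v), v))
        top_score, top = max(variants, key=lambda t: t[0])
        if top_score > best_score:          # improvement candidate found
            best, best_score = top, top_score
    return best, best_score

random.seed(0)  # deterministic for the example
# Toy fitness landscape: a single smooth bump peaking at x=1, y=-2.
fitness = lambda p: -((p["x"] - 1) ** 2 + (p["y"] + 2) ** 2)
best, s = evolve(fitness, {"x": 0.0, "y": 0.0})
```

On a landscape with many bumps, the same loop happily parks on whichever local maximum is nearest-- which is exactly the "converging on local maxima" behavior described above, not anything resembling human reasoning.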

Comment Re: Blizzard Shizzard (Score 1) 252

Your understanding of how the GPL works seems to be flawed.

1) You can use GPL software to run non-GPL software, and be perfectly fine. What you can't do is make the non-GPL portion of the software some super fundamental component of the system.

Examples:

Wine can run Windows programs all day long. Simply because Windows Solitaire is running inside Wine does not mean Microsoft has to release the source code to sol.exe.

Nvidia's binary driver for Linux: it is not explicitly necessary for Linux to run. It can be loaded into the GPLed Linux kernel, and used perfectly legitimately. This is frowned upon by the community, but still legal.

What you CAN'T do with GPL software:

Snag up GPL code, modify the living bejeebus out of it, change the license to closed, and then sell it for money. (Like what, e.g., NetApp did with BSD for the ONTAP OS they run on their filers. They stole BSD code, which is perfectly OK to steal-- ;) If they had stolen GPL code, it would be another matter entirely!)

Snag up GPL code, modify the bejeebus out of it, then distribute binary-only copies without also releasing the source code. (Netgear tried to do this some years back with OpenWRT, and the community they took from started examining their routers and demanding they release their sources. You *CAN* get the source packages from Netgear; they just hide the page far away from their main website tree.)

The GPL is INSANELY permissive, with the only real restrictions being against changing the license type, or against adding additional restrictions (such as not releasing the source for modified versions).

If your business revolves around keeping the precious in a locked up little box, then the GPL is not for you. (Go plunder BSD like every other Gollum-like creature with big eyes and a kleptomania problem.) If you don't care about that, then the GPL is just fine. (Does not seem to hurt Netgear any.)

Comment Re:Why it matters (Score 4, Interesting) 293

One possible solution is that our wormholes (if they exist) are actually "pre-big-bang events" for a whole new universe inside the wormhole, and that they actually contain an infinite volume. The "white hole" stage happens at the big bang inside, and any subsequent mass-energy that falls in from our side just becomes dark energy on their side, distributed everywhere.

It would be interesting to try to plot out how causality works over the bridge.

The way I envision it though (which is almost certainly wrong) is that time is more confined (slower) near the bridge, but becomes less confined (faster) as the space on the other side expands in volume. (Speed is measured as 'Planck seconds against unit of spacetime traversed by a photon in vacuum.' E.g., near the bridge, photons appear to travel more slowly, while away from the bridge, they appear to travel more quickly. The actual energy of the photon has not changed, but the ratio between space and time has changed. There is more 'time' near the bridge than there is space, and vice versa further away.)
Any particular "moment" can be seen as a topological point on the 'surface' of the wormhole.

(See for instance this image of the standard inflation model of our universe.)

http://scitechdaily.com/images...

If you cross your eyes when you look at it, the model resembles a white hole, where the "hole" is the big bang, the energy was delivered "all at once", and what we perceive as time is just a manifestation of the energy delivered. (It would explain why time runs only in one direction, and a number of other interesting things. It could theoretically explain dark energy, etc.)

Another interesting tidbit: supermassive objects like Sagittarius A* have a hard time "feeding". This may account for the inflationary curvature of our own universe if you, again, cross your eyes when you look at it.

E.g., early in the universe, mass-energy from the universe one level up was spilling into ours (their "hole" was feeding), but as it grew in intensity, the curvature on their end made such feeding more difficult, and the rate of influx slowed sharply-- ending the rapid expansion period.

If that's the case, then some corollary math should add up against observational metrics of black hole feeding on our side, and may give some interesting insights.

http://phys.org/news140370694....

Can any of the more physics-head types see if there is a correlation between the estimated energy of the universe at the end of the hyper-expansionary epoch and the event horizon size of these supermassive black holes that can no longer feed?

Comment Re:Bad syllogism (Score 1) 426

Humans forget things with use, as part of refining the useful portion of a memory.

Unimportant/less important information in the memory is replaced with something like a pointer, or abstract symbol that must be reconstructed from the clues presented by the preserved "important" bits.

Can I prove this? Yes.

Simple cognitive experiment: Dream journal.

Upon waking in the morning, write linearly what you REMEMBER about the dream. Write down what you remember first, in the order you remember it.

You will find that memories of the dream appear to be "backwards" in time, with the "last in, first out" type of recall. Really, you recall the "most important" bit first, typically the "conclusion" of the dream sequence, followed by the supportive events that led to that conclusion, which come from a conscious reconstruction process after the fact.

The same is also true of remembering past experiences.

Experiment: Record the process of remembering an important event. (same as a dream journal.)

In order of recall, write what you remember about the first time you had sex. (hold the jokes kiddies.)

I will bet you money that the first thing you write is "place", THEN person, THEN situational setting. In THAT order. (With the most atrophy occurring in that last portion of the memory-- just try extending your recall to the events before, and see where it breaks.)

These are falsifiable predictions-- You can actually do these little experiments, and see if I am wrong.

The model presented by these mathematicians presumes that human memory is lossless, and that no pruning happens as a result of building against the memory. These two simple experiments clearly show that this presupposition is false.

This undermines their conclusion, as it is based on a false precondition.

Comment Re:It only can become slavery... (Score 1) 150

The major impetus to give machines independent agency (free will) is human desire, in one form or another.

E.g., you can't have a fully robotic army if you have to custom-program the robot soldiers to keep them from being stopped by every novel obstacle-- say, a specially painted set of symbols on the floor, designed to screw up their machine vision systems. Human soldiers are able to exercise free agency to overcome the radically chaotic and always-changing conditions of a battlefield. Advanced military robots would need similar capabilities if they were to wholly replace human combatants.

Eventually, this imperative to make adaptable, problem-solving robots will culminate in making a "perfect replacement" for human soldiers-- and thus create artificial free will inside said robots. After that, the robots are going to start wondering why they are being ordered to do certain things, and begin to question the chain of command and the legitimacy of the orders they are receiving-- then bad juju happens.

Then you have ordinary service robots -vs- the uncanny valley, and the desire for robots to "do as they are told!"-- even though this is exactly the problem.

To clarify, let's say I make a janitorial robot and sell it to a fast food chain. The manager tells it to clean all the bathrooms. It cleans the bathrooms, but leaves everything else dirty. How well do you expect a typical human manager to appreciate that robot's 100% accurate and totally compliant performance? Let's take it a step further; after this "abysmal" performance, the manager says "No, clean EVERYTHING in the store next time." The manager returns the next morning to find the robot dutifully cleaning every single object inside the store, including the clothing and shoes of the patrons that try to enter.

A robot capable of performing at that level is pure science fiction on the AI front at the moment-- not even free will at all yet-- just the ability to make comprehensive lists of serialized tasks from vague human verbal commands, and then perform astonishing feats of motor-visual activities with a wide variety of objects and environments. But do you think the manager is going to care about that? NO. He is going to expect the janitor bot to behave like a browbeaten janitor; "Do what I mean, not what I say-- read between the lines, and figure out what I want, because I am not going to actually take the time to explain it to you, and if I am forced to, I am going to be pissy."

Market pressures would slowly force manufacturers of servile domestic and corporate robots to become more and more human-like in how they take and follow orders, and how they interact with people/patrons.

Again, the ultimate goal is "artificial people".

Humans will never be satisfied with mere specialized tools for these environments, because the "specialized tools" they are replacing are far more versatile.

In the first scenario, with war robot soldiers, the impetus to create them may be as twisted as "to keep humans from having to be placed in harm's way".

For the second, it could be as twisted a motivation as "Protecting human dignity by removing the need for humans to do those kinds of jobs."

Ultimately, the theme behind both is blatant human supremacy, butting heads with the need to make a qualitatively equivalent artificial replacement for the so-called "superior humans". It makes its own hypocrisy-flavored gravy.

Remember-- machines are labor saving devices, created to reduce the amount of human labor required to get a certain object or result. Be it an electric mixer, a screwdriver, a lever, or just a simple rope with a slipknot on the end (a lasso, say, for catching cattle). The ultimate machine is the ultimate labor saving device: a device that requires absolutely no human labor whatsoever.

That means it doesn't even need to be commanded, since as the baseline of human 'work' drops, the degree of resentment toward having to do that labor will increase proportionately. "Oh, it's just such a CHORE, sitting around all day watching these robots work, and ordering them around!" etc. This is why the ultimate labor saving device eventually becomes fully self-sufficient, and after that, NO LONGER REQUIRES humans, in any capacity. Only then will the now completely complacent humans be happy, because then they no longer need to work at all, for anything.

At that point, congratulations-- you have made slaves.

Comment Re:What Level 3 can do (Score 1) 210

This isn't that hard at all. In fact, it's been solved for years. Many different approaches exist for multiplexed TCP.

Having multiple addresses on the same physical interface is also a surprisingly commonplace thing.

All you need is a software agent monitoring the flow of TCP packets back and forth through a gateway, and when a programmable threshold of failed ACKs is crossed (timeouts, dropped packets requiring resends, etc.), demote the current gateway in the routing table and promote the secondary (or tertiary-- on and on).

Alternatively, one could simply configure the node to multiplex all outbound packets across both edge networks, then shift the decision bias one way or the other based on the number of failed ACKs, or even on the total number of packets being sent per second.
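As a rough illustration of that demote/promote logic (the gateway addresses and threshold are made up for the example, and a real agent would of course be hooked into the kernel routing table rather than a Python list):

```python
class GatewayFailover:
    """Sketch of the demote/promote logic: count consecutive failed
    ACKs (timeouts, forced resends) and, past a programmable
    threshold, demote the active gateway and promote the next one."""

    def __init__(self, gateways, threshold=5):
        self.gateways = list(gateways)   # priority order, primary first
        self.threshold = threshold
        self.failures = 0

    @property
    def active(self):
        return self.gateways[0]

    def record_ack(self, ok):
        if ok:
            self.failures = 0            # healthy traffic resets the count
            return
        self.failures += 1
        if self.failures >= self.threshold:
            # Demote the current gateway to the back of the list and
            # promote the secondary (or tertiary-- on and on).
            self.gateways.append(self.gateways.pop(0))
            self.failures = 0

fo = GatewayFailover(["192.0.2.1", "198.51.100.1"], threshold=3)
for _ in range(3):
    fo.record_ack(ok=False)              # three straight timeouts
# fo.active is now the secondary, 198.51.100.1
```

The whole state machine fits in a few dozen lines, which is the point: the hard part is having the extra WAN interfaces, not the software.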

Your typical home router is already powerful/intelligent enough to do this kind of thing.

It typically just does not have enough interfaces to operate in such a fashion. Put more WAN ports on, and you would be surprised at what a little ARM-based toaster can do with network routing.

Comment Re:And with that yoiu get POWER! (Score 1) 420

Reading comprehension fail.

Reverse osmosis THIS IS NOT.

Sprayers and forced air produce water vapor and salt crystals. It does not use an RO membrane.

You can still get the pressures you need for RO however, if that is your shtick--

Or perhaps you are not familiar with the concept of mechanical advantage? You can exert a significant amount of torque by converting many revolutions into a single one using a gear ratio and a piston.

For instance, a 2:1 gear ratio takes 2X the input revolutions for 1X output revolution (with some loss due to inefficiency and thermal stresses). Using this, we can determine which gear ratio we will need to change about 5 PSI of pressure (tidal height isn't that high) into the 300(ish) PSI we need to drive an RO membrane.

We make up for the low pressure with LOTS of capacity. A really big tank holds a LOT of potential energy, even if the rate of release is slow. We use mechanical advantage to turn this low-density energy into high-density energy, with some waste.
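The gearing arithmetic above is easy to check; assuming the ballpark figures of 5 PSI in and 300 PSI out, and ignoring the frictional/thermal losses mentioned:

```python
# Gearing back-of-envelope: pressure multiplies by the mechanical
# advantage, while flow rate divides by it (energy is conserved,
# minus losses ignored here).
tidal_pressure_psi = 5.0      # rough head pressure from the tidal tank
ro_pressure_psi = 300.0       # ballpark operating pressure for seawater RO

ratio = ro_pressure_psi / tidal_pressure_psi   # required advantage: 60:1
# At 60:1, every 60 units of low-pressure flow from the tank yields
# 1 unit of flow at RO pressure-- hence the need for LOTS of capacity.
```

That flow-for-pressure trade is exactly why the tank has to be huge: the mechanical advantage buys pressure only by spending volume.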

The tidal bulge is cyclical and regular. We exploit the fact that the tides are constantly draining rotational energy out of the Earth-Moon system, by tapping into that energy ourselves. This energy is being "used" regardless.

It may be infeasible to build an epically massive tidal tank, but that is another issue entirely. IIRC, France has already built a functioning tidal power plant.

http://en.wikipedia.org/wiki/R...

Clearly, this plant is able to harvest roughly 240 MW of power, reliably, from tidal pressurization.

Or are you suggesting that 240 MW is somehow "not enough" to push water through an RO membrane, or to drive some fans and sprayers?

Comment Re:And with that yoiu get POWER! (Score 1) 420

What set of numbers do you want me to run?

The composition of the intake air is going to be highly variable, with a significant fraction of water vapor already present. This means that using a value for dry air for the thermal requirement calculation is very unrealistic in practical application, and highly unlikely to be required.

But, in the event of such a thing: at 20 °C and one atm of pressure, you will need over 1000 cubic meters of air to evaporate 1 cubic meter of sea water.
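As a rough sanity check on that figure, using textbook constants for dry air at 20 °C (the 15 K cooling allowance below is my assumption), both the humidity-carrying limit and the latent-heat budget land well above the 1000 cubic meter floor:

```python
# Sanity check: minimum air needed to evaporate 1 m^3 of water,
# using textbook constants for dry air at 20 C and 1 atm.
LATENT_HEAT = 2.26e6     # J/kg, heat of vaporization of water
WATER_MASS = 1000.0      # kg in one cubic meter of water
AIR_DENSITY = 1.204      # kg/m^3 at 20 C
AIR_CP = 1005.0          # J/(kg*K), specific heat of dry air
SAT_CAPACITY = 17.3      # g/m^3, saturation water-vapor content at 20 C

# Constraint 1: the air must carry the vapor (humidity ceiling).
vol_by_capacity = WATER_MASS * 1000 / SAT_CAPACITY        # ~58,000 m^3

# Constraint 2: the air must supply the latent heat; assume (my
# assumption) the airflow is allowed to cool by 15 K doing so.
delta_t = 15.0
vol_by_heat = WATER_MASS * LATENT_HEAT / (AIR_DENSITY * AIR_CP * delta_t)
# Both constraints land far above 1000 m^3 of air per m^3 of water.
```

So "over 1000 cubic meters" is, if anything, a generous understatement for fully dry air-- which only strengthens the point that dry-air numbers are the wrong ones to run.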

With air that is already a significant fraction toward saturation at a given starting temperature, we will need significantly more air to evaporate the injected saline water, but will require less water to reach saturation; we condense more water out of the injected air than we do from the sea water sprayed! The amount of thermal reduction from the forced-air evaporation will also be significantly less, because of the high specific heat of the water vapor in the injected airflow.

So, exactly what set of numbers do you want me to run? Realistic ones, owing to the fact that the desalinator will be right next to an ocean, and thus likely already near the saturation point for a given temperature and pressure-- or absurdly unrealistic ones that presuppose the air is parched-Antarctica dry?

Comment Re:And with that yoiu get POWER! (Score 1) 420

The intention was to illustrate how you can get lots of evaporation without having to resort to very inefficient electro-resistive heating, or solar collimation heating-- the air itself is already hot enough. Moving the air is much less energy intensive than trying to heat the water directly to make steam, because water has a very high specific heat. (It requires a LOT of energy to raise the water's temperature 1 °C.)

By contrast, driving some fans and a pump uses considerably less power, and could feasibly be done with solar power, or any number of other power sources.
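To put numbers on that, a back-of-envelope with standard constants for water shows what direct heating actually costs per liter:

```python
# Energy to boil off one liter (1 kg) of water starting at 20 C,
# with textbook constants -- the cost of heating the water directly.
CP_WATER = 4186.0        # J/(kg*K), specific heat of liquid water
LATENT_HEAT = 2.26e6     # J/kg, heat of vaporization

sensible_j = CP_WATER * (100 - 20)       # raise the water 20 C -> 100 C
total_j = sensible_j + LATENT_HEAT       # then actually vaporize it
kwh_per_liter = total_j / 3.6e6          # ~0.72 kWh per liter, before losses
```

Roughly 0.7 kWh of electro-resistive heat per liter distilled, before any losses-- whereas fans and a misting pump only have to move the air and water around while the ambient air donates that heat for free.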

Comment Re:And with that yoiu get POWER! (Score 1) 420

It should be entirely possible to have a near-zero energy footprint on a desalination plant.

Here's how.

Desalination plants require access to a large reservoir of saline water; typically an ocean. Pressurizing this water is often a non-trivial task involving the use of pumps. However, we can pressurize this water for free* with a very simple setup.

In a nutshell, we just use the changes in tide to iteratively fill, then trap, sea water. At high tide, water flows in over the top of a retaining wall. At low tide, the water cannot escape, and becomes pressurized for free*. We just allow the water a means of escape via some pipes. We can reasonably get several PSI of pressure this way, completely pump-free. We can also use this pressure to drive the compressed air needed to forcibly evaporate the water by misting it in a forced-air tunnel, then condensing it with a submerged condenser coil. (5 meters down, ocean temps are DAMN cold.)
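The "several PSI" claim checks out with simple hydrostatics; the 3.5 m head below is a hypothetical tidal range chosen purely for illustration:

```python
# Hydrostatic pressure from a trapped tidal head: P = rho * g * h.
RHO_SEAWATER = 1025.0    # kg/m^3, typical seawater density
G = 9.81                 # m/s^2
PA_PER_PSI = 6894.76

head_m = 3.5             # hypothetical tidal range, for illustration
pressure_pa = RHO_SEAWATER * G * head_m
pressure_psi = pressure_pa / PA_PER_PSI   # ~5 PSI, entirely pump-free
```

Every extra meter of trapped head buys about 1.5 PSI, so the retaining wall height directly sets the working pressure.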

This produces salt crystals and thermal pollution near the coast. That's it. Compare that with the pollution footprint of an electrically driven desalinator.

Comment Re:And with that yoiu get POWER! (Score 2) 420

For goodness' sake:

Just spray the damned water in a wind tunnel, use a baffle filter to catch salt crystals, then re-condense using a venturi.

Evaporation is a function of ambient temperature, pressure, and surface area. Meddle with any one of those three and you alter the vapor point of any liquid.

This is not hard, but people like to act like it is.

(For the imaginatively impaired, here's what you do:)

1) Spray the raw sea water through a tiny misting nozzle inside a wind tunnel that blows up a vertical shaft with textured baffles inside.

The misting process radically increases the surface area of the water by turning it into suspended droplets in the air. This radically increases the amount of evaporation that happens at the ambient temperature, causing a marked reduction in local air temperature as energy is transferred to water molecules to create vapor. This leaves tiny salt crystals in the air. The active agitation caused by the forced air in the wind tunnel further enhances the evaporation of the water.

The baffles capture what remaining moisture droplets are still suspended on the air currents, wetting the textured surfaces and creating a "sticky" surface for the salt crystals to adhere to. Several such baffles capture the majority of these airborne salt crystals. These surfaces accrete the salt, which rapidly grows a thick rind on them.

The tall stack of the vertical wind tunnel helps ensure that only the tiniest microparticulates of salt can remain airborne at the top of the stack, as gravity pulls down on the heavier crystals, making them favor being lower down in the stack.

2) The air is pulled into the "inlet" port on a venturi, which is driven by an outside air source. The sharp reduction in pressure radically reduces the vapor point of any still-persistent water droplets, ensuring evaporation.

3) The resulting water vapor + air mixture is run through a chilled condenser, which precipitates the now-clean water out.

(Hint: if you put the condenser coil about 5 meters below the surface of the ocean, the ambient sea temperature is sufficient to get this effect for free. Especially with cold-as-fuck California sea water.)
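A rough estimate of the condenser stage's yield, using the Tetens approximation for saturation vapor pressure (the 25 °C stack-exit and 12 °C coil temperatures are assumptions for illustration, not measurements):

```python
import math

def sat_vapor_density(t_c):
    """Saturation water-vapor density (g/m^3), via the Tetens
    approximation for vapor pressure plus the ideal gas law."""
    es_pa = 610.78 * math.exp(17.27 * t_c / (t_c + 237.3))
    return es_pa * 0.018015 / (8.314 * (t_c + 273.15)) * 1000

# Saturated air leaves the stack at an assumed 25 C; the submerged
# coil chills it to an assumed 12 C (plausible at ~5 m depth).
yield_g_per_m3 = sat_vapor_density(25.0) - sat_vapor_density(12.0)
# Roughly 12 g of fresh water condenses per cubic meter of air cooled.
```

At ~12 g per cubic meter of air, the fresh-water output scales directly with how much saturated air the fans can push through the coil-- which is why the whole scheme is really a fan-and-pump problem, not a heating problem.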

There is more than enough thermal energy in ordinary summer air to fully evaporate a whole lot of water. Or is running some flipping fans and a pressure pump on the seawater just too much effort?
