Comment Thanks for the info (Score 1) 247

You can do something similar with aluminum refining, which uses high power electrolysis. If we look around, I'm sure that other processes can be reorganized to make use of varying supply of electricity.

Thanks for the info. I'll add this and "water desalinization" (from a post further down) to my mental list of solutions.

I had *thought* that aluminum refining required melting the bauxite, which would make it inherently difficult to start and stop, but another poster points out that Alcoa tailors its production in this manner. I'm guessing that a "charge" of ore can be processed in a short amount of time, and that a refinery has a large number of small furnaces that can be individually shut down as needed.
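
To make that concrete, here's a minimal sketch (Python, with made-up numbers - the pot count, per-pot power draw, and supply curve are my assumptions, not Alcoa's actual figures) of how a smelter built from many small, independently switchable furnaces could track a varying electricity supply:

    # Sketch: a smelter sheds or adds load by switching individual pots.
    # All numbers are illustrative assumptions, not real industry data.
    POT_POWER_MW = 5   # assumed draw of one pot line, in megawatts
    NUM_POTS = 40      # assumed number of independently switchable pots

    def pots_to_run(available_mw: float) -> int:
        """Run as many pots as the current supply can power."""
        return min(NUM_POTS, int(available_mw // POT_POWER_MW))

    # Hypothetical hourly supply (MW) from a solar-heavy grid:
    supply = [60, 120, 190, 200, 180, 90, 40]

    for hour, mw in enumerate(supply):
        n = pots_to_run(mw)
        print(f"hour {hour}: {mw:>3} MW available -> run {n} pots "
              f"({n * POT_POWER_MW} MW consumed)")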

Comment Run on sentences (Score 0) 109

Sure, it's easy today to look at the Sun and know it's a ball of (mostly) hydrogen, generating energy by combining those protons in a chain into helium through the process of nuclear fusion.

Sure, it's easy today to look at Slashdot and know that it's all (mostly) clickbait, generating revenue for Dice by tricking viewers into visiting websites whose owners think that they can make money by spraying advertising onto eyeballs in a vain attempt to...

Damn! I never realized how hard it is to make convoluted run-on sentences. So much for my attempt at sarcastic humor.

I have newly-found respect for the Slashdot editors.

Comment Re:Pointing out the stark, bleeding obvious... (Score 4, Interesting) 247

So the plan is to install enough batteries to power the world all night long, and then for a week or two when the weather is bad?

Or is it to put solar all over the Earth and have a massive worldwide power grid to move power to where it is needed?

I suppose either is technically possible, I just don't think either is likely to happen.

How about we build nitrogen-fixation plants near the baseload generation, keep the baseload running all the time, and make fertilizer whenever the energy isn't otherwise needed? Nitrogen fixation can be started up and shut down quickly without damage to the system, and it consumes an enormous amount of energy worldwide.

How about we build a smart grid, which incorporates electric vehicles on home charging systems? Charge the car during the day, then give back some of the stored energy at night when the car's in the garage.

How about we take recycled batteries from aging electric vehicles - batteries that can still hold 80% of their original charge but are no longer good enough for electric vehicle operation - and stack them in warehouses to store and release energy as needed? Do batteries lose capacity at an exponential rate? If so, those 80% batteries should last a long time.
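
As a back-of-the-envelope check (a sketch in Python; the pure-exponential fade model and the eight-years-to-80% figure are my assumptions, not measured pack data), exponential decay would give those packs roughly another decade before falling from 80% to 60%:

    import math

    # Assumption: capacity fades exponentially, C(t) = exp(-k * t), and a
    # pack takes 8 years to fall to 80% (an illustrative guess, not data).
    years_to_80 = 8.0
    k = -math.log(0.80) / years_to_80   # decay constant, per year

    years_to_60 = -math.log(0.60) / k   # total time to reach 60%
    print(f"k = {k:.4f} per year")
    print(f"time to 60%: {years_to_60:.1f} years "
          f"({years_to_60 - years_to_80:.1f} years of second-life service)")

Under those assumptions the pack spends about 10 more years in the warehouse than it did in the car - which is the point of the exponential shape: the fade slows down as capacity drops.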

How about we mount the solar panels with a gap above the rooftops, so that the panels keep sunlight off of the roof, reducing [somewhat] the *need* for energy to be spent on air conditioning?

How about we look for solutions rather than assume that everything will be exactly like it is now, except with problems that cannot be solved?

Comment Re:Ridiculous (Score 5, Insightful) 112

You learn no more from failure than you learn from success. There are many ways to fail and few ways to succeed, thus it is better to learn what to do than what not to do.

This is a rational argument applied to the real world, and it doesn't hold true. Rational arguments are almost *never* true when applied to the real world, unless they start from a fundamental model and build up. (And in that case you can make testable predictions.)

Eighty percent of *first* businesses fail, but only twenty percent of *second* businesses fail - and it's not because people don't try to do it right the first time.

Both Thomas Edison and [IBM head] Thomas J. Watson had extensive experience with failure, and both went on record about it. When someone asked Watson how to increase his success rate, Watson replied, "Double your failure rate." When someone asked Edison how he could continue researching the electric light bulb after failing 5,000 times, he replied, "I haven't failed 5,000 times; I know 5,000 ways that won't work" (source).

The rational argument fails when it's applied to the risk/reward formulation: each time you fail you lose 1x the value of the experiment, but each time you succeed you regain 50x the value of the experiment in profit.
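
A quick worked example of that trade-off (a sketch; the 80/20 split comes from the business statistic above, and the 50x payoff is the figure used here, not a measured constant):

    # Expected value of one experiment under the risk/reward figures above.
    # Assumptions: 80% of attempts fail (cost 1x), 20% succeed (payoff 50x).
    p_fail, cost = 0.80, 1.0
    p_success, payoff = 0.20, 50.0

    expected_value = p_success * payoff - p_fail * cost
    print(f"expected return per attempt: {expected_value:+.1f}x")  # +9.2x

Even at an 80% failure rate, each attempt is worth +9.2x in expectation - which is the whole argument in one line.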

The mantra in the IT world is "fail fast, fail often", which reflects the risk/reward equation very well. It takes almost nothing to set up a website showing your idea to the world, and almost nothing to shutter it six months later.

But once in a while, that idea becomes popular and profitable and you can recoup your investment many times over. That's why people should fail; or rather, not be afraid of failure.

Not because of any rationalization, but because it's historically the route to success.

Comment Google glass choices (Score 2) 112

Google Glass failed, but I suspect Google allowed it to fail through lack of persistent development.

The way most people work is that they try something, it doesn't work, and they give up. I've heard lots of things like "I can't learn to whistle, I've tried" and "I tried that, but it didn't work". Mostly it's amateurs building stuff and giving up on the first try: "I put the circuit together and it didn't work", or "I tried to build a spice rack for Marge, but it turned out awful".

If you really want to make something, you have to be prepared to throw the first one out and start over. If the circuit doesn't work, find out *why* it didn't work and fix it. If your spice rack is awful, spend some time on YouTube looking at proper technique, then spend some time using the router (or table saw, or whatnot) with pieces of scrap until you get the hang of it. Then start the project over.

Google Glass could have been popular if Google had noted the feedback and piloted the project into more popular waters. For example:

1) A flip-down cover for the camera, so you can interact with people and they know you aren't recording them.
2) A less restrictive interface, so that developers can show anything instead of storyboard images like a View-Master. IOW, a direct graphical interface.
3) A less expensive device (it costs $150 to make, $1,500 to buy). (Note: cell phones have largely the same functionality and don't cost $1,500.)

Rather than fix the problems, they decided to just let it die. Maybe they did market analysis and thought that it would never sell in any form, but I really doubt they went that far.

Comment Re:Makes sense (Score 5, Insightful) 239

The government doesn't want you to make money, especially if you do so in a new and innovative way. THAT, my friend, is the problem.

That is not really what is going on. This is a simple case of regulatory capture.

It's not really that simple, and the grandparent's position is not without merit.

You'll note that *amateurs* are not allowed to operate drones commercially, and *commoners* are not allowed to start a business operating drones (for remote crop/herd inspection, search and rescue, real estate videos), but big players such as Amazon and FedEx will be granted commercial licenses to do so.

It's the same with any business in the US: the big, entrenched businesses get all the exceptions, all the subsidies, and all the tax breaks in the name of "jobs", while new companies find it impossible to form, hire, and grow. As a concrete example, it is impossible to start a company (however small) to compete against GE, because GE pays no taxes.

It's a stupid policy that's indirectly driving the economy of the country into the ground. Big, entrenched companies don't hire more people when given money; *small* businesses hire people as they grow to become big ones. Propping up a big, weak company at the expense of stifling smaller companies is the source of much stagnation in this country.

We have an opportunity to make great progress in an emerging technology, and if the US is held back, all the advances will be made in other economic climates.

Look for the US to become a third-world nation in the next decade or so.

Comment Awesome post! (Score 1) 262

Thank you for the awesome post.

So many times people simply say "no it isn't" in response to an article or position; it's refreshing to see someone who can put forth some qualifiers and make a back-of-the-envelope calculation. Bravo!

(And I'll award triple-word score for making a linear prediction on a logarithmic scale. That's not something most people can do.)

Comment Cognitive load (Score 2) 167

One thing that Mozilla doesn't get (and engineers in general, I ween) is that changing things imposes a cognitive load on the users.

I'm used to Firefox, it does what I want and doesn't require my attention very much. The major reason I don't switch to Chrome or any of the other browsers is cognitive load: I'd have to learn an entire new way of doing things. Different looks, different icons, different behaviour... it would take hours to figure out the new system, many minor "how can I get it to do this..." moments amortized over the next year.

Every time Firefox changes, it's a distraction: something to notice, figure out, and work around. For me, this time it's the offline cache system - no amount of fiddling with the options or about:config will make the system save my tabs on exit and reload them anew on start, so my weather tab *has* to show yesterday's page on startup(*).

The previous issue (for me) was moving window rendering to an external thread; the upshot was that cascading menus took several seconds to render. Click, count to three, then see the bookmarks... move the cursor, count to three, see the selection bar move down. Setting the about:config option to undo this made Firefox crash on every launch, but un-setting "use hardware acceleration" fixed that. (My dad is *totally* going to figure that out rather than move to Chrome.)

All this "OMGWTF we need to be like Chrome!!!" and "OMGWTF we need a chicklet interface" is driving users away from the system. For every change, a number of users say "screw it, I'm moving to $OtherBrowser".

Changing behaviour at all is stupid; doing it once a month is ridiculously stupid. They're thinking in terms of "how can we add more functionality" instead of "how can we attract and keep users".

Pro tip: adding complexity to every little feature does not necessarily make your software more popular.

(*) To be fair, I've only tried 6 of the 64 possible combinations of options that might affect this (in Options->Privacy and about:config). It might be a simple fix; I just need to uncover the right combination of options.
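
For what it's worth, enumerating those combinations is mechanical (a sketch; the six preference names below are hypothetical placeholders, not Firefox's real about:config keys):

    from itertools import product

    # Hypothetical stand-ins for the six true/false settings under test;
    # the real Firefox preference names may differ.
    options = [f"pref_{i}" for i in range(1, 7)]

    combos = list(product([False, True], repeat=len(options)))
    print(len(combos))  # 2**6 == 64 combinations to try

    for combo in combos:
        settings = dict(zip(options, combo))
        # ...then apply the settings, restart the browser, test tab restore

Sixty-four restarts is tedious but finite; the hard part is that the effect only shows up the next morning, when the weather page is stale again.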

Comment eReaders are functionally bad (Score 1) 261

Having the ability to touch any word on the screen and have definitions, translations, and Wikipedia entries pop up as you read (which is great for many of the older books) is a fantastic benefit. On top of that, so many of the world's classics are available free of charge wherever you have internet access - a bonus that can't be overlooked. Honestly, when studying books such as Gibbon's Decline and Fall of the Roman Empire, I find myself eternally grateful for such capabilities.

I agree wholeheartedly that the eBook experience *could* be much better than physical books, but it isn't.

As an experiment, I recently picked up a reader (a Sony Reader) and tried it. Here's what I found:

  1. The contrast is lousy; it's like reading through a piece of slightly frosted glass.
  2. The reflected glare is awful. You can't read while wearing a white shirt, for example.
  3. Every time the system powers up, it runs through its database computing a hash of each file it finds. This can take upwards of an hour, depending on the number and complexity of items, and the system cannot be used while it runs.
  4. It always shows PDFs at "fill the screen" resolution, so the margins of the original page are always visible and most of the display area is wasted. I can "zoom" individual pages, but to go to the next page I have to leave zoom and then reapply it.
  5. The "small-medium-large" setting scales the font, but not the formatting. Characters and words become larger, but the breaks at the original margins remain, so lines break in odd places and waste much of the display area.
  6. Finding a specific place in a book is time-consuming and inefficient. The first 30 physical pages of a book are usually things I want to skip (contents, publisher, title page, foreword, &c), and paging forward to find the transition from front matter to actual content is tedious. You can't just say "go to the start of the text". In a real book you flip forward and back at high speed until the character of the pages changes.
  7. Finding a referenced diagram, equation, or image is nigh impossible. Flipping forward (or back) three pages to see a chart or graph is easy in a physical book - you just put your finger in that place and go back and forth whenever you need.
  8. Reading scientific papers where the charts and diagrams are at the end of the document is highly inconvenient.
  9. Finding a specific place *mentioned* in a book is nigh impossible. If the contents say "Chapter 5 is on page 120", you have to go to *physical* page 120 and then flip forward or back until you find what you're looking for. If the text references "figure 120" and you're looking at "figure 4", it's too time-consuming to find it. (I'm currently reading a book in PDF format that does this.)

All in all, I haven't used my eReader much.

It might be OK for narrative stories - light paperback reading you can do in a dentist's office - and for a modern eBook written with proper formatting, but for anything remotely sophisticated it's insufficient.

Comment Dazzlers (Score 4, Interesting) 318

Blinding weapons are banned? Not so.

From that article:

[...] a soldier he interviewed after an incident in Iraq a few years ago. While on duty, the soldier fumbled a dazzler he was trying to point at an oncoming vehicle a safe distance away. “He was in an awkward position and illuminated a rearview mirror in such a way that he got a beam directly back into the eye.” The beam had gone less than 6 metres when it hit the soldier in the centre of vision of his right eye, burning the retina and leaving his vision in that eye permanently damaged.

Yeah, right. Blinding lasers are banned from military use, except that the military uses them and (per the article) they're being made available to police departments.

I'm missing something here - is it OK if it blinds soldiers so long as the *intent* is not to blind soldiers? Is the ban only for *combat* soldiers and not policing soldiers? Is it only banned in *declared wars*, and not *non-war military invasions*?

Can anyone explain why we use dazzlers when they appear to be on the banned list?

Comment Re:Fritz Haber (Score 1) 224

I think we can all agree: scientists have killed more people than any other group in history.

I dunno about that - Genghis Khan supposedly killed 40 million people, which was 11 percent of the world's population at the time.

Hitler killed six million Jews; the Hutus killed nearly a million fellow Rwandans. I'm not sure how many scientists were on their side(s), but none of the responsible parties (the people who made the decisions, gave the orders, or actually carried out the actions) stand out as scientists.

I don't know that scientists kill all that many people - as individuals they're pretty weak.

As a group, they're more "enablers" than killers.

Comment Re:Fritz Haber (Score 1) 224

Fritz Haber was an interesting guy.

His actions - turning chemistry to the task of killing soldiers - were considered abhorrent by many people and caused much political and philosophical debate at the time.

His position was (I'm paraphrasing) that his country and its way of life were in jeopardy, and that any action taken to prevent that was justified. He saw no difference between shooting an enemy soldier dead and killing him with chemicals.

And although Haber used chlorine, the other side was close behind: the French chemist Victor Grignard was working on phosgene at the time. We credit Haber with the first use of chemical weapons, but Grignard was only a little behind him, and the French produced and used only a little less phosgene than Germany did during the war.

Modern weapons design (guns and such) supposedly favors wounding the enemy soldier instead of killing him, the theory being that the enemy has to spend more resources dealing with the wounded than with the dead (fellow soldiers have to help the wounded to the hospital, hospitals need supplies and support, &c.).

I suspect Haber would have seen no difference between wounding an enemy using phosgene gas and wounding them using a gun. Phosgene, when mixed with tear gas, wounds the enemy but is largely non-fatal.

I'm not sure I see the difference either.

Can anyone explain why wounding someone with a gun is more palatable than burning them with phosgene gas (or napalm or phosphorus)?
