Comment Re:well that's just silly (Score 1) 210

- Real estate covered in solar flux = energy

We have solar flux and real estate here, at 1/1000th the cost.

Also, energy THERE is not as good as energy HERE.

- Temperature differences = energy

There are no significant temperature differences that can be exploited, and no cooling water or air either. You're thinking of the temperature swing between the day and night side.

Also, energy THERE is not as good as energy HERE.

- A shallow gravity well = easy to ship things out

So what? There's nothing there worth shipping anywhere. It's just rocks.

- Low gravity may = easier life for those weak due to medical conditions = retirement

Doubtful. All evidence collected so far indicates that it would make things worse, not to mention the huge risk of living on the Moon and the drop in the quality of life. There are no parks and stuff outside to take a stroll in on a nice sunny day!

- Dark side = potential astronomy sites

Idiotic beyond belief. It's no darker than anywhere else -- it's just a name for crying out loud. Space telescopes are placed into orbit or Lagrange points for a reason: no vibration, no gravity, minimal temperature variations, and 24/7 seeing. The Moon has none of those things.

- As closest planetary body from which vacuum based engineering such as asteroid mining and space habitats could be tested and based.

You can do vacuum based engineering in Earth orbit, which is far more convenient. Not that this has been shown to be useful in any way, because we can produce vacuums down here on the surface just fine. What we can't produce is microgravity, which Earth orbit has, but the Moon does not.

- Close enough that robotic operations can be monitored and directed in real time

Just because it's practical to do things there doesn't mean there's actually a reason to be there. Not to mention that the ~2.6 second round-trip light delay makes "real time" a bit of a stretch.
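The round-trip delay is easy to check from the mean Earth-Moon distance and the speed of light; a quick back-of-the-envelope sketch:

```javascript
// Rough sketch of the Earth-Moon signal delay, using the mean distance.
const MOON_DISTANCE_KM = 384400;      // mean Earth-Moon distance, km
const LIGHT_SPEED_KM_S = 299792.458;  // speed of light, km/s

const oneWaySeconds = MOON_DISTANCE_KM / LIGHT_SPEED_KM_S; // ~1.28 s
const roundTripSeconds = 2 * oneWaySeconds;                // ~2.56 s

console.log(roundTripSeconds.toFixed(2)); // prints "2.56"
```

So even before any processing overhead, "real time" teleoperation means every command-and-response cycle eats two and a half seconds.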

- Far enough that it is a good place for dangerous things like reactors, super particle accelerators and self-assembling nanolife construction bots.

Nuclear reactors have killed fewer people in their entire history than coal mining has this year alone. Particle accelerators aren't dangerous at all. Nanolife is wishful thinking.

Also, energy THERE is not as good as energy HERE.

Got any ideas that belong in reality instead of fantasy?

Comment Re:well that's just silly (Score 1) 210

Assumes too much about future technology which doesn't even exist yet.

For example, ion drives are the best developed for deep-space exploration, and require only relatively small quantities of exotic substances such as mercury or xenon. Lifting a few tons of either into orbit is not a problem, and way cheaper than lifting an entire refinery all the way to the Moon!

Comment Re:well that's just silly (Score 1) 210

1. Rare Earth elements.
2. (Potentially) Clean water
3. Raw materials that are in limited supply on earth, e.g. Copper

All three are available here, at 1/1000th of the cost, right now. Rare earth metals aren't that rare, water is everywhere and at most needs desalination, and copper is both easy to obtain and easy to recycle.

You would have to propose non-physical magic technology to enable any of those things to be shipped from the Moon to the Earth cheaper.

Comment Re:well that's just silly (Score 2, Insightful) 210

He-3 is preferable for a fusion fuel since it's aneutronic--no radiation to deal with. It comes that way from the moon; the path to producing it on earth does everything but avoid radiation.

Even the "aneutronic" fusion reactions have side-reactions that produce neutrons. While a lower neutron flux helps with materials engineering from a longevity standpoint, it still makes the reactor wall materials radioactive. That's the real problem, and He-3 doesn't fix it.

He-3 is useful as an advanced fuel in rocket propulsion

a) Requires technology that is currently at the wishful-thinking stage of development.
b) Rockets don't require aneutronic fusion, because fusion engines would be most useful in deep space, where radiation is not a problem.
c) He-3 fusion isn't entirely aneutronic anyway.
d) He-3 fusion is harder than D-T fusion.

Power can be produced in space and beamed down to earth

Has nothing to do with the Moon, or an orbital tether.

There is no realistic source of power that either exists only on the Moon, or would be cheaper to produce on the Moon.

Many of those rocks we have down here on Earth resulted from really big rocks from space slamming into us. Might be good idea if we have technology, infrastructure and humanity already in space before we're in need of it.

A tether on the Moon won't help you solve this problem. If this comes up, robotic space-probe technology will be all we need, and we have that already. Stop watching Hollywood sci-fi where brave men have to go deal with the problem in a giant space ship. The real solution will likely be as simple as coating one side of the incoming object with soot.

Putting multi-trillions of dollars into the vacuum is preferable to craters in the middle-eastern sand. The same jobs are created, but at the end of the day you have something far more impressive to show for it and far fewer lives expended.

[citation needed]

Things aren't that simple in the real world. As cold and sad as it is, the lives of brown people in a distant desert just aren't worth much to anybody in the United States, unlike the oil they live on top of. By some calculation it was worth it to invade. Thanks to various mistakes, the cost ended up spiralling out of control, but even so the wars are probably a better investment than going to the Moon.

He-3 is worthless, because it doesn't achieve aneutronic fusion, just slightly-less-neutronic fusion. So then, what's left on the Moon that's worth a multi-trillion investment?

Seriously, name one thing that's on the moon that you think is worth trillions of dollars, keeping in mind that its surface is entirely covered in rocks.

Comment Re:well that's just silly (Score 2, Insightful) 210

Except that it's not economical: all current plans for fusion power intend to breed the required fuel isotopes from lithium, which is several orders of magnitude cheaper than mining anything from space.

So, that leaves what? Nothing. There is nothing on the Moon even remotely worth the multi-trillion-dollar expense. It's just rocks in a vacuum. We've got plenty of rocks here!

Comment Re:Amazing what competition does (Score 2) 70

Yes, really, Hyper-V 2012 might be usable.

Version 1.0 and 2.0 were "me too" products that weren't mature. Nobody in their right mind would use them for anything serious. Some people did, of course, but only because of some non-technical manager deciding they wanted "all Microsoft" or some-such nonsense.

Read the technical whitepapers on the 2012 release, it looks like someone at Microsoft finally "got it". It doesn't just have feature parity, it has some interesting new ones too that nobody else has, like good support for >10Gbps Ethernet. Apparently they took the zero-copy and low-latency network stack from the old HPC edition of Windows Server, and bolted it onto the generic server editions. Supposedly it can do 40 Gbps for a single TCP stream without special tuning! For comparison, it's hard to find a Windows server that can do more than 3 or 4 Gbps in loopback, let alone across the wire for a single stream. Combined with Microsoft fixing most of the issues with SMB2, it looks like using plain file server clusters might be not just a viable replacement for a low-end SAN, but a serious performance upgrade. For small business or workloads without critical data, this is going to massively reduce costs.

Comment Re:depends on intended users (Score 1) 360

To continue the car analogy, what Microsoft and Apple want is "Plan C": if the car you bought no longer "just works", you can't take it down to your friendly local mechanic, because the diagnostics computer interface is encrypted to lock out anyone but the manufacturer's approved mechanics.

Comment Re:What if... (Score 1) 136

I can't claim that there is a scientific germ theory equivalent for the practices I listed...

I could skip replying to the rest of your comment, because you admitted my point, but you seem to have misunderstood my finer points, so I may as well...

First off, I understand TDD and I know the difference between TDD and Unit Testing. I love refactoring, and when IntelliJ IDEA was first released, I thought it was like the light of God shining down from heaven upon me. I've done pair programming. I get it. What I'm saying is that none of those can or should be applied without knowing when to apply them, which you can't do. I can't do it. Nobody can.

No amount of explanation of how a buzzword works will make my point invalid. It's still not scientific, and doesn't always apply. Until it is a science, it is snake oil, as far as I am concerned.

Go back in time a little bit. Remember when Object Oriented was the buzzword of the day? The purported advantages of OO sounded an awful lot like the advantages of popular project methodologies: OO would help prevent code breaking, because new code would not change existing working code. OO would help big teams work together by defining interfaces. OO would help encapsulate code to prevent bugs creeping in due to excessive cross-dependencies. Etc...

Now, go and tell Linus Torvalds that he's an idiot for using a non-OO procedural language on one of the biggest and most successful programming projects of all time. I'd love to see you have that conversation.

This is a lot like you saying that every programmer should be using Agile, even though there are enormous and wildly successful projects out there that were produced without Agile.

I've heard more than a few anecdotes of Agile not working, and resulting in major problems. Sure you say, maybe they just didn't apply Agile the right way? Maybe they didn't get it? That's a lot like saying the priest just didn't pray hard enough, that's why his church was struck by lightning. He should pray harder! He should pray the right way! No? How about penance? Maybe even self-flagellate, see if that works?

This means that full TDD results in 100% passing unit tests with full code coverage at all times.

Code coverage != testing for everything that needs testing. That's one of my points.

Yes, or even tripling the amount of code. But if you still measure productivity in any relation (positive or negative) to lines of code written then I'm not sure we have much else to talk about.

There's an awful lot to talk about, because time is money. Tripling the LOC could send many projects over budget, and hence into failure. Just because the code passes tests, doesn't mean the project is a success. Which do you think businesses care about most?

Actually, refactoring (including test refactoring) is a significant part of the effort...

You misread what I said. I assumed refactoring is a given. Based on that, a TDD project will also require changes to the tests; a non-TDD project will not, saving time. Some refactoring processes are 100% safe, so TDD just adds pure overhead there. I'm thinking of the type of refactoring done by IntelliJ IDEA or Visual Studio, where the refactoring is automated and done on a statically typed language.

TDD is not unit testing.

It isn't, but it's a superset. I was talking about test-based development methodologies in general, not just TDD specifically.

I've used TDD with C++, Java, Objective-C, C and Pascal.

You've just named 4 unsafe languages, and one that has type erasure and is typically littered with casts from "object" which is functionally equivalent to "void*". Mmm... safe.

Try using C#, F#, Haskell, or a similarly modern language with proper type safety, extensive use of generics, and higher-order programming. I've heard of people releasing Haskell libraries without ever actually running them.

I haven't done a google search for this, but I'm wondering if you did?

Good try shifting the burden of proof. Your career is the one based on the theory of project management, not mine. So you're telling me that you don't even know if the research backing up your teachings exists? Aren't you concerned by that?

That said, I admit it might have happened somewhere.

So then... as a professional, you would spend the time to find the research that shows the percentage of time wasted on tests failing on correct code, right? And you would research which types of projects this is likely to become a major problem for? And you'd spend the time figuring out how to detect such cases ahead of time instead of after the fact? I bet you would, but there's no such research, because TDD is not a science!

I feel a _lot_ better about my TDD code since I _know_ it works and has no cruft.

You've just hit two of my points in one sentence: you may be getting a false sense of security about your code, because you could be fooled by seeing lots of nice "green ticks" despite major problems with the code. On top of that, you claim "no cruft", but you've doubled or even tripled the lines of code. Hmm...

I work with my clients to help them quadruple productivity as measured by business results

Which might have nothing at all to do with the specific methodology you've applied. Read up on the Hawthorne Effect, which shows that merely getting some attention from an external party may alter behavior significantly. The benefits could be just a side effect, much like a placebo. The company has spent a bunch of money for you to come out there and train the staff, so it must be a success, right? Sure, when you're looking over the shoulders of the engineers and they can't goof off. Similarly, enforcing a rigid structured methodology -- any methodology -- on an unstructured team is likely to lead to improved results.

You don't know if Agile is the best approach each and every time. What if it's not? What if there's a more appropriate methodology that should be applied, but you just don't know how to determine when it is appropriate or isn't?

Talk to me when project management reaches the level of modern medicine.

Until then, all I'll be hearing is "anecdote, anecdote, citation needed, anecdote, snake oil, anecdote". 8)

Comment Re:What if... (Score 2) 136

They are hackers in the most pejorative sense of the word. Think of a surgeon. If a surgeon fails to wash her hands before operating, she is failing as a professional. These agile engineering practices are like surgeons washing their hands.

Washing of hands is based on the scientific germ theory, which is backed by mountains of evidence.

Every buzzword you just listed is a mere whim. A preference. A particular style that works for some teams, and not others, mostly for mysterious and unknown reasons.

What you just said basically amounts to some priest admonishing a fellow man of the cloth for not properly anointing the bust of Christ with scented oil. He's doing it wrong, doesn't he know?

I've seen these software development fads come and go, and without exception they fail at being scientific, they fail at consistently achieving their goals, and eventually get replaced with a shiny new method which is just as un-scientific.

Okay, enough waffle, let me give you a concrete example: test-driven development. Sure, it sounds great. You write tests, they pass, and if you break something then the test will tell you. Except that it's not so simple:

- You can't predict unexpected problems.
- Expected problems aren't really problems, are they?
- Tests can give a false sense of security by reporting 100% success despite the prolific presence of bugs not tested for. This can result in major scheduling issues when the software fails unexpectedly.
- It is impossible to write quick & simple tests for a HUGE range of things one would most want to test for: ACID compliance, memory leaking, safe multi-threading, security, etc... Some of these things just have to be done exactly right the first time, like a maths proof.
- Tests take time to write. In many cases, doubling the amount of code written.
- Changes to the code like refactoring often require changes to the tests, potentially doubling the cost of maintenance.
- Tests are usually written in the same language as the code being tested. Gotchas with the language like Javascript's and PHP's "==" vs "===" are likely to be repeated in the tests too, leading to false results.
- Someone who doesn't know how to write code correctly probably won't know what tests to write to really put their code through its paces. Programmers who do know how to write code correctly, probably won't benefit quite so much from testing.
- Tests are most popular with developers using languages with weak built-in protections, like weakly-typed or dynamic languages. Where's the study comparing the relative merits of dynamic languages plus tests vs using a statically-typed language on its own? Nowhere.
- If a test fails for the wrong reason (badly written, test engine issue, etc...), then time may be wasted fixing a bug where none really exists.

That's just off the top of my head.

Now don't get me wrong, I'm not saying that test-driven development is bad -- far from it -- but what I'm saying is that it's not always the best approach, and NOBODY knows when it is or isn't the best approach. Neither you, nor anyone else, can give me a convincing argument of when it is right and when it isn't, because there is no evidence either way. There is no "theory" of project management that can be applied, like germ theory. There's just guesswork, and rules of thumb, and blind application of a rule that worked for one project to another where it may not actually apply. All I ever hear are anecdotes: "Oh, TDD worked great for my PHP website written by beginner programmers, you'd be an idiot not to use it". Meanwhile, I'm a veteran, write code in a safe and statically-typed language, and rarely if ever find bugs in my code after it ships. When I do, it's something obscure that tests would not find, like the incorrect use of case-insensitive collation in a SQL sort statement.

Same thing goes for refactoring, or pair programming, etc...

I've tried both. I like refactoring, but again, if you're constantly refactoring your code, maybe you should stop writing shitty code that needs refactoring so often that it becomes a key component of your regular project management. I've tried pair-programming, and I thought it really was effective at improving code quality, but the poor bastard without the keyboard was bored to tears.

Can you point me at even one study that compares the pros of pair programming vs the cons, such as employee job-satisfaction? One that uses proper scientific research methods with -- you know -- numbers? A study that comes up with a theory for pair programming that I can apply to accurately determine when to use it or when not to use it?

I doubt it.

Comment Re:Someone explain to me... (Score 5, Insightful) 443

Market liquidity is extremely important.

Let's assume for a second that it is important to be able to trade a hundred times a second, which isn't even an exaggeration of what's already happening.

Then, logically, one would expect the entire financial world to collapse every night when the markets close for hours.

Oh wait, nothing happens, and everything continues like normal the next morning!

Hence, the assumption that high-speed trading is vital is clearly false.

It's one thing to have a high volume of real trades, but it's entirely another thing to have a ludicrous volume of very small meaningless trades by third-parties that neither want to buy nor sell, but just want to "play the game" and skim off the top.

Comment Re:chasing the "dumb it down" crowd (Score 5, Insightful) 535

Well put.

I have a theory for this based on my observations of older computer users, especially those that started in the DOS era.

Back in those days, two things were significantly different to now:

1) Software came with printed manuals written in a "tutorial" style. These days, most software comes with electronic help files at best, usually written in a "reference" style with no theory or explanations.
2) A few very popular products at the time like Norton Disk Doctor had a radically different UI style that actually explained things, and this helped people learn as they went.

I remember my father reading through the Corel Draw manual end-to-end, and he ended up learning how to use it completely. He's not a graphic artist by any means, but I've seen him develop fantastically complex multi-layer vector art for embedding in documents back when DOS 6 was new. These days, I'm shocked when I see vector art in a Word document. It just doesn't happen because it's "too complex" for most users, even though vector drawing programs have gotten better and easier to use!

It's the second one that I'd like to see make a comeback the most. Norton at the time was a fantastic product, because its author realized that everyone else was doing UI design wrong. Nobody has picked up on his insight, and everybody still does it wrong.

Ask yourself this: How many times have you seen a dialog box pop up on the screen demanding an immediate response to a scary question with no explanation? Things like:

This could damage your system! Are you sure? Yes or No?

Think about it for a second. How is the poor user expected to respond to this? What the fuck is "this"? What kind of "damage"? Should he press "yes"? Or "no"? Why? Why not? On what basis should he decide?

Practically all software is like this. Operating systems like Windows literally barrage users with prompts that are exactly like that, dozens of times a day. The prompts never give any useful information, even for Administrators, let alone a non-technical user. Users learn only to click "OK" to everything and pray. No understanding is gained.

For comparison, Norton Disk Doctor had full screen dialog boxes with paragraphs of text explaining things like:
- What triggered this message
- A detailed explanation of what the question means
- What will happen if you press 'yes'
- What will happen if you press 'no'
- The risk to your data for both cases
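That checklist can be sketched as plain data, just to show how little it takes to make a prompt informative (all field names and the scenario are illustrative, not any real API):

```javascript
// Sketch of the "informative dialog" pattern described above, as plain data.
// The scenario (cross-linked clusters) is a classic disk-repair example.
const dialog = {
  trigger: "Two files were found claiming the same clusters on disk.",
  meaning: "A cross-link means one of the files almost certainly contains " +
           "the wrong data; leaving it alone risks corrupting both.",
  ifYes:   "Each file gets its own private copy of the shared clusters. " +
           "Both files remain openable; one may contain garbage.",
  ifNo:    "Nothing is changed. Saving either file later may silently " +
           "overwrite data in the other.",
  risk:    { yes: "low: at worst one file keeps some garbage data",
             no:  "high: ongoing corruption of both files" }
};

console.log(dialog.trigger);
```

Five short paragraphs instead of "Are you sure? Yes/No" is the entire trick, and a user with one-finger typing skills can act on it.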

I saw users who were still at the stage where they could only type with one finger confidently making complex technical decisions because they were informed. The explanations taught them something, and they learned, and got better at using computers.

I haven't seen a product like that since, from any vendor. Combined with manuals becoming a rarity, it's no surprise that users aren't learning anything.

Comment Re:Lol (Score 1) 711

The memory requirement of Office includes operating system overhead and caters for extremely large documents. You know, like the stated memory requirements of every other piece of software out there. It's not like Microsoft seriously expects Office applications to regularly use 1 GB just to launch and edit a letter!

I just opened a 78 page, 14.5 MB Word DOCX technical document with the 64-bit edition of Word 2010, and it's using a whopping 36.3 MB of memory. Also, it used a grand total of 240 milliseconds of CPU time to load.

Clearly, it's a bloated pig of an app, and we should all go back to using edlin to save those precious bytes of RAM.

Comment Re:Still using Office 2003 (Score 2) 369

That's great, but this time, have they gotten around to fixing any of the bugs & quirks from the older versions that we've all learned to love to hate?

I mean seriously, it's 2012 already, and in Word 2010 SP1 I still struggle with issues like these:

- Can't use a font with a PostScript outline and export to PDF. Because of a ~7 year old bug in Word, it gets converted to a bitmap! MOST third-party fonts have PostScript outlines, including practically all of the Adobe Pro fonts. Printing to a "PDF Printer" strips out all the metadata and hyperlinks, so that's not a solution either.
- Still can't use advanced OpenType font features like small caps.
- Table padding and outlines are added to the cell content. This makes it impossible to create a table that is exactly as wide as a normal paragraph, because a table that is 100% wide is actually 100% + some extra wide, just for laughs. The only solution I've seen is complex macros that recompute the width of each table to some horrific fractional size to compensate for the padding.
- Certain style formats need to be left on "default" (e.g.: inherit from parent style) to prevent downstream formatting issues. However, once set, most style properties can't be unset back to defaults. Short of editing the XML by hand or possibly resorting to macros again, I don't see how this is fixable.

From reading the forums, most such problems have been present since forever, and will never be fixed.
