
Comment Re:economic case with different assumptions... (Score 2) 163

Actually, I do know these things, but didn't bother to include all the sources, given it being Christmas and all. Since you are so insistent about it though...

Points 1 & 3 are taken from direct quotes by Elon Musk.
Point 2 is taken from the design of the Falcon 9, available from SpaceX or on Wikipedia.
"Re-use without refurbishment" is another direct E. Musk quote.

SpaceX's current launch rate (6 per year) and cadence, published launch costs, and satellite weights for commercial space companies are just a Google search away. Try this excellent site for starters: http://www.spacelaunchreport.c...

A very informative and useful place to find much of this information, along with discussion by knowledgeable space experts and enthusiasts, is http://www.nasaspaceflight.com...

The rest is just simple math.

To sum up, I do have details, I'm not guessing, and I note where I make assumptions. Find fault with my assumptions if you like, but please explain why those assumptions are flawed with specifics, not generalities.

Comment Re:economic case with different assumptions... (Score 4, Insightful) 163

Interesting numbers. Let's try a variant case. Suppose in addition: You're assuming that the non-reusable launch vehicle cost per launch is $60M. OK, let's start out by assuming 1/3 of that is fixed costs and operations costs, and 2/3 the vehicle cost, which is split evenly between the two stages (first stage is larger, but not proportionately more expensive). So, of the $60 million, $40 million is spent even if the vehicle first stage was free. Now assume that re-usability increases the launch cost by, say, $5 million (launch operations are expensive! and the cost is not entirely the vehicle). Assume that all the stuff needed to make the first stage reusable increases the stage cost by 25%, from $20M to $25M. And assume that the delta-V and the added mass to do the fly-back decreases payload by 10%, and that the price you sell the launch for decreases a similar percentage (some payloads won't care, but some will.)

First off, the current cost of the rocket already includes the cost of reusability, so the cost of the first stage will not increase; it is designed to be reused up to 10 times right now with no change in hardware.

Secondly, the costs of the 2 stages are not even remotely close to equal; the first stage has 9 Merlin engines, the second stage only has 1. A cost ratio of 6 to 1 (first to second) would be more reasonable.

Thirdly, the payloads currently quoted already account for reusability (16MT to LEO and 4.5MT to GTO). No loss of earnings there.

So none of your variant assumptions are useful for this discussion.

Let's look at some other factors you haven't considered.

Like the space shuttle, SpaceX now has a rocket available for examination that has flown a full mission and hasn't had a 6G salt-water landing. This means they will be able to do a full engineering analysis of the stresses the rocket actually experienced in flight, which informs every step necessary for re-use. The results of that analysis will let them determine which parts of the rocket need to be strengthened or lightened to meet the 10-flight re-use goal. SpaceX has the luxury of being able to make changes to their rocket without Congressional approval, so this information can be used immediately to improve the vehicle. The design goal of the Falcon is that the rocket need not be "refurbished" after every flight, just put through some standard flight maintenance tests. Having the flown stages available for analysis will help them meet this goal.

Additionally, SpaceX's current launch costs are based on 6 launches a year. As they have already demonstrated the ability to launch on a 2-week cadence several times, increasing their launch rate to at least 1 a month will cut their overall cost per launch.

Let's assume that a slight redesign based on analysis of real-world data lets them improve the reliability of the Falcon 9 to 1 failure in 100 flights and increase the payload by 1MT to GTO. At 5.5MT to GTO, that lets them handle 90% of all GTO launches (6MT is at the current top end for commercial satellites to geosynchronous orbit) with the reusable design. That is comparable to the $137M Ariane 5 capability or a $132M Atlas 5 launch for NASA, with both the throw weight and reliability requirements necessary to get these flights.

$60M to launch the current, reusable Falcon 9 1.1FT:
33% launch costs - $20M
56% first stage - $34M
11% second stage - $6M

Assumption 1: increase in flight rate reduces launch costs by 25%
Assumption 2: landing/recovery/flight-readiness checks cost $5M a launch
Assumption 3: first stage reused over 10 flights = $3.5M a launch

Under these assumptions:
Launch cost: $15M
Landing/recovery/checks: $5M
First stage: $3.5M
Second stage: $6M

Total: $29.5M

I'm OK with those numbers given what they can charge and how quickly they can do regular launches. Where they will really rake in the cash is a Falcon Heavy launch (same vehicle with 3 first stages instead of 1) with 56MT to LEO for an asking price of $110M and a cost, by these assumptions, of $35M. They could even reduce their price after a few launches of the Heavy to $56M and start launching bulk cargo to space at a rate of $1,000/kg.
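For anyone who wants to check the arithmetic, here's a minimal sketch that reproduces the per-launch figures above from the three stated assumptions. The rounded dollar splits are my own, taken from the percentages quoted earlier, not official SpaceX numbers.

```python
# Minimal sketch of the cost model above (my arithmetic, using the rounded
# figures quoted in the post, not official SpaceX numbers).

launch_ops = 20.0     # $M, ~33% of the $60M price: launch/fixed costs
first_stage = 34.0    # $M, ~56%: first stage hardware
second_stage = 6.0    # $M, ~11%: second stage hardware

launch_ops_high_rate = launch_ops * 0.75   # Assumption 1: higher flight rate cuts this 25% -> $15M
recovery = 5.0                             # Assumption 2: landing/recovery/readiness checks
first_stage_amortized = 3.5                # Assumption 3: ~$34M spread over 10 flights

total = launch_ops_high_rate + recovery + first_stage_amortized + second_stage
print(f"Reusable Falcon 9 cost per launch: ${total:.1f}M")   # $29.5M

# Falcon Heavy bulk-cargo case from the paragraph above: 56MT to LEO at a $56M price.
print(f"Bulk cargo price: ${56_000_000 / 56_000:,.0f}/kg")   # $1,000/kg
```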

Comment WSJ is incorrect in title, implication (Score 4, Informative) 434

From Daily Kos:

"Late Thursday night, the Times published a story claiming that the Justice Department had been asked "to open a criminal investigation into whether Hillary Rodham Clinton mishandled sensitive government information on a private email account," only to quietly change the story to say that the Justice Department had been asked "to open a criminal investigation into whether sensitive government information was mishandled in connection with the personal email account Hillary Rodham Clinton used." As in, the story changed from being about a potential criminal investigation into Clinton's conduct to being about a potential criminal investigation into the mishandling of sensitive information by ... someone not named. "

So, haven't you guys learned yet to ignore mass media reporting when it involves a Clinton? It's almost like someone with billions of dollars has been trying to smear the leading Democratic candidate for a few years now.

Comment Re:Fucking rednecks (Score 3, Informative) 1030

The thing is, I can put solar on my house, and I will be able to generate enough power, on occasion, to have some extra to put back on the grid. With the right configuration and local storage, I can even go off the grid. As a consumer, the other options you mention are things I can't do. Sure, solar is more expensive per kWh, but at least it's doable for lots of homeowners.

Separately, you may not have noticed that the Republicans held effective veto power over new legislation in the Senate until just yesterday. Thus, the claim that the Republicans (even with a minority in the Senate) can be held somewhat responsible for the lack of progress in this area seems reasonable.

Comment High Throughput Computing not HPC (Score 1) 54

While this is a nice use of Amazon's EC2 to build a high-throughput system, it doesn't translate as nicely to what most High Performance Computing users need: high network bandwidth, low latency between nodes, and large, fast shared filesystems on which to store and retrieve the massive amounts of data being used or generated. The cloud created here is only useful to the subset of researchers who don't need those things. I'd have a hard time calling this High Performance Computing.

Look at XSEDE's HPC resources page. While each of those supercomputers has something special about the services it offers (GPUs, SSDs, fast access, etc.), they all spent a significant portion of their build budget on a high-performance network to link the nodes for parallel codes. They also spent money on high-performance parallel filesystems instead of more cores. Their users can't get their research done effectively on systems or clouds without those important elements.
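To make the distinction concrete, here's a small, purely illustrative sketch (my own example, not anything from the article): a high-throughput workload is just a pile of independent tasks, while a tightly coupled HPC code has to exchange data between nodes every step, which is exactly what the expensive interconnect and parallel filesystems are for.

```python
# Illustrative sketch of the HTC-vs-HPC distinction (my example, not from TFA).
from multiprocessing import Pool

def independent_task(seed: int) -> int:
    """High-throughput style: each task runs alone with no communication,
    so it maps well onto loosely coupled cloud instances."""
    total = 0
    for i in range(1_000_000):
        total += (seed * i) % 97
    return total

if __name__ == "__main__":
    # Thousands of these can be farmed out with zero node-to-node traffic.
    with Pool() as pool:
        print(sum(pool.map(independent_task, range(32))))

    # A tightly coupled HPC code (e.g. an mpi4py stencil solver) would instead
    # exchange boundary data with neighboring ranks every timestep, which is why
    # interconnect latency/bandwidth and a fast parallel filesystem dominate its performance.
```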

I think it's great that public cloud computing has advanced to the point where useful, large-scale science can be accomplished on it. Please note that it takes a separate company (CycleCloud) to make it possible for your average scientist to use Amazon EC2 in this way (lowest cost and webapp access), but it's still an advance.

Disclaimer: I work for XSEDE, so do your own search on HPC to verify what I'm saying.

Comment Linux ISO's mostly (Score 4, Informative) 302

At work I need to install several different types/versions of Linux OSes for testing. I always torrent the ISO as a way of "paying" for the image that I'm using.

A few years back, we did some experimenting with torrents over the TeraGrid 10GbE backbone, to see how well that worked over the long haul between IL and CA. With just 2 endpoints, even on GbE, it wasn't better than a simple rsync. We did some small-scale tests with fewer than 10 cluster nodes on one side, but it still wasn't as useful as a wide-area filesystem we were testing against. BitTorrent protocols just aren't optimized for a few nodes with a fat pipe between them.

I am interested in looking at the new BitTorrent Sync client to see how it works for our setup. We have many users with tens of TBs of data to push around on a weekly basis.

Comment Re:Just how would this work? (Score 1) 257

If the purpose of patents is "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries," then no, I don't see how restricting patents to physical implementations (not software on a general-purpose computing device) utterly defeats that purpose. Nothing restricts the author from enforcing his patent on physical reproductions; he just can't claim that a non-physical implementation is a violation.

Can you give any examples where this change would stop or slow scientific progress?

Comment Ex-NASA employees (Score 5, Insightful) 616

I take some relief in noting that these are "ex-NASA" employees.

Per the article, it seems that these guys mostly worked at the Texas-based Johnson Space Center:

"Keith Cowing, editor of the website NASA Watch, noted that the undersigners, most of whom have engineering backgrounds, worked almost exclusively at the Houston-based Johnson Space Centre, a facility almost entirely removed from NASA's climate change arm."

Figures.

Why is it that there are so many amateur climatologists in Texas who know so much, but publish so little? I wonder if these gentlemen even bothered to visit the site of the "Plants Need CO2" sponsor, Leighton Steward, to see who also agreed with their opinions. I'm not linking to that site, and I'd surely want to avoid association with anyone with ideas like that.

Maybe Steward just punked them. Yep, that's got to be it.

Comment Government and Higher education (Score 1) 506

Don't overlook positions in government or higher education. Besides being OS-agnostic in many cases, these employers are spread all over the country, not just the SF area.

Want to travel a lot, have a nice career path, and be instantly useful thanks to your Linux knowledge? Try DISA. I'm not sure if they are still hiring for their intern program (the Army uses "intern" in a different way than business IT does), but it was a great opportunity for some people I know. DOE is another area that looks for reliable, Linux-knowledgeable sysadmins.

Look at the top500 list and see how many big clusters are run by universities and their affiliates. Then check out how many of those systems use Windows, and then laugh. Higher education also runs a lot of smaller systems on Linux, and lots of positions are starting to open up there. If you have cluster admin knowledge, you're a shoo-in. If not, take a lower position somewhere that does run clusters and let them know that you'd be interested in moving up.

Disclaimer: yes, I work at NCSA at the University of Illinois, Urbana, and yes, we have some Linux positions open. Do the legwork yourself, however; it'll make you look smarter.

Comment HPC Planning (Score 1) 3

You're about to receive a large amount of hardware from the vendor, and you haven't decided which GPUs to use, which interconnect for communications, what OS would be appropriate, or the types of workloads your users will be running (beyond your base set)? Really? If that's the case, no amount of information from Slashdot will solve your problems.

If you have no interconnect chosen, how will you rack the systems in case cable lengths are an issue, as they are for IB? Do you even have nodes that natively support both 10GbE and IB? I highly doubt it. What about your core network switches? 1200 ports (plus switch fabric) of IB or 10GbE might cost more than those 1200 nodes. You're also talking about adding GPUs and a high-speed network adapter to each of 1200 nodes; what kind of manpower do you have for the task of installing 2 PCIe cards per node across 1200 nodes? I'm assuming that you'd want to be in operation sometime before Christmas. I won't even ask about what kind of large-scale storage you have planned. I shudder to think of what power and cooling requirements you've already overlooked or made impossible.

Who's your vendor? If they really let you purchase 1200 nodes without any sort of planning, they should be dragged behind horses and shot. What a waste of money.

I'm sorry to be so negative, but you guys really screwed the pooch on this one. When you are designing a supercomputer, the very first thing to decide is what the use cases are, especially if you're trying to generate revenue from the system. You have a limited amount of money to buy computing power, interconnect, storage, and facilities, so you have to optimize your purchase in those areas around the expected use of the system. Not to mention operating costs.

Sheesh. I hope you're just pranking us.

Comment I knew him (Score 1) 70

My boss suggested that I attend a weekly "geek lunch" that a group of the older computer-savvy fellows held at the U of I's Beckman Institute, and I met him there. I was aware of Project Gutenberg before that but hadn't used it much. Michael was a good advocate for ebooks before anyone got around to coining that particular term. The last few times we met, I remember him being very excited because he had samples of various new ebook readers to try out. He was testing them to see how well they integrated Project Gutenberg and was glad that more people would have easy access to it.

Last fall, the group met weekly, and I helped him with the process of making digital copies of the Gutenberg archive on different filesystems on individual drives. The entire Gutenberg archive is about 300GB with everything extracted, and we could dual-format a 750GB drive to fit one copy on NTFS and another on ext3. That was a fun experience; most people don't get to play with a real-life 300GB data set.

I hadn't been to a meeting in a while, darn it. I'll miss him.

Comment Re:Comparitive Advantage (Score 1) 276

I've heard that Merlin 1c engines are about $1M apiece, and that SSMEs run $50M each at current production rates.

It's hard to verify pricing for components, especially for SpaceX, as they do so much in-house. Who outside the company knows what the actual production cost of each part is? Hmmm, perhaps we can estimate the maximum possible cost of each engine based on launch prices and the assumption that SpaceX is not taking a loss on each launch.

A Falcon 9 launch costs $54M and has 10 Merlin 1c engines. I'm going to ignore the cost differences between the upper-stage (vacuum) and lower-stage engines. If everything else (fuel, lower & upper stages, facility lease, profit) were $0, each engine would cost at most $5.4M. In fact, looking at the announced pricing for Falcon Heavy, $110M max, with 27+1 engines, you're looking at less than $4M an engine, max.
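For the record, here's that upper-bound arithmetic as a quick sketch (my own back-of-the-envelope calculation, which assumes everything except the engines costs $0):

```python
# Back-of-the-envelope upper bound on engine cost: if everything else were free,
# the launch price divided by the engine count caps the per-engine cost.

def max_engine_cost(launch_price_musd: float, engine_count: int) -> float:
    """Upper bound on per-engine cost in $M, assuming no other launch costs."""
    return launch_price_musd / engine_count

print(f"Falcon 9:     <= ${max_engine_cost(54, 10):.1f}M per engine")       # $5.4M
print(f"Falcon Heavy: <= ${max_engine_cost(110, 27 + 1):.1f}M per engine")  # ~$3.9M
```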

Given the costs of the rest of the launch and the number of engines (production scaling efficiencies) involved, I don't think that a $1M-per-engine estimate is too far off. That puts engines at 25% of the launch costs, and I'm OK with that estimate. I know that the Shuttle SRBs are a higher percentage of the cost of an SLS, but those are an outlier. You can buy 4 Atlas CCBs (with 8 engines) for the price of 1 SRB. Given that pricing, I'm not sure that any $10M engine out there has 10x the thrust of a Merlin 1c.

So SpaceX is probably good with the whole multiple engine thing, at least on price.

Comment Notification System (Score 4, Insightful) 168

I'm an alumnus of the U of I, and I work here as well. I get these notifications. I thought I'd bring up 2 points:

  1. Fortunately, given spring break, the actual number of people on campus able to read this was quite low.
  2. Unfortunately, we had a fire on Green Street just 2 days ago, and we got an alert from the same system informing us about it. So this warning was probably taken very seriously for those 12 minutes.

Overall, I'm satisfied with the system and I was impressed by the very explicit letter from the chief both explaining the error and accepting the blame for the mistake. She also detailed the upcoming efforts to address the error. I'd like to see the same level of accountability from my ISP or phone company.

Comment Re:The difference between Google and Bing is (Score 1) 356

Hmmm, can't say that my first attempt to use Bing gave me any Lindsay L. results, but NoScript did flag a cross-site scripting attempt after I tried to "disable" a helpful toolbar with my Facebook info proudly displayed.

I'm positive I don't need any search provider tapping into my Facebook info, and I certainly don't want to be reminded of it on the front page! That's, like, TSA scary.

Ignoring the blatant invasion of my privacy for a moment, I'm happy to say my (small sample size, insert disclaimer here) test of Google vs. Bing revealed that a search for "best all mountain skis" works differently in Google versus Bing. Google gave a list of places to buy "the best all mountain skis" as the top listings, whereas Bing gave a set of review sites telling me which ones were the best.

Not sure how to rate one result as better than the other; they're just different. Perhaps Google feels that its users know what they want, so it just points them at it. Perhaps Bing believes that its users want to learn what the best choice is for them. Hard to put a metric on that. I'd hazard an informed guess that both search providers weight their results according to the desires of their users, as measured by click-through rates. Bing users might want more hand-holding, whereas Google users might want fewer distractions before they learn the location of something.
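Purely to illustrate that guess (a toy example of my own; neither company publishes how it actually ranks), blending a relevance score with observed click-through rate might look something like this:

```python
# Toy illustration of the guess above: blend a relevance score with observed
# click-through rate. Hypothetical only; not how Google or Bing actually rank.

def rerank(results, ctr_weight=0.5):
    """results: list of (title, relevance_score, click_through_rate) tuples."""
    def blended(r):
        _, relevance, ctr = r
        return (1 - ctr_weight) * relevance + ctr_weight * ctr
    return sorted(results, key=blended, reverse=True)

results = [
    ("Shop: buy all-mountain skis here", 0.90, 0.30),     # transactional page
    ("Review: the best all-mountain skis", 0.85, 0.55),   # informational page
]

# A higher ctr_weight favors whatever a given engine's users historically clicked,
# which could push review sites up for one audience and storefronts up for another.
for title, _, _ in rerank(results, ctr_weight=0.7):
    print(title)
```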

All that being said, I'm still not using a search engine that displays my Facebook account info. Yuck. I don't care if this is Facebook's fault; I don't want to see it on a random search page as part of the interface.
