Comment: Re:Fucking rednecks (Score 3, Informative) 1030

by dlapine (#45495149) Attached to: A War Over Solar Power Is Raging Within the GOP

The thing is, I can put solar on my house, and I will be able to generate enough power, on occasion, to have some extra to put back on the grid. With the right configuration and local storage, I can even go off the grid. As a consumer, the other options you mention are things I can't do. Sure, solar is more expensive per kWh, but at least it's doable for lots of homeowners.

Separately, you may not have noticed that the Republicans held effective veto power over new legislation in the Senate until just yesterday. Thus, the claim that the Republicans (even with a minority in the Senate) can be held somewhat responsible for the lack of progress in this area seems reasonable.

Comment: High Throughput Computing not HPC (Score 1) 54

by dlapine (#45415825) Attached to: 1.21 PetaFLOPS (RPeak) Supercomputer Created With EC2

While this is a nice use of Amazon's EC2 to build a high-throughput system, it doesn't translate as nicely to what most High Performance Computing users need: high network bandwidth, low latency between nodes, and large, fast shared filesystems on which to store and retrieve the massive amounts of data being used or generated. The cloud created here is only useful to the subset of researchers who don't need those things. I'd have a hard time calling this High Performance Computing.

Look at XSEDE's HPC resources page. While each of those supercomputers has something special about the services it offers (GPUs, SSDs, fast access, etc.), they all spent a significant portion of their build budget on a high-performance network to link the nodes for parallel codes. They also spent money on high-performance parallel filesystems instead of more cores. Their users can't get their research done effectively on systems or clouds without those important elements.

I think it's great that public cloud computing has advanced to the point where useful, large-scale science can be accomplished on it. Please note that it takes a separate company (CycleCloud) to make it possible to use Amazon EC2 in this way (lowest cost and webapp access) for your average scientist, but it's still an advance.

Disclaimer: I work for XSEDE, so do your own search on HPC to verify what I'm saying.

Comment: Linux ISO's mostly (Score 4, Informative) 302

by dlapine (#43541285) Attached to: Ask Slashdot: Do You Move Legal Data With Torrents?

At work I need to install several different types/versions of Linux OSes for testing. I always torrent the ISO as a way of "paying" for the image that I'm using.

A few years back, we did some experimenting with torrents over the TeraGrid 10GbE backbone, to see how well that worked over the long haul between IL and CA. With just 2 endpoints, even on GbE, it wasn't better than a simple rsync. We did some small-scale tests with fewer than 10 cluster nodes on one side, but it still wasn't as useful as the wide-area filesystem we were testing against. BitTorrent protocols just aren't optimized for a few nodes with a fat pipe between them.

I am interested in looking at the new BitTorrent Sync client to see how it works for our setup. We have many users with tens of TBs of data to push around on a weekly basis.

Comment: Re:Just how would this work? (Score 1) 257

by dlapine (#41856675) Attached to: Richard Stallman: Limit the Effect of Software Patents

If the purpose of patents is "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries" then no, I don't see how restricting patents to physical implementations (not software on a general purpose computing device) utterly defeats that purpose. Nothing restricts the author from enforcing his patent on physical reproductions, he just can't claim that a non-physical implementation is a violation.

Can you give any examples where this change would stop or slow scientific progress?

Comment: Ex-NASA employees (Score 5, Insightful) 616

by dlapine (#39661689) Attached to: Ex-NASA Employees Accuse Agency of 'Extreme Position' On Climate Change

I take some relief in noting that these are "ex-NASA" employees.

Per the article, it seems that these guys mostly worked at the Texas-based Johnson Space Center:

"Keith Cowing, editor of the website NASA Watch, noted that the undersigners, most of whom have engineering backgrounds, worked almost exclusively at the Houston-based Johnson Space Centre, a facility almost entirely removed from NASA's climate change arm."

Figures.

Why is it that there are so many amateur climatologists in Texas who know so much, but publish so little? I wonder if these gentlemen even bothered to visit the site of the "Plants Need CO2" sponsor, Leighton Steward, to see who also agreed with their opinions. I'm not linking to that site, and I'd surely want to avoid association with anyone with ideas like that.

Maybe Steward just punked them. Yep, that's got to be it.

Comment: Government and Higher education (Score 1) 506

by dlapine (#38985109) Attached to: Ask Slashdot: Where Are the Open Source Jobs?

Don't overlook positions in government or higher education. Besides often being OS-agnostic, these employers are located all over the country, not just in the SF area.

Want to travel a lot, have a nice career path, and be instantly useful thanks to Linux knowledge? Try DISA. I'm not sure if they are still hiring for their intern program (the Army uses "intern" in a different way than business IT does), but it was a great opportunity for some people I know. The DOE is another area that looks for reliable, Linux-knowledgeable sysadmins.

Look at the Top500 list and see how many big clusters are run by universities and their affiliates. Then check out how many of those systems run Windows, and then laugh. Higher education also runs a lot of smaller systems on Linux, and lots of positions are starting to open up there. If you have cluster admin knowledge, you're a shoo-in. If not, take a lower position somewhere they do run clusters and let them know that you'd be interested in moving up.

Disclaimer: yes, I work at NCSA at the University of Illinois, Urbana, and yes, we have some Linux positions open. Do the legwork yourself, however; it'll make you look smarter.

Comment: HPC Planning (Score 1) 3

by dlapine (#37379376) Attached to: Best Use For A New SuperComputer (HPC)

You're about to receive a large amount of hardware from the vendor, and you haven't decided which GPUs to use, which interconnect for communications, what OS would be appropriate, or the types of workloads your users will be running (beyond your base set)? Really? If that's the case, no amount of information from Slashdot will solve your problems.

If you have no interconnect chosen, how will you rack the systems in the case that cable lengths are an issue, as they are for IB? Do you even have nodes that natively support both 10GbE and IB? I highly doubt it. What about your core network switches? 1200 ports (plus switch fabric) of IB or 10GbE might cost more than those 1200 nodes. You're also talking about adding GPUs and a high-speed network adapter to each of 1200 nodes; what kind of manpower do you have for the task of installing 2 PCIe cards per node across 1200 nodes? I'm assuming that you'd want to be in operation sometime before Christmas. I won't even ask about what kind of large-scale storage you have planned. I shudder to think of what power and cooling requirements you've already overlooked or made impossible.

Who's your vendor? If they really let you purchase 1200 nodes without any sort of planning, they should be dragged behind horses and shot. What a waste of money.

I'm sorry to be so negative, but you guys really screwed the pooch on this one. When you are designing a supercomputer, the very first thing to be decided is what the use cases are, especially if you're trying to generate revenue from the system. You have a limited amount of money to buy computing power, interconnect, storage, and facilities, so you have to optimize your purchase in those areas around the expected use of the system. Not to mention operating costs.

Sheesh. I hope you're just pranking us.

Comment: I knew him (Score 1) 70

by dlapine (#37340830) Attached to: Michael Hart, Inventor of the E-book, Dead At 64

My boss suggested that I attend a weekly "geek lunch" that a group of the older computer-savvy fellows held at the U of I's Beckman Institute, and I met him there. I was aware of Project Gutenberg before that but hadn't used it much. Michael was a good advocate for ebooks before anyone got around to coining that particular terminology. The last few times we met, I remember him being very excited, as he had samples of various new ebook readers to try out. He was testing them to see how well they integrated with Project Gutenberg and was glad that more people would have easy access to it.

Over the last fall, the group met weekly, and I helped him with the process of making digital copies of the Gutenberg archive on different filesystems on individual drives. The entire Gutenberg archive is about 300GB with everything extracted, and we could dual-format a 750GB drive to fit one copy on NTFS and another on ext3. That was a fun experience; most people don't get to play with a real-life 300GB data set.

I hadn't been to a meeting in a while, darn it. I'll miss him.

Comment: Re:Comparitive Advantage (Score 1) 276

by dlapine (#35856874) Attached to: China Space Official Confounded By SpaceX Price

I've heard that the Merlin 1c engines are about $1M apiece, and that SSMEs run $50M each at current production rates.

It's hard to verify pricing for components, especially for SpaceX, as they do so much in-house. Who outside the company knows what the actual production costs of each part are? Hmmm, perhaps we can estimate the maximum possible cost of each engine based on launch prices and the assumption that SpaceX is not taking a loss on each launch.

A Falcon 9 launch costs $54M and uses 10 Merlin 1c engines. I'm going to ignore the cost differences between the upper-stage (vacuum) and lower-stage engines. If everything else (fuel, lower & upper stages, facility lease, profit) were $0, each engine would cost at most $5.4M. In fact, looking at the announced pricing for the Falcon Heavy, $110M max, with 27+1 engines, you're looking at less than $4M an engine, max.

Given the costs of the rest of the launch and the number of engines (production scaling efficiencies) involved, I don't think that a $1M-per-engine estimate is too far off. That puts engines at 25% of the launch cost, and I'm OK with that estimate. I know that the Shuttle SRBs are a higher percentage of the cost of an SLS, but those are an outlier. You can buy 4 Atlas CCBs (with 8 engines) for the price of 1 SRB. Given that pricing, I'm not sure that any $10M engine out there has 10x the thrust of a Merlin 1c.
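The upper-bound reasoning above is simple arithmetic; here is a quick sketch in Python, using the launch prices and engine counts quoted above (the assumption is the same one stated in the text: SpaceX isn't losing money on a launch, so engines can't cost more than the whole launch price divided among them):

```python
# Ceiling on per-engine cost: pretend the entire launch price goes to
# engines (everything else -- fuel, stages, facilities, profit -- is $0).
def max_engine_cost(launch_price_usd, engine_count):
    """Maximum possible per-engine cost if engines were the only expense."""
    return launch_price_usd / engine_count

# Falcon 9: $54M launch, 9 first-stage Merlin 1c engines + 1 vacuum engine
f9_ceiling = max_engine_cost(54e6, 10)

# Falcon Heavy: $110M announced max price, 27 first-stage + 1 upper-stage
fh_ceiling = max_engine_cost(110e6, 28)

print(f"Falcon 9 ceiling:     ${f9_ceiling / 1e6:.2f}M per engine")
print(f"Falcon Heavy ceiling: ${fh_ceiling / 1e6:.2f}M per engine")
```

The Falcon 9 bound comes out to $5.40M and the Falcon Heavy bound to about $3.93M per engine, matching the figures in the comment.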

So SpaceX is probably good with the whole multiple engine thing, at least on price.

Comment: Notification System (Score 4, Insightful) 168

by dlapine (#35617342) Attached to: Univ. of Illinois Goes War-of-the-Worlds On Students

I'm an alumnus of the U of I, and I work here as well. I get these notifications. I thought I'd bring up 2 points:

  1. Fortunately, given spring break, the actual number of people on campus able to read this was quite low.
  2. Unfortunately, we just had a fire on Green Street 2 days ago, and we got an alert from the same system informing us about it. So this warning was probably taken very seriously for those 12 minutes.

Overall, I'm satisfied with the system and I was impressed by the very explicit letter from the chief both explaining the error and accepting the blame for the mistake. She also detailed the upcoming efforts to address the error. I'd like to see the same level of accountability from my ISP or phone company.

Comment: Re:The difference between Google and Bing is (Score 1) 356

by dlapine (#34869340) Attached to: Google vs. Bing — a Quasi-Empirical Study

Hmmm, I can't say that my first attempt to use Bing gave me any Lindsay L. results, but NoScript did put up a cross-site scripting warning after I attempted to "disable" a helpful toolbar with my Facebook info proudly displayed.

I'm positive I don't need any search provider tapping into my Facebook info, and I certainly don't want to be reminded of it on the front page! That's, like, TSA scary.

Ignoring the blatant invasion of my privacy for a moment, I'm happy to say my (small-sample-size, insert disclaimer here) test revealed that searching for "best all mountain skis" works differently in Google than in Bing. Google gave a list of places to buy "the best all mountain skis" as the top listings, whereas Bing gave a set of review sites telling me which ones were the best.

Not sure how to rate one result as better than the other; they're just different. Perhaps Google feels that its users know what they want, so it just points them at it. Perhaps Bing believes that its users want to learn what the best choice is for them. It's hard to put a metric on that. I'd hazard an informed guess that both search providers weigh their results according to the desires of their users, as measured by click-through rates. Bing users might want more hand-holding, whereas Google users might want fewer distractions before they learn the location of something.

All that being said, I'm still not using a search engine that displays my Facebook account info. Yuck. I don't care if this is Facebook's fault; I don't want to see it on a random search page as part of the interface.

Comment: Re:Terraforming (Score 1) 1657

by dlapine (#33073358) Attached to: Global Warming 'Undeniable,' Report Says

Terraforming is great, if you have someplace else to practice. Trying to terraform the Earth with our current level of knowledge about the process and its possible side effects is like doing experimental brain surgery on yourself. If we screw it up, we have no place else to go. Paraphrasing the Tick: I like the Earth, I keep all my stuff there. Let's practice terraforming on Mars first, to get the bugs out. Until then, let's not make things worse here by accident.

My biggest gripe about this whole debate is the countless number of people who fail to think at all, and believe that we can ignore the mounting evidence that there even is an issue. Until they recognize the warning signs the scientists keep pointing out, we really can't have a debate about the issue and what to do about it. Humanocentric or not, the planet seems to be getting hotter. Perhaps all those scientists are reading things incorrectly, or drawing the wrong conclusions, but even the chance that they are onto something ought to cause all of us to be very concerned. And not just about the gas mileage of SUVs.

Comment: Re:This means Direct (Score 3, Interesting) 342

Um, I was referring to Direct, the "STS without the Space Shuttle" design, not the Ares I "Stick". I was looking at the actual design for Direct's J-130 model right here. It's a 1.5-stage design with all engines ground-lit and the boosters jettisoned during flight, just like the STS.
I do agree with your statement about the Ares I:

I worked on Ares and know what the design is. That thing was a gigantic piece of crap just waiting to fail. Badly. From the barely stable structural dynamics of a 400ft long pencil flying at mach 6, to the ugliest, most disaster prone separation sequence; that design was doomed to fail.

But that's not what I was talking about. :)

Also, the very first class you take in Aerospace Engineering teaches you exactly why SSTO (single stage to orbit) is not as cost-effective as multiple stages. So your argument that this design is better because it doesn't need a second stage is not a good one. The design might be simpler and easier to build, but it requires so much more fuel per launch that it isn't worth it.

As for my argument about "single stage", I was referring to the fact that the design already gets 77mT to orbit with just a single stage (OK, 1.5 stages counting the SRBs) and that there was room for more growth, like a second stage, if you needed more lift and were willing to pay extra for it. Did I mention the option to use 5-segment SRBs? I could go on... It's just that the J-130 is the cheapest option for a new HLV, and it leverages all the work and research that went into the STS program, rather than throwing it away.

That's a good thing, in my opinion.

Comment: Re:This means Direct (Score 1) 342

Per the official design from the Direct team (sorry for the PDF, that's what they have), it's 77,835 kg to a 30nm x 100nm orbit under the regular NASA GR&As. It only drops to 70mT if you arbitrarily factor in an additional 10% margin, which doesn't account for their own internal, undocumented 15% margin. I like engineers who give themselves leeway.

Short answer: yes, the 1.5-stage J-130 does 77mT to orbit per NASA rules.
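The gap between the 77mT and 70mT figures is just that extra 10% margin applied to the nominal payload; a quick sanity check in Python, using the 77,835 kg figure quoted above:

```python
# Direct J-130 nominal payload to a 30nm x 100nm orbit under
# standard NASA GR&As, per the Direct team's figures.
payload_kg = 77_835

# The ~70mT number comes from applying an extra, arbitrary 10% margin.
with_margin_kg = payload_kg * 0.90

print(f"Nominal:         {payload_kg / 1000:.1f} mT")      # 77.8 mT
print(f"With 10% margin: {with_margin_kg / 1000:.1f} mT")  # 70.1 mT
```

So both numbers describe the same design; they just differ in how much margin is baked in.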
