
Comment Re:Amd also has better MB's for the price (Score 1) 104

Since you seem to care vaguely about stability, you might want to know that none of Intel's current desktop chipsets support ECC memory. I'm speccing a system right now and it looks like I'm going to have to buy a non-Intel chip.

PS. If you still want an Intel, the Xeon E3 1200 series is the closest ECC-capable part to a desktop chip.

Comment It's a numbers game (Score 2) 582

Let's say that the average company employs 1 senior engineer for every 10 fresh outs. Every fresh out eventually accumulates senior-level experience, but only one senior slot exists for every ten junior ones, so by the pigeonhole principle there are going to be a lot of senior engineers who can't work as senior engineers.

In my set of old college friends, only two of us are senior engineers. The rest, all of whom are great engineers, have found other positions. Some by encouragement, some by changing interests, some by following the path of least resistance, and some by means that I am not aware of. It has to be that way.

Comment let's be careful (Score 2) 403

I feel that Bitcoin discussions always fall apart because people cannot make a distinction between the Computer Science of Bitcoin and the Economics of Bitcoin.

The Bitcoin Algorithm is a peer-to-peer transaction protocol. The algorithm was self-published by S. Nakamoto as "Bitcoin: A Peer-to-Peer Electronic Cash System" in 2008. To my knowledge, the paper has not been accepted in a peer-reviewed Computer Science journal. Nevertheless, there appears to be growing acceptance that the underlying technology is sound. However, that says nothing about the validity of applications built on the peering transaction protocol.

The Economics of Bitcoin is about how the original coins were generated, how new coins are minted, how the currency is regulated, etc. Nakamoto's paper has nothing to say about these issues. In particular, many people feel that Bitcoins have been distributed in a manner that makes them a Ponzi scheme. I am not aware of any paper published in a peer-reviewed Economics journal that contradicts this.

The lack of a peer-reviewed CS publication is terrible, but not fatal. Bitcoin has enough popularity now that we can expect crypto researchers to be looking at it as an easy target for a quick publication. The fact that nobody has published a flaw in the technology of Bitcoin is not a bad sign. This technology might be useful for something.

However, the lack of a peer-reviewed Economics publication addressing the Ponzi scheme issue is fatal. In addition, I'm pretty sure there are other important things that need to be investigated by Economists. I can't say much more about this since I am a computer scientist.

I hope people will understand that there is a distinction between Bitcoin technology and Bitcoin coins. It is possible to have a different opinion on each, and an opinion on one does not imply anything about the other. I urge people in future Slashdot discussions to make it clear in their posts whether they are talking about the technology or the coins. Thank you.

Comment Re:Another visitor! (Score 1) 344

You have caught bitcoin fever.

I can easily substitute any collectible item everywhere "bitcoin" appears in your post. For example: collectible coins, stamps, Beanie Babies, paintings, sculptures, vintage cars, Fabergé eggs, Barbie dolls, Swatches, baseball cards, etc.

Let's talk about something specific. A Picasso serves exactly the same role as a bitcoin. There are a limited number. They are difficult to forge. They have value because people think they have value. They are independent of any government. They can be used as an alternative store of wealth and investment. They can be stolen. The ownership is tracked by a community.

The peering algorithms which underlie bitcoins are somewhat interesting. Beyond that, there is nothing new.

Comment weed out classes (Score 1) 606

The sciences and engineering programs tend to have intro classes that weed out the weaker students. This is at odds with one of the primary goals of teaching. However, I think these weed-out classes are ultimately a good thing, because society pays a cost when poor students get science and engineering degrees.

Bad science creates a burden on society by undermining our efforts to advance knowledge. Every poorly conducted experiment and incorrect published paper creates extra work for everybody. The erroneous results must be refuted or else the misinformation spreads. In addition, peer reviews don't come free. A poor scientist cannot contribute useful reviews, but they can submit papers that need review (more of them than better scientists submit). They are a constant burden on the community.

Bad engineering also creates extra work for everybody. Engineered systems are susceptible to weakest-link failures: a broken part cannot be ignored, because it could bring down the whole system. Good engineering teams quickly identify poor engineers and remove them. Engineering teams made up of poor engineers dump broken products on end users, which causes problems for everyone. Even if the engineers are held accountable, the problems don't go away on their own. Society has to waste time and effort cleaning up their messes.

It is always saddening to fail a student. However, in my experience, students are not expelled from school for failing a weed out class. They are simply encouraged to change majors. Universities have enough departments of varying difficulty that any student can graduate with a degree. It is up to the student to make the effort to get their preferred degree.

Comment buzzer speed (Score 1) 674

Daniel Gruhl, an IBM researcher, gave a talk at Caltech on Tuesday where he expounded on the buzzer issue. He said that Watson uses a solenoid which can respond in 5-10 ms. He also said that Ken Jennings is capable of responding in less than 5 ms; Jennings can respond that quickly because he uses the audio to anticipate the signal.

Comment Re:I don't get the big deal (Score 1) 172

Beardo, here's the short of it.

1. President signs HSPD-12, which mandates issuing a new, more secure ID card to all federal employees. HSPD-12 is all about "secure and reliable" ID. It has nothing to do with background checks. Full text: http://www.dhs.gov/xabout/laws/gc_1217616624097.shtm

2. OMB is tasked with carrying out HSPD-12.

3. OMB arbitrarily adds background checks and "employee suitability". HSPD-12 does not authorize this. This bears repeating. HSPD-12 does not mandate background checks. The background check is a fantasy invented by OMB.

4. Presumably because there are too many background checks to be done, the background checks are being partially outsourced. For example, ChoicePoint handled JSC. Here's a nice article about ChoicePoint: http://www.wired.com/politics/security/news/2005/02/66685. It's an old article now, but it was relevant 3 years ago.

5. SF85 is the form that landed on my desk with instructions to "sign or else resign". The "carte blanche" part is the first paragraph on the last page. It basically says, "authorize any investigator ... obtain any information ... is not limited". Full text is here: http://www.opm.gov/forms/pdf_fill/sf85.pdf

I would be happy to undergo an actual, real background check and receive an S or TS clearance. Such a check has real value and would open up work opportunities for me. I am not willing to let some random dude investigate me and store that information in some unknown location to be stolen or shared with arbitrary entities.

Comment Re:I don't get the big deal (Score 2) 172

If the goal of the background check is to determine whether or not you are susceptible to blackmail, then I would say that they have succeeded admirably.

"Sign this form which gives an unnamed private contractor carte blanche to investigate your personal life or else we will fire you."

I have so much more to say on this subject beyond a quip, but I'm tired. It's all been said already. The correct course of action is obvious to anybody who is aware of the facts.

In high school I read about McCarthyism. It seemed like a fairy tale to me; I couldn't understand how people could ever do something like that. These last 3 years have been a bitter lesson for me.

Comment Re:Function re-ordering inside the image? wow (Score 1) 141

Well, if you're really crazy, you put the init code together into a contiguous block so you can use it as a data buffer after you're done initializing everything.

Of course we realized that that was still too conservative. The next year we started allocating data buffers inside the init function while we were still executing it. Just follow the instruction pointer down and allocate behind it.
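
For the curious, the first trick looks roughly like this. A minimal sketch, not our original code: it assumes a GCC-style toolchain plus a custom linker script that keeps a hypothetical ".reclaim" section contiguous and exports its boundary symbols, and it only works where code runs out of writable RAM (embedded targets, not a hosted OS with a read-only text segment).

#include <cstddef>

// Provided by the (hypothetical) linker script: the bounds of the
// contiguous block holding all ".reclaim" functions.
extern "C" char __reclaim_start[], __reclaim_end[];

__attribute__((section(".reclaim")))
void init_everything() {
    // ... one-time setup: bring up peripherals, build tables, etc. ...
}

int main() {
    init_everything();
    // The init code never runs again, so recycle its memory as a buffer.
    char* buf = __reclaim_start;
    std::size_t size = static_cast<std::size_t>(__reclaim_end - __reclaim_start);
    // ... use buf[0..size) as scratch space ...
    (void)buf; (void)size;
    return 0;
}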

Comment Re:When do they get the question? (Score 1) 220

Well, remember this is Jeopardy, so the contestants receive the 'answer,' and must supply the 'question.'

Actually, you answer the question just like on any other game show. The cute oddity of Jeopardy is that you frame your answer in the form of a question. Have no doubt, though: you are most definitely giving an answer.

If you watch a lot of Jeopardy you will notice that when a contestant fails to give the answer in the proper form Alex will say, "I'm sorry. You failed to give your answer in the form of a question." I am quite confident that Alex will never say, "I'm sorry. You failed to give your question to the answer."

And now, just to blow your mind.

Q. The form of an answer on Jeopardy.
A. What is a question?

:)

Comment Re:Is C++ ever the right tool for the job? (Score 1) 509

with C++ it's not always obvious what a compiler might want to do with '+' thanks to operator overloading and rather convoluted implicit casting rules.

I agree that optimized, compiled code might as well be a black box nowadays.

However, I think the problem with C++ is that it is not obvious what your fellow _programmer_ wants to do with '+'. That's the real problem with operator overloading. Every time I have to dive through 4 layers of pre-processed headers to figure out what '+' does is another nail in that coffin.
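
To make that concrete, here's a tiny hedged example (the Meters class and everything in it is invented for illustration, not from any real codebase) where an implicit conversion silently decides what '+' means:

#include <iostream>

struct Meters {
    double v;
    Meters(double d) : v(d) {}                  // implicit double -> Meters
    Meters operator+(const Meters& o) const {   // overloaded '+'
        return Meters(v + o.v);
    }
};

int main() {
    Meters d(2.0);
    Meters r = d + 3;  // '3' is silently converted to Meters(3.0) before '+'
    std::cout << r.v << "\n";  // prints 5; harmless here, but the conversion
                               // is invisible at the call site
}

Harmless in fifteen lines; much less so when the class, the conversion, and the operator live in three different headers.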

Comment Re:Move to quantified data (Score 2) 271

I'll just say this straight to your face. "Liquidity" is the lie you tell yourself so you can sleep at night. Sorry.

I've seen your arguments a hundred times. It's always the same.

Investors: Let's regulate HFT.
HFT: We do this to help the investors.
Investors: We hate you.

Think about this. If you're really helping the investors, then why do they hate you so much?

I don't think you're a bad person. I think you genuinely believe that you're helping. You just need to be a little more self-critical and ask yourself, "Am I really helping? Are there metrics I can collect to measure how much I'm helping?" Does your firm produce an annual report detailing how much liquidity you generated that year? Do you have milestones, such as 500 units of liquidity by February? Does your industry have magazines and industry watchers that talk about and evaluate how the various firms are doing in their competition to provide liquidity?

Comment Re:What's the point? (Score 2, Informative) 184

1. Immediate lighting rasterizer = O(S*N*L)
2. Deferred lighting rasterizer = O(S*N) + O(S*L)
3. Ray tracer = O(S*log N*L*D)

where N is the number of solid 3D elements, L is the number of direct illumination lights, D is the indirect lighting depth and S is the number of screen elements.

No matter how I look at this, ray tracing is not very compelling.

Once upon a time, we thought ray tracers were fast. If we hold screen size constant and set the number of bounces to 1 for a fair comparison to a 1992-era rasterizer, we get the "classic" complexity analysis comparison.

1. Rasterizer = O(N*L)
2. Ray tracer = O(log N*L)

Winner: ray tracer. However, a few things have changed since 1992. First, screen size is important and should not be ignored. This is due to the increasing importance of screen space effects. Second, deferred lighting broke rasterization in half. Third, rasterizers can now do convincing shadows and fake global illumination. So, to keep up with the quality of the average 2010 rasterizer we have to set D>1. This is a 1-2-3 knockout combo for ray tracers.

Rasterizers are the current complexity king. Now I'll tell you why they will remain king. Ray tracers have an architecturally bad design. It looks like this:

for p in rays:                        # one ray per screen element: O(S)
    for i in items: raytest(p, i)     # intersection tests: O(N), or O(log N) with a spatial tree
    for s in lights:                  # direct lighting: O(L)
        for t in bounces: ...         # indirect bounces: O(D)

There is a beautiful elegance to this. It is a good way to learn how to do computer graphics. Unfortunately, this kind of nested-loop architecture always leads to bad, multiplicative complexity: O(f1*f2*f3*f4*...).

Rasterizers have a better basic architecture. A scatter-gather type architecture tends to lead to nice, additive complexity: O(f1)+O(f2)+O(f3)+O(f4). Don't take my word for it; look at the history. The O(N*L) immediate rasterizer got broken up into the O(N)+O(L) deferred rasterizer as soon as enough memory became available. Indirect lighting followed the same pattern.
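
Here's a hedged sketch of that split (all types and functions are stand-ins, not a real renderer's API): the O(S*N*L) loop nest becomes two additive passes over a G-buffer.

#include <cstddef>
#include <vector>

struct Vec3  { float x = 0, y = 0, z = 0; };
struct Texel { Vec3 position, normal, albedo; };  // one G-buffer entry
struct Mesh  {};
struct Light {};

Vec3 shade(const Texel&, const Light&) { return {}; }  // stand-in BRDF
void rasterize(const Mesh&, std::vector<Texel>&) {}    // stand-in raster pass

void render_deferred(const std::vector<Mesh>& meshes,
                     const std::vector<Light>& lights,
                     std::vector<Texel>& gbuffer,  // one entry per screen element
                     std::vector<Vec3>& frame) {
    // Pass 1: O(S*N). Rasterize each mesh once, recording the visible
    // surface at every screen element. No lighting happens here.
    for (const Mesh& m : meshes)
        rasterize(m, gbuffer);

    // Pass 2: O(S*L). Shade each screen element once per light,
    // independent of the scene complexity N.
    for (std::size_t s = 0; s < frame.size(); ++s)
        for (const Light& l : lights) {
            Vec3 c = shade(gbuffer[s], l);
            frame[s].x += c.x; frame[s].y += c.y; frame[s].z += c.z;
        }
}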

I'm not saying that ray tracers will always be slow. But I _am_ saying that if ray tracers ever become fast again, it will be because they have been architecturally restructured into something that looks a lot like a rasterizer. In such a case, any claimed victory for the ray tracer would be a Pyrrhic one.

Comment $ per capita (Score 1) 367

$578M for 4,200 students comes out to about $137k per student. The article cites "land costs" as one of the reasons for the price tag. This isn't a fair comparison, but to put it into perspective for non-Californians: homes are really expensive here. A home for a family of four in an average area can easily cost you $600k, and if you want to live in an area with low crime, expect to pay over $1M. Anyway, four people for $600k is $150k per person, which is comparable to the school.
