Comment: Re:I'll be rich! (Score 1) 61

by TechyImmigrant (#49186017) Attached to: SpaceX's Challenge Against Blue Origins' Patent Fails To Take Off

I knew the patent system was horribly broken but this is obscene. Perhaps I'll patent "Utilizing a multi-wheeled conveyance to traverse a network of engineered level surfaces from an origination point to a destination point". This patent doesn't seem to cover any real technology, just the general idea of "launching from a land site and landing on an ocean platform".

It's rather like the "Do X, but on the internet" patents. Here it's "Do X, but at sea, on a platform".

Comment: Re:Deja vu all over again (Score 1) 111

there is a linear relationship between code density and the functional bandwidth of the instruction caching at every level.

Even if this were true (which it isn't), and even if you were right that CISC has twice the code density of RISC (which it doesn't), it would still be a long way short of proving your claim that CISC gives better performance than RISC.

I don't presume to understand why ARM do what they do.

Certainly not 2X; that was normal Slashdot hyperbole. 1.2-1.5X depending on what you compare with. But you are putting words in my mouth. Nowhere did I claim RISC was better than CISC or vice versa in any general sense. I said that compact instruction encodings are better than inefficient instruction encodings, which they clearly are for the broad class of CPU memory hierarchies that have been around in recent years. The decoding benefits of RISC were tangible for 1990-era CPUs, but the decoding overhead hasn't changed, while the rest of the CPU has got much bigger. So the decoding overhead is now negligible for the CPUs we put in phones and PCs, while the instruction bandwidth has a bottom-line effect on performance. You can address it with wider buses and bigger caches, but then you can have wider buses and bigger caches with smaller instructions too.
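The arithmetic behind this is simple enough to sketch. Assuming a fixed cache size and fetch width (the numbers below are illustrative, not measurements from any real part), a denser encoding linearly increases both the instructions resident in the I-cache and the instructions delivered per fetch cycle:

```python
# Back-of-envelope model (invented numbers, purely illustrative):
# for a fixed cache size and fetch width, instruction capacity and
# fetch throughput both scale as 1 / (average instruction size).

CACHE_BYTES = 32 * 1024          # assumed 32 KiB L1 I-cache
FETCH_BYTES_PER_CYCLE = 16       # assumed 16-byte fetch width

def effective_instrs(avg_instr_bytes):
    """Instructions resident in the cache, and fetched per cycle."""
    resident = CACHE_BYTES / avg_instr_bytes
    per_cycle = FETCH_BYTES_PER_CYCLE / avg_instr_bytes
    return resident, per_cycle

fixed32 = effective_instrs(4.0)   # fixed 32-bit encoding
compact = effective_instrs(3.0)   # ~1.33x denser compact encoding

print(fixed32)   # 8192 instructions resident, 4 fetched per cycle
print(compact)   # ~10923 resident, ~5.33 per cycle
```

Same transistors, a third more useful instruction bandwidth, which is the whole point about density.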

I'm not seeking to prove a claim. It's just the way things are.

Accordingly, a CPU with a real Huffman-coded instruction set might be even better. Feel free to go implement one.
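To make the Huffman point concrete, here is a toy sketch: Huffman-code a made-up opcode frequency distribution and compare the average code length against a fixed-width encoding and against the entropy lower bound. The opcode names and frequencies are invented for illustration, not profiled from any real ISA:

```python
import heapq, itertools, math

# Invented opcode frequencies (illustrative only).
freqs = {"load": 0.30, "store": 0.15, "add": 0.20,
         "branch": 0.20, "mul": 0.10, "div": 0.05}

# Standard Huffman construction: repeatedly merge the two least
# frequent subtrees, prefixing '0'/'1' onto their members' codes.
tick = itertools.count()  # tie-breaker so heap entries always compare
heap = [(p, next(tick), {op: ""}) for op, p in freqs.items()]
heapq.heapify(heap)
while len(heap) > 1:
    p1, _, c1 = heapq.heappop(heap)
    p2, _, c2 = heapq.heappop(heap)
    merged = {op: "0" + code for op, code in c1.items()}
    merged.update({op: "1" + code for op, code in c2.items()})
    heapq.heappush(heap, (p1 + p2, next(tick), merged))
codes = heap[0][2]

avg_len = sum(freqs[op] * len(code) for op, code in codes.items())
entropy = -sum(p * math.log2(p) for p in freqs.values())
fixed = math.ceil(math.log2(len(freqs)))  # bits for a fixed-width opcode

print(f"fixed={fixed} bits, huffman={avg_len:.2f} bits, "
      f"entropy bound={entropy:.2f} bits")
```

With these made-up numbers the Huffman code averages about 2.45 bits per opcode versus 3 bits fixed-width, close to the ~2.41-bit entropy floor. Real variable-length ISAs stop well short of this because decode complexity explodes, which is presumably why nobody ships one.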

I don't need to presume, I know that they removed some complex instructions because they made the hardware more complex and reduced performance.

Which contradicts the evidence of the vast majority of CPUs ever made, which is that they get faster as you throw more gates at them.

Comment: Re:Uprising? (Score 1) 322

by TechyImmigrant (#49182473) Attached to: 'The Moon Is a Harsh Mistress' Coming To the Big Screen

It's too many words to fit on a marquee, and takes too long to say to the ticket vendor. By the time you say "One for The Man Who Went Up a Hill and Came Down a Mountain", the ticket agent has already given you your ticket and change, and is halfway through serving the next person in line.

But I did actually see that film. Making life easy for the marquee layout is clearly not the same thing as making money.

Comment: Re:Politics aside for a moment. (Score 1) 535

I remember an interview from years back where she was asked if she used email and her response was along the lines of " Oh no. Emails are discoverable".
So yes, she knew exactly what she was doing and why she was doing it.

Emails are discoverable whether you use a public or a private email address.

Yes. That's what makes the current news interesting. She wasn't hiding them. She handed them over in response to a request. She wasn't prohibited from using personal email; the rules had no such restriction. But her actions conflict with what I remember her saying was motivating her to not use email.

Comment: Re:Ciphersuite Negotiation (Score 1) 72

by TechyImmigrant (#49181859) Attached to: FREAK Attack Threatens SSL Clients

I keep seeing people declare TLS's cipher choices to be a mess and propose cleaning them up somehow. Look deeper. The problem is not adding things, it is retiring things. If you can't retire algorithms, you can't clean up. If you can't clean up, then negotiation is bad.

So you have to live with your initial choices. Make them well.

Comment: Re:Deja vu all over again (Score 1) 111

It's certainly not simple. But there is a linear relationship between code density and the functional bandwidth of the instruction caching at every level.

I don't presume to understand why ARM do what they do. Whoever came up with that interrupt architecture needs a serious talking to.

Comment: Re:Ciphersuite Negotiation (Score 1) 72

by TechyImmigrant (#49178563) Attached to: FREAK Attack Threatens SSL Clients

One of the problems with TLS is we keep adding better ciphers, but the old weak ciphers hang around and implementation errors leave us vulnerable to downgrade attacks. A big problem with negotiable cipher suites is the inability to retire old ciphers. We might like to think it can be done, but it isn't a solved problem and TLS is a prime example of that failure.

But crypto has moved on a long way, and a lot more of the basic crypto functions now come with mathematical proofs of the hardness bounds of attacks, which was simply not true when those older ciphers, hashes, and MACs were published.

I would prefer negotiation to be in terms of algorithm parameters we can negotiate on the fly, such as the number of rounds on the cipher, or the amount of entropy fed into a sponge construction. It's easy to increase an iteration count. It's hard to add a new algorithm to a device after it's been built. These methods come with their own problems, but they're a heck of a lot less of a problem and a heck of a lot more solvable than ciphersuite negotiation, which has failed year after year.
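The shape of that idea can be sketched in a few lines. This is a hypothetical protocol fragment, not anything from TLS: the algorithm (here PBKDF2-SHA256, just as a stand-in for any tunable primitive) is fixed, and the peers negotiate only a numeric work parameter, with a floor so a peer cannot talk you down into weak territory:

```python
import hashlib
import os

# Hypothetical sketch: negotiate a work factor, not an algorithm.
# All names and numbers here are invented for illustration.

MIN_ITERATIONS = 100_000  # protocol floor: refuse to go below this

def negotiate_iterations(client_max: int, server_max: int) -> int:
    """Agree on the smaller of the two offers, but enforce the floor
    so an attacker in the middle cannot negotiate the work down."""
    agreed = min(client_max, server_max)
    if agreed < MIN_ITERATIONS:
        raise ValueError("peer offered too little work: refusing downgrade")
    return agreed

def derive_key(password: bytes, salt: bytes, iterations: int) -> bytes:
    # The primitive itself never changes; only its iteration count does.
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

iters = negotiate_iterations(client_max=400_000, server_max=250_000)
key = derive_key(b"correct horse", os.urandom(16), iters)
print(iters, len(key))   # agreed iteration count, 32-byte key
```

Raising MIN_ITERATIONS over the years is a one-line change that every deployed implementation already knows how to execute, which is exactly the property ciphersuite retirement lacks.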

There is a reasonable physical argument that, even with quantum computers that can do what people claim they can do (not likely), it is impossible to brute-force anything above O(2^360). So let's accept that we can pick a secure key size, pick it, and focus on the parameters we can alter over time, rather than those we cannot. Also focus on things that are implementable by any reasonable programmer or circuit designer. It's incumbent on any crypto system designer to fight against complexity at all costs. Complexity will undermine your secure algorithms and protocols in ways you cannot control.
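The arithmetic behind that kind of bound is easy to check for yourself. Granting a deliberately absurd adversary (the rate below is invented, far beyond any physical machine) running for the age of the universe, the total operations performed still don't come close to a 2^360 keyspace:

```python
# Sanity-check the "stop worrying about key size" arithmetic.
# The ops rate is a deliberately absurd upper bound, not real hardware.

SECONDS_PER_YEAR = 3.15e7
ops_per_second = 1e30            # wildly generous planet-scale adversary
universe_age_years = 1.4e10      # ~13.8 billion years, rounded

ops_ever = ops_per_second * SECONDS_PER_YEAR * universe_age_years
keyspace_360 = 2 ** 360          # ~2.35e108

print(f"total ops ever: ~{ops_ever:.1e}")
print(f"2^360 keyspace: ~{keyspace_360:.1e}")
print(ops_ever < keyspace_360)   # True, by ~60 orders of magnitude
```

Whether 2^360 is the right number is beside the point; the point is that once the exponent is big enough, key size stops being the interesting knob and the negotiable parameters become the whole game.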

All the evidence concerning the universe has not yet been collected, so there's still hope.
