
Comment: Re:Design by Committee (Score 1) 243

by kent.dickey (#45348355) Attached to: HP's NonStop Servers Go x86, Countdown To Itanium Extinction Begins

You would have made a terrible HP executive. Maintaining and growing a profitable business gets you nowhere--no one wants to make a person like that CEO of a smaller company.

You gotta shoot for the moon and miss, and fail upwards slightly. Succeed, and you fly upwards to CEO of a large company. Most Silicon Valley CEOs just got lucky at some point in their life. Too bad luck doesn't repeat like skill.

And there's no way Intel would have bought PA-RISC. They needed the cover of developing something new. Basically, any deal with Intel would have gone badly for HP, and many at HP knew it. It was a short-term boost (before IA64 shipped) at the expense of long-term business. Intel at the time wanted to go after the RISC server market--they wanted some of that money for themselves--by creating chips they could sell to HP's competitors to keep HP under control. If IA64 took off, it still would have gone badly for HP, in that HP would have more competitors than ever selling the same chips it was. At least IA64 now is squarely aimed at expensive servers, and HP doesn't have many IA64 competitors.

Lots of people think IA64 was just a big fiasco done by stupid people. It was a smart ploy by Intel to crush Unix servers (and Intel would also have been happy if IA64 worked, so win-win). On that score, it was very successful.

Comment: Design by Committee (Score 4, Interesting) 243

by kent.dickey (#45346487) Attached to: HP's NonStop Servers Go x86, Countdown To Itanium Extinction Begins

IA64 started as an HP Labs project to be a new instruction set to replace HP's PA-RISC. VLIW was a hot topic around 1995. HP Labs was always proposing stuff that the development groups (those making chips/systems) ignored, but for some reason this one had legs.

The HP executive culture worked like this: HP hired mid-level executives from outside, who would then do something big to get a bigger job at another company. A lot of HP's poor decisions in the last 20 years can be directly traced to this culture. And there was no downside--if you failed, you'd go to an equivalent job at another company and try again.

So enterprising HP executives turned HP's VLIW project into a partnership with Intel, and in return HP got access to Intel's fabs. This was not done for technical reasons. Intel wanted a 64-bit architecture with patents to lock out AMD, and would never buy PA-RISC. So it had to be new. HP was behind the CPU performance curve by 1995 because its internal fab hadn't kept up with the industry, since HP didn't want to spend the money. So HP could save billions in fab costs if Intel would fab HP's PA-RISC CPU chips until IA64 took off. So, for these non-technical reasons, IA64 was born, and enough executives at both companies were committed to it to guarantee it would ship.

For a while, this worked well for HP. The HP CPUs went from 360MHz to 550MHz in one generation, then pretty quickly up to 750MHz. Many times I thought IA64 would be canceled, but then it became clear that Intel was fully committed, and they did get Merced out the door only 2 years late. IA64 was a power struggle inside Intel, with the IA64 group trying to wrest control from the x86 group. That's where the "IA64 will replace x86" talk was coming from--but even inside Intel many people knew that was unlikely. Large companies can easily do two things at once--try something, but have a backup plan in case it doesn't work.

But IA64 as an architecture is a huge mess. It became full of every performance idea anyone ever had. This just meant there was a lot of complexity to get right, and many of the first implementations made poor choices. It was a bad time for a new architecture--designed purely for performance, IA64 didn't anticipate the power wall about to hit the industry hard. It also bet too heavily on compiler technology, which again all the engineers knew would be a problem. But see the above non-technical reasons--IA64 was going to happen, and performance features had to be put in to make it crush the competition and be successful. The PowerPoint presentations looked impressive. It didn't work out--the performance features ended up lowering the clock speed, delaying the projects, and hurting overall performance.

Comment: Ponzi scheme clawback (Score 0) 239

by kent.dickey (#43595195) Attached to: One Bitcoin By the Numbers: Is There Still Profit To Be Made?

My opinion is that Bitcoin is a Ponzi scheme. It's a complex one, so it's not obvious. It's not even intended to be one, but that doesn't matter.

Bitcoin will at some point reach the stage where it makes no financial sense to mine Bitcoins at all (the energy required will greatly exceed the mining return even for ASICs). Then mining will collapse--but not stop, since some people, such as botnets, will still do it. As soon as the Bitcoin mining rate falls to 50% of its peak, Bitcoin is in trouble, because it becomes vulnerable to fraud: it's only secure if enough independent people are mining. So someone with lots of ASICs will attempt to grab the Bitcoins by monopolizing mining--putting in fake transactions and enough mining power to dominate the network and push the fraudulent transactions through. This will be hard, maybe even impossible, to reverse, especially if it's done well. Then Bitcoin will crash. I'd love it if someone could estimate when this might happen--I'm not interested in Bitcoin enough to collect the data.
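For anyone who does want to estimate it, the break-even math itself is trivial--the work is collecting honest inputs. Here's the shape of the calculation as a small C sketch; every number in it is a placeholder I made up, not real data:

    /* Back-of-the-envelope mining break-even sketch.  Every input below is
     * a made-up placeholder, not real data--plug in real numbers yourself. */
    #include <stdio.h>

    int main(void)
    {
        /* One hypothetical mining rig */
        double my_hashrate     = 1.0e12;   /* hashes/sec (placeholder) */
        double my_power_watts  = 1000.0;   /* wall power for the rig (placeholder) */
        double electricity_usd = 0.12;     /* $ per kWh (placeholder) */

        /* The network as a whole */
        double net_hashrate    = 1.0e15;   /* total network hashes/sec (placeholder) */
        double blocks_per_day  = 144.0;    /* one block roughly every 10 minutes */
        double block_reward    = 25.0;     /* BTC per block (halves over time) */
        double btc_price_usd   = 100.0;    /* placeholder */

        /* Expected share of block rewards is proportional to share of hashrate */
        double revenue_per_day = (my_hashrate / net_hashrate) * blocks_per_day
                                 * block_reward * btc_price_usd;
        double cost_per_day    = (my_power_watts / 1000.0) * 24.0 * electricity_usd;

        printf("revenue/day: $%.2f   electricity/day: $%.2f\n",
               revenue_per_day, cost_per_day);
        puts(cost_per_day > revenue_per_day ? "mining loses money"
                                            : "mining still pays");
        return 0;
    }

Once the electricity line swamps the revenue line for most honest miners, hashing power starts leaving the network, and the scenario above becomes possible.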

Then everyone will scream, and call it a Ponzi scheme. And it will then appear to have been one.

And Bernie Madoff has shown us that clawback has no statute of limitations--anyone who's ever taken any profit out of Bitcoin will be sued by the losers for all the profit. So then no one will have made any money. And you'll be dealing with courts. Anyone who made money with Bitcoin will lose all that they made, plus lawyer's fees. Even if it's 10 years from now.

Only lawyers will do OK out of this. Sigh.

Comment: Re:The problem with the Lisa (Score 1) 171

by kent.dickey (#42628723) Attached to: 30 Years of the Apple Lisa and the Apple IIe

The original 68000 CPUs couldn't take memory-fault exceptions (e.g., a TLB miss) properly--some instructions would partly execute when you tried to take the exception, and not enough state was saved to restart them. So you couldn't really do virtual memory properly with them, even with external logic (which people tried to do). The 68030, which came much later, had a built-in MMU.

Comment: The replies here are disappointing (Score 1) 540

by kent.dickey (#42401369) Attached to: Krugman: Is the Computer Revolution Coming To a Close?

Here's my summary of skimming through this mess: "Krugman is a hack and a liar who is always wrong. He's predicting computers will make no further improvements, so that is clearly wrong. Computers are just getting started."

Hmm. I don't know what it is about Paul Krugman that makes people so rabid, but Krugman is actually arguing that the computer revolution is just getting started (against Gordon, who's arguing the opposite). So if the Krugman haters are sure he's wrong about everything, then the logical conclusion is: computers are finished.

I'm sure everyone here basically agrees with Krugman that the computer revolution is not over. Computers will automate more and more things. This flamefest was just pointless.

The much more interesting econo-blog discussion is: if robots can replace humans, and robots can make more robots, then it appears the Luddites may turn out to be right 200 years later: wages will fall. This hasn't happened yet, but outsourcing gives us a partial taste of what this looks like. The interesting question is, what to do about this? Note that taxing robot labor the same way human labor is taxed helps address this issue, but how do you tax robot wages when they aren't paid? And the really interesting question: has this revolution partially begun and is it behind the increasing inequality in advanced countries?

Comment: After a WHOLE week? (Score 4, Informative) 674

by kent.dickey (#42005161) Attached to: Hostess To Close; No More Twinkies

What company has to close if workers are on strike for a WHOLE WEEK? The company doesn't have to pay hourly workers who don't show up to work...

This looks to me like a corporate version of "suicide by cop"--run your company into the ground (6 CEOs in 10 years, many executives getting big raises, company owned by hedge funds and venture capitalists, company has big debt), and then keep cutting workers pay until they have to say "enough". And then blame the unions.

If you're a company that is failing and cannot be saved, and you have union workers, how else would you expect the company to finally close up shop? This is what it looks like--you try to blame the unions.

The union says half its members have already been laid off, pay has already been cut to below the industry average, etc. The union website, before the strike started, said the following (see http://bctgm.org/PDFs/HostessFactSheet.pdf):

Hostess is not and will not be viable: If Hostess emerges from bankruptcy under its present plan,
it will still have too much debt, too high costs and not enough access to cash to stay in business for
the long term. It will not be able to invest in its plants, in new products and in new technology.

---

I hope someone buys Drake's.

Comment: Re:Actually I think it's SRAM... (Score 5, Interesting) 178

by kent.dickey (#41532095) Attached to: Graphics Cards: the Future of Online Authentication?

The WPI report confirms what most everyone suspects: reading from an uninitialized SRAM returns mostly noise, about 50/50 (but not exactly) 1's and 0's, and highly dependent on temperature. I think what they're saying is something like "Look at uninitialized memory, whose values are apparently random 1's and 0's, and somehow compute a unique fingerprint that is stable for this device but different from all other devices". I'm not sure that's actually possible. I can't think of anything on chips that would produce "random"-looking data and not be highly temperature dependent.
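If you wanted to try it anyway, my guess is the scheme would look roughly like the sketch below--this is my own illustration, not anything from the WPI report or the PUFFIN project, and the block size, read count, and threshold are all made up. Enroll by majority-voting each bit over several power-ups, then later accept a device if a fresh read is within some Hamming distance of the enrolled fingerprint.

    /* My guess at the shape of an SRAM-fingerprint scheme.  This is a sketch
     * made up for illustration, not anything from the WPI report or PUFFIN;
     * the block size, read count, and threshold are arbitrary. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define FP_BYTES  128   /* size of the SRAM block being fingerprinted */
    #define ENROLL_N  5     /* number of power-up reads used at enrollment */
    #define MAX_DIST  64    /* accept if this many bits or fewer differ (made up) */

    /* Stand-in for reading the SRAM power-up state.  Real code would copy the
     * uninitialized block out before anything writes to it; here it's just
     * random noise, so the demo below will (correctly) fail to match. */
    static void read_powerup_sram(uint8_t buf[FP_BYTES])
    {
        for (int i = 0; i < FP_BYTES; i++)
            buf[i] = (uint8_t)rand();
    }

    /* Enrollment: majority-vote each bit across ENROLL_N power-up reads, so
     * stable, device-specific bits survive and flaky bits get averaged out. */
    static void enroll(uint8_t fp[FP_BYTES])
    {
        int votes[FP_BYTES * 8] = {0};
        uint8_t buf[FP_BYTES];

        for (int n = 0; n < ENROLL_N; n++) {
            read_powerup_sram(buf);
            for (int b = 0; b < FP_BYTES * 8; b++)
                votes[b] += (buf[b / 8] >> (b % 8)) & 1;
        }
        for (int i = 0; i < FP_BYTES; i++) {
            fp[i] = 0;
            for (int bit = 0; bit < 8; bit++)
                if (votes[i * 8 + bit] * 2 > ENROLL_N)
                    fp[i] |= (uint8_t)(1 << bit);
        }
    }

    /* Number of bits that differ between a fresh read and the enrolled print. */
    static int hamming(const uint8_t *a, const uint8_t *b)
    {
        int d = 0;
        for (int i = 0; i < FP_BYTES; i++) {
            uint8_t x = a[i] ^ b[i];
            while (x) {
                d += x & 1;
                x >>= 1;
            }
        }
        return d;
    }

    int main(void)
    {
        uint8_t fp[FP_BYTES], fresh[FP_BYTES];

        enroll(fp);
        read_powerup_sram(fresh);

        int d = hamming(fp, fresh);
        printf("hamming distance %d: %s\n", d,
               d <= MAX_DIST ? "probably the same device" : "reject");
        return 0;
    }

The whole question is whether any distance threshold separates "same chip at a different temperature" from "different chip", which is exactly the part I'm skeptical of.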

Even if a clever algorithm could "fingerprint" an SRAM device, others have already pointed out all the ways to break this. It's simply a slightly more complex MAC address, and will likely be easy to effectively clone. It's like printing a password on paper in special red ink that only you have, and then saying no one can log in to your system (by typing the password) since they can't replicate that red ink. Umm, the special red ink is a red herring. All you need is the password.

I don't think there's really anything here. There are no details at the PUFFIN site.

Comment: Re:Sorry Bruce, but that is total nonsense. (Score 2) 403

by kent.dickey (#41353171) Attached to: The Linux-Proof Processor That Nobody Wants

Intel in the 90's was performance at any power cost. Then in the last 10 years, it was performance within a limited power envelope, aiming at laptops and desktops. The power they were aiming at was much higher than smartphones, so although they got more "power efficient", you do very different things when aiming at 1W than when aiming at 10W or 100W. If you can waste 5W and get 20% more performance, that's a great thing to do. But not for phones.

I think what you're seeing is Atom was a kludge. If Intel chooses to aim directly at the 1W market, then you'll see there really is no "CISC" overhead.

The ARM Cortex-A9 is comparable in performance per MHz to the Pentium II of the mid-90's. That's because ARM is very sensitive to power, not to performance, so they're not throwing in everything that high-performance CPUs have. Intel is coming at the market from the other end--high-performance chips they're trying to trim down to use less power. And they've not executed that well yet. Just look at the Atom--it has an FSB, meaning the memory is attached to a different chip. Lots of wasted power. Umm...ARM chips used in phones have the memory in the same PACKAGE now (stacked die).

Note that ARM has something analogous to the CISC decoder since it has 2 instruction sets it runs (Thumb and ARM). It's not as complex as the decoder needed for x86, though.

Comment: My explanation of article (Score 5, Informative) 172

by kent.dickey (#41178939) Attached to: Google Talks About the Dangers of User Content

The blog post was a bit terse, but I gather one of the main problems is the following:

Google lets users upload profile photos. So when anyone views that user's page, they will see that photo. But malicious users were making their photo files contain Javascript/Java/Flash/HTML code. Browsers (I think it's always IE) are very lax and will try to interpret files however they please, regardless of what the web page says. So the web page says it's pointing to an IMG, but some browsers will interpret the file as Javascript/Java/Flash/HTML anyway once they look at it. So now a malicious user can serve up scripts that seem to be coming from Google.com, and those scripts get a lot of access at Google.com and break Google's security (e.g., they let you look at other people's private files).

Their solution: user images are hosted at googleusercontent.com. Now, if a malicious user tries to put a script in there, it will only have the privileges of a script run from that domain--which is no privileges at all. Note this just protects Google's security...you're still running some other user's malicious script. Not Google's problem.

The article then discusses how trying to sanitize images can never work, since valid images can appear to have HTML/whatever in them, and their own internal team worked out how to get HTML to appear in images even after image manipulation was done.

Shorter summary: Browsers suck.

Comment: Why? (Score 1) 78

by kent.dickey (#41082367) Attached to: Nintendo Power To Shut Down

The article said Nintendo Power has over 400,000 print subscribers. How could they not make a go of this? What did they need from Nintendo, anyway, other than early access to games to review them? I currently get Nintendo Power since I can let my kids read it and not have to explain, again, why they can't play M-rated games.

I suspect the threat of shutdown is part of a ploy by the publisher to get something from Nintendo (which was hinted at in the article). If the shutdown actually happens, then the publisher is stupid to throw away several million dollars/year in subscriptions.

Comment: Re:Friends (Score 4, Insightful) 948

by kent.dickey (#40560569) Attached to: Ron Paul's New Primary Goal Is "Internet Freedom"

"A free market fixes everything" is nonsense. Imagine no rules/laws/regulations. Perfectly free market. To win, I'll murder my competition, and get away with it (until they murder me). There are no laws. It's free and fair, brutal and ugly.

OK, so we make murder illegal. And kidnapping, extortion, blackmail, etc. It's no longer a free market. But I don't think anyone minds.

But already, government can be corrupted. A sheriff that aggressively investigates crimes against my competitors while ignoring my crimes gives me an advantage. And this is just serious crimes.

The point is not to get government out of the way, it's to make government enforce fairness (you are right about that). And "less government" is not really the way to do this. I don't want a perfectly free market. If you take econ101, you'll see many ways businesses could screw over consumers with asymmetric info, monopolies, fraud, etc. And I want regulations to eliminate toxins in food, unreasonably dangerous products, etc. And I don't want to drink polluted water.

Solyndra is no big deal--they expected a percentage of businesses the government backed to not succeed, and Solyndra was in that percentage. If there's corruption involved, then I'd be mad, but I haven't heard of any yet. I'm glad the US government invested in the Internet.

Comment: $600 billion, but still a problem (Score 2) 917

by kent.dickey (#37775054) Attached to: US Student Loans Exceed $1 Trillion

First, Felix Salmon says USA Today's numbers are wrong, and student loans are around $600 billion: http://blogs.reuters.com/felix-salmon/2011/10/19/fact-and-fiction-about-student-loans/. But it still is a big number.

Here's the current system: if someone with a pulse wants to go to a for-profit school, he will get in. He will pay high tuition, almost all covered by student loans. He gets a worthless degree and cannot get a job. But federal student loans cannot be discharged in bankruptcy, so his life is now ruined.

Some blame goes to the student--he should have known better. But chances are this is a young kid, and his first exposure to the adult world is a recruiter telling him he's smart, he's going places, he just needs to graduate college--preferably this really expensive for-profit school. He's been preyed upon as well. And this used to be considered fraud, preying on vulnerable people. If a guy went around to old ladies selling them useless junk, we used to toss him in jail. I'm not sure why our attitudes have changed.

I think federal student loans need a major overhaul--right now, the system is a huge giveaway to the banks and for-profit schools, with students as victims. Limit federal loans to for-profit institutions to 50% of non-profit tuition (it can go higher based on merit), and force for-profit schools to be accredited every year. Just somehow change the incentive system to reduce the number of non-qualified kids funneled into expensive and useless programs. Change the law so that student loan defaults impact the school the student went to: say, reduce future loans, with no more student loans to that school if the default rate tops 15% (or whatever number makes sense).

This is another bubble, and the popping of it will be another huge blow to the economy.

Also, kids need to be told loudly: getting a degree from a school that isn't competitive in the field is not worth anything more than going to your closest state school. Expensive schools that aren't competitive to get into are just a place for rich kids to go get drunk. Don't take out loans to go to those schools!

Comment: Unix changed computing (Score 1) 725

by kent.dickey (#37701976) Attached to: Dennis Ritchie, Creator of C Programming Language, Passed Away

Unix began the commoditization of minicomputers. With Unix, you could run your application on many vendors' systems, choosing which one you bought this year based on price and performance, not because you were locked in to the vendor you bought last time. This opened up computing to be much more competitive, and was a great benefit to all users. This change affected technical computing very quickly, but took a while longer for business computing.

C is a very clever language, and Unix even more so. Both assume the least-common denominator in hardware, which was a very smart decision. I still remember the awe I had of Unix when I first logged in on a teletype in 1980 to play Adventure and Hunt the Wumpus. Very little else from this era has endured as well as C and Unix.

Thank you, Dennis.

Comment: Re:arm vs x86 (Score 1) 167

by kent.dickey (#37134654) Attached to: ARM Is a Promising Platform But Needs To Learn From the PC

Code size doesn't really apply--this is a discussion about Linux. If you're running Linux, you're not counting KBs. Maybe you're counting MBs. You may only be counting GBs (the smallest iPhone was 8GB). And ARM does provide a timer, interrupt controller, and memory controller. Not all customers use them, and only the interrupt controller has a generic "architecture" which could be said to apply to any interrupt controller. It's ironic, though, since I think everyone uses the ARM interrupt controller in any case.

It's basically ARM's fault. ARM has a predilection for leaving specs more vague than they should be, and then making minor improvements that aren't backwards compatible at the OS level with each new CPU generation. User-level code tends to be backwards-compatible at least. As an example, they changed the page table format between ARMv6 (the ARM11 series) and ARMv7 (the Cortex series). ARM's move to multiprocessors is new, and it's not clear whether the current OS-level view will change again in the future. ARM also only documents the CPU and the IP they provide (interconnect, a memory controller, an L2 cache controller, and an interrupt controller). There is no larger system architecture, like x86 has, not even a de facto one. The x86 architecture is basically PCI based--generally, all devices appear in PCI space, with a BIOS interface for OSes to use to discover memory layouts. The x86 world was crazy before Pentium and PCI came along, and it has been very stable since then.

Part of it is Linux's fault. If Linus had a distaste for #ifdefs and instead required patches to use if()/else, then vendors would be forced to adopt a more common architecture. As it is now, the vendors push their incompatibilities into huge patches in Linux, at no real code-size or speed cost when run, but complicating Linux with very complex #ifdef mazes. Basically, Linux pays the cost of everyone doing something different.
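To make the contrast concrete, here's a toy example of the two styles--the config names, board IDs, and register addresses are all invented, not taken from any real SoC or from the kernel:

    /* Toy illustration of the two styles; the config names, board IDs, and
     * addresses are invented, not from any real SoC or kernel. */
    #include <stdint.h>
    #include <stdio.h>

    /* Pretend this is a board-ID value the firmware or an ID register gives us. */
    static uint32_t board_id = 0xA001;

    /* Style 1: compile-time #ifdef selection.  Every vendor adds another arm,
     * the resulting kernel only runs on the board it was configured for, and
     * the source turns into an #ifdef maze. */
    static uint32_t timer_base_ifdef(void)
    {
    #if defined(CONFIG_BOARD_VENDOR_A)
        return 0x20001000;
    #elif defined(CONFIG_BOARD_VENDOR_B)
        return 0x3f00b000;
    #else
        return 0;                  /* unknown board */
    #endif
    }

    /* Style 2: runtime if()/else selection.  One binary asks the hardware (or
     * firmware) which board it is and picks the right addresses at boot. */
    static uint32_t timer_base_runtime(uint32_t id)
    {
        if (id == 0xA001)
            return 0x20001000;
        else if (id == 0xB002)
            return 0x3f00b000;
        else
            return 0;              /* unknown board */
    }

    int main(void)
    {
        printf("compile-time pick: 0x%08x\n", (unsigned)timer_base_ifdef());
        printf("run-time pick:     0x%08x\n", (unsigned)timer_base_runtime(board_id));
        return 0;
    }

The point isn't the syntax; it's that runtime selection forces the board differences into data the kernel can reason about, instead of into build-time forks of the source.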

So, if your CPU vendor requires pretty deep OS changes for each CPU, there's no incentive for licensees to create a system architecture so that the old OS runs on new hardware. If ARM were to accept running old OSes on new hardware as a requirement, they would have to create a system architecture. Just having a standardized memory layout would be a nice start. Making hardware more self-descriptive could be done very simply and cheaply. PCI is probably not the best choice, but giving hardware the equivalent of a globally unique Vendor/Device ID, plus a way to find peripherals, would be a start. It's just that ARM doesn't care, and probably won't care until its customers demand it.
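Even something as dumb as the following would let a generic OS find its devices without a per-board patch. This is a completely made-up, PCI-flavored illustration, not a real or proposed spec:

    /* Completely invented sketch of "self-descriptive" hardware: a table of
     * vendor/device descriptors at a well-known location that the OS can walk,
     * roughly in the spirit of PCI config space.  Not a real or proposed spec. */
    #include <stdint.h>
    #include <stdio.h>

    struct periph_desc {
        uint32_t vendor_id;   /* globally unique vendor number */
        uint32_t device_id;   /* vendor-assigned device number */
        uint32_t base_addr;   /* where the device's registers live */
        uint32_t irq;         /* which interrupt line it uses */
    };

    /* On real hardware this table would live at a fixed, architected address or
     * be handed over by firmware; it's a static array here so the sketch runs. */
    static const struct periph_desc table[] = {
        { 0x1234, 0x0001, 0x20001000, 34 },   /* made-up timer      */
        { 0x1234, 0x0002, 0x20002000, 35 },   /* made-up UART       */
        { 0x5678, 0x0010, 0x30000000, 40 },   /* made-up NIC        */
        { 0, 0, 0, 0 },                       /* end-of-table marker */
    };

    int main(void)
    {
        /* A generic kernel walks the table and binds drivers by ID, instead of
         * needing a hand-written patch describing every new board. */
        for (const struct periph_desc *p = table; p->vendor_id != 0; p++)
            printf("found %04x:%04x at 0x%08x irq %u\n",
                   (unsigned)p->vendor_id, (unsigned)p->device_id,
                   (unsigned)p->base_addr, (unsigned)p->irq);
        return 0;
    }

Real PCI does a lot more (config writes, capabilities, bridges), but even this minimal level of self-description would let one kernel binary boot on hardware it has never seen before.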

