Comment Re:Big list (Score 1) 594
Thanks for the links. Did you forget Fly the Road?
Here's the skinny on EEStor, so far as I can read.
Their new patent is a clean-up version of their old patent. Unfortunately, it's still a piece of marketing BS. Look at claim 1. It has 15 steps! If you avoid any one of them, you do not infringe. The rest of the patent is similar - not designed to protect, but designed to market an idea.
The physics of EEStor seems to have been replicated by half a dozen other companies, so we can probably begin to believe that the EEStor ultra-capacitors are possible in principle. However, a fully charged EEStor capacitor will explode on impact with about the force of 100 sticks of dynamite. I've thought about this problem for two years without finding a solution. Hopefully the guys at EEStor are wiser, but no one else on the Internet has a solution either.
In short, don't bother believing this until you see it.
It turns out that this is not the first time geothermal energy plants have drilled into hot magma. They had a similar experience decades ago in Iceland. Unfortunately, it was a story told to me by a professor at Berkeley in 1985, so I'm not able to google any info on it.
Yep! If you talk to DSP guys, they do this kind of thing all the time. DataDraw allows me to specify which fields of a class I want kept together in memory, and by default, they're kept in arrays of individual properties. I was able to speed up random access of large red-black trees in DataDraw by 50% with this feature, simply because you almost always want both the left and right child pointers, not just one or the other.
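Here's a minimal sketch in plain C of the array-of-structures vs. structure-of-arrays idea (the field names and the left/right grouping are my illustration, not DataDraw's generated code):

```c
#define MAX_NODES 1024

/* Array-of-structures: each node's fields are interleaved, so a cache
   line holds every field of one node, whether the loop wants it or not. */
typedef struct {
    int value;
    int left;   /* index of left child, -1 if none */
    int right;  /* index of right child, -1 if none */
    int color;
} NodeAoS;

/* Structure-of-arrays: like fields live together, so a cache line
   holds the left pointers of many consecutive nodes. */
typedef struct {
    int value[MAX_NODES];
    int left[MAX_NODES];
    int right[MAX_NODES];
    int color[MAX_NODES];
} TreeSoA;

/* Walk the left spine in the SoA layout: only the left[] array is
   streamed through the cache; value[] and color[] are never loaded. */
int leftmostSoA(const TreeSoA *t, int root) {
    int n = root;
    while (t->left[n] != -1)
        n = t->left[n];
    return n;
}
```

A traversal over NodeAoS drags value and color into the cache on every hop; the SoA version touches only the child-index arrays.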
Nice to hear from a fellow geek who for whatever reason still keeps an eye on low-level performance.
Not a bad idea, but where would I publish it? I could post it on my Dumb Idea of the Day blog, but no one reads it (which is ok with me). I would certainly be interested in writing an article about coding for cache performance.
Check out the benchmark table at this informative link. On every cache miss, the CPU loads an entire cache line, typically 64 or more bytes. Cache miss rates are massively dependent on the probability that those extra bytes will soon be accessed. Since typical structures and objects are 64 bytes or more, the cache line typically gets filled with fields of just one object. Typical inner loops may access two of those object's fields, but rarely three, meaning that the cache is loaded with useless junk. By keeping data of like fields together in arrays, the cache line will be filled with the same field, but from different objects, often objects that will soon be accessed. This, plus the 32 vs 64 bit object references, and cache-sensitive memory organization (unlike malloc), leads to a 7X speedup in DataDraw backed graph traversals vs plain C code.
Understanding cache performance is critical for fast code, yet most programmers are virtually clueless about it. Just run the benchmarks yourself if you want to see the impact.
The sad part is that improved runtime speed and code readability can be had at the same time. The reason the DataDraw-based code ran 7X faster was simple: cache performance. C, C++, D, and C# all specify the layout of objects in memory, making it impossible for the compiler to optimize cache hit rates. If we simply go to a slightly more readable, higher level of coding, and let the compiler muck with the individual bits and bytes, huge performance gains can be had. The reason DataDraw saved 40% in memory was that it uses 32-bit integers to reference graph objects rather than 64-bit pointers. Again, C, C++, and most languages specify a common pointer size for all class types. If the compiler were allowed to take over that task, life would be easier for the programmer, and we'd save a ton of memory.
But then again... what's a mere factor of 7X in runtime with today's computers? With the low price of DRAM, who cares about 40%? It's easier to stick with the crud we've used since 1970 (C, and its offspring) than to bother building more efficient languages. Language research has abandoned efficiency as a goal.
Good point. With solid-state drives coming down the pipe, even that bottleneck will be somewhat relieved for what most people do (lots of disk reads, few writes). I write programs to help designers place and route chips. The problem size scales with Moore's Law, so we never have enough CPU power. I'm part of a shrinking population that remains focused on squeezing a bit more power out of their code. I wrote the DataDraw CASE tool to dramatically improve overall place-and-route performance, but few programmers care all that much nowadays. On routing-graph traversal benchmarks, it sped up C code 7X while cutting memory required by 40%. But what's a factor of 7 nowadays?
If you mean this patent then don't worry too much. Apple didn't invent multi-touch (these guys did), nor did they patent the way it's currently used. They patented extensions, such as performing cut and paste with gestures. Why the G1 has no multi-touch is a mystery to me.
I agree with the article that Apple could find itself marginalized by Android in 5 years much like Windows marginalized Macs years ago. However, making it open-source won't help. I agree that users don't care about open-source vs closed-source. What Steve Jobs needs to do is license the iPhone software cheaply or even free. Of course, he won't. I've used both Android and iPhone extensively. Android is a bit behind iPhone, but is on a steeper improvement curve. It will be an interesting five years to watch.
I agree that C++ GUI code (like Valve's Source engine) is better than the old C GUI libraries. C++ is a good fit for describing class hierarchies of GUI widgets. It's not all bad, but not all good, either.
While C++ works well for trees, consider graphs. Two classes, not just one (Nodes and Edges, rather than just Nodes). If there is a C++ database containing a graph, and you want to manipulate that graph, how do you do it? In C++, your life becomes harder than it should be at that point (do you attach void pointers to allow kludged extensions to database objects, or inherit from them directly and do copy-in/copy-out?). The only reasonable C++ graph library I've seen is the Boost Graph Library. If you care for a life of pain, make this the basis of your next big EDA project. Alternatively, if you store those graphs in a DataDraw database, your code is hugely simplified, while running far faster.
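Here's roughly what an index-based Node/Edge graph looks like in plain C — the style of code a DataDraw database generates, though the names and fixed-size arrays here are my simplification:

```c
#include <stdint.h>

#define MAX_NODES 100
#define MAX_EDGES 100
#define NIL UINT32_MAX

/* Two classes, Nodes and Edges, stored as rows in arrays. Each node
   keeps a singly linked list of its outgoing edges, threaded through
   the nextOut[] array. */
typedef struct {
    uint32_t firstOut[MAX_NODES];  /* head of each node's out-edge list */
    uint32_t edgeFrom[MAX_EDGES];
    uint32_t edgeTo[MAX_EDGES];
    uint32_t nextOut[MAX_EDGES];   /* next edge leaving the same node */
    uint32_t numNodes, numEdges;
} Graph;

void initGraph(Graph *g, uint32_t numNodes) {
    g->numNodes = numNodes;
    g->numEdges = 0;
    for (uint32_t n = 0; n < numNodes; n++)
        g->firstOut[n] = NIL;
}

uint32_t addEdge(Graph *g, uint32_t from, uint32_t to) {
    uint32_t e = g->numEdges++;
    g->edgeFrom[e] = from;
    g->edgeTo[e] = to;
    g->nextOut[e] = g->firstOut[from];  /* push onto from's list */
    g->firstOut[from] = e;
    return e;
}

/* Count out-degree by walking the node's edge list. */
uint32_t outDegree(const Graph *g, uint32_t node) {
    uint32_t count = 0;
    for (uint32_t e = g->firstOut[node]; e != NIL; e = g->nextOut[e])
        count++;
    return count;
}
```

No void-pointer kludges and no copy-in/copy-out: extending the schema means adding another parallel array, and traversals are plain loops over 32-bit indices.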
I do EDA coding for a living. Life as an EDA programmer is basically all about manipulating graphs. C++ and EDA have never worked out well together, but neither has Java, C#, or any other mainstream language. You need dynamic extension, like Python, but raw speed, like C. Today, that means DataDraw.
Here's my beef with C++. Average to less-than-average programmers will never understand virtual functions, templates, or (shudder) multiple inheritance. New code is normally written by super-smart programmers who use all that stuff. Then, the B-team takes it over and can't figure out what the heck it does. The code is then doomed to a painful process of continuous decay.
C++ was written by PhDs for PhDs. It was never a good fit for the real world. Java is a huge step forward for the world, just not for graduate programs. Personally, I have 100 other issues with modern languages, which is why I do all my programming with the DataDraw variant of C.
I haven't used a BlackBerry, so I can't compare to that. However, I used to own an iPhone, so I can compare against the software available there early on. The iPhone had no cut and paste and no ability to download files, but the POP client worked OK. There was also no app store, only a four-function calculator, and no dial-by-voice. In comparison to the iPhone trajectory, Android looks quite good to me.
IANAL, but my understanding is that you may not legally hack the modem itself or its software driver. You do not break any law by writing software that manipulates the modem through its provided driver, so feel free to hack at that level.
"Everything should be made as simple as possible, but not simpler." -- Albert Einstein