


Comment: Re:lack of keyboard (Score 2, Informative) 204

by smilindog2000 (#26891039) Attached to: Second Android-Based Phone Announced

I guess I agree with you there, though I miss my old iPhone. My G1 seems like a Model T in comparison hardware-wise, but I don't think I can go back to the no-keyboard situation. My old eyes just don't work well enough to find those small keys.

I looked at all the photos of the new phone. Here are some insights from a G1 owner:

- The screen is *exactly* the same model. It's nice, but only 2.5". Every time I hold an iPhone, what really strikes me is that huge 3.5" screen.
- They *still* don't have a headphone jack. This is possibly the dumbest feature omission ever in a smartphone, and they kept it! The non-standard headphones HTC ships suck, you need a bulky dongle to use anything else, and you can't charge while listening to music. Sucks, sucks, sucks!
- The camera lens appears larger, possibly meaning that HTC decided not to ruin all photos with a crap lens like on the G1. Of course, it could just be the same lens with a new style case.
- Being HTC, there's no way this phone has a bigger battery than the G1, and battery life will suck.
- The slot for the speaker looks identical to the G1's. On the G1, it fills up with lint, and you can't hear calls when there's background noise, like while driving or at a restaurant (much to the pleasure of everyone else).

Until someone fixes the damned headphone issue, the speaker, and the camera, and increases the screen size on an Android phone, I'll stick with my crappy G1.

Comment: Re:Not because there's only 1 (Score 4, Informative) 136

by smilindog2000 (#26855219) Attached to: Competition For the App Store Is Mounting

I've had an iPhone, and currently own a T-Mobile G1. In short, Android is a solid competitor (the only competitor, IMO) to the iPhone OS. The actual G1 phone, however, sucks big time, as GP suggests, though he didn't quite get at why:

- The speaker slot gets clogged with lint, and now I have trouble hearing the phone
- While the camera has auto-focus and more pixels than the iPhone's, HTC screwed up with a crappy lens that ruins all photos
- There's no headphone jack. Instead, HTC provides crappy headphones using a non-standard extension to the micro-USB jack
- The phone is too thick, and not nearly as sleek, well designed, or well packaged as the iPhone
- The battery is tiny in comparison to the iPhone.

Basically, some US company (Qualcomm? T-Mobile?) must have said "Here are the specs for you, HTC", and then HTC delivered on the specs but screwed up the phone.

While the G1 has fewer users, it also has proportionally fewer developers. Many of the best application spaces are already dominated on the iPhone, while they're still open on Android. I believe future Android phones will gain market share vs. the iPhone, making development for Android a wise choice.

Comment: Re:I didn't know Feinstein was a Republican.... (Score 1) 873

by smilindog2000 (#26819555) Attached to: Senator Diane Feinstein Trying to Kill Net Neutrality

Diane has been technologically illiterate for her entire career. In general, her heart is in the right place, but as Silicon Valley's senator, she's shamefully lacking in any reasonable understanding of the issues.

Quite seriously, she hears "let's protect our children" and "let's protect intellectual property", and that convinces her to support Trojan legislation designed to let telecoms put toll roads on the Internet.


+ - Another crazy new language effort - Language #42

Submitted by
smilindog2000 writes: "I've got this wacky idea: I'll write the world's most completely awesome computer language, and naturally get rich, famous, and attacked by hordes of crazed hot chicks. The language is called 42, and has the following insane goals: Run faster than C; Foster extreme code reuse; Compile to both hardware and software; Run faster on reconfigurable computers than Wintel boxes; Allow users to extend the language however they like. I need a few of you super-geeks out there to tell me flat out it's impossible. For some reason, that always motivates me. There's more detail at this discussion group."

Comment: Re:It must be real (Score 1) 603

by smilindog2000 (#26203957) Attached to: EEStor Issued a Patent For Its Supercapacitor

Here's the skinny on EEStor, so far as I can read.

Their new patent is a clean-up version of their old patent. Unfortunately, it's still a piece of marketing BS. Look at claim 1. It has 15 steps! If you avoid any one of them, you do not infringe. The rest of the patent is similar - not designed to protect, but designed to market an idea.

The physics of EEStor seems to have been replicated by half a dozen other companies, so we can probably begin to believe that the EEStor ultra-capacitors are possible in principle. However, a fully charged EEStor capacitor will explode on impact with roughly the force of 100 sticks of dynamite. I've thought about this problem for two years without finding any solution. Hopefully the guys at EEStor are wiser, but no one else on the Internet has a solution either.

In short, don't bother believing this until you see it.

Comment: Re:Normal people don't need faster computers (Score 1) 139

by smilindog2000 (#26093831) Attached to: Intel On Track For 32 nm Manufacturing

Yep! If you talk to DSP guys, they do this kind of thing all the time. DataDraw allows me to specify which fields of a class I want kept together in memory; by default, they're kept in arrays of individual properties. I was able to speed up random access of large red-black trees in DataDraw by 50% with this feature, simply because you almost always want both the left and right child pointers, not just one or the other.

Nice to hear from a fellow geek who for whatever reason still keeps an eye on low-level performance.

Comment: Re:Normal people don't need faster computers (Score 2, Interesting) 139

by smilindog2000 (#26074585) Attached to: Intel On Track For 32 nm Manufacturing

Check out the benchmark table at this informative link. On every cache miss, the CPU loads an entire cache line, typically 64 or more bytes. Cache miss rates depend massively on the probability that those extra bytes will soon be accessed. Since typical structures and objects are 64 bytes or more, the cache line usually gets filled with the fields of just one object. A typical inner loop may access two of those fields, but rarely three, meaning the cache is loaded with useless junk. By keeping like fields together in arrays, the cache line gets filled with the same field from different objects, often objects that will soon be accessed. This, plus 32- rather than 64-bit object references and cache-sensitive memory organization (unlike malloc), leads to a 7X speedup in DataDraw-backed graph traversals vs. plain C code.

Understanding cache performance is critical for fast code, yet most programmers are virtually clueless about it. Just run the benchmarks yourself if you want to see the impact.

Comment: Re:Normal people don't need faster computers (Score 2, Interesting) 139

by smilindog2000 (#26072831) Attached to: Intel On Track For 32 nm Manufacturing

The sad part is that improved runtime speed and code readability can be had at the same time. The reason the DataDraw-based code ran 7X faster was simple: cache performance. C, C++, D, and C# all specify the layout of objects in memory, making it impossible for the compiler to optimize cache hit rates. If we simply move to a slightly more readable, higher level of coding, and let the compiler muck with the individual bits and bytes, huge performance gains can be had. The reason DataDraw saved 40% in memory was that it uses 32-bit integers to reference graph objects rather than 64-bit pointers. Again, C, C++, and most languages specify a common pointer size for all class types. If the compiler were allowed to take over that task, life would be easier for the programmer, and we'd save a ton of memory.

But then again... what's a mere factor of 7X in runtime with today's computers? With the low price of DRAM, who cares about 40%? It's easier to stick with the crud we've used since 1970 (C, and its offspring) than to bother building more efficient languages. Language research has abandoned efficiency as a goal.

Comment: Re:Normal people don't need faster computers (Score 3, Insightful) 139

by smilindog2000 (#26072249) Attached to: Intel On Track For 32 nm Manufacturing

Good point. With solid-state drives coming down the pipe, even that bottleneck will be somewhat relieved for what most people do (lots of disk reads, few writes). I write programs to help designers place and route chips. The problem size scales with Moore's Law, so we never have enough CPU power. I'm part of a shrinking population that remains focused on squeezing a bit more performance out of their code. I wrote the DataDraw CASE tool to dramatically improve overall place-and-route performance, but few programmers care all that much nowadays. On routing-graph traversal benchmarks, it sped up C code 7X while cutting the memory required by 40%. But what's a factor of 7 nowadays?

Comment: Re:Nobody cares. (Score 1) 379

by smilindog2000 (#26067097) Attached to: Should Apple Open Source the iPhone?

I agree with the article that Apple could find itself marginalized by Android in 5 years, much like Windows marginalized Macs years ago. However, making it open source won't help. I agree that users don't care about open source vs. closed source. What Steve Jobs needs to do is license the iPhone software cheaply or even for free. Of course, he won't. I've used both Android and iPhone extensively. Android is a bit behind iPhone, but is on a steeper improvement curve. It will be an interesting five years to watch.

Comment: Re:Mythical Creature... (Score 1, Insightful) 538

by smilindog2000 (#26059229) Attached to: Bjarne Stroustrup On Educating Software Developers

I agree that C++ GUI code (like Valve's Source engine) is better than the old C GUI libraries. C++ is a good fit for describing class hierarchies of GUI widgets. It's not all bad, but not all good, either.

While C++ works well for trees, consider graphs. Two classes, not just one (Nodes and Edges, rather than just Nodes). If there is a C++ database containing a graph, and you want to manipulate that graph, how do you do it? In C++, your life becomes harder than it should be at that point (do you attach void pointers to allow kludged extensions to database objects, or inherit from them directly and do copy-in/copy-out?). The only reasonable C++ graph library I've seen is the Boost Graph Library. If you care for a life of pain, make this the basis of your next big EDA project. Alternatively, if you store those graphs in a DataDraw database, your code is hugely simplified, while running far faster.

I do EDA coding for a living. Life as an EDA programmer is basically all about manipulating graphs. C++ and EDA have never worked out well together, and neither have Java, C#, or any other mainstream language. You need dynamic extension, like Python, but raw speed, like C. Today, that means DataDraw.
