Comment Re:I'm Not a Betting Man... (Score 1) 235

A controlling interest in Google is held by the CEO and the two founders. Their IPO filing stated that this would be the case: public investors would share in the financial gains but would have no significant say in the direction or operation of the company. If those three have decided that China isn't worth it, there is little the other investors can do to stop them.

Comment Re:Premature optimization is evil... and stupid (Score 5, Insightful) 249

Having spent four years as one of the primary developers of Apple's main performance analysis tools (CHUD, not Instruments), and having helped developers from nearly every field imaginable tune their applications, I can honestly say that regardless of your performance criteria, you shouldn't be doing anything special for optimization when you first write a program. Some thought should be given to the architecture and overall data flow of the program, and to how that design might impose high-level performance limits, but certainly no code should be written using explicit vector operations, and all loops should be written for clarity. Scalability by partitioning the work is one of those items that can generally be incorporated into the program's architecture if the program lends itself to it, but most other performance-related changes depend on specific use cases. Trying to guess those while writing the application logic relies solely on intuition, which is usually wrong.
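As a sketch of "write it clearly first" (an illustrative example, not code from any application mentioned here): the loop below is deliberately the obvious version, with any vectorized rewrite deferred until a profiler proves it matters.

```python
def dot(xs, ys):
    """Clear, obviously correct dot product.

    Optimize (e.g., swap in a vectorized library routine) only after
    profiling shows this loop actually matters.
    """
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total
```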

After you've written and debugged the application, profiling and tracing are the prime ways to find _where_ to optimize. Your experiences have been tainted by the poor quality of the tools known to the larger OSS community, but many good tools are free (as in beer) on many OSes (Shark for OS X, for example), while others cost a bit (VTune for Linux or Windows). Even large, complex multi-threaded programs can be profiled and tuned with a decent profiler. I know for a fact that Shark is used to tune large applications such as Photoshop, Final Cut Pro, and Mathematica, and basically every application, daemon, and framework included in OS X.

What do you do if there really isn't much of a hotspot? Quake 3 was an example where the time was spread out over so many small methods that no single hotspot showed up. Using features available in the better profiling tools, the collected samples could be attributed up the stack to the actual algorithms instead of to things like simple accessors. Once you do that, the problems become much more obvious.
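The "attribute samples up the stack" idea can be sketched with Python's stock profiler (the function names below are invented for illustration): a flat self-time view blames the tiny accessor, while sorting by cumulative time charges the cost to the algorithm that drives it.

```python
import cProfile
import io
import pstats

def get_value(i):
    # Tiny "accessor": each call is cheap, but it's called constantly.
    return i * 2

def algorithm():
    # The real culprit: the algorithm driving all those accessor calls.
    return sum(get_value(i) for i in range(200_000))

prof = cProfile.Profile()
prof.enable()
algorithm()
prof.disable()

buf = io.StringIO()
pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats(10)
cumulative_report = buf.getvalue()
# Sorted by cumulative time, "algorithm" (not "get_value") tops the
# listing, pointing at the right place to rethink.
```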

What do you do after the application has been written and a major performance problem is found that would require an architectural change? Well, you change the architecture. The reason for not doing it during the initial design is that predicting performance issues is nearly impossible, even for those of us who have spent years doing it as a full-time job. Sure, you have to throw away some code or revisit the design to fix the performance issues, but that's a normal part of software design. You try an approach, find out why it won't work, and use that knowledge to come up with a new approach.

The largest failing I've seen in my experience has been the lack of understanding, by management and engineers alike, that performance work is a very iterative part of software design and that it happens late in the game. Frequently, schedules get set without consideration for the amount of time required to do performance analysis, let alone optimization. Then you have all the engineers who either try to optimize everything they encounter and end up wasting lots of time, or do the initial implementation and never profile at all.

Ultimately, if you try to build performance into a design very early, you end up with a big, messy, unmaintainable code base that isn't actually all that fast. If you build the design cleanly and then optimize the sections that actually need it, you get a more maintainable code base that meets the requirements. Be the latter.

Comment Re:Raises a question? (Score 1) 1012

I worked closely with the kernel and firmware engineers at Apple for the last four years. They have never intentionally disabled any hacks, unlocks, or unofficially supported hardware. The changes that caused those to stop working were made solely to fix other bugs or to enable new features. It's not vindictive. I know; I helped make some of those decisions.

Comment Re:Who wants to update?? (Score 2, Interesting) 1012

Consider that supporting a processor in code is non-trivial. While they may have added support for the Atom at some point, there is a cost to keeping that support functional. When working on other kernel features, it may well be easier to remove support for a processor that isn't officially supported than to keep it working. This is especially true for OS X, which frequently changes its power management scheme for Intel processors.

Comment Re:Fifty votes from "executives"? (Score 3, Informative) 189

Speaking as a former Apple employee: so _that's_ why a bunch of senior engineers in the hardware division were let go or put in such a horrible situation that they left. Apple isn't treating its engineers all that well; it's focused more on how hard it can drive them to produce new products every six months.

Comment Re:Encoding? (Score 1) 284

Actually, all labels are restricted to the characters allowed for ARPANET hosts. The spec does state that implementations should store labels as a length octet followed by a sequence of octets, implying that any compliant software _should_ handle UTF-8, but no one wants to take that chance.
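A minimal sketch of the wire format described above (a length octet, then that many octets, with a zero octet terminating the name). Note that nothing in the format itself enforces the ARPANET character rules, which is exactly the ambiguity in question:

```python
def encode_name(name: str) -> bytes:
    """Encode a dotted name as DNS-style labels: length octet + octets."""
    out = bytearray()
    for label in name.split("."):
        raw = label.encode("utf-8")  # the format treats octets as opaque
        if len(raw) > 63:
            raise ValueError("a label may be at most 63 octets")
        out.append(len(raw))
        out += raw
    out.append(0)  # the zero-length root label terminates the name
    return bytes(out)
```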

Comment Re:Silly (Score 1) 135

... I've heard a 486 serving static pages can manage to fill a T1 line.

It isn't _that_ hard to saturate 1.544 Mbps. Most cable/DSL downlink speeds are faster than that. Now, a T3 is a bit more challenging, but nothing a single decent machine can't handle.
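Back-of-the-envelope numbers (a sketch using raw line rates only, ignoring protocol overhead, and assuming a hypothetical 10 KB static page):

```python
def pages_per_second(line_mbps: float, page_kb: float = 10.0) -> float:
    """How many static pages per second it takes to fill a line."""
    bits_per_page = page_kb * 1024 * 8
    return (line_mbps * 1_000_000) / bits_per_page

t1_rate = pages_per_second(1.544)   # under 19 pages/s fills a T1
t3_rate = pages_per_second(44.736)  # a T3 needs roughly 29x that
```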

Comment Re:Dock/Taskbar design (Score 0, Troll) 688

And how does your system compare spec-wise to a fully decked-out Mac Pro? It doesn't. A top-of-the-line Mac Pro would include two Nehalem processors (8 cores total), 32 GB of RAM, 4 TB of disk... Your system doesn't even come close.

As for the $4K Mac Pro, it still outspecs what you've listed. If you are going to do comparisons, at least use comparable parts.
