
Comment Re:Sounds familiar. (Score 2, Insightful) 571

People who will try to cause fear and injury aren't new. There hasn't been any proof that all this legislation and fear mongering around curiosity has actually made us any safer. We live in an inherently dangerous world. It's time to realize that we can't baby-proof it. Then we can get back to doing research, having odd hobbies, and being generally curious without fear of being accosted.

Comment Re:So they broke it, and made it theirs. (Score 1) 163

Perhaps you should actually read up on the technology (links are already in other comments) and realize that the sleep proxy service handles some requests without waking the machine. For something like a ping, it doesn't wake the machine; instead, the proxy responds to the ping directly. The same goes for service advertisements. It only wakes the machine when the proxy can't handle the request on the machine's behalf.
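The "wake the machine" half of a sleep proxy is conventionally done with a standard Wake-on-LAN magic packet: 6 bytes of 0xFF followed by the target's MAC address repeated 16 times. As a rough illustration (not Apple's actual implementation, and the MAC address below is made up), here's a minimal Python sketch:

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF
    followed by the target MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast (port 9 is conventional)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(make_magic_packet(mac), (broadcast, port))

pkt = make_magic_packet("00:1b:63:aa:bb:cc")
print(len(pkt))  # 102 bytes: 6 + 16 * 6
```

The proxy answers pings and mDNS queries itself, and only sends something like this when real traffic needs the sleeping machine.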

So yeah, it does solve the problem. Now you've proven not only that you are unable to perform basic research, but that you ignore the facts presented and continue to claim something entirely refuted by them.

Comment Re:Not necessarily fake (Score 5, Insightful) 511

If you've ever had a display calibrated, you'd know that even the existing RGB color space can't be completely recreated with existing RGB-based displays. The problem lies in the inability of LED, LCD, or plasma panels to produce light uniformly across the three color channels. If you can add a fourth channel that lets the RGB color space be more accurately reproduced by the display, then you will see an improvement. It won't make the source any better, but the output the display generates for that input will be.
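For a concrete sense of how a fourth (white) subpixel can help, one naive RGB-to-RGBW split moves the gray component shared by all three channels onto the white channel, freeing the colored subpixels to render only the chromatic part. This is just an illustrative sketch, not how any particular panel's controller actually works:

```python
def rgb_to_rgbw(r, g, b):
    """Naive RGB -> RGBW split: the achromatic (gray) portion shared
    by all three channels is moved to the white subpixel.
    Values here are 8-bit channel levels (0-255)."""
    w = min(r, g, b)              # the gray component all three share
    return (r - w, g - w, b - w, w)

print(rgb_to_rgbw(200, 120, 40))  # (160, 80, 0, 40)
```

Real controllers apply gamma correction and gamut mapping before anything like this, but the sketch shows why a fourth primary gives the panel more headroom to reproduce the target color.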

Comment Re:I just don't see the issue (Score 1) 559

So in other words, it was OK for everyone to broadcast information that they didn't really want to be public because they didn't expect anyone to actually make it public. Then, when someone does, it's the fault of the collector that everything was available? Huh? Perhaps it would be more prudent for individuals to consider what making something public means before deciding to do so. The option to not broadcast SSIDs has been in APs since the beginning.

A probably poor analogy: When I'm visiting my parents, I tend to not bother locking my car doors since they live in the middle of nowhere. I don't expect anyone to steal my car because it is unlikely that someone would know that I leave it unlocked and would venture out that far to steal a car. Now, a company comes along and records locations and the number of cars that have unlocked doors. If it helps, consider that this can be determined for most cars without touching the car. If my car gets stolen, do I sue the company for making it known that my car was frequently unlocked in this area? No, I realize how dumb I was, file an insurance claim, and start shopping for a new car. I probably won't leave my car unlocked any more either.

Comment Re:Too many Linux-incompatible-with-Linux distros (Score 1) 148

Uh, no. OS X provides a rich set of libraries as part of the base OS. Apple goes to great lengths to ensure compatibility between OS versions (libSystem is compatible back to version 1). The only time any software includes a library inside its app bundle is if the developers wrote it themselves or it is an OSS library that isn't in the base OS. Most apps don't need to.

Comment Re:30ms? (Score 1) 334

That's why we have virtual address spaces. Each process gets a 4GB address space for a 32-bit OS. Only 2-3.5GB (depending on OS) will be available for actual program code (not shared libraries) and data. Of course, those virtual addresses get translated to physical addresses by the OS on a per-page (or at least per-range) basis, so all 4GB of physical RAM can be used assuming there is more than one process running.

That's all a simplification of how modern VM subsystems work, of course.

As to memory "stolen" by hardware, the big one there is video cards. A fair number of OSes still map the entire video memory into the physical memory address map. You might think this would be a problem, but on the x86 architecture at least, physical addresses have been 36 bits for a while. Even with a few video cards with 1GB of video RAM each, there is still enough space to address all 4GB of physical RAM.
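The 36-bit physical addressing mentioned above (PAE on x86) works because page tables map virtual page numbers to physical frame numbers that can be wider than a 32-bit address allows. A toy illustration in Python (the page-table contents are invented, and a real MMU walks multi-level tables in hardware):

```python
PAGE_SHIFT = 12           # 4 KiB pages
PAGE_SIZE = 1 << PAGE_SHIFT

# Toy per-process page table: virtual page number -> physical frame number.
# With 36-bit physical addresses, a frame can sit above the 4 GB line
# even though the process only sees 32-bit virtual addresses.
page_table = {
    0x00010: 0x123456,    # maps to physical memory above 4 GB
    0x00011: 0x00042,     # maps to low physical memory
}

def translate(vaddr: int) -> int:
    """Translate a 32-bit virtual address to a (possibly 36-bit)
    physical address via the page table."""
    vpn = vaddr >> PAGE_SHIFT
    offset = vaddr & (PAGE_SIZE - 1)
    frame = page_table[vpn]
    return (frame << PAGE_SHIFT) | offset

print(hex(translate(0x00010ABC)))  # 0x123456abc, well above 4 GB
```

Two processes with identical virtual addresses simply get different page tables, which is how all the physical RAM gets used.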

Comment Re:I'm Not a Betting Man... (Score 1) 235

A controlling interest in Google is owned by the CEO and two founders. Their IPO stated that this would be the case and that public investors would be able to share in the financial gains, but not significantly in the direction or operation of the company. If those three have decided that China isn't worth it, there is little the investors can do to stop them.

Comment Re:Premature optimization is evil... and stupid (Score 5, Insightful) 249

Having spent 4 years as one of the primary developers of Apple's main performance analysis tools (CHUD, not Instruments) and having helped developers from nearly every field imaginable tune their applications for performance, I can honestly say that regardless of your performance criteria, you shouldn't be doing anything special for optimization when you first write a program. Some thought should be given to the architecture and overall data flow of the program and how that design might impose some high-level performance limits, but certainly no code should be written using explicit vector operations, and all loops should be written for clarity. Scalability by partitioning the work is one of those items that can generally be incorporated into the program's architecture if the program lends itself to it, but most other performance-related changes depend on specific use cases. Trying to guess those while writing the application logic relies solely on intuition, which is usually wrong.

After you've written and debugged the application, profiling and tracing are the prime ways to find _where_ to optimize. Your experiences have been tainted by the poor quality of tools known to the larger OSS community, but many good tools are free (as in beer) on many OSes (Shark for OS X, as an example), while others cost a bit (VTune for Linux or Windows). Even large, complex multi-threaded programs can be profiled and tuned with decent profilers. I know for a fact that Shark is used to tune large applications such as Photoshop, Final Cut Pro, Mathematica, and basically every application, daemon, and framework included in OS X.

What do you do if there really isn't much of a hotspot? Quake 3 was an example where the time was spread out over many C++ methods so no one hotspot really showed up. Using features available in the better profiling tools, the collected samples could be attributed up the stack to the actual algorithms instead of things like simple accessors. Once you do that, the problems become much more obvious.
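Attributing samples up the stack is what most profilers call cumulative (inclusive) time, as opposed to self time. Python's cProfile makes the distinction easy to demonstrate; this is just a stand-in for the Shark feature described above, with made-up function names:

```python
import cProfile
import io
import pstats

def tiny_accessor(x):
    return x * x              # cheap on its own...

def hot_algorithm(n):
    # ...but called so often that its time really belongs to this
    # algorithm, not to the accessor where the samples land.
    return sum(tiny_accessor(i) for i in range(n))

pr = cProfile.Profile()
pr.enable()
hot_algorithm(100_000)
pr.disable()

buf = io.StringIO()
# Sorting by 'cumulative' charges callee time up the stack to the
# caller, so hot_algorithm shows its full cost even though the raw
# time is scattered across tiny_accessor calls.
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(10)
print("hot_algorithm" in buf.getvalue())  # True
```

Flat (self-time) output would show a smear of small accessors; the cumulative view rolls them up to the algorithm that's actually responsible.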

What do you do after the application has been written and a major performance problem is found that would require an architectural change? Well, you change the architecture. The reason for not doing it during the initial design is that predicting performance issues is near impossible even for those of us who have spent years doing it as a full time job. Sure, you have to throw away some code or revisit the design to fix the performance issues, but that's a normal part of software design. You try an approach, find out why it won't work, and use that knowledge to come up with a new approach.

The largest failing I've seen in my experience has been the lack of understanding among management and engineers that performance work is a very iterative part of software design and that it happens late in the game. Frequently, schedules get set without consideration for the amount of time required to do performance analysis, let alone optimization. Then you have all the engineers who either try to optimize everything they encounter and end up wasting lots of time, or do the initial implementation and never do any profiling.

Ultimately, if you try to build performance into a design very early, you end up with a big, messy, unmaintainable code base that isn't actually all that fast. If you build the design cleanly and then optimize the sections that actually need it, you end up with a more maintainable code base that meets the requirements. Be the latter.

Comment Re:Raises a question? (Score 1) 1012

I worked closely with the kernel and firmware engineers at Apple for the last 4 years. They have never intentionally disabled any hacks, unlocks, or unofficially supported hardware. The changes that cause them to stop working were done solely to fix other bugs or to enable new features. It's not vindictive. I know, I helped make some of those decisions.
