Comment: Re:Never failed before (Score 2) 129

by Sun (#47887755) Attached to: Chrome OS Can Now Run Android Apps With No Porting Required

Yeah, I've heard that claim before. Aside from the Novell WordPerfect stink, it is just not so.

What people fail to consider when saying this is that, even if it were still true (and I don't think it is), it is immaterial. Wine does not need to implement every one of Windows' APIs, and it doesn't. It just needs to implement the APIs that programs are actually using.

MS cannot change the interfaces of existing APIs. That would break application compatibility (without which MS has no monopoly). They can add new functionality all they want; until applications start using it (i.e., after release), it is immaterial to Wine.

Also, you simply assumed everything else I said follows the same logic. The Linux interfaces in BSD are not subject to the same rules, and yet they did very little to drive adoption of BSD-based OSes.

It all boils down to this: if you want to run Windows apps, you are going to do so on Windows. If you want to run Linux apps, you are going to do so on Linux. If you want to run Android apps, you are going to do so on Android. Every so often, your native OS will cover 90% of what you want, and support for the remaining 10% would be great. That is not, however, something that drives large-scale market shifts.

Shachar

P.S.
Judge Jackson's findings of fact had everything to do with IE integration, something to do with the Java embrace-and-extend, and nothing at all to do with private APIs.

Comment: Never failed before (Score 5, Insightful) 129

by Sun (#47887109) Attached to: Chrome OS Can Now Run Android Apps With No Porting Required

I mean, OS/2 running Windows apps was a huge push forward for IBM. Wine completely changed the Linux desktop picture, and BSD's Linux binary compatibility made it an effective superset of Linux, to the point that nobody bothers to install the latter (not to mention the similar capability of SCO Unix: they wouldn't be where they are today without it).

I hear that Chrome OS is a nice platform and is doing well. I'm glad, in a "diversity is good", non-committal sort of way. I don't think this particular feature will change much.

Shachar

Comment: Re:How would we know? (Score 1) 811

by Sun (#47846477) Attached to: 3 Recent Flights Make Unscheduled Landings, After Disputes Over Knee Room

El Al is trying it. They have a new coach class called "economy plus". What the link doesn't say is that you can buy a regular seat and then participate in an upgrade auction, which might prove cheaper than buying the upgrade outright.

I took a flight to France a month ago. The flight there was almost completely empty, and they let us move to those seats at no cost. I can't say for sure whether there was more leg room, but it was the exact experiment you were talking about either way.

Then again, I'm fairly heavyset, and the flight back (regular coach) went fine without the upgrade. Maybe they just haven't completely jumped on the "no leg room" bandwagon yet.

Shachar

Comment: Re:bringing in more H1Bs will solve this problem (Score 2) 249

by Sun (#47840987) Attached to: IT Job Hiring Slumps

And how many jobs actually require you to get "close to the metal"?

That's the wrong question.

The real question is "how many jobs need you to understand what the metal does when you write code, in order for you to be any good?". The answer is "almost all of them".

Sure, there are rapid application development (RAD) environments that allow you to create a TCP server in three lines of code with a scale-out of 5,000... assuming you don't actually want to do anything with each connecting client. If you do, the scale-out suddenly drops to 5, unless you know what you're doing.
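To make that concrete, here is a minimal sketch in C of the naive server hiding behind such a three-liner (POSIX sockets assumed; the port number and buffer size are made up for illustration). Handling each client inline is exactly where the scale-out collapses:

    /* Naive TCP echo server: accepting connections is cheap; the work
     * done per client is where the hidden cost lives. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(7777);     /* arbitrary port for the demo */
        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 128);

        for (;;) {
            int client = accept(srv, NULL, NULL);
            /* While we service this client, every other connection waits
             * in the backlog: the "scale-out of 5,000" quietly becomes 5. */
            char buf[4096];
            ssize_t n;
            while ((n = read(client, buf, sizeof buf)) > 0)
                write(client, buf, n);
            close(client);
        }
    }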

And here's the sore point: most programmers don't. They don't differentiate between the capabilities their environment provides cheaply and the ones it provides at great expense. They were never trained to think that the operations they invoke have a cost, and that this cost needs to be weighed and considered.

So, yeah, CS studies are not the place to learn how to use RADs. Pick those up on your own later. You should learn about bare-metal programming, about how a garbage collector is actually implemented and what its costs are, and about the limits and capabilities of your compiler's optimizer. That way, if you end up using RADs, at least you will not be a shitty RAD programmer.

Shachar

Comment: Re:To remove this... (Score 1) 230

by Sun (#47829129) Attached to: Akamai Warns: Linux Systems Infiltrated and Controlled In a DDoS Botnet

You really should look up how Unix does its stuff. In particular, how the page cache works and how inode ref-counting works.

The short answer is that you are wrong. Everything is erased.

Of course, strictly speaking, this is false as well. Some things are on read-only file systems, or on pseudo file systems that do not allow erasing (such as /proc). Those, as well as the paths leading to them, will not be erased. Everything else, however, is gone by the time "rm" finishes.
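A small sketch of the ref-counting point in C (the file name is made up for the demo): unlink() removes the directory entry right away, but the inode and its data stick around until the last open file descriptor is closed:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "demo.txt";   /* hypothetical file name */
        int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
        write(fd, "still here\n", 11);

        unlink(path);                    /* the name is gone... */

        char buf[32];
        lseek(fd, 0, SEEK_SET);
        ssize_t n = read(fd, buf, sizeof buf - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("read after unlink: %s", buf);  /* ...but the data is not */

        close(fd);   /* last reference dropped: now the inode is freed */
        return 0;
    }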

Shachar

Comment: Re:There's a lot more going on... (Score 1) 161

by QQBoss (#47781417) Attached to: Research Shows RISC vs. CISC Doesn't Matter

That is more or less accurate. The stated goal of the original RISC was to make a Reduced Instruction Set Computer, but what was in fact produced was a Reduced Instruction Set Complexity CPU. By restricting memory access to loads and stores only, every other instruction that could execute in one clock COULD execute in one clock, always. Whereas some CISC instructions involving arrays could kick off 10+ memory touches as a side effect, RISC instructions could never do that (except via exceptions). So when all 10 of those memory touches weren't required, the RISC architecture could optimize away the unnecessary ones (which was a bitch in 1990, but commonplace by 2000 and exceedingly trivial by 2010, to put it roughly).
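As a rough illustration (this C fragment is mine, not the parent's): on a load/store machine, a memory-to-memory update is decomposed into explicit single-clock steps, whereas a 68K-style instruction could fold the memory touches in as side effects:

    /* One CISC-style memory-to-memory update, spelled out the way a
     * load/store (RISC) machine must execute it.  Because each touch is
     * a separate instruction, the compiler can drop any that turn out
     * to be unnecessary. */
    void scale_element(int *a, int i, int k)
    {
        int tmp = a[i];   /* explicit load */
        tmp *= k;         /* ALU op, registers only, one clock */
        a[i] = tmp;       /* explicit store */
    }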

I taught CISC architectures (68K mostly) and was a minor architect for PowerPC (I helped work on the early EABI, the embedded application binary interface).

But this leads to a problem: cache. That CISC operation that made 10 memory touches took roughly 10-18 bytes of instruction storage (68K example) and caused 10 data cache accesses that would either hit or miss. A 16-bit RISC would take 22 bytes (and didn't double the number of useful registers available), and a 32-bit RISC would take 44 bytes (but generally doubled the number of useful registers, reducing the need for so many loads and stores). Thank goodness the instruction pipeline took fewer transistors to implement, because you need them all back to make the Icache bigger! The hope was that those 10 memory touches were rarely needed if you had more registers, so you could cut back on other loads somewhere (but we didn't get really good at doing that automatically until the late '90s, by which time we could show that the RISC penalty was effectively negated; the specific numbers remain the property of my name-changed employer, but they were down to single-digit percentage differences). Dcache would have the same hits and misses, unless you were also able to allocate saved transistors to some Dcache, which might affect hit rates by a few percentage points.

But with complicated instructions come pipeline clocking challenges. Implementing the entire x86 pipeline in 5 stages would result in a sub-200 MHz pipeline today; the P4 push to 4 GHz required up to 19 stages in the worst case (and who knows how many designers), IIRC! Meanwhile, most RISC architectures zoom along happily with 5-7 stages, and only manufacturing nodes or target design decisions keep them from clocking up to x86 frequencies.

Hands down, it was never any 'benefits' of CISC (or, specifically, of the x86 architecture) that allowed Intel to take the field; it was market forces and manufacturing might. A win is a win.

BTW, to the AC GP: just because an instruction appears complex (most SIMD operations, MADDs, FPSQRTRES, etc.) doesn't disqualify it; it still counts as RISC if it can either execute in one clock or at least be pipelined with nominally one result per clock, without impacting the pipeline for all the other commonly executed instructions. After all, we could make a divide instruction execute in 1 clock, too, as long as you don't mind your add instructions taking 16x longer (though still one clock), but that would be cheating.

Comment: Re:The world we live in. (Score 1) 595

by Sun (#47753645) Attached to: New Nail Polish Alerts Wearers To Date Rape Drugs

Please provide a source for that claim.

As far as I know, the great majority of acquaintance rapes are committed by either a family member or a neighbor. Then again, I haven't been keeping track, so I might be confusing things (for example, that might be the statistic for rapes of minors, and thus irrelevant to the date rape discussion).

Still, if you can back your claim, please do.

Shachar

Comment: Re:One of the most frustrating first-world problem (Score 1) 191

by QQBoss (#47660641) Attached to: Reversible Type-C USB Connector Ready For Production

At some point in your life you're going to have to go all Zen about it and not care so much.

Only then can you throw those old SCSI cables out.

Hah, I scrapped 4 cubic yards of collected computer detritus today, including at least a dozen different SCSI cables (with some UltraSCSIs). I'd been needing to do that for years. I did shed a bit of a tear over the Amiga stuff, though.

Yes, before I scrapped anything, I donated all that I could to anyone and everyone. But 4 working PCs couldn't even be given away to an orphanage!

Comment: Re:Not all that surprising... (Score 3, Informative) 131

by Sun (#47660269) Attached to: Errata Prompts Intel To Disable TSX In Haswell, Early Broadwell CPUs

I have a friend who came to me, eyes all aglow, about this new feature his shiny new CPU has. I listened and was skeptical.

He then tried, for over a month, to get the feature to produce better results than traditional synchronization methods. This included a lot of dead ends due to simple misunderstandings (try to debug your transaction by adding prints: no good, as a system call is guaranteed to abort the transaction).

We had, for example, a hard time producing proper benchmarks for the feature. Most actual use cases involve a relatively low contention rate. Producing a benchmark that has low contention on the one hand, but lets you actually test how efficient a synchronization algorithm is on the other, is not an easy task.

After a lot of going back and forth, as well as some nagging of people at Intel (who, surprisingly, answered him), he came to the following conclusion (shared with others): many times, a traditional mutex will actually be faster. Other times, it might be possible to gain a few extra nanoseconds using transactions, but the speed difference is by no means mind-blowing. Either way, the price you pay in code complexity (i.e., bugs) and reduced abstraction hardly seems worth it.

At least as it is implemented right now (and I personally fail to see how that changes in the future; then again, I have been known to miss things in the past), the speed difference isn't going to be mind-blowing.
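For reference, the pattern he was benchmarking looks roughly like the sketch below. This is my reconstruction, not his code, using Intel's RTM intrinsics (compile with -mrtm on a TSX-capable part). Note the mandatory fallback path: a transaction can abort for many reasons, including, as noted above, any system call.

    #include <immintrin.h>
    #include <stdatomic.h>

    static atomic_int lock_taken;  /* 0 = free, 1 = held by fallback path */
    static long counter;

    void increment(void)
    {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            /* Read the fallback lock inside the transaction: if another
             * thread holds it, abort rather than race with it. */
            if (atomic_load_explicit(&lock_taken, memory_order_relaxed))
                _xabort(0xff);
            counter++;              /* speculative: no lock was taken */
            _xend();                /* commit */
            return;
        }
        /* Aborted (contention, capacity, a syscall...): take the lock. */
        while (atomic_exchange_explicit(&lock_taken, 1, memory_order_acquire))
            ;                       /* spin until the lock is free */
        counter++;
        atomic_store_explicit(&lock_taken, 0, memory_order_release);
    }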

Shachar
