
Comment Re:Bigger fuckup than John Akers (Score 1) 84

3) Contractual obligations/customer relations. In the enterprise world people build systems they expect to last many, many years, without the parts disappearing on a whim. Which is why Intel has launched Itaniums as late as 2012: whoever they suckered into buying them will get time to bail out. Don't underestimate the value of grudges in the enterprise; any executive who gets burned by IBM ditching it fast and dirty will be their enemy when the next big consulting/outsourcing contract rolls around.

Comment Re:Unity is rubbish. Systemd is rubbish (Score 3, Interesting) 110

Except they're not chasing the mainstream, they're chasing the hype wave of Apple/Google/Microsoft and trying to be the "next big thing" instead of what is actually mainstream today with Win7/OS X. Instead of picking a market and staying on target, they keep leaving jobs unfinished: the office desktop from 1999, the laptop from 2004, the smartphone from 2009, the tablet from 2014. At this rate I don't think Ubuntu will stay in one place long enough to be relevant to anyone outside the ~1% of the desktop market Linux owns today.

Comment Re:20 million out of 50 million stolen? (Score 1) 59

Here's a really simplistic example: if you carry auto insurance, the liability levels on your policy give a good indication of how much wealth you have (because liability coverage is about protecting your assets, not anyone else's).

You don't even need to go to the insurance companies, in Russia you just buy the registration database and then target people who have Mercedes and BMWs.

(I'm not being facetious, this is how the criminals actually do it).

Comment Re:Why Cold Fusion (or something like it) Is Real (Score 1) 350

Springer is a rather serious publishing company. Springer journals carry very real weight.


Springer was a rather serious publishing company. In the last decade or so they've switched to publishing any old rubbish they can make a fast buck off. Look at the LNCS series for examples: they're publishing proceedings of conferences that look like they were held around a table in a beer hall.

Comment Re:Is D3D 9 advantageous over 10? (Score 1) 55

Games only started using D3D 10/11 *very* recently -- the back catalog this could enable is huge, and D3D 9 games are still coming out today. I'd say it's very important to support.

Bullshit. Almost all games have had a D3D 9 rendering path because XP has been so massively popular, but a whole lot of games have taken advantage of D3D 10/11 where it's been available. D3D 9 is very important to the number of games you can run on Linux, but it does not represent the state of the art. Speaking of which, WINE's support of D3D 9 through an OpenGL backend has been pretty good. Or rather, my impression has been that if they can figure out what DirectX is doing, there's usually a fairly efficient way of doing it in OpenGL. The summary tries to paint it as if OpenGL has been a blocker to DirectX support; my impression is quite the opposite. A gallium3d implementation is closer to the hardware and "more native" than a DirectX-to-OpenGL translation layer, but while it might boost performance a little it won't fundamentally support anything new.
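To give a flavour of why a DirectX-to-OpenGL layer can be fairly efficient: a lot of it is straightforward mapping. Here's an illustrative sketch of translating a D3D9-style DrawPrimitive call (it mirrors the general idea, not wined3d's actual code, and the enum just imitates D3DPRIMITIVETYPE):

    // Illustrative piece of a D3D9-to-OpenGL translation layer: convert a
    // DrawPrimitive-style call (primitive type + primitive count) into the
    // GL mode and vertex count that glDrawArrays expects.
    #include <GL/gl.h>
    #include <cstdint>

    enum class D3dPrimitive { PointList, LineList, LineStrip,
                              TriangleList, TriangleStrip, TriangleFan };

    static void translate_draw_primitive(D3dPrimitive type,
                                         uint32_t start_vertex,
                                         uint32_t primitive_count) {
        GLenum mode = GL_TRIANGLES;
        GLsizei vertex_count = 0;
        switch (type) {
        case D3dPrimitive::PointList:     mode = GL_POINTS;         vertex_count = primitive_count;     break;
        case D3dPrimitive::LineList:      mode = GL_LINES;          vertex_count = primitive_count * 2; break;
        case D3dPrimitive::LineStrip:     mode = GL_LINE_STRIP;     vertex_count = primitive_count + 1; break;
        case D3dPrimitive::TriangleList:  mode = GL_TRIANGLES;      vertex_count = primitive_count * 3; break;
        case D3dPrimitive::TriangleStrip: mode = GL_TRIANGLE_STRIP; vertex_count = primitive_count + 2; break;
        case D3dPrimitive::TriangleFan:   mode = GL_TRIANGLE_FAN;   vertex_count = primitive_count + 2; break;
        }
        // Vertex buffers, shaders and render state are assumed to have been
        // bound already by earlier translated calls.
        glDrawArrays(mode, static_cast<GLint>(start_vertex), vertex_count);
    }

The per-call overhead of that kind of mapping is small; the hard parts are the corners where the two APIs genuinely disagree, not the common path.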

Comment Re:Why Cold Fusion (or something like it) Is Real (Score 1) 350

Does he mean a transient reaction in the test set-up that produces the byproducts of fusion, but not long enough to generate useful power?

A transient reaction that can't be reliably reproduced despite recreating the same conditions to the best of our ability. That might be because the necessary conditions are so extremely specific that they only got them right once by accident, or because some contamination or malfunction happened to produce those conditions and attempts to recreate them fail. Or the results of the initial experiment were simply wrong, but here they've clearly put their desire to believe it was real over their good judgement.

Comment Re:But the ID shouldn't have to be secret (Score 1) 59

Except authentication is usually not username+password or a digital signature, it's identification plus an official paper saying you're that person. Everywhere you use your passport, driver's license or any other photo ID you're relying on three things:

1) The difficulty of acquiring the information to be on the card
2) The difficulty of forging the card
3) The difficulty of fooling the issuers into producing a fake card

The last one is often the sneaky one: with enough ID info you might trick an issuer into believing you've lost your ID and getting a new one issued. But there are plenty of direct fakes too; if they have the necessary information, they're halfway there.

Comment Re:I still don't see what's wrong with X (Score 4, Insightful) 226

Except that's pretty much what all AJAX web apps do, they "export the UI through some generic mechanism" to the browser, so I'd say it's very common. No need for roll-outs and patches: if the server now says there should be a new button, there is a new button for everyone. The issue is that I find most web apps really suck compared to native applications, so locally I usually want a native, non-web application.

What I'm talking about is a native toolkit that'd make the applications you normally use locally network transparent at the application level, not the display server level. Essentially a toolkit where the UI is always living in its own thread, asynchronously to the actual application. Network transparency just means that thread happens to be living on a different machine, drawing to a different display. You could tweak an application to handle that better, but you wouldn't have to; it'd sort of run remotely without modification.

For example, I made a basic calculator just as a proof of concept. Connected locally (I still used a TCP connection, just to localhost; better options are available) it looked and acted entirely like a native app you could use every day. It recorded button pushes and sent the push events to the back-end, which sent updated display text back. I hadn't made it better, but I hadn't made it worse either. The cool thing, though, was that now I could connect to it remotely. The same calculator popped up, my button clicks went over the network, display text came back over the network. It's a working local native app and a working network transparent remote app. At once. Without any application logic in the client, just drawing tools.
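A minimal sketch of what such a logic-free client can look like, assuming Qt widgets and a made-up newline-delimited protocol (button text goes out, display text comes back); the actual proof of concept may well have looked different:

    // Dumb calculator client: draws widgets, forwards button presses over
    // TCP, and shows whatever display text the back-end sends. All the
    // calculator logic lives on the other end of the socket.
    #include <QApplication>
    #include <QGridLayout>
    #include <QLineEdit>
    #include <QPushButton>
    #include <QTcpSocket>
    #include <QWidget>

    class CalcClient : public QWidget {
    public:
        CalcClient(const QString &host, quint16 port) {
            auto *layout = new QGridLayout(this);
            display_ = new QLineEdit(this);
            display_->setReadOnly(true);
            layout->addWidget(display_, 0, 0, 1, 4);

            const QString keys = "789/456*123-0.=+";
            for (int i = 0; i < keys.size(); ++i) {
                auto *b = new QPushButton(QString(keys[i]), this);
                layout->addWidget(b, 1 + i / 4, i % 4);
                // Forward the click; no local calculator logic at all.
                connect(b, &QPushButton::clicked, this, [this, b] {
                    socket_.write(b->text().toUtf8() + "\n");
                });
            }

            // Whatever text the back-end sends becomes the display contents.
            connect(&socket_, &QTcpSocket::readyRead, this, [this] {
                while (socket_.canReadLine())
                    display_->setText(QString::fromUtf8(socket_.readLine()).trimmed());
            });
            socket_.connectToHost(host, port);
        }

    private:
        QLineEdit *display_;
        QTcpSocket socket_;
    };

    int main(int argc, char **argv) {
        QApplication app(argc, argv);
        CalcClient w("localhost", 5555);  // point at a remote host and it's the same app
        w.show();
        return app.exec();
    }

Point the constructor at a remote host instead of localhost and nothing in the client changes, which is the whole point.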

Comment Sure they do (Score 5, Insightful) 76

People would like a magic box that makes them anonymous and secure on the internet while they log into Facebook, just like they want a magic diet pill while they continue to stuff their faces with sugar and fat. Or, for a more relevant tech example, they'd like a magic oracle to tell them if a website belongs to who they think it belongs to, which is why we have CAs as the best approximation. It's never going to work that way, but there's a lot of money in selling snake oil...

Comment Re:I still don't see what's wrong with X (Score 1) 226

The layer in the system between the user applications and the hardware interface is the place where Qt, GTK, the Windows graphics API, and all the other graphics toolkits go. Those toolkits shouldn't care too much about the hardware details, just the published capabilities of the GPUs.

It's kinda hard not to care, because a lot of it depends on where you have the data, where the processing capability is and what the link capacity is. Sending a video stream is heavy. Even sending an event stream like applications do all by themselves is too heavy during, say, a resize or scrolling action. Some time ago I experimented with turning Qt into a remote application toolkit, basically taking all signals and slots and serializing them over SSL. It was actually surprisingly successful: basically it was puppeteering a client to draw the interface and using signals and slots to synchronize information on demand. Only the bits you connected sent events across the link.

There were plenty of little gotchas though. For the scrolling and resizing I found a way to make a trigger that'd only fire after a custom delay, for example 50 ms after you were done resizing. And I needed to add a system to say "when this button is clicked, include the check state of this radio button and the text of that textbox", but the nice thing was that on the client side it was acting like a regular local window. It was resizing, the menus were popping up, the buttons responded (though the actions might take time due to latency), and I could do client-to-client signals/slots like "when the user checks this box, enable these extra fields" too, without a server round-trip.
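That kind of delayed trigger needs nothing more than a restartable single-shot timer. A small sketch of the pattern (the class name and the 50 ms example are illustrative, not the original code):

    // Fires only once the events stop coming: every poke() restarts the
    // single-shot timer, so the action runs one delay after the last event.
    #include <QObject>
    #include <QTimer>
    #include <functional>

    class DelayedTrigger : public QObject {
    public:
        explicit DelayedTrigger(int delayMs, std::function<void()> action,
                                QObject *parent = nullptr)
            : QObject(parent), action_(std::move(action)) {
            timer_.setSingleShot(true);
            timer_.setInterval(delayMs);
            connect(&timer_, &QTimer::timeout, this, [this] { action_(); });
        }

        // Call this from every resize/scroll event; the action only runs
        // once no new event has arrived for delayMs milliseconds.
        void poke() { timer_.start(); }

    private:
        QTimer timer_;
        std::function<void()> action_;
    };

Hook poke() into resizeEvent() or the scrollbar's valueChanged() and only the final geometry or position crosses the link instead of a flood of intermediate events.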

I could do neat things like send a jpg and have the client draw it, and even if the client moved the window around, covered it with other dialogs, or scrolled it in and out of sight, it was zero overhead. Yeah, I know, kind of like a browser, but not like any RDP/VNC solution. Often I needed the same resources over and over and didn't want to transfer them every time, so I needed a caching system, kind of like HTML5 persistent objects I guess, but before that. And I could populate list/tree/grid objects up front or on demand, a bit like DOM manipulation in HTML.
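The caching side can be as simple as keying resources by a content hash, so the server ships only the key and the bytes cross the link just once per client. A tiny sketch (the class and hash choice are illustrative assumptions):

    // Client keeps resources keyed by a content hash; the server sends the
    // key, and only on a reported miss does it send the bytes themselves.
    #include <QByteArray>
    #include <QCryptographicHash>
    #include <QHash>

    class ResourceCache {
    public:
        // Server side: derive a stable key from the resource bytes.
        static QByteArray keyFor(const QByteArray &data) {
            return QCryptographicHash::hash(data, QCryptographicHash::Sha1).toHex();
        }

        // Client side: check for the key, request the payload only on a miss.
        bool contains(const QByteArray &key) const { return store_.contains(key); }
        void insert(const QByteArray &key, const QByteArray &data) { store_.insert(key, data); }
        QByteArray fetch(const QByteArray &key) const { return store_.value(key); }

    private:
        QHash<QByteArray, QByteArray> store_;
    };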

It wasn't fully transparent but it was somewhat API transparent: you'd get a "RemotePushButton" instead of a "QPushButton" which acted the same, but instead of actually drawing anything it just sent commands to the client, which drew the real QPushButton. You didn't really see that though; you just called the functions and connected the RemotePushButton's onClick() signal as if it were a QPushButton. Kind of like HTML+AJAX on steroids, but looking and feeling like a native application. That, I feel, would have been rather next-gen to see finished.
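A stripped-down Remote* proxy might look roughly like this (the wire format, widget ids and signal name are illustrative guesses, not the actual code):

    // Looks like a button to the application: same calls, same clicked()
    // signal. It never draws anything; it only writes commands down the link,
    // and a single socket reader elsewhere routes remote events back to it.
    #include <QObject>
    #include <QString>
    #include <QTcpSocket>

    class RemotePushButton : public QObject {
        Q_OBJECT
    public:
        RemotePushButton(int widgetId, QTcpSocket *link, const QString &text,
                         QObject *parent = nullptr)
            : QObject(parent), id_(widgetId), link_(link) {
            send(QStringLiteral("create button %1 %2").arg(id_).arg(text));
        }

        // Same API the application would use on a QPushButton; it just turns
        // into a command the client applies to the real QPushButton it drew.
        void setText(const QString &text) {
            send(QStringLiteral("settext %1 %2").arg(id_).arg(text));
        }

        // The socket reader parses lines like "clicked 7" and calls this on
        // the widget with the matching id, which re-emits the familiar signal.
        void handleRemoteEvent(const QString &event) {
            if (event == QStringLiteral("clicked"))
                emit clicked();
        }

    signals:
        void clicked();   // connect to this exactly as you would QPushButton::clicked()

    private:
        void send(const QString &cmd) { link_->write((cmd + QLatin1Char('\n')).toUtf8()); }

        int id_;
        QTcpSocket *link_;
    };

The application code connects clicked() and calls setText() exactly as with the real widget, which is what makes it feel API transparent.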

If you're wondering why I didn't finish it: mainly because the product I was thinking of using it for kinda died on the drawing board. And because to really become user friendly it'd have to integrate on a much deeper level, so you could use all the Q* classes without rewriting everything; the Remote* classes were a hack (QObjects) working alongside the standard library. You'd also want to put more work into persistence: the idea was that you could yank out the plug on one machine, log back in on another and it'd redraw everything, but initially it passed everything through and didn't shadow the client state on the server. It could have, though; the rewrite was just too much.

Comment Re:Once again proving ARM is awesome (Score 2) 97

which still imposes a significant overhead in terms of transistor count

They did it on the Pentium Pro, which had ~1/1000th of the transistors modern processors have today. Even though the instruction set has grown a few times in size since then, the decode overhead is certainly irrelevant when it comes to total transistor count today. But keep on spouting nonsense.

Comment Re:Once again proving ARM is awesome (Score 2) 97

*digs up the carcass so you can flog the dead horse again*

No x86 chip from the last 20 years runs CISC instructions internally; they're split into micro-ops, and AMD/Intel have spent those 20 years optimizing their decoders and internal instruction sets for this one task. If you think using the ARM instruction set is more optimized than that, you've drunk way too much of the kool-aid.
