Comment Re:What's the point? (Score 1) 213

Ah.. but I thought remote access was a niche application, no longer relevant enough for developers to care about?

That's a bit much. I suspect what you are hearing is that network transparency is a niche application. Remote access, while vastly less important than it was 20 years ago, is still used, which is why Wayland supports it, and supports it better than X11. This is what I keep saying: you cannot confuse remote access in general, which is far better with Wayland, with X11's specific method of remote access. BTW, we know KDE and GNOME are going to support remoting because they have already been working with Wayland.

I use Stumpwm along with the pager from Lxde.

LXDE is in the process of becoming LXQt, which fully intends to support remoting using Qt's system. As of the June 2013 announcement, Hong Jen Yee (PCMan) is waiting on the freedesktop guys to tell him the messy details are worked out; for now they've told him what not to use in Qt 5 if he wants to be on Wayland easily, and along with the Razor team they are making sure LXQt avoids that stuff, some of which is being backported to LXDE. I suspect the original LXDE will never be native Wayland. I do know they are ripping X11 dependencies out of the LXDE codebase to make such a port easier if they choose to go all the way, but they don't intend to.

However, getting to your use case: LXDE right now works beautifully on SOC configurations; that's why it exists. So the kind of lightweight dumb system you are asking for becomes much easier now that you want it for LXDE. Such systems are sold all over Asia running LXDE-based distributions, and they are cheap (generally under $200, often around $125). I'm not sure whether they include additional languages or are even tested against English, but even if this is not a "just go out and buy this" situation, it certainly is a proof of concept. So there you go: if you want to use LXDE, you will have terrific support and get an upgrade.

As for StumpWM: none of the X11 window managers will work with Wayland; window management for most of them is a rewrite from scratch. You can of course run StumpWM on X11 on Wayland. However, Stump isn't really a window manager so much as a programming exercise demonstrating how to do window management in Lisp, so in theory you got lucky: while someone is going to have to port it to Wayland, it might well be fairly easy. To complicate this, though, under Wayland the display server, window manager, and compositor are compiled into one process; there is no abstraction like there is in X11. So while getting StumpWM to run natively on Wayland may be easy, to do anything useful it is going to need to be hard-paired with a display server and compositor. I don't see any evidence that Stump will go there. So let's assume StumpWM as an end-user product is gone, even if it still exists as a teaching tool.

Moreover, I'm not sure the culture is going to allow you to just casually change window managers under Wayland. For example, LXQt will only be tested against its own compositor, which will be tuned to LXQt. You can run other applications, but when you run LXQt (again, the version of LXDE that will exist on Wayland) you have picked your GUI.

This, BTW, is where you get huge advantages. Your LXDE stack, top to bottom, can be compiled for the SOC hardware (the way Android works today), so you will see huge increases in battery life and much better performance.

By texting I mean SMS. You have something that lets you do that from your computer?

Yes. There are many solutions. Apple's built-in Messages application does this; http://mightytext.net/ works for Android.

Why keep upping the hardware requirements when we have a working solution already? Aren't the landfills full enough?

We don't have a working solution already; X11 doesn't work. New OSes and features raise hardware requirements. That's the norm.

But I'm not even sure you are being environmentally sound. Even on that kind of crappy hardware you are probably burning about 3 GB/hr of network traffic for remote X usage, assuming this is all wired (wireless is more), or about $0.03/hr in electricity over a WAN (a wired LAN is close to zero). That's about $0.25/day, or roughly $50/yr. The landfill is less environmentally destructive than all the electricity you are consuming by having everything sent to you post-processed, instead of storing the bulk locally and just sending instructions.
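Just to sanity-check that arithmetic, here is a minimal back-of-the-envelope in Python. The energy-per-GB and electricity-price figures are my own illustrative assumptions, not measurements:

```python
# Back-of-the-envelope check of the remote-X energy cost above.
# All inputs are assumptions, not measurements: 3 GB/hr of X traffic,
# ~0.05 kWh of network energy per GB shipped over a WAN, $0.12/kWh,
# and an 8-hour day for 250 workdays a year.
gb_per_hour = 3.0
kwh_per_gb = 0.05        # assumed WAN energy intensity per GB
price_per_kwh = 0.12     # assumed retail electricity price, $/kWh
hours_per_day = 8
workdays_per_year = 250

cost_per_hour = gb_per_hour * kwh_per_gb * price_per_kwh
cost_per_day = cost_per_hour * hours_per_day
cost_per_year = cost_per_day * workdays_per_year

print(f"${cost_per_hour:.3f}/hr, ${cost_per_day:.2f}/day, ${cost_per_year:.0f}/yr")
# -> $0.018/hr, $0.14/day, $36/yr: the same order of magnitude as the
#    $0.03/hr, $0.25/day, ~$50/yr figures quoted above.
```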

Comment Re:What's the point? (Score 1) 213

OK... it certainly is the case that Wayland remoting will work worse with dumb-terminal-type setups. Wayland assumes that the box remoting in is smart. A phone is plenty smart enough to run Wayland, and if you kept your applications light (i.e. about 10 years old) it would be able to handle the toolkits. So for your lapdock-type setup you could actually run your desktop applications in a way that is comfortable over mobile data and doesn't chew up insane amounts of data. Again, as is usual for the actual use case you are describing, Wayland is likely far better than X11: not far better at doing things the X way, but far better at doing the same function, as long as you are willing not to cut against the grain.

As far as security goes, the receiving machine can have an unwritable filesystem (or one unwritable from the OS running Wayland), or it can be virtualized so you just blow away and restore the image after use. You can achieve the same security with smart as you do with dumb.
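As a minimal sketch of the blow-away-and-restore idea, assuming a QEMU qcow2 guest image with a pristine snapshot; the disk path and the snapshot name "clean" are hypothetical:

```python
# Minimal sketch: reset a virtualized "smart terminal" to a known-clean
# state after each remote session. Assumes a qcow2 disk image with a
# snapshot named "clean"; both the path and the name are made up.
import subprocess

DISK = "/var/lib/vms/terminal.qcow2"   # hypothetical guest image
SNAPSHOT = "clean"                     # hypothetical pristine snapshot

def restore_clean_image() -> None:
    """Revert the guest disk to the pristine snapshot, discarding
    anything the remote session wrote."""
    subprocess.run(
        ["qemu-img", "snapshot", "-a", SNAPSHOT, DISK],
        check=True,
    )

if __name__ == "__main__":
    restore_clean_image()
```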

But if you want actually dumb, then you need to virtualize your screen, and then you are talking something like VNC. Wayland makes use of smart clients, so it doesn't support dumb ones as well. Smart is the more common situation today.

Availability of choice is something that has made Linux great for a very long time. Will there be no future development in this area? Is it going to be QT/GTK forever?

Any toolkit can have Wayland remoting. That's going to be a standard part of writing new toolkits in, say, 10 years.

Providing network support at a layer between the toolkits and what Wayland provides would be fine by me so long as it meant that this layer was what the toolkits or applications are coded to talk to, not Wayland directly.

In theory Wayland supports that. In practice it isn't what's going to happen, so I don't consider it particularly relevant: remoting happens at the toolkit level, and applications pick it up from their toolkit unless they want something more complex.

But... if I don't have my lapdock on me remoting the phone to a computer would be a handy way to handle long-winded text conversations.

That's session sharing. What's better is not remoting the conversation to the computer, but having the computer sync with the phone so it has a copy of the conversations at all times. Many messaging systems already provide that; I do it today. No reason to remote, just push the data.

I see something like moving the remote support out to the toolkits as making this dream far less likely.

I disagree; it makes the dream far more likely. To actually be remote you need good WAN behavior on high-latency connections, like a cell phone's, and that's unfixably unavailable with X11. Everything else is just using a secure, potentially unwritable image for the remote machines (a VM, etc.) and enjoying it. Yes, when your main OS changes toolkits (something like KDE 5 to 6) you will need to update your dumb images to match, but that's not hard in your scenario.

Comment Re:How soon until x86 is dropped? (Score 1) 146

Server, I think, is trickier. Let me throw out a hypothetical for, say, 2028.

Samsung releases a 1024-core SOC which runs cool enough to be used in a blade. Intel is selling 16-core Xeons that require a full 1U. Say the Samsung cores are each half as fast as the Intel cores. Everything needs to be custom compiled for the hardware, but Samsung has its own fully supported distribution which supports Cloud Foundry, OpenStack, etc. The complexity of x86 makes it impossible for Intel to emulate these designs.
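To see why that hypothetical is scary for Intel, compare Xeon-equivalent cores per rack unit. The blade-chassis density is my own assumption layered on top of the numbers above:

```python
# Rough density math for the 2028 hypothetical above.
# Assumed (mine, not the post's): 16 blades per 10U blade chassis.
samsung_cores_per_blade = 1024
samsung_core_speed = 0.5      # each core half as fast as an Intel core
blades_per_chassis = 16       # assumed chassis capacity
chassis_height_u = 10         # assumed chassis height in rack units

intel_cores_per_1u = 16       # one 16-core Xeon per 1U
intel_core_speed = 1.0

samsung_per_u = (samsung_cores_per_blade * samsung_core_speed
                 * blades_per_chassis / chassis_height_u)
intel_per_u = intel_cores_per_1u * intel_core_speed

print(f"Samsung: {samsung_per_u:.0f} Xeon-equivalent cores per U")
print(f"Intel:   {intel_per_u:.0f} Xeon-equivalent cores per U")
# -> roughly 819 vs 16, a ~50x density gap: the kind of number that
#    makes recompiling everything for the new hardware worth it.
```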

Now that, I think, could do it.

Comment Re:What's the point? (Score 1) 213

You are relying on every author to care about this issue. I don't think that even a majority will.

I'm relying on the author of every (or almost every) toolkit to care. Once the toolkits remote, you have better remoting on a per-toolkit basis than you do under X. Your hypothetical application author has to have chosen an obscure modern toolkit which doesn't remote: if they choose a common one it is going to support Wayland, and if they choose an old one it is going to support X. You are talking about well under 1% of all applications, and those likely designed not to remote.

How bad is the latency going to be anyway? Today's cable modems are much faster than the LANs where I first learned to 'love X'.

I think you are confusing bandwidth and latency. Latencies on LANs are probably slightly higher than they were 25 years ago. Latencies on WANs aren't remotely close to the LAN latencies of 25 years ago, and are slowly creeping up, not decreasing; internationalization and additional layers of servers are mainly to blame. SIP has a hard 150ms barrier. That used to be no problem; now, with international traffic, especially to the third world, even 200ms variants aren't generous enough.

Moreover, we don't have any solution to latency. While we might be able to halve it through technology and better engineering, to go beyond that we would either need to shrink the planet or increase the speed of light.
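Here is a quick illustration of that physical floor, using the standard approximation that light in fiber travels at about 2/3 of c. The distances are illustrative great-circle figures:

```python
# Physical latency floor: light in fiber travels at roughly 2/3 of c,
# so distance alone sets a round-trip-time floor no engineering can beat.
C_KM_PER_MS = 299_792.458 / 1000   # speed of light, km per millisecond
FIBER_FRACTION = 2 / 3             # typical signal speed in fiber vs c

def rtt_floor_ms(distance_km: float) -> float:
    """Minimum round-trip time in ms over fiber of the given length."""
    return 2 * distance_km / (C_KM_PER_MS * FIBER_FRACTION)

for label, km in [("LAN (100 m)", 0.1),
                  ("NYC to LA (~4,000 km)", 4_000),
                  ("NYC to Singapore (~15,300 km)", 15_300)]:
    print(f"{label}: {rtt_floor_ms(km):.3f} ms minimum RTT")
# NYC-Singapore comes out around 153 ms: past SIP's 150 ms budget
# before a single router, server, or radio hop is added.
```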

A local Gnome? A local KDE? My main purpose for running remotely is to NOT have to install and maintain all that baggage in multiple locations.

I'm starting to think you are constructing your objections just to be difficult. If you are an end user, why do you care what sits on the hard drive? You install a mainstream distribution, you install the standard toolkits, done; there is nothing to maintain beyond running Linux. You mentioned cut and paste in earlier posts. The X server doesn't support any interaction beyond plain text, like clicking on applications and starting the appropriate support libraries, without those toolkits being installed locally. So you are doing this now. You are contradicting yourself on your use case: you can't have this be a feature you use in the last post and then, in this one, want to run without local toolkits. You don't have that now.

You are eventually going to be able to construct a use case where X11 is better. I will freely acknowledge there exist possible use cases where X11 will be a better fit than Wayland, for say 1% of users. But by the same token, for every use case that exists in which Wayland introduces a slight problem, I can easily construct 100 use cases where X11 is terrible, so those arguments don't do much. When you choose X11 you choose all those problems for other people. You need to argue that X11 is better than Wayland, not that Wayland is imperfect. My opinion is that X11 mostly sucks at everything. It has been a disaster for Unix, forcing Unix to emulate hardware configurations that no one has used for two decades. In today's world X11 doesn't do anything well, and does most things far worse than any of the other mainstream competing graphical display systems.

And I'm including remoting in that. X11 doesn't even have a security layer or traffic shaping, and SSH, the dominant security layer, breaks third-party traffic shaping. Start throwing out examples of non-fringe use cases where traffic shaping is needed, like international links or congestion, and X11-plus-security collapses.

So: if you want to be remoting on a LAN, not a WAN; have not quite a dumb terminal but a Linux box that for some reason has enough of the toolkits installed to support the X-server side of applications but not the X-client versions of those same applications; and want to run lots of applications from different sources, then X11's remoting will work better. With a use case that important, I'm shocked the Wayland project hasn't been cancelled already. The reality is that you will change one of those parameters, the easiest being to install the full toolkits (which is probably what you are actually doing now), and everything will be better than it is today.

As for mobile apps with keyboard and mouse: other than mobile development, why would you do that? And why would anyone want to remote it? Again, that just seems like needless nitpicking of use cases. If you want to use a mobile application, download the virtual image and run it locally. The hardware is being emulated anyway; what difference does it make which CPU the emulated mobile device runs on?

To do both of what? Display both kinds of applications remotely?

Have a display system that both shares and doesn't share buffers. And, to remote properly, have the same remoting strategy cover both. One is designed around strategies the other doesn't support, and vice versa.

If a different method of getting the display data from one place to another works better then why would I be against changing it?

I don't know; you tell me. Wayland is a different but better method of getting display data from one place to another, and yet you object to changing it. So why?

just give it a default implementation that may not be the most efficient but at least works. If the application or toolkit programmers don't care about remote access then it falls into that by default

It works that way for applications but not toolkits. Toolkits have to care. Do you know any toolkit authors that don't care?

My point being that you never know how people might want to use something. That's what made Linux great.. you had the choice to make it do things whether the original authors thought it was useful or not. Now it's just becoming another Windows or Mac OS jail.

Really? How do I make sure that video and sound stay in sync remotely, to do something fringe like watch a video, even though the X11 authors didn't think that was useful? Tell me how Linux allowed people to do things it wasn't engineered to do.

Comment Re:How soon until x86 is dropped? (Score 1) 146

Of course if the dominant player cuts their margins they can preserve their position with their least profitable, least demanding customers. That's always the case with disruption from below. Microsoft did precisely that with netbooks almost a decade ago, where netbooks:

a) drove down the price of OEM Windows
b) kept Microsoft from raising the minimum specs for years, which made the XP -> Vista upgrade less advantageous while often equally painful
c) by forcing Microsoft to focus down-market, created a bigger opening for Apple at the top of the market.

Absolutely, if Intel chose to go after the ARM business they could. But Intel just turned Apple down on a fabrication deal: Intel wants its margins more than it wants market share. Intel's least demanding, lowest-margin customers are ARM's most demanding, highest-margin customers. That's how ARM slowly moves upmarket; that's how disruption from below works.

As for SOCs for laptops, here we disagree. The x86 market today has standardized hardware: Intel, Microsoft, and Western Digital created a hardware/software standard that's lasted for a generation. But that standard doesn't need to hold, and obviously wouldn't be holding if x86 were being replaced. I can easily imagine a future generation of SOCs for systems with keyboards, just as SOCs are useful in today's tablets. That doesn't mean today's SOCs are good enough; today's SOCs are barely good enough for Chromebooks. But we know the bottom rungs of laptop users were able to replace some or most of their usage with the current generation of iOS/Android tablets; now picture SOCs 3x as functional...

Comment Re:How soon until x86 is dropped? (Score 1) 146

I'm thinking of ARM as a classic disruptive technology: https://upload.wikimedia.org/w...

i) ARM comes in first and takes customers who have requirements x86 couldn't possibly satisfy: done
ii) ARM takes those customers who could be on x86 but gain tremendously from ARM: done
iii) ARM takes the least profitable, least demanding customers from x86: happening with Chromebooks -- in progress
iv) ARM takes over customers core to x86 (laptops): not happening yet
v) ARM takes over more demanding users (x86 desktop, server...): not close
-- this results in x86 becoming a niche product for the most demanding users
vi) ARM takes over the most demanding users, driving x86 extinct: not close

I'm saying I can see step (iii) becoming step (iv). Of course ARM this year is not ready for step (v), but that's different from what the situation might look like 10 or 15 years out. If neither Windows nor Linux were tuned for x86 as the primary platform, x86's dominance in server would be in more danger. If ARM vendors were moving $100B+/yr in CPUs (double Intel's entire revenue), the server would be in more danger...

Comment Re:How soon until x86 is dropped? (Score 1) 146

I haven't been following it. The D-1540 seems like a nice offering. Smartphones are now 1/2 of the entire consumer electronics industry. I wouldn't underestimate the money going into ARM.

As far as ARM in servers goes, where I think ARM is likely to expand first is laptops. The HP Chromebook 11, for example, already uses an ARM processor. Then it moves upmarket, taking over some mainstream laptops. By the end of the decade I could easily see, for Apple's laptop lineup:
ARM for the MacBook (OS X or a variant of iOS)
Intel for the MacBook Pro (OS X)

Apple takes the bulk of all profits in that market.

Comment Re:Whats left unsaid... (Score 1) 116

First off, I want to point to one paragraph: "Nineteen states have laws on the books that limit such networks. They range from strict prohibitions on any or most municipal broadband service (Texas and Nevada), to requirements that a municipality hold public hearings or a referendum before offering service, as in Alabama, Colorado, Minnesota and Virginia. At least 89 communities around the country have publicly owned fiber-optic networks."

As for this case of preemption in Tennessee: it is kind of nuts, but "overturn" or "preempt" is too simple. The FCC here (https://www.fcc.gov/document/fcc-preempts-laws-restricting-community-broadband-nctn) argued that Tennessee outright violated federal law, and told a municipality to go ahead and violate state law. What's important is that the FCC argued the restrictions limited competition, and regulating competition belongs to the FCC, not the states. The FCC, however, specifically stated that states can simply ban municipalities from offering broadband services. What it wasn't allowing was a state regulating the market in a way that differs from the FCC, because that does fall under FCC jurisdiction.

So the FCC is allowing the states an out here if they really hate municipal broadband. North Carolina is more interesting, because there the FCC specifically listed some provisions of the law that were barriers to investment and others that were not. Because of the out, allowing the states to ban outright, you avoid many of the problems you would have had if the FCC had simply preempted. The way you were phrasing it, Tennessee, to resist, might very well have the municipal officials arrested, though of course that would move the federal case from lazily working its way through the system to urgent. Tennessee also might simply refuse to regulate the new company at all, denying it any state protections, or deny it any ability to borrow... That is, Tennessee has the ability to ban de facto even if it can't ban de jure, and thus the FCC was wise to recognize that.

I suspect the Sixth Circuit (a US federal court), where this case is heading, is going to create a framework for how to handle this. Probably what it is going to say is that the FCC is entitled to challenge state laws in federal court, but not entitled to just tell municipalities to ignore laws the FCC disagrees with.

This is subtle and not the norm.

Comment Re:Whats left unsaid... (Score 1) 116

A 5 year ROI is basically unheard of in an infrastructure build - 20-25 is about where it's at, because it's expected to last 50 (or more)

No, it isn't. 25 years ago the home internet infrastructure didn't even exist. A few years later it would have been extra capacity at LECs, since people were using the copper phone lines more hours per day. A few years after that it would have been DSL and coax connections capable of 5 Mbps or less. All of which are totally worthless now. Payoff in 60 months was on the high end; the company needs to make some profit on its infrastructure spend. If people start demanding faster relative speeds, that means faster upgrades, and the spend goes from 60 months down to 36 or so. I'm not picturing 300-month payoffs for decades, if not a century.
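A minimal sketch of why the payoff horizon drives the monthly price. The $1,500-per-home build cost is my own illustrative assumption, not a figure from the thread:

```python
# Spread an assumed per-home build cost over the payoff period to see
# how the horizon sets the monthly price floor.
build_cost_per_home = 1_500.0   # assumed fiber capex per home, USD
for payoff_months in (36, 60, 240, 300):
    monthly = build_cost_per_home / payoff_months
    print(f"{payoff_months:3d}-month payoff -> ${monthly:6.2f}/mo "
          f"just to recover capex, before profit or operating costs")
# -> $41.67/mo at 36 months vs $5.00/mo at 300 months: the whole
#    disagreement over "fair" pricing hides in that horizon.
```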

we don't need to care about the 500 miles in between towns - the middle mile is already done (for the most part)

100 towns * 5,000 residential homes each * 2 Gb/s * 0.2 average usage = an extra 200,000 Gb/s of traffic, i.e. 200 Tb/s. That's a big deal.

Or, since we are talking nationwide: America has 130M residences.
130M * 2 Gb/s * 0.2 usage = 52,000,000 Gb/s, i.e. 52 Pb/s of traffic. Our middle mile isn't remotely close to handling that.

We don't even have the technology to support a middle mile that large, even if we were willing to spend a fortune. Now, of course, your point about laying fiber capable of, say, 10 Gb/s while only delivering 100 Mb/s today is true. Any new infrastructure being put in in 2015 should be able to go faster than 1 Gb/s. We agree there.
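For what it's worth, a quick check of that arithmetic:

```python
# Sanity check on the middle-mile traffic figures above.
towns, homes_per_town = 100, 5_000
rate_gbps, avg_usage = 2, 0.2

regional = towns * homes_per_town * rate_gbps * avg_usage
print(f"100 towns: {regional:,.0f} Gb/s = {regional / 1e3:.0f} Tb/s")

us_residences = 130e6
national = us_residences * rate_gbps * avg_usage
print(f"Nationwide: {national:,.0f} Gb/s = {national / 1e6:.0f} Pb/s")
# -> 200,000 Gb/s (200 Tb/s) for the hundred towns and 52,000,000 Gb/s
#    (52 Pb/s) nationwide -- far beyond any existing middle mile.
```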

I'm having trouble with your pronouns: who is "us"? You seem to be shifting from a NZ perspective to a USA perspective. Anyway... that's the economics. If American towns think they can get 50 years out of a fiber buy, then they can certainly pay for it. But they don't, so they won't. And that's the point of my posts: people think internet is cheap to provide, while the reality is it is very expensive to provide, and they are paying a fair estimate of what it costs given a reasonable payoff matrix. If you think the matrix is grossly mispriced, then invest in telcos, because they are sitting on a goldmine, or own your own fiber / become an ISP. Which it appears you have.

Comment Re:Whats left unsaid... (Score 1) 116

What I've read from i.e. the amicus briefs to the FCC the law prohibited the electric company from servicing someone that didn't get their electricity from same company (or wasn't "in the area serviced"), not that they did any of the things you mention.

I think what you are talking about is one case: the Electric Power Board (EPB) of Chattanooga offering internet and video service to residents. Absolutely terrific internet service. Comcast claimed they were using ratepayer funds. Ratepayer funds fall under the state governments, and using them for a purpose not allowed by law approaches embezzlement. When you talk about southern states there is also federal involvement, since their utilities often came out of New Deal legislation.

And in either case the FCC didn't like that law and struck it down

The FCC can't strike down a state law. They can argue in court against it or work towards its repeal. They aren't that powerful.

What would be a fair characterisation then?

The problem with American broadband? There really isn't one. America, given its population densities, is a broadband success: a huge percentage of the population is getting ever-increasing speeds at a good price point. It's not perfect, but I don't think of it as a failure.

The problem with American municipalities offering broadband is that they want to play fast and loose with funding. If there were an up-or-down vote on whether internet should be taxpayer-supported, the vote would be no. But cheap internet is very popular, so if the municipality makes the funding mechanism opaque, the taxpayer subsidy becomes much more popular. It's basically the problem we always have with government: 70% of Americans think the government spends too much and needs to cut spending, yet the moment you name any specific government program we fund, 70% of Americans think that's a good use of taxpayer money and support the spending. Internet is just one more example of the inconsistent beliefs about government spending among the middle 40% of American voters. Republicans are mostly opposed to those sorts of financing games to expand government, while Democrats are mostly in favor.

The law allows municipalities to pay for internet; it easily allows for everything you suggest. What it mostly doesn't allow in those states is making the funding opaque, and without making it opaque it lacks enough public support to become policy. The FCC under a Democratic administration is, of course, going to support making it opaque, since it strongly supports better broadband as a public good.

Comment Re:I was thinking of "high end" in terms of (Score 1) 146

This was exclusively for workstations, but in terms of multiprocessing there definitely were multi-processor 486s sold; I had a buddy with a 4x486. SCO was the typical OS for these boxes. OS/2 and Linux were both working on multiprocessor support and would achieve it.

Also, with SCO the x86/i860 combo was popular (for an exotic workstation). The 486, while having good floating-point math, sucked at vector math; the i860, while good at vector math, was bad at multitasking. There were both motherboards and compilers to take advantage of this combo, which was a winner: it allowed you to build a workstation for under $10k that was a poor man's version of a MIPS-style workstation.
