
Comment Re:Can my car have a sense of humour too? (Score 1) 109

Ok... I wrote a big huge TLDR response to this, but then I opened a new tab, re-read the comments that began our discussion, and deleted it. Now I will write another :-)

Life can be described in 'mechanistic' terms. A definition that I have heard along those lines is that something must have metabolism, reproduction, self-repair and evolution to be life. Usually this results in discussions about what happens if you find something that only fulfills some of the four. Also commonly mentioned are viruses, which in a sense do some of these things but require cells to infect in order to do so. This seems to me to be the kind of 'life' definition you are using when you talk about things like death from sodium azide.

I'm not sure how it can be argued that life, by that kind of definition, has anything to do with emotions. Do single-celled organisms have emotion? Which of these attributes combine, and how, to form love, anger, or fear?

Those things are states in the brain.. which happens to be composed of cells which are alive by the mechanistic definition. To some degree we can understand and demonstrate those emotions as being physical states in the brain. Scientists have brought the emotions out by poking, prodding and stimulating various regions of the brains of living, conscious people.

On the other hand, there is incredible complexity in all that huge mass of cells and how they communicate. We can't map out all the interactions which occur as a brain experiences love or anger or fear and simulate them on a computer.

Further, and I think more importantly.. I don't think we really understand how we perceive those brain states. When the cells of my brain activate to form an emotion I don't experience it as an individual cell taking a bunch of inputs from other cells and deciding based on them when it is time to activate a bunch of outputs sent to other cells. I experience it as a whole individual.. as a person.

If the emergent behavior of the sum of reactions between these mechanistically living things can form me.. why can't the sum of reactions between some other form of logic gates combine to form what is essentially a person? I can't even define what exactly a person is, so surely I can't rule that out!

It seems unlikely to me that things like metabolism and self-repair are what set me apart from a non-sentient robot.

Comment Re:Can my car have a sense of humour too? (Score 1) 109

"I would like it very much if you could provide some information that suggests that all you need is complexity to make something with feelings and emotions."

I don't know why you would expect evidence of that from me. It isn't quite what I said! Maybe there is some aspect to our existence as people that goes beyond the materials and energy interacting within us. I don't know. Good luck proving it either way. But.. even if something like that does exist.. is it generated by the formation of our bodies? Is it somehow attracted from elsewhere, binding to us as our bodies are formed? Even if there is something more to us.. without thoroughly understanding it, who's to say the same thing wouldn't (or would) happen to a sufficiently complex AI?

But... emotions, memories, sensations... they have all been produced experimentally in real human beings by electrically stimulating various areas of the brain. So.. tell me that much of us doesn't exist at least on some level as mere signals passed around by cells.

"But the key is that they are alive"

Have you ever attempted to put a real definition on the term 'alive'? Good luck with that. I feel I should write more on this but what do you write to describe the indescribable?

"There is no evidence that any machine of any type even has a "thought", let alone an emotion."

Agreed. But... human beings can be considered to be highly complex chemical machines. And.. your statement still stands. Something that was only a 'sufficiently complex' machine could theoretically appear to be a person. (oh yah.. try to define person too). Maybe that's all a sufficiently advanced AI would be. But.. maybe that's all you, and every other person on the planet, are: chemical machines sufficiently advanced that they seem to be people. I only 'know' that I am a real person. And.. nope... I cannot prove it! I can give you no evidence that I actually have thoughts and emotions rather than just electrical and chemical signals complicated enough to simulate them.

So... I take that one on faith. I'm not a big fan of faith over science, but how do you devise a test for something like personhood that can't even be defined? Since I believe that I am a person, and other humans seem to be of similar makeup and have similar traits of sensation and emotion, I believe that they (including you) are people too!

If the day ever comes that someone creates a sufficiently advanced AI... it's going to be quite a conundrum figuring out if we should consider that to be a person or a thing and what basic rights therefore do or do not apply. Good luck to anyone on either side of that argument! If anyone ever thinks they can win it I think they are deluded.

I'll take the 'let's just assume it's a person' side because it's safer. It would be less harmful to anthropomorphize a machine than to objectify a person.

Comment Re:Can my car have a sense of humour too? (Score 0) 109

"I wonder what it is that makes people think that robots can be given emotions, when we have no idea how brains generate emotions?"

Ah, how refreshing. An intelligent thought on this subject.

"what makes people think that it doesn't require living cells to have feelings, sensations and emotions?"

Damn... I jumped the gun.

Cells are really vessels containing incredibly complicated chemical reactions. They communicate through various electrical and chemical means. I don't know if there is any non-material, spiritual aspect to it or not (and I'm not interested in a debate on it), but suppose the sum of all these reactions and chemical-plus-electrical signals can add up either to make us or to attract that undefined spiritual thing, whatever it is.

Why couldn't the sum of a whole bunch of electrical signals through semiconductor switches or even something really crazy like a mechanical computer made of wheels and gears do the same?

Now.. whether something like this could actually be built is an entirely different question. The complexity would be unimaginable. But then.. I don't see anybody building humans out of raw chemicals either yet nobody is going to argue that a being made of cells can't have feelings, sensations and emotions.

Comment Re:What's the point? (Score 1) 216

"LXDE is in the process of becoming LXQt. It fully intends to support remote using Qt's system."

That's interesting. I'm glad people will still have a sort of lightweight choice. I'm not really an LXDE user myself though. I just use the task bar from LXDE. I suppose I was being overly garrulous in mentioning that part of my setup.

" Stump isn't really a window manager as much as a programming exercise demonstrating how to do windows management in LISP."

That's an interesting statement. The author wrote Ratpoison first. His big motivation there was that he wanted a GUI that didn't require a mouse. He was a big Emacs guy and was used to having Lisp as part of his environment to work with. He found he was building Lisp-like abilities into Ratpoison, so he decided it made more sense to do a rewrite, this time using Lisp from the start. At least that's what the website says. Whenever I look for Lisp configuration help online, the questions and answers I read sound like other users are actually using Stumpwm as their main window manager. I've never seen anything to indicate that it is only a programming exercise!

Personally, I use it because it is tiling. I used to really like Konqueror in KDE. I used its split-screen feature a lot. You could run a terminal in the bottom. I used to run emacs in that, edit PHP code and have the page I was working on displayed in the window above. It also worked as a file manager, with scp support, so I would have a second tab for uploading image files and stuff like that.

When KDE started pushing their new browser over Konqueror, at first I thought my preferred way of working was going away. I read people's arguments about it, stating that Konqueror was no good because it tried to be too many things.. counter to the Unix philosophy. I suppose they were right. I realized that what I really wanted was tiling. That's basically what I was using Konqueror for.. a tiling window manager. That's what Stump is to me now, not a Lisp exercise. As a bonus.. where before I was running my text editor in Konqueror's terminal, now I get to run the GUI version of it!

Maybe what I need to do is learn to write my own compositor. I don't want to do that. I have sooo many projects I am already working on and no idea where to even start. Contrary to how it might seem, I am not married to the idea of X forever. I just don't want to lose any features, only gain them.

"Yes. There are many solutions. Apple's built in Messages application does this. http://mightytext.net/ works for Android."

Thanks, I will check that out.

"$.03/hr in electricity over a WAN (wired LAN is close to 0)."

For remote X I mostly use wired LAN. I value the fact that I can do remote X over the WAN, but I do mostly use VNC for that.

I don't like Wifi much because we have too many neighbors using it and causing interference. I prefer non-portable wired terminals in heavy use spots. Of course we use wifi with portable devices like tablets but I'm ok with VNC on those anyway.

I mostly use remote X as a way to access the computer in my den from my workshop. Yes, I know, I could install VNC. I really like the just-turn-it-on-and-go quality, the illusion that the PCs in the workshop and in the den are one and the same. I have additional small, old cash register PCs that I intend to eventually set up the same way as a media PC in the living room and maybe in the kitchen for looking up recipes and stuff like that. Yes, I know, this is kind of a 1950s-70s retro-future image of computing. It's better than maintaining a bunch of separate desktops. Today's tablets and such are awesome, but there is still something to be said for full-size monitors, keyboards and mice. I think there would be a lot to gain if thin clients were to become more mainstream.

It's also nice to be able to point out that using open source software allows me to create a custom setup like this. Even if you don't like my setup.. you can implement YOUR favorite setup. I can customize it the way I want while someone else can customize it the way they want to. To me that's what has always separated Linux from OSX and Windows. From my perspective Linux seems to be losing that quality and there isn't really a viable alternative to get that back. Implement remote access on Syllable?

Ideally I would have a system where logging in remotely or locally both give me a login screen. I log in and get a list of any sessions I might already have open. I can switch to one of them, close any I don't want running, or start a new one. Upon switching to a session, if I last connected from elsewhere the screen would automatically resize to fit my current terminal. Upon creating a new session I could choose any installed window manager or desktop environment. It wouldn't matter, because remoting would be implemented on a lower layer that is common to every GUI program. Oh.. and one other thing.. each session would have its own audio stream and connecting to it would get me sound. For dedicated terminals it would be nice if USB devices were automatically remoted too. For non-dedicated terminals that could be a feature to be turned on/off, kind of like sharing files and printers in Windows Remote Desktop.

It seems like, as things currently stand, most of the pieces to do this exist. X does remote display. PulseAudio does network audio (though I am struggling to make that work). There is USB over TCP/IP support in Linux, but you have to go to the command line and tell a client to share a specific USB device. Then you have to go to the server's command line and tell it to connect to that device. In a dedicated terminal I would like to be able to configure it such that every USB device (except the keyboard and mouse) automatically gets connected to the remote machine. Maybe this can be done with what exists now and udev rules? I haven't gotten that far.
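
(For the record, the USB-over-TCP/IP piece I mean is the kernel's usbip tooling, and the rough shape of what I'm imagining is below. The hostname 'terminal1', the bus id and the script path are made up for illustration, and the udev rule is an untested sketch; you'd still need extra match keys to keep the terminal's own keyboard and mouse local.)

    # on the dumb terminal: export a plugged-in device
    modprobe usbip-host
    usbipd -D                          # start the usbip daemon
    usbip list -l                      # find the bus id, e.g. 1-1.4
    usbip bind -b 1-1.4

    # on the den PC: attach it as if it were plugged in locally
    modprobe vhci-hcd
    usbip list -r terminal1
    usbip attach -r terminal1 -b 1-1.4

    # the udev idea: auto-export whatever gets plugged into the terminal
    # /etc/udev/rules.d/90-usbip-export.rules
    ACTION=="add", SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", RUN+="/usr/local/bin/usbip-export.sh %k"

    # and the PulseAudio half I'm still fighting with is roughly: on the terminal
    pactl load-module module-native-protocol-tcp auth-ip-acl=192.168.1.0/24
    # then on the den PC, point clients at it
    export PULSE_SERVER=terminal1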

Anyway, implementing all of this.. if remote support is fragmented across toolkits, possibly non-existent on some lesser used toolkits.. that sounds even harder than it has ever been!

Comment Re:What's the point? (Score 1) 216

>> Any toolkit can have Wayland remoting. That's going to be a standard part of writing new toolkits in say 10 years.

Ah.. but I thought remote access was a niche application, no longer relevant enough for developers to care about? That's what I am hearing when Wayland supporters say that 'nobody' is using it. It's too niche for Wayland and yet widespread enough that each toolkit will bother to implement it on its own? I really don't get that. And why should they? Each one would be re-inventing the same wheel!

"That's session sharing. What's better is not remoting the conversation to the computer but that the computer syncs with the phone and has a copy of the conversations at all times. Many messaging systems already provide that. I do that today. No reason to remote just push the data."

By texting I mean SMS. You have something that lets you do that from your computer? It's not like I am going to force all my friends and family to switch to Hangouts or Skype or something like that just so I can type easier.

"Yes when your main OS changes toolkits (something like KDE 5 to 6) you will need to update your dumb images to match but that's not hard in your scenario."

I don't even use KDE or Gnome. I use Stumpwm along with the pager from Lxde. Am I going to have to install Gnome just so that I can remotely access GTK apps, and KDE so I can remotely access QT ones? Is there any room for a variety of window/desktop managers, especially lightweight ones, in a Wayland world? Even if I were still using KDE (I used that for years) I don't think I would want to see Linux change in a way that FORCES users onto one of the big two. On top of that, I have it all compiled from scratch (Gentoo) so that it runs nicely on my older equipment. That's another reason I don't want Gnome+KDE on the dumb terminals. Neither KDE nor Gnome is going to fit on the CF cards I was describing to you anyway. I'm not just complaining so that I can keep MY setup. Why keep upping the hardware requirements when we have a working solution already? Aren't the landfills full enough?

I guess there is netbooting over TFTP, but that is frankly a pain in the ass. To use that you have to set up your own custom DHCP server rather than just use either a commercially available router or a simple router distro.
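
(Although, from what I've read and haven't yet tried myself, you may not strictly have to replace the router's DHCP: dnsmasq has a proxy-DHCP mode that only adds the PXE/TFTP bits alongside the existing server. Something like the following sketch, with the subnet, boot file and paths obviously just illustrative:)

    # /etc/dnsmasq.conf on any always-on box (untested sketch)
    dhcp-range=192.168.1.0,proxy       # proxy mode: leave the router's DHCP alone
    dhcp-boot=pxelinux.0
    enable-tftp
    tftp-root=/srv/tftp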

Comment Re:What's the point? (Score 1) 216

"Your hypothetical application author has to have chosen an obscure modern toolkit which doesn't remote."

Availability of choice is something that has made Linux great for a very long time. Will there be no future development in this area? Is it going to be QT/GTK forever?

"If you are an end user what do you care what sits on the hard drive?"

Because I'm the one who has to put it there. Hey, remember when Linux was a 'hobbyist OS'? We still aren't all technicians setting up labs for an employer somewhere. Some of us still actually use Linux at home!

You seem to care about security. That's part of why I want my clients thin. The fewer things installed on them the fewer things need to be locked down. Ideally I want a very small display server and just enough underlying OS to run it.

"So if you want to be remoting on a LAN not WAN, have a not quite dumb terminal but a Linux box"

Yes! Yes! Absolutely! Is that not what I have been describing? Was I somehow not clear about it? Actually.. I'd go for a real dumb terminal, except old PCs are essentially free and the only dumb terminals I know of are old CRT beasts with no audio support. My remote devices are tiny PCs that I think used to be cash registers. They run off of wall warts, have no fans and came without hard drives. I added CF cards rather than real hard drives to keep noise, heat and power consumption down while eliminating moving parts.

", forcing Unix to emulate hardware configurations that no one has used for two decades."

See! I told you Wayland supporters were rude. You are calling me 'Nobody'! Go look at other forums discussing this topic. Apparently this 'nobody' is not alone!

You aren't even addressing the fact that I already said I don't really care about X11. I care about remote functionality. Providing network support at a layer between the toolkits and what Wayland provides would be fine by me, so long as it meant that this layer was what the toolkits or applications are coded to talk to, not Wayland directly. That way ALL applications could still be remoted. And.. if that layer used similar methods to what X11 used to provide remote access, or something entirely different, or even used multiple methods, chosen by the application to best suit the kind of content being displayed... that's all ok by me!

"I don't know you tell me. Wayland is a different but better method of getting display data from one place to the other and yet you object to changing it. So why?"

Because.. as it seems to be proposed, Wayland is NOT a way to get ALL display data from one place to another. That is outside of Wayland's scope. With remote support moved to the toolkit or application level, we only get remote support where someone chose to support that feature. The old system meant EVERYTHING could be displayed remotely. I only want the baby saved, you can pitch the bathwater.

>>>>"To do both of what? Display both kinds of applications remotely?"
>>"Have a display system that both shares and doesn't share buffers. And as far as remoting to properly the same remoting strategy. One is designed for strategies the other doesn't support and vise versa."

Ok, I know the word buffer to mean an area of memory. Basically a big array containing pixel data. Why do I care if remote support is implemented in one big array or a dozen? Hey.. either way.. knock yourself out. Do whichever works better.

Do you mean something different when you say this? Are you saying that we can't have both full desktop remote access and also be able to remote single applications? I do this in Windows with RDP every day at work! On one monitor I display a terminal server desktop to run Visual Studio. I display the full desktop because Visual Studio spins off lots of child windows and that doesn't work as well if I just display Visual Studio remotely. At the same time I am displaying Navicat, running on that same terminal server but by itself, outside of the desktop. It shows up in my local taskbar... it's just like remote-displaying a single X application.

"As for mobile apps with keyboard and mouse. Other than mobile development, why would you do that? And why would anyone want to remote that?"

Do you know what a Lapdock is? I use it because it solves 90% of my mobile needs without the need for buying a laptop. It's a little shell that looks like a notebook. It has no CPU. My phone plugs into it. I have VNC and RDP apps on my phone. I have git & svn. I have the Arduino environment. I have game console emulators. Of course there is the web browser. The list goes on. Pretty much everything I could do on a computer is there, and if it isn't... I can use it as a VNC or RDP terminal. I can even plug in USB devices. I've used it with a USB-attached software defined radio, storage devices and Arduinos myself. All I am required to maintain is something I would maintain anyway... my phone!

Is this a common setup... well.. they are discontinued. But.. any Android tablet combined with a Bluetooth keyboard and mouse is pretty much the same thing. Plenty of people do that. Another similar environment would be a Microsoft Surface. Even iPads are commonly placed in cases that have built-in Bluetooth keyboards, although they don't support mice without a jailbreak. So.. I'm hardly alone in using a mobile environment in a more desktop way!

Why would I want to remote my phone? I think I would be more likely to want to remote-display something from my desktop on my phone. But... if I don't have my Lapdock on me, remoting the phone to a computer would be a handy way to handle long-winded text conversations.

Ultimately I would like to be down to just two non-dumb devices. A main computer at home with a Linux install customized to exactly how I like it and kept fairly constantly up to date. Give it lots of processor, RAM and GPU power. Then give me dumb terminals.. as dumb as possible. Make the hardware low power so that it uses less electricity and wears out slower. Make the software dumber so that there is as little as possible to require security updates.

The second non-dumb device would be a portable one. Its only purpose in not being dumb is so that it might be useful even without a signal. It should be pocket sized... a cellphone. A roll-out screen would be a nice addition some day when those are available. Meanwhile I'm happy to just have a dock like the Lapdock I have now, which I can bring with me when I want something more computer-like. There is no substitute for a real keyboard when typing.

I see something like moving the remote support out to the toolkits as making this dream far less likely. That is what I am against, not replacing the underlying protocols to be more video friendly.

Comment Re:What's the point? (Score 1) 216

Is remote display unimportant because there are more people who only run locally? Remote display is somewhat of a fringe thing. But.. so is all Desktop Linux. We are here BECAUSE we are on the fringe. If we weren't we would be on Windows or OSX! Desktop Linux is painful. People are always sending documents, videos, etc.. that require special effort to install proprietary codecs or run semi-functional applications in Wine. If you are sticking to Linux on the desktop then there is probably already some fringe feature you love or rely on or you wouldn't be here. What happens if one day it goes away? If any fringe users of Linux are going to be written off as not significant then you might as well pack it up because we are all fringe users in one way or another.

A local Gnome? A local KDE? My main purpose for running remotely is to NOT have to install and maintain all that baggage in multiple locations. At that point one might as well just set up a NAS to hold their home directories and install the full set of software on every computer. But.. then.. might as well just run Windows. That's how just about every Windows shop office is set up. At that point how does Linux provide something unique?

"It isn't possible to do both."

To do both of what? Display both kinds of applications remotely? Or use two different underlying implementations? It can't all go through one buffer AND go through separate buffers. Ok. Maybe I should say this.. I don't really care about X11 the protocol vs Wayland vs the display protocol of that Mac compatible OS the aliens used in Independence Day. I'm not arguing for or against keeping the underlying architecture behind remote or local display unchanged. X11 serves me well although I am quite aware it is not perfect. If a different method of getting the display data from one place to another works better then why would I be against changing it?

Wayland supporters often say that X11 network transparency or remote display or whatever is already broken. OK. My limited understanding is that as originally intended an application told X to draw a shape or add text, etc.. what got passed over TCP/IP was the command to do so, not a raster graphic of the shape. What I gather from the various pro Wayland posts is that under the hood various hacks have mostly replaced this and now everything is getting drawn at some higher level and passed as bitmap. Ok. But when I go to use an application remotely IT STILL DISPLAYS. In fact, now I have VirtualGL installed so I suppose that has things rendering entirely outside of X and just streaming through as bitmap. But.. it is working! Actually, I love the idea of doing 3d applications this way. Let me build up ONE client that has ONE big honking video card and play video games on it but displayed elsewhere without making me invest in a house full of GPUs.
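
(For anyone curious, the VirtualGL workflow boils down to roughly this; 'den-pc' is a made-up hostname and the exact options can vary by version:)

    # from the terminal: set up the connection and image transport
    vglconnect me@den-pc
    # then, in that shell on den-pc, run the 3D app against its local GPU
    vglrun glxgears
    # rendering happens on den-pc's video card; only finished frames travel back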

But that isn't what I came to talk about. It's the interface that I am interested in. I just want to be able to tell the current session that my display is over here. And it all displays. If I feel like doing this with a single application at a time.. I want that to work. If I want to remote-display my desktop environment.. then that should work too. If I open a desktop environment remotely.. then when I open something within that environment it should display there. X forced this. It didn't matter whether you were actually linking against GTK or QT or ____, which in turn linked against the X libs, or even if you coded directly against the X libs. Implementing remote display at a lower level meant everything had it.

So.. different methods of remote display are needed for different applications. So what? I don't see how that still can't be implemented at a lower (and thus common) level. Just give it a default implementation that may not be the most efficient but at least works. If the application or toolkit programmers don't care about remote access then it falls into that by default. Big deal.. at least for LAN users it probably doesn't matter anyway. If, say.. the programmers of video player X or flashy 3d accelerated game Y do decide to care.. they tell the toolkit which method they want to use, probably by choosing a different container class or something like that. The toolkit then passes this information down to the common layer, which then uses the appropriate method for display. The big thing I am trying so desperately to stress is that when Johnny Z downloads X and Y and runs them he doesn't see any of this! It just works!

So what's so bad if some of these applications never really were meant to have that remote display ability?

Latency?

"I'd assume that highly specialized toolkits which don't remote, don't remote because they can't tolerate latency."

Now you are getting back to someone else making the decision for every user as to what is usable remotely and what is not. For example... you mention touch applications specifically... I use touch applications with a mouse and keyboard all the time! I have a Motorola Lapdock. It doesn't even have a touch screen. But.. it has a trackpad and a keyboard. Basically I'm running Android in a laptop-like environment when I do that. I use apps all the time where the authors never intended anything but touch, and yet to me it is incredibly useful! There are a few (and I mean only a few) apps that don't work very well because they rely on multi-touch gestures and the author didn't make alternate ways to access those features. That is because those apps' authors weren't thinking about less common ways that others might use their software. They just code for the 90%. For the most part I have found better alternatives. Those authors get my money! Does that mean the apps I use don't have lots of touch gestures or don't work well using just the phone? Not at all! Most are the best of both worlds!

Is the latency too much for touch but acceptable (to me) with a mouse? Ok.. then I will just use the mouse! Problem solved! I don't want separate applications for these situations. That just means more incompatible or partially compatible file formats.

How bad is the latency going to be anyway? Today's cable modems are much faster than the LANs where I first learned to 'love X'. And the LANs now... lightning fast! I've seen choppy though not entirely unwatchable video over remote X displayed on my computer at work, tunneled through SSH. And that's through SSH; I used to have OpenVPN and it ran a lot better through that. (yah, I know, it wasn't really using the X protocol as intended, still don't care)
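
(One small thing that has helped me over slower links, for what it's worth: OpenSSH's -C flag compresses the forwarded X traffic. Hostname made up:)

    ssh -X work-box xterm        # plain X11 forwarding
    ssh -XC work-box xterm       # same, with compression; often helps over a WAN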

For comparison, I used to do PHP development back in the early '00s using a 1200 baud connection via an old Nextel cellphone. It was just fine to run a text editor at the server via ssh, and I could browse the results with images turned off. I didn't know anyone tethering a cellphone back then, but it let me get out of my apartment and work from the cafe without paying for their then-expensive wifi.

My point being that you never know how people might want to use something. That's what made Linux great.. you had the choice to make it do things whether the original authors thought it was useful or not. Now it's just becoming another Windows or Mac OS jail.

You talk a lot about how different applications or toolkits might handle latency differently in order to provide a great user experience. Who doesn't want a great user experience? But.. if the tools to do that exist at a lower level, where everything goes through (like a display server) then 1) a lot of duplicate work can be eliminated and 2) more applications are likely to benefit. You are relying on every author to care about this issue. I don't think that even a majority will.

Comment Re:What's the point? (Score 1) 216

"Well yes it does result in a fractured feature. They Wayland people see the fracturing as a plus."

So does this mean that I wouldn't be able to, say, remotely display a desktop environment which uses QT, and within that click a shortcut to a GTK app and expect it to open and be managed by that QT desktop environment?

That is the functionality we have now. Once you have your desktop environment displaying remotely everything you do looks and feels local. How can you have that when each app may have a different remote implementation?

>> Talking about Wayland not having remoting implying that Wayland applications won't remote isn't fair not accurate.

Yes it is. In my previous statement I chose QT and GTK as examples because they are so common. A user could have any number of applications using any number of GUI toolkits. Assuming they will all bother to implement their own remote access would be way over-optimistic.

>> a) Supporters of network transparency don't know what network transparency is

There are many levels to look at things. You can look at electrons traveling through silicon or the behaviors of transistors combining to form logic gates. Skipping a lot of levels, further up you get all these software buffers, etc.. that Wayland supporters like to talk about. Some levels up from that there is an end user who sets their DISPLAY variable or runs "xorg -query ", etc.. and gets remote access to their desktop. Wayland supporters keep talking about those buffers and other low level things and saying that what goes on at that level isn't really network transparency. Ok. But the user doesn't care! I know I don't!

" You have to make choices. Deciding to help you is deciding to harm others. People who want better games (and BTW I don't game) are making the same argument you are in reverse."

  If I can watch a high definition video feed in real time over the internet then I should be able to remotely display a desktop or a user should be able to remotely display a game. The two should not be mutually exclusive. Surely it is possible to fix this in a way that pleases the gamer without screwing it up for the remote desktop user.

You probably are thinking I just proved your point but I did not. I said we should be able to remotely display our DESKTOPS. Not just individual programs. I should be able to see my favorite desktop manager and click shortcuts within it without worrying about which toolkit each uses. It should just work.. just like it does now.

Before you say.. VNC... nope. That is not the same. I use both remote X and VNC. My remote X server is configured to automatically connect when I turn it on. (It just runs xorg -query ) The very first thing I see is a login screen where I can log in as any user of the client computer. If I happen to have changed out the monitor.. well.. auto detection will have already adjusted my resolution accordingly.
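
(In case anyone wants to replicate the terminal: the whole thing is just XDMCP. The terminal side is the one command I mentioned, and on the den PC you enable XDMCP in whatever display manager you run. The LightDM snippet below is only an example of the sort of thing I mean; the file and keys will differ by display manager and distro, and 'den-pc' is a placeholder hostname:)

    # on the terminal (started from its init scripts)
    X :0 -query den-pc

    # on the den PC, e.g. in /etc/lightdm/lightdm.conf
    [XDMCPServer]
    enabled=true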

I have three VNC sessions running in the background at all times. Two run under my login. One is just the perfect dimensions for my Lapdock. The other is the perfect dimensions for my iPad and happens to be usable though not great on my monitor at work. The third session runs as my daughter's user for her to log in. Most of the time all 3 are unused and yet all 3 are running because otherwise I would have to ssh in first and start them before I could use them.
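
(For completeness, those three sessions are just plain vncserver instances started at boot with fixed geometries. Assuming a wrapper like TigerVNC's vncserver, the idea is roughly this, with the usernames and sizes only illustrative:)

    # run at boot, e.g. from an init script or a @reboot cron entry
    su - me -c 'vncserver :1 -geometry 1366x768'        # Lapdock-sized session
    su - me -c 'vncserver :2 -geometry 1024x768'        # iPad-sized session
    su - daughter -c 'vncserver :3 -geometry 1280x1024' # my daughter's session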

Comment Re:What's the point? (Score 1) 216

" but there are many capable developers who deal with that kind of specialty "

Capable of developing a display server? Really? Why?

Until recently pretty much every *nix used X. Of course Windows and Mac also exist, clearly somebody develops those. There are a few outliers like Syllable & ReactOS. All in all I would think that the number of display servers or non-server systems serving that purpose could be counted on one's fingers.

If that's true... why would there be a bunch of available developers with the knowledge to do such things? What are they already working on, and why?

Comment Re:What's the point? (Score 1) 216

"They do agree remote usage is an important feature and it is something they intend to add."
"They do agree it is something on their todo list."

Are you privy to something that I am not? According to the FAQ it is out of scope. I've watched the videos... all I see are statements that it is someone else's problem and that X doesn't do remote anymore anyway.* Wayland developers and supporters have even suggested that networking support be moved to the GUI toolkits. Surely that would result in a fractured feature that works differently on some applications than others and probably not at all on some.

Also... on the To Do list? Wayland is already being pushed in production. This should not happen with basic features still on the "To Do" list. I'm not sure if there are any desktop distros defaulting to it, but there is Tizen. I for one have been looking forward to being able to remote-display applications from my desktop on my phone, or vice versa, ever since I used my old Sharp Zaurus! Do you think it's too impractical on the small screen? I often worked this way on the Z'. Back in the day I even occasionally did real work over SSH on a Nokia Series 60 phone with a 3x2" screen and a T9 keyboard.

"A respectful comment from the anti-Wayland side! Well done."

I do not believe that there has been much respect coming from the pro-Wayland side. The attitude from the developers seems to be.. you are a minority.. you don't matter. The stuff you care about doesn't work now anyway (even though you are currently using it without problem). Non-developer Wayland supporters are even worse, accusing people who are concerned about remote display of being neckbeards who are holding them back from getting 3d video acceleration. If the anti-Wayland side seems disrespectful, perhaps it should be viewed as a natural response by a community under attack.

* - As Xorg developers I'm sure they know what they are talking about regarding X not doing remote anymore... on a developer level. So it's not really using the X protocol as originally intended when I display something remotely or something like that. As a user I really don't care. I just know that my application is displaying remotely. In other words... it just works.

Comment Re:What's the point? (Score 1) 216

"Remember that applications, especially those that are often used remotely, are likely to be somewhat delayed in how long before they switch to Wayland only versions."

To me it's not the ability to remote those applications that makes X great. It's the ability to remote the ones that most people would never even think of remoting.

Today the cloud is all the rage. The market is demanding it, not just the 'geeks'. For an application that truly begs remoting, I think the question isn't whether the operating system supports remoting, it's "why doesn't the application have a built-in web interface?!" (to the lay user, X or Wayland is a part of the OS)

To me the fact that X allows me to remotely display applications that the author never intended to be displayed remotely is awesome. It's part of a broader environment of "I get to decide" what features are useful to me, not an application author, not the market or anyone else. This is generally the way Linux has worked until recently. For a system that makes these decisions for you, where the things a typical user would want just work, a Windows or Macintosh desktop would be just fine. That need is already filled!

That is not to imply that it is only important to me on a basis of ideology. I happen to use a remote X terminal myself. I use it as though it were just another face of my main PC, not as a way to run specific, specially remote-worthy applications. It's a way that I can have one and only one highly customized desktop to babysit and yet have it in multiple places.
