MIT shared memory support? That has been in the X server for a long time. Apps can use it to send an image to the server, and the server can then composite that image with others into the hardware video output buffer. The application can create the buffer on the server, render to it, and then issue a command telling the server that the buffer is finished rendering and can be used for compositing an output frame. The application would be given timing information, such as the frequency and the timebase, so it knows the deadline for submitting a finished buffer. Any buffer submitted after the deadline is held over for the rendering of the next frame. This can work with both direct and indirect rendering; with indirect rendering the actual programming of the hardware occurs in the X server, and you would use GLX to send OpenGL commands to the server.
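The deadline arithmetic an app would do with that timing information is simple. A minimal sketch, assuming the server hands the app a timebase and a refresh frequency (the function name and plain float timestamps are illustrative, not part of any real X extension):

```python
# Given the vsync timebase and frequency from the server, compute the
# next submission deadline after the current time. All values are
# seconds; the names here are invented for illustration.
def next_vsync_deadline(timebase, frequency_hz, now):
    """Return the timestamp of the first vertical sync after `now`."""
    period = 1.0 / frequency_hz
    elapsed = now - timebase
    # Number of whole refresh periods that have fully passed, plus one.
    periods = int(elapsed // period) + 1
    return timebase + periods * period
```

A buffer finished before `next_vsync_deadline(...)` makes this frame; anything later is composited into the following one.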
X itself is usable and was not the issue. The problems were due to the lack of drivers, and to problems in the device-dependent layer (such as missing auto-configuration of drivers, or auto-configuration that screwed up), rather than with X itself. Those problems would continue no matter what window system was used, because it was not a window system problem, it was a driver problem. Wayland, as far as I am aware, mainly solves some slight issues with visual artifacts by addressing vertical synchronization. An X extension could have addressed the same concerns by giving X applications a clue as to the frequency and timing of vertical synchronization, plus a double-buffering facility that apps could use to update their screen contents.
Systemd still supports the System V boot process features; you can still run init scripts from systemd if you wish.
It would be trivial to add a dynamic loader, or if necessary a simple compatibility layer for a stable ABI, to the kernel so that old drivers would keep working fine. The only reason they don't is that Linux developers are anti-social and basically like the idea of Linux being unusable to most average people, because it makes them feel elite to be able to use something that is so difficult to manage. Yes, Linux needs a dynamic loader or compatibility layer for drivers, but try telling that to kernel developers who are off in their own world where average people can be expected to learn to love editing configuration files. Most people are not interested in that stuff; they just want to use their computer to get work done and get off, not muck around with recompiling drivers and editing configuration files.
The general concept behind systemd makes sense; it's mainly some additional features on top of the current model, such as the ability to have processes started on certain system events. The fact is, if you want your bootup process to be controlled by bash scripts, all you need to do is configure systemd to start your bash script and you've got a more traditional init system. So systemd does not take away any functionality, it only adds some. Systemd supports the System V init process features, so you still have all the old-model functionality available to you.

It therefore does not make much sense that people complain about this when they can easily configure things however they want, including a BSD-style init, by having systemd hand off control to their own scripts, including to work the way things always have. People act like systemd has taken something away when it has not; I think many people just hear some soundbite about systemd introducing a new model and assume they can no longer use things the way they do currently, which is not the case. It seems like people who don't like systemd don't want others to have the additional functionality it provides, because it does not take anything away. It's open source software, and it's something you can control and configure to your heart's content. It's much ado about nothing.

Systemd, while being configurable, will also make things easier for many users. I think the ado about systemd is more about Linux people who think Linux should be hard to use except for a small elite, and who do not want the OS to be useful to less technically adept users. This is even though making it more usable for less adept users does not in any way harm or take away flexibility from experts, who can still configure everything if they want to.
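As a concrete illustration of handing control to a traditional script, a unit file along these lines is all it takes (the file path and script name here are hypothetical examples):

```ini
# /etc/systemd/system/legacy-boot.service  (path and script are
# illustrative; point ExecStart at whatever script you already use)
[Unit]
Description=Hand off boot to a traditional shell script
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/legacy-boot.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable legacy-boot.service` and your script runs at boot, exactly as under the old model.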
I thought they would be looking for industrial gases in the spectra of planets. It doesn't seem like streetlights and city lights would be significant compared to the amount of light emitted by their suns.
I for one welcome our new moth overlords.
That's incorrect. Look at the Obfuscated C Code Contest: you can write TOTALLY unmaintainable code in ANY reasonably powerful general-purpose language. Such features are necessary to write good software, though of course anything can be abused.
Still, you have not actually discounted the fact that IBM could, if it wanted to, operate manufacturing wherever it needs to be to compete. It might not be very profitable, but they could have profits in line with the Chinese, and keep this business out of Chinese hands. It seems as though instead, everything has to have a massive, fat profit margin for these companies, and making a modest amount isn't enough. If the Chinese can manufacture these goods, albeit at a relatively low profit, IBM could spin off a subsidiary to do it.
It could be cheaper to replace the cheap consumer RAID drives than to buy expensive "enterprise grade" stuff. Maybe a RAID array of cheap drives plus backups costs less than all the more expensive gear.
My solution is the one that would actually allow ISPs to gradually upgrade things over time rather than replace everything at once, by allowing interoperation. It's a lot easier if the changes are concentrated at the ISP end rather than affecting subscribers as well. It's true that over time, due to the turnover of older IPv4 routers, ISPs could gradually replace subscribers' routers with newer models. It would even be possible for ISPs to collect older routers, flash them with new firmware, and put them back out in the process of customer turnover, cancellations, and signups. The whole point is that the solution I describe gives ISPs a transition period.
Theoretically, any block of IPv4 addresses outside of the local subnet could be used. If an IPv4 address is already in use as a fake address, and the user then asks for a DNS name which happens to resolve to a real IPv4 address with the same number, the same NAT trick could be used, with a mapping created between another temporary local IPv4 address and the real Internet IPv4 address that was already being used locally as a fake. Though, I would only recommend that as a fallback if 127.x.x.x is used up. A small part of the RFC 1918 addresses could also be allocated to the pool of fake IPv4 addresses, such as maybe 172.20.0.0/16 and 172.21.0.0/16, giving a pool of 131,072 addresses, plenty for most use cases. I doubt most people will have that many TCP connections at once. Since 127 is not used for local networks, it is the best first choice. Again, 127 is so large that I doubt most users would ever exhaust it, especially if the fake IPv4 mappings are timed out after a period of maybe 1-7 days or so.
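A per-subscriber pool with expiry, as described, might look like this minimal sketch (the class and method names are invented for illustration; the 127.x pool and the one-day timeout are the example values from the post):

```python
import time
import ipaddress

class FakeV4Pool:
    """Per-subscriber pool of fake IPv4 addresses mapped to real IPv6
    destinations, with a timeout so mappings can be reclaimed."""

    def __init__(self, network="127.1.0.0/16", timeout=86400):
        self.hosts = ipaddress.ip_network(network).hosts()  # address iterator
        self.by_v6 = {}      # real IPv6 address -> (fake IPv4, expiry time)
        self.by_v4 = {}      # fake IPv4 address -> real IPv6 address
        self.timeout = timeout

    def map(self, real_v6, now=None):
        """Return the fake IPv4 for a real IPv6, allocating if needed."""
        if now is None:
            now = time.time()
        if real_v6 in self.by_v6:
            fake, _ = self.by_v6[real_v6]
        else:
            fake = next(self.hosts)            # grab the next unused address
        self.by_v6[real_v6] = (fake, now + self.timeout)
        self.by_v4[fake] = real_v6
        return fake

    def lookup(self, fake_v4, now=None):
        """Resolve a fake IPv4 back to its real IPv6, expiring stale entries."""
        if now is None:
            now = time.time()
        real_v6 = self.by_v4.get(fake_v4)
        if real_v6 is None:
            return None
        _, expiry = self.by_v6[real_v6]
        if now > expiry:                        # timed out: reclaim the address
            del self.by_v6[real_v6]
            del self.by_v4[fake_v4]
            return None
        return real_v6
```

Since every subscriber gets their own pool, the same 127.x block can be reused for each of them.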
Isn't subnetting more of a software implementation, DHCP, and BGP thing in the router? Enter a netmask and network address into the router config, and the router can then analyse addresses to determine whether they are local or not. It seems that if IPv6 does not provide an equivalent of DHCP's handing out the netmask, then we are screwed. But netmasks are not something you find in the IP packet headers themselves.
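That local-or-not decision is just a prefix match against the configured network; a minimal sketch (the network and addresses are arbitrary examples):

```python
import ipaddress

# The configured network, i.e. network address plus netmask (/24 here).
network = ipaddress.ip_network("192.168.1.0/24")

def is_local(addr):
    """An address is local if it falls inside the configured network."""
    return ipaddress.ip_address(addr) in network
```

The same membership test works for an IPv6 prefix (e.g. `2001:db8::/64`); only where the prefix information comes from differs.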
The fact is, IPv6 was defective by design because of what it does not have: a mechanism for a long transition period between IPv4 and IPv6. If we had such a transition period, IPv6 would now be widespread. A transition period means that IPv4 and IPv6 networks can communicate with each other. Making IPv6 send packets to an IPv4 network is easy: give the IPv4 address block a subset of the IPv6 address space. The more complex but entirely doable part is IPv4 to IPv6. Since IPv6 has a larger address space than IPv4, IPv4 cannot directly see a lot of IPv6 addresses. The answer lies in the DNS system. When a user on an IPv4 network asks for the IP address associated with a DNS name which only has an IPv6 address associated with it, somewhere upstream a router and a DNS server conspire to 1) give the user (the IPv4 peer) a fake IPv4 address for the DNS name, 2) give the information on the IPv6-to-fake-IPv4 mapping to the router, 3) which then uses NAT to rewrite outbound packets from the fake IPv4 destination address to the real IPv6 destination address. Inbound IPv6 packets would be rewritten to IPv4, replacing the IPv6 source address with the fake IPv4 source address. Each IPv4 peer should be able to re-use the same block of fake IPv4 addresses, since the mappings can be done on a per-IPv4-peer (per-user) basis. Using this, it's also possible to give IPv4 clients direct access to IPv6.
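The three numbered steps can be sketched in a few lines. This is a toy model for one subscriber: the `dns_records` dict, the pool range, and the function names are all invented for illustration (the standardized NAT64/DNS64 mechanisms, RFCs 6146 and 6147, do something similar for the mirror-image case of IPv6-only clients reaching IPv4 servers):

```python
import ipaddress

# Fake addresses handed to the IPv4-only client, and the NAT table the
# router uses to rewrite them. One pool per subscriber.
fake_pool = iter(ipaddress.ip_network("127.2.0.0/16").hosts())
nat_table = {}   # fake IPv4 destination -> real IPv6 destination

def resolve_for_v4_client(name, dns_records):
    """Step 1: if the name only has an IPv6 (AAAA) record, hand the
    IPv4-only client a fake IPv4 address, and (step 2) tell the router."""
    recs = dns_records[name]
    if "A" in recs:
        return recs["A"]                 # a real IPv4 exists, just use it
    fake = next(fake_pool)               # step 1: synthesize a fake A record
    nat_table[fake] = recs["AAAA"]       # step 2: the router learns the mapping
    return fake

def rewrite_outgoing(dst_v4):
    """Step 3: the router NATs the fake IPv4 destination to the real IPv6."""
    return nat_table.get(dst_v4, dst_v4)
```

Inbound traffic runs the same table in reverse, substituting the fake IPv4 for the IPv6 source so the reply looks ordinary to the IPv4 client.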
Molex: those are the connectors that every PC builder fondly knows from power supply connections.