Editor Wars? Do you have a cat that walks on keyboards? If so, vi can be deadly to any open files.
Oddly enough, it wouldn't. You could put NAT hardware in front of old gear and everything would just keep working. Stuff that gets updated could use the new syntax and handle things correctly. Stuff like core routers and switches wouldn't care. It would be far less disruptive than trying to install IPv6.
I know a few people who have conspired to tell others that the nontraditional domains are like 1-900 phone numbers and when you use them, you will get a bill from your ISP.
Early IP resolver libraries would sometimes parse octal IP addresses with commas, as in your example of
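For anyone who hasn't hit this quirk: the classic BSD inet_aton() treats an address component with a leading "0x" as hex and a leading "0" as octal, so "010.0.0.1" is 8.0.0.1, not 10.0.0.1. A minimal Python sketch of that parsing rule (hypothetical helper names, not any real resolver's code):

```python
def parse_octet(s: str) -> int:
    """Parse one address component the way classic inet_aton() does:
    leading '0x' means hex, a leading '0' means octal, else decimal."""
    if s.lower().startswith("0x"):
        return int(s, 16)
    if s.startswith("0") and len(s) > 1:
        return int(s, 8)
    return int(s, 10)

def parse_ipv4(addr: str) -> tuple[int, ...]:
    """Split a dotted address and parse each component."""
    return tuple(parse_octet(p) for p in addr.split("."))

# Octal 010 -> 8, hex 0x1f -> 31:
print(parse_ipv4("010.0x1f.7.1"))  # (8, 31, 7, 1)
```

This is why leading zeros in dotted-quad addresses are a classic source of confusion: two libraries can resolve the same string to two different hosts.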
Some of the early proposals to expand the IPv4 address space were to borrow more bits from the source and destination port fields, so you could do things like "ping 188.8.131.528" or "ifconfig en0 184.108.40.206/32/13/2 dstbits 4 srcbits 8"
Microware OS-9 from 1979 used program and data modules somewhat like DLLs or shared libraries. The code that loaded a module would CRC-check it when loaded, and that bit of code could consult a list that could either allow or deny any module. If you loaded the right data module, you had built-in whitelisting about three and a half decades ago.
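The mechanism described above can be sketched in a few lines: checksum each module at load time and refuse anything not on the allow list. A toy Python version (OS-9 actually uses a CRC-24 over the module body; zlib's CRC-32 and the module names here stand in for illustration):

```python
import zlib

# Hypothetical allow list, playing the role of an OS-9 data module:
# maps module name -> expected CRC-32 of the module's bytes.
ALLOWED = {
    "shell": zlib.crc32(b"shell module code"),
}

def load_module(name: str, code: bytes) -> bytes:
    """Refuse to load any module whose CRC is not on the allow list."""
    crc = zlib.crc32(code)
    if ALLOWED.get(name) != crc:
        raise PermissionError(f"module {name!r} failed CRC allow-list check")
    return code

load_module("shell", b"shell module code")   # loads fine
# load_module("shell", b"tampered code")     # would raise PermissionError
```

Swapping in a different allow-list data module changes the policy without touching the loader, which is the point the comment is making.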
I could still hear the Saturn V when the 1st stage dropped off. It had a lovely bass with a crackle. Figuring speed of sound vs. speed of light, plus wind and sound drop-off over distance, I suspect this thing isn't that loud.
My computer is in a data center where it belongs. My desktop is just a fancy terminal.
There is one disadvantage of the different ARM modes, and that is that any arbitrary program will contain all the needed bit patterns to make some useful code. This means that any reasonably large program will have enough code to support hacking techniques like Return-Oriented Programming if another bug can be exploited. I would love to see some control bits that turn off the other modes.
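To make the point concrete: ROP gadget hunters just scan a binary for byte patterns that decode as useful instruction endings. A toy Python scanner for one such pattern, the 16-bit Thumb "pop {..., pc}" encoding, whose little-endian halfword has 0xBD in the high byte (real tools like ROPgadget also decode the instructions leading up to each hit; this sketch only finds the terminators):

```python
def find_thumb_pop_pc_gadgets(blob: bytes) -> list[int]:
    """Scan a byte blob at halfword alignment for 16-bit Thumb
    'pop {..., pc}' encodings (high byte 0xBD, low byte = register list)."""
    hits = []
    for off in range(0, len(blob) - 1, 2):
        if blob[off + 1] == 0xBD:
            hits.append(off)
    return hits

# Any sufficiently large blob tends to contain such patterns by accident:
blob = bytes([0x01, 0xBD, 0x00, 0x00, 0x10, 0xBD])
print(find_thumb_pop_pc_gadgets(blob))  # [0, 4]
```

The mode problem makes this worse: the same bytes can be decoded as ARM, Thumb, or Thumb-2, multiplying the number of accidental gadgets, which is why the comment wishes for control bits to lock out unused modes.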
Consider buying a new battery. Most laptop and cell phone batteries last between 200 and 400 charge cycles before their life gets too short.
If I compile from source, I can ensure that the binary I have is unlike any other in the world. That has protected my machines in the past so I will keep doing it.
Early Java was nothing other than a mess of pointers to pointers to pointers to pointers to more pointers, all in a multi-threaded system. The T1 addressed that problem, but the concept that "all problems in computer science can be solved by another level of indirection"* is false, and at some point compiler writers fix part of it. When they win, concepts like the T1 fail.
Sun tried great things with the T1, and it was like a great chess move that failed. The problem is they made a pawn sacrifice of their core business for that attack, and it just didn't work out. Up until the T2000, Sun never designed their high-end kit; they stayed with the low end while groups like Cray or SGI did their "big iron". The only great boxes Sun designed in house were the small pizza boxes. The SS1, SSP20, X1, and Netra 210 were great little servers. Things like the 690 and E10K were outsourced, and while they were impressive as well, they didn't have the personality of the pizza boxes.
* To quote David Wheeler.
Have you read "man inittab" on any System V-derived system? action=respawn means the process will ALWAYS be running at the listed run levels, sort of like how the svc daemon runs things now. Whoever planned the new system just didn't get "init".
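For anyone who never read it: a System V inittab entry is four colon-separated fields, and respawn restarts the process every time it exits at the listed run levels. A typical entry (exact getty arguments vary by system):

```
# id:runlevels:action:process
co:2345:respawn:/sbin/getty console console
```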
SMF only runs things as long as the contract system works.
As far as writing sensitive data to disks goes, do you know about the "real world"? Take a look at any online credit card system in the world. You will find people entering their card number as their email address, shipping address, or reference number. You will find admins sending stuff like "can you fix 4111 1111
ifconfig isn't about the stack. It is a tool to tell the stack what to do, and has been for more than three decades. Inventing new tools to do the same job was pure incompetence.
No, the T2 can preserve the context of 64 threads, but it will run no more than 8 execution threads at a time. In most cases the pipeline is so starved it won't even manage 8. When it is running 8 at a time, it is doing each at a much slower rate than the older CPUs would manage if they were made using the same process.
The II/IIi/IIIi can preserve the execution context of something like 4 processes at a time. Sometimes that is better. It is better on nearly all of my workloads.
Integer priorities mean I have absolute control.
The current system has no guarantee of any order of anything. This means that if you get hacked at a non-privileged user level, that process can hang around until it gets the "system is shutting down" signal, then do a quick fork/exec a few times and keep running until the system sends it a kill -9. Meanwhile it has a system with no syslog running and no auditing running. Take advantage of something running a broken XML library that runs setuid, and you own the system until power-off, and nothing is logged at all.
What advances would those be? The ones out of Fujitsu? The T chips are just now catching up on workloads that they can run reasonably. I have workloads on which a 15-year-old SPARC IIi will outperform a few-year-old T2. The V100 was a $1000 appliance box, yet the base T2 was selling for more than $6,000. If the UltraSPARC IIIi were made at 22 nm (instead of its original 130 nm), it would scream in most web appliance roles. It would even be a nice CPU for the Lights Out Management system, and it could even run Solaris, unlike their current LOM, which runs Linux.