You seem to miss the fact that the companies could do that now, but don't want to.
No, actually, I comment at the end that we could do this now, but that companies don't want to.
You're basically proposing to strip freedom from service companies, and have some sort of government regulator determine where their storage Must Be and which API they're allowed to use.
No, I'm proposing that there be industry standards. There wasn't a government regulator necessary to determine that email providers must use SMTP to transfer email. It's just the standard, and it doesn't make sense for individual companies to go against it, because doing so would cut them off from interoperating with everyone else.
And, by the way, Google doesn't have a walled garden; they have an open API, and other companies can already integrate and let their users keep backend data in a variety of Google services, such as Google Drive.
Umm... bullshit? OK, provide me with instructions on how to have Google Docs and Gmail store all my email and files on Dropbox in a way that's supported by Google. They may have some APIs for some things, but they aren't working with Apple and Microsoft to create a vendor-agnostic platform for web storage. And why are you all defensive and butthurt over Google?
I've been trying to make the point for a while now that I think we really need to rethink the design of the Internet.
Just to give an anecdotal example, right now I have at least five different Internet data storage accounts: Dropbox, OneDrive, Google Drive, iCloud, and my web host's storage. That's because, if I want to share documents with someone using Office Online, I need a OneDrive account. If I want to share documents with someone using Google Docs, I need a Google Drive account. If I want to use the features on my Mac and iPhone, I need an iCloud account. And then each of these services has its own authentication service, i.e. I have to create and manage separate accounts with separate passwords for each. The account names and passwords might have different requirements. Two-factor authentication on each, if it's available, might work differently. And this is just a small subset of the services that I use online.
If you asked me to list all of the websites that I've used over the years, and provide a list of what information I've had to provide to each one, I wouldn't know where to begin. A lot of these sites require security questions, which is generally a terrible idea. Each site requires that I pay for services by providing my credit card information. Lots of services online and in real life require a host of personal information to authenticate your real-life identity, but every time you provide your social security number as proof of identity (for example), you're increasing the number of people who potentially have access to that social security number, and therefore the number of people who could make unauthorized use of that number.
Without getting too deep into the solution I'd propose (hint: public-key encryption), I think we need to consolidate both the authentication and the data storage of all of these different services. Whether you use Google Docs or Microsoft Office Live or some other web-based document editor, you should be able to store and manage the documents in a consistent place, accessed through a standard API.
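To make the hint concrete, here's a minimal sketch of what "your data, encrypted under your key, stored wherever you choose" could look like with off-the-shelf tools. This uses standard hybrid encryption via openssl; the file names are invented for illustration, and a real system would wrap this in the standard API rather than shell commands.

```shell
# Generate a keypair once; the private key stays with the user,
# never with any storage provider.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out user.key 2>/dev/null
openssl pkey -in user.key -pubout -out user.pub

# Encrypt a document with a random symmetric key, then wrap that key
# with the user's public key (standard hybrid encryption).
echo "my document" > doc.txt
openssl rand -out session.key 32
openssl enc -aes-256-cbc -pbkdf2 -in doc.txt -out doc.enc -pass file:session.key
openssl pkeyutl -encrypt -pubin -inkey user.pub -in session.key -out session.key.enc

# Any provider (Dropbox, Google Drive, whoever) can store doc.enc and
# session.key.enc as opaque blobs; only the holder of user.key can read them:
openssl pkeyutl -decrypt -inkey user.key -in session.key.enc -out session.key.dec
openssl enc -d -aes-256-cbc -pbkdf2 -in doc.enc -pass file:session.key.dec
```

The point is that the provider becomes interchangeable: it holds ciphertext and never needs to be trusted with either the content or the key.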
So why am I bringing this up here? Well, it relates to the Internet of Things too, in that all of that information should be able to be encrypted so that only you can access it, and then stored in a location of your choosing. It shouldn't matter what the device is or who manufactured it -- if it's your device, you should be able to control where the data is sent, store it under your own encryption key, and no one should be able to access it without your authorization.
Of course, none of this will happen, because it requires that we create a set of standards that everyone abides by. Meanwhile, Google wants to have their standards that serve their purposes and keep users in their walled gardens, Apple wants their own standards to keep users in their walled gardens, and Facebook wants their own standards for the same reasons. That's why we have all these different Messaging applications, and none of them can inter-operate, even when they're doing something as simple as passing text back and forth.
We've covered this. You're a crazy German who has somehow assumed that, because you once said something in Greek and your Greek friend didn't criticize you, the German pronunciation of any word in any dialect of any language is proper, and the only people in the world who disagree are English speakers who are somehow all dumber than you.
Once we uncovered that much information, it stopped being worth my time to compose real responses. The fact that you don't believe linguists are capable of studying languages is just the last nail in the coffin. I've read your responses up until now, but I won't read any more. It's a waste of time. I'm guessing you're probably a 12-year-old or a mental case anyway.
Well as I understood it, the argument that Snowden's leaks had helped terrorists centered around the idea that, prior to the leaks, terrorists wouldn't have known that they were being monitored, or at least wouldn't have known the manner in which they were being monitored. Now that they knew that they were being monitored, and they had additional information about how they were being monitored, they would be able to change their behavior to avoid detection.
So if we can say that these terrorist organizations have not changed their behavior, it goes a long way towards debunking that theory.
Yeah, I actually really like the quote, "well prior to Edward Snowden, online jihadists were already aware that law enforcement and intelligence agencies were attempting to monitor them."
Really? Terrorists were aware that law enforcement were attempting to track and monitor them? Next thing you know, we'll find out that the mob is aware of law enforcement attempting to locate evidence and identify potential witnesses. What a shocker.
They think it's about limiting yourself to pipelines, but it's not. It's about writing simple, robust programs that interact through a common, relatively high-level interface, such as a pipeline. But that interface doesn't have to be a pipeline; it could be HTTP requests and responses.
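The pipeline form of that interface is the familiar one: small programs, each doing one job, composed through a shared stream of text. The example below is a trivial sketch; the division of labor is the point, and the same division survives if the transport between stages were HTTP instead of a pipe.

```shell
# Three independent programs composed through a common interface:
# produce lines -> sort them -> count duplicates -> rank by count.
# None of these tools knows anything about the others.
printf 'cat\ndog\ncat\n' | sort | uniq -c | sort -rn
```

Each stage could be replaced, or the glue swapped for a different transport, without touching the others — that's the actual philosophy being described.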
Yeah, I don't want to put words in anyone else's mouth, but I feel like there's been some cognitive dissonance in response to the Snowden leaks.
I've had conversations with people who, on the one hand, claim that what Snowden revealed couldn't possibly be helpful or meaningful, because he leaked things that "everyone already knew anyway". Meanwhile, on the other hand, they also claim that Snowden is a horrible traitor for releasing vital national secrets that threaten our safety. I feel like you can't have it both ways.
As I see it, he took what was a conspiracy theory that few people in the USA took seriously, and turned it into fact. It would be like leaking documents that JFK was, in fact, assassinated by the CIA, and then people responded by saying, "So what? I've been hearing that rumor for years! Still, we should kill the person who leaked it because he's compromising CIA operations."
Sometimes new stuff actually is much better than the old stuff. I was skeptical about binary logs until I actually tried them. The advantages of an indexed journal are overwhelming. "journalctl" is an extremely powerful log filter precisely because of the indexed and structured logs.
None of which requires that logging be moved into PID 1. Instead, all you need is the ability to support a new log format in some syslogd. Unless you were some kind of moron, you'd design the new program to be able to log to both text and binary formats at the same time so that you could enjoy the benefits of both formats. Systemd may or may not do this, I don't care; there's no reason whatsoever why logging should not be a separate daemon.
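As a toy illustration of the dual-format point: one log stream can fan out to both sinks at once. Here `tee` stands in for the daemon and gzip stands in for an indexed binary store; the file names are invented.

```shell
# One log line, two sinks: a plain-text file that stays greppable with
# standard tools, and a compressed file standing in for a binary,
# indexed format. A syslogd could do exactly this internally.
echo "daemon: started" | tee -a plain.log | gzip >> indexed.log.gz

grep -c 'daemon: started' plain.log            # readable as plain text
gzip -dc indexed.log.gz | grep -c 'daemon: started'   # recoverable from the binary sink
```

Nothing about this requires the logger to live in PID 1; it only requires a daemon that writes more than one format.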
If PID 2 is responsible for critical features, e.g. cgroups, which affect all running processes, including PID 1, then it won't make a difference.
cgroups are a kernel feature. They don't stop working because whatever process you're using for cgroup management dies. The process comes back, reads the state from the cgroup filesystem, and carries on.
The only reason we even need a daemon for cgroup management is that we're making inadequate use of capabilities. When a user (or script) runs a tool which creates cgroups via a mount, they shouldn't need any tool for privilege elevation, because they should already have the right to manipulate one or more cgroups in one or more approved ways. That right can be expressed in a couple of lines in a file sourced by init scripts; in systems with init scripts of any complexity, all of which source external files, no changes to the scripts themselves would be needed at all.
Even with an old, slow HDD it only takes about a minute to boot my Ubuntu PC, and that's with the second ATA controller's stupid-long POST added on top of the base machine's already stupid-long POST.
With that said, I am not against improvements to boot speed. I simply question the need for a replacement for PID 1.
If you define "has" as "has within a mile," then you're absolutely correct. If you define it as "has passing the home," then definitely not.
I live on a paved road and I'm several miles (at least three) away from fiber. Literally the only company with fiber into my county is AT&T, and as you likely know, they are bastards of the first degree.
Minimum standard for what? 2014? Per individual? Per family? Per household? Per block? Per neighborhood?
Please try to keep up.
1. Standards change. 10Mbps might be an acceptable minimum today, but it certainly won't be in 2024, let alone 2054.
1. Standards change.
The devil is in the details.
So is the wankery of your comment.
Basically everything is doable at 10Mbps. It's an acceptable minimum standard. We'd all like to see more, but at least they're setting the bar someplace livable.
cgroups existed before systemd.
the cgroups functionality existed in the kernel but wasn't really used that much before. [...] whereas current
Yes, my argument was that altering the init scripts would have solved most of what systemd solved. Thanks for confirming that.
each script ends up fucking things up in its own original and different way.
The scripts are unified by maintainers. I've already made the proposal that you could actually interpret unit files as init scripts, given the right parser: strip out the bracketed sections, dump the rest of the file's contents into a series of variables by sourcing it, then run a unified init script. This would work just fine for any long-running daemon, without any complex work. All you'd need is a hashbang at the top of the unit file.
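A rough sketch of that parser idea, under heavy assumptions: the unit name and contents below are invented, and this only handles trivial Key=Value units (no escaping, no multiple ExecStart lines), which is exactly the "no complex work" case being described.

```shell
#!/bin/sh
# Invented example unit for illustration.
cat > example.service <<'EOF'
[Unit]
Description=toy example

[Service]
ExecStart=/bin/echo hello from unit
EOF

# Strip the bracketed section headers, quote the values so the
# Key=Value lines become valid shell assignments, then evaluate them.
eval "$(grep -v '^\[' example.service | grep '=' | sed 's/=/="/; s/$/"/')"

# The "unified init script" part: just run whatever ExecStart names.
$ExecStart
```

The same unit file is then readable by both worlds: systemd parses it as a unit, and a sufficiently simple shell wrapper can run it as a script.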
proper handling of dependencies at runtime
Already handled by several init systems.
None of which are the original sysvinit.
Congratulations, you just hammered home the point that you don't understand Unix, while simultaneously proving that you don't understand sysvinit. Using fancy scripts with the original sysvinit is still using the original sysvinit. You are witnessing the awesome power of the Unix philosophy. Because sysvinit is small, simple, and completely modular, the scripts could be extended to provide functionality which sysvinit didn't try to claim for itself. Instead of moving more functionality into PID1, the functionality can be implemented at the script level.
Or cron if it's time-based activation. Or udev if it's hardware based activation. Etc.
Why do we need 83 different ways to start some code?!
Wasn't the whole point of the Unix philosophy to have each piece of software concentrate on doing one thing and doing it well?
You failed to understand the Unix way. It's not to have one piece of software. That's the systemd way. It's to have many pieces of interoperable software which can be combined to perform complex tasks, and reconfigured to perform other complex tasks. So we have cron and at (which are often merged) and we have udev and inetd. And each of these things does one simple thing. It's not unusual to want to start processes in multiple ways, that is in fact common to all modern operating systems. You can start them from the command line, you can schedule them, you can start them from the GUI. Is that a problem for you?
Before, you'd have the same concept spread across a dozen different systems, each doing only part of that functionality.
And you still do. Only now, they're all managed by one process which, if it craters, will not just cause them all to fail, but which will cause a panic. Great idea!
if you don't like it, don't use it.
That's getting harder to do as people depend on it. I may finally have to go back to BSD.