There will be several papers on this substance published in the Journal of the American Chronochemical Society.
There, fixed that for you.
Even for songs that I really like, if they come up too many times in rotation in my playlists, I'll vote them down because I get tired of hearing the same song over and over.
You know that "repeat" mode on music players that plays the single same song over and over again?
My SO uses it all the time, playing his favorite song while doing chores. Now we have an offline player so it won't be counted, but think about what it means for the statistics of lists from people who use that mode regularly on a streaming service. A song may be played hundreds of times in a single week.
Thanks for the feedback, it's interesting to hear someone else's use cases for this idealized tool.
As you outline, there are several ways to handle collaboration / multi-user editing:
- Online: every change is broadcast almost instantly to everyone editing the content. Not trivial to implement, but it's the model where conflicts involve the smallest changesets.
- Creating a local branch. The offline changes are made to a local branch. When the machine gets back online, you can try to merge all the changes back to the "trunk" as a single changeset, using special tools to solve the conflicts.
- Check in / check out. This is how SharePoint handles content collaboration. To change some content, you check it out, which locks it and makes it read-only for every other user.
- "Post"/push requests. Instead of having a single huge merge with all the changes from a branch, each small change is posted to a queue. The maintainer of the trunk can see all individual changes made by different users, ordered by timestamp. Each change can be accepted or rejected by the administrator of the "official" version.
All these models are valid depending on the type of content and collaboration. Ideally, a dedicated library would support any of them. For a personal content management system, usually you would only use the "local branch" method, as no one would change the main repository while you're away.
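The "post"/push-request model above can be sketched in a few lines. The names here (`Change`, `ChangeQueue`, `review`) are invented for illustration; the point is just that pending changes queue up in timestamp order, and the trunk maintainer accepts or rejects each one individually instead of facing a single huge merge:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Change:
    author: str
    payload: str
    timestamp: float = field(default_factory=time.time)

class ChangeQueue:
    """Trunk guarded by a maintainer: small changes queue up,
    and each one is accepted into the trunk or rejected individually."""
    def __init__(self):
        self.pending = []   # changes awaiting review, oldest first
        self.trunk = []     # accepted changes, in acceptance order

    def post(self, change):
        self.pending.append(change)
        self.pending.sort(key=lambda c: c.timestamp)

    def review(self, accept):
        """`accept` is the maintainer's policy: Change -> bool."""
        for change in self.pending:
            if accept(change):
                self.trunk.append(change)
        self.pending.clear()

q = ChangeQueue()
q.post(Change("alice", "fix typo"))
q.post(Change("bob", "delete everything"))
q.review(lambda c: c.author == "alice")   # maintainer accepts only alice's change
print([c.payload for c in q.trunk])       # → ['fix typo']
```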
Version control systems already have all this, but they work with the granularity of a whole file. The benefit of having a single universal library would be that you could reasonably implement this versioning for smaller snippets (for example, the amount of content in a copy/paste). Doing this is possible today, but it's currently not practical, as you would have to port the copy/paste protocol to each different platform.
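As a rough illustration of snippet-level granularity: Python's standard `difflib` can already diff a clipboard-sized piece of text word by word, with no files involved. A universal library would mostly be packaging this kind of machinery behind a cross-platform protocol (the example text is made up):

```python
import difflib

# Two versions of a copy/paste-sized snippet, split into words.
base  = "budget approved, deadline Friday".split()
local = "budget approved, deadline Monday".split()

# SequenceMatcher reports which word runs changed between the versions.
sm = difflib.SequenceMatcher(a=base, b=local)
for op, i1, i2, j1, j2 in sm.get_opcodes():
    if op != "equal":
        print(op, base[i1:i2], "->", local[j1:j2])
# → replace ['Friday'] -> ['Monday']
```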
A wiki is not that different from version control in that regard; it's just more user friendly (you can easily create copies of a base page and later merge their page histories). It always makes clear which one is the main "branch", and changes are made in place in the same interface used for reading, so it's easier for non-programmers to understand the content model.
In addition, who operates the repository server, and who pays for its operation?
Being a service, each company could deploy its own servers. It could be built as a federated system to share content between different servers, but usually you would synchronize all your devices through a single vendor which you trust.
That's done to save bandwidth, not to make changes mergeable.
To be fair, you only have to make changes mergeable if you're editing the exact same content simultaneously in two places. If you're using it as a personal tool, you only need to keep track of versions and make sure that the user is working on the latest one.
Collaborative editing whiteboards manage to solve that problem, too.
How does "tabs sync in browsers" handle merge conflicts if the user has the same tab open on two machines but edits the content of a textarea element differently on two machines?
It doesn't. But it would be trivial to expose both versions of the content and let the user choose which one she wants to use, or even copy/paste snippets from either of them into the final version - again, just like when using a wiki or version control.
Good luck with that if the other machine is switched off, or if both machines are behind NAT.
Distributed version control doesn't seem to have a problem with that. You keep separate content branches until (and if) the user wants to perform a merge with the assistance of merge tools, or deprecate branches that have become obsolete.
Whose responsibility would it be to provide the user interface for "seeing two versions at the same time to select the parts that they want to keep" in all applications for all platforms?
I didn't talk about "all applications" in my original post above.
I talked about "building a truly universal synchronizing platform", i.e. one single service / protocol that some interested applications might want to implement, in order to delegate the solution to the sync problem to that platform and have it available in all devices.
When I said "universal" I was referring to "software that ran on all platforms", not to "all content in all applications". Obviously it wouldn't solve the problem for applications that didn't implement the protocol. But having the library in all platforms would make it always available to any application that used it.
That would require "all kinds of content and work sessions" to be in a form that can be easily diffed.
Ideally yes, that would be an important implementation detail. Remote desktops already do this for graphical content, and version control does it for text content. Binary would be trickier, but not impossible.
I shudder to think of what that might look like.
I have a good idea of what it should look like, and it's not much different from what "the cloud" services achieve today (Dropbox, Google Drive, or MS OneDrive for files; tab sync in browsers), only centralized and homogeneous rather than ad hoc for each application. I'm assuming it would work as a single service for shared content and tools, i.e. something like an "Evernote for apps", not a platform capable of running every conceivable software stack.
The trickiest part would be sending heavy binaries between devices. A simple way would be to share only pointers to the information on the other system, and have the user manually request the byte-heavy blocks they want when they switch to a new device (either pushing them from the old device or pulling them through the pointer available on the new one).
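A minimal sketch of the pointer idea, with made-up names (`BlobPointer`, `transport`) and an in-memory dict standing in for the device-to-device channel: the small metadata record travels with the sync, and the heavy bytes move only when explicitly requested:

```python
class BlobPointer:
    """Stand-in for a byte-heavy block: the sync carries only this
    small record; the bytes are pulled on demand from the origin device."""
    def __init__(self, origin, blob_id, size):
        self.origin, self.blob_id, self.size = origin, blob_id, size
        self._data = None          # bytes not transferred yet

    def fetch(self, transport):
        """`transport` is a hypothetical callable (origin, blob_id) -> bytes."""
        if self._data is None:     # pull only when the user asks for it
            self._data = transport(self.origin, self.blob_id)
        return self._data

# Fake transport: a dict standing in for the old device's storage.
store = {("laptop", "video42"): b"\x00" * 16}
ptr = BlobPointer("laptop", "video42", size=16)
print(ptr.size)                    # metadata is available immediately
data = ptr.fetch(lambda o, b: store[(o, b)])
print(len(data))                   # → 16
```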
Conflicts could be handled the same way that wikis do it: keeping track of the last edited version, and providing tools for the user to perform manual merges or to see two versions at the same time and select the parts they want to keep.
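The wiki-style scheme can be sketched as optimistic versioning: each save declares which version it started from, and a save against a stale version is kept as a conflicting copy for manual merging instead of silently overwriting. The `Page` class and its fields are invented for illustration:

```python
class Page:
    """Wiki-style conflict handling: every save carries the version the
    editor started from; a stale save becomes a conflict to merge by hand."""
    def __init__(self, text=""):
        self.text, self.version = text, 0
        self.conflicts = []        # versions the user must merge manually

    def save(self, new_text, base_version):
        if base_version == self.version:       # fast-forward: no conflict
            self.text, self.version = new_text, self.version + 1
            return True
        self.conflicts.append(new_text)        # keep both, let the user merge
        return False

p = Page("draft")
v = p.version
p.save("draft, edited on phone", v)            # succeeds
ok = p.save("draft, edited on laptop", v)      # stale base -> conflict
print(ok, p.text, p.conflicts)
# → False draft, edited on phone ['draft, edited on laptop']
```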
There isn't an insurmountable problem from a theoretical point of view, as the problems are simpler than they look or have already been solved. The main problem would be backwards compatibility (you would have to throw away most desktop tools, although mobile has already done that).
Why? Because it would need to be able to run on all possible future hardware.
If the hardware is Turing-complete, it is entirely possible; no god-like powers required. It would only take building an interpreter or compiler for the language specification.
OK, those were my words. What it would mean to have the exact same software on all platforms is that someone could build a truly universal synchronizing platform.
This means that you could seamlessly move not just your files, but also the running sessions of your applications, using a single protocol instead of depending on individual per-app hacks or remote terminals. You could copy the content to your device, edit it with local apps, and return it to the server. Just like git version control, but for all kinds of content and work sessions, not just code.
And you don't think that the fact that the voting system is not exploitable plays a large part in making vote-buying nonexistent?
The very same countries that have stable democracies now are the ones which implemented such systems because vote-buying was a common practice in the past.
if you let people build whatever they want, they invariably build cocks.
In First-Person Shooters, "StC" stands for the "start-to-crate" time. Guess what the "C" in "StC" stands for in social virtual reality.
And then there's the fragmentation issue. Should they use Redhat or Suse or Yellowdog (wait what?) or Ubuntu or Kubuntu? What's the difference? Explained in phrasing that makes sense to somebody with a degree in Political Science?
That part should be easy to explain to those types. "Those are several vendors competing in the same market, so if things go wrong you can switch between them without having to completely retrain your tech people. If you start having problems with Windows, too bad - Microsoft is the only provider."
On the contrary, allowing people to outcompete each other on who works for less is what causes poor people to run out of options. If you work the whole day for slightly less than a subsistence salary, there's no room for doing something that will improve your life.
Slavery doesn't appear because "by definition" someone is forced to do something against their will; it happens because someone removes all your other options, so that the only voluntary alternative left is death.
Did you really just compare forced labor with the threat of harm and/or death to voluntary employment?
There's a difference in degree only, not in kind. When people live in a region so poor and uneducated that all jobs and all communication with the outside world are provided by a single landowner, there isn't much difference between being a free peasant and a slave. This comes from someone living in a country which was governed by that model for several centuries.
I have a genuine question - exactly WHERE do you recharge while on a road trip?
Unless Canada has a very large network of fast dedicated charging stations compatible with your car model, or you travel only to places where there is always such a station within range, how do you manage to move through the country without fear of running out of battery? I've never found a place that explains how to do this in detail.
Alan Cooper would never advocate the type of UI in question.
What kind of UI? The kind where designers watch users having problems with some parts of the design, and fix those parts based on empiric evidence? I think Cooper would advocate that.
"Catch a wave and you're sitting on top of the world." - The Beach Boys