It's free, but you are only allowed to distribute through Steam (meaning Valve gets 30-40% of your revenue). For a game that was going to sell mostly through Steam *anyway*, it means fewer parties picking at your revenue, but if you somehow weren't using Steam, it represents a big jump from UE4's 5% royalty.
Source 2 is 'free'... so long as you ONLY make your content available through Steam. For a lot of developers, this is just accepted, but some games aren't on Steam.
So let's say you use UE4 and don't sell through Steam. They get 5% royalties. Or Unity, where you pay a flat fee for the game engine.
If you use Source 2 for 'free', the only way to sell it is through Steam, which takes *30-40%*. Source 2 isn't free, it's a hook to try to get more lock-in and keep Steam as the premier distribution platform.
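Back of the envelope, per $100 of gross revenue (using the rates quoted above and taking 30% as the low end of the Steam cut; purely illustrative):

```shell
#!/bin/sh
# Hypothetical comparison of the two cost models on $100 gross revenue.
# Rates are the ones quoted in the comment: UE4 5% royalty, Steam 30% cut.
gross=100
awk -v g="$gross" 'BEGIN {
    printf "UE4 off-Steam:     engine takes %.2f, you keep %.2f\n", g*0.05, g*0.95
    printf "Source 2 on Steam: store takes  %.2f, you keep %.2f\n", g*0.30, g*0.70
}'
```

So the 'free' engine leaves you with $70 where the 5%-royalty engine leaves you with $95, if Steam wasn't already your channel.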
1. TPM costs money. Almost no one uses it. Therefore, it adds cost and almost no one realizes value.
2. Never used reachit, no idea.
3. A significant cost adder without much value. An eMMC might not be *too* much, but it's still significant. It'd probably be cheaper to ship a distinct USB key, but really, the ability to put a recovery image onto an arbitrary USB key is more useful and less likely to become a source of servicing headaches in and of itself.
4. Another cost adder that's likely to either be ineffective or a source of problems. 'Hardware' network engines are frequently problematic enough in high-end enterprise products. The absolute crap that would ship in a consumer-grade product makes me cringe.
While I agree with the sentiment that this isn't to be considered unlawful or anything, the word censorship does apply. Censorship means simply that content is reviewed and objectionable portions are suppressed or deleted, not that a state institution is doing it or that there is no alternative way of producing that content.
If a private radio station bleeps out something, it's still called censorship. Sometimes it's for FCC guidelines, so it's at least related to government in such cases, but different radio stations exercise different disciplines. For example, a song that references weed gets bleeped on one local station but not another in my area.
The meaning of a word is not something that should be politicized...
Whether Lenovo engages or not, it seems Microsoft may wish to issue a purge through a Windows Defender update. This would probably be the healthiest thing all around.
Hopefully this will be a lesson to all the vendors about the risks of taking money for shovelware....
While I'm inclined to also be suspicious of the study and fear people getting the wrong idea that it's ok to drive under *any* impairment, I do find one portion of your comment bizarre:
It's disappointing to see my tax money going to support the use of either.
I'm scratching my head at this sentiment over a study that was probably extraordinarily cheap compared to how much tax money goes towards enforcement and incarceration to fight the use of marijuana.
The pain comes in getting all the developers to have share-nothing sensibilities throughout the stack, such that any particular piece can fail and the rest proceeds without a hiccup. If a developer uses a quick-and-dirty BDB, it's hard to make that stand up regardless of what rug gets yanked out from under it. For developers *trying* to get there, the debugging and testing rigor required is significantly higher than what they are accustomed to. Keep in mind that not all these applications target arbitrarily large audiences; some target no more than a small business or team per instance. If you're designing for big scale, a lot of these sensibilities are unavoidable and you'd better have the talent to get it done regardless, but not everyone is designing for scale.
As for unpleasant user experience: many web sites and mobile apps get their brains thoroughly fried in strange ways during sessions, often rooted in some component falling over in a way the rest wasn't actually prepared for. For a web page this is usually little more than an annoyance, but mobile app developers recommending a reinstall over such a glitch is not uncommon.
This is getting a tad off topic though. Disposable system images are nice for cheap IaaS, but a penalty is paid up the stack that sometimes more than cancels out the savings below, and how it nets out depends heavily on the luck of the pool of talent and skill available in the situation.
One of the big shifts that systemd is embracing is towards IaaS/PaaS type services.
While that is a growing use case, the trend will plateau. Making an application stack that intelligently does that is no small feat, leaving a large chunk of the market unable to make the shift. For an example, look at how most new mainframes are used: brand new software and hardware stacks catering to running software pretty much exactly the way it was run 30 years ago. Some places and developers won't change. Besides, a great many companies that brag about how awesome their PaaS implementations are do not provide a pleasant user experience and/or suck down a *lot* more resources than they need. Even when people *think* they have the hang of these architectures (after non-trivial work toward that goal), they still frequently deliver sub-par experiences.
So logs are binary? Just send them on and do with them whatever you want.
The issue is that plain text has the best parser in the world for when things go badly wrong: the human brain. In a binary format, if things go wrong so that parsers lose track, then it's a lost cause. If structured text log is damaged beyond the reach of the utility, a human can still apply knowledge to do some forensics. Add to that the scenario where you don't have said parsers handy in a 'rescue' context.
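To make that concrete, here is a sketch with a made-up damaged 'binary' log file: the framing bytes are garbage a structured parser would choke on, yet generic text tools still recover the readable payload for a human.

```shell
#!/bin/sh
# Hypothetical damaged binary log: a text payload wrapped in broken framing.
# (File path and contents are made up for the demo; escapes are octal.)
printf '\000\007\377Feb 21 10:15:01 host sshd[314]: session opened\000\031frame' > /tmp/damaged.log

# A parser that loses track of the framing gives up here, but plain text
# tooling still surfaces the human-readable fragments for forensics:
tr -c '[:print:]' '\n' < /tmp/damaged.log    # non-printables become newlines
grep -ac 'session opened' /tmp/damaged.log   # prints 1: the line is findable
```

Nothing here understands the (broken) format; the human brain does the parsing, which is exactly the fallback a pure binary format loses.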
At least you finally get to see all the logs and won't find that information went MIA because the logger was not yet running or just couldn't get the data in the first place.
And that speaks to my point; I said that systemd might be able to win over detractors. Binary doesn't add value to any of the cases you described. Those are nice features, and if there were *no* downside, then people wouldn't argue as much about relative value. Right now the discussion is 'but we can do these tricks!' answered with 'but it isn't worth it', not that the features are inherently bad. If the things given up are mitigated, then 'we can do these tricks!' becomes more persuasive.
More and more apps are going to log there
It will *never* be the case for every app. Syslog was never universally supported. Windows has had an analogous unified logging facility for decades, and not even all *Microsoft* code uses it to log. A monolithic logging facility has *never* become ubiquitous for all applications on a system. Besides, that was just one example. Another is that systemd emphasizes 'systemd-nspawn' when there is an 'unshare' command that, with a *little* scripting, can do the same thing. A shell wrapper around that utility strikes me as a more approachable way to get an entirely new system call into the hands of administrators.
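A rough sketch of that unshare point, assuming a kernel with unprivileged user namespaces enabled (availability varies by distro/kernel config):

```shell
#!/bin/sh
# Poor man's container via util-linux 'unshare': new user, PID, and mount
# namespaces; we appear as root inside, and a fresh /proc shows only our
# own tiny process tree. Guarded so it degrades politely where user
# namespaces are disabled.
unshare --user --map-root-user --pid --fork --mount-proc sh -c '
    echo "uid inside namespace: $(id -u)"        # 0: mapped to root
    echo "visible pids: $(ls /proc | grep -c "^[0-9]")"  # just our own tree
' || echo "user namespaces unavailable here"
# A chroot into an extracted rootfs at this point would round out the
# nspawn-alike; paths and distro bootstrap tooling are left out as they vary.
```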
Ok, I meant to say 'not all of the criticism is strictly from a luddite perspective', I recognize that some of it is stubborn rejection of change, but that's no reason to point at that aspect of it and say 'see, they have no leg to stand on, *some* of them can't make a coherent argument as to why they are pissed!'
systemd development should be both proud and concerned. Obviously they have provided value as a non-trivial population stands up and defends them for the sake of the value. Obviously they are leaving some users adrift because so many are pissed. And then there are a lot of people on both sides that just love excuses to argue, but there is definitely a significant meaningful core of supporters and detractors.
All the criticism of systemd is not strictly from a luddite perspective. There is a population that appreciates meaningful advances (Wayland, btrfs, even some facets of systemd), but doesn't like some of the compromises systemd has employed to achieve their goals. Getting stuck in a point of time before systemd is not a desirable result, and in fact systemd might be able to win over some detractors if they recognize criticism and make sensible technical solutions to those rather than continuing to say 'oh everyone loves it except some impossible to please luddites'. For example, journald could embrace native text logging with external binary metadata and deliver all the goodies they provide and quell all the (justified) bitching that human readable logging is a second class citizen in their model.
They may not be able to accommodate all the objections (e.g. the amount of complexity they *must* keep in pid 1 to guarantee comprehensive service management without blindly applying namespace isolation everywhere, which would make the system look even weirder and risk breaking some services), but they could come a long way.
The issue for many of us is that things are being implemented that system administrators can't follow along with unless they become fairly capable software developers themselves (and even then, compiled code is less convenient to analyze than an interpreted language). The systemd design shifts focus to specialized tools that are better at their specific task but less reusable in similar contexts. If I started with syslog and learned that 'tail -f' lets me watch logs, I have acquired knowledge that applies the next time I encounter logging output. If I learn 'journalctl -f', that knowledge does not transfer to the huge number of other applications that do logging. It's a small example of something that in aggregate poses a significant challenge.
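The knowledge-transfer point in miniature (log path and service name are made up; the journalctl lines need a systemd box, so they're shown for comparison only):

```shell
#!/bin/sh
# Text-log skills work on ANY log file, from syslog to an app's own output.
printf 'start\nwarn: disk slow\ndone\n' > /tmp/demo.log

tail -n 1 /tmp/demo.log        # prints: done
grep -c 'warn' /tmp/demo.log   # prints: 1

# The journal equivalents are tool-specific and apply only to the journal:
#   journalctl -f -u demo.service          # follow one unit's logs
#   journalctl -u demo.service -p warning  # filter by priority
```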
An administrator faced with a 'classic' design won't know everything about the system, but can get far with 'set -x', 'find', and 'grep', because the configuration, logging, and much of the 'glue' code is clear text, and communication between programs usually hits the filesystem in fairly specific ways. Now with things like systemd and dbus, 'invisible' things happen (well, overly generic communication channels and compiled code). When the kernel implements new awesome stuff, it frequently manifests in sysfs, which is nice and discoverable: advanced functionality that adheres to 'everything is a file' and generally presents and accepts simple UTF-8/ASCII data. Not everything in the kernel does that, sometimes it creates obscure devnodes with ioctls instead, but it's common and good practice in kernel land.
In general, we already have a system that embraces many of the design principles seen in systemd and actually does a decent job of making the concepts work: Windows. Even with a great deal of talented investment over the course of decades, when a Windows system goes off the reservation in certain ways, no one can bring it back because of how complicated the integration of the various components is. While certain specific things can be done better (e.g. journald does better than the Windows event framework), the emergent behavior of Windows that becomes impossible for administrators to overcome isn't really due to those specific things.
FYI you may want to try xpra (not wayland, but still). It's better than X forwarding, but operates on principles that translate to the Wayland stack if you dig into it.
Besides, I know portions of NCAR use VirtualGL (at least last I was involved). It does some stuff Xpra doesn't and currently only works with an X stack, but again it operates on principles that don't really intrinsically use the X11 network features.
Of course you may simply be referring to the fact that such approaches have not yet evolved in the Wayland ecosystem, rather than implying they are hampered by not having an X11-style approach to remote applications.
As you know the Xserver was network transparent, so neither GNOME/KDE has any capabilities to piggyback on
Which really is still not a big deal, because...
As for Weston, I have not tested it, but Weston does include RDP support from what I can tell.
The best seamless remoting implementation for X11 is no longer actually using the X protocol. Xpra does remote X applications using compositing and window management hooks rather than anything involved in the X11 protocol interaction.
Of course it's entirely plausible that specific scenarios could be better done in the toolkit, but I think those scenarios are frankly limited compared to the complexity of making it happen. At the same time, real-time encoding of graphical content has become relatively cheap. Better to have local applications speak network under the covers for the most part, with an Xpra-like approach to cover the gap better than even X11 does today.
Compared to X11, RDP isn't good for seamless graphical element integration into the local environment (though integration of audio makes it better on another front, and performance-wise RDP runs circles around X11).
All that said, I'm not one to be down on Wayland. Xpra demonstrates how a Linux graphical environment is best remoted, and it doesn't really use the X protocol at all for the business end of things. It interjects as a compositor and window manager, with a dummy X server to satisfy the demands of X clients. The compositor gets the graphical data, sound comes along, and intercepting window manager hints lets it do other things like correctly place 'tray' icons. In other words, the X protocol is at this point thoroughly superseded at its big strength. I don't know if Wayland has something like Xpra yet, but I have hope.
Of course some of the vagueness is precisely because things happen mysteriously, and systemd has a habit of doing unexpected mysterious things. Of course it's not alone; quite a few subsystems have all decided to be a bit 'automagic', with systemd and associates just being the most prominent. As a consequence, if you manually reconfigure a network device using the underlying tools, something can mysteriously redo it later when it thinks something has happened, like a lease expiry, even though dhclient no longer runs. Or a time change event at boot causes dhclient and some mysterious third party to disagree about when the lease goes away: dhclient isn't renewing the lease, but some third party decides the lease wasn't renewed and deconfigures the adapter. It makes no sense, but someone in some random component thought something wasn't proper and decided to 'help' take care of something that wasn't its business.