However, I don't really see what the fuss is about. What they are in fact demonstrating is a relationship between conditional von Neumann entropies, which they claim is a measure of "uncertainty" (and it is, in a specific sense of the word). However, there is a difference between the von Neumann entropy and the variance of a physical observable, which is what the Heisenberg uncertainty principle is stated in terms of. Of course, if you label a quantity such as entropy "uncertainty" and demonstrate a relationship between those entropies, then you can indeed call that an "uncertainty relation" -- but that's just a cheap way of attracting attention.
Also, I am not sure whether the Heisenberg uncertainty relation can be obtained from their equation. I would expect that, by plugging pure, unentangled states into their equation, for example, Heisenberg should be recoverable (Heisenberg, of course, also applies to pure states). I don't immediately see how that can happen, since the von Neumann entropy of a pure state is zero. Perhaps I am just missing something, and perhaps my QM is a bit rusty.
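For reference, here is the distinction spelled out. The entropic relation below is my reading of the result (treat the exact form as an assumption on my part); sigma denotes a standard deviation, S(.|B) a conditional von Neumann entropy, and c the maximal overlap between the two measurement bases:

    % Variance-based (Robertson/Heisenberg) uncertainty relation:
    \sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}

    % Entropic uncertainty relation in the presence of quantum memory,
    % as I understand the paper:
    S(R|B) + S(S|B) \;\ge\; \log_2 \frac{1}{c} + S(A|B)

    % The pure-state fact underlying my question:
    % for \rho = |\psi\rangle\langle\psi|,
    S(\rho) = -\operatorname{Tr}(\rho \log \rho) = 0

The first bounds a product of variances from below by a constant; the second bounds a sum of entropies, which is exactly why the pure-state limit is not obvious to me.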
That's a bit stronger than his polite remark "I remain undecided" at the end.
I quit reading at the point where the original author claims that the entropy of an open system is given by Boltzmann's formula S = k_B ln W. That is simply incorrect, since the derivation of this formula clearly assumes a closed system (and not somebody pulling on a polymer with a pair of tweezers in a heat bath, which is an open system). For open systems, there are numerous other ways of defining entropy (all of which you can derive from the Boltzmann entropy by modelling the heat bath explicitly, by the way).
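To make the distinction concrete, these are the standard textbook forms (nothing here is specific to the article):

    % Closed system (microcanonical ensemble) -- what the derivation assumes:
    S = k_B \ln W

    % Open system in contact with a heat bath (canonical ensemble), one of
    % the usual generalizations; it reduces to the form above when all the
    % p_i are equal (p_i = 1/W):
    S = -k_B \sum_i p_i \ln p_i, \qquad p_i = \frac{e^{-E_i / k_B T}}{Z}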
There might be some merit in turning the usual reasoning of physicists around: from Boltzmann until today, most physicists have tried to explain entropy and the second law of thermodynamics as originating from a more microscopic description (and there are some very convincing arguments for that view; see J. Bricmont et al.). However, I doubt that by mixing concepts from information theory and statistical mechanics, and then turning everything upside down, you'll get to a point where you discover the new all-encompassing force that underlies everything ("information"??).
For thin clients (and desktop virtualization, for that matter), this is also where the cost savings are. No serious VDI vendor will tell you that the CapEx (investment in hardware, licenses, ...) is cheaper with thin clients and virtual desktops: you need to buy additional licenses, you're going to run desktops on server hardware (OK, 100 at a time on the same box), and that's before we even get to the licensing galore (Microsoft VECD, Citrix XenDesktop, VMware View, ...). The real cost savings come from the fact that it's much easier to manage, and from letting your very expensive system administrators do something other than troubleshooting a desktop (which costs you twice: once for the end-user downtime and once for the sysadmin's time troubleshooting it).
The same goes for thin clients: the up-front investment is larger, but they are very easy to manage (plug one into the network and it autoconfigures itself, pointing you to your virtual desktop -- which means fewer expensive on-site sysadmin interventions to replace hardware!), and they live longer than traditional desktops. Desktops used to have three-year lifecycles whereas thin clients typically have five-year lifecycles, so roughly speaking you'll need to buy two traditional desktops per thin client over a five-year span. I'll concede that with the current economic situation you'll see prolonged lifetimes for both thin clients and desktops, but the idea remains the same; only the numbers might differ today.
So is the thin client cheaper? In most situations, and looking at the total picture, sure it is, even despite the higher up-front investment. The real problem is not really the price of a thin client but whether your applications and IT environment support thin clients/server-based computing (TS/Citrix/VDI).
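To put rough numbers on that argument, here's a back-of-envelope sketch. Every figure in it is a hypothetical placeholder of mine, not a vendor quote, and it deliberately ignores the server-side CapEx (hosts, hypervisor and VDI licenses) mentioned above -- it compares the endpoint side only:

    # Back-of-envelope endpoint TCO over a fixed horizon. All numbers are
    # hypothetical placeholders; adjust them to your own environment.
    YEARS = 5

    def endpoint_tco(unit_cost, lifecycle_years, incidents_per_year, cost_per_incident):
        """Hardware refreshes over the horizon, plus support labour."""
        refreshes = -(-YEARS // lifecycle_years)   # ceiling division
        return refreshes * unit_cost + YEARS * incidents_per_year * cost_per_incident

    # Hypothetical fat desktop: 3-year lifecycle, frequent hands-on fixes.
    fat = endpoint_tco(unit_cost=500, lifecycle_years=3,
                       incidents_per_year=4, cost_per_incident=80)

    # Hypothetical thin client: 5-year lifecycle, mostly plug-and-replace.
    thin = endpoint_tco(unit_cost=350, lifecycle_years=5,
                        incidents_per_year=1, cost_per_incident=80)

    print("fat desktop, 5-year endpoint TCO: ", fat)    # 2600
    print("thin client, 5-year endpoint TCO:", thin)    # 750

The point isn't the exact totals (they swing entirely on the assumed incident rate and labour cost); it's that the support term dominates the hardware term, which is exactly where the "easier to manage" argument bites.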
Side note: I work for a consulting firm where I do a lot of work with VDI and server-based computing in general; we strive to be as independent as possible (trying to add nuance to the vendor claims as much as possible for our clients), but that might mean I am a bit biased towards using SBC where it works.
This is just Microsoft trying to take away application compatibility as an argument for IT admins against Win7 migrations, so they aren't required to implement dreaded solutions like MED-V, Terminal Services, Remote Desktop to an XP box, or VDI just to keep that darn ol' app running. Those solutions also require maintaining multiple operating systems -- in fact, as many as there are instances of non-compatible apps.
I acknowledge your point on Samba 4 not being production ready. I was merely using the example as an indication of "core functionality" that appeared to be missing.
I have seen countless problems restoring AD after a DC failure. I created a mock scenario with a Samba 4 DC wherein the entire database was wiped. I simply used Samba's own LDB toolset and had it up and running again in seconds.
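For the curious, the flow was roughly the following. This is a minimal sketch assuming a default Samba 4 install under /usr/local/samba; the paths are from memory rather than a verified recipe:

    # Minimal sketch of the LDB dump/restore flow described above.
    # Assumes the ldbsearch/ldbadd command-line tools are on the PATH
    # and the database lives at the (assumed) default location below.
    import subprocess

    SAM_DB = "/usr/local/samba/private/sam.ldb"   # assumed default path
    DUMP = "/root/sam-backup.ldif"

    def dump_database():
        # ldbsearch with no filter expression dumps every record as LDIF.
        with open(DUMP, "w") as out:
            subprocess.check_call(["ldbsearch", "-H", SAM_DB], stdout=out)

    def restore_database():
        # Re-import the LDIF dump into the (re-provisioned) database.
        subprocess.check_call(["ldbadd", "-H", SAM_DB, DUMP])

In practice you'd also want to handle ordering constraints and the special system objects, but the point stands: the database is an ordinary LDB file you can dump and re-import with the stock tools.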
Glad to see that they are providing a toolset to do this. I do wonder how FSMO role recovery, global catalog recovery and GPO recovery will be done -- I hope with the same ease, and especially in a fully Microsoft-supported manner.
However... the trickiest part of an AD disaster recovery (as you know, since you speak from experience) is not getting the database up and running again, but verifying its integrity. Again, I wonder whether tools similar to NTDSUTIL will be ported to a Samba equivalent.
You're missing the point. It isn't about cost at all. The point of having an open source replacement for AD is to make it easier for software developers to take advantage of the largely undocumented protocols. This is designed to facilitate interoperability. Even Microsoft, in the light of the anti-trust lawsuit it lost, extended an olive branch to the Samba team to assist in providing documentation. Plus, the work that Samba does stands to benefit Microsoft as well, because they might be able to see where the Samba team has had some really good ideas and legally incorporate them into mainstream AD.
+1 Karma for being the first to provide a good answer on the reason for Samba 4's existence.
I do agree that it can be a good trigger for Microsoft to be forced to document some parts of AD that are scarcely documented (garbage collection, tombstone processing, ...).
I have used Samba 3 in the past and was very pleased with its stability. It is a very decent product, and I believe that Samba 3 adds value by providing (a limited form of) Windows Domain services.
For me, the step up to an Active Directory environment is merely an academic exercise, a way to study the Microsoft closed-source internals in more detail. Interesting, yet of little practical value in a commercial or educational environment (given the low costs... which brings us back to that).
BTW: who are the Samba team targeting as users, then? I cannot imagine many home users requiring an Active Directory environment, so naturally they must be targeting small businesses -- where Microsoft also has a competitive offer with SBS & EBS.