- It is inevitable eventually, but so are lots of things
- Sure, I expect it to become a real problem in my lifetime
- It is already happening
- No, it'll be fun.
- The singularity should worry about me
Oh, they make a profit all right. By taking the money and then not making any flying cars.
Nope, the EU 'government' created the damn thing in the first place. The EU *court* struck it down.
OTOH, at least the whole scheme was out in the open, as opposed to the plain illegal surveillance in the US.
OTTH, who knows how many of the national governments are doing that as well.
There is also the neat trick (used by the UK among others) of turning a blind eye to, say, US surveillance of European citizens in return for access to the data. That way no one is officially spying on their own citizens, but the net result is the same.
For various reasons, functional languages are not always a realistic option in your project (especially if you're not starting from scratch), but yes: there is a lot more to learn from functional programming than lambda-envy.
No, I love objects.
But language features should be used where they bring value, not randomly because they're there.
First of all, I have no problem with immutable stuff, I don't even consider that state.
Mutable object state is fine too, as long as it in fact reflects the behaviour you're modelling. But it should be pared down to that. Not absolutely religiously, but it is a good ideal to strive towards, right up there with high cohesion and low coupling.
Many litter their objects with instance variables that do *not* model the core state of the object at all, but rather serve as a 'nifty' way to pass data to functions with less typing. This makes the object more stateful, which just makes life more difficult all around: you need to know what has happened before to predict what a function will do, and so you depend on many more places in the code to be certain that any one piece does what it should. The logic becomes less readable, more bug prone, harder to alter without introducing bugs, and harder to verify with tests. And that's not even going into thread safety and parallelization.
Fighting all this is not fighting object orientation. If anything, some solution approaches can be considered *more* object oriented: if that param-list starts getting too long, rather than abuse instance fields as lazy params, group the params into sensible value objects. You may find the new abstraction yields additional benefits: an avenue for better expressing function and intent, implementing validation and constraints, and encapsulating calculations and query logic, which only constitute noise in client code, into self-explanatory functions on the data object.
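A minimal sketch of that refactoring, with made-up names (a date range passed to report functions), just to show the shape of the idea:

```python
from dataclasses import dataclass

# Instead of stashing start/end in instance fields "to save typing",
# group them into a value object that can validate and explain itself.

@dataclass(frozen=True)  # immutable: no hidden state to track
class DateRange:
    start: int  # plain day numbers, to keep the sketch dependency-free
    end: int

    def __post_init__(self):
        # constraints live with the data, enforced once at construction
        if self.end < self.start:
            raise ValueError("end must not precede start")

    def days(self) -> int:
        # query logic lives on the data object, not as noise in client code
        return self.end - self.start + 1

def average_per_day(total: float, period: DateRange) -> float:
    # a pure function: its output depends only on its parameters
    return total / period.days()

print(average_per_day(70.0, DateRange(start=1, end=7)))  # 10.0
```

Note that `average_per_day` needs no object state at all, and the invalid-range case is now impossible to even construct.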
Caching is an optimisation that does indeed introduce state, and all the problems that come with it, in full.
Like all optimisation, it constitutes buying performance in exchange for increased complexity (of some sort). The cost of complexity is high, so the tradeoff should be qualified by a solid cost-benefit analysis, and by extension pertain to a demonstrably real performance problem.
That said, I certainly accept that 'true' changing state can be part of a good model, and I don't consider that caching. Just keep it under control, make sure it actually delivers value, and don't sprinkle it around lightly.
I was intentionally overstating it a bit.
There are of course cases where comments are warranted, and properly justified optimizations may be just such a case. The others also usually fall under the broader umbrella of "exceptions" to how things might normally be done. Just don't use it as an excuse to make the code less clear than it could be given the performance constraints, and beware of premature optimisation, which is a prime cause of brittle and unreadable code, frequently with no real benefit to offset the cost.
And always consider if you could say some of it with code too.
As a general rule, though, I still think it is wise to keep in mind that since the comment is not executed, there is no guarantee that it remains correct, if ever it was.
I sometimes make a hobby out of trying to find at least one error in every comment I see. It doesn't always pan out, but the percentage where it does is both staggering and frightening - I warmly recommend the practice to everyone.
You always have to keep in mind that code will be changed by several people, and your 'elegant' intention may not be understood or followed through by the next guy. So go for simple rules of thumb that not only keep your code readable and clear, but can accommodate future change while remaining so.
My number one rule for keeping code both readable and robust is this: Reduce state.
I don't mean everything needs to be purely functional, but consider state a general liability to correctness, readability, testability and maintainability. Less is more.
* Whatever state you have should be focused and serve to explain/model the actual problem domain, not just 'keep stuff for later'.
* Keep state as local as possible - most code is littered with instance variables that should have been locals and params.
* Just because an object _can_ bundle state with its functions doesn't mean it _should_.
* If it can be done in a static method and still make sense, do so.
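The rules above in miniature, with hypothetical names. Same output, but one version carries hidden history and the other doesn't:

```python
# Stateful version: the instance field is really just a lazy parameter,
# only there to "keep stuff for later".
class ReportBad:
    def __init__(self):
        self.lines = []          # state that only exists to shuttle data

    def add(self, line):
        self.lines.append(line)  # callers must now remember call order

    def render(self):
        return "\n".join(self.lines)

# Reduced-state version: data flows through params and locals. The function
# could be static/free, because it needs no object state to make sense.
def render_report(lines):
    return "\n".join(lines)

print(render_report(["a", "b"]))  # same result, no hidden history
```

To predict what `render_report` returns, you read its arguments; to predict what `render` returns, you must know everything that happened to the object beforehand.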
All comments are lies.
Write readable *code*
It does neither. Which dimension reversed is left as an exercise to the reader. Hint: it's not time.
Who said anything about GPS? There is plenty of stuff in geosynchronous orbit that needs clock accuracy requiring relativity to be taken into account, and has been from way before GPS, regardless of where the first experiments to demonstrate the effect took place.
In any case, proof at NEO invalidates the Newtonian prediction, and I know of no model that predicts the problem would exist at NEO but go away at higher altitudes.
You're pretty sure. Oh, that's settled then. Don't bother googling it or anything, it's not like anyone actually knows this stuff or publishes anything about it.
FYI: Merely the altitude of, say, geostationary orbit implies a potential energy that means you have to account for time dilation if you want to stay in sync with clocks on the ground. This was proven experimentally decades ago, and predicted way before that.
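If you want to sanity-check the size of the effect yourself, here's a back-of-envelope weak-field calculation (ignoring the small correction for the ground clock's rotation with the Earth, and Earth's oblateness):

```python
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
c  = 2.99792458e8        # speed of light, m/s
R_ground = 6.371e6       # mean Earth radius, m
r_geo    = 4.2164e7      # geostationary orbital radius, m

v_geo = math.sqrt(GM / r_geo)  # circular orbital speed, ~3.07 km/s

# Gravitational term: higher potential -> orbital clock runs fast.
grav = (GM / R_ground - GM / r_geo) / c**2
# Velocity term (special relativity): motion -> orbital clock runs slow.
vel = v_geo**2 / (2 * c**2)

drift_per_day = (grav - vel) * 86400  # seconds gained per day in orbit
print(f"{drift_per_day * 1e6:.1f} microseconds/day")  # roughly 46-47 us/day
```

Tens of microseconds per day is enormous against modern atomic-clock stability, which is why you have to correct for it.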
This type of catch in itself is not that uncommon or interesting.
The interesting questions are "why so harsh?" and, more importantly, "why now?"
The answer is that educated Russians with marketable degrees are fleeing the country by the boatload to escape what Putin is doing to Russia.
Eroding freedoms, isolationist policies, state-sponsored nationalism, rampant corruption, tolerance of violent crime, and these things in turn scaring away foreign investors - to an intelligent, educated young adult this easily adds up to "not the place to build a future".
Same here. What I need is a sharp axe to split firewood with. Why would I use a console OS for that? This product is clearly worthless.
This has far more potential.
If an immersive effect is what you're going for, the potential for this technique is in fact severely limited.
First let's strip away some marketing mumbo jumbo:
The "projecting directly onto the retina" pitch is bull.
Unless you want to venture into eye surgery, you can't bypass the optics of the cornea etc ("lazers" or no "lazers"), so any light looking like it comes from a particular direction has to actually arrive from roughly that direction. It follows that some part of the chain has to physically cover at least as much of the field of view as it looks like to the viewer. If you're close enough to the cornea that doesn't have to be very big, but unless you're willing to fix your gaze in a single direction and shave your eyelashes, there are practical limits to how far this goes.
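The geometry is simple enough to sketch: for a flat optical element at a given distance from the eye, the minimum width needed to subtend a given field of view is just two tangents (example numbers are made up, and real designs use curved/tilted optics, so treat this as a lower-bound intuition only):

```python
import math

def min_screen_width(fov_deg, eye_distance_mm):
    # A flat element at eye_distance_mm must span at least this width
    # to subtend fov_deg at the pupil: w = 2 * d * tan(fov/2).
    return 2 * eye_distance_mm * math.tan(math.radians(fov_deg / 2))

print(round(min_screen_width(90, 20), 1))   # 40.0 mm for 90 deg at 2 cm
print(round(min_screen_width(100, 15), 1))  # ~35.8 mm for 100 deg at 1.5 cm
```

Note how fast the required width grows with FOV (the tangent blows up toward 180 degrees), which is exactly why "covers your whole field of view from a tiny stationary chip" doesn't add up.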
The "no screen" pitch is also bull:
The DLP-chip is a screen just fine, just a really small, really bright, reflective one. Optics can make it look bigger, but this approach doesn't really scale to anything beyond a binocular-like FOV as long as the screen/chip remains stationary.
Either you need a bigger screen, or you need the small screen to follow your pupil around as your eyes move (really fast).
The latter is likely to take longer to become practical than the surgical option, so for the next few decades, it's going to be external screens of some sort for most of us.
That said, doubly curved displays, more advanced optics and further miniaturisation can greatly improve FOV, size and quality compared to the clunky rigs we see today, but don't expect anything beyond "really clunky ski goggles", even in the long term.
Things equal to nothing else are equal to each other.