What nonsense. I claimed that VR had the potential to correct for the limitations in current technology around "broadband" human interaction. Obviously more needs to be done in terms of capturing each person's 3D "image" to project into the VR space, and so on. Why you find this offensive is beyond me. (And yes, I didn't read the article; this is Slashdot, after all.)
I'm not sure why I'm supposed to prove anything; I thought we were discussing ideas. Where I see the short-term use case is in School of the Air-type environments. It's a long way off, but that doesn't mean it's not a good idea. But as I alluded to, I think the commercial environment is where you might see this hit earlier. Games will drive the tech, but economies of scale could see some new and interesting applications.
The point of this, the entire point, is that VR provides the potential to create an immersive experience that will finally allow true broadband human interaction. Having worked in the corporate space for many years trying to get cross-site teams to function well, I can assure you that chat rooms, phones, even webcams and the like do not cut it for human interaction. So much is lost without the subtle body language, the eyes, the stance, the folded arms. VR could change all that. If I can finally see you properly, look you in the eye, share a virtual whiteboard, then it will truly no longer matter whether we are in the same office. Or classroom.
And why not? This Notch fella sounds like he's just been holding things back... "No VR because Oculus sold out!" "What's that, mega-corp? 2 billion?" Being owned by Microsoft might mean rational rather than ego-centric decision making.
Well, if all factors are equal it doesn't vary; otherwise every run on the same machine would vary and it would be useless. The point is that there are enough differing variables between machines that it becomes useful for fingerprinting (and also for identifying specific hardware/driver/OS/browser signatures). It would be used in conjunction with other techniques in practice, I am sure.
Different drivers, OSes, web browsers, GPUs etc. all have slight effects when asked to render something onto the canvas. The trick is that the raw resultant bits can then be captured trivially using getImageData() and sent back to the tracker site (after hashing or what have you to reduce the size). It'll render the same way every time on your machine, but will differ from someone else's. (Showing my age here), kind of like how you could easily see the difference between the old Voodoo and TNT2 graphics cards by how they rendered.
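A minimal sketch of the capture-and-hash step (hypothetical names, not any real tracker's code): in a browser the raw bytes would come from ctx.getImageData(...).data after drawing to a canvas; here a small byte array stands in for that pixel buffer so the hashing part runs anywhere.

```javascript
// Simple FNV-1a hash to reduce a pixel buffer to a short fingerprint string.
function fnv1a(bytes) {
  let hash = 0x811c9dc5;            // FNV offset basis (32-bit)
  for (const b of bytes) {
    hash ^= b;
    hash = Math.imul(hash, 0x01000193) >>> 0; // FNV prime, kept unsigned
  }
  return hash.toString(16);
}

// Stand-in for getImageData().data: RGBA bytes that would differ slightly
// between machines due to anti-aliasing and rounding differences.
const pixels = Uint8Array.from([255, 0, 0, 255, 254, 1, 0, 255]);
const fingerprint = fnv1a(pixels); // stable on one machine, differs across machines
console.log(fingerprint);
```

The point of hashing is just to shrink the buffer before sending it back; any stable digest would do.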
An insightful post. I'd love to hear if you have any ideas on how the system could be improved.
Please mod this up and GP down. +5 Ignorant.
This is a good point. If we ever get to the point of being able to efficiently convert matter into energy with negligible losses, then science fiction becomes far more believable. The "scarcity of resources" equation hard-wired into our biology would be irrelevant. The physics is simple, but the engineering is a real bugger.
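The "simple physics" here is just mass-energy equivalence, E = mc²; a quick back-of-envelope number shows the scale involved:

```javascript
// E = m * c^2: the energy locked up in 1 kg of matter.
const c = 299792458;            // speed of light in m/s (exact, by definition)
const massKg = 1;
const joules = massKg * c * c;
console.log(joules.toExponential(2)); // "8.99e+16" -- roughly 21 megatons of TNT
```

About 9 × 10^16 joules per kilogram, which is why "negligible losses" is doing all the heavy lifting in that sentence.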
Don't be put off by the "C++" in the title. Most of the concepts are applicable to any language. It's about the engineering behind large scale software development.
If every university simply taught this book, software development would be called software engineering. Written in 1996, and still we have not learned the lessons. Flawed. Wordy. Partially out of date. And yet, if you understand and apply the concepts in this book, you will design applications and systems of the standard that everyone actually expects software to be at (rather than where it is).
To be fair, you basically set this kid up for failure. What you describe is a significant engineering challenge, and you gave it to a computer science graduate with no experience. If you gave this to someone with 10 years under their belt, I'm sure they could create a lovely maintainable package; but as it is, you should start over. You may as well have asked him/her to design the Golden Gate Bridge. Computer science does not teach engineering; there is no way this kid could have had the necessary skills.
Mod parent up!
An anonymous reader writes "I'm mildly autistic and in my mid 30s. I know I'm not the smartest person ever — not even close — but I'm pretty smart: perfect scores on the SAT, etc., way back in high school, and a PhD from a private research university you've heard of. I don't consider intelligence a virtue (in contrast to, say, ethical living); it's just what I have, and that's that. There are plenty of things I lack. Anyway, I've made myself very good at applied math and scientific computing. For years, without ever tiring, I've worked approximately 6.5 days a week, all but approximately 4 of my waking hours per day. I work at a research university as research staff, and my focus is on producing high-quality, efficient, relevant scientific software. But funding is tough. I'm terrible at selling myself. I have a hard time writing proposals, because when I work on mushy tasks I become depressed and generally bent out of shape. My question: Is it possible to find a place where I can do exactly what I do best and what keeps me stable — analyze and develop mathematical algorithms and software — without ever having to do other stuff and, in particular, without being good at presenting myself? I don't care about salary beyond keeping up my frugal lifestyle and saving a sufficient amount to maintain that frugal lifestyle until I die. Ideas? Or do we simply live in a world where we all have to sell what we do no matter what? Thanks for your thoughts."