Because we're still living in the '50s where every household has only one tv.
So many parents force the consoles to be in the main room, because "it's for the family" or "we don't allow the children to have TVs of their own". I consider it a form of child abuse, but it's common.
Can these Samsung Smart TVs be made to ignore all the convergence stuff and just be a monitor?
Last I checked, you needed a network connection for this stuff. So all you need to do is... not plug in the network cable. Or not configure the wifi.
So just use it as a TV and you're golden.
You know what's funny? I have a Samsung 40" series 6 whose model number I can't remember - a really early smart TV that's not worth the effort to use as one, so I didn't bother to hook up the ethernet when I moved. At least once a month it would lock up: picture and sound still going, but the UI would either stop responding, or respond but not actually do anything, requiring a hard power-off to fix. Since hooking the ethernet back up it hasn't done that in nearly a year. Tin foil hats at the ready...
How is this different from every iPhone, iPad, Android phone or tablet, or laptop with a webcam, all of which can record even your location, video and audio?
Because they don't. Your Android phone activates the camera on request, for activities that use it; they don't run it 24/7 because of its battery-sucking nature. As I understand it, the Kinect is ALWAYS listening and ALWAYS recording, because it sits waiting for you to speak the command words or wave the right gesture. Sure, you'd expect that this just records to a circular buffer which gets thrown in the bit bucket when it doesn't detect something on the whitelist, but years of experience have taught me that it won't be long before some hacker gets into the internals and finds a database that has recorded everything everyone in the room has said for the last few days. Emails will fly around Microsoft's HQ and they'll spin it as merely "anonymous usage stats" or "essential algorithmic learning", but we'll know that yet again a company was caught doing something no sensible person would believe they would do AGAIN.
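For what it's worth, the "circular buffer that gets thrown in the bit bucket" design is easy to sketch. This is purely a hypothetical illustration of that idea, not a claim about how the Kinect actually works:

```python
from collections import deque

class WakeBuffer:
    """Keep only the last `capacity` audio chunks; older data is overwritten.

    Sketch of the benign design: nothing persists unless a trigger
    (wake word, gesture) is detected and snapshot() is explicitly called.
    """
    def __init__(self, capacity):
        self.chunks = deque(maxlen=capacity)  # old chunks fall off the left

    def feed(self, chunk):
        self.chunks.append(chunk)

    def snapshot(self):
        # Only called on a trigger: the buffered audio is handed to the
        # recognizer; everything older was already discarded.
        return list(self.chunks)

buf = WakeBuffer(capacity=3)
for chunk in ["a", "b", "c", "d", "e"]:
    buf.feed(chunk)
print(buf.snapshot())  # only the 3 most recent chunks survive: ['c', 'd', 'e']
```

The worry in the post is exactly that `snapshot()` (or worse, `feed()`) quietly writes to disk instead.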
In the case of Apple, fans actually boast of the huge profit margins on each phone, and the fact that you can't do anything Apple doesn't allow is viewed as Apple protecting you.
True, however if you can't do it, the carriers can't do it to you either. Apple's control is a big part of why iPhones have no carrier bloatware. We can fix the abused Android phones, but consumers would rather it be usable out of the box.
This is very, very slowly getting through to the managers, though.
I had a boss not too long ago who simply assumed that everyone who ever bought a product wants to get our newsletter. I warned him that we might end up on blacklists; he chose to belittle me as a scaredy-cat and ignore me.
Last I heard, he's fighting a losing uphill battle to get off the various spam blacklists because NONE of his emails reach their recipients anymore. He's also noticed that it doesn't build trust in a company when you have to phone a prospective business partner, who has a commercial spam filter, to tell him to dig through his spam folder for your mail.
Unfortunately most businesses seem to realise this is going to be a problem, and rather than not sending spam in the first place, they just make sure it comes from different mail servers and a different domain than their normal operations.
If you are a business you HAVE to. From the start I made my mailing list completely opt-in. That doesn't stop AOL users from hitting the spam button instead of the prominent link at the top that gracefully removes them from the list. You can't have customers not receiving order confirmations or order updates, or have business email blackholed, because some webmail users decided they don't want your mail anymore.
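One thing that helps with the "spam button instead of the unsubscribe link" problem is the standard List-Unsubscribe header (RFC 2369), which many webmail clients surface as their own unsubscribe button. A minimal sketch with Python's stdlib; the addresses and URL are hypothetical:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "news@example.com"       # hypothetical addresses
msg["To"] = "customer@example.org"
msg["Subject"] = "Monthly newsletter"
# RFC 2369: lets the mail client offer a one-click unsubscribe,
# so users are less tempted to hit "report spam" instead.
msg["List-Unsubscribe"] = "<mailto:unsub@example.com>, <https://example.com/unsub>"
msg.set_content("Newsletter body, with a prominent unsubscribe link up top.")

print(msg["List-Unsubscribe"])
```

Transactional mail (order confirmations, updates) would of course go out without this header and ideally from a separate stream.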
Almost no one can hear a difference between lossless and any of the codecs at high bit rates (256 kbps+).
I wonder how many of the "I can hear the difference" crowd are comparing old MP3s to lossless rips. I can hear the difference between my old MP3s and modern LAME encoded versions of the same source. Can I tell the difference between modern LAME high bitrate MP3s and FLAC? Only when I know ahead of time which is which!
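The honest way to settle "can I really hear it?" is a blind ABX run scored against chance. A small sketch that computes how likely your score is under pure coin-flip guessing (one-sided binomial, no external libraries):

```python
from math import comb

def abx_p_value(trials, correct):
    """Probability of getting at least `correct` right out of `trials`
    ABX trials by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12/16 right: only ~3.8% chance of doing that well by guessing
print(round(abx_p_value(16, 12), 3))  # 0.038
# 9/16 right: entirely consistent with guessing (~40%)
print(round(abx_p_value(16, 9), 3))   # 0.402
```

If you "know ahead of time which is which", the test is no longer blind and the number means nothing.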
Warning, pure speculation follows based on a very brief time working in the games industry.
The PS2 was notoriously difficult to utilise compared to the PS1 and the Dreamcast, but over time it managed to hold its own against the more powerful GameCube and Xbox. At the risk of hugely oversimplifying, what let the PS1 hold on so long was that it had a dedicated vector processor, which meant the competition's (N64, Saturn) faster CPUs mattered much less. (The N64 used the main CPU for just about everything, which made its 90-odd MHz MIPS much less impressive.) The PS2 architecture was an evolution of the PS1, adding more dedicated vector units rather than going the T&L GPU route, which was just about to hit the big time.
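The workload those dedicated vector units were built for is essentially "multiply thousands of vertices by a 4x4 matrix every frame". A plain-Python sketch of that inner loop, which the PS1's GTE and the PS2's vector units did in dedicated silicon (which is why a faster general-purpose CPU alone didn't close the gap):

```python
def transform(vertices, m):
    """Apply a 4x4 row-major matrix to a list of (x, y, z) vertices.
    Each vertex costs a handful of multiply-accumulates; real geometry
    engines did this in fixed-function/SIMD hardware, thousands of
    vertices per frame."""
    out = []
    for (x, y, z) in vertices:
        v = (x, y, z, 1.0)
        out.append(tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3)))
    return out

# Translate a couple of vertices by (1, 2, 3) with an identity rotation
m = [[1, 0, 0, 1],
     [0, 1, 0, 2],
     [0, 0, 1, 3],
     [0, 0, 0, 1]]
print(transform([(0, 0, 0), (1, 0, 0)], m))  # [(1.0, 2.0, 3.0), (2.0, 2.0, 3.0)]
```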
The PS3 swapped the vector processors for the Cell, which was a natural next step. However, all ATI and nVidia GPUs have their own vector-processing capabilities, and I'd imagine the cost of developing a special PS3 GPU that gave proper emphasis to the Cell was HIGH, when a CHEAPER GPU must have been the intention. So the Cell became half redundant. And with all the compromises made to get costs down, it wound up with too much power in one narrow field, poor memory bandwidth, no unified memory, and a weaker GPU than the 360.
The 360 used a plain architecture that could be leveraged relatively easily from the get-go, but has a lower potential for hidden magic. The PS3 was designed with the potential to blow it out of the water, but the reality is that no one has found any hidden stores of power. Much like the Itanium, it was only better in theory; in practice it was a struggle even to match the competition. The end result is that developers have to work harder just to match the 360, except for a small number of rendering effects that are easier on the PS3.
Contrast with Mac's F9, F10, F11 and F12 keys. If your program just happens to use one of those keys, you're shit-out-of-luck (as is the case when trying to debug something in Visual Studio in a virtual machine, for example).
You can use Cmd-F9/10/11/12 to avoid the Exposé stuff. OS X sees that as a different combination, so it doesn't fire Exposé, but VMware passes the F-key unmodified to the VM. It seems like an oversight, but it has got me out of a number of jams. If you're not using VMware, YMMV.
The default is to overscan on every TV I've seen, but the last few I've bought in recent years allow you to switch off the overscan from the TV menu. Sometimes it's called 1:1, sometimes Native, sometimes Full. Often it's simply listed in the same menu as the 4:3/16:9 widescreen menu thingy.
You'll find that all your HDMI sources like Blu-ray players, consoles, etc. will be running scaled up too, though it's not so immediately obvious when there's no Start menu on screen!
No, they're talking about activating the game at the point of sale, probably in addition to all the arcane DRM techniques they use.
And how would your console determine it has been activated? By going online.
If you watch large teams of programmers, management actually forces the developers to write slow code, claiming that maintainability is more important than any other factor!
I don't see why it should be one or the other - maintainability is important, as is using optimal algorithms. Fast algorithms can still be written in a clear and understandable manner.
Up to a point; then you've got to make a choice: keep the high-level OOP constructs, or flatten it out to make the compiler's job easier.
THEN you have the next level of optimization: keep the readable code, or do it the "clever" way that nets a 40% boost. And as any experienced coder will tell you, clever code is the antithesis of maintainable.
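A toy illustration of the trade-off (the 40% figure above is the poster's, not measured here). Both functions count set bits; the "clever" one uses Kernighan's trick, which does less work on sparse words but is far less obvious to the next maintainer:

```python
def popcount_readable(n):
    """Obvious version: inspect each bit in turn."""
    count = 0
    while n:
        count += n & 1
        n >>= 1
    return count

def popcount_clever(n):
    """Kernighan's trick: n & (n - 1) clears the lowest set bit, so the
    loop runs once per SET bit instead of once per bit. Faster on sparse
    words, but you can't tell what it does at a glance."""
    count = 0
    while n:
        n &= n - 1
        count += 1
    return count

assert popcount_readable(0b101101) == popcount_clever(0b101101) == 4
```

The readable version documents itself; the clever one needs the comment to survive code review.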
There are a lot of problems with portable applications which try to write into the directory where
Do portable progs on your fav linux distro do the same? That is, they write their configuration files to
What happens when an app with no root privilege tries to write its configuration files in
When you are installed on as large a number of computers as Microsoft's OS is, you have to be a little more responsible. Improve the security model to bring it closer to Linux? Spectacular! Leave it so that writes to previously-okay directories now fail? Terrible.
Maybe I'm oversimplifying this, but to me it would seem trivial to remap writes to a user directory. Every time OpenFile is called on Program Files/blah/foo.cfg, open %USERDIR%/Local Settings/App Data/blah/foo.cfg if it exists; if not, copy the one from Program Files and then open it. Vista already has compatibility options that correct for misapplications of Win32 functions. My complaint, at least, is that their backwards compatibility was really half-assed and they've shown no interest in updating it.
Which, now that I think about it, seems to be their pattern (Xbox compatibility on the 360 has stalled, and XP never did get a working SoundBlaster emulator).
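The copy-on-first-use remapping described above is easy to sketch. This is a hypothetical illustration in Python, not Vista's actual implementation (which does ship something along these lines as "file virtualization"); the paths are stand-ins for Program Files and AppData:

```python
import os, shutil, tempfile

def open_config(install_path, user_dir):
    """Open a config file for read/write, redirected to a per-user copy.

    If the user copy doesn't exist yet, seed it from the (read-only)
    install directory, then open the copy; all writes land in user_dir."""
    user_path = os.path.join(user_dir, os.path.basename(install_path))
    os.makedirs(user_dir, exist_ok=True)
    if not os.path.exists(user_path):
        shutil.copyfile(install_path, user_path)  # copy on first use
    return open(user_path, "r+")                  # writes go to the user copy

# Demo with temp dirs standing in for Program Files and AppData
with tempfile.TemporaryDirectory() as tmp:
    install = os.path.join(tmp, "foo.cfg")
    with open(install, "w") as f:
        f.write("color=blue\n")
    cfg = open_config(install, os.path.join(tmp, "appdata"))
    print(cfg.read())  # seeded from the install copy: color=blue
    cfg.close()
```

The hard part in the real OS is doing this transparently under every Win32 file API, for every legacy app, without breaking programs that legitimately share files in their install directory.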