Only for some people. I managed to use Netflix just by being persistent a couple of days ago, but yesterday it wouldn't let me through at all, no matter how many times I tried. It let me into the app, but whenever I tried to watch a video it would force me to log into PSN. After that failed, the Netflix app produced an error and halted the streaming.
Even granting your strawman here, the people who are interested in "social status and pretty hardware" aren't in a position to "experiment and innovate" on the platform. The iPad is not built for developers, it's built for end users. Restrictions for developers become simplicity and usability for end users, and they'll never miss the functionality they don't know they don't have.
This has been my experience as well, emphasis on the "unnatural and slightly painful". No real discernible issues with my iPhone or its reception under normal operating conditions.
Thank you SOOOO much for using the accent. So many people don't seem to know how to type it.
The problem with "Open Source" hardware, or any other tangible thing, is simply that for most hardware of any significance, a person would need a factory and expensive resources on hand to go about making it. Granted, the barriers to ACTUALLY utilizing your rights to modify, update, and redistribute open source software are similarly insurmountable for most people, but this is even more so.
Don't get me wrong: if what this means is that design documentation (schematics, blueprints, manufacturing instructions, etc.) is released with the hardware, so that other companies can use it as a base, that's still great. That means Linux will run spectacularly with "Open Source Hardware" underneath, because writing the necessary drivers should be trivial, but I have a feeling adoption will be significantly lower than it has been for open source software.
Underlying it all is the problem of money. Open source software can afford not to make any money (until it reaches some critical mass, at which point monetization through advertising and support tends to become practical), but a provider of open source hardware has to expend significant R&D, manufacturing, and production costs, and most companies won't be willing to simply give away the fruits of all that effort, since the number of people who can contribute back is comparatively limited: contributors would need to be able to manufacture the hardware to test their modifications thoroughly. Unlike open source software, where there are many contributors, open source hardware would have comparatively few, so the cost to each contributor is much higher and the benefit of having extra eyes on the designs much lower.

I'd like to be proven wrong, but even looking at the "success story" over at the Make blog, the vast majority of the "open source hardware" projects look like toys with blinking lights and pointless gadgets. Things that might make a fun weekend project, but nothing like what OpenMoko is (was?) trying to do, or that could significantly improve our computing infrastructure and get rid of the problems caused by closed hardware (especially things like video cards, which are still giving open source OSes trouble).
Except for the little detail here:
Not really that necessary to have a spy poking around in source code that was handed to you on a silver platter, huh?
I guess it cost them the price of some odd number of data analyzers then...
Thanks for doing the legwork on this one. You confirmed my suspicions about this situation exactly.
I can't speak for the GP, but I would be saying it with a straight face. It's neither asking a support-related question nor providing any support-related answers.
Nobody reading a post about a magazine review is going to learn anything that can help them solve their antenna issue. People who are posting with antenna issues (but not posting a discussion topic about the review) are not having their topics removed. An article about a product is not automatically relevant to technical support discussions about said product, even when that article is a very well-researched review. Technical support is about stating a problem that you, personally, are experiencing and trying to find a solution. Posting a third-party review does neither.
A dropped call is a dropped call. If the iPhone 4 drops calls in areas where other phones (like the iPhone 3) worked perfectly, then the problem is not the number of bars displayed or the software.
According to the Anandtech review, the opposite is true:
From my day of testing, I've determined that the iPhone 4 performs much better than the 3GS in situations where signal is very low, at -113 dBm (1 bar). Previously, dropping this low all but guaranteed that calls would drop, fail to be placed, and data would no longer be transacted at all. I can honestly say that I've never held onto so many calls and data simultaneously on 1 bar at -113 dBm as I have with the iPhone 4, so it's readily apparent that the new baseband hardware is much more sensitive compared to what was in the 3GS. The difference is that reception is massively better on the iPhone 4 in actual use.
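For context on the -113 dBm figure in that quote: dBm is a logarithmic power scale referenced to 1 milliwatt, so -113 dBm works out to only a few femtowatts of received power, which is why it sits at the very bottom of the usable range. A quick sketch of the conversion (function names are my own):

```python
import math

def dbm_to_mw(dbm):
    """Convert a power level in dBm to milliwatts: P(mW) = 10^(dBm/10)."""
    return 10 ** (dbm / 10)

def mw_to_dbm(mw):
    """Convert milliwatts back to dBm: dBm = 10 * log10(P(mW))."""
    return 10 * math.log10(mw)

# -113 dBm, the 1-bar floor quoted above, is roughly 5e-12 mW (~5 femtowatts)
print(f"{dbm_to_mw(-113):.2e} mW")
```

Because the scale is logarithmic, every 3 dB of extra receiver sensitivity roughly doubles the usable signal power, which is why a "much more sensitive" baseband can matter so much at the margin.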
If people with this problem were measuring it in frequency of dropped calls during normal use, we probably wouldn't be seeing nearly so much complaining.
Your post reminded me of this:
apple thinks they can force devs to pay $99/year just for free apps or $99
No, a few really stupid developers think Apple can and wants to try. Apple has made it clear that they want different platforms for Mobiles, Tablets, and the Desktop by spending a ton of development effort to diversify them in the first place. Why on earth would they then merge them together, losing 3/4 of the functionality of a desktop and all of the media professionals who buy expensive, new, profitable Macs?
What will be interesting is seeing how long it will take cell phones to catch up to where PCs are today (in terms of processing/rendering power).
Why would we want them to? There's no need for that sort of raw power on a cell phone unless it's powering things we haven't yet dreamed of. Even once we could theoretically get that kind of power on a cell phone, it's not like PCs will have stagnated to the point that a cell phone with a wireless keyboard, mouse, and monitor would be a viable replacement for our "main" computer either. No doubt Windows 2025 will require a terabyte of RAM to run, and still won't be any better at what it does than it was in the year 2000.
Frankly, there are many tasks that can be done quite well from a cell phone, and I'm sure many tasks that can't be done on them now could be feasible in 15 years' time. Camera motion sensors and a bulb capable of projecting a full-size keyboard on the go would be a nice replacement for netbooks for many, I suppose. But in terms of processing and rendering power, it's not like a tiny cell phone screen (even with a full keyboard) is desirable for watching HD movies, playing the latest FPS, doing video editing, print publication work, or really anything else that taxes our desktop machines to the limit. The most I can imagine wanting to do on a phone are things that computers have been capable of for some 10 years or more now: connect to the internet, grab my e-mail, connect me with my friends through text, video, or audio interfaces, and give me something mildly amusing to do when I'm bored and have 10 minutes of downtime. We just don't need an 8-core processor, 4 gigs of RAM, and a dedicated video card to do that.
You're assuming they don't want to exaggerate the difference in results.
And making your so-called "money" worth a damned thing.