Even better, what if the cloud storage is free of charge to the end user and ad-supported? Do the advertisers pay? And if some of the advertisers are media companies, would they end up paying (eventually back to themselves, less admin costs) for our legal personal copying?
I'm a mobile QA guy and work with both platforms. I needed a rooted Galaxy S2, since Android (even as of ICS) still doesn't respect the proxy setting properly for apps, so I need a requires-root app called 'Autoproxy' that lets me intercept and modify HTTP requests; I use it to inject error conditions and such.
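The error-injection approach described above can be sketched as a tiny plain-HTTP forwarding proxy that fakes a 500 for matching URLs. This is a toy illustration, not what 'Autoproxy' actually does: the `INJECT_PATTERN` value and the port are made up, and real QA proxies also handle HTTPS, request rewriting, and so on.

```python
# Hedged sketch of an error-injecting HTTP proxy (plain HTTP only).
import http.server
import urllib.request

INJECT_PATTERN = "/api/"  # hypothetical: fail any request whose URL contains this

def should_inject(url: str) -> bool:
    """Decide whether to fake a failure for this request URL."""
    return INJECT_PATTERN in url

class InjectingProxy(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # As a proxy, self.path holds the absolute URL the client requested.
        if should_inject(self.path):
            self.send_response(500)  # injected error condition
            self.end_headers()
            self.wfile.write(b"injected server error")
            return
        # Otherwise forward the request upstream unchanged.
        with urllib.request.urlopen(self.path) as resp:
            self.send_response(resp.status)
            self.end_headers()
            self.wfile.write(resp.read())

# To run: http.server.HTTPServer(("", 8080), InjectingProxy).serve_forever(),
# then point the device's proxy setting at this machine, port 8080.
```

With the device's proxy pointed at it, any app request matching the pattern sees a server error, which is exactly the kind of condition you want to exercise in QA.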
I have to say that, at least in terms of the end-user tools, rooting an Android device is *much* more difficult than jailbreaking iOS. Also, on iOS I don't need to do any of that: I can just use a stock phone and it will send everything to the proxy defined in the wifi settings.
Obviously personal preferences differ, but based on what I've seen of Android from working with it every day, and the things that annoy me about it versus the things that annoy me about iOS, at least for the moment I'd never choose Android for my personal phone or recommend it to friends who ask what to get (although I'm constantly re-evaluating that as we get new 'gifts' from the project manager; it's just that Android has never won so far).
>> Are we talking like optical black, suitable for coating the insides of instruments like telescopes and microscopes?
> Blacker! I'm talking black knobs with black legends on a black control panel black. It's so black it's frictionless.
(shamelessly reposted from another
> For some people with expensive existing music collections/movie collections (that predate iTunes or that were not gotten through iTunes), an Android tablet is really the only option they have.
Perhaps I'm feeding a troll here but would you care to explain why? If you've heard or read somewhere that the iDevices can't play mp3s purchased outside of iTunes then I guess you'll be pleased to hear that's entirely incorrect - or are you referring to some proprietary music/DRM format available only on Android?
Think of it as keeping your existing lock but replacing the part that sits in the door frame (the strike) with one that, when sent the appropriate electrical signal, gets out of the way and lets the door open even though the lock on the door is still in a locked state.
(*web app QA by profession, so hitting Alt+PrintScreen on the VNC window is just a lot more efficient, with much faster turnaround, for getting a rendering issue into the bug tracker than the other way: taking a screenshot on the iPhone by holding down the two buttons, quitting the browser, going into the Photos app and emailing the shot to yourself from there (which launches the email app, taking yet more time), and then re-launching the browser to get back to what you were doing. And I use ad-hoc because I don't have to share the available bandwidth, so it's much faster and screen updates are much more frequent than when going through the wifi router.)
before making such broad assumptions, a person should [..]
It wouldn't do not to reference related work such as the Stanford Camera Array - video here showing the multitude of neat tricks that can be done by processing images from multiple apertures into a single image:
The advent of inexpensive digital image sensors has generated great interest in building sensing systems that incorporate large numbers of cameras. At the same time, advances in semiconductor technology have made increasing computing power available for decreasing cost, power, and package size. These trends raise the question - can we use clusters of inexpensive imagers and processors to create virtual cameras that outperform real ones? Can we combine large numbers of conventional images computationally to produce new kinds of images? In an effort to answer these questions, the Stanford Computer Graphics Laboratory has built an array of 100 CMOS-based cameras.
Multi-camera systems can function in many ways, depending on the arrangement and aiming of the cameras. In particular, if the cameras are packed close together, then the system effectively functions as a single-center-of-projection synthetic camera, which we can configure to provide unprecedented performance along one or more imaging dimensions, such as resolution, signal-to-noise ratio, dynamic range, depth of field, frame rate, or spectral sensitivity.

If the cameras are placed farther apart, then the system functions as a multiple-center-of-projection camera, and the data it captures is called a light field. Of particular interest to us are novel methods for estimating 3D scene geometry from the dense imagery captured by the array, and novel ways to construct multi-perspective panoramas from light fields, whether captured by this array or not.

Finally, if the cameras are placed at an intermediate spacing, then the system functions as a single camera with a large synthetic aperture, which allows us to see through partially occluding environments like foliage or crowds. If we augment the array of cameras with an array of video projectors, we can implement a discrete approximation of confocal microscopy, in which objects not lying on a selected plane become both blurry and dark, effectively disappearing. These techniques, which we explore in our CVPR and SIGGRAPH papers (listed below), have potential application in scientific imaging, remote sensing, underwater photography, surveillance, and cinematic special effects.
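The synthetic-aperture idea in that last case can be sketched numerically: shift each camera's image by its parallax for a chosen focal plane, then average, so objects on that plane stay registered and sharp while occluders at other depths land in different places per camera and smear into a faint blur. A toy NumPy sketch, with made-up geometry and integer disparities purely for illustration:

```python
# Toy shift-and-average synthetic-aperture refocusing (illustrative only).
import numpy as np

def synthetic_aperture(images, offsets):
    """Average camera images after aligning a chosen focal plane.

    images:  list of HxW arrays, one per camera in the array.
    offsets: list of (dy, dx) integer parallax shifts that register the
             chosen focal plane across cameras (they depend on camera
             baseline and plane depth; values here are assumptions).
    Objects on the focal plane align and stay sharp; objects at other
    depths fall at different pixels per camera and average toward zero.
    """
    acc = np.zeros_like(images[0], dtype=float)
    for img, (dy, dx) in zip(images, offsets):
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(images)

# Demo: a focal-plane point shared by all cameras, plus an "occluder"
# that sits at a different column in each camera's view.
imgs, offsets = [], []
for i in range(4):
    img = np.zeros((8, 8))
    img[4, 4] = 1.0        # on the focal plane: same spot once aligned
    img[2, i] = 1.0        # occluder: different position per camera
    imgs.append(img)
    offsets.append((0, 0)) # already registered for the focal plane
out = synthetic_aperture(imgs, offsets)
```

After averaging, the focal-plane point keeps full intensity while each occluder pixel is diluted to 1/4, which is the "seeing through foliage" effect in miniature.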
ok sorry for the somewhat off-topic reply, but I just had to ask about this one: when syndicated through Google Reader this story had a Scientology.org Flash ad embedded within it - are they really advertising on Slashdot?
I'm pretty sure it's not Google doing it, since not all feeds have this kind of embedded ad (i.e. it's within the post itself, right below the "Read more of this story at Slashdot." link).
Is it possible to replace human speech with a computer? Yes, most definitely. Is it practical to do it in something like the kindle with current technology? No.
If you see the problem as the computational power of the mobile device being insufficient, surely with current technology you could just hand that task off to a server somewhere, render the text into speech there, and have the client software download the resulting mp3?
I don't see any particular reason to want to do it on the device, network availability issues notwithstanding. Even when it becomes possible with faster mobile processors, I suspect (nay, speculate) that if it's as CPU-intensive as you suggest, the battery drain incurred by on-the-fly text-to-speech playback would be greater than a relatively short burst of over-the-airwaves file transfer followed by the same duration of much lower power consumption from regular old mp3 playback.
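The offload architecture suggested above is simple enough to sketch: the client POSTs the text to a server-side TTS endpoint and saves the returned mp3 for low-power local playback. `TTS_ENDPOINT` and its `text` parameter are assumptions for illustration, not any real service's API.

```python
# Hedged sketch of client-side TTS offloading (hypothetical endpoint).
import urllib.parse
import urllib.request

TTS_ENDPOINT = "http://tts.example.com/synthesize"  # made-up server

def build_tts_request(text: str) -> urllib.request.Request:
    """Build the POST asking the server to render the text to speech."""
    data = urllib.parse.urlencode({"text": text}).encode()
    return urllib.request.Request(TTS_ENDPOINT, data=data)

def fetch_speech(text: str, out_path: str) -> None:
    """Download the synthesized mp3; playback then costs only decode power."""
    with urllib.request.urlopen(build_tts_request(text)) as resp:
        with open(out_path, "wb") as f:
            f.write(resp.read())
```

The power argument is baked into the design: the radio is active only for the short transfer burst, after which the device is back to ordinary mp3 decoding rather than sustained speech synthesis.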
Or they may take me seriously and be sending some nice gentlemen in balaclavas to rifle through my bins some time later tonight..?
My friend's response to this idea, lifted from an email conversation about the same:
That's a reclaimed liberty, don't you see? The freedom to have the state take such close interest in the minutiae of your everyday life, that's a rare gift. Who knows what benefits this could bring, what positive lifestyle changes they could force you to undertake freely by inspecting your refuse? All this Orwell posturing about privacy - they're the real fascists, dude.