Why did anyone need to do this field survey? It simply confirms what we already know - cosmic rays create SRAM errors. Hot components fail more than cold components. Big whoop.
There are two prices: a tremendously inflated "retail" price, and a "negotiated" price paid by insurers. The retail price is absolutely unaffordable.
As an example, my annual tests ordered by my physician last year were $700 "retail".
The negotiated Blue Cross rate was $130. I paid a co-pay of something like $40. So, the lab somehow cheerfully forgoes $570 in revenue, and collects $40 from me and $90 from Blue Cross.
Poor people have to pay $700. Or go through horrible paperwork and nonsense to access free government services. Or wait till it's an emergency, go to the emergency room, and have the needed blood tests run there. Then they wind up owing $10,000 they can't pay, and ruin their credit.
The system is bass-ackwards.
Oh, you likely will need a paid version of CrashPlan, but you don't need a CrashPlan cloud account. The paid version gets you better encryption and a more frequent backup interval. I'm unsure whether the free version will keep retrying if the backup machine is unavailable at the once-every-24-hours backup time.
When I was involved in high-frequency stock trading, we did a weekly physical duplicate onto a set of 3 drives. The drives were then rotated via FedEx between our colo location and, alternately, the homes of the two partners in the business. (The most recent backup was kept at the colo site.) If it is really a concern, this solves the "both houses burning down" problem.
All these rsync/TrueCrypt solutions are doing just what CrashPlan already does for you, in a more convenient form. Yes, they have transparency that CrashPlan doesn't have, and you can examine the source code and build it yourself if you so choose. Most people, though, won't have the desire or skills to do that. I'm happy with my CrashPlan backup with NSA fallback.
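For the curious, the core of those DIY solutions can be sketched in a few lines. This is a hypothetical illustration, not CrashPlan's implementation: it just assembles the rsync-over-ssh command that would push an already-encrypted backup directory to a friend's machine (the host, paths, and option set are assumptions):

```javascript
// Hypothetical sketch of a DIY rsync-style backup: build the command
// that would push a local (already-encrypted) directory to a remote peer.
// Host and paths are made-up examples.
function buildBackupCommand(srcDir, remoteHost, remoteDir) {
  const flags = [
    "-a",        // archive mode: preserve permissions, times, symlinks
    "-z",        // compress data in transit
    "--delete",  // mirror deletions so the remote matches the source
    "-e", "ssh", // tunnel over ssh
  ];
  return ["rsync", ...flags, srcDir + "/", remoteHost + ":" + remoteDir].join(" ");
}

const cmd = buildBackupCommand("/home/me/backup-encrypted", "friend.example.com", "/backups/me");
console.log(cmd);
// rsync -a -z --delete -e ssh /home/me/backup-encrypted/ friend.example.com:/backups/me
```

You'd still need to handle scheduling, retries, and encryption of the source directory yourself, which is exactly the convenience CrashPlan is selling.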
Bother to read the docs, and then use CrashPlan.
You can use CrashPlan to back up to their cloud (for $), your own computer (free), or a friend's computer (free).
- Your computer does *not* have to be on 24/7
- The backup machine does *not* have to be on 24/7
Too bad that your lack of reading comprehension is going to give you a flood of ridiculously-complex solutions.
It's a ridiculous, impractical concept.
I was a radio amateur in high school and college. At the time, portable transceivers, commonly called "HT"s (for "Handie-Talkie", I think a Motorola trademark), were getting popular with hams. Initially, no companies were making such transceivers specifically for amateur radio, so hams managed to get surplus police radios that could be re-tuned to work on a nearby ham band.
The elite choice was far and away the Motorola "bricks", so called because of their weight and size - and prized most of all for their reliability. But they were expensive - several hundred dollars for a radio that was beat to hell but still worked. These were the "iPhones" of HTs.
I couldn't afford one, but I found a larger, clunkier version from a company I think called Tec, at a swap-and-shop (flea market). It had a modular design: you popped off the back, and there were probably 20 little cubic plug-in modules.
Problem is, those things just never worked. Well, imagine all those hundreds of contacts, jostling around during day-to-day use by cops. They were totally unreliable. And the thing was huge, due to the packaging overhead.
These were the "Phonebloks"...
As I understand it, these are all drivers/cars that are licensed to carry the public - either "black cars" (licensed livery drivers in licensed livery cars) or licensed taxi drivers driving licensed taxis. These can be fairly costly licenses to maintain, and so I have to assume these are full-time drivers.
Why is there a controversy over rates?
These drivers are using UberX to fill in when they don't have any full-fare opportunities. They can take it or leave it; it's up to them.
I think there is some confusion with a third tier of Uber service (not sure if it's rolled out anywhere yet?) for ride-sharing amongst the public, in places where they can legally do that.
How do UberX rates compare to the rates these drivers would normally command?
It's funny how none of this on either side mentions what the actual rates are.
While I use an ad-blocker on my desktop machines, I don't do anything to block ads on my iPad. I realize I could at least shut off targeting (and could use a proxy - easy, given I almost never leave the house with it and just use my WiFi), but I haven't as of yet. I suppose I will, though, because I am noticing a trend: most of the ads I see are for products I have already bought.
So, I recently got interested in sous vide cooking, did some research, and bought a Sous Vide Supreme. Guess what I see blasted across websites on my iPad? Sous Vide Supreme. Now, you'd think after a while these brilliant algorithms would notice that I've stopped searching for equipment and have been searching for recipes. But, no, they just keep wasting their money pushing ads for the product I've already bought.
This is just one example. I've had this happen with other products recently, as well. In every case, I haven't noticed the ads until I've already bought the product.
Cloud services are useful and convenient, but these products should provide the option for local-only connectivity. The devices should have publicly-documented APIs available directly on the device, and it should be possible to disable the cloud service.
As an example, I have a Withings scale. There is no local API. It's absolutely silly that my weight and body-fat measurements need to go to "the cloud".
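To make the point concrete, here's a sketch of what a local-only API could look like. The endpoint and field names are entirely made up; the point is just that the device would serve a small JSON document over the LAN, and anything on the network could read it directly, with no cloud round-trip:

```javascript
// Hypothetical local-device API: the scale would serve a reading like this
// at, say, http://scale.local/measurements/latest (made-up endpoint).
const sampleResponse = JSON.stringify({
  timestamp: "2013-11-20T07:32:00Z",
  weight_kg: 81.6,
  body_fat_pct: 22.4,
});

// A client just parses the JSON locally -- no third-party server involved.
function parseReading(json) {
  const r = JSON.parse(json);
  return { weightKg: r.weight_kg, bodyFatPct: r.body_fat_pct };
}

console.log(parseReading(sampleResponse)); // { weightKg: 81.6, bodyFatPct: 22.4 }
```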
I'm not terribly concerned that a change in weight or body fat is going to trigger an NSA raid.
(Maybe DEA raid: "Excessive weight-gain alert! Possible inception of marijuana use!!")
On the other hand, these devices all are good occupancy indicators.
Multi-core parallel processing comes to toothbrushes!
Let the one-upmanship begin! Can anyone say "razor blades"?
You can turn off the background parallax effect. But, really, it is quite subtle and not that objectionable. I turned it off simply because I figured it eats CPU, GPU, or both unnecessarily.
The new animations are gratuitous - they don't seem to serve any useful purpose. They are just plain silly-looking. Home-page icons now fly in from all different angles. Drag a page, and you are no longer dragging a skeuomorphic piece of paper, but a skeuomorphic sheet of Silly Putty - drag at the right side, and the page warps; your finger "stretches" the right-hand side of the page. This kind of stuff was all the rage on Linux desktops - about 5 years ago. By now, everybody still running Linux has gotten tired of it and turned that nonsense off. The "bounce" now has a "warp" effect as part of it as well - the page deforms when it bounces.
It's like playing a bad ho-hum video game where they amped-up the effects because of lack of compelling content.
No, you can't disable these effects.
I'd imagine that if there is a medical issue with this, it is worse on iPad, because it fills more of your field of view when you are using it.
Well, yes you can. You can downgrade to a device that Apple has deemed incapable of rendering these effects. I think you need, say, an iPhone 4.
Apple seems to have recently become brain-dead when it comes to the practical aspects of UI. And I hate to say it, but it must be due to Ive, because they were quite good about it before. He is really, really good at designing appealing surfaces, finishes, and packaging. UIs, not so much.
Another example of the non-functionality of the new UI: buttons. It seems now that many buttons have absolutely NO feedback that you have pressed them. I imagine the concept here is that the button is meant to perform some action, and the action itself is the confirmation that the button was pressed.
(Of course, a button is a skeuomorphism, and we don't want skeuomorphisms, right? So, I guess I shouldn't say "button" but "that word that's a bit bigger and fatter than the other words, and is off by itself, that if you touch it something happens"...)
Somebody should have telegraphed that message to the poor developers who were given the impossible task of ensuring that the "action" happens soon enough for the user to connect their touching something on the screen with the "action" - regardless of the amount of work the action might take, and, oh, regardless of any other background processing that might be going on in the device. Well, actually, I suppose somebody did, and those developers probably now feel like shit for having failed, even though they could not possibly have succeeded.
Sony is a big electronics company AND a gaming company, so perhaps your friend can have his cake and eat it too. (Or perhaps it's a lie...)
I spent a couple of years at Sony San Diego Studio as a contractor, albeit not as a "game programmer". Two contracts doing Ruby/Rails back-end stuff: first working on internal software that manages configuring back-end servers and deploying them, and then working on back-end admin and console services. The latter was definitely much more fun, since it was working in the same space as the game developers. (Sony produces most/all of their sports-related games at their San Diego Studio. I worked on back-end stuff for MLB The Show and ModNation Racers.)
It's a typical big-company tech environment. They pay standard competitive rates to contractors, and I gather the employees are well-paid and get good benefits. It is definitely seasonal, with crunch time around the holidays, unfortunately, but then the place is nearly deserted in the summer as people use their comp time. Everyone seemed generally happy. It feels like any other well-funded, non-venture San Diego tech company: laid-back, an even looser-than-normal dress code, really no excessive pressure. (Though one particular night rolling out the ModNation Racers beta just before Christmas got some nerves on edge, as it was their first major deployment on Amazon, and they didn't spin up enough servers for demand. Well, and Rails... So, it was a night of emergency surgery to see what could be taken out to improve throughput and response. A couple of normally-unflappable co-workers got pretty frazzled.)
Dunno about the game development teams, but both groups I worked in were big on Scrum. The good thing is the meetings are only 10 minutes. The bad thing is, you have to get in by the daily meeting time. (Which was conveniently scheduled for the sleep-ins, though, so really not that bad.) (Best Scrum moment: a co-worker passing out from locking his knees while standing. Of course, they had to call EMS as a precaution, and I'm sure it was embarrassing. My boss smoothed it over by remarking that he did the same thing at his wedding! The irony is that the passer-out was a big fitness nut who had won some competition at the complex gym - so, I suppose, even more embarrassing...) At the same time, there were the weekly, typical corporate-style meetings where you all fall asleep around a big conference table and somebody wakes you up when it's your turn. But at least these are kept to a minimum.
If Sony is an option, I'd highly recommend it.
I never use my registrar's DNS, even though I like my registrar (Moniker) a lot.
Nor would I ever use a web hosting company's DNS.
It's safest to use third-party DNS.
Risks of single-sourcing registrar, DNS, and hosting:
- Your host/registrar goes out of business, or suffers a disaster. You now have no ability to switch to a different host until the situation is resolved through ICANN and somebody else gets control of the registry records. If you use third-party DNS, then while you can't change to a different DNS provider (assuming registrar failure), you can still add new hosts to your DNS and can still move between hosts. You can survive a failure of any two of the three services and still change providers.
- You have a dispute of some sort with your host (assuming you single-source with your host). They can now hold your domain hostage for payment.
Keeping DNS/registrar/hosting all separate maximizes versatility and makes you more able to squirm out of unpleasant and unexpected situations.
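The rule of thumb above is easy to check mechanically. A hypothetical sketch (the provider names are made up): list who provides each of the three roles, and flag any provider covering more than one, since that's a shared failure domain:

```javascript
// Flag single-sourcing: any provider responsible for more than one of
// registrar / DNS / hosting is a shared failure domain.
function findSharedProviders(roles) {
  const counts = {};
  for (const provider of Object.values(roles)) {
    counts[provider] = (counts[provider] || 0) + 1;
  }
  return Object.keys(counts).filter((p) => counts[p] > 1);
}

// Made-up example: DNS and hosting share a provider -- a dispute or
// outage there takes out both at once.
const shared = findSharedProviders({
  registrar: "Moniker",
  dns: "ExampleDNS",
  hosting: "ExampleDNS",
});
console.log(shared); // [ 'ExampleDNS' ]
```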
You decide which is which.
" Wait, is it about creating apps or creating websites? "
Both, and that's part of the problem.
HTML/CSS/JS can be used to create "hybrid" applications (native apps that use a WebView for the UI), "webapps" (website content locally-installed) or traditional websites.
The needs/problems with each are quite different. For websites, it makes sense to shift some of the burden to the client, if that means significantly less data is sent. Especially for mobile.
For native apps, you might (or might not) have a local server running on the device, but even if no server, you are typically serving pages out of a local (device) filesystem. The "slow link" problem goes away.
jQuery Mobile tries to work in all of these environments, and so isn't optimized for any one of them. You have to know which pieces to use in which environment.
It doesn't help that most developers seem to treat jQuery Mobile development like they do all JS development - as cut-and-pasted snippets to be copied from blog posts and samples. Most don't seem to bother to read the documentation, let alone learn what parts are efficient and which are not.
There are two mistakes that are the most common cause of JQM project failure. The first is using IDs: JQM loads "pages" into the current document, so if one builds a JQM site without thinking, one winds up with duplicate IDs in the DOM. That is not allowed in HTML, and gives unpredictable - but almost always wrong - results. (Yes, it's 2013, and HTML does not deal with "pages" at all, so JQM and others have had to come up with hacks.) The other is creating a site/app using the "multi-page" feature, which is very limiting, because your entire site must fit in one document. But developers don't bother to read the docs, and then don't understand why they can't link from a single-page document to a multi-page document. (Because it's not designed to be used that way!) Both mistakes are easily fixed at almost no cost if you deal with them from day 1, and become prohibitively expensive once you have built out a full site while sweeping the problems under the rug.
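The duplicate-ID failure mode can be simulated without a browser. This is a toy model, not actual jQuery Mobile code: it mimics how getElementById-style lookup returns the first match in document order, so once a second "page" containing the same IDs is loaded into the document, lookups keep hitting the stale first page:

```javascript
// Toy model of a JQM-style document: loaded "pages" accumulate in the DOM.
const documentPages = [];

function loadPage(page) {
  documentPages.push(page);
}

// Mimics getElementById: first match in document order wins.
function getById(id) {
  for (const page of documentPages) {
    const el = page.elements.find((e) => e.id === id);
    if (el) return el;
  }
  return null;
}

// Two pages built with copy-pasted markup, so the IDs collide.
loadPage({ name: "home",  elements: [{ id: "submit", label: "Home submit" }] });
loadPage({ name: "about", elements: [{ id: "submit", label: "About submit" }] });

// The user is now looking at "about", but the lookup finds the stale
// element on "home" -- the "almost always wrong" result.
console.log(getById("submit").label); // "Home submit"
```

Scoping lookups with classes to the currently-active page avoids the collision entirely, which is why the advice is to drop IDs from day 1.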
On the JQM support forum, probably half the problems posted are due to one or both of these issues. And the developers are quite stubborn once told that their approach won't work. They insist they have to use IDs, because "classes are too slow", and that they need multi-page "because I want page switching to be fast".
I'd rather have a site that actually works, myself, than one that doesn't work but is marginally faster than one that does. And multi-page isn't needed to cache pages in the document and/or pre-fetch pages. Multi-page does nothing that can't be done with single-page and some options, but it does bring with it a maintenance headache. (As does the use of IDs.)
It is true that the documentation stinks, and has only gotten worse. So, there certainly is a need for a good book or two.