Dealing only in KPH is sufficiently hard for someone like me, raised on MPH, that even if I switch my GPS / speedometer to KPH, I still have to do the mental conversion back into MPH to get a feeling for "how fast is that".
A couple of weeks of driving in a KPH based country and you'd get over it. It just takes a little experience is all.
So what's with all these people estimating weeks to learn such things? I remember years back, when I took my first trip to the UK, and people talked about the weeks it'd take to learn to drive on the left side of the road. I found that, by the time I'd got a few blocks from the airport, maybe 5 minutes, I'd already stopped consciously thinking about it, and just drove like the others around me. Similarly with the speedometer in the rest of the world; all it took was matching the numbers on the highway signs to the numbers on the dial, which worked right from the start, and felt natural after a few minutes.
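For what it's worth, the mental conversion the grandparent describes is just a constant factor. A minimal sketch (the function names are my own, but the conversion factor is the exact international definition of the mile):

```python
# 1 mile = 1.609344 km exactly (international mile), so the
# conversion between the two speed units is a single multiply/divide.
KM_PER_MILE = 1.609344

def kph_to_mph(kph):
    """Convert kilometres per hour to miles per hour."""
    return kph / KM_PER_MILE

def mph_to_kph(mph):
    """Convert miles per hour to kilometres per hour."""
    return mph * KM_PER_MILE

# The common rule of thumb "mph is about 0.6 times kph" checks out:
print(round(kph_to_mph(100), 1))  # 62.1
print(round(mph_to_kph(60), 1))   # 96.6
```

So a 100 km/h limit is roughly 62 mph, which is about as much "conversion" as anyone needs behind the wheel.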
The only real difficulty I've found with such things is learning the words in a different language. I've found that that can actually take a few weeks, though the vocabulary on traffic signs is generally so limited that it's not all that difficult a task. But I haven't seriously tried learning the terminology on signs in China or Japan yet. That might be a bit more of a challenge than, say, Finnish or Russian road signs.
But in my country, we order beer as a half-pint or a pint, and everyone knows what they're getting.
So which country do you live in, where this is true? Here in the US, and across the Pond in the UK, the stated size of a beer glass is usually the capacity to the brim, but the amount in the glass is less than that. Off and on, there has been a bit of a fuss over this shorting in both countries, and there have even been laws passed outlawing the practice, to little avail. If you're living in a country where beer is measured in ounces or pints, you're almost certainly getting short measure in any bar or restaurant. It's only likely to be accurate if they're using the sort of glass with a visible "fill line", and those are not common.
So where do you live, that you get the advertised measure in glasses of beer (or other drinkables)? Curious readers want to know.
(We might note that it is obviously silly to require that drinking glasses be full to the brim. That would mean slippery floors from the spilling as the glasses are carried to the table. But that doesn't justify lying about the amount that you're delivering to the customer. It just means that glasses should be made slightly oversized, preferably with a fill line near the top.)
Also lumber. Everyone knows a 2 by 4, but say that in metric. That'll probably be easy to fix though.
Yeah, maybe, but we also know that this is an obvious case where vendors are legally permitted to defraud the customers by giving short measure.
And yes, I do routinely cut wood to within an accuracy of 1 mm. Calling a piece a "2 by 4" is OK for informal purposes, I suppose, but in addition, the store should be required to display the actual measurements in mm. If I think it's going to need some serious sanding, I can take that into account myself.
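As an aside on what "display the actual measurements" would mean: a nominal "2 by 4" isn't 2 by 4 inches anyway; the dressed (planed) piece is 1.5 by 3.5 inches. A minimal sketch of the conversion, using the common North American dressed sizes (the table entries here are standard values, but the function and dict names are my own):

```python
MM_PER_INCH = 25.4

# Nominal name -> actual dressed dimensions in inches
# (standard North American softwood dimension lumber).
DRESSED_SIZES_IN = {
    "2x4": (1.5, 3.5),
    "2x6": (1.5, 5.5),
    "1x4": (0.75, 3.5),
}

def dressed_mm(nominal):
    """Return the actual dressed size in whole millimetres."""
    thickness, width = DRESSED_SIZES_IN[nominal]
    return (round(thickness * MM_PER_INCH), round(width * MM_PER_INCH))

print(dressed_mm("2x4"))  # (38, 89)
```

So the honest metric label for a "2 by 4" would be something like 38 x 89 mm, which is indeed what stores in metric countries tend to print on the shelf tag.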
Because under US law, credit card companies are liable for the cost of credit card fraud above a nominal amount, they have strong incentives to continuously search for and attempt to block fraudulent transactions. I don't think there is any comparable legal driver that forces health providers to bear the financial cost of similar fraud from patient info loss, nor are they necessarily "in-line" to see the exploitation of information stolen from them.
Perhaps the significant difference here is that, with credit cards, the main usage is bogus charges that have an immediate monetary value. With the medical information, there's no specific dollar amount that's been "stolen"; the value is in who's willing to buy the information. This doesn't result in any specific charge against the medical corporation or the patient, so the financial system considers its value to be zero.
This is also what might make it difficult to fight. You can't just say that the medical corporation is responsible for any charges over $50, because there are no such charges in the patient's name. The only effective way of fighting the problem will involve the (mis)use of the medical data.
I've seen this comment from some Scandinavian sources, to explain an interesting curiosity: In recent decades, a lot of medical "advances" have come from Scandinavia, and what they've mostly had in common is that they started with study of accumulated medical records, what the statistics folks (including my wife) call "data dredging".
The interesting part of this is the explanation of why this data dredging happens so much in Scandinavia. The explanation seems to be that the governments there didn't try to make the medical records very secret. Rather, they imposed serious financial repercussions for "misuse" of the data. Thus, here in the US, expensive medical problems (e.g., a positive HIV test) typically result in loss of job and permanent unemployment. In Scandinavia, firing an employee because of expensive medical problems can result in serious fines against the employer. So employers have an incentive to find good medical help for employees instead of firing them. (The fact that medical services aren't charged to employers also helps.)
I haven't seen much discussion of this outside of Scandinavian sources, though, and there might be a lot more going on. But there is definitely a problem in the US, where medical data is a valuable commodity that can be used for all sorts of anti-social (and anti-individual) purposes for profit. But the medical industry doesn't suffer when this happens, so they have little incentive to "waste" resources preventing it.
How do you propose it gets around blackouts? If it did, you would have the entire epicenter relying on fringe cell phones for service. It's like having an entire town piggybacking on a handful of connections. Those who are in range will have their batteries toasted before you could say YouTube.
Well, one thing that might help is a "social responsibility" campaign. Publicise the fact that this is an inherent problem, and that the solution is for as many people as possible to be prepared with extra batteries, portable battery packs, etc. Explain to people that the system will only work if enough people have the extra power in their pockets to keep the messaging system alive, and that, in an emergency situation, they might avoid using sites like YouTube.
Granted, some people will enjoy leeching off the rest of us. But it's possible that, by calmly explaining the situation to people, most of us will do what it takes to keep the system up and running.
Don't we already have a tech called bluetooth for that?
Bluetooth doesn't handle phone calls or SMS. That, and it's generally just a goddam trainwreck - though I admit that, on occasion, it will actually work.
The nearest thing I know of is the Serval project.
The OLPC (One Laptop Per Child) project had this capability from the start. Their normal setup is a flock of laptops with only wireless comm hardware, all talking to and relaying messages for their neighbors, plus a wired machine somewhere in the area that provides access to the outside world.
Actually, this was the intended "normal" situation back in the ARPAnet era. It didn't make sense to the military funders to rely on a single relay machine that would be an easy target. But suppliers of the commercial Internet never liked the idea, because they've always wanted to charge customers for every device with access. A flock of devices using a single member's Internet access was explicitly banned at first because of this. As they slowly realized that they couldn't continue to hold the Internet back that way, they switched to the approach of software that hands packets to a single router/gateway box, and not directly to any neighbor.
We still see this very clearly with email, which on most customers' gadgets requires sending a message to an email "server" (typically on an ISP's machine), rather than directly to the target machine. If members of your family want to send messages to each other's gadgets, do the messages go directly to their machines? Or do they go to an address on some company's machine, which tells the recipient that they have a message? This isn't accidental; it's done that way so that the company has access to all your messages, and you have to continue to pay them or lose the ability to send messages to people within your own household.
This isn't necessarily silly. I live in a house with 3 floors (plus a basement).
"When I've looked at hospitals, and when I've talked to other people inside of a breach, they are using very old legacy systems — Windows systems that are 10 plus years old that have not seen a patch."
No surprise there; that's about how long it takes to process all the paperwork (mostly due to HIPAA) to get a new system approved for use inside a hospital. The new Windows 8 purchases should be coming online sometime around 2024.
If you want to install a patch, the approval process starts all over from scratch.
Woohoo! I got informative + insightful + flamebait mods for my message! That's one of the mods I've been trying for for years (plus the rare chance to use "for" twice in a row).
Now to see if I can achieve the ultimate: getting "funny" along with flamebait and (informative or insightful). Preferably all four, though I'd wonder if that's actually achievable if you start with 2 points.
If they'd install a decent browser (in addition to the crippled browser that came with their tablets)
That would require buying a second non-iPad tablet on which to run a non-crippled browser. Because the iOS API lacks support for runtime generation of executable code, all browsers in Apple's App Store are either Safari wrappers or, in the case of Opera Mini, remote desktop viewers.
So which case describes Chrome? I have it installed on an iPad, and it lacks most of the "walled garden" flakinesses of Safari, pretty much doing things the way browsers on non-Apple systems do them. Thus, Safari balks when you try to get it to display a PDF in a page, but Chrome does it like you'd expect, and sometimes even sizes it to its container correctly. Safari can display PDFs OK if it's the only thing in a tab, but if you try to surround a PDF "object" with HTML, Safari flatly refuses, showing the "not implemented" message instead. I've taken to including a link to the PDF inside the "not implemented" failure message, and clicking on that link works fine, showing that Safari is quite capable of displaying such files. It just doesn't like to do so inside a web page with, say, additional information about the PDF. But somehow Chrome implements both cases. Google finds a number of complaints about this, and comments that nobody seems to be able to find a fix for Safari's flakiness in this case (and many more).
Tablet focused design has ruined the web
Nah; the people who still use the web haven't seen much of anything "ruined". They see the web they've long seen, just with a larger set of web sites each month, and maybe a few new features in their browsers. It's just the suckers that succumb to the vendors' enticements into their Walled Gardens that think things have changed. If they'd install a decent browser (in addition to the crippled browser that came with their tablets), they'd see that the web is chugging along as it always has, some parts of it good and other parts not so good.
The fact that the marketers have pushed their New! Improved! products for small, portable computers doesn't mean that the old products have suddenly lost their capabilities. It just means that some of the customers have been persuaded to switch to other things that may or may not be any better.
The biggest problem with "the web" from a tablet user's viewpoint is all the old sites built by "designers" who haven't yet learned that their sites need to work on whatever screen the visitor has, including the small screens that so many people are carrying around now. The days are past when a site designer could design only for people with screens as big as the fancy one sitting on the designer's desktop. If your site doesn't work on the small screens, you won't attract many of the billion or so people who weren't using the web 5 years ago, but are now.
This isn't the fault of "tablet focused design"; it's a problem caused by designers' contempt for people with such small, cheap and portable equipment. They've been essentially anti-tablet since before tablets even existed. But they're slowly coming around, as they slowly realize how crappy their sites really are, from the viewpoint of most newcomers to the Internet.
(Actually, the web has always worked a lot better if you consciously avoid sites created by "designers". Those built by people with an engineer's concern for usability have always been a lot more useful, and they tend to work pretty well on tablets, phones, etc. The "designers" usually don't think they look pretty. But people continue to use google a lot, for example, despite its blatant lack of "design". Or maybe because of it.)
I am wondering how a company that has all the money and talent can't catch a bug like this. Their test surface is laughably small compared to what Android or Windows has to support. What is going on there? What process are they using?
It's a well-known software phenomenon: The time it takes to build and debug a program is proportional to the number of people involved. Some argue that it's closer to the square of the number of people (due to the number of interactions in the graph connecting the portions written by different programmers). If you want a bug-free app developed quickly, give it to one person, and make sure that one person understands the problem well.
Actually, a more fun analysis says that the time is really just a function of the (square of the) number of managers managing the development team. But that might be taking cynicism a bit too seriously.
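The "square of the number of people" claim comes from counting pairwise communication channels: among n contributors there are n*(n-1)/2 possible pairs, which grows roughly as n squared. A minimal sketch of that arithmetic (the function name is my own):

```python
def channels(n):
    """Pairwise communication channels among n people: n choose 2."""
    return n * (n - 1) // 2

# One person has no coordination overhead at all; a modest team
# already has dozens of possible pairwise interactions.
for n in (1, 2, 5, 10, 20):
    print(n, channels(n))
# 20 people -> 190 channels, ~100x the overhead of a 2-person team.
```

Which is exactly why "give it to one person who understands the problem" wins so often: channels(1) is zero.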
They tried to ban it in north-western schools recently because "it's racist". Obviously the people trying to ban it were idiots.
"If I do not want others to quote me, I do not speak." -- Phil Wayne