So I switched to MacOS. If it's not going to be transparent and I have to spend time digging around and doing system-level stuff in the non-Unix way, why not do it on a system that's at least got great UX and lots of officially supported hardware and applications?
As you note, the problem with batteries is there's just so much undifferentiated import crap. Lots of it has fancy packaging.
Anker is no doubt trafficking in generics as well, but they at least have their own design department; even goods like their Qi chargers that are made of glass and metal have logos embedded in them and don't look like everyone else's generics. And when I posted a lukewarm review on Amazon ("Seems to work, nothing impressive, but good that it works.") about a phone battery, a rep with native English contacted me immediately and asked if there was anything they could do or offer to improve my experience from lukewarm to stellar.
So that, at least, is indicative of a company that cares. Note that I don't work for Anker, but since that experience (the phone battery was my first purchase of their products) I've purchased a number of subsequent products, and none of them has performed more poorly than the original OEM equipment. That's at least something in this world of mostly fake batteries.
my word brain is running behind by a generation.
I have an R6300 (much less expensive, 90 percent of the power) and routinely saturate our 802.11n channels using DD-WRT, including to the outside world (connected via Google Fiber, which includes its own router, but a router that's significantly less cool). Before we had GF, we used the DD-WRT QoS features heavily and they were absolutely perfect.
The router is handsome, has been rock solid and running strong for many months now, and cost only $100 on sale at a Best Buy retail store. Prices may be even lower now, particularly when sales are on.
so spare me the politics.
It was "Why is Sony failing?"
The reason that Sony is failing is that you can buy (or, in your terms, "rent") more content, more accessories, more apps, more of everything, and do so more conveniently, from competitors' products. The device itself is not the failure; it is that the usefulness of the device is diminished by the relative lack of things to do with it, and the lack of ways to do them conveniently.
To answer the posed question, it matters not at all what you think of the big picture; it is simply that whatever Amazon offers, Sony offers *less* of it, not in the device hardware, but in everything that surrounds the device hardware, in the ways that the device hardware can be used. Sony's hardware is thus less useful, not for reasons relating to hardware or UI design, but for reasons relating to business relationships, customer-facing opportunity structure, and so on.
The politics of DRM and so on is an important discussion to have in our political life, but the fact that Amazon offers DRMed books has little to do with why Sony is failing (Sony, of course, offered the same—just fewer of them, with fewer ways to get them on the device, and fewer accessories to use with it).
Yes, the community is the product—it is also the product that the community consumes. Yes, publishers and manufacturers skim value off the top of that circular transaction. That is, as you point out, the business model.
And what I am saying is that that is the *dominant* business model right now, and that Sony sucked at it in comparison to Amazon or even to Barnes and Noble.
They're still working 20 years behind everyone else, caught in a love for industrial and UI (as opposed to UX) design.
They don't get the "ecosystem" concept. In fact, they actively fight it while everyone else is trying to build it.
Everyone else has known for a decade at least that every product is part of a service.
Sony is still busy thinking that every service is part of a product.
Others: The product is one of our service's features/facets.
Sony: The service is one of our product's features/facets.
So their devices are technically great, but too often they come narrowly bound to half-assed services that have seen only enough investment to let the product ship with the basic claim that it's functional. As a result, you can't actually use their products for nearly as much, or nearly as well, as competing products. The content isn't there. The accessories aren't there. The third parties aren't there. The fellow users interacting aren't there. Other devices may be technically inferior, but they have a large ecosystem of content, enthusiasts, third-party developers, accessories, etc. behind them.
While everybody else is practically begging the world, "Please, community! Embrace our product and take it in organically emerging directions!" Sony is busy saying, "Get lost, community! We're in control here; stop trying to take this in non-approved directions!"
Other tech companies would kill to get a community going. Sony would kill anyone who claims to be part of a "community" around their product.
I am interested in watches.
Whether or not a smartwatch is worth it is an open question. If one can provide something that I actually think I need, then sure. I've outlined a list in comments on previous stories, for the quasi-trolls that were about to lash into me for being so general.
But I wear an automatic mechanical beater right now—specifically because it's virtually indestructible, represents only a minor investment (and thus financial risk), and requires no maintenance, attention, or battery-swapping. It's accurate to about 2 minutes per year, which means that about once a year I tune the time on it.
Most of the stuff that smartwatches are currently said to do, I either don't care about (fitness tracking, health monitoring) or currently use a smartphone for with far less hassle (bigger screen, more natural UI), so it'll be a stretch. But I'm open.
Journals aren't arbiters of Truth (capital T), they're just what they say they are: JOURNALS of the ongoing work of science.
Someone records that they have done X in a journal. Because said journal is available to other scientists, other scientists get to try to make use of the same notes/information/processes. If they are able to do so, they journal it as well. Get enough mentions in enough journals that something works, and we can begin to presume that it does.
If only one mention in one journal is ever made, then it is just another record in another journal of another thing that one scientist (or group of scientists) claim to have done.
Peer review is just there to keep journals from expanding to the point that there is too much for anyone to keep track of or read. It is emphatically NOT the place at which the factuality or truthfulness of notes/information/processes is established once and for all. That happens AFTER publication, as other scientists get ahold of things and put them through their paces.
Seriously, this is all exactly as it is supposed to work. I have no idea why there is such hoopla about this. There is nothing to see here. One group journaled something; other groups couldn't replicate it; they no doubt will reference this failure in future articles. "What happened" is recorded out in the open for all of science, thereby expanding our pool of knowledge, both about what consistently works in many situations and about what someone claims has worked once, in one situation, but appears either not to work in the general case or to require more understanding and research.
Again, there is nothing to see here. Let's move on.
this would seem to be moot to me. Humans have only been here for the briefest of very *recent* moments, but we do have a particular interest in keeping earth habitable for *human* life.
Assuming your numbers are correct, it still doesn't do us any good to say that gosh, a few million years ago there was a lot more carbon dioxide, if for the purposes of *human* life a particular (and lower) level is necessary.
The goal is for us, not for the earth itself, to survive.
1) Monitor and keep a continuous chart of blood glucose, sleep cycles, blood pressure, pulse rate, and blood oxygenation. I don't even know if the tech is viable for these, but they'd interest me.
2) Be part of a payments system that actually gets traction out there. Let me import all of my cards of various kinds and then provide them wirelessly to others without having to pull out a card (and/or a phone with a specialized app).
3) Same thing, but hold all of my tickets for entry into events.
4) Connect to a voice-to-text service to enable personal logging/journal-keeping just by talking at it.
5) Find a way to operate clearly and reliably using gestures and voice recognition rather than touch input when desired.
6) Have built-in GPS and voice navigation.
7) Have a built-in high-resolution camera to enable convenient visual capture of information.
8) Do all of this in a cloud-based manner so that everything the watch does/tracks is available from all of my other tech devices.
9) Have a between-recharges time measured in weeks.
I don't know, it would have to be pretty freaking fabulous. But there are some basic things that I *don't* care if a smartwatch does, and those are probably more telling. I absolutely do not care about doing these things on a smartwatch:
1) Linking it to my phone via Bluetooth
2) Telling time
Number 1 in particular is a non-starter for me. Battery life on phones is already too short. And phones are the devices that I use for web, email, and other informational tasks on the go, because they (not a smartwatch) have screens suitable for reading/editing. I need them to last as long as possible, and I have no interest in duplicating their functions on a smartwatch. So I refuse to leave Bluetooth enabled on my phone all the time just to get some additional "watch" features.
It needs to be a "standalone" device in the sense of no other devices needed for it to operate normally, but a completely cloud-integrated device in the sense of "but I can access everything it does and it can access everything I do on my other devices over the network."
Number 2 is also pointedly interesting. I don't care if something on my wrist can tell time. Social "time" as a concept is more ambient than ever. Everything has a clock on it. Your computer. Your phone. Your thermostat. Your radio. Your car dash. Every ticket machine of every kind, from movies to transit to events. Public spaces and the sides of buildings and billboards and retail shop signs. I don't look at my wrist or my phone to know what time it is. I do a quick visual 360 and, in general, I find what I'm looking for, wherever I happen to be. A "time-telling device" is frankly a bit 19th/early-20th century at this point.
maintainability and "correctness" from the CS-class perspective aren't important. Are the data and the data processing valid, and did the job get done as quickly and as cheaply as possible? That's all that matters.
the very ruthless and very rich tell us that money doesn't solve problems.
Well, you already got everyone else's money after being absolutely driven to do so for decades. Now you tell us that having tons of other people's money is no good anyway when it comes to the really important stuff. Hmm...
So rather than sit on a pile of billions that you've tied up after getting it from other people, just give it back if you've now learned that it didn't do all that much good in the first place. No? Well then, you're either a liar or a hypocrite.
I do this all the time in my line of work. Someone hands us an old data dump: 2 million lines in a messy CSV file with dozens of columns. We benefit if we can make use of it--but we have to clean it up first.
It's a one-time job--script something up to process the data into cleaner data and format it as we need it to make use of it ourselves. Then, toss the script away.
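Something like this minimal sketch, say, in Python (the file names, columns, and cleanup rules here are invented for illustration, not from any real job):

# One-off cleanup of a messy CSV dump. Everything here is disposable:
# the file names, the columns we keep, and the cleanup rules are all
# specific to this one job.
import csv

KEEP = ["id", "date", "amount"]  # the handful of columns we actually need

with open("dump.csv", newline="", encoding="utf-8", errors="replace") as src, \
     open("clean.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=KEEP)
    writer.writeheader()
    for row in reader:
        # Normalize whitespace and stray quotes; good enough for one use.
        cleaned = {k: (row.get(k) or "").strip().strip('"') for k in KEEP}
        if not all(cleaned.values()):  # drop rows missing anything we need
            continue
        writer.writerow(cleaned)

No tests, no long-term error handling: once the clean file exists, the script has done its job and can be deleted.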
There's a big difference between "software engineering" and "coding." Coding skill is useful in all kinds of edge cases that mainstream applications don't handle, and when you're just looking to finish an edge-case task, you don't need to think about correctness or maintainability. You certainly don't want to pay a contractor $$$ to cook something up if it's only three dozen lines and will only be used for two weeks. For those cases--the "who cares if it works six months from now, we just need it to do something specific for us this afternoon" case--you just need it to work once or twice, during the current project, as a part of the *process* of getting it done, not as an *asset* to monetize over time.
I totally get where he/she is coming from.
what, exactly? Calendars are synthetic tools used to synchronize human activity. That is their one and only value. They do not exist in nature; nature synchronizes with itself without our intervention.
But we need a shared, common way to refer to particular dates in time so that we can refer to records and events retrospectively and arrange for future events prospectively—together, in a coordinated fashion.
So your proposal replaces one time measurement system on which everyone is more or less on the same page, in which the representation of a particular moment in time is broadly accepted across a large swath of humanity...with another system in which, across that very same swath of humanity, a moment in time can be represented in multiple ways.
This would seem to reduce, not increase, the value of a calendar for all practical intents and purposes.
This proposal is most likely to catch on (well, let's be honest, it's never likely to catch on) in advanced industrial/post-industrial societies where the resources and level of education needed to make use of it are in place. So you're proposing to introduce extensive new ambiguity in timekeeping into precisely the population in which there is currently the least ambiguity in timekeeping.
Again, seems ass-backward to me.