All components have a cost, including the software. Let's say LG can include CrapKeyboard 1.0 for free, and GoodKeyboard 3.7 for $0.05/unit. Guess which one they're going to include?
Yes, phone pricing is broken down to that level. The cost of the supported software is a lot higher than the cost of the no-longer-supported software, because they're still paying the developers to support it. As long as CrapKeyboard used to work at least halfway decently (and it must have, because it was in the old production line), throw it in there.
It's a pretty simple explanation, actually.
Be cautious in what you claim. Dropping it in the toilet isn't a maneuver most of us would consider "smart".
No, but receiving a "free" phone and complaining that the free market isn't forcing the vendor to fix its shortcomings is kind of disingenuous.
In the olden days, we'd have said he's "looking a gift horse in the mouth."
Of course they were failing. They were failing in 2011, and they knew it, and in case they didn't know it, their CEO told them so. Go re-read their CEO's Burning Platform memo in case you had forgotten how badly they were doing.
In 2007, Apple stepped in and not only did they define a new high-end smartphone market, they owned it, and shared it with nobody. Nokia went from sharing the top-of-the-line smartphone market with Blackberry to being a middle-of-the-road smartphone company, and they did it without moving a step. About a year later Google delivered Android, which redefined and then completely dominated the low-end smartphone market. Meanwhile, Nokia delivered nothing new. Nothing. They feebly tried to do something with Maemo (and later MeeGo), but couldn't even ship it. This was about 2009. And Android makers didn't stop there, either. The Galaxy S (as you mentioned) came out in 2010, pushing Android up out of the low-end smartphone market and into Apple's market, and that was the herald for Nokia's decline. In 2010.
Meanwhile, MediaTek had shipped a reference design for low-end phones in 2008. Any plant in Shenzhen could now produce a cheap handset for about $10, so they did, filling shipping containers with the cheap phones that have become ubiquitous in the developing world. Nokia couldn't ship a cheap phone for twice that price. When you're buying cheap phones, you're going to pay the lowest price - so the cost-conscious consumers immediately abandoned Nokia's low end.
All this happened from 2007 through about 2010. Elop's memo came out in 2011, just after Android sales had exceeded theirs for the first time, signalling the end of Nokia's relevance in the marketplace. Nokia's marketshare continued to decline, as they shipped nothing noteworthy. By last year, Nokia was barely remembered as that company that used to make phones before iPhones came out.
Microsoft drove them into the ground at high speed.
That is completely wrong. Microsoft bought them in September of 2013. According to my calendar, that was last year. "Failing" is a polite word for the dire straits Nokia was in at that time. Microsoft didn't drive them any place they hadn't already gone themselves. Perhaps you're confusing the sale of Nokia with the agreement Nokia made to adopt Windows Phone for the cash they needed to keep the lights on. Nokia had already failed to deliver Maemo, which had been in the works since before the introduction of the iPhone. Nokia was incapable of delivering a smartphone OS. They had four years and couldn't do it. MeeGo might have eventually done something for them, but it would have been an even smaller market than Microsoft could deliver.
Let me repeat: Nokia needed Microsoft's cash just to stay in business, back in 2011. That is not the sign of a healthy company.
All that and I still have to say the Microsoft phone is not a terrible device. Nokia put a really nice camera in there, the battery life is good, the screen is clear, and the device is really well made. But the Windows app store is sadly lacking, and Cortana is certainly not yet at the caliber of Siri. It's still just an also-ran in the phone market.
Microsoft had nothing to do with Nokia's decline. Nokia did that to themselves by standing perfectly still, while the entire market passed them by on both sides. Microsoft just picked them up for the scrap value.
Microsoft has done some really brilliant things as of late. They've wholeheartedly adopted automated testing for everything. I don't know if they have any product teams that aren't Agile, or aren't doing test driven development. I recently asked a product manager about his product's defect backlog, and he shot me with a cold stare: "We don't have any known defects in our product. As soon as a bug report arrives, the entire team drops what they're doing, and within 15 minutes a developer is working on repro'ing it, and it's fixed within a day. These are very rare occurrences." This was for a million line shrink-wrapped product.
Although it's taking them a long time to turn their teams around, Microsoft finally knows how to engineer code right, and they are quite willing to share with anyone willing to listen. But too many of their clients don't listen, too many of their vendors and suppliers don't listen (driver bugs, etc.), too many of their own internal teams are still dragging legacy code bases forward, and they still have a long history of bugs that we all remember. Another problem they have is economic: their primary competition is their old products, like Office 2007, which are good enough for most businesses and students. They really want to get everyone on their Azure cloud, using Office 365, Live, and OneDrive, and to rent computing resources from them, and that's driving a lot of their products in a direction that's unnatural for their consumers.
Their marketing people haven't helped. Windows RT? Really, they had to emulate Apple's walled garden? The closed iOS ecosystem is about the worst thing Apple ever did to their customers. The Apple tax sucks 30% from every dollar spent on the platform, and there's virtually no escape. And because we all know it sucks, we won't willingly jump into it again - so Microsoft loses even more.
Their forays into other platforms have been abysmal: Ford's SYNC is a crime against drivers. They bought a failing phone company for their hardware, turned out walled-garden phones, and nobody showed up. Their previous attempts at embedded systems make people WinCE. And because they start everything out as closed source, and try to contain their own stuff, they see every product as entering a battle to the death, instead of an opportunity to cooperate. That got them a long way, and made them a lot of money, but now there are good alternatives, and nobody gives a damn anymore. The stuff they're producing now will all arrive way too late.
I don't think you considered the depth of the question: what is the risk? Could your device contain credit card information? Could it have your social security numbers? Could it have a way to access your bank account? Your retirement accounts? Your brokerage accounts? A lot of your personal finances could be at risk. Are you wealthy enough to be worth kidnapping, and if so, could the device provide access to your family's panic room, or to your alarm system? What about medical information? Assign dollar values to those (it's certainly nebulous, but you want to end up with some kind of estimate) and add up your overall potential for loss. Now, multiply by the likelihood your device will be compromised - you might estimate that tens of millions of devices are recycled each year, and you might figure a hundred thousand are handled by people who would like to steal from them, giving you roughly a 1 in 100 chance of having your device compromised. Would you bet the information above on those odds for $300?
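To make that back-of-the-envelope concrete, here's a quick sketch in Python. Every figure in it is an illustrative assumption (the dollar value especially), which you'd replace with your own estimates:

```python
# Back-of-the-envelope expected-loss estimate. All numbers here are
# illustrative assumptions, not real statistics.
potential_loss = 50_000          # assumed $ value of the accounts/data at risk
recycled_per_year = 10_000_000   # "tens of millions of devices recycled"
malicious_handlers = 100_000     # "a hundred thousand handled by thieves"

p_compromise = malicious_handlers / recycled_per_year   # roughly 1 in 100
expected_loss = potential_loss * p_compromise

resale_value = 300
print(f"Expected loss ${expected_loss:.0f} vs. resale value ${resale_value}")
# Selling only makes sense when resale_value exceeds the expected loss.
```

With these assumed numbers the expected loss is $500, so taking the $300 is a losing bet; with a $1,000 exposure it drops to $10, and selling looks fine.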
Maybe you don't think you have very much worth stealing. Perhaps you're young, and don't have a retirement account, and not much in the bank, so your financial risk is only $1,000. Maybe you don't see any risk at leaking your health data. And maybe you're supremely confident in your abilities to wipe the flash RAM. Good for you, take the $300 and spend it. For you, it's a solid bet. For those of us with more at risk, it's not such a sure thing; even if I am confident in my skills at wiping these devices, what if I make a mistake?
Well, it confounds it at any rate. But completely filling the device's memory 33 times in a row is pretty likely to overwrite everything at least once or twice - even the hidden "failure reserve" space, if it's included in the wear leveling (and if it's not, then it doesn't yet hold any sensitive data, so there's no problem). Gutmann's values may be irrelevant to today's storage media, but that many repeated rewrites of anything still mostly does the job.
If you were an engineer in charge of destroying data printed on paper, and you decided on shred then burn then stir the ashes in water, how many times would you repeat the cycle in order to be sure the data was destroyed? Hint: if your recommendation is greater than one (in order to be pretty sure), check your job title, because you're probably Dilbert's pointy-haired boss.
Drives today work almost nothing like the drives of 20 years ago. They don't paint bit-bit-bit in a stripe, they encode a set of bits in every pulse of the write head. Alter it a tiny fraction, and it becomes a completely different set of bits, one that error correction won't be able to overcome.
Old disks were recoverable because the mechanisms weren't precise, and the data was written with big chunky magnets to assure it was readable. All that slop has been engineered out in order to achieve today's remarkable areal densities. One overwrite is all it takes - as long as you're overwriting it all.
What is the value of a used device? Compare that to the risk of the data on that device going to a malevolent third party.
I've had people saying "oh, look at all these hard drives, you should totally sell them on ebay and I bet you could get $10 apiece for them!" Adding up the time I would waste running DBAN or sdelete or whatever, and keeping track of which ones have been wiped, and double checking to make sure everything is really gone, it's not worth the time.
A big hammer and a punch, driven deeply through the thin aluminum cover and down the platter area, takes about a second and leaves nothing anybody would bother trying to recover. You can quickly look at a drive and say "yes, this drive has been taken care of", or "hey, there's no jagged hole here, this drive isn't destroyed." The aluminum cover contains the shards if the platters are glass. I don't care who handles them after destruction. There's no worries about toxic smoke. And if you have to inventory them before shipping them to a recycler, the serial numbers are still readable.
Smashing a phone wouldn't destroy the data on the chips, so a fire is a somewhat safer option.
The "scanner" portion of these devices is typically an embedded system that drives a hardware sensor, and speaks USB out the back side. You could probably open one up, solder a cable to the right points on the scanner board, and you'd have exactly the simple and transparent scanner you requested.
But because the business wants a truckload (no pun intended) of functionality out of these scanners, they need it to have more capabilities. First, it needs to be on the network, or it won't give them any benefit. Next, it needs to be multi-tasking so it can display alerts, etc. Its primary task may be to inventory the stuff coming off a truck, its other tasks may include assigning work items to line employees, displaying alerts on the supervisors' screens, punching the timeclock for breaks, and possibly even employee email. To a lot of businesses, a browser based interface lets them run whatever kind of functions they want, without the expense of continually pushing a bunch of apps out to a bunch of random machines. So taking all that together, embedded XP is one (bloated) way of meeting all that.
So while the scanner itself is simple, it's the rest of the hardware in the device that was infested with XP and other malware.
Don't forget the RF shielded optical fiber interconnects, for true fidelity at high frequencies, and a mellow bass.
What I think a lot of the utopian visions miss, as well as a lot of the posters here, is that the problems with programming are not problems with the tools, but with the code that these amateurs produce. Writing clean, clear, correct, modular, maintainable, tested, and reusable code is still a skill that takes time to learn.
Generally, most people understand following a sequence of steps to achieve a goal. They can follow a recipe's steps to bake a cake. Some can even write down the steps they took to accomplish a task, which is the beginning of automating it; but recording and playing back steps is certainly not all there is to programming. Almost anyone who can write steps down can then learn enough of a language to string together a dozen or even a hundred individual steps to achieve a goal: StepA(foo); bar = StepB(foo); StepC(foo,bar);
Once someone tries to add logic to their scripts, the resultant code is generally buggy, slow, difficult to maintain, impossible to test, and probably should not be put into production, let alone reused. What a professional software developer does is recognize the difference. He or she uses experience, skills, and knowledge to organize those instructions into small groups of functionality, and wraps them into readable, testable, reusable methods. He or she recognizes dependencies in the code, follows design principles to ensure they are properly organized, groups related methods into classes or modules, knows when to follow design patterns and when to break from them, groups related modules into architectural layers, and wraps the layers with clean, testable, usable interfaces. He or she knows how to secure the code against various types of attack or misuse, and how to properly protect the data it's been entrusted with. He or she understands validation, authorization, authentication, roles, sanitization, whitelisting, and blacklisting. And he or she understands the many forms of testing needed, including unit testing, system testing, integration testing, fuzz testing, pen testing, and performance testing, as well as tools to evaluate the code, such as static code analysis and metrics.
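As a toy illustration of that difference (all names invented): the same three steps a script-writer would bang out inline, instead wrapped in small functions with one clear, testable entry point:

```python
def clean(record: str) -> str:
    """Step 1: normalize the raw input."""
    return record.strip().lower()

def is_valid(record: str) -> bool:
    """Step 2: reject empty or malformed records (basic sanitization)."""
    return bool(record) and record.isalnum()

def store(record: str, db: list) -> None:
    """Step 3: persist the record (a list stands in for real storage)."""
    db.append(record)

def ingest(record: str, db: list) -> bool:
    """One readable, reusable wrapper around the three steps."""
    cleaned = clean(record)
    if not is_valid(cleaned):
        return False
    store(cleaned, db)
    return True
```

Each step can now be unit-tested in isolation, and `ingest` can be tested end to end - which is exactly what a hundred bare statements strung together in a script can't offer.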
On the other end of the developer's life are the inputs to the processes: requirements, stories, use cases, usability, scalability, performance. They know that following certain development methodologies can make a great deal of difference to the software's quality. And then there are the realities of all the non software development issues: equipment, firewall rules, IDPs, networking, vendor contracts, software licensing, hosting, distribution, installation, support, bug tracking, and even sales.
Tools can help with all of these steps, but as you pointed out, having a word processor does not make one a poet.
Really? Because I don't seem to remember the purges that took place when Reagan took office, or Bush, or Clinton, or Obama. I don't remember when they arrested the political dissenters from the opposition parties, hauled them out of Washington and trucked them up to camps in North Dakota where the majority froze to death, or shot them in the basement of the Lubyanka after pronouncing them guilty in a secret "trial". Perhaps that all took place when the Ministry for Information took razor blades and cut out the encyclopedia pages for Jimmy Carter, and extended the entry for the Bering Sea to compensate, because we can't really trust our history books.
Go read Mitrokhin's books. Read the KGB's own history, stolen from their own archives. Compare it to what the USA claimed actually happened, and to what the USA claimed was Soviet propaganda. Mitrokhin's papers serve as independent corroboration that essentially everything the USA said about the Soviet Union's "active measures" was true.