Fortunately, there are already many well-recognized accreditation exams in the vocational education world. Many more are bound to spring up in the future, since they likely generate more revenue than they cost to administer. Once such an exam becomes recognized within its industry as trustworthy, it no longer needs the blessing of any accreditation agency.
So let's say that you've developed a rigorous certification exam in some advanced Python programming techniques. Every additional person who takes your exam makes you money, because most of your expenses were sunk into developing the exam, which is already done. Administering and grading one more test costs you far less than what the test-taker pays. So it's in your interest to have as many people as possible take your test: you make money, and network effects work in your favor. That gives you a strong incentive to encourage people to take your test, and the best way to do that is to put out a high-quality, free course on advanced Python programming. Many people will learn from it and never pay you. But others will learn from it, get really good, and decide they want a certification that documents just how good they got. Those people will be your customers. This kind of "everybody wins" educational scenario doesn't have to be a pipe dream, and it doesn't have to come from inside the entrenched educational system.
You're right that motherboards won't POST without memory sticks, but I don't see a good technical reason why that should be so. UEFI could be written so that, if it detects no usable memory, it POSTs using only the resources of the processor and its cache. (Firmware already does something similar: "cache-as-RAM" lets early boot code run before the memory controller is even initialized.) I mean, never mind 128MB of L4. Even the 6MB of L3 that modern processors have is larger than the entire system memory of our parents' first computers. It should be more than enough to run something as simple as UEFI.
It would also be rather useful. Instead of issuing beep codes as it fails to boot, a motherboard with a correctly written UEFI implementation could POST without working RAM, run diagnostics, and report exactly which subsystems are working, which are not, and what exactly is going wrong. I really think this would increase everyone's system-building confidence and give the manufacturers who make it happen a leg up in the market.
I agree. This is what I would recommend if I worked for them: make the "frame" of the controller standard, allow adjustment in maybe one or two directions, but then make it possible to replace the moldings with custom parts of different shapes and materials. Fancy people could even buy surfaces made of natural materials like ebony, leather, silk, and wool. Because, you know, sometimes you get bored of the tactile experience of plastic. I actually use my Dremel tool to make custom wooden moldings for my mouse. I have large hands, and I love the satisfaction of making the geometry exactly match what my hand naturally wants to do.
I am not a business type, but if I were, here is one thing I would consider: let people make a model of their perfect mouse, or perfect game controller, out of Play-Doh. Then have them take photos of it from all angles, enough that software can reconstruct the 3D shape. Send those pictures to some new business with standard parts, 3D-printing tech, and a CNC machine, which could produce a mouse in whatever material they like. It wouldn't have to be cheap. The world has plenty of rich people who are being underserved in the tech-for-the-super-rich market. For example, very rich people typically use an iPhone 5s, but so do many ordinary folks who ride the bus with me. Very rich people tend to use some normal Logitech or Razer mouse, just like me. And they use the standard PlayStation controllers. There is no Aston Martin or Maserati option for the tech devices that they (like the rest of us) probably interact with most often. That seems like a market gap waiting to be filled.
It's understandable that the employer has the attitude: "If the employee is fucking up, I want to know about it."
For example, if the company driver is tailgating, the employer wants some monitor to warn him. Basically, the goal is to not allow fuck-ups to go unacknowledged.
This, in turn, leads to a byzantine rulebook about what exactly constitutes a fuck-up. The employee learns the book, and monitoring serves as a backup system in case any rule fails to be mechanically followed. Pretty soon, every rule that was once handled by common sense, like "don't pick your nose in front of customers," becomes another line in the rulebook and a subject of monitoring. (This trend is driven largely by the vastly expanded possibilities for monitoring.) Such an employee simply stops using his or her own judgment, because there is nothing left for individual judgment to decide. You just memorize the rules and follow them, so as not to attract negative attention from the rule-enforcement system.
I think that this is a real trend in the low-wage labor market in the US, and it's moving up the payscale. The trend has some benefits, in that some people who cannot be trusted to use good judgment can be trained to follow explicit rules. In a regimented setting like this, such people become useful employees. But I honestly can't imagine having a job like this: spending eight hours going from assigned task to assigned task, performing each one "by the rules" instead of how I see fit. Hopefully the next step in this tragic progression is that the people in such jobs will be replaced by robots with modest AI. Once the rules are formulated explicitly enough, it won't be too hard to implement them in software. Hopefully the productivity gains of this transition will allow us to pay for the welfare of the displaced workers. At least then they could spend their time doing something fulfilling, like gardening, playing with Legos, or whatever.
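To make the point concrete: once a rulebook has been reduced to explicit, checkable conditions, enforcement really is trivially mechanical. Here's a toy sketch in Python; every rule name and observation field is made up for illustration, not taken from any real monitoring product:

```python
# A "rulebook" reduced to explicit, mechanically checkable conditions.
# All rules and observation fields here are hypothetical examples.
RULEBOOK = [
    ("no tailgating", lambda obs: obs["following_distance_m"] >= 30),
    ("obey speed limit", lambda obs: obs["speed_kmh"] <= obs["limit_kmh"]),
    ("seatbelt fastened", lambda obs: obs["seatbelt_fastened"]),
]

def monitor(observation):
    """Return the names of all rules the observation violates."""
    return [name for name, check in RULEBOOK if not check(observation)]

# The driver is following too closely but otherwise compliant:
violations = monitor({
    "following_distance_m": 12,
    "speed_kmh": 95,
    "limit_kmh": 100,
    "seatbelt_fastened": True,
})
print(violations)  # -> ['no tailgating']
```

Note that there is no judgment anywhere in this loop: the monitor just evaluates each condition and flags the failures, which is exactly the job the human rule-follower was reduced to.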
What's so sad for me about this whole story is that it took an amateur and an outsider to debunk this research, and only after an Ivy League school had set up an entire institute around this snake oil. Now they're saying "oops, sorry, our bad for trusting the bunk we read in the peer-reviewed journals," but why weren't experts in psychology doing this debunking themselves? And why didn't it happen immediately upon publication? Why didn't UPenn take a second look at this crap before devoting an institute to it? And why is the US government putting serious money into programs based on it?
All of this stuff will eventually get walked back in the coming backlash (one hopes), but the fact that psychologists themselves were unable to recognize the crap in their own journals should be a serious wake-up call for the whole discipline. If a psychology department wants an elite faculty, I say that at least two of its members should be highly skilled in data-analytic methods and devote most of their research activity to undercutting the work of others. Also, a lot more research money should go into replicating the experiments the field takes as significant. Unlike other people who post here, I do think that psychology is a real science, and one of the most valuable sciences we have. The fact that it's being done badly does not make it a pseudoscience. But it does highlight the urgency of drastic reform in the field. Like I said, this should be a wake-up call. Psychology departments of the world should all resolve never to let this kind of disaster happen again.
Somebody ought to cross ballpoint pens with coat hangers so that the pens multiply instead of disappearing.