If we have a machine-readable and human-readable paper record, then that record could be imaged and submitted to an independent system to verify that all the votes are accounted for, and that what was printed is what the machine read. It is up to the voter to verify that what is printed is what they voted.
What's more, the verification system does not need to come from the same manufacturer as the voting machine itself.
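A minimal sketch of what that independent cross-check might look like (the function name and record format here are hypothetical, just to make the idea concrete): rebuild a tally from the imaged paper records and diff it against what the machine reported.

```python
from collections import Counter

def cross_check(machine_tally, paper_records):
    """Rebuild a tally from imaged paper records and compare it against
    the tally the voting machine reported. Returns discrepancies as
    {candidate: (machine_count, paper_count)}; empty means they agree."""
    paper_tally = Counter(r["candidate"] for r in paper_records)
    candidates = set(machine_tally) | set(paper_tally)
    return {
        c: (machine_tally.get(c, 0), paper_tally.get(c, 0))
        for c in candidates
        if machine_tally.get(c, 0) != paper_tally.get(c, 0)
    }

# Hypothetical example: the machine reported a vote the paper trail lacks.
machine = {"Alice": 3, "Bob": 1}
paper = [{"candidate": "Alice"}, {"candidate": "Alice"}, {"candidate": "Bob"}]
print(cross_check(machine, paper))  # {'Alice': (3, 2)}
```

Because the check only needs the imaged records and the published tally, a second vendor (or anyone) can run it independently.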
Without a doubt, the web is a crapshoot of browser inconsistencies and standards. Imagine this hypothetical scenario: no more local apps, but you have a web server running locally, and installing an app installs it to that local server. Your entire desktop is in a browser. What are the problems with this? Many:

1. Serialization to HTML/CSS/JS is slow and unnecessary. The code path to put a red rectangle on the screen is absurd.
2. Those interfaces prevent direct access to local hardware.
3. Operational latency: the back and forth across the web-client/web-server barrier is prohibitive for many apps.
4. Start-up latency: downloading 3D textures, meshes, and other assets can take hours.
What is more likely to happen is we have local clients that use web content within the local client.
And then there is the "fog". I call private clouds the "fog" because it's around you, not up in the sky. The web does have ease of software distribution on its side. I think that eventually, when all this NSA stuff shakes out, we'll move to local clouds with self-hosted data as a way to protect and manage our data. There will be an industry-standard super server you install apps to, which will mimic local apps. Then, for your data to be accessed, rather than serving a warrant to your hosting provider, they have to serve the warrant to you directly.
Really, I see privacy and 3D (the coming virtual reality) bringing the focus back to local apps.
I'm showing my age here (38), but no talk is complete without mentioning the Dunning-Kruger effect. I have witnessed it first hand, even in myself. When you are young and full of vigor, you charge forth into the great unknown, eagerly writing lots of code. As you gain experience, the amount of code decreases but its quality rises. I've now taken to assigning each line of code a valuation as liability vs. added value, because in a few years some kid on the other side of Dunning-Kruger will come along behind me and change it without really knowing what it does. I also spend more time researching what I am doing so my execution is flawless. Experimentation is rare. In The Art of War, the battle is only the last step; the preparation is what really determines the outcome. Similarly, code is only written when the planning is complete. This is the difference between code monkeys and engineers.
But older engineers often get complacent. I too went through this phase. Many get comfortable with one technology (Java,
My advice is if you're old, don't get complacent, keep learning. If you're interviewing one of us veterans, keep an open mind. We might not be as cheap on paper, or outwardly enthusiastic. But if we're still in it after 20 years, we love what we do just as much as a new guy, and we will pay dividends in the long run.
So there could be two groups: those who work to improve their skills quickly distance themselves from those who don't. Of course, there will still be wide variance in skill within each group. I'm sure you can think of other ways it could happen.
No, I can't. I started out and I sucked. I eventually got better through experience. For the distribution to be truly bimodal, people would have to start in either camp A or camp B and end in the camp they started in, because if you transition from one to the other over time, any point in time will capture a group of people in between the modes. Now, you can argue that people don't spend much time between those modes, but you haven't presented any evidence for that. What's more likely is that you have GeoCities coders on one tail and John Carmack/Linus Torvalds on the other, with people like the presenter and me in between. And since I'm not going from bad to good instantaneously, the reality is most likely some degree of a normal curve, filled with people trying to get better at programming, or even just getting better by spending lots of time doing it and learning a little along the way.
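To make that argument concrete, here's a toy simulation (the growth model, rates, and career lengths are entirely made-up assumptions, not data): if everyone starts bad and improves gradually at an individual pace, a snapshot of the whole population fills in the middle of the skill range instead of leaving a bimodal gap.

```python
import math
import random

random.seed(0)

# Toy model: skill saturates toward 1.0 as experience accumulates.
# The rate and career-length ranges are arbitrary, for illustration only.
def snapshot(n=10_000):
    skills = []
    for _ in range(n):
        rate = random.uniform(0.05, 0.5)   # individual improvement rate
        years = random.uniform(0, 20)      # years spent programming so far
        skills.append(1 - math.exp(-rate * years))
    return skills

# Crude histogram over ten skill buckets: a truly bimodal population
# would leave a dip in the middle; a gradually improving one populates
# every bucket, because a snapshot catches people mid-transition.
buckets = [0] * 10
for s in snapshot():
    buckets[min(int(s * 10), 9)] += 1
print(buckets)
```

Every bucket ends up non-empty, which is exactly the "people in between the modes" the comment describes.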
For all your attacks on the presenter, your argument for a bimodal distribution sounds more flawed to me. I would love to see your study and hear your argument.
This guy doesn't know how to measure programming ability, but somehow manages to spend 3000 words writing about it.
To be fair, you can spend a great deal of time talking about something and make progress on the issue without solving it.
For example, the current metrics are abysmal, so it's worth explaining why. I was just able to delete several thousand lines of JavaScript from one of my projects after a data model change (through code reuse and generalization), yet I increased functionality. My manager was confused and thought getting rid of code like that was a bad thing.
Another reason to waste a lot of time talking about a problem without reaching an answer is to elaborate on what the known unknowns are and speculate about the unknown unknowns. Indeed, the point of this article seemed to be to advertise the existence of unknown unknowns to "recruiters, venture capitalists, and others who are actually determining who gets brought into the community."
So he doesn't know... programmer ability might actually be a bimodal distribution.
Perhaps.
If he had collected data to support his hypothesis, then that would have been an interesting article.
But you just said there's no way to measure this.
So you think that money is the root of all evil. Have you ever asked what is the root of money? -- Ayn Rand