With a billion credentials, they certainly haven't had the chance to exploit them all yet. It's too late for 0.01% of the victims, but not too late for the rest of us.
In cryptography, a nonce is a number that is used only once -- n-once. However, it's actually the wrong word to use here, as a cryptographic seed's most important attribute is unpredictability, not mere uniqueness.
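To make the distinction concrete (my own illustration, not from the post): a nonce only has to be unique, but a seed has to be unpredictable.

```python
# Illustration: a counter is a perfectly valid nonce -- guaranteed unique,
# but entirely predictable. A seed must come from a CSPRNG instead.
import itertools
import secrets

nonce_source = itertools.count()
nonce = next(nonce_source)

# A seed (or key) must be unpredictable, so draw it from a CSPRNG.
seed = secrets.token_bytes(32)  # 256 bits of unpredictable material

assert nonce == 0 and next(nonce_source) == 1  # unique, but predictable
assert len(seed) == 32
```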
Don't underestimate the importance of the right education. Our company almost collapsed under the stupid organizational structure put in place by our last CIO, who was not an engineer, and had no idea how engineers work. I never before realized how much damage an org chart could do.
... the use of the new "picture" tag, which is a container for multiple image sizes/formats
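For reference, a minimal picture element looks something like this (filenames and the breakpoint are placeholders):

```html
<picture>
  <source srcset="photo.webp" type="image/webp">
  <source srcset="photo-wide.jpg" media="(min-width: 800px)">
  <img src="photo.jpg" alt="Fallback for browsers without picture support">
</picture>
```

The browser picks the first source it can use; older browsers just render the inner img.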
... and it will only require 50,000 words to send it.
I also make it a point to go through supermarket lines with a real cashier rather than a do-it-yourself scanner. Not because I am a technophobe (quite the opposite) but because I like dealing with a real human.
I generally avoid the self checkout, but I might use it if there are no other customers in front of me and there's a line at the cashier. Have you ever waited for a self checkout behind a typical person? I want to claw my brain out as I watch them stupidly wave a package over the scanner again and again, all the while covering the barcode with their hand. Or they bounce everything off the glass, as if they're buying basketballs. Or they have to sift through their entire basket, to find that one bottle of Axlotl juice that they want to put in the bag next.
All you have to do is sign up and they'll migrate your email account to their IMAP servers. https://xcsignup.comcast.net/o...
IIRC they hurriedly provided this around the time Windows 8 came out, because Windows 8 shipped without a POP3 client.
Maybe the requirement to upload bulk updates was a lower priority for that development team than getting other features implemented, and it's still in their backlog. Or maybe they ran out of budget before getting to implement that feature. Maybe the stakeholder who was assigned to work with that development team failed to understand their own user base - the stakeholder's job is to provide the business perspective, and maybe they thought a pretty color scheme was more important than bulk uploads.
People can still make poor decisions in any framework, which does not necessarily invalidate that framework. The good thing about an Agile approach is that as long as the team is there, the software can still be easily changed.
And if she hasn't already, your wife has the responsibility to file a bug report or at least report her concerns to the stakeholder - the team may not even know of this need for bulk updating, or the financial impact of the one-at-a-time process. It sounds like it's fairly easy to quantify the cost of the inefficiency, which should help prioritize it accordingly.
Software is malleable in that whatever is on the inside can be safely changed through refactoring to meet your new design goals. And yes, you have to adhere to strong design principles: the open/closed principle helps ensure that you can safely migrate to a new API while still supporting your old clients; the interface segregation principle helps ensure that your clients are always getting the right service without confusion; and you have to commit to serious code coverage metrics for your automated tests. That means you don't even write an exception handler unless you have a unit test that proves it properly catches the exception.
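A minimal sketch of that last rule (names are invented for illustration): the except branch exists only because a negative test exercises it.

```python
# Hypothetical example: a handler that converts a parse failure into a default.
def parse_port(value, default=8080):
    try:
        return int(value)
    except ValueError:
        # This branch is only justified by the negative test below.
        return default

# Positive path
assert parse_port("443") == 443
# Negative test proving the handler actually catches the exception
assert parse_port("not-a-number") == 8080
```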
And developers absolutely cannot work in a vacuum, or be incompetent - there's no room for them. So when they're writing the negative tests, they are expected to be smart enough to understand the permutations and the boundaries in the requirements they're implementing. But high complexity means lots of paths through the code, which means lots of tests, and the need to keep testing practically achievable gives the developer an incentive to keep code complexity down. That is a feat he or she continually accomplishes through the refactoring step of TDD. That way, instead of writing fifty tests, perhaps they can split the work into five modules and write ten tests. Not coincidentally, this activity continues to improve the modularity, reusability, and maintainability of the module. So it improves the code's design after it's written (an activity that still wasn't needed up front). As a bonus, you get to execute the automated tests again and again, so future maintainers benefit by knowing they haven't broken your module. TDD is actually a design methodology, not a test strategy.
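Here's an invented illustration of that "fifty tests vs. five modules" arithmetic: three independent checks can each be tested separately, with only a couple of integration tests for the composition, instead of exhaustively testing every combined path.

```python
# Three small, independently testable checks...
def valid_username(u):
    return u.isalnum()

def valid_age(a):
    return 0 < a < 150

def valid_email(e):
    return "@" in e

# ...composed into one routine.
def valid_signup(u, a, e):
    return valid_username(u) and valid_age(a) and valid_email(e)

# Per-module tests keep the test count linear in the number of checks:
assert valid_username("alice") and not valid_username("al ice")
assert valid_age(30) and not valid_age(-1)
assert valid_email("a@b.co") and not valid_email("nope")
# Plus a couple of integration tests for the composition:
assert valid_signup("alice", 30, "a@b.co")
assert not valid_signup("alice", 30, "nope")
```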
And I know that you're using CAPTCHAs as a clever example (how can you test a transformation so complex that it defeats a Turing test?), but the real answer is: it depends on what code you're testing. Are you testing the code that processes the outcome into a true or false response? Are you testing the user interface that lets the user type letters into a text box? Those tests aren't especially hard to automate. But when you're asking "is this CAPTCHA producing human-interpretable output?" then you're talking about usability testing, which is expensive, manual, and slow. You can't automate it; you'd have to test it manually, and only after changing the generation routines - so I wouldn't alter those routines without scheduling more user testing.
(If I ever had to write a CAPTCHA for real, I'd probably try to parameterize it and allow the admins to tweak the image generation without my having to further change and test the code. So if an admin managed to tweak it into black-on-black text, and preventing low-contrast color schemes wasn't identified in the original effort, the admin could still untweak it. And yes, that should generate a bug report, even though it would be recoverable.)
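If you did want to guard against that low-contrast case up front, a sketch might look like this (my own invention, using a WCAG-style contrast ratio; the threshold and names are assumptions):

```python
# Guard for a parameterized CAPTCHA: reject admin-supplied color schemes
# whose foreground/background contrast is too low.
def _linear(c):
    # sRGB channel (0-255) to linear-light value
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def _luminance(rgb):
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    hi, lo = sorted((_luminance(fg), _luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# Black-on-black: ratio 1.0 -- clearly rejected
assert contrast_ratio((0, 0, 0), (0, 0, 0)) == 1.0
# Black on white: maximum ratio, about 21
assert contrast_ratio((0, 0, 0), (255, 255, 255)) > 20
```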
But in terms of difficult-to-test code, teams that do this kind of development work well will often have different suites of tests for different situations. Etsy does this really well, by splitting tests into various categories: slow, flaky, network, trunk, sleep, database, etc. They always run all trunk tests on every build, but a developer only executes the network tests when working on something that exercises actual network communication. See http://codeascraft.com/2011/04... for their really inspiring blog.
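The selection idea is simple; in pytest you'd do it for real with markers (@pytest.mark.network, run via pytest -m network), but here's the gist in plain Python (categories and tests are invented):

```python
# Category-tagged test suites: run only the categories relevant to the change.
def trunk_test_math():
    assert 1 + 1 == 2  # fast, runs on every build

def network_test_ping():
    assert True  # imagine a real socket round-trip here

SUITES = {
    "trunk": [trunk_test_math],      # every build
    "network": [network_test_ping],  # only when touching network code
}

def run_suite(category):
    for test in SUITES[category]:
        test()
    return len(SUITES[category])

assert run_suite("trunk") == 1
```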
There's a ton (or a megabyte) wrong with the hardware/software construction analogy, but organizations like the IEEE keep pushing on it because that's the way people look at "engineering".
The problem is the analogy makes everyone who doesn't understand software think there has to be some "big design up front" before you write software. Of course, when the end product is as infinitely malleable as software, that's simply not true. The human interface needs a design in order to mesh with the humans in an elegant and consistent fashion, but the code? No. The only purpose of code design is to make the code readable and maintainable, and those are attributes you achieve through test driven development and continual refactoring.
I'm not saying that ideas like object orientation, design patterns, design principles, etc., are unimportant, nor am I saying that an overall application structure like Model-View-Controller, or Extract-Transform-Load, is wrong. But the continued effort wasted trying to make Big Design Up Front work leads to unimaginably expensive, wasteful processes that only work for a very limited, very rigid set of products, and of those most fail anyway. Worse is when non-developers fail to realize that the code itself is the language of design. Back to the construction analogy: people think that an engineer produces a blueprint, then 100 people grab hammers and shovels and build the building. They don't all have to be skilled laborers, either; some are just guys with shovels and hammers. Want it to go up faster? Hire 200 people. But in software development, anything automatable has already been automated. When a software developer needs to do "construction", he or she types "make". Want it to go faster? Buy a bigger build server.
The engineering the IEEE is trying to achieve is accomplished by test-first development, continual automated testing, and peer code reviews. It is not achieved by producing thousands of documents, months of procedures, and boards of review.
My father-in-law believed he could "witch" wires, pipes, or whatever, using two pieces of copper wire. Funny thing is, he could never repeat a witching while blindfolded. We figured that decades in the construction industry meant that he could subconsciously spot the clues where a typical pipeline would be run.
If I were planning where to run tile in a field, I'd look for the low spot, and the easiest, straightest run from there to a drainage ditch. Doesn't take beechwood sticks or copper wires to figure that out.
V2V doesn't have to be limited to reporting just your own vehicle's data. Each packet could include data known about other nearby vehicles. Why does this matter? Because my car has radar, cameras, and ultrasonic sensors that detect all sorts of nearby vehicles today, so its packets could include reports on all the nearby vehicles it detects, including your old car.
Additional data on other vehicles helps identify failing systems (or cheats), and can theoretically provide some corroborating information about the nearby traffic. Let's say that one of the paranoid people who have posted above tries to dodge tickets by rigging their V2V to always report they're traveling the speed limit, even when they're exceeding it by 30 km/h (even though it's obvious that reporting your coordinates every 100 milliseconds will reveal your true speed.) But if a couple different cars with radar report "vehicle at X,Y, bearing B, change in bearing -3.000 d/s, velocity 38.00 m/s, acceleration +0.1 m/s/s", then even if the offending car self-reports that it is going at 29.00 m/s the rest of the cars in the area can still respond as if it were traveling at 38.00 m/s.
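The corroboration logic could be as simple as this sketch (my own illustration; the numbers come from the scenario above, the 2 m/s disagreement threshold is invented):

```python
# Cross-check a self-reported V2V speed against independent radar
# observations from nearby cars, and trust the consensus when they
# disagree strongly.
from statistics import median

self_reported = 29.00                  # m/s, claimed by the rigged car
radar_reports = [38.00, 37.95, 38.10]  # m/s, measured by three nearby cars

consensus = median(radar_reports)

trusted = consensus if abs(consensus - self_reported) > 2.0 else self_reported
assert trusted == 38.00  # the rest of the cars react to the real speed
```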
(It's also interesting to consider that evolution will tend to remove incorrectly reporting cars from the road, as they will be involved in more accidents.)
Note that this doesn't even violate anyone's privacy in order to achieve safety. The packet doesn't have to identify the vehicle, as its location is (or at least should be) unique. That way if my right side ultrasonic blind spot sensor picks up a car that is 2 m away, it can simply report the existence of a vehicle at the computed X,Y.
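Computing that X,Y is basic trigonometry; something like this sketch (coordinate convention and names are my own assumptions):

```python
# Translate an own-vehicle sensor detection (range + bearing relative to
# heading) into absolute map coordinates, so the packet can report the
# anonymous vehicle's position rather than any identity.
# Convention: heading 0 = +y (north), angles clockwise, x = east.
import math

def detected_position(own_x, own_y, heading_deg, rel_bearing_deg, range_m):
    angle = math.radians(heading_deg + rel_bearing_deg)
    return (own_x + range_m * math.sin(angle),
            own_y + range_m * math.cos(angle))

# Right-side blind-spot sensor: 90 degrees off heading, 2 m away.
x, y = detected_position(100.0, 200.0, 0.0, 90.0, 2.0)
assert round(x, 6) == 102.0 and round(y, 6) == 200.0
```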
Finally, how does this benefit you, in your old vehicle that doesn't have a V2V system? Once other cars on the road have V2V, those other cars will control themselves to avoid colliding with you. Every car that automatically steers itself away from harming you is one less chance at an accident you might get in. It won't make much of a difference initially, but as time goes on and more vehicles become equipped, you'll gradually have your risks reduced.
Simple: you can automatically activate and deactivate it in certain trigger conditions (light bar, high speed, etc.) but you always let the cop turn it on and off at will.
If the cop has been issued a camera, but it's not recording at the same time that he's arresting someone who accuses him of using excessive force, what's that going to say to a lawyer, or to a jury? "Well, your Honor, we had three police officers trying to subdue the subject in the car when they all had to discharge their weapons and fatally shot the unarmed man with six bullets, but coincidentally the officers had all just been peeing by the side of the road so none of them had their cameras on." I know we have a few kangaroo courts in this country, but when it gets serious you still have to convince a jury to believe the shit being shoveled doesn't stink.
Because camera footage could have vindicated their behavior. And if a cop with a camera turns it off just before he shoots someone, especially an unarmed robbery suspect, do you really think a lawyer is going to just let that slide? The very existence of the cameras will be enough to change behaviors.
There are some jurisdictions that are talking about having the cameras enabled wirelessly whenever the light bar comes on, and then they keep the video rolling until the cop stops the car, gets out, gets back in, and starts driving at the posted speed. So if he stops at a rest area, restaurant, or wherever in a non-emergency capacity, it won't automatically turn on. Of course he'll have the option to turn it on or off whenever he wants. But a cop whose camera is coincidentally turned off every time he's accused of abuse will quickly raise suspicion.
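The rule reduces to a tiny state machine, something like this toy sketch (jurisdiction policies vary; the trigger and release events here are simplifications of the description above):

```python
# Dash/body cam auto-activation: the light bar starts a recording, and an
# automatically triggered recording only ends once the stop is over and the
# car is back at the posted speed. The officer can always override.
class DashCam:
    def __init__(self):
        self.recording = False
        self._triggered = False  # recording was started automatically

    def on_light_bar(self):
        self.recording = True
        self._triggered = True

    def on_driving_at_posted_speed(self):
        if self._triggered:
            self.recording = False
            self._triggered = False

    def manual(self, on):
        self.recording = on
        self._triggered = False

cam = DashCam()
cam.on_light_bar()
assert cam.recording            # rolls through the whole stop
cam.on_driving_at_posted_speed()
assert not cam.recording        # ends only when back at posted speed
```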
And have you seen the crap cops have to put up with? They're constantly being accused by abusive liars. The honest cops don't seem to mind the cameras in those cases because it really cuts down on their stress. When a defendant accuses a camera-equipped cop of abuse, the quickest answer is to show the defendant's lawyer the video, and (according to NPR) the defendant almost always drops the accusation. And if provoked, the video can help justify the use of force.
It may get some cops to moderate their behavior, and that's fine - we need professional police, and if the camera helps remind them, there's nothing wrong with that.
But your quote specifically says, "principally through performance on a common statewide placement examination." It does not say the CSU system uses SAT or ACT for admissions standards. Perhaps if they based admissions on the SAT or ACT results, they'd need less remediation. Of course, that means rejecting a bunch of the little revenue-generating tykes instead of sending them over to the bursar's office to extract the maximum amount of Financial Aid money from them.
It would be interesting to compare the graduation rates to the remedial course attendance. Do the remedial students fail to graduate at a higher rate than the qualified students? Are we doing those younger, under-qualified students a disservice by allowing them to matriculate?