I know of no such certification program in the USA. If one exists I'd love to know about it.
Are there any non-human religions? No? Well, then let's just go ahead and make the next shaky step up the inference chain: humans, not just religious ones, are the root of all evil. Of course, being human is correlated with lots of other interesting features. So I'm just going to go with the one I think is most likely:
Self-awareness is the root of all evil.
Right. In my career I've been taught one language (Pascal) and been tossed face-first into 4 other languages: C/C++, C#, Lisp, and Python. Language learning happens in phases:
- I can learn the syntax, reserved words, flow control, and basic structure of a language in 3-4 days. At that point I'd be able to take a small block of simple code and figure out what it's doing. I could write very simple programs to do pretty pointless things.
- Given another week or so I could learn enough of the support libraries to have some exposure to basic things like file access, network APIs, thread and process creation, database access, etc., to understand what more general code is supposed to be doing. So about two weeks in I could start contributing to real work by finding and patching certain types of bugs, adding simple feature extensions, or building useful test cases.
- Given another 2-4 weeks of work I might be knowledgeable enough to create moderately complex subsystems, debug most single factor bugs, or begin commenting on any architectural problems.
- I won't know enough to make architectural changes for another month or more.
In the end, I wouldn't trust myself to make big changes on less than three months' familiarity with a language. With some projects I'd want that much time just on that project's codebase, *after* the three months getting familiar with the language. At that point I've probably learned enough to know what it is I don't yet understand.
Make an objective scientific argument in favor of the survival of the human animal as a species.
For bonus points, make sure that you do not coincidentally argue for the preservation of all species on the planet.
All existence is subjective. I think therefore I reach. I don't really care about objectivity, because I'm not objective.
But if I were to try, I guess I'd base it on the existence of self-awareness. Science cannot exist without the self-aware mind to posit, observe, and reflect. For there to be a science or logic within which this argument can be judged valid or invalid, there must, therefore, exist that mind. Given that you find value in scientific objectivism, you by extension hold value in the self-aware mind.
I doubt that's acceptable, as you'll probably call it circular, but, as I said initially...I'm not objective when it comes to my own existence.
I agree, in the long term, but it's not a one-step process. You throw people and resources at the problem, expecting to lose some in your ignorance. But humans are explorers. We (well, some of us) have the curiosity and drive to do novel things, so there will be people willing to be first at great personal risk. Leaving the water was risk/reward, climbing that first tree was risk/reward, climbing out of the trees was risk/reward, sailing around the world was risk/reward, going to the moon was risk/reward. We've just got to realize that both sitting on our ass doing nothing and venturing to Mars are also risk/reward choices.
"A journey of a thousand miles begins with one step." -Lao Tzu
There's a great West Wing episode that discusses why we should, but somehow I don't think that would gain me much here; discussions of the nature of man and the establishment of wonder are particularly squishy in hard-science terms.
Instead I'd point out that all safety-critical systems are engineered around the notion of redundancy. Shit happens, and when it does, things break down. When that unexpected thing happens to our Earth-bound ecology, what, exactly, is our safety strategy? Hide in a hole? For how long? What if it's biological? What happens if someone accidentally creates Card's molecular disruption device? We can't reasonably colonize another star system (yet), but we aren't *that* far from being able to establish some very worthwhile planetary redundancy. It's worth it because we are stuck on this rock that I think we should rename 'The Single Point of Failure'.
It seems like the obvious question, from a geek activist's point of view, is: why are the firmware components in our cars not open source? I should be able to compile and validate the loaded firmware so I (or, you know, someone who actually cares) can verify the security, legality, and safety of its operation. It isn't even required that I be able to re-load the firmware, just that I be able to validate it.
I feel we've reached the point where more pixels don't really help. What we need is for people to come up with better ways to interact with what's already there. I can't *do* anything with the megapixels I already have, because simply interacting with the device obscures 10% of the screen, and 100% of what I wanted to interact with.
For instance: I've been waiting for a good CAD system that works well with a touch interface since my first iPhone's 3" screen. Now I'm at 5" with 20x the pixels and there's still no decent touch capable CAD app. Maybe it's eye tracking, maybe it's rear panel touch, maybe it's voice, I don't know, but something on the UI front must be changed.
We've got a pretty bad record so far at intercepting objects moving at delta-vs approaching orbital velocity (and 10 km/s definitely qualifies). That record is with guided projectiles attempting the intercept, where an intercept counts as getting close enough to do damage via an explosive charge. Based on that, I can't see us firing a tethered harpoon with no guidance or propulsion and having any expectation at all of hitting something moving that fast. And that's not even considering the question of actually attaching to the object.
While I agree that there's bias in the reporting here, in all cases the driver of the "autonomous" car has not been cited as at fault, so it's technically accurate to say the software wasn't at fault. I am with you in one respect: all of the data should be made public on every incident, and on any close calls where the actions of an outside agent (the driver of the "autonomous" car or of a nearby car) prevented an incident. Unfortunately it's not in Google's interest to do so, and there are no laws mandating it, so it probably won't happen.
So is Google going to pay my speeding ticket when a cop pulls over my autonomous automobile for speeding?
Almost certainly. Though they will bring in several well-respected highway safety engineers to testify that following the flow of traffic is significantly safer than following the posted speed limit. Enough jurisdictions will lose money arguing these cases that there won't be money to be made by writing the tickets. Absent both the financial and safety benefits the police will stop issuing the citations.
Both the 6 and the B keys belong on both sides of a split keyboard. It couldn't possibly cost more than another $1, and we could get back to fighting over really significant, intractable problems, like the Oxford comma.
Mostly I'd agree, but there are a few exceptions:
1) No recursion, except perhaps forms of tail recursion known to complete in bounded time. Even then, you definitely have to ask yourself why you're doing something that could either be unrolled into a loop or carries some kind of exponential growth potential.
2) Local variables are fine so long as the analysis is done to guarantee the maximum stack requirement is pre-committed. Realistically, the return pointer is a stack variable, so merely calling a function that returns would violate a "no local variables" rule. I wouldn't allow dynamically sized items, though, because then bugs could cause stack overflows...
3) I agree that malloc and free are forbidden *during real-time operation*. However, in some situations you can use dynamic allocation if you pre-allocate all necessary elements before entering operational states. This really depends on whether your system *has* a pre-operational state.
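That pre-operational split is easy to make concrete. Here's a minimal sketch in C, assuming a hypothetical `track_t` element type and pool API (none of these names come from any real system): malloc is confined to the init phase, and the operational phase only ever touches the free list.

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical element type for illustration. */
typedef struct track {
    int id;
    struct track *next;
} track_t;

static track_t *free_list = NULL;

/* Pre-operational phase: malloc is allowed here, exactly once. */
int pool_init(size_t max_tracks)
{
    for (size_t i = 0; i < max_tracks; i++) {
        track_t *t = malloc(sizeof *t);
        if (!t)
            return -1;          /* fail before going operational */
        t->next = free_list;
        free_list = t;
    }
    return 0;
}

/* Operational phase: O(1), no malloc. NULL means the sizing
   analysis was wrong, which is itself a reportable event. */
track_t *pool_get(void)
{
    track_t *t = free_list;
    if (t)
        free_list = t->next;
    return t;
}

void pool_put(track_t *t)
{
    t->next = free_list;
    free_list = t;
}
```

The point of the split is that once `pool_init` succeeds, every real-time allocation is a pointer swap with a known worst-case cost.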
Professionals make mistakes. Garbage collection is a useful tool to make it more difficult to screw up.
I get this. And as a software engineer I fully agree. However, in practical terms, there shouldn't be any dynamic memory management happening at all.
It's a real-time system. It *must* interact, on time, with all the planes in its domain. That should be a bounded, predictable load, or there's no way to guarantee responsiveness. Given that, an analysis should have been done on the maximum number of elements the system supports. Those elements should have been preallocated (into a pool, if you want to treat them "dynamically") before actual operation began. If/when the pool allocator runs out of items it should do two things: allocate (dynamically) more, and scream bloody murder to everyone who will listen about the unexpected allocation.
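A hedged sketch of that "allocate more and scream" behavior in C; the names (`elem_t`, `pool_get`, the warning format) are all made up for illustration, not taken from any real ATC system:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical pooled element. */
typedef struct elem {
    struct elem *next;
    char payload[64];
} elem_t;

static elem_t *pool = NULL;
unsigned long unexpected_allocs = 0;   /* surfaced to monitoring */

/* Pre-operational phase: build the pool sized by the load analysis. */
void pool_preallocate(unsigned n)
{
    while (n--) {
        elem_t *e = malloc(sizeof *e);
        if (!e)
            abort();            /* refuse to go operational short */
        e->next = pool;
        pool = e;
    }
}

/* Operational phase: normally a pointer swap. On exhaustion, keep
   the system running with a dynamic allocation, but record and
   report it loudly: the sizing analysis has been proven wrong. */
elem_t *pool_get(void)
{
    elem_t *e = pool;
    if (e) {
        pool = e->next;
        return e;
    }
    unexpected_allocs++;
    fprintf(stderr, "WARNING: pool exhausted, unexpected alloc #%lu\n",
            unexpected_allocs);
    return malloc(sizeof(elem_t));
}
```

The counter is the part that matters: an exhausted pool degrades gracefully instead of failing, but it never degrades silently.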
This is (one of) the reason(s) I generally haven't liked garbage collected languages for real time systems. There's rarely ever a way to guard against unexpected allocations, because *every* allocation is blind.
I'm a software engineer who focuses on automating manufacturing equipment. I've been hearing bad stories about Amazon for at least ten years now. About a year ago I got a call from a head-hunter about a job opening dealing with autonomous warehousing and order fulfillment that sounded like a dream job. Five minutes on the phone got me the name of the company (Kiva Systems), and two minutes of Google told me they were being purchased by Amazon.
Nope. Hard stop. I don't even need to know how much they're offering. I love what I do, and I won't willingly work for someone who risks making me hate it. Go ruin someone else's psyche. I'd be *really* interested to know how many people there are just like me.
To downgrade the human mind is bad theology. - G. K. Chesterton