Thanks for the information. I spent a little time reading about this. At some point, though, the model predictions have to be matched with real-world results. That's difficult because climate models predict long-term trends, not short-term outcomes, which opens them to criticism and denial claims. It's a tough sell to people without a technical math and science background.
Anyone who develops simulation models will tell you there are tradeoffs and unknowns that cause errors. The real issue is how significant the errors are. Weather models fall apart quickly as the predicted time frame gets large; does anyone trust a weather forecast for several weeks out? Global warming predictions are in the same vein because there are too many possible variations in the system. What the models can tell us is that, given the things we know about, trends emerge.
What I find amazing is that people who don't understand the limitations of the models will take them as fact or fiction when they are somewhere in between. Because one weather model accurately predicted a hurricane when all the others missed it, everyone started using that model. The next hurricane was completely missed by that model. Whoops!
Personally, I believe that there is substantial empirical evidence of GW even if the model predictions are off.
I'm very confused. Wasn't Carly Fiorina an instrument of HP's downslide, with her involvement in the "pretexting" scandal where she hired private investigators to spy on the other board members? How soon we forget. It was a similar situation with RCA's board near its end that pushed the decision to sell to GE.
After getting my BSEE I began my career designing integrated circuits. I soon started writing software to aid me in design and then migrated into the in-house design automation software group, working on projects such as gate-level simulation and circuit synthesis. I tried to get into the computer science program for my master's at the BIG local university. I was told flat out to forget it, as EEs didn't have the necessary background to get into the program. I then went to another school where I completed master's degrees in Biomedical Engineering, Electrical Engineering, and Computer Science. It's let me work productively with Physicists, Mathematicians, Engineers, and Computer Scientists. There is room for all to coexist and learn from each other, but experience has made me skeptical of pretty much anything my co-workers say until I do the research myself. That skepticism has served me well throughout my career.
I think that Bill's generalization should be taken with a grain of salt until actual data supports his suppositions.
It is possible. With LG getting out of the plasma market, I found a new 60" one for $400. My only complaint is that LG has always been stingy with their inputs. This one only has one HDMI input.
A similar story was told to me about Joe Weisbecker when he was working in the RCA research laboratories. He came to management with an idea for a general-purpose video game system. After rejection, he built it anyway in his garage and called it FRED (something like Fun, Recreational, Educational Device).
When microprocessors started taking off, management came back to Joe and made FRED into RCA's first microprocessor, the 1801, and RCA created its first video game system, called the Studio II. The 1800 family had a very intriguing architecture. It had 16 general-purpose registers, and by general purpose I mean that you had to specify which one would be the program pointer, which one would be the stack pointer, and so on. You could change them at will in your program, so you could switch the program pointer register to make a subroutine call with virtually no overhead, as long as the last subroutine instruction put it back to the calling procedure's pointer. Putting a value in the accumulator automatically set the status flags; it took me many hours to make my first 8008 program work since I was expecting the "zero" flag to be set when I loaded the accumulator with a zero value, silly me. It also had an instruction pipeline, so almost every instruction took exactly 8 clock cycles (long instructions took 12). This made it trivial to figure out how long your program would take (just count the instructions) or to write UART functionality in software. It was a perfect design for a microcontroller. The big drawback was cost, as it was fabricated in SOS CMOS so it could be radiation resistant in satellite applications.
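To illustrate how easy fixed-cycle timing makes things, here's a toy sketch using the 8/12 numbers from my recollection above; the 1 MHz clock and the loop body are made up for the example, so check the datasheet for a real part:

```python
# Back-of-the-envelope cycle counting for a fixed-cycle CPU like the 1802.
# Assumes 8 clocks per ordinary instruction and 12 per "long" instruction
# (the numbers quoted above); the clock rate below is hypothetical.

SHORT_CYCLES = 8
LONG_CYCLES = 12

def program_clocks(instructions):
    """instructions: a 'short'/'long' marker for each instruction executed."""
    return sum(LONG_CYCLES if kind == 'long' else SHORT_CYCLES
               for kind in instructions)

# A hypothetical 10-instruction loop body ending in one long branch:
body = ['short'] * 9 + ['long']
clocks = program_clocks(body)
clock_hz = 1_000_000  # assume a 1 MHz clock
print(clocks, clocks / clock_hz)  # 84 clocks -> 84 microseconds per pass
```

Since every instruction's cost is known up front, timing a delay loop or a bit-banged UART really is just counting.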
Joe was an interesting character. I have a book he wrote that describes how computers work by using pennies.
The term to google is "super-resolution". Unfortunately, I wrote this a long time ago, and most of the techniques were based on image pyramid processing, which my employer patented. They used this in a consumer product back in the early 90s from their (now defunct) spinoff company called VideoBrush. Besides super-resolution, it did real-time video stitching to capture everything from panoramas to high-resolution whiteboard capture using just a handheld camera (no tripod necessary). Pyramid processing enabled real-time, high-accuracy alignment of images with at least 10% overlap on a consumer PC.
That said, the technique is pretty straightforward:
* Capture a number of images that overlap the region of interest.
* Align the images using the appropriate degrees of freedom (Affine should work fine here) to sub-pixel accuracy.
* Merge the aligned images. Basically, at each upscaled pixel location, average the values from the aligned images.
If you pick up something with reasonable video resolution that can record I-frame-only video, then you can use multiple images to do a super-resolution still. The premise is simple: multiple images will not cover the exact same pixel positions (unless the drone is affixed to a stationary point). You can use this fact to merge multiple images into a single one with much higher resolution than any of the individual images. The more images you can overlay, the more resolution you can squeeze out.
The trick is to have good alignment and warping algorithms to do the overlays. I've done this for an employer in my previous life with impressive results.
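As a rough illustration of the merge step in the list above (not the alignment, which is the hard part), here's a minimal sketch that assumes the sub-pixel shifts have already been recovered; the function name, the 2x factor, and the scatter-and-average scheme are my simplifications, not the patented pyramid approach:

```python
import numpy as np

# Minimal super-resolution merge, assuming alignment is already solved:
# each low-res frame is offset by a known sub-pixel shift (in low-res
# pixels, within [0, 1)). Every frame's samples are scattered onto an
# upsampled grid and averaged wherever samples land.

def merge_frames(frames, shifts, scale=2):
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Convert the sub-pixel shift into an offset on the upscaled grid.
        oy, ox = int(round(dy * scale)), int(round(dx * scale))
        acc[oy::scale, ox::scale][:h, :w] += frame
        cnt[oy::scale, ox::scale][:h, :w] += 1
    return acc / np.maximum(cnt, 1)  # average; avoid divide-by-zero gaps
```

With four frames shifted by half a pixel in each direction, this exactly fills a 2x grid; real captures land irregularly, which is why more frames help.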
For your particular scenario, iris recognition seems to be the most viable option. It is very fast and accurate and will not require removing gloves, etc.
Iris scans are much more reliable than fingerprints. However, they don't come without issues. The capture algorithm must include:
* Dealing with occlusions. Either the top or the bottom of the iris is usually occluded by the eyelid, depending on ethnicity.
* Dealing with spoofing. For this, a single snapshot is not sufficient; a sequence (video) is needed in order to check for the pupil pulsations that indicate a live eye. In addition, you need to do spherical-eye checks so you know you're not looking at a projection. The best system I worked on used random flashes of IR illumination to cause specularities on the surface of the eye, which also aided in finding and positioning the eye for these checks.
* Dealing with eye coverings. Glasses and shields are a problem since they can distort the iris and reduce the effectiveness of spoofing detection.
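To make the pupil-pulsation check concrete, here's a toy sketch of the idea: a live pupil's diameter fluctuates slightly from frame to frame, while a photo or static projection gives a near-constant reading. The radius-extraction step and the threshold are placeholders of mine, not values from any real system:

```python
import statistics

# Toy liveness check based on pupil pulsation: require natural
# frame-to-frame variation in the measured pupil radius.
# Real systems combine this with spherical-eye and specularity checks.

def looks_alive(pupil_radii_px, min_stdev=0.4):
    """pupil_radii_px: per-frame pupil radius estimates from the video."""
    if len(pupil_radii_px) < 10:
        return False  # not enough frames to judge
    return statistics.stdev(pupil_radii_px) >= min_stdev

# A static photo gives a flat signal; a live eye fluctuates:
print(looks_alive([20.0] * 30))                            # False
print(looks_alive([20 + 0.8 * (i % 3) for i in range(30)]))  # True
```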
LuneOS. It could use a few more developers to bring it out of its alpha state.
Back in the late '70s, my manager hit me with a statement that has stood the test of time...
"If it isn't tested, it doesn't work".
He was originally talking about gate array programming but it seems to hold for any design discipline.
For most of us in the US, a significant portion of the local public school budget comes from property taxes. I asked a realtor in DE how they deal with public schools on such a low property tax base. The answer wasn't surprising: the majority of the populace sends their children to private schools for their elementary education but still sends them to the public high schools. They feel the major advantages are that the private schools compete (and therefore excel) at educating, and that citizens do not keep paying high taxes after all their children have left the school system. They still feel that their public elementary schools give a decent education because they are smaller and more manageable (thus less costly). I don't know whether this is the prevailing feeling in DE, but it seemed appropriate for this thread.
If you require sophisticated procmail filters on your personal account, then your setup seems wrong from the get-go. Your incoming mail server should be taking the brunt of the work, applying progressive, efficient filtering before any filtering by content.
I use a spamdyke-based front end that has a whole arsenal of white, black, and gray filtering of email using RBLs, RHSBLs, reverse lookups, etc. It can also do header "pattern" filtering, but I currently don't use that feature. This blocks almost all spam quickly and efficiently. The last stage is to run what's left through SpamAssassin for the things that fall into the gray area (not a simple reject/accept, but a cumulative scoring). Worst-case mail delays are on the order of a few seconds through the whole chain, and SpamAssassin only gets a small number of incoming emails to work on. The stragglers usually come via accounts at Yahoo, Live, etc.
The nice thing about spamdyke and other systems like it is that it does its job very fast. For example, the blacklists and whitelists in spamdyke can be set up as a directory tree structure, so it is a very quick lookup to determine whether to accept or reject a specified domain or IP address.
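The directory-tree idea can be sketched like this. The layout below (reversed domain components as path elements, an empty marker file for a listed name) is my illustration of the technique, not spamdyke's exact on-disk format:

```python
import os

# Directory-tree blacklist sketch: "mail.example.com" becomes the path
# com/example/mail, and membership is a few stat() calls instead of a
# scan through a flat file. A marker file lists a name and its subtree.

def blacklist(root, domain):
    path = os.path.join(root, *reversed(domain.lower().split('.')))
    os.makedirs(os.path.dirname(path), exist_ok=True)
    open(path, 'w').close()  # empty marker file = listed

def is_listed(root, domain):
    path = root
    for part in reversed(domain.lower().split('.')):
        path = os.path.join(path, part)
        if os.path.isfile(path):  # marker hit: this name or a parent domain
            return True
    return False
```

Because the walk checks each parent along the way, blacklisting `spam.example` also catches `relay.spam.example` for free.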
I also use systems like honeypots and hunter-seekers. The latter looks at what spamdyke graylisted or accepted and does HTTP checks on the domain to see if it should be blacklisted. It may also decide to test neighboring IP addresses to see if more should be blacklisted.
Like all such systems, you must be proactive about identifying mail that shouldn't have been rejected. It is a rare situation, but there are a few companies with badly configured mail servers (like no reverse DNS entries). However, after many years of operation my whitelist contains only a handful of domains. The automated blacklist process sends me an email when it adds a domain, just in case.
If you're running ZFS, just take a snapshot of the file system before handing over the system. When they're done, roll back to your snapshot. Both operations take seconds. There may be other filesystems that can do this, but this is the one I'm familiar with; it works extremely well and doesn't require any virtual machine layer.
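The workflow is just two commands, `zfs snapshot` and `zfs rollback`. Here's a small wrapper sketch; the dataset name `tank/home` and the `pre-loan` tag are hypothetical, and it's dry-run by default so it only prints the commands:

```python
import subprocess

# Sketch of the snapshot/rollback loaner-machine workflow: snapshot the
# dataset before handing the box over, roll back when you get it back.
# Dataset and tag names are placeholders.

def snapshot_cmd(dataset, tag):
    return ["zfs", "snapshot", f"{dataset}@{tag}"]

def rollback_cmd(dataset, tag):
    return ["zfs", "rollback", f"{dataset}@{tag}"]

def run(cmd, dry_run=True):
    print(" ".join(cmd))
    if not dry_run:
        subprocess.run(cmd, check=True)  # needs root and a real pool

run(snapshot_cmd("tank/home", "pre-loan"))  # before the loan
run(rollback_cmd("tank/home", "pre-loan"))  # after it comes back
```

Note that a plain `zfs rollback` only rolls back to the most recent snapshot, which is exactly the case here.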
I worked with an absolutely brilliant man who came out of the FORTRAN era: two-character variable names, large functions, no comments, etc. He carried this style through our transition to PL/I and then C. No one could understand his code until they understood his system, which was pretty strict; the second character identified the purpose of the variable (X was a loop counter, and so on). I supported his code after he left the company, and once I got the hang of it, it was actually easy to figure out where to find what you wanted.
Personally, I would never have adopted his system, but it did work for him and he banged out code quickly with minimal bugs.