I would agree with this too. For that telescope-ish feel, you can get tripod mount attachments for most binoculars. That lets the adult point the binoculars and swap to the kid without too much trouble.
Define actions (instant, daily, weekly alerts) for ranges of CVSS scores http://nvd.nist.gov/cvss.cfm?c...
Track incoming CVEs (http://nvd.nist.gov/download.cfm), assign CVSS scores specific to your organization. Also have an organization-specific remediation approach.
As you find out who is using what software, use the CVE's CPE (http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-2168) information to target the specific users affected.
In the blast emails, you could potentially harvest who thinks they may be affected to gather CPE information.
It's going to be a thankless, painful job, so you may as well automate as much as possible.
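A minimal sketch of the score-to-action piece of that automation (the thresholds and function name here are invented for illustration, not from any NVD tooling):

```python
# Hypothetical sketch: map CVSS base scores to the alert cadences described
# above. The threshold values are examples an organization would tune.
def alert_action(cvss_score):
    """Return an alert cadence for a CVSS base score."""
    if cvss_score >= 9.0:
        return "instant"
    if cvss_score >= 7.0:
        return "daily"
    return "weekly"

for score in (9.8, 7.5, 4.3):
    print(score, "->", alert_action(score))
```

Feed each incoming CVE through a mapping like this, then route the result to the relevant users via the CPE match.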
There will probably be a market for this among tech enthusiasts, but it is highly unlikely to go mainstream. Mainstream (the iPhone 5s) is 7.6mm thick and weighs 112g. According to http://motorolaara.com/2013/10... it is probably about 9.3mm - effectively as chunky as a two-year-old device.
What may evolve from this is specialist hardware and specialist configurations.
Some interesting spin-off technologies might be high speed bus interconnects (Thunderbolt 2) and modular and novel hardware configs (3D scanning - Project Tango; YotaPhone - e-ink backside). Ultimately, enabling technology advances is what Google spends its money on these days...
The report doesn't really go into an important measure.
What is the defect density of the new code that is being added to these projects?
Large projects and old projects in particular will demonstrate good scores in polishing - cleaning out old defects that are present. The new code that is being injected into the project is really where we should be looking... Coverity has the capability to do this, but it doesn't seem to be reported.
Next year it would be very interesting to see the "New code defect density" as a separate metric - currently it is "all code defect density" which may not reflect if Open Source is *producing* better code. The report shows that the collection of *existing* code is getting better each year.
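The distinction can be made concrete: defect density is typically defects per thousand lines of code (KLOC), and the "new code" variant restricts both counts to lines added during the period. A toy illustration with invented numbers:

```python
def defect_density(defects, loc):
    """Defects per thousand lines of code (KLOC)."""
    return defects / (loc / 1000.0)

# Invented numbers: a mature project can look good overall while the
# code being added this year is actually worse than the average.
total = defect_density(defects=590, loc=1_000_000)  # all code
new = defect_density(defects=45, loc=50_000)        # lines added this year
print(round(total, 2), round(new, 2))  # 0.59 0.9
```

Reporting only the first number hides the second, which is the one that says whether the project is *producing* better code.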
That is different. My read of GINA is that your health insurance provider is not allowed to use genetic screening to make coverage RISK decisions. As in, they can't force or require you to screen for cancer and then decide that you aren't coverable because of BRCA. Apparently life insurance is not covered by GINA, so that is another issue.
Also note that GINA is an American law. Not global.
The comment I made was about tuning treatment based on genetic information - which is very different. Rather than a cocktail of drugs to suppress and support different side effects and responses, you can use more targeted doses to resolve the direct issue. Warfarin is a good example: too much doesn't help, too little doesn't help. Your genes help identify what your correct dose is.
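As an illustration only (the table below is invented and is not clinical guidance), genotype-guided dosing amounts to a lookup from variant combinations to a starting dose; real warfarin dosing involves CYP2C9 and VKORC1 variants plus clinical factors:

```python
# Invented example table: genotype pair -> starting dose in mg/day.
# Real pharmacogenomic dosing is far more involved; not clinical guidance.
DOSE_TABLE = {
    ("CYP2C9*1/*1", "VKORC1-GG"): 5.0,
    ("CYP2C9*1/*2", "VKORC1-GA"): 4.0,
    ("CYP2C9*2/*2", "VKORC1-AA"): 2.5,
}

def starting_dose(cyp2c9, vkorc1, default=5.0):
    """Look up a starting dose for a genotype pair, falling back to a default."""
    return DOSE_TABLE.get((cyp2c9, vkorc1), default)

print(starting_dose("CYP2C9*2/*2", "VKORC1-AA"))  # 2.5
```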
As Genome Wide Association Studies begin to crack more of the genomic puzzle, there will be tighter and tighter direct correlation between medicine types & doses and the effectiveness of those drugs. As this efficacy increases, it is highly likely that the best insurance coverage will be based on genomic information.
Determining precise doses of a drug and which drug should be used is going to make for much better quality of medicine. I would expect that in a couple of decades people are going to look at the drug practices of today and laugh that we are pretty much throwing darts at the drug dartboard and choosing whatever it lands on.
Opting out of specific tests will be like not wanting X-Rays to see if a bone is broken.
Well for commodity items - I get your point. However, my personal experience is owning a house that has a really unusual shelf pegs. Unusual in that they are simply not available. I ended up modelling them and using shapeways to print them. What I made is up at https://www.shapeways.com/shop....
The cost was about $2 per peg - about the same as low-run retail products at Home Depot.
3D printers will make it affordable for extremely low run prints. For spare parts and out-of-production items it removes a lot of obsolescence.
A common definition of science is "knowledge, as of facts or principles; knowledge gained by systematic study."
Science is never stable. There is always layer upon layer of detail that is waiting to be discovered. The "Standing on the Shoulders of Giants" is the underlying concept. Our level of scientific understanding is driven by our current understanding and our needs to go deeper. The knowledge can change and grow based on deeper systematic study.
In the Middle Ages, when transportation was limited to horse, cart and walking, the naive geocentric universe was sufficient for the time. And for the most part the motion of the planets was fairly accurately explained by epicycles. The "Science" of the age was sufficient. As travel and migration required more detailed knowledge, the science improved to explain what was seen. New models were formed, and tides, winds and so on became more accurately described and combined into a deeper understanding.
The beauty of science is that as the foundations of one area are broken down and rebuilt, what replaces them must not only encompass what was there, but also link deeper into the other areas that caused the original science to fail. It doesn't make the previous science and knowledge bad, just incorrect. One can't call a model that explained known phenomena at that point in history bad science.
In 40 years time*, we'll look back at the misguided fools at the start of the 21st century and our futile and plain incorrect approaches to fusion. We may not be there, but we'll probably be dealing with all sorts of funky and interesting materials on the way to getting there.
Those of us who will have children should know that their science *will* be different from our science in a lot of areas. That is a good thing.
* Bonus points for replies that say why I chose the "40 years time".
Alternate line exposure is not new, it is in a lot of current generation sensors. Omnivision, Sony and Toshiba all have sensors out with this capability.
The underlying issue is that when doing alternate line exposure you are getting only half the resolution for each range. DSP and image processing techniques can help smooth out the issues, but you are fundamentally dealing with a half-height dark and a half-height light image. Depending on the alternate-line approach, you also get other funky color fringing issues due to the underlying Bayer pattern - as the article notes.
A good generalized approach is to output a 1/2 resolution image in both dimensions, otherwise you will get a vertical stretch if you keep the horizontal width at full resolution. So it means for a 16 MP camera, you will get only 4 MP HDR images. In a lot of cases this will be more than good enough... But it makes it really difficult to sell and explain to users.
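A minimal sketch of why the resolution halves in both dimensions (hypothetical; real sensor pipelines do per-channel Bayer-aware merging, and the row layout and gain handling here are assumptions):

```python
import numpy as np

def merge_alternate_lines(raw, short_gain):
    """Merge an alternate-line-exposed frame to a half-resolution HDR image.

    Assumes even rows hold the long exposure and odd rows the short
    exposure, with short rows scaled up by short_gain to match.
    """
    long_rows = raw[0::2, :].astype(np.float64)
    short_rows = raw[1::2, :].astype(np.float64) * short_gain
    h = min(long_rows.shape[0], short_rows.shape[0])
    # Pairing a long and a short row halves the vertical resolution.
    merged = (long_rows[:h] + short_rows[:h]) / 2.0
    # Halve the horizontal resolution too, to keep the aspect ratio.
    return (merged[:, 0::2] + merged[:, 1::2]) / 2.0

raw = np.arange(16, dtype=np.float64).reshape(4, 4)
out = merge_alternate_lines(raw, short_gain=1.0)
print(out.shape)  # (2, 2) - a quarter of the original pixel count
```

The quarter pixel count is exactly the 16 MP in, 4 MP out trade-off described above.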
There is usually a good reason that advanced features aren't released/published. A lot of the time it comes down to features being sub-optimal on what is supposed to be a highly polished product.
Both are leaders.
Managers are Organizational Leaders.
"Senior"/"Staff" Engineers/Architects are Technical Leaders.
Different focus, but similar soft leadership skills. They are peers, and should have a similar work load and a similar amount of hassle...
Firstly, I bow to your low 5 digit user number. You are an old hand...
I won't bite at all the points that are worth biting.
The mentoring part is the leadership part of management. When I have an engineer in my team go "wow, I've never done it that way" or "that was inspiring" it is all worth it. For reference, the two quotes were for "techniques for estimation" and "requirements analysis".
The manager's role is to get the team as efficient and effective as possible. This means taking experience (from within the ranks) and finding ways to apply it to make their life easier and more effective.
Maybe. Let's break it down - Phones vs Cars...
Phones have Location Based Services (LBS); a typical phone uses wifi, GPS and cell tower location, and a request to LBS is expected to return a reasonable fix at the highest available accuracy. In a dense urban environment, there is a lot of information from wifi/cell to give a good fix - probably better than GPS. They have a magnetometer that is affected by materials around them and is not guaranteed to be aligned in a consistent way to the movement. In a dock, it is insulated somewhat and can be compensated.
Cars have wheels that are stuck to the ground and provide a good distance measuring tool - as any dedicated in-car GPS unit demonstrates. They have a fixed magnetometer that is well protected from interference and is fixed relative to the vehicle's direction of travel.
Both can align to roads on a map, so you have a correction factor in the roads. Any good nav system will lock onto the roads.
So cars probably already have a higher probability of tracking reasonably well as is (I've never had the problem described except in hills under dense tree cover). Phones have some better LBS capability. Adding the sensors to the cars, and having a wifi/cell phone lookup capability (live or otherwise), would probably give cars a solid edge. This story seems to be more adware for eyeballs, but may have some merit.
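The wheel-plus-magnetometer advantage boils down to dead reckoning: distance from the wheels, heading from the fixed compass. A minimal sketch (hypothetical function, flat-earth x/y coordinates assumed):

```python
import math

def dead_reckon(x, y, distance, heading_deg):
    """Advance a position by a wheel-odometry distance along a compass heading.

    Heading is degrees clockwise from north; x is east, y is north.
    """
    heading = math.radians(heading_deg)
    return x + distance * math.sin(heading), y + distance * math.cos(heading)

# Travel 100 m due east, then 50 m due north.
x, y = dead_reckon(0.0, 0.0, 100.0, 90.0)
x, y = dead_reckon(x, y, 50.0, 0.0)
print(round(x, 1), round(y, 1))  # 100.0 50.0
```

A real nav system would then snap this estimate onto the road network, as noted above.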
Exactly. A bit of sensationalism in the story.
All sites (including millions of parked ones) are in a 38%/32% mix. Look 600 pixels down and you see the active (non-parked) sites: the percentage is 52% vs 11%. The big drop for MS in 2009 was probably a nail in the coffin...
Some comments and views here... Some people won't like them.
First, performance is driven by a mixture of ability and motivation (google two-factor performance). Ability is (relatively) easily measured and difficult to fake. Motivation is intensely personal and very hard to measure; motivation is very easy to fake. When interviewing or selecting staff, you look for people who give an overt demonstration of motivation. The want for passion is a call for an overt demonstration of motivation.
Second, the barriers to entry to the software world are very low; professional accreditation isn't needed and generally doesn't need to be renewed. This leads to a very inconsistent and bumpy distribution of development skill. How do you sift through this? You look for the developers that show a strong and overt interest. They should at least be average, if not strong. If anyone could be a building architect, you would look either for people with a name and a track record, or for someone who is always building models.
Third, the software world has a lot of contributors, but few leaders (either management or technical leaders). With few anointed or emergent leaders, you don't have the basis for leading teams. The emergent leaders are hard to spot initially. So again you look for overt passion and opinions. These will be uneven leaders (tech depth, not necessarily mentoring, no best practices).
These three are the primary gaps I see in the industry, and they make looking for people who show passion the fallback position. The theory is that passion has to be present and you can shape out the other deficiencies. Of course the paradox is that these people typically have such strong opinions that the shaping is difficult or impossible.
Soft skills are fun !