There's been consideration of self-mutating algorithms where the hash of the previous block determines the problem to be solved for the next, functionally making CPU/GPU hardware the most efficient way of solving it (because you have to be generic). That's a damned difficult research question, though, not something we have a ready-to-go answer for.
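A minimal sketch of the idea, assuming the scheme simply derives an index from the previous block's hash to pick the hash function for the next round (the function set and selection rule here are hypothetical; a real scheme would generate a whole new problem, not choose from a small fixed menu):

```python
import hashlib

# Hypothetical: a menu of hash functions; the previous block's hash
# decides which one the next proof-of-work round must use, so no
# single fixed-function ASIC covers every round.
CANDIDATES = [hashlib.sha256, hashlib.sha512,
              hashlib.blake2b, hashlib.sha3_256]

def next_pow_function(prev_block_hash: bytes):
    # Derive an index from the first byte of the previous hash.
    # Illustrative only -- a real design would derive far more of
    # the problem's structure from the hash than one index.
    index = prev_block_hash[0] % len(CANDIDATES)
    return CANDIDATES[index]

prev = hashlib.sha256(b"block 41").digest()
pow_fn = next_pow_function(prev)
print(pow_fn(b"candidate nonce").hexdigest()[:16])
```

The point of generic hardware being favoured is that an ASIC optimised for any one of those functions is useless the moment the chain selects a different one.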
Although seriously, it's generally considered in poor taste to award a Darwin to a person or group who takes out innocent bystanders along with themselves. After all, they're not necessarily improving the gene pool if they take others out too.
I don't know, an EE PhD might...
As a CS PhD candidate, though, I'm fairly certain that if I turned up with a screwdriver one day, people would panic and/or probably tackle me to the ground.
Well, if they're doing this properly, it shouldn't be about whether the student learnt the material, but how.
It should be used to show:
Students who aren't engaging with the material, and may require early intervention
Levels of interest in the material (would different material suit the learners better?)
Problems with the material (are there particular parts many learners highlight and/or comment on? Could indicate confusion, for example)
Why is it disconcerting?
I mean... yes, it can be misused. The data should be used to flag pupils who may be struggling, though it will also flag those who may already know the material. But just because data could be used incorrectly doesn't make it inherently worrying.
As a recent ex-scientist (hint: I moved to software development for the shorter hours and better pay), if we only had one problem to solve at a time, it would be much easier...
Seriously though, probably not. It's experiencing a dip after its initial surge of interest. It's not a roller coaster or a rocket; it's a company. It will have ups and downs. Demand will fluctuate over time. It can experience market saturation (some of us have now backed so many Kickstarter projects that we need to wait for some to finish before we pay for more).
Also, what's this nonsense about 50,000 projects not getting near their funding totals, as if that's a bad thing? It's not a magic money tree; most of those projects probably didn't interest people, so they failed at the first hurdle. That's not a tale of woe, that's someone being saved from spending months or years of their life developing a product that wasn't going to sell.
Have you ever tried telling a userbase that there's a problem with their browser and they should change? If you're lucky enough that they read the notice instead of just hitting reload a few dozen times then complaining it doesn't work, generally they'll tell you that it works elsewhere, and why not on your site.
It also presumes they can move browser; less of an issue with Safari, but we've had to put in work-arounds for IE6/7 for users who are locked into those browsers by their employer (who really, really doesn't care enough to change).
Oh, and unless you either don't have to support the users, or have a very generous allocation of support staff, telling 20-30% of your total users to change browser is going to involve the support staff being hopelessly swamped with related questions and issues.
> The day that you were able to tell what someone was running and make a decision based on that, we basically lost the point of a standard
Well, sort of. If the browser gets the standard wrong, and the options are:
1. It doesn't work for that browser.
2. Degrading the result for everyone.
3. Implementing a browser-specific work-around.
Which would you really prefer? Yes, user agent testing is heavily misused, but it's not the terrible idea it's made out to be.
I'll give you a specific example: we had an issue with file uploads in Safari over SSL. For some reason, if the connection was kept alive, Safari would frequently start uploading the file but never complete it. The workaround was to force the connection closed for Safari; it wasn't perfect, but it massively reduced the frequency with which the issue appeared.
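That kind of workaround can be sketched as WSGI middleware (hypothetical code, not our actual fix; the User-Agent check is illustrative, and note that Chrome's UA string also contains "Safari", so it has to be excluded):

```python
# Hypothetical sketch: force "Connection: close" on responses to
# Safari, leaving keep-alive intact for everyone else.
def force_close_for_safari(app):
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        # Chrome's User-Agent also contains "Safari"; exclude it.
        is_safari = "Safari" in ua and "Chrome" not in ua

        def patched_start_response(status, headers, exc_info=None):
            if is_safari:
                # Strip any existing Connection header, then close.
                headers = [(k, v) for k, v in headers
                           if k.lower() != "connection"]
                headers.append(("Connection", "close"))
            return start_response(status, headers, exc_info)

        return app(environ, patched_start_response)
    return middleware
```

The same effect can usually be had in the web server's config instead (e.g. matching on the User-Agent and disabling keep-alive), which keeps the hack out of application code.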
My network is vulnerable. I know this, because it exists.
The question is how vulnerable.
I run Linux, not OpenBSD, so there's a greater chance that I'll have a zero-day attack sprung on my network. However, we make that compromise because it's considered reasonable.
I run services we need, but each is a risk.
There is no such thing as a secure network, there is only a secure-enough network.
I do that for systems I maintain.
I've nuked systems just for looking suspicious, despite not being able to prove someone cracked them (half the binaries in
Anyone who doesn't re-image a cracked system is unbelievably naive, and it will come back to bite them hard one day. Like hell am I going to take the word of someone who broke into my systems that they didn't leave a rootkit.
The same way you tax payments in any other non-USD currency.
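A worked sketch of what that usually means (illustrative only, not tax guidance; the amounts and rate are made up): value the payment in USD at the fair-market exchange rate on the date received, and report that figure as income.

```python
# Hypothetical example: income received in a non-USD currency is
# valued at the USD exchange rate on the date of receipt.
def usd_income(amount, usd_rate_at_receipt):
    """Return the USD value of a payment for reporting purposes."""
    return amount * usd_rate_at_receipt

# e.g. paid 2.0 units of some currency when 1 unit = 130.50 USD
print(usd_income(2.0, 130.50))  # 261.0
```

Any later gain or loss between receipt and conversion is then a separate capital gain/loss question, but the income side works the same as any foreign-currency paycheck.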
> against unemployment, against medical expenses, against global warming, against guns, and lots of other things
I want protection against unemployment not because I expect to use it, but because I believe it's the most cost-effective way of reducing crime.
I have private medical insurance (bonus points: I'm in the UK, so this is 100% optional for me), but again I think universal healthcare coverage is a good thing because it's more cost-effective than the alternative. Ill people are unproductive; helping them get better when something isn't terribly serious is cheaper and better than waiting until they end up in the ER (A&E here).
Global warming: is it really that odd that I don't want something bad to happen? Have you seen what the weather's been doing to your country's east coast recently?
Guns: err... y'know what, both sides want to cherry-pick statistics and/or go with their gut on this. Show me a balanced analysis and I'll go with it. As it is, showing that gun deaths go down in countries without guns isn't a helpful statistic without knowing whether deaths from other weapons (esp. knives) fill the gap or not. Pointing to individual examples (as both sides like to do) is virtually useless.
Despite what is frequently suggested, there's a much greater scarcity in skilled developers/researchers/whatever than of ideas for them to spend time doing.