Microaggressions are things white men do to anyone who isn't white or male.
Well, it so happens I'm not white, so STFU.
I don't know why he uses that label. Most Americans equate socialism with the USSR because of the "Soviet Socialist" in "Union of Soviet Socialist Republics."
Most Americans think there's no gravity on the Moon. Somebody did a survey about this, and when Americans who believed there was no gravity on the Moon were asked why the astronauts on the Moon didn't float away, the most popular answer was "heavy boots".
And abundant as America's natural stores of ignorance are, there are organizations laboring mightily to add to them.
Well, besides "dumb" and "smart", there's "experienced" and "inexperienced"; and experience shows that while this has always been true to some degree, that degree varies over the years.
In particular if you are old enough to remember what the American middle class was like in the early 70s, it's a shadow of its former self. Oh, we're materially better off in some ways, but that's largely a function of (a) technological advances and (b) the shift from single earner households to dual-earner households and (c) a massive reduction in leisure time.
The bottom line is that fewer people have time to act as involved citizens. That creates a power vacuum that is filled by people with resources who can make a return on investing their time and attention in being influential.
Seriously, if civilization actually does fall apart exactly who is going to compel him to honor his promises? There won't be any courts to sue him, and the people who paid him to do their apocalypse preparation for them will be... unprepared.
So either way the people (if any) who pay for these things will never get to use them.
That's what the Cloudflare guy is talking about: companies arbitrarily deciding what's good or bad on their networks.
Well that actually puts the issue in a different light; or at least it potentially does. There are two reasons a company might decide some content is bad on their network. The first is that it's bad for their customers. That's not only none of the company's affair, it makes no business sense. The second reason would be if it's bad for the company itself.
How can something that sells be bad for the company? Lots of ways; it could take up a lot of time and not make very much money; it could damage the company's brand; it could alter the user experience the company is trying to provide. Now personally I think the idea that dinosaur erotica poses a problem for Amazon or its users is a bit far-fetched. But if Bezos had some basis to believe that it did pose a problem then banning it would be both reasonable and rational.
See my other post on this; under the Stockholm convention DDT is allowed in the control of vector borne diseases and in fact the world uses some five million kilos of the stuff annually on mosquitoes. The reason more isn't used is the places where it would be most useful don't have the money to buy the stuff, cheap as it is. That's what you should be getting in a huff over, not some non-existent ban.
The places that do have bans (like the US and the EU) can afford better solutions.
I should also point out the problem with your graph, which shows an increase in malaria deaths starting around 1972, when the US banned DDT. The problem is that DDT was not banned in the rest of the world in 1972. In fact it has continued to be used in the rest of the world, often with funding from USAID. About five million kilograms of DDT are still used every year worldwide, the bulk of it in India.
The current international status of DDT is that it is banned in signatory countries to the Stockholm convention for all purposes except mosquito borne disease control. This ban is actually beneficial for DDT in malaria eradication, because it reduces the populations of mosquitoes that have become resistant due to agricultural applications. DDT is fully banned in most first-world countries, but they don't need it. They have the resources and sophistication to control malaria vectors with IPM.
So if DDT is legal to use in places that have endemic malaria, why haven't we used DDT to eradicate malaria worldwide? There are several reasons, but the big one is that we haven't made any serious attempt yet to eradicate malaria worldwide with DDT or any other pesticide. People have talked about it, people have advocated for it, but nobody's ponied up the billions of dollars it would take to actually put a program together that could do it.
Funding clearly is the limiting factor in DDT use; most of the countries using DDT today are in sub-Saharan Africa, but the quantities involved are tiny, sporadic, or both, often amounting to a thousand kilos every couple of years.
I have some actual knowledge about this issue from projects I've worked on.
DDT is excellent in domestic applications (i.e., to house interiors) because it leaves a long-lasting toxic surface when sprayed on walls. Other pesticides such as permethrin are more expensive to use because you have to go back and spray the surfaces of the house several times a year, whereas a DDT application is good for a year or more. This kind of domestic application is especially effective at stopping malaria transmission because the infectious agent (Plasmodium) has no natural focus other than humans.
In fogging applications the impact of the DDT ban is nil; in fact using DDT this way is arguably counter-productive, not even counting downstream ecological effects. The reason DDT is bad for outside applications is the very same reason it's good for interior applications: the durability of the molecule -- or more precisely of its breakdown products. DDT itself is not much more long-lived than malathion or permethrin; its half-life is about 50 days. But it breaks down, through loss of HCl, into dichlorodiphenyldichloroethylene (DDE), which has a half-life of almost six years and does a lot of DDT's residual killing.
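To put those two persistence figures side by side (the 50-day and roughly six-year half-lives are the ones quoted above; the code is just the standard exponential-decay formula, a rough sketch rather than a proper environmental-fate model):

```python
def fraction_remaining(days: float, half_life_days: float) -> float:
    """Fraction of a substance left after `days`, given its half-life."""
    return 0.5 ** (days / half_life_days)

DDT_HALF_LIFE = 50        # days, approximate
DDE_HALF_LIFE = 6 * 365   # roughly six years, in days

# After one year, very little DDT is left...
ddt_left = fraction_remaining(365, DDT_HALF_LIFE)   # under 1%
# ...but most of the DDE it broke down into is still around.
dde_left = fraction_remaining(365, DDE_HALF_LIFE)   # roughly 89%

print(f"DDT after one year: {ddt_left:.1%}")
print(f"DDE after one year: {dde_left:.1%}")
```

Which is the point: a year after application, the parent compound is essentially gone while its breakdown product has barely decayed at all.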
Why is long-lasting toxicity good for inside pesticide applications and bad for outside applications? Because outside the pesticide doesn't stay put. It washes away into soil and pools of water -- where mosquitoes lay their eggs. Bathing the larvae in sub-lethal concentrations of DDE puts evolutionary pressure on the mosquito population, producing adult mosquitoes that are resistant to DDT. You never want to expose mosquito larvae to pesticides which are used against adults. So for outside fogging applications you want something that kills the mosquitoes the fog contacts and then breaks down as quickly as possible into something non-toxic.
Before you advocate something like the widespread reintroduction of DDT, it would be best if you educated yourself on its effects, methods of application, and side effects. There's a lot of misinformation out there to the effect that DDT is a panacea; it's not. For example I've seen one old toxicology study that is frequently cited by anti-environmentalists as proof DDT doesn't have toxic effects on birds. The flaw with that study, and the reason that they don't have more recent studies to cite, is that the question of DDT per se in the environment is moot; it doesn't last long enough to bioaccumulate. It's actually the very long-lasting DDD and DDE breakdown products that are the culprits.
It would be reasonable to reintroduce DDT for domestic applications, provided that we can structure its use so that the effectiveness of the program isn't undermined by DDT that has been stolen and diverted to agricultural use. I can tell you from experience that theft is an enormous problem for teams operating in places that have serious endemic malaria problems.
So it really comes down to this: is the lower cost of DDT offset by the security and audit trail you need to ensure the program's long viability? Either way there's no reason to not eradicate malaria, and we don't need DDT to do it. The cost of eradication is tiny compared to the cost malaria has in economic output, lives shortened, and political destabilization.
I looked into RDF and OWL a few years back. OWL is more powerfully expressive, which is not always a good thing.
RDF more closely matches the way relational databases work; in many ways SPARQL is like SQL without the relational model baggage (or the awesome query optimizers that come with that). OWL on the other hand understands a much more powerful subset of first order logic, including generalizations such as "such-and-so cannot be true for any object of a certain type." On the surface this seems like it would be great -- it eliminates the need for things like the trigger mechanism that all non-trivial relational database platforms provide. RDBMSs need triggers because the underlying relational model doesn't have any provision for expressing logical constraints; it's only good for saying the FOO of BAR is BLECH -- just like RDF.
But the downside of having a larger subset of f.o.l. is that you have to pay much, much more attention to the logical consistency of your model. This may sound to naive ears like something you always want to do anyway, but in practice it's often simply impossible. Consider the problem of two groups who don't view the world the same way sharing some kind of information base. Because they don't agree with each other and probably never will, their models of the world are inconsistent with each other. That precludes them from sharing an OWL dataset unless you've carefully segregated out the areas of disagreement, which probably means that the common bits meet neither group's requirements. However neither would have any objection to asserting that the AUTHOR of WAR AND PEACE is LEO TOLSTOY -- exactly the kind of simple, uncomplicated assertion that the relational model and RDF were designed to represent.
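The distinction can be sketched in a few lines of plain Python (not a real triple store or OWL reasoner; the predicate names and the sample constraint are made up for illustration). RDF-style data is just a bag of subject-predicate-object assertions that anyone can pool; an OWL-style model also carries statements about what cannot be true, so merging two groups' data can yield a logically inconsistent whole:

```python
# RDF-style: a dataset is just a set of (subject, predicate, object) triples.
triples = {
    ("War and Peace", "author", "Leo Tolstoy"),
    ("War and Peace", "published", "1869"),
}

def query(ts, s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return {t for t in ts
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# OWL-style: the model also asserts things that *cannot* be true.
# A made-up constraint: no work may have two distinct authors.
def consistent(ts):
    subjects = {s for (s, _, _) in ts}
    return all(
        len({o for (s, p, o) in ts if s == subj and p == "author"}) <= 1
        for subj in subjects
    )

# Both groups happily share the plain assertion...
print(query(triples, s="War and Peace", p="author"))

# ...but merge in a second group's conflicting data and the combined
# knowledge base violates the constraint.
merged = triples | {("War and Peace", "author", "Count Tolstoy")}
print(consistent(triples))   # True
print(consistent(merged))    # False
```

The simple triple assertions merge without complaint; it's only the constraint layer that makes the combined dataset "wrong," which is the trade-off described above.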
The conclusion I drew is that OWL is a fine language for creating logical models, but that RDF was more practical for systems where diverse users share data. OWL might well be very useful on the boundaries of such systems, enforcing policy rules about data that is allowed out or into the system from outside.
It doesn't matter what the code produces. You use an independently developed and open system to confirm that the code in question conforms to the method. Then it's a matter of showing the method is valid, which of course is the important question. Patents don't mean something accomplishes what its inventor purports it does.
The source code shouldn't matter; it's the method used by the source code. If that method cannot be reproduced without the source code, then the output of the program is worthless. If it can be reproduced without the source code, then the output of the program may have value, if the method used stands up to scientific scrutiny.
As it stands all the prosecution has amounts to a black box with a red and green light on top and a slot in the side into which a couple of samples are dropped. If the light subsequently turns red, then the prosecutor wants the jury to believe the samples match. But they have no reason to believe that other than the prosecutor telling them to trust the box.
How is this any different than calling them up and telling them what is broken?
I can answer that. I've had a lot of experience fixing up information flows in public agencies. The difference is in what happens to the information in your call once it's in the hands of the agency. It often falls into an irrationally complex morass of criss-crossing processes. Watching a government or non-profit organization respond to a new piece of information can be like watching an individual pachinko ball drop through the machine's forest of pins, only you can be sure that it will eventually drop into the right slot; the question is whether it will make it there in time. The morass into which your request falls isn't designed; it has evolved, and chances are nobody has ever had the job of seeing whether what it has evolved into makes any sense -- until a new system is planned.
One way to think about an organization is to compare it to the best organizations of that kind. And the best governmental organizations excel at performing routine tasks. None that I have ever seen excel at reinventing themselves; that takes the introduction of an outside force. It also takes the eyes of an outsider with a knack for seeing which processes generate value and which processes simply support other processes. That's not always clear. I've had clients, with a simultaneously smug and hopeless air, hand me a fat ream of "critical reports" that a system absolutely had to generate. The first time this happened I was alarmed given my slim budget, but I quickly learned to ask this question: which of these "reports" do you actually use to make decisions with? That question inevitably causes the ream of "reports" to slim down to a half dozen or so.
But if the hundred or so other things in that stack aren't things the organization uses to make decisions with, then what are they and why are they produced? Inevitably the answer is that they're produced to carry data from one process to another -- something that a computer system can do without any marginal input of labor. That means that upwards of 90% of the office work can be eliminated.
The result of eliminating that work isn't (as is often feared) that jobs disappear; it's that the organization becomes orders of magnitude more responsive. I've worked with mosquito control agencies that went from sending an inspector out days or weeks after the report of a problem (by which time it is certainly past) to sending out an inspector the same day and if necessary a spray truck that very night. I've worked with non-profits that went from taking weeks or months to deposit donations to depositing the check and sending out the thank-you letter the very same day. It's not hard to be responsive when you have a system that gets the right information to the right person immediately; it's impossible when your systems take weeks to get you information you need right away.
How do things get that bad? Not because you have bad people. You start with inexperienced people who learn how to do their jobs from the people who came before them; and since nobody has a full view of the entire system they come to see their job as keeping the system running more or less as it has been. That's not because they're bad or stupid; it's the best anyone can do under the circumstances. When there was a problem in their part of the system, they did their best to patch that part so the problem went away.
Experienced programmers will recognize this anti-pattern; it's called "lava flow". Eventually the system becomes more patch than productive process and the effort to keep it running approaches or exceeds the effort spent on doing things that are intrinsically valuable.
So yes, I absolutely believe installing a system, particularly a system with mobile data input, can have a massive impact on a public agency's responsiveness. I've seen it happen repeatedly. Imagine you're in charge of dispatching workers to deal with problems, but all you have for information is a half dozen printed spreadsheets, some of which have no data newer than a month old. Now imagine the difference if you can pull up a map of any kind of complaint -- abandoned car, pothole, litter, dead animal -- and the data is current as of a few minutes ago. You can now send your road patching crew out to that cluster of potholes; your animal control officer goes straight to where there were reports of strays today rather than weeks ago. Just the time saved crisscrossing the city, often in vain, is a massive force multiplier.
But not necessarily at a higher rate than the established base rate.
I took an algorithms course at Harvard. It was just as hard as anything I took at MIT, and I took 18.313 back when G.C. Rota was still alive (greatest math teacher ever, by the way).
Of course there are people who are there because they're "legacies", and I suppose they take different courses, but the kids who get in because they're smart are pretty damned smart.
As for left-wing indoctrination, Harvard is a bastion of the establishment. The prep school crowd in particular has been thoroughly indoctrinated in the perfection of capitalism and the moral entitlement of the ruling classes. It doesn't mean that some of them aren't apostates, of course.
Car customizer fits a 1952 Ford Flathead V8 into a Tesla Model S...