IANAL, nor a politician, but IMHO the furloughs are not about saving money.
They are a result of the federal government not having authorization to spend any money.
It's like a company in bankruptcy proceedings: the trustee takes over and protects the assets while working to get the best outcome for the creditors.
"these are facilities that don't have any services being discontinued"
If that were true, nobody would be unhappy about their closure, and those places wouldn't have been very safe even before the government shutdown (no maintained roads, trails, or safety equipment; no animal control, fire, law enforcement, or first aid services; etc.).
What it's about is both preventing damage to assets and preventing spending of any money not deemed absolutely essential, which they have been instructed to do from the top down.
If a website needs a security update for a zero-day exploit, or gets hacked or vandalized during the furlough, the IT guys are not allowed to do anything about it because they are on furlough. They are not deemed essential employees and therefore they cannot do work, any work, including volunteering to support the website (nothing they can do about that; in fact, they can get in trouble for breaking those rules).

We should consider ourselves lucky that there is a webpage with a notice: they could have simply powered the machines (cloud, whatnot) off. What if the air conditioning turns off and the server room overheats, or there is some kind of water leak in the room, damaging the running server(s)?

If I were responsible for an Internet-exposed website, instructed to protect the assets with only absolutely essential expenditures, guaranteed to be unable to do anything for it for an indefinite amount of time, and with nobody willing and able to take on the responsibility during my absence, I would shut it down too, to prevent being faulted for anything happening to it while I was gone.
If an accident or crime inside a national park requires, for example, a road closure, a rescue, a fire department response, or an arrest (say, for damaging public property, public intoxication, etc.), then the government can't help and can't control the damage, because there is no authorization to spend any money on the work and materials of the rescue, fire control, and so on. So the best way to prevent damage in the park is to close access to it completely.
Bill Gates might have made his money in a company that now competes with Google, but he really is comparing apples and oranges.
The big difference between the Bill & Melinda Gates Foundation and Google is that one organization is a publicly traded Internet company that does things to make money, and the other is a privately held charitable organization that has received lots of money and is looking for a way to spend it.
So, for one of the two it makes sense to try to cure malaria, for the other it makes more sense to get more people access to the Internet. Both things will be good for the world.
While I understand pragmatism vs. perfection, quite probably your code (or your OS/libraries/API definitions) still had actual bugs; they simply didn't result in the same (language-correct but unintended) behaviour, or they didn't get triggered (as easily) in debug compiles. Even simple out-of-bounds access bugs could still exist.
I personally don't feel fully comfortable releasing programs/firmware with bugs that I know depend on compiler optimization or instrumentation flags, unless I understand exactly how the compiler flag causes that for each specific bug. Actually, when possible I like to compile and test a program with varying flags, and for varying ABIs and platforms, for the sole purpose of identifying and fixing those bugs. It can be a great help in locating bugs, similar to running a program under valgrind, with libefence, etc.
Remember, the drones flying now are the first of their kind, all built to their makers' deadlines and priorities.
I'm not in this industry, but for drones that can fly multi-day missions with thousands of miles of range and can cruise at high altitude, and given the many large stretches of non-hostile airspace around the globe, even I can come up with various navigation solutions that keep the hardware safe from GPS blocking or spoofing and would probably often allow the mission to continue uninterrupted. Especially if you're thinking about squadrons of drones with varying configurations... those would probably be able to take out the GPS jammer as an automatic side mission.
It will cost more than a $25 GPS receiver module, but it's not for navigating your girlfriend to the beach and back either.
You will be able to trick some of my drones into landing on your secret base, but those will be the kind that carry little more than a beacon that starts transmitting when the flying stops. Perhaps also a countdown clock that goes beep, with a red, a white, and a blue wire coming out of it.
"Rather than running in the same CPU ring level of protection and potentially crashing the OS when you have a driver bug."
It's nothing but theory that a microkernel-controlled computer would function just fine with buggy drivers, as there is no sufficiently complete microkernel OS to even compare against the rest of the real world's top OSes.
Actually, we have here an example of a microkernel OS that 'they say' has had the opportunity to be fantastic with a buggy USB or SATA driver for years, yet somehow those drivers took forever to mature enough to be available.
Perhaps it's not such a great idea to allow bugs by design in something as fundamental as a driver.
Or perhaps that bug in the SATA driver will still corrupt your data on disk, or that USB driver will still abort your print job, even if the driver gets reloaded right afterwards and the computer pretends nothing is wrong.
The whole 'will not crash with buggy drivers' or 'more secure because of how the drivers are isolated' claim is not only unproven in the real world; it's also exactly where the microkernel is a solution in search of a problem that doesn't exist.
Nice for some people, but... there is always a 'but'... Here one 'but' is that Straight Talk doesn't allow tethering. The only way to do that (legally) on the AT&T network is with AT&T's 'Mobile Share' contract, which includes paying for the phone subsidy...
What you're saying is that it's impossible to truly fully simulate a human brain, based on the observation that fully simulating a biological cell hasn't been done yet.
But if they could make progress and do that, they could be simulating a brain that is alive. Then how would they be certain that this hasn't already happened, and that the simulation wasn't alive?
And beyond that, how would they actually be certain that an incomplete simulation of a human brain is not alive? What if it's equivalent to a brain of a cat?
At what time does it become ethically wrong to terminate the simulation?
Not knowing that it's not alive doesn't mean that you're not killing it if you do something that would kill it if it were...
Shooting the box with Schrödinger's cat in it might kill the cat, if it were (still) alive, so perhaps you should refrain from that.
"We understand weather enough to simulate it."
"We don't understand how the human brain works enough to simulate it."
Simulations allow numerical verification of models & theories.
Simulations are part of the scientific process to 'understand' things better.
A lot of that weather knowledge came from trying to simulate it and seeing where it was wrong and where it was right.
It most definitely was not a situation where the weather scientists said, 'hey now that we know the weather, let's put it in a computer simulation' and then good weather forecasts magically came out of the machine. The process is highly iterative. 'measure real world -> maybe it works like this -> simulate model -> nope or yup, repeat'.
And actually, AFAICS, state-of-the-art weather simulation has come so far that it seems mostly a sparse-data problem these days. But that doesn't mean we should stop trying to improve the simulation algorithms and focus solely on increasing data volume and accuracy; for one, we may not be using the data we have completely correctly yet.
Similarly, we'll learn more about how brains work the more we try to mimic them in simulations.
As soon as you truly fully simulate a human brain, wouldn't it be murder to turn the machine off?
I'm sure that there are applications that would wish to put February 31 into a date field in a database, so why restrict them?
I'm not a geneticist (IANAG?), but while reading the article I thought they had just discovered that the rare variations found today are mostly 5000 years old or less. That doesn't mean humans are evolving faster; it might also mean that rare variations simply don't survive more than about 5000 years (perhaps they disappear, or somehow get masked, or simply cease to be rare)...
For example, when you put a drop of colored paint in a bucket of white paint, the colored paint is initially rare. But if you wait 5000 years and then look at the bucket, it will be a bucket of off-white paint without a 'rare' drop of color...
"...on evaluating the likely accuracy of the data..."
Don't forget that currently available barometric data from other sources is quite sparse in most locations.
You have to start somewhere. Apps can be updated with improvements...
A lot of the data issues you mention occur more in urban environments than in suburban environments or out in the country. And the currently available barometric data from other sources is probably a lot sparser outside of urban environments than it is in areas with subway systems and tall buildings (people do have smartphones with enough data coverage for sensor networks well outside of large cities).
The app can certainly also look at other sensors, such as the acceleration, positioning, and motion sensors, to help with data issues. The 'network' can find bias or accuracy issues from nearby measurements by other devices. The 'in a car with open windows' situation is easily detectable by the high rate of variation in pressure, even for a short measurement duration (and even so, a long-term average might still yield useful information; the 'in motion in the car' bias can be measured when the vehicle stops, using the motion sensors).

The app could even learn, e.g. 'when near this wifi, report the measurement with a previously learned pressure bias', or 'when near this bluetooth, it's in a car'. With the light sensor and the positioning/acceleration sensors, the app can be quite aware of the phone's various states (sitting motionless on a flat surface, carried inside of something, carried open, held up to an ear, etc.).

I'm pretty sure it is possible to make an app that collects a tremendous amount of useful data for weather prediction that would otherwise be unavailable (actually, temporal variations from motionless devices in a location with unknown bias can be very useful raw data, and the strengths and positions of the wind gusts around buildings may even be useful raw data).
Also, don't forget the power of large numbers: you don't have to use all the data. Outliers or inaccurate measurements/devices can be detected as such when surrounded by enough other measurements, and SNR improves with filtering across multiple measurements (also from multiple devices).
"Scientific method is rather simple"
It is in mathematics, in some areas of physics, and in theory. But in the real world, filled with people (scientists) working in real-world organizations, and especially in the field of medicine, the public 'outcome' or 'status' of the scientific method is strongly influenced by politics and economics.
In addition to that, human health is such a complicated system to measure that pretty much every medical field study works from very sparse data points. It should be no surprise that in medicine it is regularly discovered that certain things had previously been misunderstood.
And as a result, not everybody follows the herd. And that is simply how the world, filled with human beings, works.
"it's not that people don't trust science, it's that they don't trust the motivations of the people practicing it."
Bingo, we have a winner.