The Office Space riff is just so much cadmium cojones: https://www.youtube.com/watch?v=FECIYlo3KRY
extra battery USB-charger things
Yup, I would definitely agree with this.
My setup is:
- 10'000 mAh USB powerbank (good for ~4 full recharges of the smartphone)
- small, compact USB wall wart that can still deliver at least 1'000 mA (2'100 mA models in the same build size are starting to appear)
- USB roll-up cable (takes very little space and doesn't tangle)
With that I'm good to go everywhere for long periods of time. I can recharge the smartphone on the go with the powerbank,
or plug it into the wall, or even into the electrical outlets available in most European trains, to recharge the battery.
That has really helped me since I switched from an old dumbphone (holds a charge for at least a week) to a modern smartphone (very heavy on cloud usage; the charge holds a day under heavy 3G usage).
The equipment is still compact and doesn't eat up too much space in pockets/bags/backpacks.
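As a rough sanity check on that "~4 full recharges" figure, here's the arithmetic (the phone battery size and the conversion efficiency are my assumptions, not the poster's):

```python
# Rough check of the "~4 full recharges" claim.
# Assumed numbers (not from the comment): a ~2'000 mAh phone battery and
# ~80% round-trip efficiency for the 3.7 V -> 5 V -> 3.7 V conversions.
BANK_MAH = 10_000   # advertised powerbank capacity (at cell voltage)
PHONE_MAH = 2_000   # assumed smartphone battery capacity
EFFICIENCY = 0.80   # assumed conversion + charging losses

recharges = BANK_MAH * EFFICIENCY / PHONE_MAH
print(f"~{recharges:.0f} full recharges")  # -> ~4 full recharges
```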
Note that lithium batteries are a delicate technology.
It's better to either go with a well-known brand, or at least buy from a well-known shop, so it's easy to invoke the warranty in case of a defect.
They might all be produced in China, but at least you can reach someone who is liable for the build quality.
Avoid buying lithium batteries from shady sellers on eBay (or Taobao, etc.) who promise you 30'000 mAh for $20 (they might explode!)
Of course, it helps to live on a continent where the standard power connector is compact and easily interoperable (and chargers all handle the 100-240 V range by default).
Bad luck for you if you live in the UK with its humongous power connector.
Call me a cynic, but I see this happening every day on the UK motorway network.
That's also Google's own experience from when they gave out a few of their first beta-version autonomous cars (regular cars that can go fast, modified with a ton of sensors; early versions driven for the first time not by the developers and engineers who built them but by regular testers, so they might crash or hit other weird bugs at any moment) to some employees to test.
The employees were supposed to be very watchful, pay attention to everything the car was doing, and not rely on it, because such early test versions might go wrong at any moment without notice.
Instead, to the developers' and engineers' horror, people blindly trusted the AI and took the opportunity to read, take a nap, etc., and generally paid no attention at all to what the car was doing.
That's why Google is now pushing for cars that are 100% autonomous without even a steering wheel (an emergency "stop" button being the only control available) and that drive slowly.
People are going to be careless at the first chance technology gives them. So let's not take risks, and make sure the thing is idiot-/Darwin-Award-proof.
I saw a documentary on the development of self-driving cars some 10 or more years ago and they were already using other visual cues besides lane markers. For example the subtle difference in shade between the track of the tires and the lane center.
The idea isn't bad, but I haven't yet seen a production car with such capabilities.
That's perhaps because most cars currently deployed with lane detection must cope with bad weather: as soon as the light conditions are sub-optimal and/or the road is slightly covered by rain water (or is simply brand new and has no tire marks yet), those markings aren't nearly as visible as the retroreflective paint used for the white lines (you need snow to cover that, or the paint must be really old and obstructed by a good layer of reflective water; those are still events that happen around here and disrupt the LDWS of cars I've been driving).
In my case it's more a question of habit:
I write a lot of code professionally, so nearly always when I write that sound
I mean the verb "to break", as in the keyword used in most languages to break out of a loop.
I seldom discuss cars, and thus rarely write the noun "the brake"
(and when I do discuss them, I usually speak another language. English isn't the language I speak the most, as one might notice.)
There's no difference between "change in speed of light", "change in distance", and "change in travel time for light". They're all the same thing.
More or less, in broad strokes.
Don't both instruments detect very small changes in round-trip travel time for light, comparing one direction to the other?
To oversimplify:
- The MM experiment was about measuring a clear and constant difference in the speed of light depending on the orientation of the two beams (any fluctuation would be noise and discarded; it's the average speed in each direction of space that matters).
- The LIGO experiment is about measuring a fluctuation of distance. It fluctuates around a set mean (because the speed of light is constant no matter the direction or travel speed), but as waves of space distortion go through it, tiny oscillations might happen. (In other words, it's *the noise* that is important; the mean speed should be constant no matter the direction.)
In very broad strokes (actual physicists and historians are both going to laugh at me):
Back in the 19th century, the ether theory held that light was a pure wave (like sound waves, etc.). Like any wave, it needs a medium to travel through (e.g. speech is a sound wave that travels through air, in the form of a compression wave of that air; ripples are surface displacements of water; etc.)
Since light travels through space, space shouldn't be a void. Instead there should be a medium that can carry this wave (just as air carries sound and water carries ripples), except that this medium should be much thinner and lighter than air, because everything behaves as if there were void between the planets.
(Note: under this hypothesis, ether is just a simple substance/medium like anything else, just extremely light/thin.)
Now the question is: how would you test it?
Well, if stars roam around a universe filled with a medium called ether, that means they move relative to the ether fluid.
And depending on Earth's motion relative to the ether, the speed at which light waves are carried in this ether should look different along the direction of the ether flux (relative to us) versus perpendicular to it.
Think "Doppler effect in water". Or think of the distorted ripples caused by something travelling on the surface of water: seen from the traveller's point of view, the waves in front seem to move away more slowly (as the traveller catches up to them), whereas to the sides they move away at the usual speed.
So ether could be proved by setting up an interferometer. Then, as you rotate it (thus aligning the beams differently: which beam lies along Earth's direction of travel through the ether, and which is perpendicular to it), you should notice a shift in the interference pattern. You'd be detecting the speed of planet Earth's travel across the ether medium.
Result of the experiment: nothing. Zilch. Nada. No matter how precisely you measure, no matter how you orient the setup, *the speed of light is the same in all directions*. It's not direction-dependent, and it doesn't depend on some flux of an "ether medium".
None of the variation that was expected, given the speed of planet Earth's travel through the ether, showed up.
A more precise instrument wouldn't have helped. Planet Earth travels rather fast around the Sun, so the difference in speed, though subtle, should definitely have been noticeable.
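To put numbers on that (my own back-of-the-envelope figures, not part of the original comment): with Earth's ~30 km/s orbital speed and the ~11 m effective arm length of the 1887 apparatus, the ether model predicts a fringe shift of roughly 0.4 fringes, which Michelson and Morley's instrument could comfortably resolve:

```python
# Expected Michelson-Morley fringe shift under the ether hypothesis.
# All figures are approximate, textbook-level values.
C = 3.0e8            # speed of light, m/s
V = 3.0e4            # Earth's orbital speed around the Sun, m/s
L = 11.0             # effective arm length of the 1887 apparatus, m
WAVELENGTH = 5.9e-7  # sodium light, m

beta_sq = (V / C) ** 2                       # (v/c)^2, about 1e-8
fringe_shift = 2 * L * beta_sq / WAVELENGTH  # shift when rotating the setup 90 deg
print(f"(v/c)^2 ~ {beta_sq:.1e}, expected shift ~ {fringe_shift:.2f} fringes")
```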
Now back to LIGO:
Since the 19th century, particle/wave duality has been known (and has been proven by slit experiments, etc.).
Special Relativity states (among other things) that the laws of physics are invariant. The speed of light in a vacuum is always 1*C, no matter the reference frame, no matter if you're travelling, etc. (Thus, although the results of MM were surprising given the ether model, they're absolutely what's expected under special relativity. Zero surprise. The speed of light should be 1*C in vacuum, no matter how you orient your experiment relative to our planet's orbit around the Sun, or the Sun's travel around our galaxy, or the Milky Way's motion since the Big Bang.)
General Relativity states (among other things) that gravity (and momentum, energy, etc.) is caused by space itself getting curved (not by a substance or a medium like ether; it's space-time that bends).
One over-simplified metaphor is the popular 2D/3D model of balls placed atop a rubber sheet (2D) which gets deformed (in 3D) (except that, according to string theorists, in real life the rubber sheet could have an insane number of dimensions).
Another way to think about it (pure 2D): a sheet of squared paper. You draw dots on it (objects), and around the objects the grid automatically gets distorted; the grid suddenly isn't square anymore. Other objects travelling around don't actually curve *their trajectories*; they always keep moving forward on the grid. But because the squares of the grid aren't squares anymore but squished, this *forward travel*, seen from the outside, looks like a curve.
This makes an interesting point to test: light has no mass. According to Newtonian physics, it should keep travelling straight ahead, as it doesn't feel any gravity pulling on it (the force of gravity is proportional to the product of both masses, and as light's is 0...).
But according to relativistic theory, light *should bend*, because it keeps travelling forward (like anything else) but does so across space that is bent (along a "grid" whose "squares" are "squished"). So light should bend around massive objects, even though it has a mass of exactly zero.
Fast forward, and gravitational lensing DOES work. (The predicted bending of light around very massive objects like black holes has been measured. The space around a black hole measurably works as if it were a huge optical lens.)
Now these distortions themselves should have a speed limit. The effect of gravity isn't instant: it travels across space (like light) and has an upper limit of travel speed (it should travel at the speed of light; 1*C is the top limit for anything that travels).
To go back to the 2D/3D metaphor: imagine using a high-speed camera to film the rubber sheet as you drop the ball on it. As soon as the ball touches the sheet, you should see ripples forming and travelling away until eventually the object settles into a curved well.
The same is predicted to happen in real life. Massive objects create big distortions of space. If the object changes, the distortion doesn't instantly travel with it. Instead, one "could see" ripples travelling outward from the object as the distortion slowly settles into the new conformation matching the massive object's new position. That's gravitational waves: ripples as the distortion of space settles to adapt to the changes of ultra-massive objects.
If you have a massive enough event (like two very big black holes orbiting each other), the waves might get big enough to be measurable.
So to detect that, you *also* set up an interferometer. *BUT* this time, no matter which way you orient your experiment (or, given LIGO's huge size, no matter which direction planet Earth's rotation orients it for you over the day/night cycle), you should always find that the speed of light is 1*C in vacuum, in all directions.
Instead, you just sit and wait, and try to measure things as precisely as possible. Eventually, a ripple from something massive enough will cause a big enough distortion to be noticeable in your experiment.
That's what has now definitely been confirmed to have been measured at LIGO.
Again, both experiments rely on interferometers.
- MM relies on there being a medium - called "ether".
- MM would have expected a constant N% difference in the speed of light along the direction of planet Earth's travel across the ether.
- MM doesn't need extra precision beyond some point. Either those N% are there or they're not. Using LIGO-level precision would just put extra decimals at the end of this constant.
- LIGO doesn't rely on a medium. It's space-time itself that is bent. Vacuum gets shaped funny around massive objects (as proved by gravitational lensing).
- LIGO doesn't find a difference in the speed of light in some direction. It's always 1*C in all directions.
- LIGO, on the other hand, might see noise: tiny fluctuations. These fluctuations are predicted to happen due to very significant bending of space-time by very big masses moving at very high speed.
- The precision of LIGO *does* matter. The higher the precision, the tinier the fluctuations that can be detected, and thus the less you rely on a batshit-extreme massive event like a pair of "galaxy core"-level black holes merging with each other at relativistic speed in the parking lot just outside the experiment.
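For a sense of scale (again my own illustrative numbers, not from the comment): the first detected merger produced a strain of order 10^-21, which over LIGO's 4 km arms corresponds to a length change far smaller than a proton:

```python
# How tiny the LIGO signal is: arm-length change for a strain of ~1e-21
# (order of magnitude of the first detected black-hole merger).
ARM_LENGTH = 4.0e3         # LIGO arm length, m
STRAIN = 1.0e-21           # dimensionless strain, approximate peak value
PROTON_DIAMETER = 1.7e-15  # m, approximate

delta_l = STRAIN * ARM_LENGTH
print(f"arm length change ~ {delta_l:.0e} m")
print(f"that's ~{delta_l / PROTON_DIAMETER:.4f} of a proton diameter")
```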
I hope this makes the difference clear.
Now for the obligatory disclaimer (to paraphrase Dr. Leonard McCoy): I'm a (medical) doctor, not an astrophysicist.
My research happens at much tinier scales than that, so take my explanations as an over-simplification, with a grain of salt.
Sure, the 1880s apparatus wasn't going to detect gravity waves, but that's just a matter of the instrument's sensitivity.
Whereas, as stated above, the reverse is not true.
LIGO-level precision isn't necessary to detect a constant bias in the speed of light along the direction of planet Earth's travel across the ether medium. It would only have added more digits of precision to the measured factor (or, as it turned out, more zeros after the actual "1.00000..." factor measured in real life).
We still call an electron microscope a microscope.
Well, actually there are *scanning* electron microscopes (which image surfaces) and *transmission* electron microscopes.
We actually DO make a distinction.
(And for that one: you can trust me, I *do* work in life science.)
Better source, including examples of deaths due to inaccurate cell-phone 911 location:
Cellphones must be a godsend to 911 in this regard. I wonder how many people died over the years because they couldn't tell the ambulance where to come?
Nope, quite the opposite. Imagine you're in a high-rise apartment building. First, you're indoors, so the building prevents you from getting an accurate GPS location fix, and emergency services will only know approximately which CITY BLOCK you are on. Second, given time the location may get more accurate, but even once they've figured out which building you are in, they have no idea which apartment you are in. If you're unable to give your name and address, and/or open the door, it will take emergency services HOURS to find your cold dead corpse.
With landlines, the phone company kept detailed records of exactly where each phone line was installed, so the INSTANT you dialed 911, the dispatcher had your exact address, including apartment number, coming up on-screen. With VoIP E-911, you are required to register your exact location with the service provider, so again 911 calls have your exact address the moment you call. Cellular operators have fought tooth and nail to resist any technical improvements to cell phone E-911 which would give emergency services a quicker and more exact fix on your location, because they would cost a few dollars more per customer:
I have no idea how to get the GPS map data changed.
Put up a simple, self-closing, self-opening gate. Drivers will see it and realize that it CAN'T possibly be a public road, but everyone can still drive right through without getting out of their vehicle.
That's the best solution, because you'll NEVER get ALL the navigation apps to update their maps, and even if they did, some folks will continue using an old set of offline/downloaded data indefinitely.
However, if you are willing to invest some time, you can fix MOST navigation apps by contacting the few biggest data providers. Expect it to take a year before the changes slowly start trickling out to navigation app users:
Here in the desert, water is a BIG issue.
Not really. If it was, they'd stop the farmers growing alfalfa in the California desert and then exporting it to China. The "BIG issue" is an utterly broken, antiquated system of pre-1914 water rights.
Remind me again who is having their free speech silenced by this
Google. And in practice, the people who rely on it to have their content be found (i.e. everyone else).
3. Why does Google have free speech rights that normal companies don't, e.g. credit references can't report things that happened long ago by law, and can't claim free speech allows them to.
Maybe those companies should? The solution to "some idiots excessively weight events that happened 20 years ago" is not censorship of facts, it's to educate people that other people change and that needs to be taken into account.
"Little prigs and three-quarter madmen may have the conceit that the laws of nature are constantly broken for their sakes." -- Friedrich Nietzsche