Comment PowerBank + WallWart + RollUp (Score 2) 282

extra battery USB-charger things

Yup, I would definitely agree with this.

My setup is:
- 10'000mAh USB powerbank (good for ~4 full recharges of the smartphone)
- small compact USB wall wart that can still deliver at least 1'000mA (2'100mA models in the same build size are starting to appear)
- USB roll-up cable (takes very little space and doesn't tangle)
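Back-of-envelope check on that "~4 recharges" figure (a sketch; the ~80% usable fraction and the 2'000mAh phone battery are assumed figures, not measurements):

```python
# Rough estimate of full phone recharges from a USB powerbank.
# Assumption: real-world powerbanks lose charge to voltage conversion
# and heat, so only roughly 80% of the rated capacity is usable.
POWERBANK_MAH = 10_000   # rated capacity
EFFICIENCY = 0.80        # assumed usable fraction after conversion losses
PHONE_MAH = 2_000        # assumed smartphone battery size

usable_mah = POWERBANK_MAH * EFFICIENCY
recharges = usable_mah / PHONE_MAH
print(f"~{recharges:.1f} full recharges")  # ~4.0
```
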

With that I'm good to go everywhere for long periods of time. I can recharge the smartphone on the go with the powerbank,
or plug it into the wall, or even into the electrical outlets available in most European trains, to recharge the battery.

That has really helped me since I switched from an old dumbphone (holds a charge at least a week) to a modern smartphone (very heavy on cloud usage; the charge holds a day under heavy 3G usage).

The equipment is still compact and doesn't eat up too much space in pockets/bag/backpack.

Note that lithium batteries are a delicate technology.
It's better to either go with a well-known brand, or at least buy from a well-known shop, so it's easy to claim warranty in case of a defect.
They might all be produced in China, but at least you can reach someone who is liable for the build quality.
Avoid buying lithium batteries from shady sellers on eBay (or Taobao, etc.) who promise you 30'000mAh for $20 (might explode!)

NOTE:
Of course, it helps to live on a continent where the standard power connector is compact and easily interoperable (and where chargers are all 100-240V range by default).
Bad luck for you if you live in the UK, with its humongous power connector.

Comment Google's experience (Score 1) 601

Call me a cynic, but I see this happening every day on the UK motorway network.

That's also Google's own experience when they gave a few of their first beta-version autonomous cars (as in: regular cars that can go fast, but modified with a ton of sensors; early versions, driven for the first time not by the developers and engineers making them but by regular testers, so they might crash at any moment or hit other weird bugs) to some employees for testing.

The employees were supposed to be very watchful, pay attention to everything the car was doing, and not rely on it, because such early test versions might go wrong at any moment without notice.

Instead, to the developers'/engineers' horror, people blindly trusted the AI and took the opportunity to read, take a nap, etc., and generally paid no attention at all to what the car was doing.

That's why Google is now pushing for cars that are 100% autonomous without even a steering wheel (an emergency "stop" button being the only control available) and that go slowly.
People are going to be careless at the first chance technology gives them. So let's not take risks, and make sure the thing is idiot-/Darwin-Award-proof.

Comment Not in production (Score 1) 601

I saw a documentary on the development of self-driving cars some 10 or more years ago and they were already using other visual cues besides lane markers. For example the subtle difference in shade between the track of the tires and the lane center.

The idea isn't bad, but I haven't seen a car in production yet with such capabilities.

That's perhaps because most cars currently deployed with lane detection must cope with bad weather. As soon as the light conditions are sub-optimal and/or the road is slightly covered by rain water (or is simply brand new and has no tire marks yet), those markings aren't nearly as visible as the retroreflective paint used for the white lines (you need snow to cover that; or the paint must be really old and a good layer of reflective water must obstruct it, which are still events that happen around here and disrupt the LDWS of cars I've been driving).

Comment Or habits (Score 1) 601

In my case it's more a question of habit:
I write a lot of code professionally, so nearly always when I write the sound /breɪk/,
I mean the verb "to break", as in the statement used in most languages to break out of a loop.
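For the non-programmers reading along, a minimal illustration of that `break` statement (Python here, but the keyword is spelled the same in C, Java, and most other languages):

```python
# Scan a list for the first negative number; `break` exits the loop
# as soon as one is found, skipping the remaining items.
readings = [3, 7, -2, 5]
first_negative = None
for value in readings:
    if value < 0:
        first_negative = value
        break  # the "break" in question: jump out of the loop
print(first_negative)  # -2
```
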

I seldom discuss cars, and thus rarely write the noun "the brake"
(and when I do discuss them, I usually speak another language; English isn't the language I speak the most, as one might notice).

As /. is filled with computing people, that might explain why the brake/break mix-up is common, even among people who understand the difference between bare/bear or lose/loose.

Comment Almost - Ether vs LIGO (Score 1) 446

There's no difference between "change in speed of light", "change in distance", and "change in travel time for light". They're all the same thing.

More or less, broadly speaking.

Don't both instruments detect very small changes in round-trip travel time for light, comparing one direction to the other?

Not exactly.
To oversimplify:
- The MM experiment was about measuring a clear and constant difference in the speed of light depending on the orientation of the two beams (if there's any fluctuation, it should be noise and discarded; it's the average speed in each direction of space that matters).
- The LIGO experiment is about measuring a fluctuation of distance. It fluctuates around a set mean (because the speed of light is constant no matter the direction or travel speed), but as waves of space distortion go through it, tiny oscillations might happen. (In other words, it's *the noise* which is important. The mean speed should be constant no matter the direction.)

In very broad strokes (actual physicists and historians are both going to laugh at me):

Back in the 19th century, the ether theory held that light was a pure wave (like sound waves, etc.). Like any wave, it needs a medium to travel in (e.g.: speech is a sound wave that travels through air, in the form of a compression wave of said air; ripples are surface displacements of water; etc.)
As light travels through space, that should mean space isn't void. Instead there should be a medium that can carry this wave (just like air carries sound and water carries ripples), except that medium should be much thinner and lighter than air, because everything behaves as if there were void between the planets.
(Note: under this hypothesis, ether is just a simple substance/medium like anything else, just extremely light/thin.)
Now the question is: how would you test it?

Well, if stars roam around a universe filled with a medium called ether, that means they move relative to the ether fluid.
And depending on the relative motion of Earth in the ether, the speed at which the light waves are carried should look different along the direction of the ether flux (relative to us) versus perpendicular to it.
Think "Doppler effect in water". Or think of the distorted ripples caused by something travelling on the surface of water: seen from the traveller's point of view, the waves in front seem to move away slower (as the traveller catches up to them), whereas sideways they move away at the usual speed.

So ether could be proven by setting up an interferometer. Then, as you rotate it around (thus aligning the beams differently: which beam is along Earth's direction of travel through the ether, and which is perpendicular), you should notice a shift in the interference pattern. You'd be detecting planet Earth's speed of travel across the ether medium.

Result of the experiment: nothing. Zilch. Nada. No matter how precisely you measure, no matter how you orient the setup, *the speed of light is the same in all directions*. It's not direction-dependent, it doesn't depend on some flux of an "ether medium".
Nothing of the variation that was expected, given planet Earth's speed of travel through the ether.

A more precise instrument wouldn't have helped. Planet Earth travels rather fast around the Sun, so the difference in speed, though subtle, should definitely have been noticeable.
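To put a number on "subtle but noticeable": under the ether model, the expected fractional difference in round-trip travel time between the two arms goes as the square of Earth's orbital speed over the speed of light:

```latex
\frac{\Delta T}{T} \sim \frac{v^2}{c^2}
  = \left(\frac{30\ \mathrm{km/s}}{300{,}000\ \mathrm{km/s}}\right)^2
  = 10^{-8}
```

A part in 10^8 is tiny, but under the ether model it's a *constant* offset, one the Michelson-Morley interferometer was built sensitive enough to see, had it existed.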

Now back to LIGO:
Since the 19th century, particle/wave duality has become known (and has been proven by slit experiments, etc.).

Special Relativity states (among other things) that the laws of physics are invariant. The speed of light in vacuum is always 1*C. No matter the reference frame, no matter if you travel, etc. (Thus, although the result of MM was surprising given the ether model, it's absolutely what's expected under Special Relativity. Zero surprise. The speed of light should be 1*C in vacuum, no matter how you orient your experiment relative to the orbit of our planet around the Sun, or the travel of the Sun around our galaxy, or the motion of the Milky Way in the expanding universe.)

General Relativity states (among other things) that gravity (and momentum, energy, etc.) is caused by space itself getting curved (not by a substance or a medium like ether; it's space-time that bends).
One over-simplified metaphor is the popular 2D/3D model of balls placed atop a rubber sheet (2D) which gets deformed (in 3D). (Except that according to string theorists, in real life the rubber sheet could have an insane number of dimensions :-P )
Another way to think about it (pure 2D): a sheet of squared paper. You draw dots on it (objects), and around the objects the grid gets distorted; the grid suddenly isn't square anymore. Other objects travelling around don't actually curve *their trajectories*; they always keep moving forward on the grid. But because the squares on the grid aren't square anymore but squished, this *forward travel*, seen from the outside, looks like a curve.
This makes an interesting point to test: light has no mass. According to Newtonian physics, it should keep travelling straight, as it doesn't feel any gravity pulling on it (the force of gravity is proportional to the product of both masses, and as light's is 0...).
But according to relativistic theory, light *should* bend. It keeps travelling forward (like anything else) but does so across space that was bent (along a "grid" whose "squares" are "squished"). So light should bend around massive objects, even though it has a mass of exactly zero.
Fast forward, and gravitational lensing DOES work. (The predicted bending of light around very massive objects like black holes has been measured. The space around a black hole measurably works as if it were a huge optical lens.)
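For completeness, the GR prediction for that bending is a simple formula: a light ray passing a mass $M$ at closest distance $b$ gets deflected by an angle

```latex
\theta = \frac{4GM}{c^2 b}
```

which is exactly *twice* what a naive Newtonian "light as tiny massive particles" calculation gives. Eddington's 1919 eclipse measurement of starlight grazing the Sun matched the GR factor, not the Newtonian one.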

Now these distortions themselves should have a speed limit. The effect of gravity isn't instant. It travels across space (like light) and has an upper speed limit (it should travel at the speed of light; 1*C is the top limit for anything that travels).
To go back to the 2D/3D metaphor: imagine using a high-speed camera and filming the rubber sheet as you drop the ball onto it. As soon as the ball touches the sheet, you should see ripples forming and travelling away until eventually the object settles into a curved well.
The same is predicted to happen in real life. Massive objects create big distortions of space. If the object changes, the distortion doesn't instantly travel with it. Instead, one "could see" ripples travelling outward from the object as the distortion slowly settles into the new conformation due to the new position of the massive object. That's gravitational waves: ripples as the distortion of space settles to adapt to the changes of ultra-massive objects.
If you have a massive enough event (like two very big black holes orbiting each other), the waves might get big enough to be measurable.

So to detect that, you *also* set up an interferometer. *BUT* this time, no matter which way you orient your experiment (or, given LIGO's huge size, no matter which direction planet Earth's rotation orients it for you over the day/night cycle), you should always find that the speed of light is 1*C in vacuum, in all directions.
Instead you just sit and wait, and try to measure things as precisely as possible. And eventually one of the ripples from something massive enough will cause a big enough distortion to be noticeable in your experiment.
That's what has now definitely been confirmed to be measured at LIGO.
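To give an idea of just how precise "as precisely as possible" is (a sketch; the strain value is the order of magnitude of a detectable event like GW150914):

```python
# Order-of-magnitude of the length change LIGO must resolve.
# A gravitational wave is a *fractional* stretching of space (the strain h),
# so the absolute arm-length change is strain * arm length.
STRAIN = 1e-21          # typical peak strain of a detectable event
ARM_LENGTH_M = 4_000    # LIGO's arms are 4 km long

delta_l = STRAIN * ARM_LENGTH_M
print(f"arm length change: {delta_l:.0e} m")  # 4e-18 m
```

That's 4×10⁻¹⁸ m, a small fraction of a proton's diameter. Hence the obsession with precision rather than with orientation.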

Again, both experiments rely on interferometers.
BUT:

- MM relies on there being a medium, called "ether".
- MM would have expected a constant N% difference in the speed of light along the direction of travel of planet Earth across the ether.
- MM doesn't need extra precision beyond some point. Either those N% are there or they're not. Using LIGO-level precision would just put extra decimals at the end of this constant.

- LIGO doesn't rely on a medium. It's space-time itself that is bent. Vacuum gets shaped funny around massive objects (as proven by gravitational lensing).
- LIGO doesn't find a difference in the speed of light in any direction. It's always 1*C in all directions.
- LIGO, on the other hand, might see noise: tiny fluctuations. These fluctuations are predicted to happen due to very large bending of space-time by very big masses moving at very high speed.
- The precision of LIGO *does* matter. The higher the precision, the tinier the fluctuation that can be detected, and thus the less you rely on a batshit-extreme massive event, like a pair of "galaxy core"-level black holes merging with each other at relativistic speed in the parking lot just outside the experiment.

I hope this makes the difference clear.

Now for the obligatory disclaimer (to paraphrase Dr. Leonard McCoy): I'm a (medical) doctor, not an astrophysicist.
My research happens at much tinier scales than that, so take my explanations as an over-simplification, with a grain of salt.

Sure, the 1880s apparatus wasn't going to detect gravity waves, but that's just a matter of sensitivity of the instrument.

Whereas, as stated above, the reverse is not true.
LIGO-level precision isn't necessary to detect a constant bias in the speed of light along the direction of travel of planet Earth across the ether medium. It would just have added more digits of precision to the measured factor (or, as it turned out, more zeros after the actual "1.00000..." factor measured in real life).

We still call an electron microscope a microscope.

Well, actually there are *scanning* electron microscopes and *transmission* electron microscopes.
We actually DO make a distinction.

(And for that one: you can trust me, I *do* work in life science.)

Comment Already problematic now (Score 4, Informative) 601

...when they finally go big time, given that the white lines currently are used to guide them on multi lane roads.

No need to wait for autonomous vehicles.
Current safety devices use them already:

- Lane Departure Warning:
The vehicle uses the contrast of white lines on dark asphalt to guess where the lane is, and can either alert the driver (e.g.: Volvo cars) or correct course (e.g.: BMW) to stay in the lane. The driver needs to explicitly switch on the turn signal to tell the car that he indeed intends to leave the lane.
With no lines, it's not easy for the car to tell what exactly the trajectory should be, whereas humans can more or less guess based on the surroundings and know where the "virtual lane" should go. (TFA's idea is that this guesswork will force drivers to be more prudent and slow down. My own feeling is that for the first 2 weeks the drivers will be watchful, then they'll get used to it, and then everything will be back to normal.)

- Forward Collision Avoidance:
Vehicles have a forward-facing radar that can detect other vehicles in front. So the car can see if the one in front brakes (when they are both in the same lane, i.e.: a traffic jam) and automatically slow down the cruise control (and in some cars, resume driving once the traffic jam clears and the car in front starts again).
Also, the car can detect oncoming vehicles or vehicles on a crash course and prevent a collision by applying the brakes.
For that to work, again, the car's computer needs to have some basic idea of where the lanes are. Otherwise, there's a risk that the car will hit the brakes even if the stopped/slower vehicle was in another lane, or the oncoming car is in the other half of the road (like in TFA's case).
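The "contrast of white lines on dark asphalt" part is conceptually simple. A toy sketch of the idea on a single grayscale image row (pure numpy; real LDWS systems are vastly more sophisticated, with perspective correction, edge detection, temporal filtering, etc.):

```python
import numpy as np

def find_lane_marks(row, threshold=200):
    """Return indices of bright (painted) pixels in one grayscale image row.

    Toy version of lane detection by contrast: white retroreflective
    paint is much brighter than asphalt, so simple thresholding finds it.
    Rain, glare, or worn paint shrink that contrast, and detection fails.
    """
    return np.flatnonzero(row > threshold)

# Dark asphalt (~50) with two painted lines at pixels 10-11 and 60-61.
row = np.full(80, 50)
row[10:12] = 230
row[60:62] = 230
print(find_lane_marks(row))  # [10 11 60 61]
```
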

It seems similar to what I believe they did in the Netherlands, where they removed any distinction between the road and the pedestrian areas, which apparently slowed down traffic.

...well, at least pedestrian and cyclist collision avoidance (more usually called "City Safety" by manufacturers, and currently slowly becoming a standard option on most vehicles in Europe) is entirely lidar-based or shape-recognition-based.
(i.e.: the car doesn't stop on its own because you're dangerously close to a pedestrian area or a bicycle lane, but because it recognised the object in front of you.)
So at least *that* idea isn't disrupting existing safety devices. But still...

I'm more a proponent of what some European cities have done, burying parts of their highway network underground.

I don't think that forcing people to think about safety themselves by removing markings will actually work in the long term.
I strongly suspect that people will slowly adapt and get used to the missing markings, and start driving as carelessly as before.

If you think about it, large swaths of roads lack markings, especially in developing countries. And those countries aren't exactly known for lower accident rates (though other reasons, like vehicles too broken to be road-safe, missing driver education, etc., are other factors in play).

Comment (TFA citation) (Score 2) 513

Now, why not have them roleplay this (with an AI, or with a partner that would accept "playing" the subservient)?

Which is also shown as an example in TFA itself:

Interestingly, some AI assistants out there do cater to this sort of thing. Robin Labs CEO Ilya Eckstein claims there is a high demand for AI assistants that are "more intimate-slash-submissive with sexual undertones".

To each his own liking. And better to molest a virtual entity in a roleplay that is designed to respond this way than to molest a real person.

Comment Or the other way around... (Score 1) 513

Or you could consider the opposite:

Some of the people you qualify as "idiots" might have weird urges. They might *want* to degrade women even if they know it's bad.
Now, why not have them roleplay this (with an AI, or with a partner who would accept "playing" the subservient)?

Fed up that girls you know / you yourself (depending on your sex) get constantly cat-called?
Hey, why not build a special cat-call bot that the cat-callers can cat-call, and leave uninterested human females alone?

Comment Intel Cores Schmores (Score 1) 136

Perhaps cores-schmores is one way to approach this? Lots of small cores with relatively slow clocks, as higher clocks tend to worsen power efficiency.

Which is also the road Intel themselves pursue with the Xeon Phi (the currently shipping descendant of their failed Larrabee GPU project).

I'm not discounting Intel's success with single-core performance per se, but I sometimes feel it's aimed at speeding up legacy applications

Yup, the drawback is that not a lot of current applications are able to run on tons of separate threads.
Not only "legacy" ones, but even applications recently produced or currently being produced.
But the architecture can have some success on servers and on some scientific workloads.

Comment Different types of material (Score 1) 165

While you're in checking mode, perhaps you could research uncountable nouns.

1. You don't add any value to the current conversation.

2. Also, you're wrong: it's correct to use the plural:

I suppose you could take the totally opposite route and choose Shapeways or iMaterialize's rubber/elastic type materials

It might surprise you, but there is clearly more than one type of material that is flexible (e.g.: flexible nylon and printable rubber, just to cite the first two off the top of my head).
As the poster is referring to different types of material, rather than a bigger quantity of material, the use of the plural is correct.

Comment Batteries (Score 1) 223

There are many times during the year when I may need to drive 300 miles round trip. If it won't make it then it's a non-starter.

A few things make this realistic:

- vehicle size: in a bigger truck, there's more room to store additional batteries.
Whereas extending a Model S's range would mean filling the front and back trunks with additional batteries (increasing weight and killing potential cargo space), on a truck you could realistically use more space for additional batteries while still having plenty of room left for cargo.

- also, electric motors are less complex and cheaper than internal combustion engines. The Model X and newer Model S have two motors. European high-speed trains don't even feature a locomotive, because *the whole train* is motorized: every single coach.
This increases either efficiency or (in the P85D Model S) peak performance.
So, to further the above point: on a "Tesla T" you could have the whole trailer motorized, with its own battery, dramatically increasing the potential range.
Tesla would "simply" need to design a flat-bed with motors and batteries, similar to its current dual-motor Model S/X platform, with the necessary points to attach a standardized shipping container for cargo.

- unlike a gas tank, batteries are swappable, and it's a rather fast procedure.
Musk has always wanted to supplement Tesla's network of free superchargers with a network of (paid) battery swaps.

In other words, plenty of opportunities for electric trucks at some point in the future.


Comment Technology vs. Implementation (Score 1) 44

>The main advantage of bitcoin and other crypto currency protocol, is that there isn't a single entity in charge of the transactions, there's not a single point that you can block/ban.

Wrong and wrong. Hash.io has enough power to control the chain. Around 10 people basically control BTC

I was speaking about the general concept of the design.
Not the peculiarities of the implementation.

The bitcoin protocol (and other cryptocoin protocols) is designed to eschew the need for a central entity (compared to other exchange protocols and platforms that need a central authority and couldn't work without one). By design, bitcoin doesn't need one, because by design it distributes the information across the whole network. And thus, by design, it CANNOT be anonymous. At best it's pseudonymous (there are no Real Identities, so a user could seem anonymous at a quick glance). At worst, one needs complex coin-mixing operations to maintain anonymity.

The fact that a huge part of the network is in the hands of a few key players, and that in practice there's an oligarchy controlling it, isn't a result of the design (the protocol is designed without the need for an authority) but of how things have evolved practically for the current implementation:
BTC relies heavily on Proof-of-Work (which at some point in time was important to attract new players and grow the network), and beyond that has focused on a specific PoW, a variant of Hashcash, that is computationally simple, scales dramatically fast (each new generation of hardware completely leaves the older one in the dust), and thus in the long term works best for those with plenty of cheap electricity and cheap access to electronics.
Of which China has plenty (they have the Three Gorges Dam, and they are the ones who make the electronics for everyone else). So of course they'll end up strongly advantaged, a few key Chinese players will hold much of the network's hashing power, and they will therefore act as the de facto leaders.
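For the curious, a Hashcash-style Proof-of-Work fits in a few lines (a simplified sketch of the concept, not the actual Bitcoin implementation, which double-SHA256-hashes an 80-byte block header against a full 256-bit difficulty target):

```python
import hashlib

def mine(data: bytes, difficulty_bits: int) -> int:
    """Find a nonce so that sha256(data + nonce) starts with
    `difficulty_bits` zero bits. The only strategy is brute force,
    which is why raw hashing throughput (cheap chips plus cheap
    electricity) is what wins in the long run."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"block header", 16)  # ~65'000 hash attempts on average
check = hashlib.sha256(b"block header" + nonce.to_bytes(8, "big")).digest()
print(int.from_bytes(check, "big") >> (256 - 16))  # 0: first 16 bits are zero
```

Note how nothing in the function refers to any central server: anyone who can hash can participate, which is exactly the "no single entity" design property, independent of who ends up owning the most hashing hardware.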

That doesn't change the fact that the design was meant to work without a central authority, distributed across a whole network (of which they simply managed to hold the most).

Or in short: The general idea behind the technology is interesting. The current bitcoin network is slowly turning into shit. But one doesn't change the other.

Comment Re:Already here (Score 1) 412

You can still better yourself and get a better paying job, just no free cable TV, smartphone, etc.

I think "smartphone" is a poor example on your list, as it is slowly becoming a critical piece of technology needed to do anything at all,
just like "a computer and a working internet connection" has been in recent times.
It's not just a piece of entertainment (like a TV), but a critical tool for getting access to maps, communication (both voice and text messages), reading mail on the go, getting information, etc.
There are lots of jobs where you basically need a smartphone to be able to work (random example: Uber driver*).

*: Though it's a bad example for this discussion, as we're currently speaking of Europe, and to work as a driver there you need a professional driver's license, special professional insurance, etc. These aren't cheap, and thus working as an Uber driver isn't an entry-level job that you can do when a smartphone and a car are your only possessions in this world.

You buy what you can afford on your income rather than living above your means.
You want a better lifestyle? Do what the rest of us do and EARN it.

The problem is: what happens to people who have always worked to earn their lifestyle, but suddenly aren't able to work for reasons beyond their control (e.g.: sickness/accident)?
They are willing to work. They have worked up until now. They just suddenly can't anymore.

Comment Salary (Score 2) 412

So once you're in that "basic income" system of yours, I guess you're stuck living in some ghetto and would have no way of getting out of it.

1. It's European countries you're speaking of. Around here, what you call "some ghetto" is a way nicer place than any of your ghettos on your side of the Atlantic pond.

2. The idea is: "this buys you minimal living accommodation in the more modern parts of a big city, or in a really small village lost in the back country; now it's up to you to earn anything more you would need to be able to access anything more that you would want".
Deciding to get a paying job is basically *THE* way of getting out of it.

Comment FFMPEG (Score 2) 133

After the FFMPEG fork is there a Linux distro that still uses FFMPEG & Mplayer?

First: not all distros switched to libav.
Some simply decided to stay with FFmpeg (e.g.: openSUSE never switched at all).
Some changed their opinion back (e.g.: Debian went back to FFmpeg after a while).
This is mostly due to libav never really being a good and active fork; it didn't manage to attract most developers to it.
(Unlike the OpenOffice.org situation, where most developers did migrate to the LibreOffice fork.)

Since then, the problematic leader of FFmpeg has decided to step down,
libav has merged back into FFmpeg,
and distros are back to using (or still using) FFmpeg again.
And the guy is now a contributor: he still writes code for FFmpeg, but he no longer has the final say on everything, fighting with everyone to impose his "one true vision(tm)".
He fully realized that his character clashed with some in the community, saw the disastrous result of the libav fork (the Linux ecosystem split across two forks, neither of which became a clear leader and each lagging behind the other on some important features), and therefore decided to step down for the greater good of the community.

source: Phoronix
