Comment Re:Other Uses (Score 2) 371

It costs me $0 to transfer dollars to relatives electronically, bank-to-bank. No fees at all.

Also, perhaps you are unaware, but Western Union has been losing business for two decades now. Many immigrants now send money home by opening a bank account and mailing a physical ATM card to their relatives, which can cost as little as $0 if the relatives withdraw the cash in dollars (which many can), and as little as 1% if a currency conversion is involved.

Even using a non-network ATM typically has a fixed charge, e.g. $3/$300 = 1% charge to get a hard currency in one's pocket.

Bitcoin cannot do better (people who think they can transfer BTC for free are ignoring both the exchange spreads and the time/volatility factor).

-Matt

Comment Re:Sell now. (Score 1) 371

This mechanism is no guarantee that the seller will actually get the price they programmed in. In a crash there is unlikely to be a buyer at or above the programmed-in price, and the seller can't force counter-parties to buy BTC (or anything) at the desired price.

In fact this mechanism exists in the real stock market and is manipulated all the time by contrived flash crashes designed to force those sell points to get hit. Since the price crashes through the sell levels so quickly there are no buyers at those prices, and the sales wind up executing at market prices, forcing the stock down even more. Even worse, enough people using this sort of mechanism can CAUSE computer-driven crashes and lose their shirts in the process.

Unlike something like Bitcoin, stocks actually do have a real intrinsic value (even if investors can't calculate it with any accuracy), which means that you can't sell a stock into oblivion and expect it to stay there very long... in the case of flash crashes, sometimes only seconds, a few minutes at most. People who use these sorts of orders can wind up with sold positions and a stock price that has recovered before they can react, and then they are screwed.

-Matt

Comment Re:Why do they call it a currency? (Score 1) 371

You've hit upon the basic mechanism but you haven't taken the next logical step. The issue is what happens when the pricing goes exponential (as it has, twice now).

What happens is that people who 'break even' almost universally go back into the market with more than just the house's money. Because the value is appreciating non-linearly, going back into the market almost always means spending far more money buying back in than you ever got out before that point.

For example, people who cashed out when it was hitting $250 a few months ago are now kicking themselves. In order to buy back in, they have to spend every last drop of their previous profits plus more to obtain a 'reasonable' speculative position at the new price.
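The buy-back squeeze described above is easy to see with concrete numbers (a toy Python illustration; all of these figures are made up, not actual BTC prices or positions):

```python
# Hypothetical numbers to illustrate the buy-back squeeze.
coins = 4                       # position size that was cashed out
exit_price = 250.0              # price at the time of the exit
cost_basis = 400.0              # what the position originally cost

profit = coins * exit_price - cost_basis   # "house money" taken off the table

new_price = 600.0               # price after the next exponential leg up
rebuy_cost = coins * new_price  # cost to restore the same position today

extra_needed = rebuy_cost - profit
print(profit, rebuy_cost, extra_needed)    # 600.0 2400.0 1800.0
```

Every last drop of the $600 profit plus another $1800 of fresh money is needed just to get back to the same coin position.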

This also points to how people's view of money shifts as the market rises, particularly people who have never had a lot of money or have never handled large sums or volumes of money (via their jobs or otherwise)... basically the definition of a 'sucker'. As long as they think they are making money, it becomes like a game of Monopoly. In the first round someone who made a few $hundred thought they did really well. In the second round those few $hundred feel like chump change... it takes a few $thousand to really feel that you did well.

Well, it doesn't stop there. It keeps scaling up. In the third round a person might feel that a few $tens-of-thousands is necessary to do well and that a measly few $thousand is chump change. The scaling continues until a person either runs out of cash or runs out of borrowing capacity... the mindset grows all the way until the whole thing comes crashing down.

That's why you wind up with people bankrupted instead of people being happy that they had fun playing with the house's money after the first round or two.

There are certainly a few smart people in the Bitcoin market who know how to extract actual profit from the trading. It mostly comes down to understanding the scaling effect and NOT being tricked by it (not scaling up the investment in each successive round by more than 1/2 the profit from the previous round). There are numerous well-known methodologies and I'm certain a small but significant percentage of Bitcoin traders are doing precisely that.
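The "never scale up by more than half the previous round's profit" discipline can be sketched with a toy simulation (illustrative Python only; the multipliers and stake are invented numbers, not market data):

```python
def run_rounds(multipliers, initial_stake):
    """Each round the position appreciates (or crashes) by a multiplier;
    the trader cashes out, banks the proceeds, and re-enters with no more
    than half of that round's profit.  Returns hard cash banked; any
    re-entered stake is still at risk when the rounds end."""
    stake = initial_stake
    banked = 0.0
    for m in multipliers:
        value = stake * m
        profit = value - stake
        reinvest = max(0.0, profit / 2)  # the discipline: at most half the profit
        banked += value - reinvest       # everything else comes out as real dollars
        stake = reinvest
    return banked

# Two doubling rounds followed by a total crash (multiplier 0.0):
print(run_rounds([2.0, 2.0, 0.0], 100.0))   # 225.0
```

Even with the final crash wiping out the last position, $225 of hard cash has been banked against a $100 initial stake: real dollars left the exchange.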

People doing the above scheme are the smart ones. They are draining hard money (real dollars) from the exchange against everyone else who is continuing to lever up. In the end, real money has been removed from the exchange while the remaining participants are basically just playing musical chairs with each other. When the eventual panic occurs there is simply not enough cash left with the exchange participants to cover everyone and the value crashes into oblivion.

This is how it will play out. It is ALWAYS how it plays out. Bitcoin is not going to be an exception.

-Matt

Comment Re:The value of bitcoin (Score 1) 371

VISA is more like 1% (or less) for anyone doing real volume. Small businesses have plenty of choices for credit card transactions... even Apple is getting into the game. And besides, if you want to compare Bitcoin transaction costs against something you should be comparing against something like PayPal, not VISA.

For bitcoin on either Mt. Gox or Coinbase, the bid/ask spread *alone* for a small transaction is ~0.5% or so, and exceeds 1% for the kinds of volumes a small business would have (and that is only recently; just a few days ago the spread was easily over 2%). That's just the bid/ask spread, never mind any other fees.
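The spread cost is easy to compute from a pair of quotes (illustrative Python; the quoted prices below are invented, not actual Gox or Coinbase data):

```python
def round_trip_cost(bid, ask):
    """Fraction of value lost to the bid/ask spread alone on an
    immediate buy-then-sell (you buy at the ask, sell at the bid)."""
    mid = (bid + ask) / 2
    return (ask - bid) / mid

# A hypothetical ~0.5% spread around a $600 midpoint:
print(round(round_trip_cost(598.50, 601.50), 4))   # 0.005
```

That half a percent is paid before any exchange fee, transfer fee, or price movement enters the picture.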

Anyone who thinks they are doing bitcoin transactions for 'free' is smoking something.

What we have here are a whole lot of people so fixated on the hype they don't realize just how much they are getting milked.

-Matt

Comment Re:Why do they call it a currency? (Score 1) 371

Maybe several million dollars when purchased new, but pretty much all computing equipment depreciates to near worthlessness over a few years, not to mention the cost of power. So if you are trying to argue that bitcoin has something physical backing it then it kinda falls on its face.

Nobody in their right mind can argue that Bitcoin is a usable currency. Currencies have to be relatively stable in the short-term. Bitcoin obviously isn't even remotely stable. Not only is it not stable, but it has no mechanisms at all to make it stable.

It's the classic (and obvious) hoarding / tulip-mania mechanic, and I think most investors know this, but greed still rules until the day the whole thing comes crashing down around people's ears.

-Matt

Comment Re:SMP contention basically gone from critical pat (Score 1) 48

I've read the space map work but there are several issues that make them impractical for H2. The main one is that H2 snapshots are read-write (ZFS snapshots are read-only). Secondarily, my experience from H1 is that any complicated ref-counting or space accounting can lead to hidden corruption. Even if we assumed that the media data validation mechanism (a CRC or cryptographic hash) detects all media issues, there is still the problem of software complexity introducing bugs at a higher level.

So H2 is going to use a relatively simple freemap mechanism. In fact, it is using the same dynamic radix tree for the freemap as it does for the file and directory topology. It will still roll up tiny allocations into a somewhat chunkier granularity to reduce freemap overhead (similar to what H1 did, but at a finer 1KB grain vs H1's 2MB granularity), and it will roll up space availability hints and propagate them up the tree. And it is type-aware, so it can do things like collect inode allocations together for burstability. It isn't trivialized. But it does not attempt to implement ref-counting or any complex sub-topology to keep track of which blocks are owned and by how many owners.

The actual block freeing is going to be done in the background rather than in the 'rm' path (for a COW filesystem with snapshots which are actually used regularly, there's no point trying to free blocks in real time). This will allow H2 to implement several major features such as de-duplication, writable snapshots, and so forth with far less software complexity. So the complexity of the block allocation and any block duplication becomes trivialized while the complexity of the block freeing code increases. But it's an incrementally solvable problem. I can start with a brute-force memory-bounded scan and slowly add optimizations to recognize duplication in the topology, add topology hints to improve the efficiency of the block freeing scan, and so forth.
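The roll-up-and-propagate idea behind the freemap hints can be sketched in a few lines (an illustrative toy in Python, not H2's actual C code; the fanout and chunk counts here are made up):

```python
# Toy hierarchical freemap: each internal node caches how many free
# chunks exist below it, so allocation can skip entire full subtrees
# without descending into them.
class FreemapNode:
    def __init__(self, nchunks, fanout=4):
        if nchunks <= fanout:
            self.children = None
            self.free = [True] * nchunks      # leaf: one flag per chunk
            self.hint = nchunks
        else:
            per = (nchunks + fanout - 1) // fanout
            self.children = [FreemapNode(min(per, nchunks - i * per))
                             for i in range(fanout) if nchunks - i * per > 0]
            self.hint = sum(c.hint for c in self.children)

    def alloc(self):
        """Return the index of an allocated chunk, or None if full."""
        if self.hint == 0:
            return None                       # the hint prunes this subtree
        self.hint -= 1
        if self.children is None:
            i = self.free.index(True)
            self.free[i] = False
            return i
        offset = 0
        for c in self.children:
            if c.hint:
                return offset + c.alloc()
            offset += c.leaf_count()
        return None  # unreachable if hints are consistent

    def leaf_count(self):
        if self.children is None:
            return len(self.free)
        return sum(c.leaf_count() for c in self.children)

fm = FreemapNode(16)
allocated = [fm.alloc() for _ in range(16)]
print(sorted(allocated), fm.alloc())   # all 16 chunk indices, then None
```

The point of the cached hints is that an allocator never has to scan a subtree whose roll-up says it is full; the same shape works whether the tree indexes a bitmap or chunkier roll-up granules.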

-Matt

Comment SMP contention basically gone from critical paths (Score 5, Informative) 48

This release removes almost all of the remaining SMP contention from both critical paths and most common paths. The work continued past the release in the master branch (some additional things which were too complex to test in time for the release). For all intents and purposes the master branch no longer has any SMP contention for anything other than modifying filesystem operations (such as concurrent writes to the same file). And even those sorts of operations are mostly contention-free due to the buffer cache and namecache layers.

Generally speaking what this means is that for smaller 8-core systems what contention there was mostly disappeared one or two releases ago, but larger (e.g. 48-core) systems still had significant contention when many cores were heavily resource-shared. This release and the work now in the master branch basically removes the remaining contention on the larger multi-core systems, greatly improving their scaling and efficiency.

A full bulk build on our 48-core Opteron box took several days a year ago. Today it takes only 14.5 hours to build the almost 21,000 packages in the FreeBSD ports tree. These weren't minor improvements.

Where it matters most is with heavily shared resources: for example, a bulk build on a large multi-core system that is constantly fork/exec'ing dozens of the same processes concurrently (/bin/sh, make, cc1[plus], and so on, a common scenario for any bulk building system), or heavy concurrent access to shared cached filesystem data (a very common scenario for web servers). Under these conditions there can be hundreds of thousands of path lookups per second and over a million VM faults per second. Even a single exclusive lock in these paths can destroy performance on systems with more than 8 cores. Both the simpler case, where a program such as /bin/sh or cc1 is concurrently fork/exec'd thousands to tens of thousands of times per second, and the more complex case, where sub-shells are used for parallelism (fork without exec), no longer contend at all.

Other paths also had to be cleaned up. Process forking requires significant pid-handling interactions to allocate PIDs concurrently, and execs pretty much require that locks be fine-grained all the way down to the page level (and then shared at the page level) to handle the concurrent VM faults. The process table, buffer cache, and several other major subsystems were rewritten to convert global tables into effectively per-cpu tables. One lock would get 'fixed' and reveal three others that still needed work. Eventually everything was fixed.

Similarly, network paths have been optimized to the point where a server configuration can process hundreds of thousands of TCP connections per second and we can get full utilization of 10GigE NICs.

And filesystem paths have been optimized greatly as well, though we'll have to wait for HAMMER2 to finish that work for modifying filesystem calls to reap the real rewards from that.

There are still a few network paths, primarily related to packet filtering (PF), that are serialized and need to be rewritten, but those and the next-gen filesystem are the only big ticket items left in the entire system insofar as SMP goes.

Well, the last problem, at least until we tackle the next big issue. There's still cache coherency bus traffic, which occurs even when e.g. a shared lock is non-contended. The code-base is now at the point where we could probably drop in the new Intel transactional instructions and prefixes and get real gains (again, only applicable to multi-chip/multi-core systems, not simple 8-thread systems). It should be possible to bump fork/exec and VM fault performance on shared resources from their current very high levels right on through the stratosphere and into deep space. Maybe I'll make a GSOC project out of it next year.

The filesystem work on HAMMER2 (the filesystem successor to HAMMER1) continues to progress but it wasn't ready for even an early alpha release this release. The core media formats are stable but the freemap and the higher level abstraction layers still have a lot of work ahead of them.

In terms of performance... well, someone will have to re-run benchmarks instead of just re-posting old stuff from 5 years ago. Considering the SMP work I'd expect DFly to top out on most tests (but there's still always the issue of benchmark testers just blindly running things and not actually understanding the results they post). Database performance with postgresql still requires some work for large system configurations due to the pmap replication (postgres now fork()s and uses mmap() instead of sysv-shm, e.g. if you used a large 100GB+ shared memory cache configuration for the test). We have a sysctl to enable page table sharing across discrete fork()s but it isn't stable yet... with it, though, we get postgres performance on par with the best linux results in large system configurations. So there are still a few degenerate cases here and there that aren't so much related to SMP as they are to resource memory use. But not much is left even there.

Honestly, Slashdot isn't the right place to post BSD stuff anymore. It's too full of immature posts and uninformed nonsense.

-Matt

Comment Pretty obvious evolution (Score 1) 110

With Intel's 14nm so close, and 10nm production in another year or so, they need to use all that chip area for something that doesn't necessarily generate a ton of heat. RAM is the perfect thing. Not only is the power consumption relatively disconnected from the size and density of the cache, but not having to go off-chip for a majority of memory operations means that the external dynamic ram can probably go into power savings mode for most of its life, reducing the overall power consumption of the device.

-Matt

Comment Re:Don't really see the market (Score 1) 240

Happens to me all the time on ALL my android devices. Totally non-deterministic. Sometimes my Nexus 7 (2012) can sit with the display turned off for several days without losing too much battery, other times I turn it off (display off) and it discharges overnight and is dead in the morning. Happens to both my android phones too. I'm not forgetting to turn off Pandora either.

Sometimes my phone is 'hot' just sitting in my pocket with the screen off. Sometimes it is 'cold'. When it's hot it's obviously eating power doing something, despite the display being turned off. It's totally random when this happens... usually the longer I leave the phone between reboots the more likely it is to get into that state. VERY annoying. NO services are running (nothing in the service bar at the top). Doesn't matter. Still random hot or cold.

Doesn't happen to any of my Apple devices. If I leave an Apple device turned off (sleep mode that is), it retains its battery level and drains at a very slow and highly deterministic rate. If I turn off my iPhone (and I'm not playing music on it), it goes to sleep. Always.

Honestly I don't understand why Google thinks this is ok. It's an obvious competitive advantage for Apple when its devices don't kill their battery and Android devices do.

-Matt

Comment Re:won't help for Samsung note 2 (Score 1) 240

No no, resistance has nothing to do with it. The USB charger's output is current limited, which means that if the device tries to pull more current than the charger is programmed for, the charger's output voltage will drop. That is, the output will be limited to (for example) 500mA or 1A or 1.5A or whatever and if the device tries to pull more the output will STILL be limited to that value and the voltage will drop from 5V to compensate.

Smart devices can usually ignore the USB resistor-based programming pins (which run to different standards) and simply pull the charger's output down from 5V to 4V or something like that and measure the current (up to a reasonable limit managed by the device, of course), and figure out how much they can pull from the charger based on that.

Stupid devices will either not probe what the charger can actually do or will make assumptions based on the resistor-programmed pins (the one that has multiple standards) and not be able to handle the case where they guess wrong and crow-bar the charger's output by trying to pull too much current.

None of this has anything to do with the resistance of the wires inside or outside the charger.
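The constant-voltage/constant-current behavior described above can be modeled in a few lines (a toy Python model; the 5V nominal and 1A limit are example values, and a real charger's limiting behavior varies by design):

```python
def charger_output(load_resistance_ohms, v_nominal=5.0, i_limit=1.0):
    """Toy model of a current-limited USB charger: constant voltage
    until the load would draw more than i_limit, after which the
    voltage sags so the current stays pinned at the limit."""
    i_wanted = v_nominal / load_resistance_ohms
    if i_wanted <= i_limit:
        return v_nominal, i_wanted               # normal constant-voltage mode
    return i_limit * load_resistance_ohms, i_limit   # current-limit mode

print(charger_output(10.0))   # light load: (5.0, 0.5) -- full 5V
print(charger_output(2.0))    # heavy load: (2.0, 1.0) -- voltage sags
```

A smart device probing the charger is essentially walking down that second branch deliberately: pull more current, watch how far the voltage sags, and infer the limit.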

-Matt

Comment Re:You should care more about safety... (Score 1) 240

Two-prong chargers just synthesize a ground. Well, there are two grounds actually... one for the AC side and one for the output side. On the AC side it's the average of the AC sine wave between the two prongs, taken off the middle of the rectifier. The charger's output is traditionally isolated with a transformer (a small transformer on the output of the switching power supply instead of a big one on the input), which isolates the output and also handles the flyback function for the switching power supply. The feedback circuit is traditionally isolated with an opto-isolator.

So the output side's ground is separate and isolated from the input side's synthesized ground. Which means that the output's ground will float to whatever it is connected to (which is what you want it to do).

The difference between the input side's ground (which is completely internal to the charger) and the output side's ground (which is exposed via the USB connector) is called the common-mode voltage differential. Obviously there are limitations since too much of a differential between the two grounds will cause arcing crossing the physical gap and/or transformer insulation. Most circuits are designed with insulation and gaps large enough to handle at least 600V of common mode.

(Common mode can be experienced directly by touching the 'ground' of a coax cable from your cable provider or touching phone wires whose 'ground' is the C.O. several miles away. The tingling you are feeling is anywhere from 60V to ~300V of common mode difference between the ground where the signal was generated and the ground your body has floated to... typically closer to the ground that your home is anchored to).

-Matt

Comment Re:Apple vs. Other Devices (Score 1) 240

What's happening there is that the phone is crow-baring the charger down to under 2V and then misinterpreting its own voltage sensor (on the phone), thinking that the charger is connecting and disconnecting. It might also be tripping the short-circuit detector on the charger and causing it to cycle.

That's a problem with the phone. No phone should crow-bar the 5V the USB outlet provides, particularly not one that outputs >1A. (This is using your description of the charger being hot). All USB power sources are current-limited and the devices being charged have to deal with that properly.

-Matt

Comment Re:Apple vs. Other Devices (Score 1) 240

I have a multitude of Apple and Android devices and a multitude of Apple and non-Apple usb chargers and power sources, and also have a little non-Apple dongle charger in the car in the front.

It all works just fine. Apple might have weird charging cables but they still have usb on one end and they've worked with the dozen or so different usb charging sources I've got in the house and in the car.

Most USB chargers only put out 2-5W. It's gonna take a while to charge up Apple's big batteries with one of those, but it will still work just fine.

-Matt
