
Comment Re:Google should just block Australia (Score 5, Interesting) 24

The fun part comes when Google starts obeying every little wishlist item, and rightsholders' content stops getting as much visibility--and thus as many sales.

Copyright is valid. Making a car requires an enormous capital investment in equipment plus a ton of labor per vehicle; designing the car costs millions of dollars in engineering, and producing it costs far more. The Chevrolet Volt sold 21,000 units in 2012 and 25,000 in 2013 at MSRPs around $40,000; that's $840 million and $1 billion in revenue. At gross margins below 20%, that's over $672 million and $800 million in production costs. By contrast, making music requires large amounts of labor to compose, perform, record, and master--but making copies of it costs pennies per thousand, with a capital start-up cost of a $400 PC you probably already own.
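
Spelling out the arithmetic behind those figures:

    21,000 units x $40,000 = $840 million revenue (2012)
    25,000 units x $40,000 = $1,000 million revenue (2013)
    production cost at <20% gross margin = >80% of revenue:
    0.80 x $840M = $672M (2012); 0.80 x $1,000M = $800M (2013)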

Given the above, copyright obviously requires protection. The impact of partially compromising copyright, though, is non-obvious even to many marketing executives: technically-infringing acts like playing your radio loud enough for passers-by to hear can drive sales of a song, even though they also undercut your ability to charge for performance in that context. Focus too hard on protecting the rights and you lose the benefits those rights confer--as if you protected the right to remain silent by prosecuting anyone who speaks without first raising his hand.

(By "rights" I of course mean "protections provided by laws which may be changed to expand, diminish, or extinguish their scope after appropriate legal process".)

Comment Re:OK, cool... (Score 1) 119

Panels of the same form factor with higher-efficiency cells install in exactly the same way. 255W panels install the same way as 180W panels and 355W panels (all of the same size). They rack up onto the same hardware.

Oddly enough, micro-inverter cost increases for panels above 255W; modern power-optimizing inverters cost about the same as micro-inverters, but fail less often and regulate power more efficiently. Installation is roughly the same for string inverters, micro-inverters, and power optimizers: connect each panel to the next, then run a home-run wire from each end of the array. String inverters plug a wire into each end; micro-inverters and power optimizers use the same wire connectors, just with a little box dangling off each panel.

The three options have some rough differences.

String inverters are cheap: you might pay $500 for a 5kW array. The panels are wired in series, so an underperforming module drags the whole string down and strains the other panels; shade on one cell can cut the entire array's output by 50%.

Micro-inverters are more expensive: you might pay $1,200 for a 5kW array. Each module connects to the next through its own micro-inverter, which converts the panel's low-voltage DC output directly into 3-wire 240VAC, giving you a 3-wire service feed. Micro-inverters have a relatively high failure rate.

Power optimizers cost about what micro-inverters cost: you might pay $1,300 for a 5kW array. They wire up the same way as micro-inverters, but pass high-voltage DC (around 600VDC) down to what amounts to a string inverter. The optimizers themselves work much like a modern lithium-cell battery management system, drawing more power from higher-output panels and less from lower-output ones without letting the panels interact and stress each other. Power optimizers are simpler than micro-inverters and dissipate less heat, so they lose less energy and fail less often.
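
Normalizing those rough prices to the 5kW (5,000W) array makes the comparison clearer:

    string inverter:   $500 / 5,000W   = $0.10/W
    micro-inverters:   $1,200 / 5,000W = $0.24/W
    power optimizers:  $1,300 / 5,000W = $0.26/W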

So the physical aspect of installing any solar array is the same. If you use high-efficiency cells, nothing changes. If you use high-output panels--larger panels, or similarly-sized panels with high-efficiency cells--you have to use either a cheap string inverter or a power optimizer, since micro-inverter prices climb above 255W as noted. Micro-inverters are probably the worst choice in any installation: for a single panel or a small array, use a string inverter; for a multi-panel array, use power optimizers.

Comment Re:OK, cool... (Score 1) 119

Shipping the cells takes more energy because they're larger and heavier, and it takes more shipping hardware and energy-infrastructure maintenance. They need more handling to install, wire, and keep free of the energy-robbing layer of dust. Manufacturing costs rise for an array of the same output, so any decay--oxidation, delamination, imbalanced arrays and overvoltage, or plain old damage--costs more to make good, as do the shipping and handling, again.

If I could get a single 2 meter by 1 meter panel that outputs 6kW, I could have that slapped up on my roof for $500, and have a cheap $450 string inverter installed for $1,000. As it stands, I can get a 6kW array with a $1,800 power-optimizing inverter (required only for multiple-panel installations) for $5,800; I can also pay about $5,000 for the full installation labor, plus a good $400-$600 to ship the material in the first place.
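
Totaled from the figures above:

    hypothetical one-panel 6kW: $500 (panel install) + $1,000 (inverter, installed) = $1,500
    actual multi-panel 6kW:     $5,800 (hardware) + $5,000 (labor) + $400-$600 (shipping) = $11,200-$11,400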

Comment Re:cheaper to keep 'er (Score 1) 140

I simply don't want the instability of future rate changes--granted, that can happen anyway, but the long-term, non-promotional price of a bundle is typically higher than that of an individual service. They might give me TV free for 12 months--then what? I don't check my bill, I pay $30 extra one month, and I don't even watch TV!

Comment Re:If it ain't broke... (Score 1) 253

Removing features is what made Firefox great. Firefox became a well-known piece of utter shit when it added feature after feature and bloated into an enormous, complicated hulk, its useful options lost among hundreds of others. Then alternative browsers came along with slimmed-down feature sets, and people moved.

Chrome is ditching menu items few people use. It might not die of featuritis.

Comment Re:A John Deer bonfire... (Score 2) 489

Everyone seems focused on the farmers and their poor little butthurt selves.

What about the downstream cost? These failures reduce productivity and thus increase the cost of food. They draw money to John Deere for no value-add (rent-seeking). They reduce the total number of products your money can buy (wealth), and reduce the number of people (jobs) receiving the money spent for a given investment of labor-hours (wages).

Requiring a tech to stop by just to sign off on a hardware change that already works--and charging $500 for the visit--reduces wealth across the entire economy at every income level and costs (primarily lower-income) jobs. This is an attack on all Americans and on every recipient of American agricultural exports.

Comment Zram? (Score 1) 64

Are we getting zram configured with swap device size = 4x RAM, mem_limit = 50% RAM, vm.swappiness = 100? That will give an effective doubling of RAM for approximately no performance hit.
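
For concreteness, a minimal sketch of that configuration in C--assuming a hypothetical 4GiB-RAM device with the zram module already loaded (zram0 present), so the byte counts would scale with actual RAM; the sysfs/procfs knobs, mkswap(8), and swapon(2) are the standard Linux interfaces:

    /* Minimal sketch, not a drop-in tool. Run as root. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/swap.h>

    static void write_knob(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); exit(EXIT_FAILURE); }
        fprintf(f, "%s\n", value);
        fclose(f);
    }

    int main(void)
    {
        /* swap device size = 4x RAM: 16GiB of uncompressed swap space */
        write_knob("/sys/block/zram0/disksize", "17179869184");
        /* mem_limit = 50% RAM: cap compressed storage at 2GiB */
        write_knob("/sys/block/zram0/mem_limit", "2147483648");

        /* mkswap writes the swap signature; then enable the device */
        if (system("mkswap /dev/zram0") != 0)
            return EXIT_FAILURE;
        if (swapon("/dev/zram0", 0) != 0) {
            perror("swapon");
            return EXIT_FAILURE;
        }

        /* vm.swappiness = 100: reclaim anonymous pages (compress into
         * zram) as eagerly as page-cache pages */
        write_knob("/proc/sys/vm/swappiness", "100");
        return EXIT_SUCCESS;
    }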

I've not found the performance asymptote yet. The crude theoretical limit is exchanging 100% of working RAM for 3x the compressed space (pages typically compress at an average of 3:1, though 4:1 happens sometimes). Page decompression becomes more frequent as you shrink the working RAM (the part not holding compressed data). A page decompression takes approximately twice as long as a worst-case CPU cache miss (full row precharge, RAS, and CAS), so infrequent decompression is cheap, and can be effectively free.

Generally, a block of code executes multiple instructions per byte of its immediate working set: at roughly 26 cycles per byte (the figure used below), a 16KiB chunk eats about 400,000 CPU cycles to decompress, and the application might then spend 4,000,000 cycles operating on that area. The application touches part of its hot working set as well, so the work on that page spans even more cycles. Depending on how fast the program churns through its working set, you can end up spending approximately 0% of CPU on swapping into zram; or, if the working set is tiny and swapping frequent, you can spend much of your time swapping in and out.

Prefetching and caching further complicate this. The Linux kernel can background-prefetch swap by reading sequential pages ahead, or by taking explicit hints from the application via madvise(): MADV_WILLNEED (get the page ready), MADV_SEQUENTIAL (aggressive read-ahead), or MADV_RANDOM (read-ahead not useful). If the system uses 99.7% of the CPU, there are 3ms per 1-second interval to decompress pages into a cache (without freeing them from zram) and to compress pages into zram (without freeing them from RAM--just mark them not-dirty). That's 4.8 million cycles per 1.6GHz core; on a 4-core phone, that's plenty to compress many pages per second, and decompression is faster still. Efficient prediction can reduce the real-time cost of swapping into zram to effectively zero.
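
For illustration, a minimal sketch of those hints in C (the buffer and its length are hypothetical; madvise(2) and the MADV_* flags are the standard Linux interface):

    #include <stddef.h>
    #include <sys/mman.h>

    /* Pages we'll need soon: ask the kernel to fault them in (decompress
     * from zram) in the background, before the first access stalls us. */
    void hint_willneed(void *buf, size_t len)
    {
        madvise(buf, len, MADV_WILLNEED);
    }

    /* Sequential scan: aggressive read-ahead pays off, and pages behind
     * the scan cursor can be reclaimed sooner. */
    void hint_sequential(void *buf, size_t len)
    {
        madvise(buf, len, MADV_SEQUENTIAL);
    }

    /* Random access: read-ahead would decompress pages nobody touches,
     * so tell the kernel not to bother. */
    void hint_random(void *buf, size_t len)
    {
        madvise(buf, len, MADV_RANDOM);
    }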

As I said: that's if you have the CPU time unused. At 3ms per 1-second interval, as above, and 26 cycles per byte, a 4KiB page takes 106,496 cycles to decompress. With four cores loaded to 99.7%, that's 45 pages per core, or 180 pages--720KiB per second--you can afford to swap in, with no room left for swapping out. If you're not churning through that much cold data per second, zram costs power, not performance. The amount of data you can churn through is limited by nominal CPU usage anyway, and real workloads come nowhere near the theoretical limit (effectively a MOV %eax,$addr loop with $addr+=4096 each iteration), so low-CPU-usage operation tends to cost less power.
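
Laying that budget out explicitly:

    idle budget per core: 0.003 s x 1.6 GHz = 4,800,000 cycles/s
    decompression cost:   4,096 B x 26 cycles/B = 106,496 cycles/page
    pages per core:       4,800,000 / 106,496 ~= 45 pages/s
    four cores:           4 x 45 = 180 pages/s ~= 720KiB/s swapped in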

I didn't go looking for the performance asymptote because the marginal return between 2x and 3x effective RAM isn't worth bothering with. With half the working set swapped, the amount of swapping is negligible; even when you start digging deep into swap, data moves in and out of zram slowly enough to cause no visible performance impact. The real-time impact vanishes as soon as the CPU goes more idle: a 30ms stall every 10 seconds gives you time to catch up on 3ms of slowdown and break even. That makes this a fine place to simply settle for now, without worrying about workload variability the way you'd have to if you pushed to the limit based on large-scale analysis of common workloads.
