
Comment The real question (Score 1) 211

Is... can I run DragonFly on it? Or is the BIOS locked to Chrome? If this baby has the normal write-protect screw / developer-mode BIOS features that let us run whatever we want on it instead of being locked to Chrome, then great!

We've had great success running DragonFly on the older Acer C720[P] (which has a mobile Haswell CPU). So if one of these new HP Skylake-m babies lets me cut into the dance, I'll give it a big thumbs up.

I'll have to buy one to find out, I guess.

-Matt

Comment Re:iPhone 5s with dying battery (Score 1) 183

I will impart a warning here. I have a friend who has repeatedly tried to use non-Apple batteries in his Apple mobile devices, and it's been a dismal failure for him. Spend the money to have Apple replace your battery; you will be happier in the end.

Apple laptops... well, the ones with replaceable batteries are a different story. Going third-party there works fairly well. For the ones that don't, again, spend the money to have Apple do it for you. You will be happier in the end.

Another recommendation... when possible, leave your devices plugged in. This lets the Apple battery management software properly load-cycle the battery and will significantly increase battery life. For that very reason I usually bring along an external battery and keep my phone plugged in whenever convenient.

My iPad 1's battery is still in great shape (now going on 6 years old), though the iPad 1 itself doesn't have enough memory to run much any more. The same goes for my iPad 2. 512MB of RAM isn't enough to run apps smoothly any more on the iPad 2 (and the iPad 1 can barely run anything), but the batteries are in great shape because I leave the devices plugged in as much as possible.

-Matt

Comment The key event (Score 1) 183

Perhaps it isn't one key event but a combination: solid-state storage removing the hard-drive failures that often drove people to upgrade, CPU performance topping out, and RAM well beyond anything most programs need have all conspired to give us desktops, laptops, and mobile devices that basically no longer get 'old'. Not to mention that power consumption is low enough now that PSUs just aren't burning out like they used to :-).

Something strange happened in the last year or two. I buy computers all the time for DragonFly testing, so I have a pile of machines of all different kinds, including a bunch of BRIX form-factor units. I always stuff nominal sweet-spot memory and storage into them, because they get repurposed or farmed out to friends all the time to make room for new hardware.

The strange thing that happened... it became convenient to just throw 8-16GB of RAM into all of these things, even the tiny little BRIX. And even the little BRIX can dual-head two 4K displays and easily fit a 2.5" SSD (and so can hold quite a bit of storage). None of these boxes has any moving components except a fan or two. They don't fail if I put them on a shelf for a year.

Up until about 2 years ago I was regularly throwing away my oldest hardware, including the bulky cases (which had to be large enough to hold a CD and/or DVD drive and several 3.5" drives).

But the remainder of that really old hardware petered out last year. Now there's no reason at all to throw away my 'new' old hardware... it is still useful enough that I can give it away or repurpose some of its components. The cases are all small, so I just reuse those if I can't find any use for the mobo. I reuse the SSDs (I never reused old hard drives). I reuse the PSUs (if any). There's no graphics card to replace since it's built into the CPU.

In fact, the only thing I haven't been able to recycle in the new old machines is the DIMMs, due to continuous technology changes, but those just stay with the original motherboard.

In our colocation for DragonFly, our blade server (12 Haswell blades in 2U) has handled all of our needs and, other than slowly replacing the remaining HDDs with SSDs, will probably continue to do so for the next 10 years. Or longer. It will be interesting to see what the failure mode is for the hardware, because it will probably be the first piece of hardware I own that stays fully active and relevant until the blades actually physically fail.

I love the technology but I think there's only more pain to come for Intel.

-Matt

Comment p.s. proper way to install a smart t.v. (Score 1) 81

... is to disable all of its smarts. If you value your privacy, don't let it connect to the internet!

I'll take a regular 4K monitor without any bells and whistles, thank you very much. And if it has a microphone or camera built into it, that will be the first thing I stick my soldering iron into before I begin using it for real. Gouge out its eyes and ears, and we're good.

-Matt

Comment Re:May not continue for the long-term (Score 1) 314

No, actually, very little energy is lost in transmission. Nationally it's something like 6% in the U.S. currently.

We need very-high-voltage transmission lines (e.g. 750kV and higher) with more capacity if we want to link grids together and move electricity all the way across the country with reasonably low losses, but that certainly is not needed when moving electricity across a few states... and that is all that's really needed when it comes to linking up renewable sources of energy.
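Roughly why the voltage matters so much: for the same delivered power, the current drops as the voltage rises, and resistive losses go as the square of the current. Here's a toy I^2*R calculation with a made-up line resistance, just to show the 1/V^2 scaling (none of these are real line figures):

    /*
     * Toy I^2*R loss comparison: same delivered power, same (made-up)
     * line resistance, only the transmission voltage changes.  Real
     * lines involve far more than this; the point is the 1/V^2 scaling.
     */
    #include <stdio.h>

    int main(void)
    {
        double power_w = 1.0e9;                    /* 1 GW delivered (assumed) */
        double resist  = 10.0;                     /* ohms of line (assumed)   */
        double volts[] = { 345e3, 500e3, 750e3 };

        for (int i = 0; i < 3; ++i) {
            double amps = power_w / volts[i];      /* I = P / V      */
            double loss = amps * amps * resist;    /* P_loss = I^2 R */
            printf("%3.0f kV: ~%.0f MW lost (%.1f%%)\n",
                   volts[i] / 1e3, loss / 1e6, 100.0 * loss / power_w);
        }
        return 0;
    }

Doubling the voltage cuts the resistive loss to a quarter, which is why long-haul lines keep climbing in voltage.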

Not that it wouldn't be nice to have new high-capacity cross-country lines. I'd love to see them happen. But not having them isn't a show-stopper.

-Matt

Comment Re:Solar is not cheaper, (Score 1) 314

Not exactly. In both India and China, coal is producing so much pollution that people's life spans are significantly affected. Basically they tried to build a Western-style, full-on coal system and almost choked to death. Coal is still widely used, but these countries now recognize the environmental disaster you get when you use coal without pollution controls. And China, even with some pollution controls, is hitting the limit as well.

In the U.S., coal's demise is mostly driven by the price of natural gas and by the recognition of coal's real cost: not just the environmental cost, but also the deferred obligations that politicians have let the coal companies get away with for decades. Now, with the coal companies going bankrupt, the states that formulated those lax rules are paying the price. That is adding nails to coal's coffin.

Just as with nuclear, the decommissioning and cleanup costs revealed when the coal mines and plants are actually shut down are running into the billions of dollars... costs that the coal companies conveniently did not have to account for in the years they were operating. Kinda like spent nuclear fuel, it just builds up over time when the rules allow you to ignore it.

-Matt

Comment Re:Solar is fine, so long as the price doesn't ris (Score 1) 314

Much of India doesn't even *have* 24/7 power, so solar power is actually a pretty damn good fit. Solar is definitely cheaper in this situation. A minimal battery (just enough to stabilize the load and handle the occasional occlusion by clouds), the panels, and the inverter, and you are done. Night-time LED lighting can be battery powered.

That's a big deal in India, which has virtually no reliable national power infrastructure.

-Matt

Comment Re:3 years old CPU (Score 1) 179

It's not actually that bad, period. All Intel consumer CPUs since Haswell have roughly the same CPU performance; only power consumption and GPU performance have steadily improved since then. So frankly it isn't much of an upgrade unless your old MacBook is going dead on you from battery wear.

If you still have an old MacBook with an HDD in it, an easy mini-upgrade is to replace that HDD with an SSD. Poof, it will feel like new even with relatively limited RAM. I did the ol' switcheroo on my wife's old 5-year-old MacBook Pro, mostly for me, but the thing worked so well after the change that my wife decided to keep it. Oops.

-Matt

Comment Re:Intel sucks at SSDs (Score 1) 62

It's almost certainly Intel's fault. Some of their SSDs do not follow the SATA spec properly on reset, which can cause the initial probe to fail with a timeout. If you probe a second time it will succeed. I actually had to add a second probe to DragonFlyBSD's AHCI driver to work around the problem. It doesn't seem to be related to startup time; even with a long delay I'll see first-probe failures on Intel SSDs in various boxes.
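The workaround itself is trivial. It looks roughly like this; a simplified sketch, not the actual DragonFly driver code, and the function names and timeout value are stand-ins:

    /*
     * Sketch of the double-probe idea -- NOT the real DragonFly AHCI
     * code.  ahci_port_probe(), ahci_port_reset() and the timeout are
     * hypothetical stand-ins; the point is simply "if the first probe
     * times out, reset the port and probe once more".
     */
    #include <errno.h>

    struct ahci_port;                                          /* opaque handle */
    int  ahci_port_probe(struct ahci_port *, int timeout_ms);  /* stand-in */
    void ahci_port_reset(struct ahci_port *);                  /* stand-in */

    #define AHCI_PROBE_TIMEOUT_MS  1000
    #define AHCI_PROBE_ATTEMPTS    2

    static int
    ahci_probe_with_retry(struct ahci_port *ap)
    {
        int error = ENXIO;

        for (int attempt = 0; attempt < AHCI_PROBE_ATTEMPTS; ++attempt) {
            error = ahci_port_probe(ap, AHCI_PROBE_TIMEOUT_MS);
            if (error == 0)
                break;                   /* device answered */
            ahci_port_reset(ap);         /* timed out: reset and retry */
        }
        return error;
    }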

Strangely enough the failures occur with Intel AHCI chipsets + Intel SSDs, but do not occur with AMD AHCI chipsets + Intel SSDs.

Just one of those things. Intel firmware generally sucks across all of their chips. They write specs to make their hardware designs easier and don't really give a shit how much it complicates the drivers people have to write for the hardware. That said, their SSD firmware, once you can actually probe the SSD, seems to work just fine.

-Matt

Comment SSD pile growing, HDD pile shrinking. (Score 1) 62

Calculate the cost of the replacement cycle too and suddenly SSDs look a lot cheaper. It's just that most people can't think beyond the end of their noses, so if the up-front cost looks expensive they stop right there.

I bought my last HDDs last year: two 4TB 'archival' drives for backups. My existing pile of new 1TB and 2TB HDDs (I have around a dozen 3.5" and half a dozen 2.5" left) will be dribbled out as needed, but I won't be buying any new HDDs from now on. In fact, I couldn't even foist off some of my extra 2TB 3.5" HDDs onto friends last weekend. They weren't interested! Bryce happily took one of my original 40G Intel SSDs (2.5") but had no interest whatsoever in a brand-spanking-new 2TB WDC (3.5").

Last year I scrapped the 3.5" form factor (the two backup drives being the only exceptions). All new systems have only 2.5" hot-swap slots now. And until recently I had a growing pile of 2.5" HDDs to maintain those systems.

But now my pile of 2.5" SSDs continues to grow while my pile of 3.5" and 2.5" HDDs has begun to shrink. The strange thing about my pile of 2.5" SSDs... I haven't had to throw a single SSD away since I started it! That pile began with the old 40G Intel 2.5" SSDs that really started the industry ramp. All of the originals that I slowly replaced with higher-capacity SSDs are still in the pile and still in perfect working order! And I actually use them; they are perfect for small test systems.

So it's a bit of a strange situation. HDDs would die or get read errors, and I'd wipe them and throw them away. I never recycled old HDDs very much (they become unreliable when you mix cold and hot cycling or shelve them). But SSDs are a completely different matter. You can mix cold and hot cycling just fine; they basically don't go bad unless they fail outright (at a much lower rate than any HDD) or are worn out from write wear (which is quite hard to do if you don't do stupid things with them). If it's just an unrecoverable block due to chance, a reformat fixes the problem and the drive continues soldiering on (that hasn't happened to me yet, but it's what I'd do).

So my SSD pile continues to grow and the capacities continue to cycle up. The pile saw its first 1TB SSDs last year. At this juncture, if I have to replace my archival HDDs I might use some of the spares from my HDD pile, but after that I'll happily spend the money on a bunch of SSDs for the same storage because they'll last a whole lot longer.

-Matt

Comment Re:Problems with SSD storage (Score 2) 94

That standard is for a flash cell that is at its wear limit (basically at its end of life). A brand-new flash cell that has only been written a few times has 10x the shelf life.

Those numbers are also quite misleading because, in addition to the above, a powered SSD will rewrite cells when their data becomes weak, so data retention for a powered SSD is going to be very long. Since SSD flash-cell life is based on write activity, if you don't write the drive to its wear limit and you leave it powered most of the time, it will retain your data and probably last a very long time (well in excess of 10 years, probably in excess of 30, possibly even longer).

SSDs don't eat very much power, so a data warehouse full of them would be workable.

HDDs, on the other hand, wear out whether they are powered or not: corrosion, mechanical stress (even when not doing anything), lubricant issues, and any number of other factors. An HDD on a shelf isn't going to last even 5 years in any reliable sense. You might be able to recover the data from the magnetic media, but the expense becomes stratospheric if you have more than one drive to recover.

-Matt

Comment Re:Not a big deal... (Score 2) 458

No, not necessarily. The problem isn't so much the CPU cores; those will be mostly backwards-compatible. The real problem is all the other discrete PCI devices. If Microsoft does not provide updated drivers for their older OS releases, those older releases have no chance of working on newer hardware.

For example, Intel's Skylake chipset now has I219 gigabit ethernet (an uprev from the I218, which itself was an uprev from the I217). The chance of older ethernet drivers working with the newer chips is zero. In the case of the I219, the flash mapping and access mechanics changed drastically.

The integrated GPU is another good example. Skylake is up to Gen9. The chances that Gen8 code will work with it are zero.

One can go down the list. The only chipsets generic enough for older drivers to keep working are the USB and AHCI chipsets. Everything else? Forget it.

But I don't know why people are complaining so much. The same can be said for BSD and Linux distros: an older BSD or Linux release is not going to work on newer systems. Most people don't care since they just update to the latest. While it is possible to backport the drivers to older OS releases, not very many people have the skill required, so for all intents and purposes you need to run newer open-source OSes on these newer chips too.

-Matt

Comment Re:Energy consumption is going to increase (Score 1) 645

The problem with projections is that they rarely predict how things will actually develop. What happens in reality is that society slowly recognizes that a problem exists. Often too slowly (and probably too late when it comes to climate change), but nevertheless it eventually gets recognized, and society shifts and adjusts.

Nobody seems to understand just how huge the energy economy in the developed world already is, just to support our current lifestyle. The numbers you're quoting, despite probably being wrong (very wrong, most likely)... still, the numbers you are quoting are *nothing* compared to the energy infrastructure that drives the U.S. economy today. On a relative scale, if we can have what we have now, we can certainly achieve anything you've mentioned above.

At some point in the last 10 years this whole 'thorium reactor' movement cropped up, and frankly it's hard to debunk the utter stupidity of the model because very few of the people talking about it are bona fide scientists who know what they are talking about. Me included, on this issue. But on the other hand, I'm probably one of the few people who actually knows how to use a geiger counter and has had conversations with scientists standing in front of piles of lead shielding crap I wouldn't want to touch with my bare hands.

When one of those guys tells me that thorium is a disaster due to its secondary byproducts, I believe him.

-Matt

Comment Article is kinda pie-in-the-sky wrong (Score 3, Interesting) 100

At least, it's not totally correct. Memory-bus non-volatile storage such as Intel's X-Point stuff still requires significant cache management by the operating system. Why? Because it doesn't have nearly enough durability to just be mapped as general-purpose memory. A DRAM cell goes through trillions of cycles in its lifetime. Something like X-Point might be 1000x more durable than standard flash, but it is still 6 orders of magnitude LESS durable than DRAM. So you can't just let user programs write to it however they like.
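To put rough numbers on that point (all assumed, order-of-magnitude figures for illustration, not datasheet values):

    /*
     * Back-of-the-envelope endurance math.  All figures are assumed,
     * order-of-magnitude numbers for illustration, not datasheet values.
     */
    #include <stdio.h>

    int main(void)
    {
        double flash_endurance = 1.0e4;                     /* ~10^4 P/E cycles */
        double nvm_endurance   = flash_endurance * 1000.0;  /* "1000x flash"    */
        double dram_endurance  = 1.0e15;  /* DRAM: effectively unlimited        */
        double write_rate      = 1.0e7;   /* assumed writes/sec hitting one hot
                                             line on the media                  */

        printf("endurance: nvm ~%.0e cycles, dram ~%.0e cycles\n",
               nvm_endurance, dram_endurance);
        printf("one hot line, no wear leveling: worn out in ~%.0f s\n",
               nvm_endurance / write_rate);
        return 0;
    }

A single hot cache line hammered without wear leveling burns through the cell's budget in about a second, which is exactly why the OS has to manage the thing as cached storage rather than hand it out as plain memory.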

Secondly, the claim that data-center machines are becoming obsolete is also not correct. SSDs make a fine bridge between traditional HDD or networked storage and something like X-Point, for two reasons. First, all data-center machines have multiple SATA buses running at 6 GBit/s; gang them all together and you have a few gigabytes per second of standard storage bandwidth. Second, you can pop NVMe flash (PCIe-based flash controllers) into a server, and each one has in excess of 1 GByte/sec of bandwidth (and usually much more).
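The rough math, with assumed port and drive counts (SATA's 6 GBit/s works out to about 600 MB/s usable after 8b/10b encoding):

    /*
     * Rough aggregate-bandwidth math.  Port and drive counts, and the
     * per-device figures, are assumptions for illustration.
     */
    #include <stdio.h>

    int main(void)
    {
        /* SATA 3: 6 GBit/s on the wire, ~600 MB/s usable after 8b/10b */
        double sata_mb_s  = 600.0;
        int    sata_ports = 8;            /* typical server chipset (assumed) */

        /* NVMe on PCIe 3.0 x4: roughly 3.9 GB/s of raw lane bandwidth */
        double nvme_mb_s   = 3900.0;
        int    nvme_drives = 2;           /* assumed */

        printf("SATA ports ganged: ~%.1f GB/s\n", sata_ports * sata_mb_s / 1000.0);
        printf("NVMe drives:       ~%.1f GB/s\n", nvme_drives * nvme_mb_s / 1000.0);
        return 0;
    }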

Third, in terms of memory management, paging to and from SSD or NVMe 'swap' space, or using it as a front-end cache for slower remote storage or spinny disks, already gives servers a fresh new life, which means they won't be obsolete for years to come.

And finally there is the cost. These specialized memory-bus non-volatile memories are going to be expensive. VERY expensive. To the point where existing configurations still have a pretty solid niche to fill. Not all workloads are storage-intensive, and these new memory-bus non-volatile memories don't have the density to replace the storage required for large databases (or anywhere near it).

So, the article is basically a bit too pie-in-the-sky and ignores a lot of issues.

-Matt
