
Comment Re:Specced too low, weird form factor (Score 1) 174

This is the *mobile* i5, not the full-blown desktop i5. It's basically the Broadwell successor to the Haswell 29xx series. 15W TDP or less. The BRIX runs 8W idle (not sleeping) and 20W at 100% cpu (all 4 threads full out). Intel is playing fast and loose with its naming scheme for Broadwell.

-Matt

Comment Re:Specced too low, weird form factor (Score 1) 174

All the older Haswell-based boxes have dropped in price significantly, and they still make decent boxes as long as you are not compute-heavy. For example, the 2957U is 2-core, no hyperthreading, 1.4 GHz, no turbo, and no AESNI (so https and other crypto is slow), whereas even the Broadwell i5-5200U is 2-core/4-thread, 2.2 GHz with turbo to 2.7 GHz, and has AESNI.

I have an Acer C720P chromebook running DragonFly (BSD) with the 2955U in it, which is very close to the 2957U. I would call it decent for its purpose and it can certainly drive the chromebook's display fairly well. Firefox is not as snappy as I would like, though.

On the i5-5200U, even unaccelerated video decoding can run full-frame at full speed on my 1920x1050 monitor, and Firefox is quite snappy.

If I had to make a cost-conscious decision and go with the older Haswell-based cpu, giving up some cpu power, I would say it is still a reasonable choice *BUT* I would compensate at least a little by throwing in more ram (at least 4GB).

-Matt

Comment Specced too low, weird form factor (Score 2) 174

It's specced way too low to really be useful as a general computing device, and the form factor is 'weird' to say the least. It's too big to really be called a stick, and too small to pack a decent cpu. There's plenty of space behind the monitor for a somewhat larger device in a better form factor. The stick is a toy that you will become disappointed with very quickly (think of the old 'netbook' concept Intel tried to push a few years ago... that's what the stick feels like).

Honestly, the 'compute stick' makes zero sense for a TV-mounted device. If you want to throw a display up on the TV, it is far better to just go with a Chromecast stick or an Apple TV (for AirPlay) and use a pad or cell phone in your hand to control it. Otherwise you will be fumbling around with a horrible remote or have to throw together a Bluetooth keyboard (etc...), and it just won't be a fun or convenient experience.

My recommendation... don't bother with this gadget. Instead, spend a bit more money and get an Intel NUC or Gigabyte BRIX (both based on Broadwell). And get at least the i5 version; the lack of turbo in the i3 version is telling. E.g. an i5-5200U-based box or better. It will cost significantly more than the stick, but it packs a decent cpu, can take up to 16GB of ram (2x 204-pin SO-DIMM DDR3), and depending on the model might even have room for a 2.5" SSD or HDD. The Broadwell i5-5200U makes for quite a reasonable compact workstation, and boxes based on it will be almost universally dual-headed. Of course, whatever floats your boat, but I would definitely say that the lowest-priced Intel NUC or Gigabyte BRIX that is Haswell- or Broadwell-based is still going to be an order of magnitude better than the compute stick.

I have one of the Gigabyte GB-BXi5H-5200s myself (the 'H' version fits a normal 2.5" SSD or HDD) and packed 16GB of ram into it. It is dual-headed so I can drive two displays with it, and the box is small enough to mount on the back of a monitor if you so desire (it even includes a mounting plate, and most monitors, such as LG monitors, are ready to take it). And if mounting it on the back of a TV doesn't make sense, mount it on the back of a monitor instead or just let it float behind the monitor. It's a small box, after all; it won't get in the way of anything. 4-thread (2-core), 2.2 GHz with turbo to 2.7 GHz. Dual-head. Decent.

-Matt

Comment Re:Latency vs bandwidth (Score 5, Interesting) 162

That isn't correct. The queue depth for a normal AHCI controller is 31 (assuming 1 tag is reserved for error handling), and it only takes a queue depth of 2 or 3 to reach maximum linear throughput.

Also, most operating systems are doing read-ahead for the program. Even if a program is requesting data from a file in small 4K read() chunks, the OS itself is doing read-ahead with multiple tags and likely much larger 16K-64K chunks. That's assuming the data hasn't been cached in ram yet.
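To make that concrete, here's a minimal user-space sketch (in C, an illustration I'm making up rather than anything from a particular OS) that reads a file in small 4K read() chunks; the kernel's read-ahead, not the request size, is what keeps the device streaming. The posix_fadvise() hint just makes the sequential intent explicit where it's available.

/* readloop.c -- a sketch only: read a file in small 4K chunks.
 * Despite the tiny request size, OS read-ahead keeps the drive
 * streaming at close to its linear rate (when the data is not
 * already cached in ram).
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
#ifdef POSIX_FADV_SEQUENTIAL
    /* Optional hint; most kernels detect sequential access anyway. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
#endif
    char buf[4096];
    ssize_t n;
    long long total = 0;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        total += n;
    printf("read %lld bytes\n", total);
    close(fd);
    return 0;
}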

For writing, the OS is buffering the data and issuing the writes asynchronously, so writing is not usually a bottleneck unless a vast amount of data is being shoved out.

-Matt

Comment Re:ISTR hearing something about that... (Score 2) 162

Actually, large compiles use surprisingly little actual I/O. Run a large compile (e.g. a parallel buildworld or a large ports bulk build or something like that) while observing physical disk I/O statistics, and you'll realize very quickly that the compiles are not I/O-constrained in the least.

'Most' server daemons are also not I/O-constrained in the least. A web server can be IOPS-constrained when asked to load, e.g., tons of small icons or thumbnails. If it is managing a lot of video or audio streams, a web server typically becomes network-constrained, but the IOPS will be high enough to warrant at least a SATA SSD rather than an HDD.

Random database accesses are I/O-constrained if not well-cached in ram, which depends on the size of the database too, of course. Very large databases which cannot be well cached are best suited for PCIe SSDs. Not a whole lot else.

-Matt

Comment Not surprising (Score 4, Informative) 162

I mean, why would anyone think images would load faster? The cpu is doing enough transformative work processing the image for display that the storage system only has to be able to keep ahead of it... which it can do trivially at 600 MBytes/sec if the data is not otherwise cached.

Did the author think that the OS wouldn't request the data from storage until the program actually asked for it? Of course the OS is doing read-ahead.

And programs aren't going to load much faster either; dynamic linking overhead puts a cap on it, and the program is going to be cached in ram indefinitely after the first load anyway.

These PCIe SSDs are useful only in a few special, mostly server-oriented cases. That said, it doesn't actually cost any more to have a direct PCIe interface versus a SATA interface, so I think these things are here to stay. Personally, though, I prefer the far more portable SATA SSDs.

-Matt

Comment Re:Wow... (Score 1) 51

Well, except that it isn't a mere month. Unpowered data retention is around 10 years for relatively unworn flash and around 1 year for worn flash. Powered data retention is almost indefinite (doesn't matter if the data is static or not). The modern SSD controller will rewrite blocks as the bits leave the sweet zone.

The main benefit, though, is that SSD wear is essentially based on how much data you've written, which is a very controllable parameter and means, among other things, that even an SSD which has been sitting on a shelf for a long time and lost its data can still be used for fresh data (TRIM wipe + newfs). I have tons of SSDs sitting on a shelf ready to be reused when I need them next. I can't really do that with HDDs and still expect them to be reliable.

Hard drives have a relatively fixed life whether powered or not. If you have a modestly used hard drive and take it out and put it on a shelf for a year, chances are it either won't be able to spin up after that year or it will die relatively quickly (within a few weeks, possibly even faster) once you have spun it up. So get your data off it fast if you can.

So SSDs already win in the data retention and reliability-on-reuse department.

-Matt

Comment Re:nvidia/ATI should keep their new stuff propriet (Score 1) 309

I don't understand what you mean by 'non-graphics competitors'. Intel, AMD, and ARM cpu offerings already have integrated GPUs with dual-head capability (and have for a few years now). There are no non-graphics competitors.

Currently the best open source kernel and driver compatibility is with the Intel and AMD integrated GPUs. That's what all the KMS work was responsible for giving us. The performance of integrated GPUs has increased steadily over the last few years and has reached a point now where most 3D games will run with modest (but not high-end) settings, and *all* 2D (aka desktop operations) will run faster than you can blink.

I splurged on a mid-range card for my Windows gaming box, but all my workstations just use the cpu-integrated gpus these days for dual-head operation. And they're nice and quiet and fast.

-Matt

Comment Consumers are not going to notice much difference. (Score 2) 72

Well, nobody with a laptop is really going to notice much of a difference because, frankly, there isn't a whole lot of software that actually needs that kind of performance over the ~550 MBytes/sec that can already be obtained with SATA-III. Certainly nothing that would be run on a laptop, anyway.

It's just using the PCI-e lanes on the M.2 connector instead of the SATA-III lanes. This isn't a magical technology. There's a loss of robustness and portability that gets traded off. It does point to SATA needing another few speed bumps, though. The fundamental serial link technology used at the physical level by PCI-e and SATA is almost identical. The main difference is that SATA is designed for cabling while M.2 is not (at least not M.2's PCI-e lanes).

-Matt

Comment Re:Should be micro kernel (Score 5, Interesting) 209

Nobody does message passing for basic operations. I actually tried to asynchronize DragonFly's system calls once but it was a disaster. Too much overhead.

On a modern Intel cpu a system call runs around 60ns. If you add a message-passing layer with an optimized path to avoid thread switching, that increases to around 200-300ns. If you actually have to switch threads, it increases to around 1.2µs. If you have to switch threads AND save/restore the FPU state, now you are talking about ~2-3µs. If you have to message-pass across cpus, then the IPI overhead can be significant... several microseconds just for that, plus cache mastership changes.

And all of those times assume shared memory for the message contents. They're strictly the switch and management overhead.
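For a rough feel of the baseline number, here's a small sketch (a microbenchmark invented for illustration, not anything from DragonFly) that times raw getpid() system calls in a tight loop. The absolute result will vary by cpu and kernel, but it shows the order of magnitude of the bare syscall path with no messaging layer on top.

/* syscall_cost.c -- sketch of measuring bare system call overhead.
 * Build with: cc -O2 syscall_cost.c -o syscall_cost
 */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const long iters = 10000000;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++) {
        /* syscall(2) bypasses any libc caching, so every iteration
         * really enters and exits the kernel. */
        syscall(SYS_getpid);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per syscall\n", ns / iters);
    return 0;
}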

So, basically, no operating system that is intended to run efficiently can use message-passing for basic operations. Message-passing can only be used in two situations:

(1) When you have to switch threads anyway. That is, if two processes or two threads are messaging each other. Another good example is when you schedule an interrupt thread but cannot immediately switch to it (i.e. preempt the current thread). If the current thread cannot be preempted, then the interrupt thread can be scheduled normally without imposing too much overhead vs the alternative.

(2) When the operation can be batched. In DragonFly we successfully use message-passing for network packets and attain very significant cpu localization benefits from it. It works because packets are batched on fast interfaces anyway. By retaining the batching all the way through the protocol stack we can effectively use message passing and spread the overhead across many packets. The improvement we get from cpu localization, particularly not having to acquire or release locks in the protocol paths, then trumps the messaging overhead.

#2 also works well for data processing pipelines.
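As a toy illustration of the batching idea (a sketch only, not DragonFly's actual network code), the consumer below takes the entire pending message chain in one lock round-trip, so the per-message synchronization cost is amortized across however many messages have queued up.

/* msgq.c -- toy batched message queue, illustration only.
 * Producer appends one message at a time; consumer drains the whole
 * chain in a single lock round-trip and then processes it lock-free,
 * spreading the messaging overhead across the entire batch.
 */
#include <pthread.h>
#include <stddef.h>

struct msg {
    struct msg *next;
    /* payload (e.g. a packet) would live here */
};

struct msgq {
    pthread_mutex_t lock;   /* init with PTHREAD_MUTEX_INITIALIZER */
    struct msg *head;
    struct msg *tail;
};

/* Producer side: enqueue one message. */
void msgq_put(struct msgq *q, struct msg *m)
{
    m->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail)
        q->tail->next = m;
    else
        q->head = m;
    q->tail = m;
    pthread_mutex_unlock(&q->lock);
}

/* Consumer side: take everything that has accumulated so far. */
struct msg *msgq_take_all(struct msgq *q)
{
    struct msg *chain;

    pthread_mutex_lock(&q->lock);
    chain = q->head;
    q->head = q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    return chain;   /* walk and process the chain without the lock held */
}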

-Matt

Comment Re:Can we be sure there are no exploits? (Score 2) 209

Well... basic procedures using only MOV/CMP/JMP are not something that even Linux really needs to code in assembly. What is being talked about here is primarily the trap, exception, syscall, signal trampoline, and interrupt entry and exit mechanisms. Also, thread-switch code can get pretty complex because there is a lot more hardware state involved than just the basic register set. When you start having to deal with SWAPGS and MSR registers, you've really gone down the rabbit hole.

-Matt

Comment The core of the issue (Score 2) 281

The core of the issue has nothing to do with going off-grid and everything to do with matching production from renewable sources to the actual load on the grid. Without that we get into the situation that Germany finds itself in, which is twofold: (1) electricity prices fall to zero during the day due to all the solar, and as subsidies go away the owners can't make money from providing power to the grid; and (2) the base-load differential between day and night is so great that the traditional generation (i.e. coal) cannot run continuously at critical mass and so becomes extremely inefficient and uneconomical. So coal power generation companies in Germany are also going bankrupt.

Ultimately consumers with PV systems will be forced to pay spot rates and feel the pain. This is already beginning to happen in many parts of the country... where day-time electricity rates are lower but the buy-back is also lower, and night-time rates are higher and have a higher buy-back.

The idea with using the electric car battery (or some other form of temporary storage) is to use it to store energy when prices are low and inject it into the grid when prices are high. This also has the side effect of reducing the base-load differential between day and night, so other generation sources such as nuclear and coal can operate efficiently (and thus profitably) to make up the difference.

There is nothing nefarious going on. Really, going entirely off-grid is not something anyone should be trying to do unless they actually live somewhere with a flaky grid (or no grid). And the reality is that electricity prices are going to fluctuate even more between day and night, or rainy vs not, or windy vs not, as more renewable energy sources are brought online.

-Matt
