
Comment Re:First lesson (Score 4, Interesting) 125

I have two major beefs with IPV6. The first is that the end-point 2^48 switch address space wasn't well thought through. Hey, wouldn't it be great if we didn't have to use NAT and could give all of those IOT devices their own IPV6 address? Well... no, actually. NAT does a pretty good job of obscuring the internal topology of the end-point network. Having only a stateful firewall and no NAT exposes the internal topology. Not such a good idea.
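
One concrete way a globally addressed end-point can leak information (an illustrative aside on my part, not something claimed above): classic SLAAC addresses embed an EUI-64 interface ID derived straight from the device's MAC address, so the public IPV6 address identifies the hardware behind it. A minimal C sketch of that mapping, with a made-up MAC purely for illustration:

    #include <stdio.h>
    #include <stdint.h>

    /* Build the EUI-64 interface identifier that classic SLAAC derives
     * from a 48-bit MAC: insert 0xFF 0xFE in the middle and flip the
     * universal/local bit of the first octet. */
    static void mac_to_eui64(const uint8_t mac[6], uint8_t eui[8])
    {
        eui[0] = mac[0] ^ 0x02;   /* flip universal/local bit */
        eui[1] = mac[1];
        eui[2] = mac[2];
        eui[3] = 0xFF;            /* fixed filler bytes */
        eui[4] = 0xFE;
        eui[5] = mac[3];
        eui[6] = mac[4];
        eui[7] = mac[5];
    }

    int main(void)
    {
        /* hypothetical device MAC, purely for illustration */
        uint8_t mac[6] = { 0x00, 0x1A, 0x2B, 0x3C, 0x4D, 0x5E };
        uint8_t eui[8];

        mac_to_eui64(mac, eui);

        /* the low 64 bits of the device's global IPV6 address */
        printf("interface id: %02x%02x:%02x%02x:%02x%02x:%02x%02x\n",
               eui[0], eui[1], eui[2], eui[3],
               eui[4], eui[5], eui[6], eui[7]);
        return 0;
    }

Privacy/temporary addresses exist to randomize that identifier, but the larger point about per-device global addresses exposing internal structure stands.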

The second is that all the discovery protocols were left unencrypted and made complex enough to virtually guarantee a plethora of possible exploits. Some have been discovered and fixed; I guarantee there are many more in the wings. IPV4 security is a well-known problem with well-known solutions. IPV6 security is a different beast entirely.

Other problems include the excessively flexible protocol layering, which allows for all sorts of encapsulation tricks (some of which have already been demonstrated); pasting on a 'mandatory' IPSEC without integrating a mandatory secure validation framework (making it worthless with regard to generic applications being able to assert a packet-level secure connection); the assumption that the address space would be too big to scan (yeah, right... my tcpdump tells me the hackers didn't get that memo); and not making use of MAC-layer features that would have improved local LAN security, if only a little. Also, idiotically and arbitrarily blocking off a switch subspace, eating 48 bits for no good reason, and trying to disallow routing within that space -- which will soon have to change, considering the number of people who want stateful *routers* to break up their sub-48-bit traffic and who have no desire whatsoever to treat those 48 bits as one big switched sub-space.

The list goes on. But now we are saddled with this pile, so we have to deal with it.


Comment Flood defenses? (Score 5, Informative) 125

There is no flood defense possible for most businesses at the tail end of the pipe. When an attacker pushes a terabit per second at you -- and at all the routers in the path leading to you, as well as the other leaves that terminate at those routers -- from 3 million different IP addresses on compromised IOT devices, your internet pipes are dead, no matter how much redundancy you have.
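
To get a feel for the scale (my own illustrative arithmetic, not figures from the comment above): 3 million bots only need a few hundred kilobits per second each to add up to a terabit per second at the victim. A quick sketch in C:

    #include <stdio.h>

    int main(void)
    {
        /* illustrative figures: 3 million compromised devices,
         * target aggregate of 1 terabit/s at the victim */
        double devices    = 3e6;
        double target_bps = 1e12;    /* 1 Tbit/s */

        double per_device = target_bps / devices;
        printf("per-device rate: %.0f kbit/s\n", per_device / 1e3);
        /* prints ~333 kbit/s -- well within reach of a cheap IOT device */
        return 0;
    }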

Only the biggest companies out there can handle these kinds of attacks. The backbone providers have some defenses, but it isn't as simple as just blocking a few IPs.


Comment Re:No, it's not time. (Score 1) 183

You just use the scroll-wheel; the scroll bar is always a last resort. I prefer the scroll-wheel myself, but if the system doesn't have a mouse -- that is, if all you have is the trackpad -- then either two-finger scrolling (Apple style) or one-finger scrolling on the right side of the pad is a pretty good substitute.


Comment Primary problem is the touchpad hardware (Score 1) 183

The real problem is the touchpad hardware. The touchpad device itself may not be able to accurately track three or four fingers, and there isn't a thing the operating system can do to fix that. I noticed this on Chromebooks in particular when I ported the touchpad driver for the Acer C720: the hardware gets very confused if you put more than two fingers down on the pad horizontally (or cross them horizontally while sliding your fingers around).

It basically makes using more than two fingers very unreliable. My presumption is that a lot of laptops out there with these pads probably have the same hardware limitations.


Comment Re:NVMe is excellent (Score 3, Insightful) 161

Unless the project has only one source file, compiling isn't really single-thread bound. Most projects can be built with make -j N. When we do bulk builds, that's what we see happening most of the time, so with very few exceptions your project builds should be able to make use of many CPU cores at once.

The few exceptions are: (1) the link phase is typically a choke point and serializes to one thread, and (2) certain source files might be so large relative to the others that everything else finishes and the build is twiddling its thumbs waiting for that one 200,000-line source file to finish compiling before it can move on to the link phase.
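
As a rough illustration of why that serial link step matters (my own back-of-the-envelope example, not something stated above), Amdahl's law caps the speedup you get from make -j N once part of the build can't parallelize. Assuming, purely for illustration, that 5% of the wall time is the single-threaded link phase:

    #include <stdio.h>

    /* Amdahl's law: speedup = 1 / (serial + parallel / N) */
    static double amdahl(double serial_fraction, int ncores)
    {
        double parallel = 1.0 - serial_fraction;
        return 1.0 / (serial_fraction + parallel / ncores);
    }

    int main(void)
    {
        /* assume 5% of wall time is the single-threaded link phase */
        double serial  = 0.05;
        int    cores[] = { 4, 8, 16, 64 };

        for (int i = 0; i < 4; i++)
            printf("%2d cores -> %.1fx speedup\n",
                   cores[i], amdahl(serial, cores[i]));
        /* 4 -> 3.5x, 8 -> 5.9x, 16 -> 9.1x, 64 -> 15.4x:
         * the serial link phase caps the benefit of adding cores. */
        return 0;
    }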

One other note - builds are like 99.9% CPU-driven. Storage bandwidth is almost irrelevant because there is almost no I/O involved in a build relative to the CPU time required. Source files are likely already cached in memory, temporary files don't last long enough to even have a chance to get written to disk (even when not using tmpfs), and object files and executables are tiny relative to available storage bandwidth and are flushed asynchronously as well (so nobody has to wait on them to hit disk).

So, for example, when we do a bulk build of all 24,000+ applications in ports, we use tmpfs mounts for all temporary files, and our disk I/O is almost non-existent throughout the process. The only time we see busy storage is at maximum peak load, when the running compiler binaries exceed available RAM and the system pages a bit (you have to allow this in order to optimize the non-peak portions of the build and ensure that all system resources stay fully utilized throughout the entire 22-hour-long bulk build).


Comment Re:Next Milestone? RAM (Score 5, Interesting) 161

Yes, but that is what the XPoint technology is trying to address. NVMe is not designed to operate like RAM, and the latencies are still very high: nominal NVMe latency for a random access is 15-30us. The performance (1.5-3.0 GBytes/sec for normal and 5+ GBytes/sec for high-end NVMe devices, reading) comes from the multi-queue design, which allows many requests to be queued at the same time.

Very few workloads can attain the request concurrency needed to actually max out an NVMe device. You have to have something like 64-128 random requests outstanding to max out the bandwidth (fewer for sequential). Server-side services have no problem doing this, but very few consumer apps can take full advantage of it.
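
A rough way to see where a figure like 64-128 comes from (my own sketch, with illustrative numbers rather than measurements from the comment above) is Little's law: requests in flight ≈ throughput × per-request latency. At saturation, per-request latency is far higher than the unloaded 15-30us:

    #include <stdio.h>

    int main(void)
    {
        /* illustrative figures, not measurements: target read
         * bandwidth, request size, and per-request latency once
         * the device is fully loaded */
        double target_bw  = 3.0e9;     /* 3.0 GBytes/sec */
        double io_size    = 4096.0;    /* 4 KiB random reads */
        double loaded_lat = 150e-6;    /* ~150us per request under load */

        double iops        = target_bw / io_size;
        double outstanding = iops * loaded_lat;   /* Little's law */

        printf("IOPS needed:        %.0f\n", iops);
        printf("requests in flight: %.0f\n", outstanding);
        /* ~732k IOPS and ~110 requests outstanding -- in the 64-128
         * range, which few consumer workloads ever generate. */
        return 0;
    }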

The NVMe design is thus more akin to a fast storage controller and should not be considered similar to a dynamic RAM controller in terms of performance capability.

Because of the request concurrency required to actually attain the high read bandwidth of an NVMe device, people shouldn't throw away their SATA SSDs just yet. Most SATA SSDs will actually have higher write bandwidth than low-end NVMe devices (particularly small-form-factor NVMe devices), and for a lot of (particularly consumer) workloads the NVMe SSD will not be a whole lot faster.

That said, I really love NVMe, particularly when configured as swap and/or a swap-based disk cache. And I love it even more as a primary filesystem. It's so fast that I've had to redesign numerous code paths in DragonFlyBSD to be able to take full advantage of it. For example, the buffer cache and VM page queue (pageout daemon) code was never designed for a data read rate of 5 GBytes/sec. Think about what 5+ GBytes/sec of new file-backed VM pages being instantiated per second does to normal VM page queue algorithms, which normally keep only a few hundred megabytes of completely free pages in PG_FREE. The pageout daemon couldn't recycle pages fast enough to keep up!
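
To put rough numbers on that (my own illustrative arithmetic, assuming 4 KB pages and a 256 MB free-page pool; neither figure is from the comment above):

    #include <stdio.h>

    int main(void)
    {
        /* illustrative figures: 5 GBytes/sec of new file-backed pages
         * versus a 256 MB pool of completely free 4 KB pages */
        double read_rate = 5.0e9;                 /* bytes/sec instantiated */
        double page_size = 4096.0;                /* 4 KB pages */
        double free_pool = 256.0 * 1024 * 1024;   /* 256 MB of free pages */

        double pages_per_sec = read_rate / page_size;
        double drain_seconds = free_pool / read_rate;

        printf("pages instantiated:   %.0f per second\n", pages_per_sec);
        printf("free pool drained in: %.0f ms\n", drain_seconds * 1e3);
        /* ~1.2 million pages/sec; a 256 MB free pool is gone in ~50 ms,
         * so the pageout daemon has to recycle pages at the same rate. */
        return 0;
    }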

It's a nice problem to have :-)

