
Comment Stupid article (Score 1) 75

Satellite pagers (and, in more modern times, texts over the cellular network) are the most reliable way to get alarms out to field and on-call personnel. Sure, someone could send a malicious fake page or text, but these alarms are mainly just a heads-up to personnel who are not in the operations center that something is amiss. The main board will always be checked, and personnel will always call in and double-check, before anyone actually pushes any buttons.

This is a really stupid article.


Comment Re:First lesson (Score 4, Interesting) 135

I have two major beefs with IPv6. The first is that the 2^48 end-point switch address space wasn't well thought through. Hey, wouldn't it be great if we didn't have to use NAT and could give all of those IoT devices their own IPv6 address? Well... no, actually. NAT does a pretty good job of obscuring the internal topology of the end-point network. Having only a stateful firewall and no NAT exposes the internal topology. Not such a good idea.

The second is that all the discovery protocols were left unencrypted and made complex enough to virtually guarantee a plethora of possible exploits. Some have been discovered and fixed; I guarantee there are many more in the wings. IPv4 security is a well-known problem with well-known solutions. IPv6 security is a different beast entirely.

Other problems include: the excessively flexible protocol layering, which allows all sorts of encapsulation tricks (some of which have already been demonstrated); a 'mandatory' IPSEC pasted on without integration with a mandatory secure validation framework (making it worthless with regard to generic applications being able to assert a packet-level secure connection); the assumption that the address space would be too big to scan (yeah, right... the hackers didn't get that memo, my tcpdump tells me); and not making use of MAC-layer features that would have improved local LAN security, if only a little. Also, idiotically and arbitrarily blocking off a switch subspace eats 48 bits for no good reason and tries to disallow routing within that space (which will soon have to change, considering the number of people who want stateful *routers* to break up their sub-48-bit traffic and who have no desire whatsoever to treat those 48 bits as one big switched subspace).

The list goes on. But now we are saddled with this pile, so we have to deal with it.


Comment Flood defenses? (Score 5, Informative) 135

There is no flood defense possible for most businesses at the tail end of the pipe. When an attacker pushes a terabit/s at you, at all the routers in the path leading to you, and at the other leaves that terminate at those routers, from 3 million different IP addresses on compromised IoT devices, your internet pipes are dead, no matter how much redundancy you have.
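The arithmetic here is worth spelling out: spread a terabit across millions of bots and each one barely has to try, while the victim's pipe is oversubscribed by orders of magnitude. A quick sketch (all figures are the illustrative numbers from above plus an assumed 10 Gbit/s victim uplink):

```python
# Back-of-envelope DDoS math. The attack size and bot count come from
# the comment above; the victim uplink is an assumed example figure.

attack_bps = 1e12            # 1 terabit/s aggregate flood
num_bots = 3_000_000         # compromised IoT devices
per_bot_bps = attack_bps / num_bots

print(f"per-device rate: {per_bot_bps / 1e3:.0f} kbit/s")  # trivially low

uplink_bps = 10e9            # even a generous 10 Gbit/s business pipe
print(f"uplink oversubscription: {attack_bps / uplink_bps:.0f}x")
```

Each bot only needs a few hundred kbit/s, well within what a cheap compromised camera can emit, yet the aggregate swamps the victim's uplink 100 times over before a single packet can be filtered at the edge.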

Only the biggest companies out there can handle these kinds of attacks. The backbone providers have some defenses, but it isn't as simple as just blocking a few IPs.


Comment Re:No, it's not time. (Score 1) 183

You just use the scroll wheel; the scroll bar is always a last resort. I prefer the scroll wheel myself, but if the system doesn't have a mouse -- that is, one only has the trackpad -- then either two-finger scrolling (Apple style) or one-finger-right-edge-of-pad scrolling is a pretty good substitute.


Comment Primary problem is the touchpad hardware (Score 1) 183

The real problem is the touchpad hardware. The touchpad device itself may not be able to accurately track three or four fingers, and there isn't a thing the operating system can do to fix that. I've noticed this on Chromebooks in particular, when I ported the touchpad driver for the Acer C720. The hardware gets very confused if you put more than two fingers down on the pad horizontally (or if your fingers cross horizontally while you slide them around).

It basically makes using more than two fingers very unreliable. My presumption is that a lot of laptops out there with these pads probably have the same hardware limitations.


Comment Re:NVMe is excellent (Score 3, Insightful) 161

Unless the project has only one source file, compiling isn't really single-thread bound. Most projects can be built with make -j N. When we do bulk builds, that's what we see happening most of the time, so with very few exceptions your project builds should be able to make use of many CPU cores at once.

The few exceptions are: (1) the link phase is typically a choke point and serializes to one thread, and (2) certain source files might be so large relative to the others that everything else finishes and the build is twiddling its thumbs waiting for that one 200,000-line source file to finish compiling before it can move on to the link phase.
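The shape of that pipeline is easy to sketch: independent compile jobs fan out across workers (the moral equivalent of `make -j 4`), and the link step runs once, serially, only after every object file exists. File names and the worker count here are hypothetical, just to show the structure:

```python
# Toy model of a parallel build: compiles fan out, the link serializes.
from concurrent.futures import ThreadPoolExecutor

sources = [f"file{i}.c" for i in range(8)]   # hypothetical translation units

def compile_unit(src: str) -> str:
    # stand-in for invoking the compiler on one independent source file
    return src.replace(".c", ".o")

# like `make -j 4`: up to four compile jobs in flight at once
with ThreadPoolExecutor(max_workers=4) as pool:
    objects = list(pool.map(compile_unit, sources))

def link(objs: list[str]) -> str:
    # the link phase is one job that consumes *all* objects, so it
    # cannot start until the slowest compile finishes
    return "a.out"

print(link(objects))
```

The one 200,000-line file in exception (2) is just the slowest `compile_unit` call: the pool drains, three workers sit idle, and `link` still has to wait for it.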

One other note: builds are like 99.9% CPU driven. Storage bandwidth is almost irrelevant because there is almost no I/O involved in doing a build relative to the CPU time required. Source files are likely already cached in memory, temporary files don't last long enough to even have a chance to get written to disk (even when not using tmpfs), and object files and executables are tiny relative to available storage bandwidth and are flushed asynchronously as well (so nobody has to wait on them to be flushed to disk).

So, for example, when we do a bulk build of all 24000+ applications in ports, we use tmpfs mounts for all temporary files and our disk I/O is almost non-existent throughout the process. The only time we see busy storage is during peak load, when the running compiler binaries exceed available RAM and the system pages a bit (you have to allow this in order to optimize the non-peak portions of the build and ensure that all system resources are fully utilized throughout the entire 22-hour bulk build).


Comment Re:Next Milestone? RAM (Score 5, Interesting) 161

Yes, but that is what the XPoint technology is trying to address. NVMe is not designed to operate like RAM, and the latencies are still very high: nominal NVMe latency for a random access is 15-30us. The performance (1.5-3.0 GBytes/sec for normal and 5+ GBytes/sec for high-end NVMe devices, reading) comes from the multi-queue design allowing many requests to be queued at the same time.

Very few workloads can attain the request concurrency needed to actually max out an NVMe device. You have to have something like 64-128 random requests outstanding to max out the bandwidth (fewer for sequential). Server-side services have no problem doing this, but very few consumer apps can take full advantage of it.
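Why concurrency matters follows from Little's law: the in-flight bytes needed to saturate a device equal bandwidth times latency, so the required queue depth is that product divided by the request size. A sketch using the rough figures from this comment (3 GB/s, ~20us random-read latency, 4 KiB requests), noting that this is only a lower bound since service latency on real devices grows under load and the flash dies want parallelism of their own:

```python
# Little's law lower bound: outstanding requests needed to keep an
# NVMe device busy = bandwidth * latency / request size.
# Figures are the rough numbers from the comment, not device specs.

def queue_depth(bandwidth_bps: float, latency_s: float, request_bytes: int) -> float:
    return bandwidth_bps * latency_s / request_bytes

# 3 GB/s device, 20us random-read latency, 4 KiB requests:
qd = queue_depth(3e9, 20e-6, 4096)
print(f"need ~{qd:.0f} random requests in flight (lower bound)")

# large sequential requests need far fewer outstanding:
print(f"128 KiB requests: ~{queue_depth(3e9, 20e-6, 128 * 1024):.1f}")
```

A single-threaded consumer app issuing one 4 KiB read at a time gets one request's worth of the device, which is why a lone thread sees latency, not bandwidth.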

The NVMe design is thus more akin to a fast storage controller and should not be considered similar to a dynamic-RAM controller in terms of performance capability.

Because of the request concurrency required to actually attain the high read capability of an NVMe device, people shouldn't throw away their SATA SSDs just yet. Most SATA SSDs will actually have higher write bandwidth than low-end NVMe devices (particularly small-form-factor NVMe devices). And for a lot of workloads (particularly consumer workloads), the NVMe SSD will not be a whole lot faster.

That said, I really love NVMe, particularly when configured as swap and/or a swap-based disk cache. And I love it even more as a primary filesystem. It's so fast that I've had to redesign numerous code paths in DragonFlyBSD to be able to take full advantage of it. For example, the buffer cache and VM page queue (pageout daemon) code was never designed for a data read rate of 5 GBytes/sec. Think about what 5+ GBytes/sec of new file-backed VM pages being instantiated per second does to normal VM page-queue algorithms, which normally only keep a few hundred megabytes of completely free pages in PG_FREE. The pageout daemon couldn't recycle pages fast enough to keep up!
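Putting numbers on that pressure makes the problem obvious. Assuming 4 KiB pages and, per the comment, a free pool of only a few hundred megabytes (300 MB is an illustrative figure):

```python
# What a 5 GB/s read rate means for the VM page queues.
# 4 KiB pages assumed; the 300 MB free pool is an illustrative figure
# for "a few hundred megabytes of completely free pages in PG_FREE".

page_size = 4096
read_rate = 5e9                       # bytes/sec of new file-backed pages
pages_per_sec = read_rate / page_size
print(f"{pages_per_sec:,.0f} pages instantiated per second")

free_pool = 300e6                     # ~300 MB of completely free pages
seconds_to_drain = free_pool / read_rate
print(f"free pool drains in {seconds_to_drain:.2f} s")
```

Over a million pages a second, draining the entire free pool in a few hundredths of a second: the pageout daemon has to find, clean, and recycle pages at that same rate continuously or the allocator stalls.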

It's a nice problem to have :-)


Comment Re:Why is Windows 10 the benchmark? (Score 1) 205

I should add, the evidence of this is plentiful. Anyone remember the days of IDE PIO? Before IDE DMA, and in particular before command and data blocks could be fully buffered by a hardware FIFO in the controller, IDE PIO was a complete disaster. It barely worked (and quite often didn't). And we had to pull out all the stops as device-driver writers to get it to work as well as it did (which wasn't very well).


Comment Re:Why is Windows 10 the benchmark? (Score 5, Informative) 205

Not quite true, A.C. The instructions on those old 8-bit CPUs could be synchronized down to a single clock tick (basically crystal accuracy), thus allowing perfect read and write sampling of I/O directly. We could do direct synthesis and A/D sampling, for example, with no cycle error, as well as synchronize data streams and then burst data with no further handshaking. It is impossible to do that with a modern CPU, so anything which requires crystal-accurate output has to be offloaded (typically to an FPGA).

RTOSes only work up to a point, partly because modern CPUs have supervisory interrupts (Intel, at least, has the SMI) which throw a wrench into the works, but also because it is literally impossible to count cycles to know how long something will take. A modern RTOS works at a much higher level than those 8-bit environments did and is unable to provide the same rock-solid guarantees that the 8-bit RTOSes could.


Comment Re:model Slashdot response (MS DOS-ickies r.i.p.) (Score 3, Informative) 205

Looks interesting... I've pre-ordered two (both CPU models, 4G) for DragonFlyBSD; we'll get it working on them. Dunno about the SD card, but a PCIe SSD would certainly work. The BIOS is usually the sticking point on these types of devices. Our graphics stack isn't quite up to Braswell yet, but it might work in frame-buffer mode (without accel). We'll see. The rest of it is all standard Intel insofar as drivers are concerned.

My network dev says the Gigabit controller is crap :-) (he's very particular). But for a low-end device like this nobody will care.

All the rest of the I/O is basically just pinned out from the Intel CPU. It's always fun to remark on specs, but these days specs are mostly just what the CPU chip/chipset supports directly.

I'm amused that some people in other comments are so indignant about the pricing. Back in the day, those of us who hacked on computers (Commodore, Atari, TRS-80, Apple II, later the Amiga, etc.) saved up and spent what would be equivalent to a few thousand of today's dollars to purchase our boxes. These days enthusiast devices are *cheap* by comparison. My PET came with 16KB of RAM and a tape-cassette recorder for storage; I later expanded it to 32KB and thought it was godly.


Comment Re:Wagging the dog (Score 1) 551

What new and relevant thing do you want to see in the phone? I for one can't really think of anything. I don't really need a better camera, for example, nor do I need any more on-phone storage. LTE (or LTE-A) is plenty fast enough; there's no point having more bandwidth that I'm not going to pay the cell carrier for. Wifi is plenty fast enough. Games run fine on the -6, so they'll run fine on the -7. What's left?


Comment Intel must love these articles (Score 1) 276

So full of complete nonsense. Throwing out terms without knowing what they actually mean, let alone whether an operating system actually has to make any changes to support them.

Take Speed Shift, for example... all it does is remove the need for the OS to calculate a P-state for HLT/MWAIT. All ACPI has to do is present a smaller list of P-states, and *ANY* OS that supports HLT/MWAIT P-state setting (which basically worked meaningfully from Haswell onward) will instantly be using Speed Shift. There's nothing to 'support' unless the OS is coded to intentionally break it.

AMD's SMT improvements don't need any OS-specific coding. The original Bulldozer architecture *DID* need OS-specific coding, because it was a piece of shit (and a lot of us just didn't bother to code the OS to try to characterize mixed integer/FP loads), but continuing to use that coding on the newer architecture doesn't really cost anything. And, again, the CPU topology is made available to the OS via ACPI, and any OS since before Sandy Bridge could use it. Linux and the BSDs have been using the topology info provided by ACPI for years, and Microsoft had better have been too, so no OS-specific coding is required.

What a load of crap.

