Comment The PHEV is better anyway (Score 1) 93

The EV range of the Clarity PHEV isn't that much less than that of the BEV version, and you can always fall back on good ol' gasoline if you need to drive for 24 hours without stopping for more than a fuel fill-up for some reason.

I have the Clarity PHEV and love it. With a home charger and my typical driving patterns, the ICE uses about 2.5 gallons of regular unleaded for every 1000 miles driven. Sure, electricity isn't free, but if you buy the overall sales pitch of an EV, it's hard to object to a PHEV, because for a lot of people the day-to-day experience really isn't that different.

Comment Re:Put down your fucking phone (Score 1) 113

Oh joy, another argument from the position of "it's always been that way, so why should it change now?"

You must be one of the "late adopters" who gets dragged into new things kicking and screaming. By chance are you running Windows 98, MacOS 9, or Red Hat Enterprise Linux 5 on your computer right now? Android 2.2 on your phone?

Comment Re:You're insane if you buy this (Score 1) 113

Who in their right mind is threatening to replace human drivers with cars that have no manual controls? Far more money, R&D and experience has gone into automating flight in aircraft -- Tesla's "Autopilot" even borrows its name from an aircraft term that's been in use since at least the '80s.

We've been doing automation with airplanes for much longer, and arguably airplanes are a LOT easier to automate, because there's basically no risk of running into "traffic" -- vehicles piloted by other humans. In the extremely rare case of a potential mid-air collision, there's a well-documented avoidance procedure that BOTH planes are REQUIRED BY LAW to follow (see TCAS/ACAS), so arguably you could automate the TCAS/ACAS response as well.

And yet, in airplanes, there are still manual controls, and no manufacturer in their right mind is seriously proposing that we get rid of them (or the pilots).

Cars have a more complicated task, as long as there are other human drivers on the road, so I couldn't fathom any cars without a steering wheel showing up until at LEAST 25-50 years after we take away the yoke from airline pilots. And we're not even close to the latter.

Talk to any airline pilot, though, and they will sing the praises of autopilot. You can't have a situation where both pilots fall asleep, or go join the Mile High Club with flight attendants, while the plane is in autopilot, because something could happen at any time requiring manual intervention. But for the majority of flights, there's a long period of "normal" and extremely boring flying that a computer can manage, and that's when autopilot is engaged to relieve workload from the pilots.

People who drive their car on long highway trips just want the same thing. I think the technology already exists to make it a reality, as long as expectations are managed, and drivers aren't under any misconceptions that they can take a nap in the back while the system is driving.

Comment Re:You're insane if you buy this (Score 1) 113

You're insane if you buy this expecting it to be a self-driving car. It's NOT. It's NOT advertised as such, either. Comma.ai has the *eventual goal* of getting to self-driving cars, but with no delusions about how far the current state of the art is from that. It's a development kit.

In terms of actual utility, an ALERT driver ready to take over in a split second can use OpenPilot and Comma's hardware to reduce workload in ordinary conditions. Engaging driver assistance in abnormal conditions (wet pavement, snow, crashing airplanes, earthquakes, etc.) is a Bad Idea; the human driver is responsible for making sure the system is off and taking direct control in those situations.

The biggest problem is that people's expectations are unrealistic. They don't understand how an interim solution can be useful. They only see the risks, not the benefits, and they don't appreciate just how trivial it is to override the system and drive the car yourself the moment it looks like it's about to do something wrong.

Comment Re:Trackpoint (Score 1) 98

I refused to use anything but a TrackPoint as a laptop input device for many years -- probably about 10. Then I got a MacBook. A large, glass-topped touchpad with very accurate tracking, two-finger scrolling, tap-anywhere-to-click, etc. is every bit as good as -- or better than -- a TrackPoint for productivity (the number of accurate clicks you can make per minute). The only thing a TrackPoint can do that a good touchpad can't is endlessly spin your character around in one direction in an FPS game (or the camera, in a 3D modeling editor) at the speed you choose, consistently and for an extended time. :P

I had "a Mac phase" where I used exclusively Macs for a while, and I do miss their touchpads. I went back to an MSI laptop since then, and now I'm running Kubuntu on it after another brief stint with Windows. You'd be forgiven for thinking I change my mind a lot, but I actually tend to stay on the same OS install for years at a time once I find what I like. Right now Kubuntu is working A-OK for me.

The touchpads on Windows laptops aren't as good as a Mac's Force Touch trackpad, though. Maybe the holy grail of input + freedom is to buy a MacBook Pro and install Linux on it? :P

Comment Re:Who actually uses ZFS? (Score 1) 279

I use it on two production servers rented from OVH, each with 2 x eight-core Ivy Bridge Xeon E5 CPUs, 256 GB DDR3, 4 x 2 TB HDD, 2 x 240 GB enterprise SSD, and an unmetered gigabit uplink. I run Ubuntu Server LTS as the host OS with a mix of libvirt VMs (Fedora, Windows Server) and LXD containers (Ubuntu and otherwise). Each container and VM gets its own lz4-compressed ZFS dataset, and the SSDs act as a persistent read and write cache in front of the HDDs.

I get sustained performance above 100k IOPS on most workloads, and under intense writes ZFS keeps the HDDs "saturated" at their maximum rated throughput (somewhere in the neighborhood of 120 MiB/s, give or take 20%). The HDDs are in RAID10, so I have redundancy and 4 TB of backing store. But unlike running raw HDDs, I get SSD-like performance on 99.9999% of workloads, because the SSDs absorb nearly all of the reads (with a very high cache hit rate thanks to the ARC algorithm) and also buffer writes -- safely, unlike RAM, because they're persistent storage -- reorganizing random writes into efficient sequential transfers that maximize HDD throughput.
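For anyone curious, carving out a per-container dataset like that is just a couple of commands. A minimal sketch, with a hypothetical pool name ("tank") and dataset layout -- LXD can also manage datasets itself if you point its storage backend at a parent dataset, so this is just the manual equivalent:

    # hypothetical pool and dataset names -- substitute your own layout
    zfs create tank/lxd
    zfs create -o compression=lz4 -o mountpoint=/var/lib/lxd/containers/web01 tank/lxd/web01
    zfs get compression,compressratio tank/lxd/web01   # confirm lz4 is on and see the savings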

On one of the two servers, the OS install was originally done on Ubuntu 16.04 LTS on a very differently configured server. I was able to very easily migrate that server to an entirely new box without having to reinstall the OS by using `zfs send` over ssh. It only transferred used blocks, and even then they were transferred compressed, so it was many times faster than doing dd | ssh. The box has also survived a full distro upgrade from 16.04 to 18.04 with only a few config file hiccups.
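For anyone who hasn't done a migration like that, it looks roughly like this (hostname, pool name and snapshot names are placeholders, and the -c compressed-send flag assumes a reasonably recent ZFS on Linux):

    # on the old box: snapshot everything and stream it to the new box over ssh
    zfs snapshot -r tank@migrate1
    zfs send -R -c tank@migrate1 | ssh newbox zfs receive -F tank
    # just before cutover, send an incremental to catch up recent changes
    zfs snapshot -r tank@migrate2
    zfs send -R -c -i tank@migrate1 tank@migrate2 | ssh newbox zfs receive -F tank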

There's no other filesystem on the market that could put this particular mix of hardware to use this efficiently and safely.

Also, I have a cron job set up to check the ZFS pool every hour for any detected corruption. ZFS can detect corruption any time a block is read or written, as well as during periodic automated "scrubs" that verify the integrity of data at rest. If ZFS notices anything amiss, I get an email. This has exposed the need for an HDD replacement in the past; the hosting provider swapped in a new disk without taking my server offline, the pool resilvered onto it and started using it, and everything was fine again. The resilver was quite fast for the amount of data involved, again thanks to the compression and the efficient way ZFS lays down sequential writes on HDD media.
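The check itself is nothing fancy -- mine boils down to a small cron script along these lines (pool name and email address are placeholders, and it assumes a working local mail setup):

    #!/bin/sh
    # e.g. /etc/cron.hourly/zfs-health
    # "zpool status -x" prints "all pools are healthy" unless something is wrong
    STATUS="$(zpool status -x)"
    if [ "$STATUS" != "all pools are healthy" ]; then
        echo "$STATUS" | mail -s "ZFS problem on $(hostname)" admin@example.com
    fi
    # plus a periodic scrub from cron.weekly or similar: zpool scrub tank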

I don't think OVH even sells servers with SSD capacities large enough to give me 4 TB of usable storage on SSDs with RAID-1 redundancy (so, 8 TB total). If they did, it'd be $1000/month per server, probably. As it stands, I pay $120/month for each one of these servers. I can't begin to express how absolutely affordable it is to run tiered storage (SSD+HDD) servers on ZFS, while still getting virtually all of the performance you'd get if you ran pure SSDs.

Comment Only ZFS makes tiered storage servers viable (Score 1) 247

There are a lot of comments asking "why ZFS" but most of them don't really understand the main killer feature of ZFS (and ZFS on Linux): the ability to efficiently use tiered storage.

See, there's a fundamental problem in the storage industry right now. Flash storage, aka SSDs, is hideously expensive per gigabyte. Magnetic storage, aka HDDs, is hideously slow in terms of IOPS. For demanding workloads that need to balance server purchase price, high IOPS, and large amounts of local storage, tiered storage is the only real option.

Tiered storage lets you buy both: (1) relatively inexpensive, high-write-endurance but fairly low-capacity SSDs -- usually on the order of 128 to 512 GB, depending on the size of the HDDs behind them; and (2) relatively inexpensive, high-capacity but slow HDDs -- usually 8 TB or larger -- and combine them into one storage pool that *behaves* as if it were an SSD with many terabytes of capacity. You get about 98% of the IOPS performance of the SSDs, while all the data ultimately persists to the HDDs behind the scenes. This is remarkably good for large databases and file servers; building all of that capacity out of datacenter-grade SSDs instead would run you roughly $1000 more per terabyte, assuming RAID-1 redundancy.

With ZFS, you can put the ZFS Intent Log (ZIL) -- basically a write buffer in front of the HDDs -- on a dedicated SSD partition sized at roughly 25-50% of the SSD capacity (depending on how write-intensive your workload is), and mirror it across both SSDs for data safety. ZFS then batches those writes into large sequential transfers to the HDDs, converting what would be thousands of small IOPS (from a database, for instance) into a few dozen HDD IOPS. This lets your array absorb tens of gigabytes of random writes at hundreds of megabytes per second into the SSDs, which are then reorganized and streamed to the HDDs in a way that maximizes their throughput. And program-level calls to sync() or fsync() can legitimately return once the write has hit the ZIL, even if it's still pending for the HDDs, because the data is genuinely on persistent storage that will survive a power outage.
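In zpool terms, that's just a mirrored log vdev built from a partition on each SSD -- something like this (device names are examples only):

    # add a mirrored SLOG/ZIL device using one partition from each SSD
    zpool add tank log mirror /dev/disk/by-id/ssd0-part2 /dev/disk/by-id/ssd1-part2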

You can also set up an L2ARC (Level 2 Adaptive Replacement Cache) with ZFS, which is basically a read cache for the HDDs that sits on a partition of the SSDs. On my servers, I sized the L2ARC at about 75% of the SSD space because my workload doesn't produce very large bursty writes; those with a much heavier write workload will want to shift more of that space from L2ARC to ZIL.
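Adding the L2ARC is a one-liner too; cache devices don't need redundancy, so both SSD partitions go in as independent (striped) cache vdevs (again, example device names):

    # add both SSD partitions as L2ARC cache devices
    zpool add tank cache /dev/disk/by-id/ssd0-part3 /dev/disk/by-id/ssd1-part3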

Once again, like the ZIL, the point of the L2ARC is to reduce the read workload -- and therefore the IOPS -- demanded of the HDDs. The ARC algorithm has also been shown to handle common page-cache workloads more efficiently than the LRU-style page cache the Linux kernel uses for other filesystems, and there's a "level 1" ARC in RAM, too. Its size is adjustable, so you can tune whether you want ARC to soak up lots of RAM or leave more of it for application data.
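On ZFS on Linux, that in-RAM ARC cap is the zfs_arc_max module parameter; a sketch of tuning it (the 64 GiB value is just an example):

    # persist a 64 GiB ARC cap across reboots (64 * 1024^3 bytes)
    echo "options zfs zfs_arc_max=68719476736" > /etc/modprobe.d/zfs.conf
    # or change it on a running system
    echo 68719476736 > /sys/module/zfs/parameters/zfs_arc_max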

For those who would just use HDDs with RAM buffers to insulate the storage from high IOPS, RAM has three major limitations: one, it's volatile, so it can't safely cache writes for very long; two, using RAM for filesystem caching competes with applications that want that RAM for their own purposes; and three, RAM is very expensive. It's also much easier to expand the storage in a server than to expand the RAM: if you've maxed out the RAM your motherboard supports, you have to buy an entirely new system to get more. With storage, you can usually just attach another drive or pair of drives, or, worst case, add another SATA, SAS, or NVMe controller in a spare PCIe slot. Long story short, a much smaller system can hold terabytes upon terabytes of HDD or even SSD storage, while servers with a couple of terabytes of RAM are enormous, come at a massive cost premium, and require special planning for power, rack space, and administration -- none of which is usually needed to add a few storage devices.

So, if you want the best of all worlds -- relatively inexpensive commodity hardware (a single-socket Xeon, or even desktop-grade hardware like Threadripper) with excellent performance for workloads like databases, game servers and anything else that demands a lot of small writes -- your most affordable path is tiered storage.

You would think Linux would have a stable, mature, well-tested, highly optimized filesystem in-house for handling tiered storage properly, but it doesn't. Not at all. None of the solutions built from Btrfs, XFS, Ext4, LVM2, MD, and family come close to the performance and feature set of ZFS for tiered storage. Not to mention that the closest feature competitor, Btrfs, is still such a boondoggle stability-wise that Red Hat is dropping it as a supported filesystem in RHEL. Red Hat also has no engineers left to work on it -- but if it were stable, they wouldn't need any.

I will continue to use ZFS on Linux (at my own peril? Fine.) until Linux offers an in-kernel alternative that matches its performance, featureset and maturity. LLNL has the right idea -- they knew what they were doing when they invested so many dollars into the development of ZoL. They needed a tool that didn't exist, so they built one.

And no, running a Solaris or BSD kernel probably isn't a viable alternative when almost all software is designed and tested for Linux, and the Linux compatibility layers on BSD and illumos are sketchy at best.

For Linux laptops and home gamers, XFS on a single HDD or SSD is fine, and even if you have a system with both HDDs and SSDs, tiered storage probably isn't of much benefit, because you don't have a workload that justifies it. A lot of people do, though, so they're either paying through the nose for way more SSD storage than they need, way more RAM than they need, way bigger servers than they need... or they're smart and they use ZoL.

Comment Just build standards (Score 2) 212

The best way to deal with this is to build bona fide standards -- APIs, wire protocols and the like -- for all the features systemd is trying to expand into. Do this for anything where the "de facto" way of doing it is poor for various reasons and a new design is needed to simplify and clean up the system. Introduce these userland standards at places like the Linux Plumbers Conference, and invite implementations other than systemd to pick them up. Specifically design the standards so individual pieces of the stack can be slotted in, rather than requiring by design that the whole thing be one monolithic project/daemon.

Once that's accomplished, application developers could target these APIs and wire protocols in a way that's easily swappable between systemd and other implementations, and users could run either systemd or an alternative implementation of any subset of the features.

This would require a slight change in direction from systemd, towards a more standards-based implementation, and it might slow its velocity a bit, but it would make it much easier for others to support new features -- things that older standards like POSIX and the X11-era specs couldn't cover -- without having to track an arbitrarily changing systemd design. It would then be easy for projects like Debian to commit to init system diversity as long as the alternatives implement these standards, and systemd would no longer be the only way to get these features. In turn, downstream projects like GNOME would target the standards instead of targeting systemd directly.

There's value to be had in making a standard so general that you basically HAVE to have multiple implementations of it before it gets standardized. That's how the web works -- with a few exceptions like proprietary codecs, by and large the web standards work pretty efficiently at introducing new JavaScript versions and suchlike, but you almost never see websites these days "designed only for Chrome" or something. You can use the latest Firefox, Safari, or Edge to get the same functionality. Sadly, Edge is going away -- I liked it in the sense that it increased browser engine diversity -- but at least there are still the big three.

Having multiple implementations of the standards all but prevents private extensions and arbitrary incompatible changes, because if a competing implementation holds significant market share, users will complain whenever your site demands something only one of them supports. We could do something analogous with the login, networking, logging, hardware management and init layers built on top of Linux, with systemd being kind of the Firefox (the grandfather) and something else being the Chrome (the newcomer).

Comment Speed is worthless if you have packet loss (Score 1) 253

As someone who had to fight (successfully) for over a year to get Verizon to fix packet loss on my gigabit fiber, and as someone a lot of people come to with their tech woes, the #1 problem people have when things are "slow" isn't speed; it's packet loss.

Packet loss can occur for any number of reasons: a faulty router, bad firmware, bad drivers or high DPC latency on the PC, poorly configured WiFi or Ethernet settings (receive window, interrupt moderation), using 2.4 GHz instead of 5 GHz WiFi, or damage to the physical lines (fiber, cable or DSL) causing intermittent loss spikes. A packet loss spike is basically the Internet equivalent of your lights flickering, and it's the #1 reason people drop out of teleconferences, have downloads time out, lag out of games, and so on.

In my experience, intermittent packet loss is so common that I'd say at least 1 in 2 home Internet connections have some. Depending on severity, it can be barely noticeable, only hit at specific times of day (or when it rains, as some people swear), or plague you constantly -- leaving you with a fraction of your advertised speed and unable to reliably do anything that needs real-time bidirectional communication, like VoIP or gaming.

ISPs need to become more invested in helping customers track down and eliminate packet loss. Sure, it's often the customer's own equipment, but coming out and verifying that the infrastructure -- everything the ISP is responsible for -- is NOT at fault is super important. If the infrastructure checks out and the problem persists, you can advise the customer to try Ethernet instead of WiFi, a different computer (to rule out a bad NIC), a different Ethernet cable or customer-owned router, and so on.
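A quick way for anyone to check on their own before (or after) calling the ISP -- the target hosts below are just examples, and mtr may need to be installed separately:

    # a few hundred pings; anything consistently above 0% loss is worth chasing down
    ping -c 200 -i 0.2 8.8.8.8 | tail -2
    # per-hop loss report helps separate your router/LAN from the ISP's network
    mtr -rwc 100 example.com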

On that note, router manufacturers need to invest more in product reliability and ruggedness. A $500 router shouldn't randomly decide to shit the bed and deliver 7 Mbps instead of gigabit. For any reason. Ever. But that's exactly what happened when I updated the firmware on my Netgear R9000.

The folks who are still getting 7 Mbps ADSL and similar awful speed service do need faster speeds. Everyone should have at least a 100 Mbit connection with 25 Mbps up in 2019, or it's going to be really hard to enjoy the full offerings of the Internet (aside from Facebook, email and 1080p Netflix, I mean). And having zero packet loss is critical for anyone who works from home even occasionally.

I agree that only technologists and those working from home can commonly use the bandwidth offered by, say, FiOS Gigabit -- but that's just a reason for folks to focus more on solving packet loss and latency issues to rule them out as a potential cause for their poor application experience. If you've ruled out packet loss using a tool like PingPlotter and you're still unhappy, then yes, go ahead and upgrade.

So I would say the biggest pain points of Internet service in the US today are:

1) Packet loss on the line, or due to infrastructure, routers, drivers, and/or WiFi being crappy.
2) Obsolete, outdated, or poorly tested client devices (laptops, desktops and their WiFi and Ethernet chipsets) and consumer routers / modem-router combos. These often produce inferior speeds due to bad default configuration or firmware bugs, and sometimes introduce major regressions with upgrades.
3) The fact that probably half of American households get 15 Mbps down or less. If your landline is slower than your smartphone, you know you've got problems. Every landline should be faster than LTE -- period, no ifs ands or buts.

The push to deliver gigabit to everyone is a significantly lower priority than the above, so I agree with the article on that, but I also don't think it's a reason to take away gigabit offerings from those who want them and know they need them to get real work done (programmers, people who work from home for a living, etc.).

Comment Re:Buggier too? (Score 1) 213

We thought it was "well-tested" before we found Heartbleed, too. Hours run in production don't necessarily correlate with correctness. That said, the performance numbers are just that: performance numbers. No one who knows anything about software is suggesting that everyone delete OpenSSL today and install Rustls as their production TLS library.

Of course, there's a catch-22 in there: OpenSSL wouldn't be as good as it is today if people didn't use it, contribute to its development, find bugs in it, and so on. But it would also be much less of a target for blackhats hunting vulnerabilities to exploit. On the other end, Rustls sees very little production use (probably just some toy sites with no valuable personal or banking data), so it probably draws little blackhat interest, but its smaller development and user community also means fewer people contributing. If it gains market share, a certain percentage of its users will become developers and contributors, and it'll get better. If no one uses it, we're stuck with only OpenSSL, and blackhats can invest 100% of their time finding and hoarding OpenSSL-specific vulns.

Having a monoculture of a given OS or library or daemon is generally a bad idea. It means if someone exploits a zero-day they can own the entire world in seconds. Diversity is a good thing; it also encourages competition. Look how much faster Firefox, Edge and Safari have become since Chrome came into being. OpenSSL could start working on performance to compete with Rustls.

All of this is developed in the open, but if your application deals with sensitive data, DON'T adopt Rustls until the library itself and its runtime and compiler are fully stabilized. No one is suggesting you do otherwise, only that it's a promising up-and-coming alternative. Software isn't built in a day.

Comment Re:Buggier too? (Score 1) 213

Rustls and Rust are developed in the open. Are you saying they should be kept private until they've been in production for years and had extensive testing? Then they wouldn't benefit from the contributions of the community that could make the library more useful, sooner. Of course, if you adopt a library that isn't even 1.0 yet as the SSL library for your banking application, whoever approved that to go into production should be fired. The performance numbers are not accompanied by a suggestion that everyone delete OpenSSL today and deploy Rustls into production immediately. Get a clue about how open source development works.

Comment Re:Still way better (Score 1) 378

Except you're not subscribing to individual *content*; you're subscribing to an entire company or studio's catalog. Why should I have to pay $10/month to watch the ONE show I want on HBO, while someone who loves every single HBO show and watches nothing else pays the same price? People who want one show per studio are getting shafted the worst here. For people who don't have $100/month to spend on streaming services, companies like Netflix, Apple, Amazon, and now HBO, NBC, CBS, AMC, etc. are forcing them to pick one or two companies for all their media-watching needs. The problem is that 90% of Fox content is total dreck, 90% of HBO content is total dreck, 90% of NBC content is total dreck, and so on.

The future we all wanted (well, those of us willing to pay for our content) would be something like paying $5 for an entire season of a specific show, unlocked on your Amazon or Apple account for all your devices, available in perpetuity, regardless of who produced or published it. I don't actually care who published it. My tastes in content don't align along corporate ownership, brand, or political lines. I'm a registered Democrat, but I love certain Fox shows and hate certain HBO and NBC shows. There's no way I'm going to subscribe to each and every one of these companies' home-grown streaming services.

It's only a matter of time before the streaming bubble bursts and these companies come running back to Netflix, Amazon and Apple TV because running a streaming service is more expensive than they thought and the market is smaller than they thought.
