Comment PHEVs: the proven solution (Score 1) 613

They're way more popular in Europe for some reason, but PHEVs need to make a BIG comeback in the US. You can have your cake and eat it too! Your pure EV range will be on the order of 40 to 150 miles; after that, you have a gas sipper series-parallel hybrid that will take you as far as you can go with gas stations.

They can build a crapton of battery packs that get 40 to 60 miles with the same number of cells as one 300+ mile pure EV. The bigger the pure EV's range, the more economic and practical sense it makes to split that pack among N plug-in hybrid vehicles.
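Back-of-the-envelope (my numbers, purely illustrative; assuming roughly 0.3 kWh per mile for either type of vehicle):

    300-mile EV pack:  300 mi * 0.3 kWh/mi ~= 90 kWh
    50-mile PHEV pack:  50 mi * 0.3 kWh/mi ~= 15 kWh
    90 kWh / 15 kWh = enough cells for ~6 PHEVs from one long-range EV pack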

Even if you drain your PHEV's battery pack every day during your commute and then go a long way in hybrid mode on the highway, you're still drawing a significant portion of your trip from electricity. Then, as the grid slowly transitions to a higher percentage of renewables (or you can force the issue yourself with home solar or a VAWT, a vertical-axis wind turbine), that portion of your trip that comes from the grid gets more and more renewable, so your effective gasoline MPG increases.
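To put a rough number on that effective MPG (my own illustrative math, not from any study): if your PHEV's hybrid mode gets 50 MPG and 60% of your miles come from the plug, you only burn gas for the remaining 40% of miles, so effectively 50 / 0.4 = 125 MPG. Push the electric share to 80% and it's 50 / 0.2 = 250 MPG.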

This isn't rocket science. Kill conventional vehicles; keep regular hybrids for cheap cars (like the Prius C); reduce full-EV production; and massively increase PHEV production. This is the way to maximize fuel savings and minimize carbon impact with current technology, and all it requires is for manufacturers to build more PHEVs and fewer EVs, and consumers to buy the PHEVs.

Comment Yet Valve ignores asset flips. (Score 1) 42

It's ironic to me that Valve, the company most notorious for rubber-stamping awful asset flips, is now pretending to have some kind of standards when it comes to the copyright, provenance, and quality of assets.

For those who haven't noticed, Valve has for years been allowing "developers" to fling thousands of "games" onto Steam that are little more than compiled tech demos or game engine samples. Many of these games contain zero, or next to zero, custom code, and provide a would-be customer about 30 seconds of entertainment at most. While many of the assets they use are legal because they're available for download under royalty-free licenses, I'm positive that copyright infringement is relatively commonplace within asset flips. Jim Sterling on YouTube is well-known for exposing these asset flips and showing just how awful they are.

Asset flip developers prey upon accidental mouse clicks, people with dementia or dyslexia, and people who misremembered a legitimate game's title, by polluting the Steam game market with very similar-sounding game titles and developer names. Many users have no idea how to ask for a refund, so if they accidentally buy an asset flip, they just eat the cost. Sometimes an asset flip will pretend to exit, but the program keeps running in the background, trying to push you past 2 hours of play time so Valve won't give you a refund.

True, AAA developers scam, scheme, and exploit their customers too, sometimes for far more money than the asset flippers make. But it is very arbitrary, and very stupid, that Valve chooses to die on this hill, one that might actually deny legitimate developers access to Steam, while still waving through an uncountable number of asset flips every day.

It sort of reminds me of police departments who have an extensive and highly effective marijuana enforcement program, while continuing to allow world-leading quantities of gun homicides to occur in their jurisdictions.

Comment Possible (Score 4, Insightful) 220

His own words admit the faults:

"I have worked for decades to make it possible to write better, safer, and more efficient C++"

POSSIBLE. It's also possible to write safe Intel x86_64 machine code, but who has time to learn how to do that except for someone trying to optimize the hell out of a hot inner loop in the OS's scheduler or something?

On the flip side, it's possible to write Rust code that produces a segfault, but you have to use a keyword that says "unsafe". Pointer dereferences in C++ that crash your program can be written without even a compiler warning or a hint that what you're doing may be incorrect.
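A minimal sketch of the difference (my own toy example, nothing from a real codebase):

    // Safe Rust refuses to compile a raw-pointer dereference at all:
    //     let p: *const i32 = std::ptr::null();
    //     let x = *p; // error[E0133]: dereference of raw pointer is unsafe
    // To even attempt the crash, you must announce your intent:
    fn deliberate_crash() -> i32 {
        let p: *const i32 = std::ptr::null();
        unsafe { *p } // undefined behavior; typically a segfault at runtime
    }

    fn main() {
        // The equivalent null dereference in C++ compiles without a warning.
        println!("{}", deliberate_crash());
    }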

And there's so much legacy code in C++ out there that any real program is going to end up using libraries with badly designed APIs that have "gotchas", undefined behavior, or call interleaving patterns that cause memory or thread safety problems.

If your whole stack is written in Rust, none of what I said is possible, assuming people don't just use the "unsafe" keyword for laughs, and that they seriously consider and test each location where it is used.
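Rust will even let you enforce that discipline mechanically; this is a real crate-level lint, though the crate below is a throwaway toy:

    #![forbid(unsafe_code)] // any `unsafe` block in this crate is now a hard compile error

    fn main() {
        // unsafe { } // uncommenting this line would fail the build
        println!("this crate is verifiably free of local unsafe code");
    }

It only covers your own crate, of course; dependencies can still use unsafe internally.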

Comment Re:Requirements (Score 1) 141

Also, let me go from merely implying this to stating it explicitly: I do believe that it's OK to have a plan to write your product in two different languages!

For example, you could start by writing the prototype in Lua, Python or Ruby. Let that churn around while the stakeholders figure out how things should look, and let prototypes jog their memory about what other requirements the system is missing, so you can get everything out on the table.

Then, when the requirements churn slows down a lot, and the performance and bug rate of the prototyping language start to become an issue, spin up a small team to take the current version of the prototype and start rewriting it in Rust. Meanwhile, have your prototype team continue to polish it, clarify requirements with management, and learn Rust on the side.

Once the project is scaffolded out on the Rust side and it vaguely resembles the prototype, bring over the entire prototype team to the Rust team, and have everyone abandon the original prototype and focus on the Rust version.

Trust me, this _works_. It's unnecessary if you have a strong requirements concept from the start, but many teams start coding these days with a one-sentence idea of the requirements, so you kind of need to start with a RAD language. It's just not usually a good idea to _stay_ with a RAD language, because the resulting programs tend to be slow and bug-prone.

Comment Requirements (Score 1) 141

A lot of uninformed takes on Rust here, so I'll chime in with my 2 cents. Rust is absolutely suitable for _any_ type of application, be it web, mobile, games, backend, OS kernels, drivers, embedded systems, you name it.

What Rust is NOT a good fit for is rapid prototyping, where a team is figuring out the requirements "on the fly". If you don't have a very precise idea of what you're building before you start, Rust is not a good fit for your project.

If you're coding up prototypes by the seat of your pants and presenting them twice a week (or twice a day) to a lead, manager or executive and they're giving you feedback that requires you to completely change direction and re-architect your approach, Rust is a bad idea, because you're going to end up spending hours polishing your prototype just to get it past the Rust compiler. Why waste all that time making the edge cases correct, when your manager is just going to throw it out and make you start over?

If you're past the experimentation phase and have already proven out your concept in another language that focuses on development speed over correctness, like Python or Ruby, and everyone agrees on the rough shape the final product will take, _then_ you should start writing it in Rust. Your final product will be free of many classes of bugs (except logic bugs, which the compiler can't help with) and perform about as fast as code can practically perform on modern systems without writing hand-tuned assembly.

I'm not saying that Rust requires you to develop using a waterfall methodology. I'm saying that startups in particular tend to have _next to no_ idea what they want, and requirements changes can rip up hundreds or thousands of man-hours of code and demand they be rewritten, with only a few words from an executive. That kind of churn does not lend itself to a language that demands such high standards of correctness.

Figure out what you want first, writing it in a language that will happily hand you type errors at runtime; your boss doesn't care about the edge cases at that stage anyway. Once you have a good sense that the product is definitely going in a specific direction, then you can start using Rust, developing API contracts between parts of the system, and so on.

Rust is worth it as an implementation language for a final product in almost any domain. As the saying goes, most Rust crates have a small 0.x version number, like 0.1 or 0.2, because they're _done_ at that point: finished, correct, feature-complete. There's nothing to "fix" about them that requires the continual release of new versions and API changes. And the reason such things can be "finished" at a 0.1 version is that Rust doesn't let you ship most types of bugs. Just getting the damn thing to compile already guarantees that the product is in a very good state.

Comment Re: New designs may be easier to maintain (Score 1) 176

Exactly the point I want to make to the "too expensive" whiners. What is the cost of rolling blackouts? To those who suffer heat stroke when their A/C goes off? What is the cost to those who die because their life-critical equipment loses power: the elderly, those recovering from injuries? Not all hospitals are designed to handle an indefinite grid outage. They have generators, but those chew through fuel at a crazy rate, and delivering more fuel is expensive and not guaranteed during severe grid outages.

The grid is so important that there is no amount of money too expensive to ensure it is available in times of crisis. Storm, heat wave, earthquake, drought, war: the grid has to work. In fact, it's MORE important that it works during a crisis than when times are otherwise good. The best time to have an outage is when you have no other problems besides the grid, but in practice it tends to go out due to compounding factors from other problems. Many people can die when the grid is out for long.

Comment Rust Prevents Procrastination (Score 2) 123

Python, Ruby, and other scripting languages let you procrastinate completely, because they have absolutely no requirements before you can ship code that attempts to run. So you can literally ship syntax errors in prod.

Compiled languages with dynamic types at least prevent you from producing a running binary with syntax errors or a few major language-level errors, so they catch a small percentage of bugs in the compiler before you can ship to prod. But any typing, object lifetime, or multithreading issues are yours to keep.

Compiled languages with static types additionally prevent you from shipping most (but not all) type mismatch related errors. If the language doesn't support automatic conversion between given types, and you try to pass in a type that isn't allowed, you can't produce a working program. So it forces you to understand your data types before you can ship. This defeats a lot of would-be procrastination, but still not all of it.
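A trivial illustration (my own toy example, in Rust since that's where this is headed):

    fn print_n_times(msg: &str, n: u32) {
        for _ in 0..n {
            println!("{msg}");
        }
    }

    fn main() {
        let label = "three";
        let count: u32 = 3;
        print_n_times(label, count); // fine
        // print_n_times(count, label); // error[E0308]: mismatched types;
        // no automatic conversion, so no binary until you fix it
    }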

Rust goes much further than your typical compiled language by forcing you to decide on object lifetimes. These are either left entirely up to the programmer in languages like C, vary depending on the API in C++, or are garbage collected in languages like Go, Java and C#, where you take a performance hit in exchange for not having to think about lifetimes while writing your code. Rust also has a very strict type system, and furthermore knows when you're committing a multithreading sin that will corrupt your data or crash your program.
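For instance (my own sketch), the classic unsynchronized shared counter simply does not compile; the compiler forces you to pick a synchronization strategy before you get a binary:

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // let mut total = 0;
        // thread::spawn(|| total += 1); // rejected: the closure may outlive
        // `total`, and unsynchronized shared mutation is a compile error.

        // The version the compiler accepts makes ownership and locking explicit:
        let total = Arc::new(Mutex::new(0));
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let total = Arc::clone(&total);
                thread::spawn(move || {
                    *total.lock().unwrap() += 1;
                })
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        println!("{}", *total.lock().unwrap()); // 4
    }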

Rust also makes you think about error handling and possible data inputs, and handle each possibility explicitly. At least, well-designed Rust APIs do; there are some, especially those that are just C bindings, that leave error handling up to the user (what do we do if the exit code is negative, zero, or positive?).
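Roughly what I mean, with a hypothetical function standing in for a child-process call:

    // A well-designed Rust API hands back a Result you cannot silently drop:
    fn run_tool() -> Result<i32, String> {
        Ok(0) // hypothetical: pretend we ran a child process and got its exit code
    }

    fn main() {
        match run_tool() {
            Ok(0) => println!("success"),
            Ok(code) if code > 0 => println!("tool reported failure: {code}"),
            Ok(code) => println!("negative exit code, killed by a signal? {code}"),
            Err(e) => println!("couldn't run the tool at all: {e}"),
        }
    }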

To make a correct program that doesn't crash, everything has to work, no matter what language you're in. But Rust just makes you think about and solve ALL of your problems up-front, before it's willing to produce a binary that executes. One of the only types of problems that can still exist in a Rust program that successfully compiles (without the unsafe keyword) is logic errors; that is, your program does something that doesn't meet your requirements. Example of a logic error: you are implementing an addition function, but accidentally use the subtraction operator. It's not doing what you expect, not because of any language problem, typing problem, etc., but because your requirement is for addition and you used subtraction. No programming language can possibly catch that.
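Concretely (a deliberately buggy toy):

    // Compiles cleanly, type-checks perfectly, and is still wrong:
    fn add(a: i32, b: i32) -> i32 {
        a - b // logic error: the requirement says addition
    }

    fn main() {
        assert_eq!(add(2, 2), 4); // panics at runtime; only a test catches this
    }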

I, like many others who have used Rust a bit, continue to be amazed that a Rust program that compiles is usually only 1 to 2 builds away from running correctly, performantly, and without any known bugs. This is less true if you have a lot of dependencies on hardware, device drivers, or C APIs from the operating system.

If most/all of your program is just native Rust code moving data around in memory with CPU instructions -- which is the crux of computing -- there is a very good chance that a compiling program does everything you want it to do, unless your code has logic errors.

Other common pitfalls of coding involve misunderstanding design contracts between systems (like when designing or consuming a network protocol or an on-disk format, or talking to hardware like a GPU). Because Rust takes a while to produce a working binary (both in terms of compile time and getting the code correct), I would say it makes sense to pull out I/O-heavy pieces of code and write them in a more forgiving language that can kind of "figure out what you mean" (even if you're committing programming sins) while you work out things like API design contracts and which Vulkan APIs you should be calling to use the GPU correctly.

Once you get it all figured out in Python or Ruby or JS or C#, you can then harden the performance and correctness of your program by porting all your code to Rust. By the time you're done, you will have thought through all the error handling, multithreading, and memory safety issues that could've plagued your code if you'd left it in a rapid scripting language, and you will benefit from performance rivaling or beating C.

Comment Re: People still use Windows? (Score 1) 207

WTF? Linux does *not* "emulate SCSI" for SATA or NVMe, by far the most popular storage interfaces for PCs. Just because the block devices for SATA drives have names starting with "sd" (a holdover from the SCSI disk driver) doesn't mean there's any emulation going on. This is total FUD.

The reality is that Linux reuses certain abstraction layers in the storage subsystem across multiple similar drive interfaces. But emulation implies some sort of wonky translation layer between protocols, which absolutely is *not* how it's being done.

If anything, Windows' storage subsystem is years behind Linux because it can't efficiently and effectively combine features such as tiered storage (on a bootable partition), encryption, data checksums, efficient disk compression, and software RAID. zfs, and to a certain extent btrfs, can do this with no "yeah, but" edge cases. ReFS was supposed to be Windows' answer to zfs, but performance problems and feature gaps have kept it from ever displacing NTFS, and it's never really been able to act as the file system on a boot drive either.

Comment Yeah right (Score 1) 180

So they're going to sell you top-tier silicon (say, a chip that's capable of stably running 40 or 64 cores) at the same low price as you'd pay for it today - say, $300 or $500 - and then you pay per month to unlock the rest? Yeah, nah. They'll use the chip shortage as cover for justifying jacking up the price of the low-end model to the high-end model prices anyway. So the base model will be the same price as last year's mid-grade or high-end model, then you ALSO pay monthly on top of that.

Seriously, anyone who thinks this is going to save you money is smoking something. They are ABSOLUTELY going to get more money out of you in the long run with this. And don't count on them offering a one-time payment as a feature upgrade, either! Ongoing charges are the new normal for these leeches.

Comment Re:Great wall of Apple? (Score 1) 77

No. Read up on what iCloud Private Relay does. First, it's opt-in by the user. Second, it uses double encryption, so that neither Apple nor their transit partners (basically, companies with a lot of servers in a lot of datacenters and a huge amount of network capacity) can see both _who you are_ and _what site you're visiting_. And if the destination site is using HTTPS, neither one can see the traffic payload either - only you and the site owner can see the traffic. It doesn't MITM any crypto protocols you have set up, it just passes the traffic in a way that prevents your ISP from knowing which sites you're visiting.
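A toy model of the split-knowledge idea (my own sketch of a generic two-hop relay; the struct names are mine, and this is not Apple's actual wire format):

    // What each hop can see. A real system uses layered encryption;
    // here the "sealed" inner layer just stands in for ciphertext.
    #[allow(dead_code)] // the payload field is illustrative only
    struct ToEgress {
        destination: String,  // readable only by the second hop
        tls_payload: Vec<u8>, // opaque to both hops if the site uses HTTPS
    }

    struct ToIngress {
        client_ip: String, // the first hop knows WHO you are...
        sealed: ToEgress,  // ...but the destination is ciphertext to it
    }

    fn main() {
        let msg = ToIngress {
            client_ip: "203.0.113.7".to_string(),
            sealed: ToEgress {
                destination: "example.com".to_string(),
                tls_payload: Vec::new(),
            },
        };
        // Hop 1 (ingress, Apple): knows the client, not the destination.
        println!("ingress sees: client={}, destination=<encrypted>", msg.client_ip);
        // Hop 2 (egress, a partner): knows the destination, but the only
        // source address it ever saw was the ingress's, not yours.
        println!("egress sees: client=<ingress only>, destination={}", msg.sealed.destination);
    }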

Comment Re:Gobbledygook (Score 2) 77

One of Apple's Private Relay relay node providers is Cloudflare. Another is Akamai. Between them, at least in the US, they have a presence in _most_ major and minor backbones. Follow the fiber -- pretty much everywhere the fiber is, these two companies have datacenters every few miles with edge servers and tier 1 peering with terabits of capacity.

Since Cloudflare and Akamai are by and large responsible for delivering much of the web content we consume in the first place, the modern Internet is already largely dependent upon the capacity planning and uptime of these CDNs. Usually only very small-time sites (small businesses, individual developers' homepages, etc.) use an architecture where your request goes out and directly tickles an application server. CDNs are used for anything resembling a high-traffic site.

If your Internet hop is proxied/VPNed to a nearby Private Relay node that is deeply embedded in a CDN network already (with excellent peering to all the tier 1s), arguably you might see _better_ speed through the Private Relay than through the ISP, depending on whether the ISP, or the Private Relay, has better peering and routing with the content server CDN.

I've definitely seen instances where VPNs have provided better speeds (throughput, not necessarily ping) than home ISPs in the past, so this is not a new concept.

Comment Re:Time to VPN (Score 1) 77

This is basically how it works. I'm not sure the mix-network idea is exactly how they implement it (no one but Apple knows), but the marketing does point that way. It's actually quite a decent feature... for the users. The reason carriers wouldn't like it is blindingly obvious to anyone following the recent trend of ISPs (especially mobile carriers) thinking they can do whatever they want with your traffic, including throttling it, inspecting it, data mining it, etc.

Comment Re:Nothing to hide ? (Score 1) 77

Private Relay does more than DNS encryption. I've heard it said that it's "not a VPN" several times, but when you enable iCloud Private Relay and visit whatismyip.com, the public IP that comes back is not your home IP, it's an Apple IP.

Seems pretty much like a VPN to me. The difference is really in implementation details: Apple layers their encryption so that neither they nor the "exit node" (to steal a term from Tor, but please note, Private Relay DOES NOT actually use the Tor network) knows both who you are and where you're going / what traffic you're trying to access.

If the backend is implemented as securely as they say, it's actually quite a nice system for privacy. Which, of course, drives ISPs, TLAs and advertisers absolutely insane, because their business model or data collection process is now broken.

Comment Re:FFFFFFFUCCCCK! (Score 1) 100

I love how, in one breath, you admit you have to carefully buy a specific phone in the Android ecosystem to avoid getting unremovable crapware, and in the next you call Apple garbage. Seems like you have accurately identified the actual garbage platform all by yourself. I switched in 2014, after being fed up with Android's endless bugs and glitches, and it's the best technology decision I've ever made, aside from switching my desktop from Windows to Linux.
