Comment Re:Jensen's not gonna like this (Score 1) 27

But those use cases are also less valuable or they would have already been addressed.

Correct, but just as small businesses collectively contribute more value to most countries than large ones do, especially in developed Western democracies, the slightly less valuable smaller use cases usually add up to far more total value than the original bigger ones.

Comment Re:Nuclear power plants make excellent targets (Score 1) 62

Also because

a) nuclear fallout from Ukraine tends to drift east, because that's the direction of the prevailing winds, and
b) back when there were still some guts around, it was made clear to Russia that any fallout reaching Western Europe would count as grounds for triggering NATO Article 5.

Comment Re:"Flaws"? Seriously? (Score 1) 62

If you believe ANY software can be made 100% secure

This is your fault: you are using a fuzzy definition of "secure." Once "secure" is well enough defined, you can make the software 100% secure; it's just a matter of money. For example, you can be 100% sure that your code has no SQL injection bugs. You can be 100% sure that your code has no memory errors of certain classes (by using Rust). Rust is definitely not my preferred solution, but it IS a solution if you want to go that way.

This is just hiding the difficulty. The real definition of "secure" is inherently fuzzy, because it means doing what people expect to happen and not doing what they don't expect. In your case the insecurity lies in the fact that you will have given some formal definition of what you expect the system to do, but other people won't understand that definition and will expect, and need, something to be done that the system definition misses.
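For what it's worth, the SQL injection claim quoted above can be made concrete. Here is a minimal sketch of my own using Python's standard sqlite3 module (the table, values, and injection string are made up for illustration): the concatenated query is injectable, while the parameterized one is not, because the driver never treats the bound value as SQL text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "' OR '1'='1"  # classic injection attempt

# Injectable: the input becomes part of the SQL text itself, so the WHERE
# clause collapses to "always true" and every row leaks.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Parameterized: the input is passed as a bound value, never parsed as SQL,
# so no user is literally named "' OR '1'='1" and nothing comes back.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(leaked)  # [('hunter2',)]
print(safe)    # []
```

If every query in a codebase goes through bound parameters, that whole class of bug is gone, which is the sense in which the quoted poster can claim to rule it out 100%.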

Comment Re:Jensen's not gonna like this (Score 4, Insightful) 27

If you increase the efficiency with which a resource is used, you also increase the number of use cases that can be addressed, and that can easily end up with more of the resource being consumed overall. An Nvidia GPU is now 5.5 times as valuable as it would have been before. If Nvidia has any sense, they will try to build an open source scheduler like this that anyone can drop into any cloud.

Comment Re:In other words (Score 1) 53

PPAs have different problems, though. Instead of having a single "Ubuntu" distro, every PPA you add is a new distro maintainer whom you have to watch. Even if they are unlikely (but not guaranteed not) to go totally bad and install malware, they definitely do change policies, lose interest, or decide there's a better alternative PPA and give up on theirs while failing to inform their user base, etc.

It should have been something agreed with Debian that worked with both base distros: a kind of supervised playpen where the distros could ignore what was happening as experimental most of the time, but could take over a PPA that had been abandoned or mismanaged and clean it up when needed. I suppose that's still possible through an Ubuntu software update, but it would be very messy.

Comment Re:Just say no to snap (Score 2) 53

The great thing about Linux is that you didn't need fifty copies of every DLL

The importance of this is underestimated even by its proponents. There might be two copies of one set of DLLs, because you are a developer and need to work on a special version in order to build some specific application, but once you have three or more you are guaranteeing that there will be security problems in future.

The sad reality is that you actually do need fifty copies of every lib.so. Distros make their own unique changes to the binaries, and then give the modified versions the same names as the originals. Even worse, most of the dynamic loaders used by the major distros don't support looking up a hash of the needed lib.so instead of a hardcoded file name (despite the ELF format supporting it).

If you did that, then you would allow software to block security upgrades of particular libraries. It's a disaster waiting to happen, since every piece of software will start demanding its own version, just as happened on Windows at one point (and still does?). The distro, not the software author, is the software maintainer and the correct place for security decisions to be made.

Use a unique hash to identify the needed lib.so based on its content instead of a filename; then it doesn't matter where on the filesystem it's kept or what it is called. That change alone would have removed the need for AppImages, Flatpaks, Snaps, etc. But they refuse to make the needed changes.

That way lies.... NixOS and probably even Flakes. It's not quite madness, but it's definitely different. Anything else, especially providing dynamically linked binaries but locking down the libraries they link to, is just the kind of partial compromise that doesn't work.
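As a thought experiment, here is what the content-hash idea quoted above could look like. This is purely a sketch of my own: no real dynamic loader resolves libraries this way, and the store path and function names below are invented for illustration.

```python
import hashlib
from pathlib import Path

# Hypothetical content-addressed library store; the path is invented.
STORE = Path("/var/lib/libstore")

def content_hash(lib: Path) -> str:
    """Identity of a lib.so derived from its bytes, not its filename."""
    return hashlib.sha256(lib.read_bytes()).hexdigest()

def publish(lib: Path) -> str:
    """Copy a library into the store under its own hash and return the hash."""
    digest = content_hash(lib)
    STORE.mkdir(parents=True, exist_ok=True)
    (STORE / digest).write_bytes(lib.read_bytes())
    return digest

def resolve(digest: str) -> Path:
    """What a hash-aware loader would do: find the library by content, not name."""
    candidate = STORE / digest
    if not candidate.exists():
        raise FileNotFoundError(f"no library with hash {digest} in {STORE}")
    return candidate

# Usage sketch: the application records the hash of the library it was built
# against, so renaming, moving, or distro-patching the file no longer matters,
# because a patched library simply has a different identity.
# digest = publish(Path("/usr/lib/x86_64-linux-gnu/libexample.so.1"))
# print(resolve(digest))
```

Note the trade-off already raised in this thread: with content hashes, a security fix to a library produces a new hash, so something still has to re-pin or rebuild every consumer, which is roughly the territory NixOS and Flakes live in.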

The LSB was even worse, to the point that most people today only use it as a generic command to find out what distro is running (i.e. `lsb_release -sd`). It did result in the alien command, which converts packages from one format to another (e.g. deb->rpm, rpm->deb) and is great when it works, but many packages won't work properly even when they convert successfully, due to broken lib.so dependencies (see above) or distro-specific paths (i.e. deviations from the FHS).

Again, NixOS has almost completely abandoned the FHS, although it does have special mechanisms to provide some backwards compatibility. Still, it's pretty clear the FHS did advance something: this approach had to be attempted in order to prove that partial forms and compromises don't work. That's valuable, but now we need to accept the result.

As for backend standardization, I do agree with you. Developers need a consistent platform to target, which is why, until Valve promoted Ubuntu with Steam for Linux and Arch with SteamOS, most game developers steered clear of Linux entirely. Ironically, Linux may get standardization around Ubuntu's or Arch's way of doing things because of Steam. (Ubuntu is probably more likely, since the Steam on Linux client released before SteamOS and the Steam Deck.) Developers will want to target the largest install base, with the most potential users. In the server world that won't mean much initially, but as the years go on and more game devs target Linux through Steam, it will start impacting the servers too. (They won't want to develop for multiple distros on the client side, let alone the multiplayer server side.) Not to mention that those who grew up with SteamOS or Ubuntu will expect things on the server side to work the same way when they get hired to develop for or manage servers.

As long as Ubuntu means Snap, and Snap means proprietary control of software distribution, this way leads only to ossification and will never get acceptance from the people who have the real commitment to FOSS development. We have seen this repeatedly: MySQL turning into MariaDB, OpenOffice turning into LibreOffice. The base has to be a fully open, solid system with the full maximum performance available to all of the tinkerers. Just as Red Hat is dead, the proprietary server behind Snap means that Ubuntu is also dead.

Red Hat is a dead duck as far as I'm concerned. [...]

Preach brother.

Any big-name software project needs to just say no to bandaid "solutions" like snap, flatpak, and appimage, which apply just enough bandage to the situation to keep away the incentive to fix the problem properly.

I don't think this is 100% right. Flatpak both solves a real problem and is fully open source, so it might be possible to fix. However, what is clear is that it should be a system for one or two special packages rather than for a whole array of software.

The way this needs to be solved is with a real rethink of the whole way that software is packaged. NixOS isn't beginner-friendly, possibly in much the same way that Debian used not to be. Maybe someone could build an easy distro on top of it, as Ubuntu was built on top of Debian?

If they do anything with those, they should standardize on AppImages. At least those behave the same way as Windows portable apps, which people have been using for decades and therefore know what to expect from. The others, however, I agree with you on: those can go away. They only make the problem worse and provide too little benefit.

AppImages don't solve the key problems: software updating, the possibility of isolation or sandboxing, and updating libraries separately from applications. They cannot fix this area.

Comment Re:Batteries are too big (Score 1) 265

Yes, that's a broken battery and a flaw in that particular car, caused by what you might call a design fault or a design decision. It's what made me, at one point, choose not to buy one. However, the claim wasn't about the battery-damaged Leafs but about the starting state:

Doesn't the Nissan Leaf get about 60 miles?

So for the particular cars you and bill_mcgonigle were talking about, which are now down to a 45-mile range, it is much worse: they have less than 30% of their original range and should essentially be treated as broken. For electric cars in general it isn't, because a normal, reasonable electric car should have a 150-mile range at minimum, which is fine for commuting.
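The arithmetic behind "less than 30%", spelled out as my own back-of-envelope check, using the roughly 160-mile real-world figure quoted in the next comment below:

```python
# Back-of-envelope check of the "less than 30%" claim.
healthy_range_miles = 160   # real-world figure for a 39 kWh Leaf, quoted below
degraded_range_miles = 45   # the used car being discussed

remaining_fraction = degraded_range_miles / healthy_range_miles
print(f"{remaining_fraction:.0%}")  # 28%, i.e. under 30% of the original range
```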

Comment Re:Batteries are too big (Score 1) 265

Doesn't the Nissan Leaf get about 60 miles?

I was looking at a cheap used one that was down to 45 miles, I think.

30 miles gets me to town and back so it would only last a few years at that rate.

160 ??

The entry-level 39kWh Nissan Leaf’s claimed range is 168 miles from a charge, and in our own tests, we’ve consistently achieved more than 160, which is impressively close to the official number.

from Driving Electric

To have a range of only 45 miles the battery would have to be seriously broken.

Comment Re:To note: This is individual-specific. (Score 1) 112


I do know that PHEVs purchased for commercial motor pools tend to see low plug-in rates because the drivers don't care. It isn't as if the drivers are saving money, and enforcement can be cumbersome.

And some people bought PHEVs back when they still qualified for carpool access, even if they had no ability to charge. I have a neighbor with a Fusion PHEV that did exactly this.

So a correct solution for this would be to a) only give tax benefits if the driver of the PHEV has free home charging, and b) make it so that drivers get the electricity for free but have to pay for gas themselves. In that case there would be a huge incentive to run on electricity wherever possible.

Comment Re:get over yourself its called android no google (Score 1) 67

so how about funding

GrapheneOS doesn't have the same aim. They happily run on Pixel hardware with binary blobs. In principle this project could still be good for them, though, because if the binary blobs can be eliminated it would become possible to provide hardware support after Google has given up, so GrapheneOS could extend the lifetime of devices considerably.

Comment Re:How much can they fork? (Score 1) 67

Those projects almost all address only the physical hardware. This project is about replacing some binary blobs with free software, and it seems more interested in Android-based systems like LineageOS than in Linux-based systems like postmarketOS. LineageOS already works, so just getting it running on any modern hardware without any binary blobs would be a great achievement. postmarketOS could then copy that setup and run on the same hardware, which would also be great.

Comment Re:Relationship to Replicant and other projects ? (Score 3, Informative) 67

The announcement on the FSF web site explicitly mentions Replicant and on the Replicant web site they still seem to be alive, but they are targeting different / older hardware. The announcement also mentions that the project sponsor wants to get rid of binary blobs in his LineageOS install.

My understanding: Replicant is a full AOSP distribution with the binary blobs removed. Librephone is a project trying to eliminate binary blobs in the underlying Linux kernel. When Librephone succeeds it will be easier to move Replicant to more devices.

Comment Re:Interesting Idea (Score 5, Interesting) 67

This is 100% going to require dedicated / custom hardware no matter what. Most phones have multiple important devices that need binary blobs, most often including the cellular radio, so a normal phone running this system is going to be a brick.

There have, through history, been a bunch of projects for better / more free / more user-owned phones (Fairphone, Purism's Librem, the PinePhone), and several of them delivered hardware. So far that has always meant noticeably more expensive for worse hardware, but it shows it can and will be done. Now that Google looks likely to close off the hardware support for Pixel phones in AOSP, the GrapheneOS project is talking about doing a phone with an OEM, so maybe we will end up with something much more competitive soon.

This kind of project can provide the focus needed to get one of those phones you more or less own actually over the line, to the state where you really own it and have all the drivers.
