
Comment Re:AM radio is nothing in terms of volts. (Score 1) 100

AM Radio is absolutely nothing in terms of volts, because you can literally run one off of an AA battery for hours.

This has nothing to do with saving money. When they build these things en masse and buy them from a supplier that's already making them, they're probably already cheaper than anything the automaker could build itself. And it wouldn't add even a dollar or so to the cost of the car.

I wouldn't say the cost is quite that low. Modern head units are all digital, so you have to take into account the cost of ADC circuitry and of integrating it with the software stack. Between that and antenna considerations, it does carry an additional (though negligible) engineering burden.

Either the FM radio in the car is analog or it is doing some sort of software-defined radio thing, but either way, analog-to-digital conversion is still effectively occurring in the radio circuitry already, which means that could be done for AM just as easily. So that part of the cost should be quite close to zero beyond the cost of the AM receiver circuit itself (and for SDR, it would probably be exactly zero).
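To illustrate why the marginal cost approaches zero in the SDR case: once the signal has been digitized, AM demodulation is just a few lines of DSP. Here's a minimal sketch, assuming a generic complex I/Q sample stream centered on the AM carrier (all names and parameters here are hypothetical):

```python
import numpy as np

def demodulate_am(iq_samples: np.ndarray, audio_decim: int) -> np.ndarray:
    """Envelope-detect an AM broadcast from complex baseband I/Q samples."""
    envelope = np.abs(iq_samples)         # magnitude of I/Q = the AM envelope
    envelope -= envelope.mean()           # strip the DC offset from the carrier
    audio = envelope[::audio_decim]       # crude decimation down to audio rate
                                          # (a real tuner would low-pass first)
    return audio / np.max(np.abs(audio))  # normalize to [-1, 1]
```

An FM-capable SDR front end already has everything this needs, so AM support is essentially a firmware feature.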

The real question is whether an AM tuner can usefully function with that much environmental noise, or whether you need to come up with something significantly more complex in terms of the antenna(s), demodulator(s), etc. It's the frequency and the modulation that make it problematic, not the analog nature of the signal, because notwithstanding the existence of HD Radio, a large percentage of FM is analog, too.

Comment Re: AM radio is nothing in terms of volts. (Score 2) 100

Yeah, saying it would affect EV range is hilarious, straight-up lying.

No, they're not.

Controlling RF noise is hard. Especially when you're switching enough current to accelerate your four-ton electric tank from 0-60 in 4-something seconds while you're saving the planet or whatever.

You're assuming that they need to actually control the noise. Strictly speaking, I'm not sure that's a safe assumption. The only absolute requirement is that the radio must reject enough of the noise to get a usable signal. Whether that happens by the car having more shielding, by putting the antenna farther away from the source of noise, or through the radio being more capable of rejecting outside interference is an implementation decision.

Some other approaches might include:

  • Phase cancellation (a rough sketch follows this list). You know exactly what signal is going into those motors, and you can presumably compute the harmonics that get added as it goes through the windings or whatever. Invert the phase, cut the amplitude down a lot, and sum with the (probably slightly delayed) analog signal before demodulating.
  • Using multiple antennas with beamforming, creating constructive interference in signals coming from the direction of the desired AM station (with the direction adjusted continuously as the car turns) and destructive interference in signals coming from the directions of various noise sources.
  • Putting the antenna on top of the car so that the entire body of the car is between it and the noise.
  • All of the above using some insane LLM-based software-defined radio monstrosity, where the actual demodulation process itself is taking data from multiple inputs, using some sort of machine learning to recognize the shape of noise patterns and subtract it from the inputs, etc.
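For the first option, here's a minimal sketch of what the cancellation step might look like in software, assuming you already have a predicted noise waveform (every name here is hypothetical; a real canceller would estimate the delay and scale adaptively, e.g. with an LMS filter):

```python
import numpy as np

def cancel_predicted_noise(received: np.ndarray,
                           predicted_noise: np.ndarray,
                           delay_samples: int,
                           scale: float) -> np.ndarray:
    """Subtract a delayed, scaled copy of the predicted motor noise from the
    antenna signal before demodulation (i.e., sum with the inverted phase)."""
    aligned = np.roll(predicted_noise, delay_samples)  # crude delay alignment
    return received - scale * aligned                  # phase-inverted sum
```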

How effective any of those approaches would be is anybody's guess. I'm thinking the last one is probably overkill (and probably wouldn't work anyway), but beyond that...

Comment Re:Percent Revenue licenses are abhorrent (Score 1) 69

You misunderstand me. Most of the companies that use or make money off of open source don't employ people to actually work on that code at all. They simply use it and integrate it. For them, the fork is viable if it does what they need it to do.

If the fork does not see any development, then it is pointless. And if the original gets updated while the fork does not, then the fork is non-viable as well.

Nonsense. There's plenty of software out there that is basically mature. Why would you think that this software suddenly becomes non-viable merely because it isn't getting changed?

Suppose, for example, that NGINX or Apache decided to switch to this sort of license. Both are mature products that are entirely usable right now. And apart from security fixes (which tend to be straightforward, and could very easily be replicated in a fork based on the original license, and almost certainly would be replicated in every fork if forks existed), there's no requirement that development continue to be ongoing for it to be useful.

The only way a fork will be viable is if it's being developed, and who will be developing on it for free when they could be paid working on the original?

Employees working for the companies that own the 4.6 million servers currently running NGINX or the 3.2 million servers running Apache. After all, arguably most of them are part of a commercial product, in a manner of speaking.

Not sure what any of that has to do with how much 1% of their revenue would be.

1% when the total OSS budget is 1% of revenue is certainly a lot (a 100% increase). But 1% when the budget is 50% of revenue is much less (a 2% increase), and could be within the normal budget variation year-over-year.

But the total OSS budget for software licensed under this license always starts at zero. That's the whole point. This organization doesn't exist yet, and no software is licensed under its terms. Therefore every company is starting at zero. Apart from companies that already *produce* open source software *and* have the right to relicense it under this sort of license (which is probably a very, very small number of companies), it really doesn't matter if the company is paying people to work on open source software already, because none of those expenses will go away under this regime. The only thing that matters is that the second they add one piece of software under this license, they give up 1% of their revenue.

None less, realistically. Companies employ people to improve the things that they want improved. Relying on outside parties to do the improvements means waiting for it to be somebody else's highest priority.

True to some extent. But the big reason it is like this in the first place is that the developers have limited time to spend on what is generally a pet project. Big or niche features will indeed still need direct funding by corporations, but bug fixes and general maintenance could be done by the developers instead of the employees.

While that's true to some extent, big features that corporation A wants still aren't going to be a priority for corporation B unless they have a contract with corporation B that says that they will provide that feature, and even then, only to the extent that the contract requires. In the grand scheme of things, it will still always be way faster to just hire people to do the work no matter how big the existing team working on the tool/library is.

In fact, I'd argue that the amount of time it takes to get interesting features added actually gets longer the more people are working on it, so if corporation A wants a feature added, it would likely take longer to get corporation B to add it than it would if that product were maintained by a solo developer. And that's triply true if corporation A could just hire that solo developer, as is often done.

The fastest way for a company to get things done in an open source project is to hire the lead developer. The second fastest way is to hire a team to do work on the project and contribute it back upstream. The third fastest way is to hire a team to do work on the project and fork it because the upstream maintainers are intransigent or unresponsive. The fourth fastest way is to hire a team of outside contractors. The fifth fastest way is to enter into a contract with the corporation that owns the project. And the absolute slowest is to just ask the corporation that owns the project.

the people licensing that code did so under permissive licenses with the understanding that this would happen

There is a difference between understanding/accepting and liking. Currently their choices are limited: free/OSS or proprietary/custom license. And to be paid, they would have to deal with each and every licensee separately, which is time-consuming and unappealing.

There's actually a third choice: dual licensing under the user's choice of a paid commercial license or GPL/AGPL, with copyright assignment requirements for all external submissions. People/companies that can live with the terms of the GPL or AGPL get it for free, and those who don't can pay a licensing fee for a less restrictive license that is negotiated on a case-by-case basis with the company that owns all of the rights.

The proof of their interest in a Post-Open like system is the popularity of Patreon.

From what I've seen, Patreon is more a way to turn software into a continuous income stream, mostly paid for by individual users who aren't making money with it. It's kind of the exact opposite of this.

The problem is that the incremental cost of going from zero covered software to a single piece of covered software is 1% of your revenue

It depends a lot on what that "single piece of software" is. A 1% increase because of a fast memcpy library is certainly too much to ask. A 1% increase just for the use of the Linux kernel is certainly a steal.

Only if there are no other usable open source kernels out there. If the Linux kernel switched to this license, you'd see every commercial user of Linux suddenly switch to FreeBSD, NetBSD, or OpenBSD, and depending on how loosely "included as part of a paid product" is interpreted, that could even include all the companies that build single-board computers like Raspberry Pi, at which point Linux development would basically dry up overnight.

I mean, if it is just 1% of revenue from the products that include that software, *maybe* some existing companies might keep doing it because of momentum initially, but any new companies entering the picture will have to decide whether the difference between Linux and *BSD is worth 1% of their revenue, and most will likely conclude that it isn't. And those new companies will undercut the companies that are still using Linux, which will likely force them to switch to a cheaper kernel just to compete.

"If I add this one piece of software that would cost me X engineer hours to rewrite, it will cost me 1% of my revenue,"

That's quite a bit of an oversimplification, and hopefully companies won't fall into that trap. It's basically the same calculus that leads companies into NIH syndrome, which is always way more expensive than one thinks.

Oh, but that is exactly how big companies see software development, or at least that's how every big company I've ever worked at has seen it. And you're right that it's not a good idea, but it is still the way MBAs think. I've never worked at a non-startup company that didn't behave that way.

IMO, it is likely to be much easier to convince companies to pay a specific fixed amount per licensed inclusion than a flat fee per unit. A lot of those products won't derive 1% of their value from licensed inclusions, so their manufacturers will actively seek out cheaper or free alternatives and won't end up paying anything into this fund. If, instead, the fee were per licensed inclusion, each individual included work (library, etc.) could set a reasonable price for that specific work; companies would then see a reasonable price per product, and most of them would pay money into the fund. Thus, the free market would find some sort of equilibrium pretty quickly, in a way that it just can't with a simple flat rate per product sold.

And even for companies where the cost just happens to end up being the same 1%, they'll still likely favor the per-product licensing approach, because it avoids unnecessary lock-in risk. Consider that from their perspective, a year from now, the fees could double, triple, or quadruple. If they're paying on a per-tool/library basis, they can then consider each licensed product individually and decide whether they can cut some of them out to save money. But if they keep paying the exact same fee until the licensed work count reaches zero, it becomes an all-or-nothing deal, which is way riskier from a future budgeting perspective.

So whether you're looking at it from the perspective of human nature, corporate behavior, or market dynamics, every analysis favors a fixed cost per included work per sold product or service, rather than a flat fixed cost per sold product or service.
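To make the two fee structures concrete, here's a back-of-the-envelope comparison; every figure below is made up purely for illustration:

```python
# All figures hypothetical, purely for illustration.
unit_price = 500.0            # price of one product, in dollars
units_sold = 1_000_000
included_works = 3            # covered libraries shipped in the product
fee_per_work_per_unit = 0.25  # hypothetical per-inclusion fee, in dollars

flat_model = 0.01 * unit_price * units_sold                    # 1% of revenue
per_inclusion_model = fee_per_work_per_unit * included_works * units_sold

print(f"flat 1%-of-revenue fee: ${flat_model:,.0f}")           # $5,000,000
print(f"per-inclusion fee:      ${per_inclusion_model:,.0f}")  # $750,000
```

Under the per-inclusion model, the manufacturer can also see exactly how much each covered work costs it and drop the ones that aren't worth the price, which is the market mechanism the flat rate lacks.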

Comment Re:Percent Revenue licenses are abhorrent (Score 1) 69

You're assuming that the companies are actually paying employees to work on it at all. Most of the time, that likely isn't the case.

If the company is not paying the employee to work on that code, then the employee is a hobbyist with regard to that software, and the company can't force said employee to work on the fork instead of the original without violating some laws. And if the re-licensing happened in the first place, it probably means that the company doesn't have the influence necessary to make the fork more viable than the original either.

You misunderstand me. Most of the companies that use or make money off of open source don't employ people to actually work on that code at all. They simply use it and integrate it. For them, the fork is viable if it does what they need it to do.

To use an extreme example, Google brought in $305.63 Billion in revenue in 2023. 1% of that would be over $3 billion dollars.

Yes, and that doesn't mean anything without knowing how much of that $305.63 billion already goes toward paying for OSS software in some form or another (via funding/sponsoring, or via paying employees to work on OSS).

Not sure what any of that has to do with how much 1% of their revenue would be.

Then there is the question of how much less they would need to pay once Post-Open software can fund itself and have its own full-time developers instead of relying on Google and others' employees.

None less, realistically. Companies employ people to improve the things that they want improved. Relying on outside parties to do the improvements means waiting for it to be somebody else's highest priority.

Then there is the question of how much Post-Open would pay Google for working on Post-Open software.

That's a fair point, but only if its software were licensed under that license, which is likely impossible. WebKit and Chromium (whose engine is derived from WebKit) are both built on WebCore, and short of replacing all of that code outright, it can't be relicensed under an LGPLv2-incompatible license. Similar licensing hurdles exist for Android, Clang/LLVM, etc., though perhaps not to the same extent. Either way, relicensing most of their contributions would likely be infeasible. And that's true for a lot of other companies that contribute code to the open source world.

And that goes double for companies that derive very little of their revenue from software, for whom 1% of their revenue would be completely outrageous.

Indeed. But it is as outrageous for companies to make tons of money without giving back to the community, which is happening **today**.

In theory, sure. In practice, the people licensing that code did so under permissive licenses with the understanding that this would happen. If they had wanted their code to never get used by anyone, they could have licensed it under GPLv3. :-D

Moreover, you're assuming that 1% is a final deal, which it isn't yet; it's only an early proposal. Maybe they will offer a tier-based system where the percentage decreases as revenue grows (like Unreal/Unity licensing) and/or varies by product category, so that the percentage is roughly based on how critical the software is.

The problem is not whether it is a reasonable number or not for a company earning lots of money off of open source software. The problem is that the incremental cost of going from zero covered software to a single piece of covered software is 1% of your revenue, so no one will even take that first step. And that's a problem no matter what number you choose, whether it is 1% or .01%. And that is true regardless of product categories or any other arbitrary division like that.

Anything that isn't based on how much licensed software is used is a non-starter right off the bat, IMO. The only companies that would be willing to sign on to such a license are the ones who believe that they're getting off cheap by doing so, which is to say companies that plan to use an incredible amount of covered software. Everybody else will take one look at it and say, "If I add this one piece of software that would cost me X engineer hours to rewrite, it will cost me 1% of my revenue," and immediately walk away from the table.

Comment Re:Simple Truth (Score 1) 189

If people want a reference for gold's scarcity, the total amount mined by humans is approximately 5 Olympic swimming pools' worth.

That's actually very close to the total amount of gold that has been discovered (244,000 metric tons), assuming my math is correct. But only about 187,000 metric tons of that has actually been mined, so it is probably more like 3 to 4 Olympic swimming pools.

But that's only half of the story. Every year, the world mines O(3,000) metric tons of gold. So in 60 to 80 years, that number will have doubled, assuming the current rate of mining holds. Unfortunately, the currently known reserves will run out in about 20 to 30 years at the current rate of mining, if my back-of-the-envelope math is correct.
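For anyone who wants to check the back-of-the-envelope math, here it is spelled out (the density and pool volume are standard figures; the tonnages are the approximate ones cited above):

```python
GOLD_DENSITY = 19.3         # metric tons per cubic meter
OLYMPIC_POOL = 50 * 25 * 2  # 2,500 m^3 (minimum-depth Olympic pool)

mined = 187_000             # metric tons mined to date
discovered = 244_000        # metric tons discovered
annual_production = 3_000   # rough metric tons mined per year

print(mined / GOLD_DENSITY / OLYMPIC_POOL)       # ~3.9 pools mined
print(discovered / GOLD_DENSITY / OLYMPIC_POOL)  # ~5.1 pools discovered
print(mined / annual_production)                 # ~62 years to double
```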

And that is why gold is valuable. We're almost out of it. And when we run out, the only way to make more involves bombarding lighter elements with neutrons.

Comment Re:Percent Revenue licenses are abhorrent (Score 1) 69

anybody who wants to can fork it before the license change and treat that as damage and route around it.

Who is this "anybody"? Who would be working on the fork? Why would some hobbyist want to work on the fork for free when they could work on the original and be paid for it?

This "anybody" is any commercial entity that wants to use the code. Unless the new changes are just absolutely phenomenally amazing, they're way more likely to go back to a version that doesn't require them to pay for it, even if that means spending a little bit of effort to bring it up to feature parity in some critical way, because then they can pay to make those changes once and keep using it free forever.

And why would companies prefer to hire and pay their employees to work on the fork instead of paying the original authors through the license?

You're assuming that the companies are actually paying employees to work on it at all. Most of the time, that likely isn't the case.

But even if they are, why do you think those companies would prefer to rent software when they can buy it outright by redoing whatever work went in after the fork? How much code would have to be covered by this sort of license for it to be worth 1% of any real-world company's income?

To use an extreme example, Google brought in $305.63 Billion in revenue in 2023. 1% of that would be over $3 billion dollars. Care to guess how many full-time engineers you could hire to rewrite all of that code for less than $3 billion dollars? Pessimistically, about 10,000 U.S. software engineers. Optimistically, about 226,000 software engineers in Bangalore. Either way, you can rewrite a metric f**kton of code for less than $3 billion annually.
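The arithmetic behind those engineer counts, with assumed fully-loaded costs (the per-engineer figures are my rough assumptions, not quoted rates):

```python
revenue = 305.63e9       # Google's 2023 revenue, in dollars
fee = 0.01 * revenue     # the 1% license fee: ~$3.06 billion

cost_us = 300_000        # assumed fully-loaded annual cost, U.S. engineer
cost_bangalore = 13_500  # assumed fully-loaded annual cost, Bangalore engineer

print(f"U.S. engineers:      {fee / cost_us:,.0f}")        # ~10,188
print(f"Bangalore engineers: {fee / cost_bangalore:,.0f}") # ~226,393
```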

Now think about how much big tech companies give back to the open source world (Chromium, Android, LLVM, Clang, WebKit, etc.). Consider just how much incentive they would have to contribute only to projects that don't adopt a license that asks for billions of dollars in fees every year, and fold in the sheer amount of resources they could devote to supporting those alternative projects rather than joining said consortium and paying those fees, and it quickly becomes obvious why a percentage-of-revenue approach would be completely ludicrous in practice. There's just no way for the open source community to build up enough of a code delta from existing freely licensed versions quickly enough to not be thwarted by companies forking and duplicating that effort to avoid paying the fees, precisely because 1% of the revenue of all of those companies is such a staggeringly high number.

And that goes double for companies that derive very little of their revenue from software, for whom 1% of their revenue would be completely outrageous.

Comment Re:Percent Revenue licenses are abhorrent (Score 1) 69

the vast majority of companies will be massively overcharged

I doubt that there are a lot of companies who don't benefit massively from free software one way or another, what with Linux and Unix-like environments (Cygwin/MSYS, BSD, ...) on servers and dev machines, IDEs (VSCode, Eclipse, Emacs, Vim, ...), compilers (GCC/LLVM) and languages (Python/Java/Go/Rust/...), libraries (OpenSSL, GTK/Qt, jQuery/React, NumPy/PyTorch/TensorFlow, ...), databases (MySQL, SQLite, ...), etc., etc.

So the overcharge will depend heavily on how popular that new license becomes. If enough popular software switches to that license, 1% could become quite paltry for pretty much everybody.

Realistically, nobody can switch to that license. The software that exists now will continue to exist under its existing license, because anybody who wants to can fork it before the license change and treat that as damage and route around it.

So it's a question of how much *new* software adopts such a license.

Comment Re:Percent Revenue licenses are abhorrent (Score 2, Insightful) 69

This thing Perens is proposing isn't really a license. It's an enforcement agency. And yeah, if you're used to ignoring the license because you don't think it should apply to you then probably you're not going to like it.

I think it's more accurate to say that no business will touch any code released under this license, because everyone assumes that they will eventually have enough revenue to have to pay the licensing fee, and that license fee is likely to exceed the value they'll get from the software. The only companies likely to derive more than 1% of their income from some open source library are things like cloud companies that derive huge chunks of income from letting people run open source software on their hardware, but they can always dodge the costs by requiring customers to install the software themselves, and you'll be back to square one.

Also, you can hire a contractor to rewrite a decent amount of code for $50k. So in many cases, during the last year before they hit $5 million in revenue, they'll hire someone to write a replacement for the code, and then leave the consortium and pay zero.

The general concept is sound. Having a nonprofit responsible for collecting and distributing licensing fees for open source is a good idea. And it isn't unreasonable to not charge fees to companies that make less than some threshold amount.

However, making it always be a percentage of the company's income doesn't make much sense to me. Developers should be allowed to choose a fee structure that makes sense to them. Other options include a per-copy cost, a blanket cost, a per-user cost, or some combination thereof (e.g. "the greater of $X per copy or $Y per user"). Blanket licensing fees could be set as a percentage of revenue for the product or service, or as a fixed amount, at the developer's discretion.
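As a sketch of how such a combined structure might be computed (the rates are entirely hypothetical):

```python
def license_fee(copies_sold: int, active_users: int,
                per_copy: float = 0.50, per_user: float = 2.00) -> float:
    """'Greater of $X per copy or $Y per user' blanket fee (hypothetical rates)."""
    return max(per_copy * copies_sold, per_user * active_users)

# A product with many shipped copies but few active users pays the per-copy rate:
print(license_fee(copies_sold=1_000_000, active_users=50_000))  # 500000.0
```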

But a flat-fee license of 1% of income, without regard to how much licensed code a company uses, doesn't make much sense. A few companies that rely very heavily on licensed code will be massively undercharged, while the vast majority of companies will be massively overcharged; the latter won't want to join at all and will choose other code instead. So you'll basically end up with the only people who join the consortium being content creators and leeches. That's not a healthy funding model.

Comment Re:"Open" how? (Score 1) 132

I'm not asking Apple to allow Chrome or Firefox to be able to make their browser engine available as a system library where users can override which browser engine to use in an embedded way. That would be silly. If an app developer wants to use a different browser engine, they can choose to embed it into their app. If not, they'll use WebKit.

Except they can't, because Apple won't approve any browser app for iOS/iPadOS that does not use WebKit.

I think you're missing the context of this thread. I was arguing for why the security rationale isn't a valid reason for Apple to maintain that stance, and why it is reasonable for the EU, which has already required them to remove that limitation on iPhone, to require them to open up iOS on iPad as well.

Comment Re:"Open" how? (Score 1) 132

When did Apple ever not implement a new standard in a timely manner?

There are still major bugs in Safari's support for HTML editing when used in conjunction with the ::before and ::after pseudo-elements that I filed... I think over a decade ago. The HTML editing specification dates back to 2011, and those pseudo-elements date back to the CSS 2 spec in 1998.

There are quite literally entire websites devoted to pointing out the parts of the CSS spec that Apple still hasn't gotten right.

If you don't know this, then either A. you aren't a web developer or B. you're actively trying not to know it.

Comment Re:"Open" how? (Score 1) 132

That's how you end up with the Android ecosystem: 50,000 different unpatched devices, 50,000 different compile times, every standard under the sun, and little consistency between them.

Exactly.

What the heck does users choosing to have a different web browsing app installed have to do with devices not being consistent with one another?

Most apps don't redirect users to external browsers at all, and even if they did, they would have no control over how those external browsers chose to render the content even now. Just because browsers use the same rendering engine doesn't mean that they can't inject their own stylesheet, rewrite the URL to go through a rewriting proxy, or do pretty much any amount of damage to the content that they want to do.

So allowing multiple browser engines on the platform does not affect developers at all unless those developers choose to download a different browser engine and integrate it into their app, which is something that they couldn't do before.

Comment Re:"Open" how? (Score 2) 132

Apple doesn't allow other HTML Renderers on iOS/ipadOS simply because no one on the Planet has enough resources to keep up with trying to keep ahead of testing for all the constant Updates for multiple HTML Engines. It's just not worth the considerable vulnerability-risks.

That's an absolutely ridiculous argument.

I'm not asking Apple to allow Chrome or Firefox to be able to make their browser engine available as a system library where users can override which browser engine to use in an embedded way. That would be silly. If an app developer wants to use a different browser engine, they can choose to embed it into their app. If not, they'll use WebKit.

For non-browser apps, there's no end-user benefit to being able to switch browser engines, because A. an app usually won't work with a different engine, and B. a non-browser app typically uses a browser engine for content from the company that created the app, which means it doesn't typically expose normal browser chrome and doesn't take advantage of the features that a different engine would provide anyway.

What I (and the EU) say is that they have to allow apps to choose to embed other engines. Once an app developer makes that choice, the responsibility for keeping the engine up-to-date becomes theirs.

AFAIK Apple already has rules that require app developers to disclose which third-party libraries and frameworks they link against (with the sub-bundles being separately signed, etc.), and presumably can block apps that run with versions of libraries that contain known vulnerabilities.

Standalone browsers obviously have to test their browser engine no matter what because... well, they're shipping a browser.

So the only case where anybody actually needs to test with various engines would be people doing actual web development (including app developers who create their own web content, but make it load in an external browser for whatever reason), but almost all of those people already have to do that, because those other browser engines are actively in use on macOS, Windows, Linux, etc.

As for vulnerability risks, first, remember that the operating system is ultimately responsible for limiting the damage that a compromised app can cause.

Second, remember that with the exception of social media apps, most embedded use of browser engines involves web content created by the app developer itself, which means the risk from security vulnerabilities is remarkably close to zero.

Third, remember that when the user chooses to use a different browser, at-risk use of the WebKit engine goes to near zero (except for social media apps). So any increase in risk caused by using a different engine for browsing is matched by a decrease in risk from not using WebKit for browsing.

Having a diverse ecosystem of browser engines actually improves platform security overall. Consider a platform where one third of users run Chrome, one third run Firefox, and one third run Safari. A website that tries to compromise the user's browser would then affect only one third of all users instead of all of them.

And even if the exploit is common between the WebKit and Chrome engines, you've still probably reduced the number of people affected by a third. Even in the worst case, where all engines are affected (almost unthinkable), you still haven't increased the odds of the user being affected, because the rate at which the user visits the offending website doesn't go up merely because the user has multiple browser engines.
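A toy calculation makes the exposure argument concrete (the market shares are the hypothetical thirds from above):

```python
# Hypothetical market shares: one third per engine, as in the example above.
shares = {"Chrome": 1 / 3, "Firefox": 1 / 3, "Safari": 1 / 3}

def exposed_fraction(vulnerable_engines: list[str]) -> float:
    """Fraction of users exposed when an exploit affects the given engines."""
    return sum(shares[e] for e in vulnerable_engines)

print(exposed_fraction(["Safari"]))                       # 0.33 vs 1.0 on a monoculture
print(exposed_fraction(["Safari", "Chrome"]))             # 0.67, still a third fewer
print(exposed_fraction(["Safari", "Chrome", "Firefox"]))  # 1.0, the rare worst case
```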

There's basically no situation where having multiple engines increases the average risk to users unless one of them gets abandoned. Rather, it reduces the number of people running any given engine, making it harder to target everyone, and ensuring that attacks affect a smaller number of people than they otherwise would. On average, having multiple engines makes it harder for attackers, not easier.

And in 2023, Safari had 11 zero-day exploits versus 8 in Chrome and possibly as few as one in Firefox (which may be because fewer people are looking, given its smaller market share), so it isn't as though Safari is dramatically safer than other engines.

So from a security perspective, having more browser engines is likely to be at worst a no-op, and at best, a major security improvement.

Comment Re:"Open" how? (Score 1) 132

Oh no you are telling me we live in a time where web developers struggled with browsers holding back progress. I'm glad back in my day we didn't have that struggle because Microsoft was always putting out the superior IE browser. Sure customers wanted to use other browsers like Netscape Navigator, but those were toys for children. IE was a real browser for business.

Microsoft was successfully prosecuted by the U.S. government for antitrust violations because of what happened with Internet Explorer, precisely because its dominance over the browser market on its own platform made it too powerful. Note that Apple is in a very similar position with Safari/WebKit on iOS. Now ask yourself whether you want history to repeat itself, with Apple as the next convicted monopolist.

Comment Apple is copying Windows now! (Score 1) 40

Ohh, this brings back memories...

Back in the Windows Mobile days, I had an HTC Excalibur (branded as the T-Mobile Dash in my case).

The alarm was useless because it was a dice roll as to what would happen. Sometimes it would make a sound at 7 AM like it was supposed to. Sometimes it'd make a sound at 9:27. Sometimes it wouldn't make a sound at all, but would instead show a silent screen notification. Sometimes it would make its sound, but with like 30 iterations, so you had to hit the 'stop' button 30 times to silence all of the instances of it trying to play the alarm sound. It was wake-up roulette and will-it-go-off-in-church-when-it's-not-supposed-to roulette all rolled into one.

Obviously, this wasn't the greatest of experiences, but it was particularly obnoxious when the screen was completely cracked and I couldn't see the display to disable an alarm I'd forgotten I'd set; I was on a trip, so I was stuck with it for a week.

It's nice to hear that Apple is finally catching up and adding features to iOS that Microsoft was implementing in Windows Mobile 20 years ago.
