Comment Re:Yes, at least for Microsoft (Score 1) 27
Simplify. The best part is no part. The parts omitted never fail. They don't require maintenance, supply chains, continuous improvement.
Reply "yes", then close and reopen this message to activate the link.
No matter how idiot-proof you make technology, God will always create a better idiot. That's why the right way to solve this problem is:
But IMO, the most important one is the last one. We would be a lot better off if the right to a speedy trial were taken seriously. If a year or more passes between committing a crime and being prosecuted, the threat of prosecution ceases to be a meaningful deterrent to crime.
If I were in charge, there would be two nationwide statutes of limitations added that apply to all crimes:
* I'm willing to consider arguments that these numbers should be slightly higher, but not dramatically so.
If legitimate extenuating circumstances outside the control of the prosecution warrant a delay (e.g. the defendant being impossible to locate or in another country), a judge could order the statute of limitations tolled. But otherwise, the only exceptions should be in situations where a mistrial or something similar forces a new trial (which obviously starts more than 30 days after the initial charges are filed). And even for a retrial, there should be a hard limit of maybe 90 days or thereabouts from the end of the previous trial.
This would result in a very large number of cases not getting prosecuted, but by forcing the prosecution to triage cases and bring important cases quickly, it would ensure that fear of being brought to justice would be a real deterrent to committing crimes. Right now, it is not. Good people don't (intentionally) commit crimes, because they have morality and ethics. Bad people do, because they have neither. Almost nobody avoids doing crime merely out of fear of punishment, and that's a bad thing.
Errr no, they very much do make technology. Quite a bit of it, actually. Lots of what is marketed under Dolby Vision and Dolby Audio was developed in-house, and they spend a quarter of a billion dollars every year on R&D. Heck, even the noise cancelling ability in video conferencing software, along with music detection, was largely developed by Dolby.
I would still consider them patent trolls at this point. Legitimate patent holders use patents immediately or hold them to use defensively. They do not sit on patents for an entire decade, waiting for the patented technology to be ingrained in the industry, and then use them to earn income. The patent having been created in-house rather than acquired doesn't change the fact that the behavior is fundamentally similar.
Just because you don't see their products on the shelves at Best Buy doesn't mean they don't make those either. They produce reference monitors for colour grading Dolby Vision content, they have an entire line of cinema audio speakers, and they make the rest of the cinema audio stack as first-party products, including multichannel amplifiers and audio pre-processors for Atmos content - a codec they also developed from the ground up.
Dolby Atmos was 2012. Dolby Vision was 2014. How are they not basically a non-practicing entity at this point?
The fact they sit on a bunch of related patents is just the nature of R&D.
Yes, but using them offensively after sitting on them violates the doctrine of Laches. In effect, they sat on the patents so that people would end up depending on AV1, because if they sued too early, AOMedia would have designed around the patent, and they would get nothing. So they deliberately delayed action to cause prejudice to the defendant.
At this point, it would be entirely reasonable for a judge to declare that because they failed to act against AOMedia within the 6-year window prescribed by patent law, they lost their right to sue AOMedia for damages in creating the patented technology, and that patent exhaustion applies to all downstream users. And if that happens, I will laugh so hard.
Imagine your little startup patents something and is egregiously copied by a large, rich company. If the startup doesn't immediately have the funds to sue, the other company just gets to use the tech without licensing the patent, with no consequences. Seems unfair.
Dolby is not a startup. It was founded in 1965.
Also, the doctrine of Laches says you cannot unreasonably delay filing a lawsuit. Waiting ten years from the first release of the specification is clearly unreasonable. Waiting eight years from the first finished implementation is clearly unreasonable.
The bigger problem for Dolby is that patent law won't let you recover damages for any infringement that occurred more than six years before the suit, and the standard has been available for eight. So unless somehow this is some wacky patent where Dolby claims that some use of an otherwise non-patent-protected codec is patented (which should almost certainly result in that patent getting overturned for obviousness), Dolby should be laughed out of court.
But I'm sure they're hoping that Snapchat caves and agrees to go back to a Dolby codec or pay them royalties rather than fight them in court. This is patent troll behavior. Dolby has effectively become a patent troll, IMO.
Unfortunately, from a legal point of view, AOMedia hasn't done anything against Dolby. It's simply created a video compression codec. It doesn't use the codec, it just publishes documentation on how to use it.
From a patent law point of view, it is illegal to make something that violates a patent, not just to use it. Patent law kicks in when you make, use, offer to sell, sell, or import a patented invention.
IMO, one of the biggest flaws in patent law is that it covers the use of inventions in all cases except for patent exhaustion (sale of an already-licensed product). With the exception of pure process patents, IMO, that should not be a violation, as a user has no realistic way of knowing that something they bought violates someone else's patent, and should not even need to worry about such nonsense.
This "feature" of patent law exists solely to give the patent holder more leverage to screw the company accused of violating the patent by holding their innocently infringing customers liable, causing irreparable reputational damage to both companies, irreparable harm to countless others, etc., and it should have been eliminated decades ago.
That said, having seen this behavior by Dolby, I hereby vow to never knowingly buy any product that they manufacture, nor support their products or technology, nor use it except in situations where the content creator or distributor leaves me no alternative. They've gone from being a legitimate technology company to a glorified patent troll. Instead of innovating and making the world better to enrich themselves, they are suing anybody and everybody and making the world worse to enrich themselves.
Moreover, absent gross incompetence by Dolby's legal counsel, it seems clear that Dolby flagrantly and willfully violated the doctrine of Laches to allow damages to accumulate for eight full years from the final release (and ten years from the first specification release), thus allowing AV1 to become the dominant codec so that they could then predatorily use their patents to squeeze money out of the industry. Their behavior is nothing short of unconscionable, and whether due to incompetence or malice, their legal counsel should be formally sanctioned for it.
Finally, if Dolby wins, it is paramount that the entire technology industry agree to never license *any* future Dolby technologies going forward, because doing so will only encourage them to use the patent system to prevent free and open standards. The only way to prevent patent abuse is to stop feeding the companies that abuse patents.
It is my fundamental belief that data formats should not be allowed to be protected by copyright or patents under any circumstances, because doing so fundamentally violates the rights of the owners and creators of that content. It means users can potentially lose access to data that they themselves created. And this is wholly unacceptable for the same reason that renting software is unacceptable.
In short, Dolby and its lawyers can go f**k themselves with a shovel.
Compare this to what you would have said last year.
Users should never be able to do things that cause crashes, in the same way that drivers should never be able to press any button or pedal that causes the engine to spontaneously burst into flames.
I don't have crashes.
I'm also a Mac user, but let's not boast here, shall we?
My personal guess would have been at least 10x. Did Microsoft bribe the study authors?
Dude, are you living under a rock?
These bands are creating new music. But the money that allows them to do so comes from their old music. I have bands in my collection that have been making music for 30 years.
And I'm pretty sure even small bands make good money nowadays from touring,
No they don't. They don't even make ok money. Tours are expensive and a lot of people, from road crew to venue security, take their cut before the musicians. The big guys, they make a killing on tours. But the small ones sometimes don't even break even.
In fact, the common wisdom in the industry is that touring is worth it not because the tour itself makes a profit, but because it builds a fanbase and drives what is called "catalog discovery" - both old and new fans looking up and buying the albums with the songs they liked (and, for the old fans, the ones they didn't know).
This study: https://www.giarts.org/article... says that 28% of income across all the musicians surveyed comes from tours. The share is larger for the rock/pop sector, where it nears 40%, but even that isn't easy money. And when you consider that only 20% of rock/pop musicians make more than $50,000 a year, it becomes a hollow statement.
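To make that concrete (my own back-of-the-envelope math, using the study's figures as quoted above): even a rock/pop musician sitting right at that $50,000 line would see roughly 0.40 x $50,000 = $20,000 a year from touring, before the road crew, venues, and everyone else described above take their cuts. For the other 80% of rock/pop musicians, the touring slice is smaller still.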
Plus, it goes directly against your first statement - while on tour, the band is not creating new music. So if you want to drive musicians more towards constantly creating (which most of them already do), then you can't make live performances their main income source.
I believe that the Mac Studio fills the role of the Pro, via the M5 Max and M4 Ultra. In most head-to-head performance tests, they've been trouncing Windows machines, be it on Ryzens, Core Ultras, or Snapdragons.
CPU performance. Now compare GPU performance against a PC built out with eight GPUs to do parallel 3D rendering.
I think the Mac Pro - particularly the trashcan - was excellent
The trash can was thermally limited by its design, and could never be upgraded to hold newer CPUs or GPUs. Anyone for whom the trash can Mac Pro would work could just as easily use a Mac Studio, give or take, ignoring the lack of ECC (which the Apple Silicon Mac Pro also lacked).
I disagree with "Apple really should have just been honest with its pro users and said 'We no longer care about you.'" They've abandoned a very specific and shrinking segment of pro users, but the vast majority of pro users are covered by today's lineup, with Mac Studio at the top.
Depends on what you mean by covered. Can they do their work? Yes. Are they negatively impacted by hardware limitations? Also yes. A lot of professionals would be willing to pay extra for ECC. The fact that Apple doesn't offer ECC makes their machines less than ideal for use cases where a crash would be expensive. The fact that a lot of pros put up with crashes doesn't mean they like the situation. It just means that they dislike it less than switching platforms and tools.
But the pro users I was specifically talking about here are the ones doing high-performance computing tasks involving GPUs. Their only real option is to change platforms. Even though the new Apple Silicon CPUs are great in terms of performance per watt, the wattage is really low, so if you genuinely need boatloads of GPU, you're out of luck: to the extent that Apple was still in the game without NVIDIA support, it dropped out completely when Apple Silicon dropped support for AMD. At that point, Apple computers became nearly useless for most modern high-performance computing/AI workloads, large-scale 3D video rendering, etc., because they're underpowered as shipped and can't be expanded with more GPUs, and parallelizing work across multiple machines is way more expensive and not always practical.
One minor peeve - what is "pro" today? Most office workers can do their work just fine with some of the cheapest equipment you can get - isn't that "professional" enough?
The historical definition of "pro" is people who are running software beyond what a typical user would run. Web browsers and productivity software (word processors, spreadsheets) are not pro apps; they are business apps. Pro apps are mostly things like high-end photo editing (think Photoshop/Pixelmator, not iPhoto/Capture One), 3D modeling, audio/video production, etc.
Even most developers can do most of their work on laptops these days - and if they need more horsepower, that's likely to be on the server side anyway. Don't they count?
Developers at least arguably fall into that category, though they are borderline, because they don't have huge storage requirements or huge compute requirements. Developers can do most of their work on laptops, though Apple's non-Pro laptops are pretty thermally throttled, so they will be miserable. And developers are probably the group who care most about ECC RAM, because they understand enough to know why it matters, but they still often use laptops because they don't want to be tethered to a desk. It's a tradeoff.
And what about project managers, lawyers, and CEOs - aren't they "pro" either?
No, and they never were. While the users might have professional occupations, their computing requirements are indistinguishable from a high school student. "Pro" in this context doesn't mean "users with money". It means "users with needs that exceed typical requirements".
Eh, traditionally the big "need many HDMI inputs" task was multicamera editing.
I know that at some point Multi Camera support got broken or removed, but apparently it came back? Honestly, it's been a long, long time since I've been anything close to knowledgeable about FCP.
Oh. Today I learned that FCP regained live multicamera switching support in 2024... four years after everybody stopped caring and started using OBS, vMix, or Tricaster.
To be fair, outside of GPUs there really isn't much need for third-party cards, and arguably even GPUs aren't a showstopper with third-party GPU cages.
And given Apple's total lack of third-party GPU support on Apple Silicon (beyond a few kludges that use them for AI workloads, but with no display output), losing the current Mac Pro is no great loss.
And yeah, the BlackMagic stuff I've bought lately is Thunderbolt. Also, most of the software for things like real-time switching runs on Windows anyway, so I'd imagine the market for that on Mac is not huge. And so much stuff gets brought in over networks these days (NDI, SRT, etc.) that HDMI ingest probably isn't that interesting anyway. You're more likely to use a dedicated encoder box that provides streams over Ethernet. My production work has been doing it that way since the pandemic.
Even more than the PCIe lanes, there wasn't hardware to justify it. With Apple Silicon, the GPU is built in, and you can't fill the case with cards from NVIDIA to make it a CUDA monster or handle graphics beyond the (impressive) abilities of the combined CPU/GPU.
Exactly this. Apple neutered the Mac Pro by making all of its additional functionality useless.
Years ago, they announced that they were killing support for kernel-space drivers. Then they announced a user-space replacement, DriverKit, that is basically half-assed when it comes to PCI, providing no support for any of the sorts of PCIe drivers that anyone would actually want to write. The operating system already comes with built-in support for USB xHCI silicon and most major networking chipsets, nobody builds PCIe audio anymore, FireWire support was dropped in the current OS release, and video drivers can't be written because Apple didn't bother writing the hooks.
That last one is the showstopper for PCI slots on a Mac. The main reason people bought Mac Pro or bought Thunderbolt enclosures was to support high-end video cards. With Apple not supporting any non-Apple GPUs on Apple Silicon, the slots are basically useless. I'm not saying that PCIe is useless by any means, just that the neutered, broken, driverless PCIe-lite hack that Apple actually makes available on macOS is basically useless.
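To give a sense of how little is there, here's a minimal sketch of what a user-space PCI driver looks like under Apple's PCIDriverKit framework. This is from memory, with the matching .iig class declaration, Info.plist matching dictionary, and entitlements all omitted, and "MyPCIDriver" is a made-up name, so treat the exact signatures as approximate rather than gospel:

// MyPCIDriver.cpp - hypothetical DriverKit PCI driver sketch
#include "MyPCIDriver.h" // generated from the matching .iig (omitted here)
#include <os/log.h>
#include <DriverKit/IOLib.h>
#include <PCIDriverKit/PCIDriverKit.h>

kern_return_t
IMPL(MyPCIDriver, Start)
{
    kern_return_t ret = Start(provider, SUPERDISPATCH);
    if (ret != kIOReturnSuccess) {
        return ret;
    }

    // The provider for a PCI nub is an IOPCIDevice.
    IOPCIDevice *pci = OSDynamicCast(IOPCIDevice, provider);
    if (pci == nullptr || pci->Open(this, 0) != kIOReturnSuccess) {
        return kIOReturnNoDevice;
    }

    // Config-space reads and BAR access work fine from user space...
    uint32_t vendorAndDevice = 0;
    pci->ConfigurationRead32(kIOPCIConfigurationOffsetVendorID,
                             &vendorAndDevice);
    os_log(OS_LOG_DEFAULT, "Found PCI device 0x%08x", vendorAndDevice);

    // ...but that's roughly where it ends. There is no DriverKit hook
    // for registering this device as a GPU or display, which is the
    // point above: fine for oddball I/O cards, useless for video.
    return kIOReturnSuccess;
}

That API surface is enough for a niche capture or I/O card, but nothing in it lets a third-party card light up a display, which is why the slots were doomed.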
I suppose you could theoretically provide DriverKit support for RAID cards, but really at this point everybody just uses external RAID hardware attached over a network anyway, so the number of people who would buy a Mac Pro for something like that is negligible.
And I guess in theory, you could port Linux video card drivers over if the only thing you're doing is using GPUs for non-video purposes (e.g. for AI model training or offline 3D rendering), but tying it into the operating system as a video output device is likely impossible without additional support from Apple, and nobody is going to bother to do that for the tiny number of people who would want that when you can just run Linux on x86 and not have to do all that porting work. After all, for those sorts of tasks, you probably aren't benefitting much from the OS or the CPU or memory performance anyway.
So basically the Mac Pro was dead on arrival because of Apple dropping support for very nearly every single thing that the Mac Pro could do that couldn't be done just as easily with a Studio (without even attaching a Thunderbolt PCIe enclosure). And once the Studio came out and had a comparable CPU in a much smaller form factor, the writing was on the wall.
More than that, the Apple Silicon Mac Pro is a sad toy that was never truly worthy of the Mac Pro name by any stretch of the imagination. It doesn't even have ECC memory or upgradable RAM. IMO, Apple really should have just been honest with its pro users and said "We no longer care about you," and then they should have dropped the Mac Pro as part of the Apple Silicon transition, rather than shipping something so massively downgraded that is so many miles from being a true pro desktop machine.
Anyone who is even slightly surprised by it being discontinued was obviously not paying attention.
There is more than one study and more than one way to look at it. Especially for streaming, having a catalog matters, especially for the smaller artists who will never have a charts-level hit:
"In 2024, nearly 1,500 artists generated over $1 million in royalties from Spotify aloneâ"likely translating to over $4 million across all recorded revenue sources. What's remarkable is that 80% of these million-dollar earners didn't have a single song reach the Spotify Global Daily Top 50 chart. This reveals a fundamental shift from hit-driven success to sustainable catalog-based income, where consistent engagement from devoted audiences matters more than viral moments or radio dominance."
https://cord-cutters.gadgethac...
Also don't forget that many studies, such as DiCola's "Money from Music", focus on the superstars and the big hits. It is true that chart pop music generates 80% or so of its income within the few weeks it stays in the charts and then drops off sharply.
Honestly, I don't care about the charts and superstars. They wouldn't starve if we cut copyright terms to six weeks. I do care about the indie artists that I enjoy - the ones who, after ten years, get the band back together for another tour through clubs with 200 or 500 person capacity. I'm fairly sure they would suffer if the revenue from those albums disappeared. And disappear it would. Maybe fans would still buy the CDs from the merch booth, but Spotify would certainly not pay them if it didn't have to.