One would think the third-party doctrine would pretty much scuttle OpenAI's objections.
Yeah, it would probably take legislation forcing all of them to post and advertise prices including taxes. If everyone had to do it, no retailer would be disadvantaged by being the first.
That said, I think it's a bad idea unless retailers also have to itemize the taxes on receipts so that consumers can see how much tax they're paying. As far as I've noticed, that typically doesn't happen in Europe (other than VAT, which is often itemized on some purchases so that foreigners can claim a VAT rebate). I think it's important that people see the taxes they pay so they can evaluate whether they're getting good value for their tax money. This is why I also oppose corporate taxes and any other taxes that are ultimately borne by individual taxpayers but hidden behind layers of obfuscation.

Actually, there's another reason to oppose corporate taxes: they delegate to corporations the decision of how to allocate the cost of the taxes among customers, employees and shareholders. That allocation is an important public policy matter, and it should be decided by legislation, not by corporate bosses.
To be clear, I think there are a variety of public services that absolutely should be funded by taxpayers, and I wholeheartedly support taxation for those purposes. But exactly what should be taxpayer-funded, at what level and with what efficiency are all important questions that voters should have input into, and that requires that they actually see what taxes they're paying.
If they're using the enclaves built into Intel and AMD, there may be side-channel issues to deal with. ARM is closer to what Apple is trying with their enclave.
ARM's TrustZone is definitely more secure than the alternatives on Intel/AMD, but TrustZone is also subject to side-channel attacks. To a first approximation, it's impossible to run two workloads on the same CPU and keep them perfectly isolated from one another.
However, I don't think any of these secure enclave concepts are relevant in this case. The way you'd build a private AI cloud is not to run it in enclaves (which are essentially just security-focused VMs) on CPUs that are running other tasks; you'd devote a bunch of CPUs solely to running the private AI workloads. Then your isolation problem reduces to the traditional ones: physical access control to the secure machines, and securing the data flowing into and out of them over network connections.
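To make that concrete, here's a minimal sketch (Python, with made-up host and port names) of the software half of that problem: once the machines are dedicated and physically secured, the client side mostly reduces to an authenticated, encrypted channel to the inference host.

import socket
import ssl

# Hypothetical names for a dedicated, physically secured inference machine.
INFERENCE_HOST = "private-ai.example.internal"
INFERENCE_PORT = 8443

# Require modern TLS with certificate verification so prompts and responses
# are protected on the wire between the client and the dedicated host.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection((INFERENCE_HOST, INFERENCE_PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=INFERENCE_HOST) as tls:
        tls.sendall(b"prompt: summarize my meeting notes\n")
        reply = tls.recv(4096)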
So you missed the part where the book/paper/presentation was authored by a woman, then?
Trump appreciates you donating him permanent space in your thoughts.
...until the punishment is terrifying.
Simple as that.
Some ask "If the market is good at deciding how to pay people based on the value they can produce, why are these non-producers making a very large chunk of all the money out there?"
However, most people who ask that do it while pointing to people who are actually quite important producers, such as financiers. Be careful not to conflate "don't produce anything of value" with "do something I don't understand the importance of".
Of course there are people in every profession who get paid a lot more than they're worth. This is less true of manual labor jobs where the output is easy to see and measure, but it's true across the board. Even in manual labor jobs you can have people whose output is negative. They may pick X apples or whatever, but they might do it while making everyone around them work slower.
IIRC, in legal theory for liability this is called the "empty chair" tactic: each defendant points to an "empty chair", i.e., a party not involved in the dispute, and lays culpability on that non-party. If every defendant then points to the "empty chair", they can all shirk responsibility.
Just to complete the description of the "empty chair" tactic: this is why lawsuits typically name anyone and everyone who might possibly be blamed, including many who clearly aren't culpable. It's not because the plaintiff or the plaintiff's attorney actually thinks all of those extra targets really might be liable; it's so that the culpable parties can't try to shift the blame to an empty chair, forcing the plaintiff to explain why the empty chair isn't culpable (i.e., to defend them). Of course this means that those clearly non-culpable parties might have to defend themselves, which sucks for them.
Take a look at the size of Wikipedia's bank account. They constantly solicit donations on their site as though they're desperate for funds, despite having billions upon billions, enough to last pretty much off of the interest alone.
Work in AI, eh?
So... you didn't actually look at the size of WikiMedia Foundation's bank account.
WikiMedia absolutely has enough money to run Wikipedia indefinitely if they treated their current pile of money as an endowment and just used the income from it to support the site. They don't have "billions upon billions", but they do have almost $300M, and they spend about $3M per year on hosting, and probably about that much again on technical staff to run the site, so about $6M per year. That's 2% per year. Assuming they can get a 6% average return on their assets, they can fully fund Wikipedia forever, and then some.
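For anyone who wants to check the math, here's a quick back-of-the-envelope sketch in Python. The $300M in assets, ~$6M/year in costs and 6% return are the figures above; the 3% annual cost growth is my added assumption.

# Rough endowment model: grow assets at the assumed return, pay the
# year's costs, and let costs inflate each year.
assets = 300_000_000      # approximate reserves
annual_cost = 6_000_000   # hosting plus technical staff, per year
nominal_return = 0.06     # assumed average return on assets
cost_growth = 0.03        # assumed cost inflation (my assumption)

for year in range(50):
    assets = assets * (1 + nominal_return) - annual_cost
    annual_cost *= 1 + cost_growth

print(f"balance after 50 years: ${assets / 1e6:,.0f}M")
# The initial draw is 6/300 = 2% of assets, comfortably under the ~3%
# real return, so the principal keeps growing even as costs rise.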
So, what do they do with all of the donations instead, if the money isn't needed to run Wikipedia? It funds the foundation's grant programs. Of course, you might actually like their grant programs. I think some of their grants are great, myself, and if they were honest about what they're using it for I might be inclined to give. But they're not, and the fact that they continue lying to Wikipedia's user base really pisses me off, so I don't give and I strongly discourage everyone I can from giving, at every opportunity.
(a) I did that fine previously without AI
Me too, but it took a lot longer and I was a lot less thorough. I would skim a half-dozen links from the search results; the LLM reads a lot more, and a lot more thoroughly.
(b) Nobody is following any of the links that supposedly support the conclusions of the AI; nobody is reading any source material, they just believe whatever the AI says
I do. I tell the LLM to always include links to its sources, and I check them. Not all of them, but enough to make sure the LLM is accurately representing them. Granted, other people might not do this, but those other people also wouldn't check more than the first hit from the search engine, which is basically the same problem. If you only read the top hit, you're trusting the search engine's ranking algorithm.
into AI-generated slop, such that (d) Humans can no longer access original, correct information sources. It is becoming impossible.
That seems like a potential risk. I haven't actually seen that happening in any of the stuff I've looked at.