
Comment Re:Ingenuity? (Score 3, Interesting) 73

The first steps toward real human use of space are: (1) building a prototype centrifugal habitat in low Earth orbit, and (2) building remotely operated vehicles (ROVs) to mostly replace EVAs. (1) is necessary to determine how much gravity humans need to live off-Earth for longer than a year without debilitating health consequences. It's unlikely that the Moon's gravity will be enough, but maybe Mars's is, or maybe not. The cheapest way to find out is not to build a Moon or Mars colony first; it would make far more sense to know the health effects of lunar gravity before planning the structure of a long-term Moon installation. As for (2), using EVA for essentially all external maintenance, as is done on the space station, is not viable for accomplishing large amounts of work in space. Having humans present locally to operate the ROVs is a big plus, but putting those humans in space suits just doesn't make sense: merely to have gloves a human can squeeze without excessive fatigue, the person has to go through a lengthy decompression-recompression protocol, which is not workable for efficient, effective everyday work. Suited humans likely have their place, but most maintenance should be done by ROVs, just as it is on deepwater installations on Earth.
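
To illustrate why habitat size matters for point (1): the rotation rate needed to simulate a given gravity follows from the centripetal acceleration a = omega^2 * r. A minimal sketch (the 50 m radius is an illustrative assumption, not from any actual design):

```python
import math

def spin_rpm(gravity_m_s2: float, radius_m: float) -> float:
    """Rotation rate (rpm) at which centripetal acceleration at the rim
    of a centrifuge of the given radius equals the target gravity."""
    omega = math.sqrt(gravity_m_s2 / radius_m)  # rad/s, from a = omega^2 * r
    return omega * 60.0 / (2.0 * math.pi)

# A hypothetical 50 m radius habitat spun for lunar, Martian, and Earth gravity:
for name, g in [("Moon", 1.62), ("Mars", 3.71), ("Earth", 9.81)]:
    print(f"{name}: {spin_rpm(g, 50.0):.2f} rpm")
```

The point of the sketch: testing a range of gravity levels in one orbital prototype just means varying the spin rate, which is far cheaper than building surface colonies to get the same data.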

Until these two things are on the agenda, manned spaceflight seems to be just a vanity project targeted at space tourism (short-term visits) or showing up other nations. (Note that the push for effective, affordable heavy lift is separate and has many other benefits outside manned spaceflight.)

Comment Designing an AI system to do homework is evil (Score 3, Insightful) 153

Maybe it's time to just say that designing and marketing an AI to be good at "doing homework" is destructive and hurts people. It's like social media targeting increased "engagement" at the cost of truth, social function, and the mental health of its users. It's morally bad even if it isn't illegal. Providing wholesale solutions to homework that someone is supposed to do to improve their own skills has always been a Bad Thing(TM); it hurts the very person it is provided to. The grade they get is far less valuable than the knowledge gained by doing the homework. Homework problems produce new knowledge only in the person doing them, not for society overall; they are always exercises with known solutions.

It seems a lot of people have been sold the myth that LLMs are just a tool, like a calculator. But they aren't marketed and used as an assistive tool like that. They are often marketed as a plagiarism database, encouraging the user to represent the intentionally obfuscated output of the LLM as their own work rather than as the output of a piece of software. The evil part is that in many cases this appears intentional on the part of the company running the LLM; i.e., they have deliberately made it a better tool for fraud and adapted to that use case instead of a more productive one.

Comment Re:That means they had 620K in BC, right? (Score 1) 67

Funny, mostly what the Fed does is *destroy* money. "Money" is naturally created because banks can lend out what is deposited, over and over. The Fed limits that by charging banks a fee if they lend out too much, and the money banks pay the Fed in fees vanishes. It's funny how badly the people who badmouth the Fed misunderstand how fractional reserve banking actually works.
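
The "lend out what is deposited, over and over" mechanism is the textbook money multiplier: with a reserve ratio r, an initial deposit D expands total deposits toward D / r as each loan is re-deposited. A minimal sketch of that arithmetic (illustrative numbers only):

```python
def total_deposits(initial: float, reserve_ratio: float, rounds: int = 1000) -> float:
    """Sum of deposits created when each loan is re-deposited and a
    fraction `reserve_ratio` is held back at every step."""
    total, deposit = 0.0, initial
    for _ in range(rounds):
        total += deposit
        deposit *= 1.0 - reserve_ratio  # the lendable, re-depositable part
    return total

# $100 at a 10% reserve ratio converges on $100 / 0.10 = $1000 of deposits.
print(round(total_deposits(100.0, 0.10), 2))  # → 1000.0
```

This is why tightening the constraint on lending (raising r, or making lending beyond the limit costly) shrinks the amount of money the banking system creates from a given base.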

Comment Re:I have multiple opinions (Score 1) 50

No, even a "trained" person is not allowed to just reproduce copyrighted works and get paid for it, even as a mishmash. And that's not even getting into the degree to which the "AI" might just be a glorified index and storage system, which means it really does have a copy of the copyrighted work in it and therefore the model weights themselves are a derivative work.

I think there should be no question that a semi-arbitrary algorithm optimized to reproduce a dataset under certain operations is a derivative work of that dataset. Full stop. A model is a derivative work of the training data. Anything else is just a misunderstanding of how the technology works and makes no sense. Yes, OpenAI's business is basically built on intentionally pushing this misunderstanding. There is an indexing exception to copyright, but for what we are currently calling "AI", indexing and storage are inseparable, so this exception should not apply. Anything else just makes AI a copyright washing machine and copyright basically ceases to exist.

Comment Re:Blue Origin? (Score 1) 51

I think you are missing that New Glenn is a heavy-lift rocket and it works. For any vendor other than SpaceX, a successful first launch to orbit with another coming within a year (next week, in fact) is a clear success. (SpaceX was doing that, with landing and reuse, on a timescale of months, though it stumbled a bit earlier this year.) And Blue Origin at least seems to have a shot at making New Glenn reusable, again the only competition for SpaceX. With Starship and New Glenn both in operation, two reusable heavy-lift rockets, the space industry will change completely. This is why the comments referenced in the summary appear so out of touch: the entire structure of how things are planned and executed in space is about to change, which the existing Artemis plan accounts for, but the announcement indicates thinking in the old form.

Comment Re:The takeaway (Score 1) 56

In case anyone else is wondering: yes, the summary is misleading at the end. The table indicates that before first unlock (BFU) they can only get a small amount of data, the OS and some information about installed apps, but no user data. (The summary says the Pixel 9 is "supported" BFU, but that is only what Cellebrite apparently calls "BFU data", not a full file system (FFS) extraction.) So, as the parent states, none of the Pixel devices listed are vulnerable to full access from a cold state.

Comment Re:Corporate free speech is bollocks (Score 1) 61

Well, personal responsibility is the con the right is selling, but I think the current situation is exactly what those who fund the right want. The one policy position on the right that does address this is "tort reform", which usually means making the owners even less accountable by shielding the owned company from financial liability. Your initial analysis was correct: if people don't want the government regulating their actions, they must also give up the government's protections from liability. One can advocate for regulation that is more or less effective relative to a chosen goal, but advocating simply for "less regulation" is nonsensical, since the limited liability inherent in the very existence of the company is itself a huge regulation restricting everyone else.

Comment Re:Defund the police (Score 2) 133

Yeah, to be more specific, "defunding the police", as used by those advocating for it, meant diverting police funding to mental health services, or withdrawing funding from departments that were regularly assaulting, maiming, or even killing the people they were supposed to protect. It really is just an extreme form of police reform, advocated for the particularly bad situations that do exist in some parts of the US.

Comment Re:Rendering time? (Score 1) 17

Thanks for the TLDR. So presumably the iframe logic (the enclosing code being able to apply some manipulations to what is displayed in the iframe) was extended to apps opening other apps; i.e., even if an app doesn't have permission to draw over other apps, it can still apply some of the visual effects allowed for iframes to apps that it opens. I would argue that this is a bad idea and not an intuitive reading of what "draw / not draw over other apps" means, so it should be changed so that no operations on the content shown by the other app are allowed. (Or I might still be misunderstanding.)

For this exploit, one does need to be running a malicious app. Also, I don't have any apps that open Google Authenticator directly, and that would seem kind of weird to me anyway.

Comment LLM as a copyright washing machine again (Score 0) 17

There are already plenty of news stories that appear to be just someone rephrasing the AP or some other outlet's story, i.e. "The AP has reported..." with no additional information, just "summarizing" (rephrasing the first paragraph of) the AP story. Yeah, an LLM can do that pretty much by design of the architecture: a copyright washing machine. I don't know that this was ever genuinely worthwhile human work in the first place. Whether or not that "should" legally be a non-derivative work is a separate question.
