Comment Re:No such thing.... (Score 1) 32

The fact it started the night we brought her home after the boosters may be one indicator...(First Grand Mal Seizure that night, fever of 105, bald spot within days, then absence seizures started after that. More Gran Mals, pissing herself, turning blue, Etc... ) The fact that NONE of the doctors would speak to us about the DTAP being the possible cause, was another... The research that I have done into THOUSANDS of similar reactions, would be the third. Only one issue, without a doctor STATING that this was a possible cause, we cannot even seek compensation OR report it to VAERS... WAKE UP PEOPLE! I'VE LIVED THROUGH THIS NIGHTMARE! Personally! My Daughter lives with this to this DAY! (Last seizure caused her to get into a car accident on her bike and nearly killed her coming home from work. She is 25 now. Her stitches aren't even out yet... So, yeah, we are pretty freakin CERTAIN!)

You may be certain, but that doesn't necessarily mean you're right.

What your daughter is experiencing is a febrile seizure, which is a seizure that is triggered by a fever. The rate of these seizures increases by a factor of about 1.5 in the three days after the DPT shot. That doesn't mean that they are caused by the DPT shot, though. They're *triggered* by the DPT shot. A kid is either prone to having seizures during fevers or isn't.

Almost nobody who gets a febrile seizure ends up with epilepsy, though. Of the 277 people in the study linked above who had febrile seizures, not one marked the beginning of a history of epilepsy; either they already had a history of seizures beforehand or they never developed one.

So although it's *possible* that your daughter is an incredibly rare exception, it is orders of magnitude more likely that the epilepsy and the febrile seizure are unconnected, and that the lack of a prior seizure is a fluke.

That said, a temperature of 105 is considered a medical emergency, and fevers over 105 can cause neurological damage, so I can't definitively rule out the possibility that it was caused by the vaccine (or maybe by the vaccine happening to coincide timing-wise with some other illness, e.g. picking up some virus while at the hospital/clinic to get the vaccine).

There's also the remote possibility that the vaccine somehow triggered an autoimmune condition, and that this is the root cause of the seizures. The hair loss is also a red flag for an autoimmune condition. Has she been evaluated for autoimmune disorders?

Comment Re:Get root, do anything... (Score 1) 7

It's been a universal truth since the first days of Unix that root permissions allow unconstrained access to configure, execute, steal, and destroy. Still, the fact is that while secured Linux installs will make running such an exploit very hard or even nearly impossible with regular user access, not all systems are secured. I've seen people intentionally reduce or remove security rules to get something running, so there is no lack of improperly secured Linux systems out there.

Comment Re:Why is Microsoft not anti-competitive? (Score 1) 74

>> Only if your Xbox has an optical drive and supports optical drives. Wrong again, right out the door. https://www.amazon.com/EA-SPOR... One of many different places, that sell for different prices, that MS doesn't get a percentage cut from the seller.

I don't know the details of the arrangement between Microsoft and EA Games, and it probably isn't a full 30%, but I can pretty much guarantee you that Microsoft's Xbox store isn't providing those digital codes and download bandwidth for free.

>> Of course, ultimately, the impact on commerce is still mostly the same whether you're talking about a device that only plays games or a device that you use for other things, so while from a consumer perspective, the harm is greater from Apple doing it, the harm to the free market is similar, just at a smaller scale. No, again, it's how they are marketed and advertised. You keep wanting to compare two very different things and somehow want them to be the same. They aren't. It doesn't matter how from a consumers perspective, or how ever you want to rephrase it. These aren't the same. Apple's marketing killed that option, you can't say X and then claim that it's not X at the same time. It's like asking how come my car drivers license doesn't allow me to fly a plane, both get me from A to B. These aren't the same, and never will, no matter how you try to rephrase it.

While false advertising can create unfair competition, that's an entirely different part of the law than antitrust (Sherman Act).

Also, you are incorrect. The Xbox is sold as a device for playing games. Consumers buy games from third parties to play on the Xbox. The iPhone is sold as a device for running apps. Consumers buy apps from third parties to run on the iPhone. Apple never said that you could buy software other than through them. Therefore, there is no meaningful difference here.

>> No one is batting an eye about Apple TV because it is a niche platform that almost nobody actually uses, which means it doesn't cost anybody enough money to sue over. Apple has a single-digit percentage of the connected smart TV market, behind Amazon, Roku, and Google. No, it's because it's not being marketed as a do all, but privately not do all.

Again, marketing has nothing to do with antitrust except as an additional illegal act that can contribute to an attempt to monopolize.

Comment Re: Paradigm Shift (Score 2) 160

But the kicker is that that plastic pipe didn't need to be there. It could have been a length of the same rubber tube present elsewhere in the coffee maker and used for the same hot water.

I ended up replacing it with a $200 coffee maker. And it still annoys me that a one dollar part made me replace a perfectly usable appliance.

You don't own a 3D printer? Turn in your geek card. :-D

But seriously, yeah, broken plastic does tend to be the most common cause of things getting thrown away these days, and it usually isn't worth the time to 3D print a replacement part unless it is something pretty simple. Then again, if there are enough of them, you might get lucky and find that somebody already modeled it. :-)

Comment Re:Why is Microsoft not anti-competitive? (Score 1) 74

"Microsoft takes commission off of every sale, why can't Apple take?" No they don't. Right out the gate, you are wrong. I can buy Xbox games from Walmart, BestBuy, Amazon, GameStop, etc... And Microsoft doesn't get a percentage of that sale.

Only if your Xbox has an optical drive and supports optical drives. Microsoft is phasing those out, at which point they will be taking a commission on every sale, and they're doing it so that they can take a commission on every sale.

There are only two real differences between Microsoft and Apple in this matter. First, Apple's iOS started out as a closed platform, rather than starting out as an open platform, and staying closed is a lot easier than becoming closed. Second, Apple got successfully sued, whereas Microsoft hasn't been successfully sued yet.

Give it time. I'm reasonably certain that the whole point of Epic taking on Apple was that iOS represented only a tiny fraction of Epic's sales, so they could afford to give up those sales for a chance at winning a case that could then be used as leverage against Microsoft, Sony, and Nintendo.

Then there is the basic fact that Microsoft markets the Xbox as a very limited device, but Apple has done the opposite and marketed it as a "do all" device ("There's an app for that", "What's a PC?"). Again, this shows these are two very different matters.

That is certainly true, and that could absolutely weigh in favor of Microsoft being able to restrict their platform in ways that Apple couldn't. Of course, ultimately, the impact on commerce is still mostly the same whether you're talking about a device that only plays games or a device that you use for other things, so while from a consumer perspective, the harm is greater from Apple doing it, the harm to the free market is similar, just at a smaller scale.

This is why no one is batting an eye about Apple's control on Apple TV, because it's marketed and sold as a limited function device.

No one is batting an eye about Apple TV because it is a niche platform that almost nobody actually uses, which means it doesn't cost anybody enough money to sue over. Apple has a single-digit percentage of the connected smart TV market, behind Amazon, Roku, and Google.

Besides, the company that would have the most to gain is Amazon, and they have their own competing hardware that is so vastly much more popular than Apple TV that they'd be better off making the Apple TV experience worse and showing "Better on Fire" ads, rather than bothering to sue over the token losses from not being able to do direct sales on that platform.

Comment Re:Won't matter (Score 1) 263

It's 38.4 cents per gallon in Texas (20c state and 18.4c federal). $200 is 520 gallons per year. If you have a car that gets 25 mpg that's 13K miles per year, or 1,083 per month. Not a large amount of miles, especially if you must drive to work daily and you live more than 20 miles from work - also not hard. I'm not going for big SUVs that get 10 or hybrids that get 60.

The average MPG for passenger cars (which is what nearly all EVs are) sold today is 33 MPG. And the average vehicle drives 13,500 miles per year. That works out to 409.1 gallons, which at 38.4 cents per gallon is $157.09 in gas taxes. At $200 per year, that's already overcharging EV drivers by a large margin, on average.
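For anyone who wants to check the arithmetic, here is the comparison as a quick script. All the figures are the ones stated in this thread (38.4 cents/gallon combined Texas + federal gas tax, 33 MPG, 13,500 miles/year, $200 flat fee), not authoritative numbers:

```python
# Back-of-the-envelope check of the EV-fee-vs-gas-tax comparison.
# All inputs are the figures quoted in this thread, not official data.
GAS_TAX_PER_GALLON = 0.384    # dollars (20c Texas state + 18.4c federal)
AVG_MPG = 33                  # average new passenger car
AVG_MILES_PER_YEAR = 13_500   # average annual mileage
EV_FLAT_FEE = 200             # dollars per year

gallons = AVG_MILES_PER_YEAR / AVG_MPG           # ~409.1 gallons
gas_tax_paid = gallons * GAS_TAX_PER_GALLON      # ~$157.09

print(f"Equivalent gas tax paid: ${gas_tax_paid:.2f}")
print(f"EV flat fee:             ${EV_FLAT_FEE:.2f}")
print(f"Overcharge:              ${EV_FLAT_FEE - gas_tax_paid:.2f}")
```

By these numbers the flat fee overcharges the average driver by about $43/year; the break-even point is roughly 17,200 miles per year at 33 MPG.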

Comment Re: Ok but... (Score 1) 179

In this latter case, the new Chrome owner would continue capturing the exact same data Chrome currently captures, and providing it to Google exactly the same way it already does. The difference is Google would pay the new Chrome owner for non-exclusive access to this data, the new Chrome owner also selling the exact same data to other purchasers, such as Microsoft, OpenAI, Amazon, Facebook, Apple, and whoever else wants to buy it.

Your first mistake is assuming that Google is using data from Chrome for ad purposes. I don't think that's the case. They have analytics tracking code on websites that they use for that. They have no need to get the data directly from Chrome.

So in your ideal world, your browsing data is being sold to advertisers by the browser vendor, which means VPNs won't hide you like they do now. It means potentially that Incognito Mode won't give you clean advertising identifiers and a clean set of cookies per window like it does now. And so on. Basically, you're advocating for a blatantly privacy-raping version of what is currently relatively good privacy-wise.

And this is why browsers are a money pit. Unless you start exploiting users' data in ways that AFAIK no major browser vendor does, the data collected is worthless, because it is just being used for serving personalized search suggestions and similar, rather than for purposes that they can make money from. And this is why I say that the end result of any sale is almost guaranteed to be worse for users — particularly when it comes to privacy.

Comment Re:BS (Score 1) 149

LLMs perform very well with what they've got in context.

True in general, I agree. How well any local tools pick out context to upload seems to be a big (maybe the big) factor in how good their results are with the current generation of models, and if they're relying on a RAG approach then there's definitely scope for that to work well or not.

That said, the experiment I mentioned that collapsed horribly was explicit about adding those source files as context. Unless there was then a serious bug related to uploading that context, it looks like one of the newest models available really did just get a prompt marginally more complicated than "Call this named function and print the output" completely wrong on that occasion. Given that several other experiments using the same tool and model did not seem to suffer from that kind of total collapse, and the performance of that tool and model combination was quite inconsistent overall, such a bug seems very unlikely, though of course I can't be 100% certain.

It's also plausible that the model was confused by having too much context. If it hadn't known about the rest of the codebase, including underlying SQL that it didn't need to respond to the immediate prompt, maybe it would have done better and not hallucinated a bad implementation of a function that was already there.

That's an interesting angle, IMHO, because it's the opposite take to the usual assumption that LLMs perform better when they have more relevant context. In fact, being more selective about the context provided is something I've noticed a few people advocating recently, though usually on cost/performance grounds rather than because they expected it to improve the quality of the output. This could become an interesting subject as we move to models that can accept much more context: if it turns out that having too much information can be a real problem, the general premise that soon we'll provide LLMs with entire codebases to analyse becomes doubtful, but then the question is what we do instead.

Comment Re:BS (Score 1) 149

I could certainly accept the possibility that I write bad prompts if that had been an isolated case, but such absurdities have not been rare in my experiments so far, and yet in other apparently similar scenarios I've seen much better results. Sometimes the AI nails it. Sometimes it's on a different planet. What I have not seen yet is much consistency in what does or doesn't get workable results, across several tools and models, several variations of prompting style, and both my own experiments and what I've heard about in discussions with others.

The thing is, if an AI-backed coding aid can't reliably parse a simple one-sentence prompt containing a single explicit instruction, together with existing code as context that objectively defines the function call required to get started and the data format that will be returned, I contend that the AI is necessarily the problem. Again, I can only rely on my own experience, but once you start down the path of spelling out exactly what you want in detail in the prompt and then iterating with further corrections or reinforcement to fix the problems in the earlier responses, I have found it close to certain that the session will end either unproductively, with the results being completely discarded, or with a series of prompts so long and detailed that you might as well have written the code yourself directly. Whatever effect sometimes causes these LLMs to spectacularly miss the mark also seems to be quite sticky.

In the interests of completeness, there are several differences between the scenario you tested and the one I described above that potentially explain the very different results we achieved. I haven't tried anything with Qwen3, so I can't comment on that model from my own experience. I was using local tools that were handling the communication with (in that case) Sonnet, so they might have been obscuring some problems or failing to pass through some relevant information. And I wasn't providing only the SQL and the function to be called; I gave the tool access to my entire codebase, probably a few thousand lines of code scattered across dozens of files in that particular scenario. Any or all of those factors might have made a difference in the cases where I saw the AI's performance collapse.

Comment Re:I for one am SHOCKED. (Score 1) 52

You don't appear to consider the cost to everyone who didn't buy the glasses, but encounters someone wearing them.

This is the thing that people saying things like "You have no reasonable expectation of privacy in public" seem unable to grasp. There is a massive and qualitative difference between casual social observations that would naturally occur but naturally be forgotten just as quickly and the systematic, global scale, permanently recorded, machine-analysed surveillance orchestrated by the likes of Google and Meta. Privacy norms and (if you're lucky) laws supporting them developed for the former environment and are utterly inadequate at protecting us against the risks of the latter.

And it should probably be illegal to sell or operate any device that is intended to be taken into private settings and includes both sensors and communications, so that even in a private setting the organisations behind those devices can receive surveillance data without others present even knowing, never mind consenting.

Perhaps a proportionate penalty would be that the entire board and executive leadership team of any such organisation and a random selection of 20 of each of their family and friends should be moved to an open plan jail for a year where there are publicly accessible cameras and microphones covering literally every space. Oh, and any of the 20 potentially innocent bystanders who don't think that's OK have the option to leave, but if they do, their year gets added to the board member or executive they're associated with instead.

Comment Re:BS (Score 1) 149

FWIW, I was indeed surprised by some of the things these tools missed. And yes, the worst offenders were the hybrid systems running some sort of local front-end assistant talking to a remote model. Personally, while small context limits get blamed a lot for some of the limitations of current systems, I suspect that limitation is a bit misleading. Even with some of the newer models that can theoretically accept much more context, it would still be extremely slow and expensive to provide all of a large codebase to an LLM as context along with every prompt, at least until we reach a point where we can run the serious LLMs locally on developer PCs instead of relying on remote services.

Even with all of those caveats, if I give a tool explicit context that includes the SQL to define a few tables, a function that runs a SQL query using those tables and returns the results in an explicitly defined type, and a simple prompt to write a function that calls the other function (specified by name) and print out the data it's retrieved in a standard format like JSON, I would not expect it to completely ignore the explicitly named function, hallucinate a different function that it thinks is returning some hacky structure containing about 80% of the relevant data fields, and then mess up the widely known text output format. And yet that is exactly what Sonnet 3.7 did in one of my experiments. That is not a prototype front-end assistant misjudging which context to pass through or a failure to provide an effective prompt. That's a model that just didn't work at all on any level when given a simple task, a clear prompt, and all the context it could possibly need.
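To make the scale of that failure concrete, here is a minimal sketch of the kind of task described above. Everything here is hypothetical (the names, the data, the stubbed-out query); the real experiment used my own codebase and real SQL, but the shape of the job was this simple: an existing, named function returning typed rows, and a one-line request to call it and print the results as JSON.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical stand-ins for the existing code the prompt pointed at:
# a typed row and a named query function already present in the codebase.
@dataclass
class OrderRow:
    order_id: int
    customer: str
    total: float

def fetch_recent_orders() -> list[OrderRow]:
    # In the real experiment this ran a SQL query against the tables
    # whose definitions were also supplied as context; stubbed here.
    return [OrderRow(1, "alice", 9.99), OrderRow(2, "bob", 12.50)]

# The entire task given to the model: call the named function and
# print what it returns as JSON. This is the expected ~3-line answer.
def print_recent_orders() -> None:
    rows = fetch_recent_orders()
    print(json.dumps([asdict(r) for r in rows], indent=2))

print_recent_orders()
```

The failure mode I described was the equivalent of ignoring `fetch_recent_orders` entirely, inventing a different function returning a hacky structure missing some of the fields, and then getting the JSON output wrong on top of that.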

Comment Re:BS (Score 1) 149

As for their ability to infer, I couldn't agree less with that.

Dump an entire code base into their context window, and they demonstrate remarkable insight on the code.

Our mileage varies, I guess. I've done quite a few experiments like that recently, and so far it seems worse than a 50/50 shot that most of the state-of-the-art models will even pick up on project naming conventions reliably, much less follow basic design ideas like keeping UI code and database code in separate packages or preferring the common idioms of the programming languages I was using. These were typically tests with real, existing codebases on the scale of a few thousand lines, and the tools running locally had access to all of that code to provide whatever context they wanted to the remote services. I've also tried several strategies involving CONVENTIONS.md files and the like to see if that helped with the coding style, again with less than convincing results.

Honestly, after so much hype over the past couple of years, I've been extremely disappointed so far by the reality in my own experiments. I understand how LLMs work and wasn't expecting miracles, but I was expecting something that would at least be quicker than me and my colleagues at doing simple, everyday programming tasks. I'm not sure I've found any actual examples of that yet, and if I have, it was faster by more like 10% than 10x. The general response among my colleagues when we discuss these things is open ridicule at this point, as it seems like most of us have given it a try and reached similar conclusions. I'm happy for you if you've managed to do much better, but I've never seen it myself yet.

Comment Re:BS (Score 1) 149

I've done some experiments recently with LLM-backed tools to try to understand the current state of the art. FWIW, my own experience has been that for relatively simple boilerplate-generation jobs they can often produce useful code, but their limit is roughly the capabilities of a junior developer. They make mistakes fairly often. Maybe more importantly, even when their code technically produces the right answer, they rarely infer much about any existing design or coding standards and their code often doesn't fit in with what is already there. I have found that to be the case disappointingly consistently, even in projects small enough to fit the entire codebase in the context, and across a variety of prompting strategies and tools.

So far, I'd say a relatively good session can produce a lot of correct boilerplate without much human intervention other than the prompts themselves. I'm uncertain about whether it really does so significantly faster than a senior dev who could stream the same kind of boilerplate as fast as their fingers could type it, once you take into account the need to proofread and correct the LLM's output, but it was probably faster than a mid and certainly faster than a junior in most experiments I tried. In contrast, a bad session can last an hour or more and still result in discarding the entire output from numerous interactions with an LLM because it has produced literally no code of sufficient value to keep.

Comment Re: Ok but... (Score 1) 179

And that increased competition will benefit everyone via quality increase and price reductions, also at all levels.

Except that it won't, because web browsers are massive money pits that every platform has to have, but nobody wants to do the work to actually develop, so it seems likely that Chrome and Chromium development are a huge net loss, despite any claimed advantages from synergy.

So where previously those losses got buried in the operating costs of a giant company, after a split the separate pieces will still need that revenue to pay the bills. And that decreased subsidization will harm everyone via quality decreases and price increases.

One of us is right, the other is wrong, and it's hard to know which with any certainty, but the general consensus in the industry is that browsers are a money pit, so in the absence of strong evidence to the contrary, I suspect that splitting Chrome from Google will cause significant net harm.

Comment Re:Seems a bit harsh (Score 1) 71

According to Wikipedia, his family sent him to the United States at age 16. As a minor. That right there very seriously contradicts your spin. At 16, you don't choose to leave the country. Your parents *send* you out of the country.

So he's still a criminal for entering illegally, and his parents are accessories as well. Got it.

Being in the U.S. without documentation is not inherently a criminal offense. In this case, assuming the DoJ's accusations are accurate — that he crossed the border illegally — it would be a misdemeanor. However, to my knowledge, those accusations have not actually gone to trial, so this cannot really be assumed. He did have family in the U.S. at the time, so there is at least a nonzero possibility that he had a visa for visiting his family. Either way, the government would have to prove that he crossed the border illegally beyond a reasonable doubt, which as far as I'm aware, it has not done.

Also, no court actually ruled that he was a gang member. A court ruled that the government's hearsay evidence — someone said he was a gang member — was adequate reason to not let him post bail to be released from jail pending trial. There's a rather large difference between that level of scrutiny, where the government's statements are presumed to be correct until proven otherwise, and the level of scrutiny required for deportation, where the government actually has to prove its case, where the person making the claim has to actually testify in court under oath, etc.

At that point it doesn't matter, because he is already an illegal alien which makes him both a criminal and eligible for deportation.

As noted previously, being an undocumented immigrant does not per se make you a criminal, though it does make you eligible for deportation in the absence of eligibility for asylum. Unfortunately for you, in this particular case, he had a standing court order saying that he was not eligible for deportation, yet he was illegally deported anyway.

In the more general case, it depends in part on whether the person has successfully applied for DACA and renewed it every two years, whether the person has been granted asylum, and various other factors. It isn't a simple "yes or no" as you make it out to be. That, plus the risk of mistaken identity, is why every deportation requires a hearing, requires a lawyer to be present to argue for the accused, and requires substantive due process to ensure that we aren't sending people away who shouldn't be deported.

Skipping that process means that I can accuse you of being an illegal alien, and unless you can prove that your Social Security card isn't a fake (with that "not to be used for identification purposes" written right across the top, that's gonna be hard when you're locked up and can't make a phone call), you should enjoy your time in an El Salvador prison. Preventing situations like that is why we have laws that require due process. And the fact that you seem to not understand this basic reality makes me gravely concerned for the future of our country.
