Comment I switched to Garmin (Score 1) 19
So glad I switched from Fitbit to Garmin. Google has done everything possible to lose me as a customer.
Well, you just said flat out that the user pays $150/month in lieu of ISP + power.
Note that last month my ISP+Power was less than $100/month (thanks to solar offsetting it).
"One big reason the XFRA model works is that the average American home only uses about 40 percent of its electrical capacity,"
Yes, sure, on the individual level, a house may average 40 percent and the 200A is just peak demand and/or anomalous residences, but I guarantee that the grid is *not* sized for everyone to continuously pull down 200A all the time.
When power demand comes under pressure during prolonged weather events, you get rolling blackouts precisely because the grid is not sized to handle the load, even though *technically* everyone is operating within their individual 'capacity'.
Power grids are oversubscribed, and this concept pretends that they aren't.
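Back-of-the-envelope, the mismatch looks like this. All of the numbers below are made-up assumptions for illustration (the diversity factor in particular is invented, not a real utility figure); only the 40 percent average comes from the article:

```python
# Illustrative sketch of grid oversubscription; numbers are assumptions.
HOMES = 100
PANEL_AMPS = 200           # per-home service panel rating
AVG_UTILIZATION = 0.40     # the "40 percent" average from the article
DIVERSITY_FACTOR = 0.50    # assumed: fraction of total nameplate capacity
                           # the shared feeder is actually built to carry

total_panel_capacity = HOMES * PANEL_AMPS                  # 20,000 A nameplate
feeder_capacity = total_panel_capacity * DIVERSITY_FACTOR  # 10,000 A real

typical_load = total_panel_capacity * AVG_UTILIZATION  # 8,000 A on a normal day
heat_wave_load = total_panel_capacity * 0.80           # everyone near peak

print(typical_load <= feeder_capacity)   # True: average demand fits
print(heat_wave_load > feeder_capacity)  # True: rolling blackout territory
```

The point of the sketch: everyone can be inside their individual 200 A rating while the aggregate still exceeds what the shared infrastructure was built for.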
Per the article, the homeowner has to pay to have this unit at their house, but the monthly fee also goes toward covering ISP service.
So you don't get anything for free, but they claim that, in theory, you would get a lower monthly bill than ISP + utility.
More like the opposite.
They would want to divert resources away from your usage and into a locked enclosure that you would not have access to.
It would not upgrade your home infrastructure one bit. You even still have to pay them for the privilege, though they argue it will be less than you would have otherwise paid to an ISP and utility company.
Can anybody say what this app does (other than report your GPS location)? Does it do anything you can't do with a web browser and going to
I agree with you, but the big tech companies *seem* to be winning their arguments, even when the plaintiff shows output that includes the plaintiff's own watermark on something that looks like the plaintiff's assets.
So it's at least more pragmatic to show that the acquisition and likely redistribution of the works while torrenting were a problem without even bringing up the whole AI ingest argument.
The problem is whether the AI "learning" is analogous to "learning" in the human sense.
For example, there was an existing library implementation of some function, but it didn't work the way I wanted and it was too deep to reasonably modify and the function wasn't *that* involved, so I decided to roll a new one.
Now I typed a function name and the LLM offered to tab-complete the whole function body. The body for a non-trivial function matched, verbatim, the very library that didn't work for my purposes, down to the name choices. I compared it out of curiosity but discarded it, since the whole point was that I needed it to work differently than the one reference implementation I could find.
A human might have vaguely recalled how it worked and you could maybe see the structural resemblance in what they produced, but for a function this significant no human "learning" would have resulted in such a verbatim copy.
I said that because it was consistent with the general classwork expected of them in their history and English classes. They were expected to do write-ups with this sort of nuance, yet carving out some 'yay AI' fluff in the middle seemed just wholly stupid.
It’s obvious the criterion is going to be that all answers the AI gives are "not woke". This has absolutely nothing to do with safety; it is censorship.
He is suggesting an asset tax, but you can evade it by not using the asset as collateral for a loan.
I would think the main threat is from fooling users into running some downloaded executable code.
In the 90s, the school systems were kind of left to fend for themselves. The vast majority of the computers in my schools were systems the area companies were scrapping, but donated them on the way out. A decent part of my programming class was trying to salvage 20 out of 24 systems that a business donated that wouldn't boot. They spent what budget they could on a handful of computers capable of running encarta for the library.
In the 2000s, things started shifting a bit: in a college course we were handed 'donated' copies of Visual Studio, but the teacher said those were for us and weren't going to be used for class at all.
Since 2010, things have gotten a bit worrisome as a lot of the big tech have started getting awfully opinionated and wanting to 'help' kids learn to code. Education is all well and good, but when the big corporate interests get actively involved and prescriptive, things drift toward indoctrination more than education.
At least with 'learn to code', a skill that needed significant development was at least theoretically being served, though there was plenty to be worried about there; with the LLM scenario, it's pretty much just indoctrination. Whether an LLM works or not is not something that takes a significant amount of time to sort out.
As an example, my kid was asked to write a brief piece on what excitingly awesome thing they are looking forward to doing with AI, as part of an "AI challenge" at school sponsored by a local tech company. Not a critical assessment, not an evaluation of the nuanced benefits and drawbacks, nothing on helping them understand how to best use it; just a blatant puff piece about how awesome AI is/would be for something. Basically soliciting marketing fodder and awarding three kids a couple hundred bucks. It was going to be a grade, so they had to do it and take it seriously.
Those guys can't even vet their own social media posts, and those are ~100 characters of ASCII text. The chances of them being able to meaningfully review a multi-gigabyte binary file are exactly zero.
... other, more obscure platforms are also supported, but if you want to run NetHack on them, you'll have to compile it yourself from source. Kind of a baller move if you ask me