
Comment Re:On Star Phone Home (Score 1) 37

In my younger and more foolish days I had a Pontiac, and I opted out by taking wire cutters to the Surveillance module's power cables.

At the time I was actually more concerned with remote unlock hijacking than tracking but still I didn't trust GM.

All together now: WE TOLD YOU SO.

If I had to guess, doing that 20 years later would disable the ECU.

Comment Re:Larger teams will move faster than smaller team (Score 1) 85

No, it's more about how teams work. Teams have a scope. They don't typically go beyond that scope. So if my team owns the Foo and Bar modules, I work on those. But if there's little important work on Foo and Bar, but a lot of important work to be done on Baz, it's generally organizationally difficult for us to work on Baz. Typically we need to be lent out by our manager and seconded to the other team. Which can be a lot of red tape and politics.

Now if you're imagining some alternate world where programmers can be moved at will, then we're already one big team instead of multiple small teams.

And no, a smaller team doesn't win every time. If it did, then the smallest possible team would be a team of one, and we'd all do that. There are sweet spots, which depend on the organization, the work to be done, and the importance of that work. For some that's bigger, for some smaller. I've definitely worked on teams that were too small for the work, and on teams that were too big.

Comment Re:Larger teams will move faster than smaller team (Score 1) 85

They can, under some circumstances: if the scope of what they work on is too small to keep the whole team busy, or if the work they would be doing is significantly less important than other work to be done. Having them in one large team makes it easier to move people to the more important work and can get critical features built faster. In that case it may not be more work done overall, but it may move the important stuff along quicker. If larger teams weren't useful on some level, we wouldn't have teams at all; we'd all be individuals.

Comment Re:Depends on your goals, I guess. (Score 1) 85

In the end- good engineers with sufficient experience and support will get stuff working with any methodology. Bad ones or ones insufficiently supported will fail with any methodology.

There are some things that agile works well for, but it's really limited to domains where you can quickly build something tangible for feedback and you have stakeholders willing and able to give frequent feedback. UIs are a good example. It's a horrible fit for anything that requires actual research, or that can't be shown frequently to customers with little technical knowledge (in other words, anything that actually needs weeks or months of backend work, algorithm writing, or infrastructure to be built).

Comment Re:One behemoth isn't a trend (Score 1) 85

The problem with that is that the skills needed to manage and the skills needed to do the real work (let's take programming as an example) are pretty distinct. Someone can have both, but most people tend to have one or the other. Forcing those without the practical skills into doing that work doesn't actually help the team; it just slows everyone down. And if they get on the critical path of any project, you can be royally fucked.

There are a couple of ways to solve this problem:

1) Larger team sizes. This can work if the team owns enough to keep everyone busy, but it can devolve into effectively independent subteams calling themselves one team while being inconvenienced by each other.

2) Each manager managing multiple independent teams. This can work if it doesn't overload the manager. The biggest problem is when the manager decides one team is more important and doesn't support the other(s) enough. This works better the closer the teams are, as it requires the manager to know fewer sets of collaborators and less politics.

Comment FlashAttention (Score 2) 46

I did some math the other day on running local AI models and the net result is most homes can't afford to run the current median models.

They don't just need 80GB of VRAM, they need newer GPU architectures: recent enough to be supported by CUDA, to be supported by PyTorch, etc.

These problems may well be solvable with more clever use of hardware, MoE, acceptable quantization, etc., but today you're in for several grand and something north of 100W idle to use what is effectively a $20/mo plan.
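The back-of-the-envelope math above can be sketched out. This is a rough illustration, not the commenter's actual calculation: the 70B model size, the 20% overhead factor, and the $0.15/kWh electricity rate are all assumptions; the 100W idle figure and $20/mo plan price come from the comment.

```python
# Rough check on the claims above: VRAM needed for a mid-size
# open-weights model at several quantization levels, and what 100W
# of idle draw costs per month versus a $20/mo hosted plan.

def vram_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Estimate VRAM in GB: weight storage plus ~20% for KV cache/activations."""
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    return weight_bytes / 1e9 * overhead

# A 70B-parameter model (assumed as a plausible "median" size):
for bits in (16, 8, 4):
    print(f"70B @ {bits}-bit: ~{vram_gb(70, bits):.0f} GB VRAM")

# 100 W idle, 24/7, at an assumed $0.15/kWh:
idle_watts = 100
kwh_per_month = idle_watts / 1000 * 24 * 30   # 72 kWh
monthly_cost = kwh_per_month * 0.15
print(f"Idle power alone: ~${monthly_cost:.2f}/mo vs a $20/mo hosted plan")
```

Even a 4-bit quantization of a 70B model lands around 40GB, which is why 80GB cards come up, and idle power by itself eats a meaningful fraction of the hosted plan's price before you've bought any hardware.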

A small enterprise can afford local, so that's good. We paid more than that for one SGI machine back in the day.

The point of the exercise was to plot our position on the curve. We're at something like 2006 YouTube, when nobody could afford the drives or bandwidth that YouTube/Google was giving away for free (aka with VC money). Eventually hard drives got cheaper, people got gigabit at home, Flash video was replaced with H.264/HTML5, phones could stabilize video locally, etc.

So it looks like these AI companies need to stay alive for about seven more years giving away product at a loss, or at least highly oversubscribed, to turn a profit. Hence the low token allowance, the banning of OpenClaw, etc.

On the other hand, I read the blog of a security researcher yesterday who found an exploit with (IIRC) Claude, tried to refine the PoC, but got dinged on "out of tokens" before he could finalize it. So he just deleted the work and moved on.

It sounds like they're trying not to lose money at quite such a velocity while finding a sweet spot where people don't just declare the product too underpowered to use.

A global energy depression may well take out the supermajority of the companies that believe they can burn investment money for seven more years. There is circular financing money, then there is real return on capital money. One is to fool the markets, the other is grounded in current physics.
