Comment Re:It's literally named... (Score 1) 54
That's sort of their naming convention for the model sizes: Haiku Sonnet Opus Mythos
SpaceX owns xAI/Grok and has a bunch of servers in their datacenters that aren't profitable.
As for cooling in space - that's a problem, but they only need to radiate away as much energy as they absorb. It's not impossible.
I made the swap from Fitbit to Garmin as well. They don't have as many "apps" available, but on a watch it's not a big issue. The battery life and activity tracking is just far superior.
A couple years ago I was chatting with a guy that used to work at Fitbit - he wore a Garmin
It's quite telling that a company having ethics designates them a supply chain risk.
Especially after it was reported that a US bombing of a school hit a target selected by an AI tool, and that the government theoretically already had policies in place that aligned with their ethics.
No, but it's actually one of the states in the United States of America which IS invading other countries.
We can easily have both and cut money from stuff like invading countries
There's a good site with multiple studies on this question: https://transitcosts.com/
Basically there are a lot of reasons, but a few key ones are:
- The US doesn't have centralized organizations for planning and designing rail systems. We mostly have small regional systems, and each stretch of rail and each station is bespoke, requiring an outside (expensive) company to design and build it.
- The bidding process and accountability are a bit broken. Contractors aren't required to show line items, and penalties for overruns and delays push them to submit higher bids. The design/build process is also poor, in part due to the previous point.
- Environmental restrictions cause massive delays, there's no proper coordination to expedite review, and it's easy for people against a project (or sometimes even for it) to submit complaints that make everything later and more expensive.
I think you've got it mixed up. When tax rates go down, tax revenue goes down. It's pretty obvious, and we've seen the extreme in places like Kansas, where they reduced taxes dramatically and had massive budget shortfalls.
Likewise, states and municipalities frequently raise money by raising taxes.
You lumped politicians into one category, but it should really be two: those that run functional governments, and Republicans/libertarians who think lowering taxes will result in greater revenue.
Yes, I did claim that, because it's true. LLMs don't need a written (informal and incomplete) spec; they can determine that from their training data.
I entered the discussion when you said an LLM would not be able to find "obvious vulnerabilities".
Kind of unfortunate that you don't even have an academic view, if the view you claim looks at the facts but reaches the opposite conclusion from what the facts suggest. Guess you're just behind on your reading.
Sounds like you have a very academic view of the situation.
It's unlikely you could write software to find *all vulnerabilities* except in very basic cases, but it's very likely you could write software or train a model to find lots of vulnerabilities.
Yes it would require context in some cases - solution: Provide context!
LLMs can even deduce some of the intent automatically. There's plenty of data in the training explaining how a browser should work and previous vulnerabilities
Well, it's never happened before, so that makes it pretty newsworthy.
Claude: I see multiple hostile messages and an invite involving your (soon to be ex) wife is scheduled. Found a few suggested hikes via the AllTrails app - should I proceed with creating a day plan?
LLMs are good at matching patterns - they produce output based on inputs. You might have some weird, warped definition of "capable of insight" that this doesn't fit, but in reality the results are insightful. They don't natively do math (these days they'll usually offload that to a different model), but that's not what we're looking at here.
There's lots of research on vulnerabilities in software. For example, in general, if a user input allows you to read data from the stack through a buffer overflow, that's a vulnerability. I could list dozens of similar examples and LLMs are trained on data including these. No formal specification is needed and remember that you can feed in enough code for the LLM to have the context needed to determine if it's a valid vulnerability.
This recent report from Mozilla shows that the results can be good, that these are real-world use cases, and that they are finding vulnerabilities. Continuing to pretend it's not a viable technology just makes you look like you're plugging your ears going "la la la".