Comment Re:HEED the WORD of the LORD (Score 1) 122
An android is by definition male. The clue is in the etymology. Andro-id, meaning male being. A female version would be an ogynid. Asimov discussed this in a conversation about I Robot.
I'm as certain as it's possible to be that LLMs are not the path to more accurate AI. They will be a part of it, but statistics just doesn't work that way. They are as accurate as they will ever be. Something else is needed to correct their errors, and to the best of my knowledge nobody knows what that is as yet. These Companies keep claiming they've found it, but every time their claims don't stand up to any scrutiny. That's not to say it won't happen in the near future; nobody can predict when that sort of breakthrough will happen. But equally it may not happen for decades, if ever. We don't understand how human thought and reasoning works well enough to emulate it yet.
So my answer was based on the models we have now. That's the question that was asked after all. Predicting what may happen in the future is a fool's game, but right now the models we have aren't good enough for anything but the simplest of problems, and only usable on problems that a human has already solved. Using them is the equivalent of looking up the answer to a question every time, and not bothering to remember that answer or understand the subject on the basis that it will always be available to look up. This is also making predictions about the future that may or may not come true.
Your machinist is fine for as long as a CNC lathe is available but without it may be useless. That's still better than a programmer relying on current AI because that lathe can at least do every job conceivable in that domain. In the programming domain AI can't yet and may never be able to.
I think the point here is that it can be useful for someone who isn't really a coder, to produce just about usable simple things. But if you are a coder it's not really faster. Maybe in the short term, but in the long term it's worse.
As somebody who's been coding for 40 years, in multiple languages, I already have my own libraries full of the vast majority of stuff. These libraries are not just fully tested already, I understand them thoroughly and so can write the small changes to them needed for each project. I only need to test the changes, not the main body of code. If I use LLMs for it, I need to test every single bit of it every single time, and because I'm not learning the algorithm myself I make myself dependent on the LLM going forward. Even on those occasions where I don't already have something similar in a library it's better to write it and understand it myself rather than rely on an LLM, just in case I need something similar again in the future. And if I do the code I've saved will be fully tested in advance.
So in summary an LLM can be useful if you don't code often, and can speed up work in the short term. But using it will prevent you becoming fully experienced and will mean that you are always slower than someone like me no matter how many years' experience you get, because using it prevents you from gaining that experience. And it will always be useless for larger, more systems level projects because it is too unreliable and if you don't have my level of experience you won't be able to spot the thousands of subtle bugs it would put in such a project.
Not that most systems level projects aren't already full of these subtle bugs, but that's a whole different problem caused by Companies not wanting to pay people like me what I'm worth.
And then Microsoft will hire them all, at reduced salaries because there's a surplus, and laugh while all their foolish competitors fail. It's a dastardly plan but it might just work.
You can also, in most cases, use the 'back' button on your browser after clicking on something, then use the 'Don't recommend channel' option. It doesn't always work; sometimes the recommendations refresh. I haven't tested, but I think this might be down to the amount of time it takes you to realise the video you clicked on is shit. So the quicker you realise this the better. There should really be the option available while watching a video as well, but whoever said that Google were good at UIs?
On quite a few occasions I've had friends decry how they clicked on something problematic, or something they don't want to see, and now their feeds on whatever Social Media site are full of such stuff. I still find it a little surprising just how many people don't realise how this stuff works. On all the sites I've looked at there is an option to train the algorithm by telling it if you liked something or not. If you don't use this option, the only information it has to go on is what you have clicked on. So if you particularly like something then use the option that says something like 'more like this please', and if you dislike something use the option that says something like 'less like this please'. Surely this is just obvious? Apparently not, because humans, even the intelligent ones, tend to be daft in a lot of ways.
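To make the point concrete, the feedback logic being described can be sketched as a toy score table. Everything here, including the weights, function names and topic names, is invented for illustration; no real site publishes its actual algorithm:

```python
# Toy sketch: why explicit 'more/less like this' feedback beats click-only signals.
# The weights and names below are made up purely for illustration.

def update_score(scores, topic, clicked=False, liked=None):
    """Update a per-topic preference score.

    A click alone is treated as weak positive interest; explicit
    'more like this' / 'less like this' feedback outweighs it.
    """
    s = scores.get(topic, 0.0)
    if clicked:
        s += 1.0          # weak signal: you opened it
    if liked is True:
        s += 5.0          # strong signal: 'more like this please'
    elif liked is False:
        s -= 5.0          # strong signal: 'less like this please'
    scores[topic] = s
    return scores

scores = {}
update_score(scores, "ragebait", clicked=True)                # accidental click
update_score(scores, "ragebait", clicked=False, liked=False)  # explicit dislike
update_score(scores, "woodworking", clicked=True, liked=True)

print(scores)  # {'ragebait': -4.0, 'woodworking': 6.0}
```

If you never use the like/dislike buttons, the accidental click is the only signal the system has, and 'ragebait' ends up with a positive score. One explicit dislike flips it firmly negative.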
'The whole idea of "spaghetti" code is that programmers can't easily understand what it does. I am not sure that is problem for AI.'
That is in fact a major problem for 'AI' because it doesn't understand anything. It is not parsing the code, understanding how it works and then working out how to add new features. It's looking at how programmers have solved a problem in the past and copying that. By its nature, it will make code more spaghetti-like and never less. It's also (at least so far) shown itself unable to understand concepts such as scope, and how a new feature may interact with already existing features. Again, this lack of understanding will lead to more spaghetti code.
An aspect that will make this problem even worse is the poor quality of the bulk of code out there. Projects that have been carefully considered, designed and optimised from first principles are extremely rare. Much more common is poorly designed and written code that's been fixed after publication, and usually in a hurry as critical bugs have been discovered. And since memory and storage got cheaper this problem has only got worse. Spaghetti code is the norm in this industry and so this is what the 'AI's are mostly trained on. Expecting them to write better than the average human programmer shows a complete misunderstanding of how these things work.
So what we've got is a search engine that's almost as good as Google used to be (not as good because sometimes it just hallucinates the results) while using a hell of a lot more energy than normal Google search does. Luckily, there are search engines out there that are as good as Google used to be without all the ads, and are not using so much energy to do it. So what exactly is the point of a substandard search engine that uses far too much energy?
I would place a bet that they already do.
As per the article, they are setting up a licensing system before resuming exports. I would guess that this licensing system will be intended to prevent them being sold on to the USA, with any instances of this being grounds for license revocation. I would not expect this to take very long to set up, but perhaps a slight delay is intended to focus minds and dissuade other Countries from allowing them to be sold on to the USA.
'The System' is specifically designed to ensure this. All those built-in checks and balances designed to prevent one rogue element destroying the System through Extremism work precisely by giving those two Parties the bulk of the power, and to make it a waste of a vote to vote for anyone else. That's their purpose.
Most people think of this as a good thing, and if your System is working effectively then it is. You don't want wild swings in policy every few years, because even if a rogue actor with nefarious intentions doesn't get control those swings will shake your Society apart anyway. A System without those checks and balances wouldn't survive 100 years, never mind the nearly 250 the USA has lasted so far.
The problem arises when the System in place is not working. Those checks and balances that have kept it together for so long work to prevent any necessary changes as well as unwanted changes. There needs to be an effective way to make the changes needed without breaking the checks and balances. Unfortunately I don't think the USA has that, and its problems have got worse over the last 50 or so years without anyone managing to make the necessary changes. Now those problems look like they are going to break the checks and balances, and with them the 250 year old System. What will replace it is anyone's guess at this point.
In short, the voters are only doing what they are supposed to do within the System they live in. The System for keeping them in line and voting for the 2-Party System includes misinformation and propaganda in its toolkit. People en masse have inertia, especially when the news and education systems are designed to feed that inertia. It's very difficult to break out of the mindset you've been conditioned into, and there are unlikely to be enough people doing so to break the System. Those few with influence who have done so are never enough against the weight of Establishment-think arrayed against them.
Much as I despise Trump, in a lot of ways he's right. The system urgently needs drastic changes if it's to survive. It was rigged from the start, and that rigging is what will destroy it. I just don't think Trump intends to replace it with anything better, rather with a worse, even less Free system than the one Americans have been conned into calling Freedom over the last 250 years.
You and the first reply to you are conflating different things. Yes, there is a whole load of corruption going on, pretty much all of it much more corrupt than this specific example. Money paid to Tesla or Starlink, while all involving Musk, is not connected to this specific bit of corruption in any other way. So we have numerous examples of contracts being given directly to Musk Companies, often when they aren't the best for the job. Blatant and obvious corruption. Whereas here, we have NASA explicitly confirming, in writing, something that everyone knew anyway, that when Starship works NASA will pay to use it.
No money has been paid to SpaceX in this case. No money has even been promised that wouldn't have been forthcoming regardless of Musk's involvement in the Government. All that's happened is that NASA has in effect given a guarantee to any potential investors that there will be a return on that investment, provided SpaceX can deliver a working Starship. That was already a given, this just makes due diligence a little easier by explicitly guaranteeing sales when the product is ready.
So there is an element of corruption, in that this guarantee was only given because Musk is involved in the Government and probably wouldn't be given to any other Company in the process of developing a launch vehicle. They would be expected to get it working before its use was specifically guaranteed.
I was not directly commenting on all the other corruption Musk is a beneficiary of, although I alluded to it when I said that this is minor compared to the other, far worse, corruption occurring. I would expect techie types to understand scope a little better than you've shown here. Your 'tsunami of corruption' is outside the scope of my 'element of corruption' comment.
This is a little unfair. SpaceX is not some random person off the street, they already provide this service for NASA with different vehicles. In effect this is just stating the bleeding obvious. IF SpaceX get it to work effectively then NASA will use it for launches. It's mostly pointless to even say that, it's simply a given. I doubt anyone doubted that for a second.
However, there is an element of corruption here, albeit a very small one in the scale of all the other corruption going on. Musk has financial issues. Tesla's share price is sinking and their sales are going down. It's unlikely that this will reverse any time soon, and may even lead to bankruptcy. Musk's personal credit rating, and those of his Companies, will have taken a hit from this. People will be much less willing to either lend him money or invest in his ventures. SpaceX may not be able to fund this vehicle to a working state because of this. So he gets NASA to state explicitly what everyone already knows implicitly, that they will use this rocket when it works. This should help to improve SpaceX's credit rating at least a little.
It's not really necessary, and it probably wouldn't have happened if Musk wasn't involved in the Government. It's a minor form of kickback Musk gained through backing Trump. So it is corrupt to an extent. But it's only minor corruption compared with all the rest of the corruption going on right now.
Oh, I see. You've come up with one counter example and extrapolated that for all tests everywhere. Great. It's sunny today, so I guess it must be sunny here every single day, since one example covers every possibility. And even in your one example, you have made a fundamental error. When a student takes a second, third or even five hundredth SAT they are not retaking the exact same test. They are given different questions. When these LLMs are tested they are tested on the exact same questions that they failed last time, and have now been specifically trained to get right. This is an entirely different thing to retaking a type of exam that you failed the first time. If I were given the exact same test twice I would expect to get 100% the second time.
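The difference can be shown with a deliberately silly toy example. The questions and the 'model' below are invented; real benchmarks are far larger, but the contamination effect is the same:

```python
# Toy illustration (all questions and names invented): a 'model' that
# memorises the answer key to questions it failed will score 100% on a
# retake of the *same* paper, while a fresh paper exposes the gap.

failed_paper = {"2+2?": "4", "capital of France?": "Paris"}
fresh_paper  = {"3+5?": "8", "capital of Spain?": "Madrid"}

memorised = dict(failed_paper)  # 'training on the test set'

def score(model_answers, paper):
    """Fraction of questions in 'paper' the model answers correctly."""
    correct = sum(model_answers.get(q) == a for q, a in paper.items())
    return correct / len(paper)

print(score(memorised, failed_paper))  # 1.0 -- retaking identical questions
print(score(memorised, fresh_paper))   # 0.0 -- new questions, no transfer
```

Reporting the first number as 'the model passed the exam' is exactly the sleight of hand being described: a student resitting the SAT faces fresh questions, so their score means something.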
And yes, computers are very good at doing calculations. It doesn't require an LLM for this; we've had pocket calculators since 1972. An LLM has access to the calculator function on the computer, and yet they still sometimes get the calculations wrong. And getting the calculations right has many times been reported by these companies as passing a Math Olympiad, when the difficult bit was done in the prompt design by a human. This is exactly what they have done.
That's complete nonsense. The mistake you've made is that you have assumed that people get to retake tests over and over again until they get it right. In most cases they don't, they get one or at most two resits. And even then the fact that they required a resit is recorded. They don't get to keep trying until they pass no matter how many attempts that takes.
'AI' however does, at least in the internal Company tests. This is why the results published by these Companies are vastly different to the results published by external benchmarkers, who don't give the 'AI' multiple tries. And not only do these Companies give them multiple tries, they often even deliberately cheat. For example, the 'Math Olympiad' tests are first transposed from the natural language they were written in to symbolic language easier for the 'AI' to understand, which is actually the hardest part of the question. Then the 'AI' simply does the calculations, which we all know that computers are very good at. This is then reported as the 'AI' being able to win a Math Olympiad.
There is nothing in it to breach. It's simply a declaration of intent, not a set of constraints. By refusing to sign it the USA and UK have given the impression that they don't care about it. And public criticism will occur anyway if any Company behaves unethically, Chinese Companies included. If regulatory agencies choose not to hold Companies to a set of rules, public complaint is mostly irrelevant. Is Trump's Government likely to care any more than China's about that? It's the optics of this that look bad. It's choosing to make your Country look bad on the World stage for no actual gain, and signalling that you don't care about the common good. It's foolish. It's exactly what I would expect of Trump, but the UK joining in disappoints me while not surprising me.
How can you do 'New Math' problems with an 'Old Math' mind? -- Charles Schulz