Comment Re:Directly monitored switches? (Score 1) 36

Obviously the black box can only record what the computer tells it is the state of the switches. There's no camera looking at the switches to confirm they actually were moved. No doubt the switches are wired such that a short or an open circuit will not fool the computer into thinking the switch was moved and shut the engines down. But if something caused the computer to think (pardon the expression) the switches had changed state, it would shut the engines down and the flight recorder would dutifully record this change of state.

Suppose for a moment a computer glitch did shut the engines down. The pilot, upon noticing this, asks the copilot about it, and the copilot says no, I didn't shut them down. Knowing he has to do something, the pilot reaches over and flips the switches to off and back to on to try to get the engines going again. The engines did restart, but sadly not in time to prevent disaster.

Comment Re:Don't blame the pilot prematurely (Score 1) 36

Mods, this should not have been rated -1 flamebait! Totally inappropriate mod.

I deeply respect Captain Steeeve and his videos are great. Any nervous flyer should watch his videos (except the Air India ones!). And indeed Captain Steeeve's summary of the report is accurate. And his videos about the cutoff switches are accurate too. The chance of those switches being flipped inadvertently or on their own from mechanical wear and vibration is zero. And indeed the computer shows that inputs from those switches went from on to off and back to on again with timing suggestive of human intervention.

That said, one of Captain Steeeve's YouTube collaborators, Garybpilot, with whom he has done videos about Air India ("Hangar Talk"), has done his own videos on the subject. In one (https://www.youtube.com/watch?v=M0n3iIjvQk8) he mentioned that at Air India there is not one pilot who believes the official report blaming the pilots. These are pilots who knew both of the pilots in the cockpit on that tragic flight well, and they find the suggestion difficult to believe. The Indian investigation board has been mired in political intrigue and controversy the whole time (even before the crash). They were definitely under pressure to exonerate Air India and blame the pilots, and to exonerate Boeing as well. Not that long ago a 787 had both engines shut down during landing. And there is a minor history of electrical anomalies on 787s, including RATs deploying mid-flight for no discernible reason.

If the pilots did not shut the engines down, I don't think we will ever know what actually happened unless there is another accident. And given the problems Boeing has had in recent years (and other planes with engine shutdowns during flight), another accident is a possibility.

Comment Re:Don't blame the pilot prematurely (Score 1) 36

Those words were definitely said, and the other guy responded, "I did not."

I don't know anything about what conspiracy theories are going around on the Internet, but I do know that among some professional pilots there is skepticism. There are no pilots at Air India who knew these two pilots well and believe they were simply suicidal. Plus there was at least one other incident this year with a 787 where both engines shut down during landing. The investigation has certainly been fraught with political tension. Obviously it's in Air India's and Boeing's best interests to blame the pilots.

Comment Re:Google? wtf (Score 0) 86

It's easy to have unique keys in your spreadsheet so you can relate information on different sheets to one another. The problem is that actually doing the processing a SQL server would do trivially is irritating, and then it gets reprocessed slowly every time. Whatever Excel does or doesn't cache, it isn't enough. You can do big complicated things, but they run slowly, and maintaining them is irritating at best. When you do complicated things, either your formulas get long or you wind up having to write code, and often it's both. At that point you're way better off, IMO, doing it in something else, so that at least performance is good when you're done and you never have to screw with editing a long formula.
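To illustrate the gap: here's a minimal sketch (table names, columns, and data all made up for illustration) of the kind of relate-and-aggregate step that is one short query in SQL, but in a spreadsheet turns into a VLOOKUP per row plus a separate SUMIF per key:

```python
import sqlite3

# Two "sheets" related by a unique key, as an in-memory SQLite database.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 15.0), (12, 2, 40.0);
""")

# One query joins the sheets on the key and sums per customer --
# the server re-plans and executes this only when asked, instead of
# recalculating a formula in every cell on every edit.
rows = con.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
```

The same result in a spreadsheet means one lookup formula per order row and one conditional-sum formula per customer, all recalculated constantly.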

Comment Re:Google? wtf (Score 1) 86

But, is 2e7 cells really that many? If I spent 5 minutes brainstorming I could probably think of 20 pieces of metadata you'd want in columns of a spreadsheet tracking financial transactions

That's exactly why it should be in a database and not a spreadsheet. Spreadsheets are best when you have a reasonably limited number of columns. It's also a horrible PITA to use them as a relational database (it's more or less possible, but you don't want to do it) so hiding pieces of that complexity in other sheets in order to limit the data the user interfaces with on the main sheet is just a lot of extra work you wouldn't have to do if you used another solution.

I'm mostly surprised that Google Sheets chokes on what feels like a fairly small amount of data. My best guess is that it's some insane formulas that it struggles with more than the number of cells.

It doesn't really matter where it fails, if Excel can do it and Sheets can't then Google has to admit inferiority to Microsoft which is never a good look.

Comment Re:Those who cannot remember history (Score 0) 206

When in the last two centuries have the French, or the British, or the Germans, or the Belgians, or the Italians moved in a way to unify that continent to stand up to this kind of genocide?

Biden went around Congress to fund a different genocide. Pretty words, but living up to them is another matter.

Comment Re:Europe has itself to blame for this (Score 3, Insightful) 206

Eastern Europe was screaming about how dangerous this was, but they weren't listened to.

One of the most insane things is how, after Russia's surprisingly poor military performance in the Georgian war, the Merkel government was disturbed not that Russia had invaded Georgia, but at the level of disarray in the Russian army, and pursued a deliberate policy of improving the Russian military. They perceived Russia as a bulwark against e.g. Islamic extremism, and as a potential strategic partner. They supported, for example, Rheinmetall building a modern training facility in Russia, and sent trainers to work with the Russian military.

With Georgia I could understand (though adamantly disagreed with) how some dismissed it as a "local conflict," because it could be spun as "Georgia attacking an innocent separatist state and Russia just keeping its alliances." But after 2014 there was no viable spin that could disguise Russia's imperial project. Yet so many kept sticking their fingers in their ears going, "LA LA LA, I CAN'T HEAR YOU!" and pretending we could keep living as we had before. It was delusional and maddening.

The EU has three times Russia's population and an economy an order of magnitude larger. In any normal world, Russia should be terrified of angering Europe, not the other way around. But our petty differences, our shortsightedness, our adamant refusal to believe deterrence is needed, much less to pay to actually deter or even understand what that means... we set ourselves up for this.

And I say this in no way to excuse the US's behavior. The US was doing the same thing as us (distance just rendered Russia less of a US trading partner), and every single president wanted to do a "reset" of relations with Russia, which Russia repeatedly used to weaken Western defenses in Europe. It's one thing for the US to tell Europe "You need to pay more for defense" (which is unarguable), even to set realistic deadlines for getting defense spending up, but it's an entirely different thing to just come in and abandon an ally right in the middle of its deepest security crisis since World War II. It's hard to describe to Americans how betrayed most Europeans feel by America right now. The US organized and built the world order it desired (even the formation of the EU was strongly promoted by the US), and then just ripped it out from under our feet while we're under attack.

A friend once described Europe in the past decades as having been "a kept woman" to America. And indeed, life can be comfortable as a kept woman, and both sides can benefit. America built bases all over Europe to project global power; got access to European militaries for its endeavours; got reliable European military supply chains; and yet remained firmly in control of NATO policy; maintained the world's reserve currency; was in a position where Europe could never stop it from doing things Europeans disliked (for example, invading Iraq); and on and on. Europe, meanwhile, decided that letting the US dominate was worth being able to focus on ourselves. But a kept woman has no real freedom, no real security, and her entire life can come crashing down if she crosses her keeper or he no longer wants her.

Comment Re:AI detectors remain garbage. (Score 1) 32

They clearly didn't even use a proper image generator - that's clearly the old crappy ChatGPT-builtin image generator. It's not like it's a useful figure with a few errors - the entire thing is sheer nonsense - the more you look at it, the worse it gets. And this is Figure 1 in a *paper in Nature*. Just insane.

This problem will decrease with time (here are two infographics from Gemini 3 I made just by pasting in an entire very long thread on Bluesky and asking for infographics, with only a few minor bits of touchup). Gemini successfully condensed a really huge amount of information into infographics, and the only sorts of "errors" were things like, I didn't like the title, a character or two was slightly misshapen, etc. It's to the point that you could paste in entire papers and datasets and get actually useful graphics out, in a nearly-finished or even completely-finished state. But no matter how good the models get, you'll always *have* to look at what you generate to see if it's (A) right, and (B) actually what you wanted.

Comment Re:This is why we use "agents" instead of "LLM's" (Score 1) 109

Yeah, I have used Semantic Kernel to code AI in .NET. I did not give it the capability to tell the current date and time, but that would be a five-minute fix, since getting the current time is trivial. The bigger problem would be ensuring that the offline server the AI runs on has its clock set correctly.

Comment Nope (Score 1) 109

Any modern AI model can be provided "tools" that it can use to perform various tasks or retrieve various information, and the current date and time is an easy one. I can't say why the author and/or ChatGPT seems to have trouble: you can easily set up a tool that returns the current date and time, instruct the AI that "this will return the current date and time," and the AI will automatically invoke that tool whenever the user asks. It's possible ChatGPT simply has so many tools at its disposal that it gets confused about which one to use (for example, searching online for the current date and time instead). Or perhaps OpenAI wants ChatGPT to favor a less specific web-search tool, which can also return the current date/time when asked, but sometimes ChatGPT doesn't search for quite the right thing. As someone who has worked with this: you can provide very specific tools, but I expect OpenAI wants to give ChatGPT broad tool functionality, so it may provide more general tools like web search, which may cause problems.
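The tool-calling pattern described above can be sketched in a few lines. This is a generic illustration of the mechanism, not any vendor's actual API; the registry layout and the dispatch format are made up for the example:

```python
from datetime import datetime, timezone

def get_current_datetime() -> str:
    """Return the current UTC date and time as an ISO-8601 string."""
    return datetime.now(timezone.utc).isoformat()

# Hypothetical tool registry. The description is what the model sees;
# it's how the model learns "this will return the current date and time."
TOOLS = {
    "get_current_datetime": {
        "description": "Returns the current date and time (UTC, ISO-8601).",
        "function": get_current_datetime,
    },
}

def dispatch(tool_call: dict) -> str:
    """Execute the tool the model requested and return its result.

    In a real system the result string is appended to the conversation
    so the model can phrase its answer from it."""
    return TOOLS[tool_call["name"]]["function"]()

# The model, asked "what time is it?", emits a structured tool call
# instead of guessing, and the runtime executes it:
result = dispatch({"name": "get_current_datetime"})
```

The failure mode described above corresponds to the model picking the wrong entry out of a much larger `TOOLS` table (e.g. a general web-search tool) rather than the specific one.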

Comment Re:AI is just limited. (Score 2) 109

I find the various LLMs helpful as a form of search engine, enabling me to drill down to potentially useful information more quickly. At the same time, though, they are far worse than a search engine because they aren't able to actually give you sources to check. When ChatGPT generates a chunk of code and you ask it where it got it from, it will say it didn't get it from a specific site; it just knows this stuff. Which of course ends up wrong half the time. So you end up with wrong material confidently passed off as accurate, ultimately stolen from real human sources. When I was in uni it was drilled into me to list my sources. Why should LLMs be held to a different standard? Google's AI summary does show sources, at least a few, which is good. I always check them.

Even Claude, which is supposed to be geared towards coding, suffers from these same problems. I am trying to do some esoteric Qt 6 programming involving OpenGL, and all the AIs really struggle here because there's a limited amount of source material to steal from. They're certainly not capable of digesting the API documents and synthesizing code to do something without first seeing someone else's code. Claude seems to work best if you use a popular library or framework with lots of online discussion and GitHub code for it: the popular languages and frameworks of the day.

Comment AI detectors remain garbage. (Score 5, Interesting) 32

At one point last week I pasted the first ~300 words or so of the King James Bible into an AI detector. It told me that over half of it was AI generated.

And seriously, considering some of the god-awful stuff passing peer review in "respectable" journals these days, like a paper in AIP Advances that claims God is a scalar field becoming a featured article, or a paper in Nature whose Figure 1 is an unusually-crappy AI image talking about "Runctitiononal Features", "Medical Fymblal", "1 Tol Line storee", etc... at the very least, getting a second opinion from an AI before approving a paper would be wise.

Comment Re:Really? (Score 4, Insightful) 109

automated image pattern matching has been around for decades

The problem is that the LLM only does one trick. When you start integrating other software with it, the other software's input has to be fed in the same way as your other tokens. As the last paragraph of TFS says, "every clock check consumes space in the model's context window" and that's because it's just more data being fed in. But the model doesn't actually ever know what time it is, even for a second; the current time is just mixed into the stew and cooked with everything else. It doesn't have a concept of the current time because it doesn't have a concept of anything.

You could have a traditional system interpreting the time, and checking the LLM's output to determine whether what it said made sense. But now that system has to be complicated enough to determine that, and since the LLM is capable of so much complexity of output it can never really be reliable either. You can check the LLM with another LLM, and that's better than not checking its output at all, but the output checking is subject to the same kinds of failures as the initial processing.

So yeah, we can do that, but it won't eliminate the [class of] problem.
