
Comment Democracy has failed? (Score 0) 259

I think it's safe to say that the European style of democracy has turned out to be a bad idea: every single decision is closely scrutinized and can be vetoed by just about anyone, every industry is regulated to the point where meaningful change is essentially impossible, and new industries are killed before they can even get off the ground. The US should immediately turn away from these kinds of policies before we follow suit and become irrelevant.

Comment Re:This is why we use "agents" instead of "LLM's" (Score 1) 114

Yeah, I've used Semantic Kernel to build AI features in .NET. I didn't give the model the ability to report the current date and time, but adding it would be a five-minute fix, since getting the current time is trivial. The bigger problem would be ensuring that the offline server the AI runs on has its clock set correctly.

Comment Nope (Score 1) 114

Any modern AI model can be provided with "tools" that it can use to perform tasks or retrieve information, and the current date and time is one of the easiest. I can't say why the author and/or ChatGPT seems to have trouble: you can easily set up a tool that returns the current date and time, instruct the model that "this returns the current date and time," and the model will automatically invoke the tool whenever a user asks for it. It's possible ChatGPT simply has so many tools at its disposal that it gets confused about which one to use (for example, searching online for the current date and time instead). Or perhaps OpenAI prefers a less specific online search tool, which can also answer the question but sometimes doesn't search for quite the right thing. Having built with this kind of API myself, I know you can provide narrowly scoped tools, but I expect OpenAI wants much broader tool functionality in ChatGPT, like general web search, which may cause exactly this kind of problem.
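The tool mechanism described above can be sketched in a few lines. This is a hypothetical, minimal tool registry, not Semantic Kernel's or OpenAI's actual API; the names `register_tool` and `dispatch` are illustrative, and in a real agent loop the model, not the caller, would pick the tool based on the registered descriptions:

```python
from datetime import datetime, timezone

# Hypothetical tool registry: maps tool names to a function plus the
# natural-language description the model would use to choose among tools.
TOOLS = {}

def register_tool(name, description):
    def wrap(fn):
        TOOLS[name] = {"fn": fn, "description": description}
        return fn
    return wrap

@register_tool("current_datetime",
               "Returns the current date and time in UTC as an ISO 8601 string.")
def current_datetime() -> str:
    # The trivial part: the host just reads its own (hopefully correct) clock.
    return datetime.now(timezone.utc).isoformat()

def dispatch(tool_name: str) -> str:
    # Stand-in for the agent loop: here we invoke the tool directly,
    # whereas the model would normally select it from TOOLS' descriptions.
    return TOOLS[tool_name]["fn"]()

print(dispatch("current_datetime"))
```

The point of the sketch is that the hard part isn't the tool itself but the selection step: with one narrowly described tool the choice is unambiguous, while a grab-bag that also includes general web search gives the model room to pick the wrong one.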

Comment Plainly obvious. (Score 1) 289

If you’ve ever tried to argue with an LLM, it quickly becomes obvious that they don’t really know what words mean, and this article explains why: they have no real-life experiences to relate those words to. They’re not trying to understand the world; they aren’t even exposed to the real world as a source of data. All they’re really trying to accomplish is arranging words into patterns that humans will find authentic and convincing. That’s the main thing these models are optimized for.

Comment Re:AI is terrible. (Score 1) 55

I've used them, and I'm well aware of what they're actually capable of, which is mostly expanding a small piece of text into a large one without adding anything of value. In most cases the result reads like a high school student trying to BS their way to a required word count without really engaging with the material. Calling it "superhuman" is honestly hilarious.

Comment Re:AI is terrible. (Score 1) 55

Yeah, AI coding as practiced today seems totally misguided. AI would be good for helping people with syntax or spotting typos. Maybe you could use it to produce early demonstration versions of software to help pin down requirements, but using it to actually write code makes no sense for any real product you intend to ship to customers, especially if security is any concern.

Comment AI is terrible. (Score 2) 55

It’s good for a handful of things, but as a rule, content that is largely AI-generated is not useful. AI chatbots and AI-generated answers to questions are not particularly informative and often contain glaring logical contradictions, nonsensical statements, and outright factual inaccuracies. AI in its current form will never be able to think, make decisions, or teach students. It’s as if we created the world’s smartest insect and asked it to raise our children, when it should be summarizing Wikipedia articles and filtering spam.

Comment Re:I'm in two minds about this (Score 3, Informative) 20

This all boils down to a company trying to quash speech it doesn't like; there's really no other way to interpret it. They posted unlisted but public videos instead of making them private, if they really wanted them private. They intentionally put them on a publicly accessible website, intending them for consumption by potential customers. When students started sharing the page and videos to warn about the capabilities of the company's product, the company immediately moved to suppress that speech by removing the videos, and then sued when that didn't work. They lied to the judge about whether the videos were publicly viewable (though Hanlon's razor suggests this could have been their own incompetence in not understanding YouTube), and then lied about their lying when caught.

Comment Re:Already an option for 'advanced users' (Score 3, Interesting) 36

The problem is that alternate app stores would have had to verify all of their apps with Google, which defeats the purpose of being alternate. This move would let them exist again in their current form, but it raises the question of how this will differ from the existing mechanism for letting alternate app stores install apps, which Android has supported for quite some time.
