I went to that "cool" website. I clicked the "Invest now" button. It links here: https://www.hardt.global/inves... which goes to their error page.
You did not really expect those people to put a lot of effort into their Potemkin village?
I understand how Americans fall for this nonsense, but Europe has a well-developed railroad system and efficient short-distance flights.
Why would the Europeans fall for this inefficient, ineffective, economically insane, dangerous, unproven, ridiculous scam?
These projects are never about anything technical or practical; they are just vehicles for funneling taxpayers' and gullible investors' money into the pockets of some shifty people. Not really different from the frequent occasions where some municipality pays $$$$$$ for some shitty "artwork", made with very little effort by someone who knows someone who...
I wonder if this is just run-of-the-mill corruption
It is that, plus the activity the money is burned on allows for some PR where politicians can claim they are doing something "innovative". Just like with solar freaking roadways, which anyone with a few years of physics education can easily debunk as inefficient, and which have never worked anywhere, yet time and again new sponsored projects of the same type pop up somewhere on the planet.
The US patent office just stated they would accept patent applications for inventions made by AI. So please go ahead and invent something, make it something creative, new, but also profitable.
Unsurprisingly, the generated responses read like PR articles from contemporary "startups" that want to do something-something-AI without any idea of how to technically implement their vision. And of course, the "ideas" sounded awfully similar to what one has seen in fictional literature.
So I guess the patent office will just receive AI slop in addition to the often dysfunctional "inventions" it has already been receiving from human applicants.
PayPal's agentic commerce system pushes product discovery through AI platforms like Perplexity where recommendations, checkout, and fraud checks all happen inside someone else's controlled environment.
Straight-up-the-middle schlock aimed at the lower half does far better, and AI can already do that. In fact, for a large portion of the populace, nuance and subtlety of thought isn't just wasted on them... they're actively antagonistic towards it, and popular culture has made that a laudable stance. Willful stupidity.
The same is true for computer software, and it is not "popular culture" that applauds AI slop, but tech-incompetent "decision makers", who are all in favor of mediocre code and do not even want "their developers" to be innovative, highly skilled, or diligent optimizers. Software enshittification is set to be the top trend for years to come.
So better use a slogan like "AI does not think like humans" to headline a critical review of contemporary AI capabilities.
Maybe we should just say "AI thinks like a calculator".
Technically correct, but that might be a bad choice: people experience calculators as something that always provides correct results, and it would take lots of additional sentences to then explain why the most probable continuation of a text, chosen correctly according to the weights, may still contain very wrong statements.
AI is a computer programmed to give the most believable answer tailored to its apparent audience. Perhaps we need to inform people that the answer they get from AI may be different from the ones it gives other people who ask the same question.
If other people use the same prompt and the same sampling parameters with randomization disabled, and have no prior context stored with the LLM, the LLM will generate the same response for them.
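The point can be illustrated with a minimal sketch of the two decoding modes. The vocabulary and weights below are invented toy values, not from any real model: greedy (temperature-0) decoding always picks the most probable token and is therefore reproducible across users, while randomized sampling is not.

```python
import math
import random

# Toy next-token distribution over a tiny vocabulary (hypothetical values
# for illustration; a real LLM computes logits from learned weights).
vocab = ["cat", "dog", "fish"]
logits = [2.0, 1.0, 0.5]

def softmax(xs):
    # Convert logits into a probability distribution.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def pick_greedy(logits):
    # Greedy decoding (temperature 0): always the single most probable token,
    # so every user with the same prompt gets the same output.
    probs = softmax(logits)
    return vocab[probs.index(max(probs))]

def pick_sampled(logits, rng):
    # Randomized sampling: the chosen token depends on the RNG state,
    # so different calls (or different users) can get different outputs.
    probs = softmax(logits)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Greedy decoding is deterministic across repeated "requests":
assert all(pick_greedy(logits) == "cat" for _ in range(100))
```

With a fixed seed even the sampled path becomes reproducible, which is the analogue of two users sharing identical parameters and no prior context.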
The problem with this line of thinking is that you are ignorant of the fact that we CAN say what is not thinking, and we've narrowed down the problem quite a bit. It is generally agreed that chocolate bars do not think. Rocks do not think. Pocket calculators do not think. We know what thinking is not, even if we can't define it fully.
Present questions to and corresponding responses from contemporary LLMs to random people on the street, and ask them if they think that generating these responses required thinking. You will find that a vast majority of people will answer "yes" to this, even more so if they are not told the responses were generated by a computer. You and I may know how to spot the hints where LLM-generated responses differ from what a human would typically respond with, but that does not matter: If you want to educate people on the risks of over-confidence in LLMs, you cannot convince them by starting with an "AI cannot think" statement, which contradicts their personal experience.
people who don't understand technology and can be deceived into thinking that LLMs really are a magic box, and will not question their outputs.
We both certainly agree that this is a huge problem with how LLMs are marketed today. I'm just proposing to not use the claim "AI can't think" as an argument towards those "who don't understand technology", because it will not be a convincing argument to them.
Failing to accept that marketing is being used to deceive and manipulate (starting with Sam Altman), and allowing LLMs to have things like 'reasoning' in their model names, is a problem. No different from Musk naming his software 'Full Self Driving' when it clearly isn't.
I think there is a big difference here: A deceptive marketing name like "Full Self Driving" evokes in everyone a pretty precise expectation of what that thing supposedly does (but does not), and it is also pretty easy to precisely define what "Full Self Driving" means (or should mean). The word "think", on the other hand, has been used extensively for pretty mundane processes, like a chess program displaying "thinking..." while calculating its next move, and nobody has a precise expectation of what that means, or would feel deceived on finding out the chess program does not emulate a human brain to find its next move.
So better use a slogan like "AI does not think like humans" to headline a critical review of contemporary AI capabilities.
To be awake is to be alive. -- Henry David Thoreau, in "Walden"