Comment Re:Where does innovation come from? (Score 1) 102
AI has learned the language
AI has learned the pattern of the language. It's a small but very meaningful difference.
Not unlike many people. https://blog.thelinguist.com/p...
An LLM can only give answers based on what it was trained on, i.e. the past. It creates nothing new; instead it rapidly pulls together solutions from existing knowledge.
AI has learned the language from those code examples and repositories. What it does with the language is often (not always, mind you) original.
LLMs have learned English (and other languages) from vast amounts of written text. It is easy to use LLMs to create work that is original (say, prompt it to create a poem in Shakespearean style about AI utilizing a tennis racket to paint a house - or whatever). Similarly with software development, AI has learned the syntax and coding styles for different programming languages, and utilizes that information to create original code.
Seeing as the AI is just using code scraped from public sources, including public GitHub, GitLab, etc. repositories, how are any copyright licenses being handled, I wonder?
Stop right there; this basic premise is false. AI has learned the language from those code examples and repositories. What it does with the language is often (not always, mind you) original.
LLMs have learned English (and other languages') syntax from examples, and it is easy to use them to create work that is original (say, prompt it to create a poem in Shakespearean style about AI utilizing a tennis racket to paint a house - or whatever). Similarly, AI has learned the syntax and coding styles for different PROGRAMMING languages, and can utilize that information to create original code. The syntax of a programming language is not copyrighted.
... it won't materially change the system except improve the quality of ads by verifying your identity.
I don't think it will affect the QUALITY of the ads themselves. It just makes the ad space easier for Facebook to sell, and cheaper for marketing people to target the group they want to part from their money.
LLMs rely upon previous use of language. Words can have multiple meanings, of course, but we have context to sort it out. But images? What is the basic unit there? How does the AI recombine these image sub-components into something new? Anybody know?
There are a lot of scientific papers written about the research done by these companies. See e.g. here: https://openai.com/research/in...
The energy would really have to be free for this process to make economic sense.
In many markets (not just neighborhoods), there is nowadays so much wind power on a windy day that electricity IS free. That's why they are coming up with ideas for how to use that cheap energy.
The solution presented in TFA converts hydrogen to methanol, which is then converted to gasoline. It is wiser to build the infrastructure to use that hydrogen directly. In fact, they are already doing exactly that, here in Finland at least: https://gasgrid.fi/en/developm... And not just in Finland, but extending to the Baltics and other Nordic countries as well.
There are two contexts here.
In one, if there's a test that the system can use on the results, it should work quite well.
In the other, if there's no test the system can use to validly evaluate the results, you will probably amplify garbage.
Exactly. And in the case presented in TFS, the system created Python programs to solve problems. It would then run those programs to see which ones worked, and used that information to improve itself.
That one is called "overfitting". All it does is amplify GIGO.
If you set aside your hate for anything LLM-related, could you please explain why this amplifies GIGO? I understood there is a clear reward function, which should steer it towards a working solution.
In a not so dissimilar way, AlphaGo played millions of games against itself, learning winning strategies along the way and eventually beating humans. If it worked for those kinds of neural networks, why wouldn't it work with LLMs?
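To illustrate the kind of loop I mean, here is a rough Python sketch. It is only my illustration of the generate-run-score idea, not the actual system from TFS; generate_candidate() is a hypothetical placeholder for whatever asks the LLM for a new program.

# Minimal sketch of a generate-run-score loop with an explicit reward,
# assuming a hypothetical generate_candidate() that asks an LLM for a program,
# optionally seeded with the best program found so far.
import subprocess
import sys
import tempfile

def score(program_source, test_cases):
    """Reward = fraction of test cases the generated program passes."""
    passed = 0
    for stdin_text, expected_output in test_cases:
        # Write the candidate program to a temporary file and execute it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(program_source)
            path = f.name
        try:
            result = subprocess.run(
                [sys.executable, path],
                input=stdin_text, capture_output=True, text=True, timeout=5,
            )
            if result.returncode == 0 and result.stdout.strip() == expected_output.strip():
                passed += 1
        except subprocess.TimeoutExpired:
            pass  # a hung program simply scores zero on this test case
    return passed / len(test_cases)

def improve(generate_candidate, test_cases, rounds=10):
    """Keep only candidates that raise the reward; feed the best one back in."""
    best_source, best_reward = None, -1.0
    for _ in range(rounds):
        candidate = generate_candidate(best_source)
        reward = score(candidate, test_cases)
        if reward > best_reward:
            best_source, best_reward = candidate, reward
    return best_source, best_reward

The reward comes from actually executing each candidate against known test cases, so as long as those test cases are sound, the loop selects for programs that work instead of amplifying garbage.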
The summary has not one but two examples of groups critically studying dark energy.
Yes, but they are still starting from the hypothesis that there IS dark energy and that it has been accelerating the expansion.
From TFA: "This adjustment showed that not only had dark energy changed over time...". I.e. they are trying to FIX dark energy calculations with adjustments.
Read my original post. In a very similar way, science tried to fix the geocentric model with epicycles and deferents, resulting in REALLY odd orbits. And somehow those still seemed to fit observations. It required the more radical shift to the heliocentric model to "beautify" the physics back to simple formulas that did not need layers of adjustments. https://en.wikipedia.org/wiki/...
Like physicists are not trying to provide evidence for or disprove the dark energy model.
Actually, no, they are not. Meaning that dark energy is now mainstream physics, and if you dare to think otherwise, you are sidelined: it is very difficult to get funding for studies, or to publish in peer-reviewed journals. Hence, the only ones trying to prove otherwise are considered crackpots.
One plausible theory is that dark energy is simply not constant, the constant value is an assumption, but only that.
Again, epicycles... Instead of trying to create a mathematical model for dark energy, one can explain the supernova observations themselves with e.g. a variable speed of light theory (i.e. in the past light travelled slower/faster). It cannot be easily tested or proven, but by throwing away the assumption that the speed of light is constant over time, it is easy to come up with theories in which dark energy is not needed.
But regarding the speed of light, we have now dug ourselves into a hole by tying the units themselves to c: the metre is defined as the distance light travels in 1/299792458 of a second. So, from now on, by definition, the speed of light IS FORCED to be constant (if the speed actually changed, it would show up as the metre, or the flow of time, shifting instead).
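To spell out the lock-in (this is just the current SI definition restated, not anything from TFA):

    c \equiv 299\,792\,458~\mathrm{m\,s^{-1}}
    \quad\Longrightarrow\quad
    1~\mathrm{m} \equiv \frac{c \cdot 1~\mathrm{s}}{299\,792\,458}

Measured in SI units, c can never come out different; any real change would be absorbed into the realized metre (or into the flow of the second) instead. That is exactly the hole I mean.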
What they fail to mention is that the total cost was more than if 0% had come from renewables.
It's not about costs. It's about saving the frigging planet. A concept that may be too hard to grasp for many of your kind.
We haven't even eliminated magstrips. We still have them around for backup. An attacker can disable a chip reader by making a special card that applies epoxy to the contacts when it's inserted, which you can do with e.g. a dremel, forcing subsequent users to fall back to the strip.
Theoretical scenario, no? Going that route, the attacker can fill the whole damn card slot with epoxy, and no card, be it magnetic stripe or chip, can be inserted at all.