Comment Re:If it's anything like what craiyon generates (Score 1) 89
That was the best and most accurate description of the process I've seen, well done.
Only creative aspects of code are copyrightable, not functional aspects. An algorithm for sparse matrix transposition is going to be extremely functional and have little or no creative aspect and thus quite likely not protected by copyright.
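To make that concrete, here is a toy sketch (my own illustration, not from any particular codebase) of a sparse transpose whose expression is almost entirely dictated by function, with little room for creative authorship:

```python
def transpose_sparse(matrix):
    """Transpose a sparse matrix stored as a {(row, col): value} dict.

    A purely functional routine like this has essentially one natural
    expression: swap the row and column of every nonzero entry.
    """
    return {(col, row): value for (row, col), value in matrix.items()}

# Example: a 2x3 matrix with two nonzero entries
m = {(0, 2): 5, (1, 0): -3}
print(transpose_sparse(m))  # {(2, 0): 5, (0, 1): -3}
```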
I tried "two hydrogen atoms and one oxygen atom". Image was pretty, but it made it clear that it can't count. Great for inspiration, but not good enough for making production images.
Language models require a fairly large number of parameters before they can spontaneously learn to count. Dall-E mini uses a tiny language model, so it can't count.
Dall-E mini is absurdly small compared to Dall-E, Dall-E 2, Imagen, etc.
It also seems highly speculative to consider an AI model based on lots of other people's work (with no regard to licensing incompatibilities) to not itself be a derived work. It hasn't been tried in court yet, but I wouldn't bet my business on it.
Copyright only applies to the creative aspects of a work, not 'merely functional' aspects. Generic function names are by definition not creative, and neither are standard algorithms and common code expressions. Since these are the only things the models produce, there isn't likely to be much risk of copyright violation.
Reagan is a leftist compared to the modern day party. Imagine a republican today saying he wants amnesty for people in the country illegally. Reagan said that in 1984. See my signature for another great point.
Reagan wanted Republicans to have good odds of winning Florida in the future, and converting illegal Cuban immigrants to citizens was the easiest way to do so.
What they probably experience is more analogous to anxiety than pain, though the severity of it is anybody's guess.
They have physiological distress that results in an aversion response. They are on the level of a simple state machine (about 30 neurons for processing and responding; the rest of the neural circuits relay sensory information in and motor commands out).
Those 30 neurons are being used for motion planning; shelter assessment; threat assessment; sexual assessment; and food assessment.
But I do recall tests done with either lobsters or crabs where approaching a region in the test environment caused a "shock". The subjects reversed course, and then practiced avoidance of the area.
Call it what you want, but if the sensation made them not want to have it reoccur, it seems like it would be pretty far down the analogous-to-pain path.
Plants and bacteria have reflexive responses to aversive stimuli. Bodies that are physiologically functional but brainless can respond to aversive stimuli.
You can program robotic cockroaches to respond to various stimuli in a similar manner to cockroaches (scuttle from light, protective response to 'injury', etc.).
This is separate from 'experiencing' pain.
Lobsters and crabs are 'meat robots' - like insects, they simply have too few neurons for subjective experience and their responses are simply preprogrammed reactions like 'if then' trees.
A lobster only has 100,000 neurons. That is low enough that it is a 'meat robot'. While it can respond to aversive stimuli - it certainly can't "feel". It doesn't have the neurological machinery for it.
This is far less sophisticated than the software we are using for image and sound processing in machine learning. It is also a trivial number compared to what is used in GPT-3 for writing generation.
I could see extending such protections to octopodes. But lobsters and crabs, this simply makes no sense.
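The 'preprogrammed if/then' comparison above can be sketched as a plain lookup table (purely illustrative, not a biological model; the stimulus and action names are made up):

```python
# Toy sketch: a fixed stimulus -> action mapping, the kind of
# preprogrammed reaction tree the comment compares simple nervous
# systems to. No internal state, no subjective experience, just lookup.
REACTIONS = {
    "bright_light": "scuttle_to_shadow",
    "shell_contact": "tail_flip_escape",
    "heat": "reverse_course",
}

def react(stimulus):
    # Unknown stimuli fall through to the default behavior.
    return REACTIONS.get(stimulus, "continue_foraging")

print(react("heat"))  # reverse_course
```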
I wonder if the high minimum spec requirements are to account for it being alpha quality and/or unoptimized?
I think it is because they are initially only supporting GPUs with Vulkan drivers.
I thought those games required Windows, not SteamOS/Linux?
Steam uses Proton (Valve's compatibility layer built on WINE) to run games without Linux ports on Linux.
As an enterprise developer, I care more that you are using correctly named variables, that you are following the enterprise coding style guide, that you are creating unit tests around your code to verify in CI that it is working correctly, that you have low coupling and high modularization, and that your software satisfies the complete set of functional and non-functional requirements.
The style it has in the competitions is the style used by programmers in those competitions; it is simply imitating their style. If you train it on the style you want, it could do that instead.
Just like GPT-3 does writing style imitations - it can imitate whatever you want.
I'm curious how the problems got transformed into a "spec" the "AI" could comprehend.
There wasn't any transformation; it was fed the raw text. It uses natural language understanding to interpret the text description of the problem.
It looks like they are providing a description of the problem and a set of inputs and outputs (effectively some unit tests). Is the problem considered "solved" by the AI if the unit tests pass or does the code need to be proven correct? Is the code even comprehensible to a human?
If the requirement is just code that passes the (who knows how limited) unit tests this is not so impressive (or useful).
It has to pass a list of hidden unit tests with inputs that are intended to break incorrect and algorithmically inefficient implementations.
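A minimal sketch of that kind of hidden-test judging (the function names and the sample problem here are hypothetical; the real judge also enforces time and memory limits):

```python
def judge(solution, hidden_tests):
    """Run a candidate solution against hidden (input, expected) pairs.

    `solution` is a function mapping the raw input string to an output
    string; a submission is accepted only if every hidden case matches.
    """
    return all(solution(inp) == expected for inp, expected in hidden_tests)

# Hypothetical problem: output the sum of the integers on one line.
tests = [("1 2", "3"), ("10 -4", "6")]
candidate = lambda s: str(sum(map(int, s.split())))
print(judge(candidate, tests))  # True
```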
Just looking at the first question and a few high-scoring answers, it seems to me the competition is not particularly challenging.
The competitions have easy and hard problems, and each problem gets an Elo rating. That problem has an Elo rating of 800, which means it is a very simple problem.
This problem has an Elo rating of 1100 (so fairly easy):
https://codeforces.com/contest...
This one is Elo 1600, so somewhat difficult
https://codeforces.com/contest...
This one is 2300, so difficult
https://codeforces.com/contest...
And this one is 2500, so extremely difficult
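For context, these are the textbook Elo expected-score numbers for a hypothetical 1500-rated solver against problems at those ratings (Codeforces' internal rating model may differ in detail; this is just the standard formula):

```python
def expected_score(solver_rating, problem_rating):
    """Standard Elo expected score: the probability, under the Elo
    model, that the solver 'beats' the problem."""
    return 1.0 / (1.0 + 10 ** ((problem_rating - solver_rating) / 400))

# A 1500-rated solver facing the problem ratings discussed above:
for elo in (800, 1100, 1600, 2300, 2500):
    print(elo, round(expected_score(1500, elo), 2))
```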
I'm curious how those 10 were chosen.
They were split by date. The earliest data is training, the next period is test, and the period after that is validation; then they evaluated on the most recent competitions after the validation set.
The particular example shown is fairly straightforward and there's not much optimization potential. How would it handle something that might need A* search or a custom hashing function?
The example was likely chosen because the question and implementation are more easily understandable by non-programmers.
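For readers who haven't seen it, the A* search mentioned above looks roughly like this (a minimal grid sketch of my own, not the competition code):

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid of 0 (free) / 1 (blocked) cells,
    using Manhattan distance as the heuristic.
    Returns the shortest path length, or None if the goal is unreachable."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start)]          # (f = g + h, g, cell)
    best_g = {start: 0}
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # 6
```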
In this competition they scored around the top 20th percentile:
https://codeforces.com/contest...
In this one they scored around the 44th percentile:
"If it ain't broke, don't fix it." - Bert Lantz