Comment I already figured it out 10 years ago.... (Score 0) 95
AI's killer app is these node graph style 3d engines like my gameapi builder tool at https://meshpage.org/meshpage_...
...so anyone planning to build AI technology in Europe will have to deal with the issue that they cannot use pirated databases to train their AI models...
once copyright owners can order censorship, next in line will be religious groups ordering censorship of nude pics and the sex industry...
not very good FOSS if their user base consists mainly of movie pirates...
the crowds could just use my technology....
bullet avoided. at least it wasn't meshpage.org
I think the marketing area is to blame for the lack of innovation in software. I spent 10 years getting the technology ready and no one is using it; I got $6 worth of sales from 10 projects. Clearly marketing fails to deliver the tech to customers... not a software development problem...
> I'm not sure how that alone detects infringement
This is based on the same test that courts use to detect infringement. Usually it works the other way around, i.e. you remove the non-infringing material and leave only the infringing area. But we've inverted the filtering process from the courts' tool selection and got a result where only the valid non-infringing material is left untouched.
This inversion is warranted when you want to ensure that your product is non-infringing. The courts are trying to prove that infringement happened, and that isn't suitable for verifying that a product is non-infringing. Thus the inverted process is better while you're designing new products for the mass market.
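The two filtering directions can be sketched roughly like this (the file names and the `licensed` flags are made-up placeholders, not any real dataset):

```python
# Hypothetical sketch of the two filtering directions described above.
dataset = [
    {"name": "own_artwork.png",    "licensed": True},
    {"name": "scraped_movie.mp4",  "licensed": False},
    {"name": "purchased_font.ttf", "licensed": True},
]

# Court-style test: isolate the suspected infringing material.
infringing = [item for item in dataset if not item["licensed"]]

# Inverted test: keep only the properly licensed material and
# check whether the product still works with just that.
clean = [item for item in dataset if item["licensed"]]

print(len(infringing), len(clean))  # 1 2
```

Same test, opposite direction of filtering: the courts isolate what infringes; the product designer isolates what doesn't.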
Because the legal landscape for AI database collection has significant risks, we decided to reject AI technology outright. The AI vendors need to understand that they had an easier market entry because professional authors are rejecting the area for copyright reasons. These legal rejections are dangerous: while they reduce competition in the area, they also indicate that there are significant legal risks involved. Basically, professional authors are voting with their feet and leaving the area to would-be criminals or people who can absorb $300,000 damage awards.
We have developed an easy way to execute a test for detecting copyright infringement in AI databases:
1) remove all content that was not properly licensed from the original owner of the material
2) if your product still works, you're OK. If it doesn't work, you're infringing.
Basically the training phase of every AI system fails this easy test.
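The two-step test above can be sketched like this; `train` and `evaluate` are hypothetical stand-ins for a real training pipeline and a product quality check, not real APIs:

```python
def train(corpus):
    # stand-in: the "model" is just the material it was trained on
    return set(corpus)

def evaluate(model):
    # stand-in: the product "works" if enough material remains
    return len(model) >= 3

# Illustrative corpus: name -> properly licensed from the owner?
corpus = {
    "licensed_text_a": True,
    "licensed_text_b": True,
    "scraped_novel":   False,
    "scraped_lyrics":  False,
}

# Step 1: remove all content that was not properly licensed.
licensed_only = [doc for doc, ok in corpus.items() if ok]

# Step 2: if the product still works, you're OK; otherwise infringing.
model = train(licensed_only)
print("OK" if evaluate(model) else "infringing")  # infringing
```

With the unlicensed material stripped out, the toy model no longer has enough material to pass the quality check, which is the failure mode the test is designed to expose.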
The consequence of this easy test is that content creation must happen before AI system creation. So all investment in the content is still included in the resulting AI system, and there is a clear causal relation between content creation and AI system creation.
This time-based causality is a good way to ask for damage awards from courts.
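The time-based causality claim boils down to a simple check: every piece of training content must predate the model's training date. A minimal sketch, with made-up dates:

```python
from datetime import date

training_date = date(2024, 1, 1)  # assumed model training date

# Illustrative content items with their creation dates.
content = [
    ("song.mp3",  date(2019, 6, 1)),
    ("novel.txt", date(2023, 3, 15)),
]

# Causality holds only if all content was created before training.
causality_ok = all(created < training_date for _, created in content)
print(causality_ok)  # True
```

Any item dated after the training date breaks the causal chain, which is exactly the kind of gap a damages claim would point at.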
If AI is overhyped, then pick the first non-AI technology that uses GPUs? I just happen to have one ready at https://meshpage.org/
> Either literally every artist, sculptor, musician, writer, and photographer since the dawn of their arts has been an infringer
This is how large companies where copyright compliance is important handle the issue, with two different rules:
1) GLANCE, i.e. your engineers should turn their eyes away when family members show off competitor devices.
2) NO COMPETITOR DEVICE ALLOWED ON THE PREMISES, i.e. if an engineer purchases a competitor device, that engineer is fired immediately.
These rules together ensure that competitor features do not get into your development pipeline. It's just a question of how strictly your employees can enforce those two rules.
Then evaluating some other technologies would be more appropriate. For example, my https://meshpage.org/ has not received more than $6 of investment from customers and $56 of investment from spinoffs. (No, that's not millions or even thousands; it's a plain $6.)
Given the project's 10-year development history, we expected a slightly higher level of investment. But guess not. Play with your AI sandboxes and hope for the best.
But I guess we made a serious mistake by rejecting AI early in our project. The copyright issues with collecting terabytes of databases simply weren't suitable for a one-person project like ours.