If your boner for microkernels lasts more than 25 years, you should probably consult a physician.
I recommend a look at Andrew S. Tanenbaum's baby:
MINIX 3 is a free, open-source operating system designed to be highly reliable, flexible, and secure. It is based on a tiny microkernel running in kernel mode, with the rest of the operating system running as a number of isolated, protected processes in user mode. It runs on x86 and ARM CPUs, is compatible with NetBSD, and runs thousands of NetBSD packages.
Still a very small market. Let's see: they can spend resources working on the next card that could make them millions, or spend the same resources supporting a small market that might make them a few hundred thousand dollars. If you ran the company, which would you choose?
Are you retarded? How is publishing documentation the same effort as evolving a GPU design?
PS. Using profanity just makes you appear to be an illiterate idiot.
Right, and shitting made up numbers out of your arse makes you a fucking genius...
How is a private company obliged to support your project?
Because "to live in society, while being free of it is impossible." (Lenin)
Now go fuck yourself!
EA is not redeemable.
They could, if they try really, really hard...
They're the biggest pile of greedy shit in the gaming industry.
signed.
I was clearly in your second group a few years back...
Maybe you just were not sexy enough, hm?
Except of course that it IS mathematics.
And the obligatory: xkcd
There will always be some outliers/exceptions, but it should be possible to define the rules and vocabulary of a given system specifically enough, possibly by breaking it down further into facets/perspectives and then mapping out the relations and constraints.
So then you could have many ontologies, which will gradually converge over time. I'm talking long-term, of course. The annotation part could also require consensus, or vetting, by multiple recognized entities. All in all, the result would still be more or less a fluid body, but then so is everything around us, as the only constant in our world is that everything is changing.
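To make that consensus/vetting idea a bit more concrete, here's a toy Python sketch (all the names and the quorum size are made up for illustration): assertions about concepts are just triples, and one only enters the shared ontology once enough recognized entities have vetted it.

    from collections import defaultdict

    QUORUM = 2  # made-up number of recognized entities that must vet an assertion

    class Ontology:
        def __init__(self):
            self.accepted = set()            # vetted (subject, relation, object) triples
            self.pending = defaultdict(set)  # triple -> entities that have vetted it so far

        def propose(self, triple, entity):
            # an entity proposes/vets a triple; accept it once the quorum is reached
            self.pending[triple].add(entity)
            if len(self.pending[triple]) >= QUORUM:
                self.accepted.add(triple)

    o = Ontology()
    o.propose(("dolphin", "is_a", "mammal"), "library_A")
    o.propose(("dolphin", "is_a", "fish"), "troll_B")      # stays pending: only one vetter
    o.propose(("dolphin", "is_a", "mammal"), "library_C")  # second vetter -> accepted
    print(o.accepted)  # {('dolphin', 'is_a', 'mammal')}

A real system would obviously weight who the vetters are and allow assertions to be revised later, which is exactly the "fluid body" part.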
And I agree with you that ML and annotation/classification & co. are complementary tools. And it will take a lot of work to get end users to semantically enrich their output.
Where I disagree is in your definition of a model, which is not necessarily an incorrect representation. It's just a representation; the level of detail varies from use case to use case.
So anyway, the big question is how to get there...
And if I've misrepresented rockmuelle, or misunderstood your question, qpqp, it's because I don't have an exact model of what you're saying.
Come now, don't blame everything on me!
What I meant by an exact model is, of course, a predictable and, in a sense, deterministic process, inasmuch as that is possible for the given case.
Even with machine learning you create a representation of the surveyed system, but that model will (currently, and in most cases) only ever be an approximation.
By mapping concepts, their (often ambiguous) meanings, usage scenarios and other relations from different areas to each other, supported by these approximations, it should in time be possible to avoid the issues related to the fuzziness and create a truly smart and adaptive system.
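As a rough illustration of that mapping (purely hypothetical vocabularies, with a crude token-overlap score standing in for an ML-derived similarity), the approximation proposes candidate links and explicit constraints prune the known false friends:

    def similarity(a, b):
        # crude token-overlap score, a stand-in for a learned similarity model
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb)

    vocab_medicine = ["cell", "cell membrane", "culture"]
    vocab_prisons  = ["cell", "cell block", "culture"]

    # explicit constraint: this pair is a known false friend across the two areas
    forbidden = {("cell", "cell")}

    for a in vocab_medicine:
        for b in vocab_prisons:
            score = similarity(a, b)
            if score > 0.5 and (a, b) not in forbidden:
                print(f"map {a!r} <-> {b!r} (score {score:.2f})")

The one mapping that survives here ("culture" <-> "culture") is itself still ambiguous, which is where the vetting/consensus step from above would come in.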
Of course, our universe (as far as we know) is (inherently?) non-deterministic. And obviously, if that is so, you'd have to somehow cheat (e.g. be able to observe our universe from more than the 4 dimensions we can perceive) to get a truly exact model, assuming that some (reachable) abstraction point is deterministic.
What I'm suggesting is that, with some effort, it should be possible for us to come up with something capable of understanding (like you did with my question, despite lacking an exact model ;) ). And while ML is quite crude and more like a sledgehammer, an accurate definition is more like a chisel. At least with respect to the model(s).
Assuming such a system is created, it will have limitations similar to humans' with regard to the ability to understand something, as we do not know everything, as far as I am aware.
But anyway, the librarians didn't have the technical capability to create the kind of multi-dimensional mess we currently can, so maybe these things we're talking about just have their own math, and we just need to figure out the proper rules for it. It's all metadata anyway, but currently I guess the closest thing to an exact model is in the hands of the NSA...
why did it fizzle out?
I think it's too early to say that it did. Scholar has 10.5k hits for articles from this year alone...
Where there's a will, there's a relative.