No, but it does sound like malfeasance. Which is a felony.
Some of them can write.
I'm not sure. I suspect that this is going to largely be "an invention looking for an application" for a decade...just like the laser was.
The problem is we've never before been able to create alloys as a tightly controlled gradient of multiple metals. Now if it could print a sharp disjunction between the materials, and especially if it could also print an insulating layer, then the applications would be obvious, but this is a very different thing. Different metals, e.g., conduct both heat and electricity differently. What will the effects be if one can print a gradient that oscillates between two different metals? How well can alloy crystal properties be predicted?
I think this is something that has a LOT of potential, but what those potentials actually are may well take quite a while to figure out.
Yeah, but either could just sell that part of their business, or even just decide it wasn't worth the effort and shut it down without warning.
FWIW, I seem to recall something like that already having happened, though I can't give a specific reference. The only real answer is to make backups BEFORE you put the data out to the cloud, and keep the backups (and test them periodically).
Trusting a(nother) company to guard your data has a long history of failures. But so does relying on local backups. You need both.
Which is why they need to be searchable by Google. But compound this with the fact that most of them can't write coherently. And many of them don't really want to spill their secrets, just to prove that they have them. (This is the basis of many companies, so don't laugh too hard at them. Also remember the astronomer who published a coded note when he first sighted Uranus, so that he could claim priority if someone else finished writing their paper on it before he did. That still happens, if not so blatantly.)
I only ever found one of their journals of any value whatsoever (Computing Surveys). Their "Collected Algorithms" was lousy. If I were interested in representing polynomial equations in Fortran it would sometimes have been useful... but I haven't done that since college, decades ago.
Occasionally I'll follow a link that ends up in the ACM members-only section. Sometimes it looks interesting, but back when I could follow it through to the full article, only once was it at all interesting, and even then it wasn't useful.
If you've got a set of Knuth's books, then I don't think the ACM has anything to offer.
WRT ACM articles linked from Google: they are there, if only as indirect links [not sure], because every once in a while I end up on one of their "you can only read the abstract" pages. I never regret not being able to read further, because I *was* a member and *could* read the linked articles for a while. Every single one was worthless (for my purposes).
The only useful thing I've ever gotten out of the ACM site that I didn't find in Knuth was a date algorithm. And I already had most of it down. And their version still didn't deal with pre-Gregorian dates (except as if they had been Gregorian dates). (To be fair, Julian dates are rather different. Still...) Also it didn't properly handle dates BC, even in Gregorian terms.
Well, the guy who wrote the algorithm was still really tight on conserving RAM. It *was* a very concise algorithm, and it worked without problems (in Gregorian terms) back to 1 AD. It was also (IIRC) nigh unintelligible because of embedded magic numbers. When unpacked, it basically just said: skip the leap year in century years unless the year divided by 400 is an integer. But he did it in one line of Fortran.
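I don't know which ACM algorithm that was, but the classic Fliegel–Van Flandern day-number formula has the same flavor: the whole century rule packed into magic numbers on one line. A Python sketch (my own unpacking, not the original Fortran):

```python
def is_leap(year):
    # The rule unpacked: every 4th year is a leap year, except
    # century years, except centuries divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def julian_day_number(year, month, day):
    # Fliegel-Van Flandern style one-liner: shift the year to start
    # in March so the leap day lands at the end of the year, then let
    # integer division with magic constants do the century arithmetic.
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)
```

`julian_day_number(2000, 1, 1)` gives the standard JDN 2451545, and it behaves (as proleptic Gregorian) back to 1 AD; like the version described above, it does nothing sensible for Julian-calendar or BC dates.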
Then you're doing it wrong. For the class of problems I'm interested in, each process needs an input queue, plus the ability to detect (somehow... there are several plausible means, so I'm not choosing) what other processes are around and how to write to their input queues. And you need to be able to examine your input queue, without blocking, to tell whether there's a waiting message. All fairly straightforward. You don't synchronize the processes; each one runs as far as it can with the inputs it has available and then waits for additional input. Messages should be as short as possible, but that's true whenever you copy an array. And the queue should hold either a deep copy of whatever is being exchanged or a reference to an immutable instance. No scheduling, per se, except that you might want to be able to adjust the priority at which the processes execute.
FWIW, I'm currently implementing something that operates in this way, and most of the tools I'm using are excessively slow BECAUSE they are capable of a lot more than I'm asking of them. I'm only planning on having around 8-16 processes because I've only got about 8 processors. I expect that most of the processes will keep busy all the time without needing new inputs. The messages I'm planning on passing all have the form (action, key, value); for my purposes key will always be either a string or an integer, and value will be an array of stuff. The kind of stuff will vary depending on what kind of process is receiving the input and what action is to be performed. (Which is why I really don't want a static type.) Generally, however, it will begin with a few numbers, then a few (usually 4) arrays of structures (without internal pointers), and then possibly a string. This kind of thing is quite simple to handle in a language with dynamic types, and a real pain in languages with static types.
Do note that this means most of the messages will be longer than is optimal, and that the length will not be consistent. It's the kind of thing that marshal, pickle, YAML, or JSON can handle trivially. No class serialization needed.
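For instance, a hypothetical (action, key, value) message of the shape described above round-trips through pickle with no class machinery at all:

```python
import pickle
import json

# A made-up message: key is a str or int; value is a heterogeneous
# array -- a few numbers, then arrays of pointer-free records,
# then possibly a string.
msg = ("update", 42, [3, 1.5, [(1, 2.0), (3, 4.0)], "tail note"])

wire = pickle.dumps(msg)          # variable length, and that's fine
assert pickle.loads(wire) == msg  # round-trips exactly, no schema

# JSON works too when the parts are JSON-representable, at the cost
# of tuples coming back as lists:
wire2 = json.dumps(msg)
```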
This means that each process totally controls its internal synchronization without external conflicts. Thread synchronization is not a problem. Scaling is trivial. Efficiency... well, I'm not so sure of that. I need to set things up so that most processing happens without IPC, and I'm not sure how possible that will be. I may need to go all out and find or build an even simpler IPC mechanism. (I think what I'm currently planning on has TCP/IP sockets buried within the implementation. I'm using localhost, so that probably gets translated into UNIX domain sockets, but even that may not be as fast as possible. OTOH, I don't want the input queues to be bounded by a pre-determined amount of RAM unless I must.)
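If it comes to that, Python can sidestep the TCP/IP stack entirely rather than trusting localhost to get translated: socket.socketpair() hands back an already-connected pair of UNIX domain sockets. A toy demonstration (not a queue implementation, just the transport):

```python
import socket

# socketpair() defaults to AF_UNIX stream sockets on Unix: no TCP/IP
# stack, no localhost lookup, just a kernel byte pipe between the ends.
parent_end, child_end = socket.socketpair()
parent_end.sendall(b"ping")
assert child_end.recv(4) == b"ping"
parent_end.close()
child_end.close()
```

One end would be handed to a child process at fork time; framing and queueing would still have to be built on top.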
IIUC, Android under the hood is largely C with some C++ on top of that. True, the part that makes Android different from Linux may be largely C++...
The thing is, startups pay lousy money, but sometimes you get compensated in stock options, or even stock, and sometimes that stock turns out to be worth a lot later. Granted it's a crap shoot, but there's no safe way to make lots of money unless you already have lots of money, and even then it's not certain.
While there is a need for strongly typed languages, that doesn't imply that all languages should be strongly typed. More to the point, however, Scala appears to be statically typed (I'm believing the documentation here; I've no experience). Many problems are addressed only with difficulty in a statically typed language.
Compatible with Java. OK. So is Jython, so is JRuby. Object-functional? Not quite sure what you mean, but I would guess that so are Jython and JRuby. Also Groovy.
This isn't really a response to the article, but rather to your comment. Unless you're simply in love with Scala's syntax, you don't seem to have justified your point. Even Clojure would deliver all the benefits you list. (As would several other languages.)
Personally, I intensely dislike Java's 16-bit char system. I much prefer either UTF-8 or UTF-32, preferably either, chosen as needed. Alternatively, the Python 3 opaque string type, with conversions to the desired representation, also has its benefits. (My real preference is UTF-8, but then most of what I work with is ASCII, and I only need occasional double- or triple-byte characters. But for that to work the language MUST have appropriate library support, as Python, Vala, D, etc. have. Ruby has it via an add-in gem. Java doesn't seem to really have it, and as a result neither do any of the languages that are symbiotic with it. C and C++ are, admittedly, as bad as Java: you need a large and clumsy external library. Racket Scheme handles this aspect well, but there are other reasons it's less than desirable.)
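The Python 3 model in a nutshell: strings are opaque sequences of code points, and bytes only appear when you explicitly pick a representation:

```python
s = "caf\u00e9"                            # 4 code points, one non-ASCII
assert len(s) == 4
assert len(s.encode("utf-8")) == 5         # the accented char costs two bytes
assert len(s.encode("utf-32-le")) == 16    # flat 4 bytes per code point

# Outside the BMP, Java's 16-bit chars need surrogate pairs;
# Python still counts one code point:
clef = "\U0001D11E"                        # MUSICAL SYMBOL G CLEF
assert len(clef) == 1
assert len(clef.encode("utf-16-le")) == 4  # the surrogate pair Java would see
```

The last two lines are exactly the 16-bit-char wart: in Java, that one character has length 2.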
So, which languages will you need in 10 years? It's one that isn't popular yet. Vala is a possibility. So is D. And perhaps there will be applications for which Swift is desirable. I'm really dubious about Java. C will probably still be necessary, but I'm not sure about C++. Some successor of the current Scheme versions would be desirable, but it MUST implement IPC much better than any current Scheme does. Some dataflow language would be highly desirable, but I don't know of any decent contenders. (The ones I'm aware of are too specialized... though one of them could grow out of that.)
The language really needed hasn't yet been written. It will be designed to make multi-process programs easy to write, with processes easily submitting messages to each other's read queues. Erlang is almost right, but it concentrates too much on immutability, which works quite well for a certain subset of problems and is terrible for many others. The real concept needed is isolated mutability, where all mutable state is "thread confined" (except that I mean process confined). I don't think it should be possible to pass pointers between processes, though perhaps it could be allowed if the pointer pointed only to data that is immutable all the way down.
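One way to sketch "isolated mutability" at the queue boundary: deep-copy on send, so mutable state is never shared between sender and receiver. A toy single-process illustration with made-up names:

```python
import copy
import queue

class IsolatedQueue:
    """Toy sketch: a queue that deep-copies on put, so no mutable
    object is ever reachable from both sender and receiver."""
    def __init__(self):
        self._q = queue.Queue()

    def put(self, msg):
        self._q.put(copy.deepcopy(msg))   # receiver gets its own snapshot

    def get(self):
        return self._q.get()

q = IsolatedQueue()
payload = {"k": [1, 2]}
q.put(payload)
payload["k"].append(3)            # sender mutates after sending...
assert q.get() == {"k": [1, 2]}   # ...receiver still sees the snapshot
```

A language built around this would do the copy (or enforce recursive immutability) at the boundary for you, instead of trusting the programmer to remember.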
As I said, this language doesn't seem to exist yet, but various languages have implemented pieces of it, so I don't see any intrinsic difficulty in creating the language.
Personally, I count the time that TSR took over D&D as the point at which the game started declining and rigidifying. Prior to that it was much more creative and interesting.
OTOH, they did make it MUCH easier to move characters from game to game.
The problem affecting the kernel appears to only be enabled with a specific set of optimizations, and only to matter for a specific class of programs.
Also, apparently the problem has actually been present for a number of iterations of the compiler, but a shift within the Linux kernel code has caused the compiler error to manifest. But the shift within the Linux kernel code was still valid C (C++?) code, so it was a compiler problem, even though it didn't affect most programs.
I, personally, dislike swearing even when "sanitized".
OTOH, I do realize that this is my personal taste. I feel it makes the communication less clear.
OTTH, written communication lacks the richness of communication by speech. This means that there is no inherent channel corresponding to tone of voice. When someone uses swearing as a substitute for certain tones of speech, it's really hard to say there is a better option. The alternative work-arounds tend to be verbose. Also, swearing via the use of the term "shit" appears to be something we inherited from our common ancestor with chimpanzees, because if they are taught to sign they will automatically use the term "shit" to describe persons and situations that they dislike.