That is a despicable, sinister attack on Chelsea Manning's character, as well as on transsexual people in general. What makes you believe you hold the higher moral ground? Do you believe you are righteous in the eyes of God (Romans 3:23)? If not, why can't we extend the same grace we received from God and pay it forward to our neighbors (Matthew 18:21-35), even if our neighbor is transsexual?
To counter the previous AC post, many ordinary people are named after famous historical figures that people admire. In Latin America, "Jesus" (hey-soos) is a common name. Of course, none of that indicates mental illness.
I've had similar ideas and looked into this a few times in the past, so I can see a few reasons why it isn't done. First, if you are working with raw video, bandwidth becomes the bottleneck, and you're no better off offloading the shader work to the GPU. To make good use of the GPU, you have to integrate video encoding and decoding into the GPU pipeline as well, and that takes specialized drivers; I don't see any way to do it with OpenGL/OpenCL alone. Even if it's doable and makes sense, it is unlikely to come out of a hobbyist project because of the technical hurdles.
The likely place for an open source project like this to originate is a startup selling a cloud video rendering service on a stash of cheap Raspberry Pis. Such a company may or may not already exist, and even then it hinges on whether they are willing to open source their software.
Grades exist to give the teacher feedback for assessing the teaching method. If most students do poorly, that indicates a systematic failure in either the teaching method or the way grades are assigned, neither of which is desirable. A minimum standard exists to avoid spending a disproportionate amount of effort catering to the long tail. After all, teachers have limited time and resources, so the education system has to maintain a certain efficiency. Maybe these students have to figure out on their own what method of learning works for them rather than putting all the burden on the teacher.
Of course, all of this is fairly idealistic, assuming that both the teacher and the students carry out their responsibilities in earnest. It's easy to become cynical after seeing many bad examples. I think we tend to confuse the means and the end: grades are a means to an end, which is learning, not the other way around. If the goal is not set right, all the effort spent fixing the system is wasted.
According to the Information for Maintainers of GNU Software, a package is adopted by GNU/FSF when a maintainer volunteers to maintain it. They can bring in a package they didn't write, as long as the source has a GPL-compatible license. It's also not required to transfer the source code copyright to the FSF.
Libreboot was derived from Coreboot by removing the proprietary blobs. Leah volunteered to be a GNU package maintainer and started recruiting developers to work on Libreboot, but before long decided to step down as maintainer. There is no rule forbidding Leah from continuing to work on Libreboot without being associated with GNU/FSF. She is entitled to stop volunteering for GNU at any time, for any reason or none at all.
What Richard Stallman says is that a GNU package can be orphaned by its maintainer and usually remains a GNU package until a new maintainer picks it up, but in this case he was compelled to make a special exception and excise Libreboot from GNU. GNU/FSF's role is that of a librarian/publisher, and the maintainers are more like curators. It makes no sense for a package to "leave GNU" just because its curator stopped volunteering.
Right, I think it's useful for managers to have a technical background.
Although I'm not a manager, just a tech lead on some projects, the kinds of issues we've dealt with are more about whether we should have written a piece of code in the first place. Using your WebSocket example: the developer might be working on acknowledging batched and out-of-order updates, allowing individual ones to be retransmitted if lost or corrupted. I would point out that WebSocket runs over TCP, so transmission is already reliable and in-order. The developer would argue that a future product might use a lossy datagram socket. I'd say don't worry about it. The reordering and loss detection could be based on sorting, which is O(n log n), but we wouldn't even need that code.
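To make the point concrete, here's a minimal Python sketch (hypothetical, not code from any actual project) of the sort-based reordering the developer proposed; over a TCP-backed WebSocket this entire function is dead code, since delivery is already ordered and reliable:

```python
def reorder_and_find_gaps(updates):
    """Reorder (seq, payload) updates and report missing sequence
    numbers for retransmission. Sorting dominates: O(n log n)."""
    ordered = sorted(updates, key=lambda u: u[0])
    seen = {seq for seq, _ in ordered}
    lo, hi = min(seen), max(seen)
    missing = [s for s in range(lo, hi + 1) if s not in seen]
    return ordered, missing

# Updates 3, 1, 4 arrived; 2 was lost and needs retransmission.
ordered, missing = reorder_and_find_gaps([(3, "c"), (1, "a"), (4, "d")])
# ordered == [(1, "a"), (3, "c"), (4, "d")], missing == [2]
```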
Although you were putting words into my mouth, it's actually not such a bad idea, and I have to give you credit for it. There is no point repeating the same exponential-time computation on every device in the world; it's just a waste of time. One should coordinate the computation on a supercomputer, where the results can be memoized and then shared with the people who need them. The same problem only needs to be computed once.
Like the weather forecast service.
Also, dynamic programming could make the algorithm run in polynomial time. Why not?
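The textbook illustration of that idea is memoization turning an exponential recursion into a linear one; Fibonacci here stands in for whatever computation the parent had in mind:

```python
from functools import lru_cache

def fib_naive(n):
    # Exponential time: the same subproblems are recomputed over and over.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Linear time: each subproblem is computed once and cached.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

# fib_memo(500) returns instantly; fib_naive(500) would never finish.
```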
I have seen both sides of the story about junior programmers, having spent 10 years in academia and several years now in industry. The problem of failing to adapt to new problems doesn't start only after students graduate. At school, we try to expose our students to a variety of problems; they either aren't interested or simply get overwhelmed. I think only about 10% of the students really take advantage of the education they get, and those really good ones are self-motivated. The rest you simply have to spoon-feed.
Since we can't just fail 90% of the students, to make the best of the situation there are a few key ideas we try to hammer into students' heads: the memory hierarchy (amortizing the storage bottleneck), queuing theory (modeling service load), and of course big-O analysis. I can comfortably say these are universal principles that remain true even as technology evolves. We didn't do critical path analysis, but that has become important since parallel and distributed computing took off---I think this is likely the change in data requirements and bottlenecks you observed.
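For instance, the queuing-theory part boils down to a handful of formulas students can apply directly. A sketch of the standard M/M/1 results (my own example, not any particular course material):

```python
def mm1_metrics(arrival_rate, service_rate):
    """Classic M/M/1 queue results: utilization rho, mean number of
    requests in the system L, and mean time in the system W.
    Little's law ties them together: L = arrival_rate * W."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("queue is unstable: arrivals outpace service")
    L = rho / (1 - rho)
    W = 1 / (service_rate - arrival_rate)
    return rho, L, W

# A server completing 100 req/s with 80 req/s arriving:
rho, L, W = mm1_metrics(80, 100)   # rho = 0.8, L ~ 4 requests, W = 0.05 s
```

Note how nonlinear it is: at 80% utilization the average request already sees 4 requests in the system, and latency blows up as rho approaches 1.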
Now in industry, I still see the limits of this mostly theoretical approach to computer science. We hired a fresh college graduate who delivered code where all the unit tests pass, but the code doesn't run: it's just not doing the things real code is supposed to do, because the unit tests mocked everything out. But we also hired an industry veteran who wrote code like that. I'm not sure what the problem is. Maybe some people really just shouldn't be programmers.
The really good ones design optimal algorithms on their first attempt, and their optimizations are about making the code understandable to others who don't know programming as well as they do. But even the best programmers succumb to the temptation of writing code they shouldn't have written in the first place.
I sometimes work with people who are slow to deliver code, and although "I'm spending time optimizing" has never come up as a direct excuse, there are variations on the theme of spending time on something they shouldn't have. I think getting visibility into their development process early is the key. It could be that the developer thought the problem was more complex than it actually is; it's useful to find that out early and clarify the problem statement. If there are roadblocks, it's better for the team to help these individuals early.
A lot of the time the question is "should we be writing this piece of code in the first place?" rather than "how much should we optimize this code?" If we decide we need the code, we try to deliver it at high quality. Even internal applications are meant to automate paper-pushing jobs away, and they will be used more as the company grows. Using a polished internal application is actually a nice morale boost: it shows that someone cares, and it encourages other employees to bring the same efficiency and attention to their own work.
It was only 3 years ago that we found out Windows Update on Windows XP used an exponential algorithm that silently wasted hours, if not days, of CPU cycles on at least a billion machines worldwide every Patch Tuesday. Please don't forget the lesson.
Maybe the exponential algorithm was fast when N was small, but you never know what kind of data the code will be tasked with in the future. Implementing a suboptimal algorithm is technical debt, and if your code is part of Windows, multiply that debt by a billion. Speaking from experience, most common algorithms are O(n log n), with O(1) attainable through clever use of a hash table, so I see no excuse for implementing anything O(n^2) or worse. Such code needs to be documented with a prominent warning.
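A concrete illustration of the n^2-vs-hash-table point: duplicate detection written both ways (a generic example, nothing to do with the actual Windows Update code):

```python
def has_duplicates_quadratic(items):
    # O(n^2): compares every pair. Fine for 10 items, disastrous for 10 million.
    n = len(items)
    return any(items[i] == items[j]
               for i in range(n) for j in range(i + 1, n))

def has_duplicates_hashed(items):
    # O(n) expected: a hash set gives O(1) membership tests.
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Both are equally easy to write, which is exactly why shipping the quadratic one deserves a prominent warning comment at minimum.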
I'd argue that any code that runs on more than a handful of machines deserves some thought and polish. The impact is multiplied by the number of users, and I think we all want our code to be generally useful to as many people as possible. You should also have some idea where the bottlenecks are before writing a single line of code. Don't worry about premature optimization; the concern is overrated. It sounds like you are just averse to criticism of your code and want critiques to carry more burden of proof. Criticism is good: it means your code is getting attention, so be glad!
Microsoft makes both the Surface and Windows, so they simply put in an optimized video playback path that only Edge knows how to use. It is easy to game the comparison when you control both the hardware and the OS. Safari on Mac OS X has an advantage in playing HTML5 video over all other browsers on Mac OS X. Chrome, not surprisingly, works best on a Chromebook.
Linux is about as impartial a playing field as you can get for browser performance comparisons, so if Microsoft ever releases Edge for Linux, then we can have a fair comparison.
The real reason is much more nuanced than the language differences between C++ and Java. The Seastar network architecture bypasses the kernel TCP/IP stack entirely and instead implements a user-mode TCP/IP stack on DPDK, which lets user mode poll the network card's packet buffers directly over memory-mapped I/O. Each instance of the user-mode stack runs on a single core, but you can run one instance per core. It scales linearly because there is very little shared state across cores.
C++ with a custom network stack vs. Java with the traditional kernel stack is not an apples-to-apples comparison. In theory, you could implement a Java-based custom network stack over DPDK as well to make the comparison fairer.
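A toy sketch of the shared-nothing idea (in Python purely for illustration; Seastar itself is C++): each connection hashes to exactly one shard, which owns all of that connection's state, so no locks are needed and adding cores adds capacity roughly linearly:

```python
NUM_SHARDS = 4  # one shard per core in a real system

# Per-shard state; each dict is touched only by its own shard/core.
shards = [dict() for _ in range(NUM_SHARDS)]

def shard_for(conn_id):
    # Deterministic mapping from connection to its owning shard.
    return hash(conn_id) % NUM_SHARDS

def handle_packet(conn_id, payload):
    # In Seastar this would execute on the owning core; here we just
    # index into that shard's private state. No locking required.
    state = shards[shard_for(conn_id)]
    state.setdefault(conn_id, []).append(payload)
```

The key property is that a connection's state lives in exactly one shard, so cores never contend on it.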
That makes SSL for
"Gotcha, you snot-necked weenies!" -- Post Bros. Comics