Comment Re:Tell your sister.. (Score 1) 144
https://opensourcesecurity.io/wp-content/uploads/2020/05/149d6-stallman.jpg
Stallman was right
Tell your sister Stallman was right
https://opensourcesecurityio.files.wordpress.com/2020/05/149d6-stallman.jpg
What he's saying is "ma klounkee," or "this will be the end of you."
The change adds another connection to the prequels.
https://scifi.stackexchange.com/questions/223033/why-does-greedo-say-maclunkey-in-the-mos-eisley-cantina
http://www.completewermosguide.com/hutttranscript.html
Does this mean next year will be the year of the Linux desktop?
I started with RedHat 5.2 in '99. I think the distro I put on the G3 at the high school was LinuxPPC.
SuSE 6.3 was great - there was so much software on all the CDs.
I liked how the dev version of Mandrake had really current packages, so I upgraded my live running system from SuSE to Mandrake Cooker. This was a terrible idea, especially since that was still before the Filesystem Hierarchy Standard. I made it work.
I rebuilt and modified Mandrake and made my own version, which I called Malcolm Linux (with the Malcolm X Window System, of course).
After a while, the folks at the Rice Linux Users Group sold me on Debian.
Debian ran too well - I missed fixing things that broke. So I installed Gentoo, which provided countless hours of fun.
When I wanted things to work well again I switched to Ubuntu and that's where I'm at now. I maintain a PPA of a few modified packages, but mostly it does everything out of the box.
Nah. If this was happening in China it wouldn't be all over the news. They'd just quietly find Assange, put a bullet in the back of his head execution-style, and nobody would ever be the wiser.
> I'm at a bit of a loss at how much better it
> could be to see it through a tiny window.
Just about every astronaut I've heard has said that staring out the window was one of their favorite activities during downtime in orbit, and they'd all jump at the chance to go back for more.
That would be cool and all, if this were actually subsidizing solar power research... but the summary says the money is going to companies building production plants, not early-stage research. In other words, just another government distortion of the free market. I'd rather have solar panel producers who can stand on their own commercially instead of giving handouts to yet another industry that will then become dependent on said handouts.
No, engine start takes place at around T-2.5 or T-3.5 seconds. If the engines have all built up to the proper thrust levels, and everything else checks out, THEN the computer releases the hold-down clamps at T-0.
You kid, but I remember reading an interview with a high-level manager in the JSC astronaut office, wherein he said, only half-joking, that the astronauts under his purview were more than willing to strap into a Shuttle with only one functioning SRB, etc., and a big part of his job was basically to convince them of reasonable safety standards for their OWN lives!
I guess that puts an end to the phrase "when Debian freezes on a regular schedule"
Right, ok, I got a bit off topic... to get back on topic: with a 10 usec call overhead, it's not that hard to design kernel invocations that run for > 10X that time, thus minimizing the driver overhead and getting pretty close to maximum GPU speed.

BUT the one big caveat is that in many cases this means you cannot use (for instance) CUBLAS and must write a custom kernel. You can do in one custom kernel invocation what could take three dozen CUBLAS calls, and the reduction in setup overhead can help. An even bigger factor can be better use of the tiny cache on the GPU: doing everything you need to do to a particular piece of data in one custom kernel saves a ton of memory fetch overhead versus making 20 passes over the data, doing one thing at a time to it with CUBLAS.

All in all, with a bit of work it is quite possible to get it to where feeding data from the CPU to the GPU is not the bottleneck.
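To make that concrete, here's a rough sketch of the kind of fusion I mean -- my own toy illustration, not code from any real project: several elementwise steps that would otherwise each be a separate library call get folded into one custom kernel, so there's a single launch and a single pass over the data.

// Toy example (assumed, for illustration only): compute y = a*x^2 + b*x + c
// in one fused kernel instead of one pass per operation.
#include <cuda_runtime.h>

__global__ void fused_poly(const float* x, float* y,
                           float a, float b, float c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = x[i];              // x is fetched from global memory once
        y[i] = (a * v + b) * v + c;  // all the arithmetic happens while v sits in a register
    }
}

int main()
{
    const int n = 1 << 20;
    float *x = 0, *y = 0;
    cudaMalloc((void**)&x, n * sizeof(float));
    cudaMalloc((void**)&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));       // placeholder input data

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    // One launch (one helping of driver overhead) instead of one launch per step.
    fused_poly<<<blocks, threads>>>(x, y, 2.0f, 3.0f, 1.0f, n);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}

The particular polynomial doesn't matter; the point is that the data gets touched once instead of being re-read from global memory on every pass.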
I haven't timed GLX calls-to-the-card, but I have timed CUDA calls-to-the-card, and IIRC it was about a 10 usec overhead per call. If it were the same for glxgears, that would indicate a framerate of about 100,000 fps. However, when I run glxgears, I get more like 2,000 - 20,000 fps, and it varies based on the size of the window -- it definitely seems to slow down as the window that must be cleared/redrawn gets bigger. So there are definitely points on the glxgears performance spectrum where drawing operations (including copying 2D buffers around) overtake simple driver overhead.
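For what it's worth, here's a minimal sketch of how one might measure that per-call overhead -- my own illustration, and the exact number will vary by driver and hardware: launch an empty kernel many times, synchronize each time, and average.

// Rough per-launch overhead measurement (assumed setup, for illustration).
#include <cuda_runtime.h>
#include <chrono>
#include <cstdio>

__global__ void empty_kernel() { }

int main()
{
    const int iters = 10000;

    empty_kernel<<<1, 1>>>();          // warm-up: triggers context/driver initialization
    cudaDeviceSynchronize();

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) {
        empty_kernel<<<1, 1>>>();
        cudaDeviceSynchronize();       // wait each time so we measure the full round trip
    }
    auto t1 = std::chrono::steady_clock::now();

    double usec = std::chrono::duration<double, std::micro>(t1 - t0).count() / iters;
    printf("average overhead per call: %.1f usec\n", usec);
    return 0;
}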
You forgot the part where you divide by c^2. Let's say you add one megajoule of heat to your bottle of water. How much mass is that? 11 nanograms.
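Spelling out the arithmetic (using the rounded value c = 3 x 10^8 m/s):

m = E / c^2 = (1 x 10^6 J) / (3 x 10^8 m/s)^2 = 1.1 x 10^-11 kg, i.e. about 11 nanograms.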
Algebraic symbols are used when you do not know what you are talking about. -- Philippe Schnoebelen