I haven't had an internship in several years (unrelated illnesses), but when I was still working while doing my undergrad CS degree, I was very open about suffering from depression and anxiety/panic disorders. Not a single person reacted negatively, and if I missed a meeting because of a panic attack I'd get help from coworkers (we were programming some stuff for a tai-chi exercise game; a sifu was working with us on that one). The other students mostly understood, and some even opened up about their own issues.
Part of that is setting, I'm sure. The university undergrad scene is much younger than the "old guard industry programmers". But staying silent just means that the "old guard" never have to learn or accept; they can just go on being ignorant of these issues. And that doesn't help anyone.
It's still a very personal choice. You have to make the decision that fits you best.
In a previous life, I worked with mostly medicated kids in a clinical K-12 setting. It was absolutely the norm for them to be inconsistent with their meds.
I've been told that the fraction of people on meds for psychological disorders who, at some point in their treatment, go off their meds when they shouldn't approaches 100 percent. (And by "when they shouldn't," I mean the solution to the problems that inevitably arise ends up being to get back on the meds, or similar ones.)
I could almost believe that. Most of these drugs are still in the "we think this is how they work" category. Suppose you have a psychological disturbance that results in paranoia (which can and does happen even to people with no diagnosed illness, and even to people on medication); the medication is an easy thing to lash out at. Or you experience a ton of the listed side effects (real or imagined, it wouldn't matter) and can't convince a doctor to change the medicine. The latter happened to me: the real side effect was memory loss, and I found notes showing I had told the doctor several times over a year while he did nothing. I called their 'emergency assist' line and left a message saying I would stop the medication unless I heard back from them. I never did, so I went cold turkey and switched doctors after the weekend was over.
Had mine been for anything other than pain, depression, and insomnia, that withdrawal could have been hilarious; instead I just sat up reading a book for over 48 hours until I passed out.
But my depression is a strange one. Mild sufferers (in the DSM sense of mild) of illnesses with no Axis I or II components, who don't suffer from delusions, aren't likely to stop taking meds that work. Barring a major incident, most sufferers without psychosis or a paranoia disorder are very likely to stay on a med that works; without something altering their perception of reality, they have no reason to go back to the pain and suffering of before. Incidents outside the individual's control, like moving (the wait list for a psychiatrist here was over a year!), insurance switching to a different doctor, or losing a job or house, shouldn't be counted.
Using multiple cores turns out to help the attack (by shifting down the signal frequencies).
Say what? Through what mechanism would multiple cores shift down the frequency? And what about parallel instruction streams contributing to noise?
Let's see: the tiny amount of L1/L2/L3 cache on current chips is dictated by the CPU's energy budget. Looking at the energy budgets of the 4900MQ and 4960HQ chips, you can make some wild-arse guesses that the 2 megs of L3 cache sacrificed freed up enough power to run the 128 megs of L4. Then consider that there is only 64K (yes, kilobytes) of L1 and 256K of L2 per core on the Haswell chips, and that the 3.9GHz desktop chips dissipate 84 watts... you can start to work out how much of that is leakage current from the 6-transistor cell design used for the L1/L2/L3 cache.
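To make the 6T-vs-1T point concrete, here's the transistor arithmetic behind that trade. The capacities come from the paragraph above, and the cell sizes are the textbook figures (6 transistors per SRAM bit, 1 transistor + 1 capacitor per DRAM bit); this counts devices only and says nothing about actual leakage current, which depends on process details:

```python
# Back-of-envelope: transistor counts behind the Crystal Well trade-off
# (2 MB of on-die SRAM L3 given up for 128 MB of eDRAM L4).
# Cell sizes are the textbook figures: 6T per SRAM bit, 1T1C per DRAM bit.

MB = 1024 * 1024
BITS_PER_BYTE = 8

sram_bits = 2 * MB * BITS_PER_BYTE      # L3 capacity sacrificed
dram_bits = 128 * MB * BITS_PER_BYTE    # eDRAM L4 capacity gained

sram_transistors = sram_bits * 6        # 6T SRAM cell
dram_transistors = dram_bits * 1        # 1T1C eDRAM cell

print(f"SRAM transistors removed: {sram_transistors:,}")   # ~100.7 million
print(f"eDRAM transistors added:  {dram_transistors:,}")   # ~1.07 billion
print(f"Bits per transistor: SRAM {sram_bits/sram_transistors:.2f}, "
      f"eDRAM {dram_bits/dram_transistors:.1f}")
```

So the eDRAM spends about ten times the transistors of the removed L3, but stores sixty-four times the data, and none of those cells hold the cross-coupled inverters that make a 6T cell leak continuously.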
Let's face it, SRAM isn't tiny; it leaks amps like a sieve at the tiny process sizes everything is fabbed at nowadays, and its main advantages are that it doesn't take a controller to access, it's bloody fast, and the bandwidth can be pretty sizable. A gig of SRAM on die would, I suspect, heat a small room; that much DRAM per core would slow the cores down due to the inherent latency of accessing DRAM.
So, sure, DRAM chips may be cheap, but putting them on the CPU die would be horrid. And SRAM still isn't cheap, whether in die space, energy budget, or dollars!
As for battery life, I have no idea. It might use more power, since DRAM requires constant refreshing whereas SRAM holds its data statically; but the lower leakage of one transistor per cell instead of six might prove a net benefit. It would take a good bit of time and some pretty good test code to figure out the difference, I suspect.
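The leakage-vs-refresh trade-off above can at least be framed as a toy model. Every parameter value below is made up purely for illustration (real numbers depend on process node, temperature, and cell design), but it shows the shape of the comparison:

```python
# Toy model of the SRAM-leakage vs DRAM-refresh trade-off.
# ALL parameter values are hypothetical, for illustration only.

def sram_static_power(bits, leak_per_transistor_nA, vdd=1.0):
    """Static power (W) of a 6T SRAM array: every transistor leaks a little."""
    return bits * 6 * leak_per_transistor_nA * 1e-9 * vdd

def dram_refresh_power(bits, cell_cap_fF, refresh_interval_ms, vdd=1.0):
    """Power (W) to periodically recharge 1T1C cells: ~C*V^2 per cell per refresh."""
    energy_per_refresh = bits * cell_cap_fF * 1e-15 * vdd**2
    return energy_per_refresh / (refresh_interval_ms * 1e-3)

bits = 128 * 1024 * 1024 * 8   # the 128 MB eDRAM from the text

# Hypothetical values: 1 nA leakage per transistor, 20 fF cells, 64 ms refresh.
p_sram = sram_static_power(bits, leak_per_transistor_nA=1.0)
p_dram = dram_refresh_power(bits, cell_cap_fF=20.0, refresh_interval_ms=64)

print(f"6T SRAM static leakage:  {p_sram:.2f} W")
print(f"1T1C DRAM refresh power: {p_dram:.6f} W")
```

With these (invented) numbers the refresh cost is orders of magnitude below the static leakage of an equal-capacity SRAM array, which is the "might prove a net benefit" case; real test code would have to measure both.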
And yet if you are sitting on a jury in a trial, laws can and do require you not to talk about what you've learned until after the trial. Is that not also a law abridging freedom of speech? Gag orders on the press covering a trial also exist; same question.
With the number of counter-top vacuum-sealer devices out there, doing sous vide at home is not that hard. It's not as perfect as a full industrial vacuum sealer, but it works. Additionally, LN2 isn't too hard to get in small amounts as an engineer; and for home use you could use dry ice or liquid CO2 from a fire extinguisher.
But I'm one of those home cooks who likes trying crazy chemistry shit, and has the gear and respect for the chemicals to do it safely. Might have gone to the cooking industry if I had gotten into cooking sooner. So the big set of books is still something I want, but couldn't justify the $500 for. Bet they'd look pretty in PDF format, even if the pictures were lower resolution.
Unless a userland process holds a ton of OS-level locks on the I/O devices (disk reads/writes, managing its own cache in files, other strange behavior) that all result in OS API calls. If the userland process does all of that, then the OS is going to grind along trying to manage all of the coder's stupidity.
Which probably explains both Adobe and the early JREs, in fact.
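The "death by OS API calls" effect is easy to demonstrate: write the same data with one syscall per byte versus one buffered call. The sizes here are arbitrary and timings vary by machine, but the gap is always dramatic:

```python
# Demo: hammering the kernel with tiny I/O calls vs. one batched call.
import os
import tempfile
import time

data = b"x" * 100_000

# ~100,000 syscalls: one os.write() per byte.
fd, path = tempfile.mkstemp()
t0 = time.perf_counter()
for i in range(len(data)):
    os.write(fd, data[i:i + 1])
t_per_byte = time.perf_counter() - t0
os.close(fd)
os.unlink(path)

# 1 syscall: the whole buffer at once.
fd, path = tempfile.mkstemp()
t0 = time.perf_counter()
os.write(fd, data)
t_one_call = time.perf_counter() - t0
os.close(fd)
os.unlink(path)

print(f"100k syscalls: {t_per_byte:.3f}s   1 syscall: {t_one_call:.6f}s")
```

Each crossing into the kernel costs far more than the byte it moves, which is exactly the grinding the comment describes when a process insists on managing everything through the OS.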
Posting from my phone is an awful way to try to teach the legal side of reverse engineering code, or re-licensing, or any other legal topic. It can be, and is, done. It is one of the arguments made when talking about copyrights on algorithms. Hit Google and find some articles about it; look for things like math formula copyrights and clean-room reverse engineering. There are lots of articles and explanations out there.
Not at all; in fact I recall the Usenet discussions about how to avoid just copying code. It was simply that they wanted to re-license under a FLOSS license and they wanted to migrate the code, so both were done at the same time. For some functions it is as easy as "Chinese room" coding: one person reads the old code and writes a plain-English description of what it does (casts a ray along vector V, or finds the normal of surface S), and another person who never looks at the old code writes the new function from that description. Considering how long the POV-Ray team have kept lawyers around, they made sure to do it right. More recent additions, like SSLT, were written under a dual license: one for the old codebase, and one giving the team permission to open it under another license.
As a disclaimer, I was lurking in their developers' group on Usenet at the time (trying to contribute until life got in the way), so I got to see the hoops they jumped through to keep it clean: lots of work contacting old devs, finding out who was legally responsible for what code and who could change the license. I think the archive is still readable; it makes very interesting legal reading on how to reverse engineer code without breaking EULAs or licenses.
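The "Chinese room" step above can be sketched in code. This is a hypothetical example, not POV-Ray's actual source: the only artifact that crosses the wall is a plain-English spec like "finds the normal of surface S", and the re-implementer writes fresh code from that alone:

```python
# Clean-room sketch: the re-implementer sees ONLY the spec below,
# never the original code.
#
# Spec (written by the person who read the old code):
#   "Given a triangle's three vertices, return the unit-length surface
#    normal, assuming counter-clockwise winding."
import math

def surface_normal(a, b, c):
    """Unit normal of triangle (a, b, c); each vertex is a 3-tuple of floats."""
    # Two edge vectors out of vertex a.
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    # Cross product u x v gives a vector perpendicular to the triangle.
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return (n[0] / length, n[1] / length, n[2] / length)

print(surface_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0.0, 0.0, 1.0)
```

Because the new function derives only from the behavioral spec, not the old source, it carries no copyright taint from the original implementation; that separation is the whole point of the two-person process.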
I would counterbalance that book with one on listening, the other (and much neglected) half of communication. Unfortunately I can’t think of any off the top of my head. Susan Cain put out an excellent book called “Quiet”. It’s not quite on point for this topic, but it may be worth a read.
Let me second that, since I don't have mod points right now. Communication isn't real communication if you only ask others when you have a problem, or if it's just your boss telling you what needs to be done. Of the internships I did while in Uni, I had two of the latter form, where I'd talk to a professor, go write the code they wanted, turn it over, and move on to whatever else they needed. No collaboration, no meaningful communication. Those projects, from my coding perspective, turned to crap, and I felt like I did very little in the scheme of things.

On a different project, the Ph.D. I was working for wouldn't accept that. She didn't know enough about code to just hand it all over to me to do as I wished, but she also wanted to understand what I was doing in software and get my feedback on the human-interface side of the project (HCI isn't my specialty at all). 90% of my brainstorming, notes, and test code were done at home (still billed, of course), but I generally only wrote the final code when we were both in a lab working on the project. We traded notes on the artwork and debated the file formats that would work, maximum polygon counts, and so on. We also BSed about music, books, movies, whatever. It didn't cut into the workflow, because we both understood that once a brainstorm hit you either worked it out or forgot about it; between intense bursts of coding we just chatted while we brainstormed. At the end of the day, we'd trade notes and comment some more on anything that jumped out.

We beat a lot of roadblocks that way, like finding the polygon limit and the acceptable file formats for models in the engine I was using ("What do you mean, I have to export? Can't the engine just use a Maya file? Can't you make it use the Maya file?"), instead of banging up against those walls much later and having to do some last-minute kludge to jam an incompatible file type into a graphics engine.
If we had waited, I would have just gotten finished files in an email, and had no way to install Maya and convert the files to something usable, and would have been forced to learn a file format and code a parser for it. Instead, she just told Maya to use a format we could agree on. Days of work that would have been past deadline, averted.
The other projects, no one asked for my notes on. I was halfway done implementing a user-editable AI, where each creature would load its own script from a master AI class that just handled object creation and destruction. Since I kept getting sidetracked onto other parts of that project, by the time my days there were up there was only a skeleton of this AI, with some notes on how each part connected. The next intern or grad student probably scrapped it and started over. The tools didn't have a good place for my notes, and even when I offered copies of the notes and diagrams they weren't taken; no loss to me.
And that's why just sitting and BSing about music or books or movies or whatever while working can result in better code. You have to listen when a colleague brings up a minor concern, or just a stray thought, and see if it applies to something you are working on. If it does, you can take some time to work out what looks like a small detail ("No, I don't think you could create a model with too high a poly count. Maya should limit you... wait, you're using over a million polygons for just the eye? wt...") that may be a bigger problem than anyone thinks. But if you skip the small talk and don't listen to minor concerns that aren't really in your area of expertise, you may miss the clue to the issues you'll be forced to deal with later.