They bounced me around through escalation teams for 45 minutes before letting me cancel. I bought a house in an area that isn't serviced by them and they tried to get me to agree to paying $200,000 to run cable to my new house. Bastards!
If they require a master's or PhD, it's not an entry level position.
They either a) are trying to change the world with new or hard stuff and want a theory guy to guide things or b) don't know what they are doing or c) don't want to mess around with kids straight out of school who haven't figured out the corporate metagames and "git'er done" culture yet.
There's the optimal implementation on paper, given infinite time for implementation, and there's the "We have two weeks, do what you can pull off" implementation that business is usually looking for. Business values programmer time more than academia does. I know my CS degree didn't prep me for that very well.
Actual raw engineering is a bit less wild wild west than software... there are legal definitions of what a certified engineer is responsible for; i.e. if people die as a result of your engineering mistakes, it's your fault, not just some edge case bug. But the same corporate BS is still driving it, so the same stuff applies... HR is still about risk avoidance, it's just that a guy with a master's or PhD had to jump through more hoops to get to the table, and thus the wheat is separated from the chaff, so to speak.
Business doesn't care about getting the best candidate, they care about getting the guy who looks like he's good enough for the money they are willing to spend on him and won't end up as a disaster. And also, some of those job postings may require a master's or PhD so they can legally justify hiring an H1-B after there is no one "qualified" to be found.
Business (HR specifically) doesn't give a shit about your degree. They care about a) that you have the checkbox, b) who you worked for previously and are not lying about it, and c) whether it looks like you aren't a total fuckup who will cost them. It's about risk avoidance.
The actual team you interview with (if it wasn't an HR drone) cares that you look like you know your shit and can carry your weight.
Engineering and especially computer degrees are such a total crapshoot on the skills you get in a candidate, that they don't know how to weigh your degree. Even degrees from badass schools sometimes come with folks who still can't code their way out of a wet paper bag. Besides, most of that senior level theory stuff in the degree won't help you much in a real world job until the late stages of your career, and will piss off your peers who don't have the same background, and definitely piss off management, who barely understands what a linked list is.
The quality of in person versus remote will depend on your learning style, and whether you actually would make use of those in-person office hours anyway.
Look, digital electronics are still subject to analog limitations. When you overclock, you squeeze the hysteresis curve, increasing the probability that your chip incorrectly interprets the state of a particular bit as the opposite value, i.e. you get random data corruption. This is why you eventually start crashing randomly the more you overclock.
While overclocking a chip that has been conservatively binned simply to reduce manufacturing costs but is actually stable at higher clock rates is reasonable, trying to overclock past the design limits is pretty insane if you care at all about the data integrity. Also, you tend to burn out the electronics earlier than their expected life due to increased heat stress.
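To put the timing-margin argument in concrete terms, here's a toy model (the constants and the Gaussian-noise assumption are made up for illustration; real silicon behavior is far messier): as the clock period shrinks toward the worst-case settle time, the odds of latching the wrong bit climb toward a coin flip.

```python
import math

# Toy illustration only: pretend each pipeline stage needs SETTLE_NS to
# settle, and the latch samples once per clock period. The slack between
# the two is the timing margin; Gaussian timing noise occasionally eats
# it, flipping a bit. All numbers here are invented for the sketch.

SETTLE_NS = 0.8          # pretend worst-case settle time per stage

def bit_error_prob(clock_ghz, noise_ns=0.05):
    period = 1.0 / clock_ghz         # ns available per cycle
    margin = period - SETTLE_NS      # slack before the latch samples
    if margin <= 0:
        return 1.0                   # past the design limit entirely
    # P(noise > margin) for zero-mean Gaussian timing noise
    return 0.5 * math.erfc(margin / (noise_ns * math.sqrt(2)))

for ghz in (1.0, 1.1, 1.2, 1.24):
    print(ghz, bit_error_prob(ghz))  # error probability climbs with clock
```

The point of the sketch is just the shape of the curve: errors are astronomically rare with healthy margin, then rise steeply as you clock into it.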
I never overclock.
Most devices barely work in one operating system, let alone having to deal with being initialized and controlled by multiple driver models and switching back and forth between them hot.
They are simply not designed for that scenario. Hence, the hypervisor, and virtualized devices under it.
Radio Shack has been a ripoff for years. Why the hell would anyone who knows enough to DIY pay $4 for a 5 cent part? Sure, it might take a few days for it to come from Mouser, but honestly, when you're designing a circuit you need a lot of components and generally plan out what you need in detail, and a retail place just isn't going to stock whatever exotic parts your project needs anyway.
Since there are far more folks who aren't with it enough to DIY, Radio Shack is far better off overcharging the masses for extension cords, sub par computers, and low grade RC cars in the mall. They just want the masses to THINK that smart people shop there.
Western's CS program is one of the ones that grew out of a math base. It's pretty hardcore on the theory, but you're sort of on your own for learning the stuff that business wants. Which is fine... even if the program focused on exactly whatever buzzwords corps want these days, corps don't generally hire CS grads straight out of school. The stuff you learn in the 400 level classes is great for senior developers to know... but you're not going to start out as one. It wasn't till my 3rd job out of college (which I'm still at) that I actually got to touch source code at work. For long term personal growth, I'm really glad that I had my ass kicked with the theory; I find that the rigorous methods that were drilled into me really help me tackle the hard problems I work on every day (debugging nasty kernel mode race conditions in code written by others, for example). Besides, if you can handle the proofs and algorithm stuff, you can handle anything else, though you'll sure as hell not enjoy writing silly business apps over and over.
You know what the job finding folks at Western tell you about finding a job once you graduate? They tell you to forget about finding anything remotely in your field. The real difficulty in getting hired after college has less to do with your skills and what you're taught and more to do with risk aversion for employers... they don't like hiring green kids who don't understand corporate politics yet. You have to persevere in order to get to do what you love.
Computer science is supposed to be hardcore...unfortunately there is a huge variation in what different universities consider to be computer science, let alone what the business world thinks. For some, any old programming is CS, for others, they focus on software engineering methods, and some hardly touch on theory and math at all; others still consider web page design to be CS. CS is about understanding the extreme limits of what computers and software are capable of and pushing the limits of what's possible....it's not supposed to train you for "IT" (which most businesses consider to be the guys that fix their computers).
You really should not be doing a computer science degree unless you are going to be some kind of developer and you get off on things that require in depth knowledge of how to design and compare the performance of different algorithms, want to fix bugs no one else can, want to write really hardcore software (such as speech recognition, computer vision, or 3d rendering) at the bleeding edge, and need to be able to prove why your design is better than someone else's design. The industry is already full of very experienced, very competent people who don't have CS degrees. In fact, many of them started before such degree programs even existed. They know how to code, but they generally don't have any exposure to the more advanced theory stuff and are therefore not inspired by it, nor do they generally value it. The degree is MUCH more a long term investment for your career than a credential to get your foot in the door, as you'll eventually get to apply the theory and start doing things that wow. After you've taken your lumps, that is.
You're doing work for the hospital on the system; therefore they need access to it.
Not only that, but there are all sorts of legal requirements around any data on the damn thing. Technically, your calendar, which includes appointment data and scheduling for when you worked on which patient's stuff probably falls under the domain of medical records....
There's a reason that bureaucracy isn't really compatible with you throwing up a server for whatever... there are legal requirements that make it so every little thing needs to have enterprise grade BS and management behind it. At least on paper, anyway.
Not only that, but once you've used it for that, who's going to sanitize the data off it when you're done with it? I'm surprised the IT guys didn't show up with crowbars demanding admin accounts, followed shortly by dismantling the thing.
That said, I'm sure it's a sweet iphone calendar thingy or whatever.
I hate wrestling, and I hate Ghost Hunters. It's all they show now. Neither one is science fiction or epic fantasy. Those idiots who took over Syfy don't understand that the people who used to watch SciFi don't watch anymore, because of their stupidity. They have killed off every show that was even moderately interesting to watch.
The whole point is that Scifi was a place where stuff that wasn't mainstream could flourish. The audience doesn't want the bland stuff that's dumbed down for people with a 50 IQ. Now the morons who own it have turned it into another version of TBS.
With Scifi dead, I have no reason to bother keeping cable other than the History channel, which is also starting to go downhill with stupid reality shows. (Pawn Stars is great though..it's actually genuine.)
It's a stupid idea.
Besides, the economic impact alone from breaking the internet in the US for any period of time makes "pushing the kill switch" political suicide anyway.
Also, it's exactly the same power as "we want to shut down the phone system so you can't communicate or call 911 during a revolt, or whenever, you know, some politician feels like it".
Also, what people don't realize is that the internet is already a loose confederation of networks owned by only a few corporations who have peering deals with each other, and they already throttle each other under the table.
There have already been incidents where the Internet experiences massive failures when these companies get into pissing contests with each other and shut off each other's access to influence negotiations.
Akamai is very different from a "two tier strategy".
Akamai is all about having local data centers nearer to high traffic population centers. This has the side effect of relieving congestion on the main internet backbones by essentially doing local caching. You want the data, and it happens to be located on a server closer to you, which by coincidence does not have to bottleneck through the backbone as much, so you get better scaling and performance. This strategy is net positive because the internet as a whole benefits by reduced waste and the hosts can deliver content more efficiently with a better user experience.
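A back-of-the-envelope sketch of why the caching approach wins (the RTT numbers and hit rate are made up for illustration, not Akamai's actual figures):

```python
# Toy model of why edge caching helps: a request served from a nearby
# cache skips the long backbone hop entirely. A cache miss pays both
# hops. All latency numbers are invented round-trip estimates.

BACKBONE_RTT_MS = 80     # client -> origin server across the backbone
EDGE_RTT_MS = 10         # client -> nearby edge cache
HIT_RATE = 0.9           # fraction of requests the edge can answer

def avg_latency_ms(hit_rate):
    # A miss still has to go edge -> backbone -> origin.
    miss = EDGE_RTT_MS + BACKBONE_RTT_MS
    return hit_rate * EDGE_RTT_MS + (1 - hit_rate) * miss

print(avg_latency_ms(0.0))       # no cache: every request crosses the backbone
print(avg_latency_ms(HIT_RATE))  # most requests never touch the backbone
```

And that's before counting the backbone congestion that the 90% of cache-hit traffic no longer contributes to, which is the "everyone benefits" part.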
A two tier internet is something *very* different. That's taking the same pipe and allocating priority to the rich and powerful at the expense of those who don't pay the premium; the same overall amount of bandwidth is available, but they want to allocate less of it to you and more of it to companies that pay. How that will actually work is that those who pay more get internet hosting that works, and everyone else gets screwed with a broken, high latency, congested network. Oh, and the price for the rest of us will also go up while the service goes down.
Everyone else should get really pissed off about this crap, once they figure out how bad the deal is for them.
Let me put it this way: if this sort of thing is allowed, more advanced internet services developed over the next few years will only be possible when they are run by huge corporations with deep pockets, and all other innovators will be shut out in the cold. And that means you get to pay more for those services because there won't be any competition.
Does your buddy know the encoding for the data?
Does his method work on "known good" flash memory?
I'd make sure that you understand the data encoding in the flash memory, and which type it is and how it maintains the data before drawing too many conclusions.
If the flash is failed due to wear, then that's expected. (if you run disk stress against an early generation flash key, you can wear it out pretty fast)
Most of the time, the need for physical drive recovery is due to one of the following cases:
The controller board on the drive went bad. Replaceable with minimal effort, and the right part.
A moving part failed (e.g. the reader arm or whatever). Replaceable with some effort, and the right part.
Somebody hosed the partition table. Usually possible to fix with a hex editor if you can manually reconstruct the table and/or use backup copies elsewhere on the disk. There is (expensive) software which will do this sort of stuff for you. Not for the faint of heart.
Filesystem corruption. Good luck, unless you have filesystem internals knowledge. Bits that aren't corrupt might be saved, with much effort, by a filesystem developer. (i.e. mere mortals are screwed).
RAID failure causes inaccessibility of some stripes. (e.g. two simultaneous disk failures in a RAID5, or one disk failure in a RAID0) You might get some data off the remaining stripes, but it is likely that unless your file happens to be smaller than the stripe size, and happens to not cross a stripe boundary, you will have lost significant portions of the data. Takes an expert to reconstruct what little can be saved.
Physical damage to the platter (e.g. a scratch). If you are lucky you might be able to read some bits off the parts of the disk that aren't damaged. Depends on the nature of the physical damage though.
Failure due to hairline fractures can sometimes be worked around by freezing the drive long enough to get the data off.
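For the RAID5 case, the parity stripe is just the XOR of the data stripes, which is exactly why one missing disk is recoverable and two are fatal. A toy sketch (illustrative Python, not a recovery tool):

```python
# RAID5 parity demo: parity = XOR of the data stripes, so any ONE
# missing disk can be rebuilt from the survivors. Purely illustrative.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte strings."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]      # stripes on disks 0-2
parity = xor_blocks(data)               # stored on disk 3

# Disk 1 dies: rebuild its stripe from the surviving disks + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == b"BBBB"

# Lose TWO disks and a single XOR equation can't pin down either one --
# which is why a double failure in RAID5 (or any failure in RAID0,
# which has no parity at all) means real data loss.
```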
I'm discounting elaborate theoretical scenarios where you use some kind of external reading equipment on the drive. In the real world, recovery companies try the above techniques and give up if they fail.
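For the partition table case, this is the kind of structure you'd be rebuilding by hand in the hex editor. Here's a minimal reader for the classic MBR layout (4 entries of 16 bytes at offset 446, 0x55AA signature at byte 510); the function name is mine, and it's a sketch, not recovery software:

```python
import struct

def read_mbr(sector):
    """Parse the partition table out of the first 512-byte sector."""
    assert len(sector) == 512
    if sector[510:512] != b"\x55\xaa":
        raise ValueError("bad MBR signature -- table likely hosed")
    parts = []
    for i in range(4):
        entry = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        status, ptype = entry[0], entry[4]          # bootable flag, type byte
        lba_start, num_sectors = struct.unpack_from("<II", entry, 8)
        if ptype != 0:                              # 0x00 = unused slot
            parts.append((i, hex(ptype), lba_start, num_sectors))
    return parts
```

If the signature or the entries are trashed, reconstruction means finding the filesystems' own start sectors elsewhere on the disk and writing these 16-byte entries back by hand, which is where the "not for the faint of heart" part comes in.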
With solid state drives, you have the same story for partition table and filesystem issues, and for controller boards, assuming the controller is a separate piece. I suppose the equivalent to the physical platter scenarios would be desoldering the flash chips and moving them to another identical drive, which is more difficult and far more expensive labor. Plus, only some of the chips may still be good, and the way wear-leveling works, your data will actually be scattered across the chips noncontiguously for the most part, so you're likely to only get partial recovery anyway.
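To see why wear leveling scatters your data, here's a toy flash translation layer that always writes the least-worn free physical block (purely illustrative; real FTLs are vastly more elaborate, with garbage collection, block remapping, and more):

```python
# Toy wear-leveling sketch: the flash translation layer maps each
# logical block to whichever physical block has the lowest erase count,
# so consecutive logical blocks land on scattered physical blocks.

NUM_PHYS = 16
wear = [3, 0, 5, 1, 4, 2, 0, 3, 1, 5, 2, 4, 0, 3, 1, 2]  # prior erase counts
free = set(range(NUM_PHYS))
mapping = {}                                  # logical block -> physical block

for logical in range(8):                      # write 8 blocks of a "file"
    phys = min(sorted(free), key=lambda p: wear[p])   # least-worn free block
    mapping[logical] = phys
    wear[phys] += 1
    free.remove(phys)

print(mapping)   # logical order in, physical scatter out
```

Pulling a raw image off a single desoldered chip therefore gets you noncontiguous fragments, not files; you'd need the FTL's mapping table (or a reconstruction of it) to put them back in order.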