This. Samuel L. Mother Fucking Jackson is always the winner.
or a great ape in disguise.
I don't know about you, but I would probably read the shit out of Thor becoming a great ape in disguise...
That would be why I posted that caveat... Obviously it isn't a 3.5" or a 2.5" platter drive; those are literally bigger than the phones most of the time, but conceptually the same principles for OS data storage and access apply (it probably isn't using magnetic platters, but neither do SSDs, and you can do the same things to both).
As far as I know, the hardware is no different from a standard platter drive, and since most phones can be mounted to and read/written from a normal PC, I really see no reason why you couldn't use a secure rewrite with something like CCleaner, or even use killdisk if you want to WIPE the phone. Don't quote me on it, because I've never tried it myself on a phone, but I would think it is fine.
Most people who say to "destroy" the drive are just being overly cautious. Anything that does multiple overwrites on all drive sectors should be fine for the most part. Technically, yes, the only way to guarantee the drive is unreadable is destruction, but for an individual that is normally over the top (says the guy who has destroyed a few old drives himself...).
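For what it's worth, the multi-pass overwrite idea is simple enough to sketch. Here's a toy Python version that works on a plain file — treat it as an illustration only, since real tools like killdisk operate on raw device sectors and deal with hardware caches and SSD wear-leveling:

```python
import os

def overwrite_file(path, passes=3):
    """Overwrite a file's contents in place several times, then delete it.

    Toy sketch only: real secure-erase tools work on raw device sectors,
    flush hardware caches, and account for wear-leveling on SSDs.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # fresh random data each pass
            f.flush()
            os.fsync(f.fileno())       # push the write past OS caches
    os.remove(path)
```

Same idea as the multi-pass modes in CCleaner's drive wiper: the original bytes get clobbered several times before the file goes away.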
I disagree. I've got the experience now, but I feel the foundation you get from the degree is more valuable. And as I stated, you also pigeonhole yourself into a technology set. You may learn *something* related to the field, but not the specifics to build a general foundation for software development. Some people may vary from that, but that is my opinion and experience.
Based on what I've heard from friends and family too, in their fields and in software development, having the degree makes a huge difference because many companies will leverage the fact that you don't have a degree to drive your salary down. When you have a degree, they have no leg to stand on, especially if it is a general computer science degree. I've heard of/seen people who had something somewhat related to computer science (telecom engineering and the like), and companies even used that against them to get their salary lower. When you have a computer science degree, there is very little they can say.
My advice is invest in the education. I had a similar situation when I was in my second year (I had been doing software development as a hobby for about six years by then). A guy came to me who had a small business developing business websites and managing them. He wanted me to come work for him, but it would have turned into a full time job and I likely would have had to cut back on study time. Yes, I would have made a lot more money earlier, but there were several pitfalls I identified and they kept me from doing it.
First, pigeon-holing yourself into a technology set is a bad idea. You may know HTML/CSS/PHP/MySQL, but those technologies have somewhat limited job opportunities. If you have a very strong fundamental understanding of Computer Science (and, in spite of the nay-sayers, the piece of paper to back it up), that becomes a huge asset in the job market.
Second, even taking the money now, you will limit the amount you make in the future and in the long run end up making a lot less. The guy offered me pretty good money, but I already make double what I would have made with him. In three years I have out-earned what I would have made in six working there, and on top of that I still have a LOT more room to move up doing what I love. Not to mention I now have the option to go back for my master's and open up even more future opportunities. There are always outliers who will drop out into some great thing and make tons of money (Gates, Zuckerberg, etc.), but the odds are really not in your favor. If you are going to make tons of money, it will probably be later in life anyway.
Third, you don't truly know what you may enjoy yet. I went through several iterations of what I wanted to do within the field before I settled on what I do now (heavy business logic and engines, as well as architecture-level software development). I originally wanted to do game design, then moved a bit into web, and then a bit into security (I still do a bit of all three, but they are not my passion). My senior year is when I really figured out where I wanted to go, because I saw and tried a bit of each part of the field. You may end up sticking with web development as a passion, but I would give it some time first. The experience you get from going through a CS program is very different from just reading up on the subject and playing with things yourself. Not to mention that having a basic understanding of the other aspects of Computer Science will help your chosen field. I honest-to-god hate graphics work, but understanding the basics of it makes things a lot better when I write code other people have to hook into.
The one thing you will want to do, though, is work on some personal projects, which it sounds like you already do. I did several in my spare time while getting my degree, and they greatly impressed the employers who looked at me. Prioritize your studies first, but the side projects can give employers an idea of what kind of initiative you take and your level of creativity, and even let them somewhat see how you've grown as a developer (which gives them good indicators of how much you can grow with a professional entity and access to much better resources). Keep with it, I say; once you graduate you will see how valuable that degree ends up being.
There is always a way to game the system. In my own anecdotal experience, I agree that the professor makes a massive difference. In both CS and non-CS courses I understood subject material so much better when I had an engaging professor. Hell, when I took data structures, the concepts I was shaky on from discrete math became substantially clearer thanks to the instructor I had (and he was just a teaching fellow!). I can program and automate a lot, but a proper teaching program? I believe there are way too many cases to make it truly effective without some crazy breakthrough in something like adaptive AI and human simulation.
To be fair, unless you actually GET a STEM degree, that is pretty much what everyone does. It was rather pathetic when I took my math placement test for college: out of the probably 300-ish people taking it during my introduction block/week, maybe 2 or 3 of us (I know because the lab tech told me) tested out to Calc 1, which was the highest you could get (another guy and I were from the same high school class and had both taken our AB Calc exams, so we already had credit). 70% tested into either college algebra or one class higher. When reviewing other course catalogs, there was hardly ANY requirement to get to Calc 1 unless you were doing a STEM degree.
Hell, when I did digital logic, half the class was fucking horrible at Boolean arithmetic of any form, and they WERE engineering students. I quickly discovered most of them were cheating off of the handful of us who actually understood how to do it.
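And Boolean arithmetic is one of those things you can even check exhaustively in a few lines — for instance, verifying De Morgan's laws over every input combination (quick Python sketch):

```python
from itertools import product

# De Morgan's laws: not (a and b) == (not a) or (not b),
# and the dual: not (a or b) == (not a) and (not b).
# Two variables means only four input combinations, so just try them all.
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
    assert (not (a or b)) == ((not a) and (not b))

print("De Morgan's laws hold for all inputs")
```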
- When writing code, you are more likely to write comments, again because it takes less effort.
Rather, I write comments because they help clarify why certain code exists in the way it does, and in some cases, what it's doing. Have some mercy on the next guy who has to work on it.
There are actually quite a few instances where you can put too many comments in the code. Oftentimes, if there are lots of comments explaining what a logic block does, rather than just explaining design decisions, it means you are not using meaningful class and variable naming. There has actually been a very good movement at my company over the past year or so to drastically improve coding standards, and that was a big point that was brought up (I am quite glad too; between that and the damn Hungarian notation in a strongly typed language, some of our legacy code was just ugly as hell for no reason).
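To illustrate the point (hypothetical names, Python just for brevity): the first version needs a comment to restate what it does; the second version's names carry the "what" on their own, so the only comment left explains a design decision:

```python
# Comment-heavy version: the comment merely restates what the code does.
def calc(d):
    # filter out entries whose balance is below the minimum threshold
    r = []
    for x in d:
        if x[1] >= 100:
            r.append(x)
    return r

# Self-documenting version: meaningful names replace the "what" comment;
# the remaining comment records the "why" (a design decision).
MINIMUM_BALANCE = 100  # accounts below this are excluded per billing policy

def accounts_above_minimum_balance(accounts):
    return [acct for acct in accounts if acct[1] >= MINIMUM_BALANCE]
```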
Coding is necessary to competently use a computer to solve problems.
I have to strongly disagree here. I work as a software engineer and I have seen both sides of this coin. I have seen multiple people working as software engineers who could model and create respectable algorithms but couldn't use a computer beyond that to save their lives. CS =/= IT. I have also seen people who couldn't write "Hello World" if I gave them Eclipse and had it auto-create and format the shell for them, but who could do things with Excel and other pieces of software using features I didn't even know that software had.
I am all for this movement of needing more software developers, because we have tons of work to be done and nowhere near enough people (of course this kind of works in my favor, but that is neither here nor there), but the bottom line is that software development is not some elementary skill that you should teach every kid in the world. Some people are just not geared to do it. That doesn't mean software developers are inherently better or something, just different. There are still plenty of things those people can do. I just feel like we should make sure the opportunity is there (which in a lot of cases it is not right now), not try to cram it down everyone's throat (like some of these movements are doing, and in many cases they seem to have only a rudimentary understanding of what they are trying to do).
Code.org specifically I am still on the fence about, but quite a number of these other movements are just plain hogwash ("learn to code in a year, in your spare time!" yeah, right).
Googling is hard. I now have proof I am a masochist; I keep answering people who are clearly just trolling.
Let's not start arguing semantics; it's a substitution cipher. You could call it a key of sorts, I guess? Not the same as the key you would use for AES or others, though.
Hmmmmmm, that is a very good point and I may actually steal this idea. I never thought about doing that to break their crap. I've seen an Adobe crack for CS5.5 that does something similar for the DRM (it is downright hilarious that Adobe's DRM is that bad).
The definition of decrypting something is hazy at best, as technically using a dictionary attack against a hash function both "decrypts" it and is lossless, assuming you have any related salts, etc. (this includes even things like SHA-2, because with enough time/resources, admittedly ludicrous amounts, it can be "decrypted" or "de-hashed").
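That kind of dictionary "de-hashing" is trivial to sketch when the salt is known (Python's hashlib; the salt and passwords here are made up for illustration):

```python
import hashlib

def sha256_hex(salt: str, password: str) -> str:
    """Hash a salted password with SHA-256, returning a hex digest."""
    return hashlib.sha256((salt + password).encode()).hexdigest()

def dictionary_attack(target_hash: str, salt: str, wordlist):
    """Recover the preimage by hashing each candidate with the known salt."""
    for candidate in wordlist:
        if sha256_hex(salt, candidate) == target_hash:
            return candidate
    return None  # no candidate in the wordlist matched

# Hypothetical scenario: the attacker has the stolen hash, the salt,
# and a wordlist containing the real password.
salt = "s4lt"
stolen = sha256_hex(salt, "hunter2")
print(dictionary_attack(stolen, salt, ["password", "letmein", "hunter2"]))
```

The hash function itself never runs backwards; you just keep hashing guesses forward until one matches, which is why wordlist coverage (and for brute force, time) is the only real limit.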
Speaking theoretically, it should really be acceptable to say "one-way encryption method," although, as of course everyone was undoubtedly going to point out when I said that, hash functions are technically not "encryption" depending on how strict a definition you use. Really it is a grey (or gray, yes I am being a smartass) area, and we are all nitpicking the shit out of each other. It is kind of personal preference, I think: if I referred to it as one-way encryption, people would know that means a hash, and if I said hash, people would think of encryption as well, since both are part of cryptography.
If we are getting into the technical definition, bit length, key length, etc. don't really pertain to whether something is encryption or not. By definition a Vigenère or Caesar cipher is considered an encryption method (and Caesar doesn't even use a key), but those are very primitive forms of encryption. If I remember correctly, yes, you can still have collisions on something like your example depending on the method used (it's been a while since I did any of that, so I am a bit rusty). MD5 has lots of known collisions and even SHA-1 has some known collisions.
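For reference, the classic Caesar cipher with its fixed shift of 3 (which is why I'd say it has no key in the modern sense) is only a few lines; a Vigenère is the same idea except the shift varies per letter according to a keyword (Python sketch):

```python
SHIFT = 3  # the classic Caesar shift; Vigenère varies this per letter via a keyword

def caesar_encrypt(plaintext: str, shift: int = SHIFT) -> str:
    result = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            # rotate within the 26-letter alphabet, preserving case
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces/punctuation untouched
    return "".join(result)

def caesar_decrypt(ciphertext: str, shift: int = SHIFT) -> str:
    # decryption is just shifting back the other way
    return caesar_encrypt(ciphertext, -shift)

print(caesar_encrypt("Attack at dawn"))  # → "Dwwdfn dw gdzq"
```

With only 25 meaningful shifts, you can break it by trying them all, which is exactly why it counts as encryption by definition while being utterly primitive.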