This is also nonsense. H1B workers are paid at least as well as their peers and cost significantly more once legal and HR costs are included. The only valid criticism is that they are afraid to change jobs or rock the boat, because if they are let go they must return to their home country. But right now talent is in such short supply that no one wants to upset a good worker. Big tech companies are drowning because they can't hire fast enough. Migrant laborers may create an underclass; highly skilled H1B workers do not.
I am involved in hiring decisions at my company, and was at many companies before this one. I see who the applicants are. I see what the general talent pool looks like. I have several peers who are H1B, and I can tell you they make very good money. I am simply not able to hire a comparable citizen; otherwise I would hire both.
This AC is mostly nonsense with regard to the state of the industry. I agree about technical colleges, though.
Companies would love to hire locally rather than on H1B visas if the talent existed. Blaming H1B is racist scapegoating. There are plenty of programmers out there; there aren't plenty of good programmers. If you learn the same web scripting language as everyone else and expect to make six figures right out of school, you're in for a surprise. There are, however, a LOT of companies hiring near six figures for talent straight out of a four-year program.
If you spend your four years writing only the programs assigned to you, I'm sure it is difficult to find a good job. However, if you take an interest in open source, do a good internship, or show any capability beyond filling in the last tenth of the program your professor left blank for you, you'll have no trouble getting a job in today's market. What you get out of it is proportional to what you put in, though. You can't just skate through and expect someone to hand you a pile of money. You're not entitled to anything just because you went through the motions and did what was laid out in front of you. You're competing with everyone else who did the same, including people in other countries.
The crack at management is also unfounded. Everyone seems to know examples of mismanagement that led to the failure of companies and the dissatisfaction and disenfranchisement of employees. Why then is it so hard to conceive that management is a difficult job that few people excel at? There are definitely good managers out there who get more work out of their reports while keeping them more satisfied. You should learn to spot them and maneuver onto their teams at your earliest opportunity.
I'm a 10+ year FreeBSD contributor. You're all missing the point. Linux and BSD target different markets and are optimized in every respect (organization, release process, license, code) to fit those different needs. One isn't better or worse. Obviously Linux is larger than BSD in every way, but larger doesn't mean better or we'd all just be using Windows. This isn't a question of LLVM being better than GCC, BSD being better than Linux, or the BSD license being better than the GPL. They are simply different and do different things. Use what's appropriate for your needs and leave it at that.
I can say, as a long-time contributor to open-source software, that I am disgusted at reading the comments of blowhard 'enthusiasts' who denigrate the hard work and contributions of hundreds of people when they get into these pissing matches. I am friends with Linux kernel contributors, and I can guarantee we don't flame each other in this manner.
You're missing something.
Erase blocks and data blocks are not the same size. The block size is the smallest atomic unit the operating system can write. The erase block size is the smallest atomic unit the SSD can erase. Erase blocks typically contain hundreds of data blocks; they must be relatively large so they can be electrically isolated. The SSD maintains a map from a linear logical block address space to physical block addresses. The SSD may also maintain a map of which blocks within an erase block are valid, and it fills invalid slots as new writes come in.
Without TRIM, once written, the constituent blocks within an erase block are considered valid forever. When one block in the erase block is overwritten, the whole thing must be read, modified, and written (RMW'd) to a new place. With TRIM, the drive controller can be smarter and relocate only those blocks that still have a valid mapping. This can drastically reduce the overhead on a well-used drive.
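The mapping and erase behavior described above can be sketched as a toy flash translation layer. This is purely illustrative (the class, page counts, and method names are my own invention, not any real SSD firmware): it shows how TRIM lets the controller skip deleted data when an erase block is reclaimed, instead of copying every page forward.

```python
# Toy flash translation layer (FTL) sketch. All names and sizes here are
# illustrative assumptions, not any real drive's design.

PAGES_PER_ERASE_BLOCK = 4  # real drives use hundreds of data blocks

class ToyFTL:
    def __init__(self, num_erase_blocks):
        # l2p: logical block address -> (erase block, page index)
        self.l2p = {}
        # valid[eb][page] = logical address stored there, or None if stale
        self.valid = [[None] * PAGES_PER_ERASE_BLOCK
                      for _ in range(num_erase_blocks)]
        self.copies = 0  # pages physically relocated during erase (overhead)

    def write(self, lba, eb, page):
        old = self.l2p.get(lba)
        if old is not None:
            # Overwriting an LBA is the only way the drive learns the old
            # physical page is stale -- unless it is told via TRIM.
            oeb, opage = old
            self.valid[oeb][opage] = None
        self.l2p[lba] = (eb, page)
        self.valid[eb][page] = lba

    def trim(self, lba):
        # TRIM: the OS tells the drive this LBA no longer holds live data.
        loc = self.l2p.pop(lba, None)
        if loc is not None:
            eb, page = loc
            self.valid[eb][page] = None

    def erase(self, eb, dest_eb):
        # Before erasing, every still-valid page must be copied out (RMW).
        for page, lba in enumerate(list(self.valid[eb])):
            if lba is not None:
                self.write(lba, dest_eb, page)
                self.copies += 1
        self.valid[eb] = [None] * PAGES_PER_ERASE_BLOCK

# Without TRIM: the filesystem deleted LBAs 1-3, but the drive doesn't know.
ftl = ToyFTL(2)
for lba in range(4):
    ftl.write(lba, eb=0, page=lba)
ftl.erase(0, dest_eb=1)
print(ftl.copies)  # all 4 pages copied, though 3 hold deleted data

# With TRIM: only the one still-live page is relocated.
ftl2 = ToyFTL(2)
for lba in range(4):
    ftl2.write(lba, eb=0, page=lba)
for lba in (1, 2, 3):
    ftl2.trim(lba)
ftl2.erase(0, dest_eb=1)
print(ftl2.copies)  # only 1 page copied
```

The difference in `copies` is exactly the write amplification TRIM saves on a well-used drive.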
I am incredibly offended that you would compare this bloated, brute-force abomination of a chip to the incredibly well designed, elegant, and efficient Alpha (may it rest in peace).
"Like many things, there are always tradeoffs around, and if the goal is to play the "my file system has a longer d*ck" game, it's almost always possible to find some benchmark which "proves" that one file system is better than another. Yawn..."
Really, Ted, where did I mention that softdep was better? This is a bit inappropriate. You seem keen on convincing everyone that softdep is terrible, though I can't imagine why. I'm not knocking your work; I've read your blog a bit, and you're doing some great stuff. I'm just trying to clear up misconceptions.
There's a lot of misinformation in this thread about softupdates. I only have so much time to reply, so I'll hit a few key points. I'm the author of the journaling extensions to softupdates, so I have some experience in this area.
The notion that softupdates was so complex that it inhibited new features in ffs is bogus; I've seen it repeated a few times. There simply was not much pressure for those features, and the filesystem metadata did not support them until ufs2. The total amount of code dedicated to extended attributes in softupdates can't be more than 100 lines. ffs sees fewer features because we have fewer developers, period.
Furthermore, softupdates is just a different approach. It is no more complex than journaling. When I review a sophisticated journaling implementation such as XFS, I see more lines of code dedicated to journaling and transaction management than softupdates requires for dependency tracking. I have worked on a number of production filesystems, and while softdep is definitely not trivial, neither were any of the others, unless you compare against synchronous ufs. I think a lot of people who are familiar with COW and journaling are looking at this unfairly because they already know another system and forget how long it took to become comfortable with it.
In CPU benchmarks softdep costs more than async ffs, this is true. However, rollbacks are actually quite infrequent because our buffer cache attempts to write buffers without dependencies first. Generally there are enough of those, and completing them satisfies dependencies on other buffers, so you can keep the pipeline busy. Looking at the code size and depth of any modern filesystem, it's clear that a lot of CPU is involved there too. Are journal blocks not consuming memory? Is the transaction tracking free? Most dependency structures are quite small compared to the copy of a metadata block generated for a journal write.
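The write-ordering idea above can be sketched in a few lines. To be clear, this is a toy model of the concept, not FreeBSD's buffer cache, and the buffer names are hypothetical: flushing dependency-free buffers first lets each completed write release other buffers, so the pipeline stays busy and rollbacks are rarely needed.

```python
# Toy sketch of dependency-ordered buffer flushing, the idea behind why
# softdep rollbacks are infrequent. Not actual FreeBSD code.

from collections import deque

def flush_in_dependency_order(deps):
    """deps: buffer name -> set of buffers that must reach disk first."""
    deps = {b: set(d) for b, d in deps.items()}
    dependents = {b: set() for b in deps}
    for b, blockers in deps.items():
        for other in blockers:
            dependents[other].add(b)
    # Start with buffers that have no outstanding dependencies.
    ready = deque(b for b, blockers in deps.items() if not blockers)
    order = []
    while ready:
        buf = ready.popleft()
        order.append(buf)               # "write" the dependency-free buffer
        for dep in dependents[buf]:     # its completion releases others
            deps[dep].discard(buf)
            if not deps[dep]:
                ready.append(dep)
    return order

# Hypothetical example: a directory block referencing a newly allocated
# inode must not reach disk before the inode itself does.
order = flush_in_dependency_order({
    "inode": set(),
    "bitmap": set(),
    "dirblock": {"inode"},
})
print(order)  # the inode is flushed before the directory block
```

As long as enough dependency-free buffers exist at any moment, the flusher never has to roll a buffer back to an earlier state just to get something onto disk.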
NetBSD abandoned softdep for something much simpler because they didn't have the resources to fix the bugs in it, and they didn't incorporate the fixes from FreeBSD. Their journaling implementation is similar to our gjournal, which is mostly filesystem-agnostic and does full-block logging in a very simple fashion.
The journaled filesystem project was started simply to get rid of fsck. I think this hybrid solution is very promising: it gives us a place to issue barriers that can cover arbitrary numbers of filesystem operations, and the journal write overhead is much lower than with traditional journals.
And regarding benchmarks: FreeBSD doesn't really have a comparably developed journaling filesystem to benchmark softdep against. I think it's unreasonable to compare Linux with ext4 against FreeBSD with ffs+softdep for the purpose of evaluating filesystem design; too many other factors come into play.
You can read more about softdep journaling at http://jeffr_tech.livejournal.com/
You're talking to a group of people, most of whom had regular access to internet pornography throughout their teenage years. I'd wager most still managed to become normal, productive citizens. I bet a lot of them even did their homework. Not that I did, but it certainly wasn't because of porn. You can only wank for so many hours in a day, hormones or not.
Censoring kids just makes them sheltered and naive, or criminals when they circumvent it.
We don't really understand it, so we'll give it to the programmers.