DreamHost has a diverse array of services, geek-oriented tech support, and a community oriented around tech-friendly features. I've been very satisfied for many years. If they don't support it, I guarantee one of the in-house developers has an unofficial install working somewhere that they'd be happy to copy over.
Of the nine U.S. Unified Combatant Commands, AFRICOM (United States Africa Command) is the newest. Both W and Obama have expressed significant interest in expanding the role of U.S. operations in Africa, for purposes of counter-terrorism and a desire to improve stability (ironically, special operations forces are historically used to invoke instability in a nation-state). The Obama administration's operations in Libya during its civil war were actually AFRICOM's coming-out party, its first chance to be a real boy.
Since then, AFRICOM has moved forward in supporting policy roles by expanding U.S. military facilities, particularly those supporting drone operations, on the continent. This implied that special operations were the next step--why just spy on a terrorist cell when you can try to capture its leader, too?--so learning about these events is not a big surprise. However, the relative failure of these efforts (at least the Somalia operation--the Tripoli operation may or may not be a disappointment at the policy level, depending on how sincere their protests are) is something of a black mark on AFRICOM's plans. There is some serious head-scratching going on, I am certain, and the role of tactical operations is likely being re-evaluated.
The shutdown has not yet affected me, save to the extent that we can get a heck of a lot more work done without the government contractors constantly throwing spears and interfering with forward movement for the sake of satisfying their own egos. The rest of my industry's getting hammered--NASA in particular--so I know how fortunate I am. Still, I can't help feeling that the shutdown as a whole is experiencing a great deal of hype, and I'm tremendously disappointed in the way so many officials are doing their best to exaggerate and exacerbate the impacts ("shut down" Twitter feeds, websites, and parks being the three best examples).
Children can learn programming from a very early age. Kindergarten is a good place to start, because by that time interested parents (not teachers) have likely introduced the child to the only two real prerequisites:
- Basic arithmetic (2 + 1 = )
- Elementary logic (this-then-that, if-then)
The remaining skills necessary to become interested in, and capable of, writing code can be picked up concurrently with coding itself:
- Variables and assignment
- Control flow
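Those two skills are enough for a genuinely satisfying first program. Here's the sort of thing I have in mind--sketched in Python rather than BASIC, with an arbitrary name and numbers:

```python
# Variables and assignment:
name = "Ada"
total = 0

# Control flow (a loop and a decision):
for number in [1, 2, 3]:
    total = total + number       # assignment again: update the variable
    if total > 3:
        print(name, "passed three!")

print("Total:", total)           # 1 + 2 + 3 = 6
```

Nothing here needs more than counting and if-then, which is exactly the point.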
Now that we've dispensed with the technical items, we need to address something amiss in this discussion--and where TFA gets it wrong. Teaching your kids to program is NOT a matter of pushing your school to displace an art, music, or language class. If you are truly interested in getting your kids to code, this is a topic best taught by YOU. My daughter is currently 3 years old, and while she's fascinated by computers, we've got to tackle the prerequisites listed above first. (I'm guessing we'll start actual coding around the time she turns 5.) I'm eagerly chomping at the bit for the opportunity!
Since there seems to be a dearth of actual parents in this discussion, let me point out one more thing. School is fine for teaching fundamentals (which you can dispatch to large groups of kids simultaneously) and some other topics (where parents may not have sufficient familiarity). When you are both a parent and a geek, though, you can't WAIT to share the really awesome tidbits of knowledge with your kid--and even if it is something their school covers, you're going to be so excited that you'll show it to them first, just to make sure they know how amazing the world really is. That's why we listen to Symphony of Science in the car, and why lately we've been spending our spare time building spinning machines. When we finally sit down and write our first BASIC program together (screw you, Dijkstra--you were an amazing scientist, but a horrible educator), it's going to be for the same reason: I want to share it with her!
This approach appears to be slightly different from traditional frame stacking, in that they are utilizing a low read noise (really, 1e- doesn't pass the smell test--there's got to be some tradeoffs) to take a large number of frames with short exposure times. The only other interesting approach they are taking is searching the velocity space, for which (given 1000 points) they need a 2500-node HPC cluster (you do have one of those in your closet, yes?). From their description, they are also only searching 1px/frame movement, which runs into PSF constraints when the atmosphere starts blurring your crazy-sensitive focal plane (they give a token nod to this problem, and then promise to look into it in a future paper).
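To make the velocity-space search concrete, here's a toy shift-and-stack sketch in Python/numpy--my own illustration with made-up numbers, not their pipeline (np.roll's crude integer-pixel shifts stand in for whatever sub-pixel machinery they actually use, and a 5x5 velocity grid stands in for their 1000-point search):

```python
import numpy as np

def shift_and_stack(frames, vx, vy):
    """Sum frames after undoing a hypothesized (vx, vy) px/frame drift."""
    stacked = np.zeros_like(frames[0])
    for i, frame in enumerate(frames):
        # Integer-pixel shift; a real pipeline would interpolate sub-pixel.
        stacked += np.roll(np.roll(frame, -int(round(i * vy)), axis=0),
                           -int(round(i * vx)), axis=1)
    return stacked

# Synthetic data: a faint mover drifting 1 px/frame in x through read noise.
rng = np.random.default_rng(0)
n_frames, size = 20, 32
frames = rng.normal(0.0, 1.0, (n_frames, size, size))  # read noise only
for i in range(n_frames):
    frames[i, 16, 5 + i] += 3.0                        # faint moving source

# Search a small grid of velocity hypotheses; only the correct one
# lines the source up so its signal adds coherently in the stack.
best = max(((vx, vy) for vx in range(-2, 3) for vy in range(-2, 3)),
           key=lambda v: shift_and_stack(frames, *v).max())
print("Best velocity hypothesis:", best)
```

The per-frame source is at SNR 3 (invisible against noise peaks); the aligned stack puts ~60 counts on one pixel against a summed-noise sigma of ~4.5, which is why the matched hypothesis jumps out. Multiply the grid out to their real search space and you can see where the 2500-node cluster comes in.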
There are a couple of items that raised some red flags, including errors in formulating SNR calculations (who measures it in sigma? optical sensor Pd--probability of detection--is typically computed for SNR = 6 or 10, in linear space), the avoidance of frame-stacking technique comparisons (as you noticed), and the fact that they conclude with a single-paragraph (5 1/2 lines) consideration of instrumental effects (everything depends on your sensor performance, noise, and sensitivities) while telling us to wait for yet another paper to straighten the issues out.
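For reference, the linear-space scaling that makes this approach work at all is simple in the read-noise-limited regime: signal adds linearly over N frames while read noise adds in quadrature, so SNR grows as sqrt(N). A quick back-of-the-envelope in Python (my numbers, not theirs):

```python
import math

def stacked_snr(n_frames, signal_per_frame, read_noise):
    """SNR of n co-added frames when read noise dominates."""
    total_signal = n_frames * signal_per_frame
    total_noise = math.sqrt(n_frames) * read_noise  # adds in quadrature
    return total_signal / total_noise

# A source far too faint to see in one frame (SNR 0.5 at 1 e- read noise)...
print(stacked_snr(1, 0.5, 1.0))     # 0.5
# ...clears a Pd-style SNR = 6-10 threshold after a few hundred frames.
print(stacked_snr(400, 0.5, 1.0))   # 10.0
```

Which is also why that claimed 1 e- read noise matters so much: double the read noise and you need four times the frames for the same SNR.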
Of course, I'm being overly critical. I'm actually looking forward to seeing what this approach can do, given that everyone's pretty much given up on characterizing this size class of asteroids in the past. 90% of what they are saying is actually consistent with theory; it's the practice bit where the red flags are going up. It's worth keeping in mind, though, that you don't publish CalTech papers and get time on the Palomar 200" by being a dim-witted slacker.
As an engineer who lives practically next door to one of the hubs of so-called 'Silicon Beach', let me tell you that there is more publicity than business behind this concept. Don't get me wrong--there are legitimate reasons for considering the Los Angeles area a decent tech hub. A number of my favorite companies (Dreamhost!, and others) are located here, typically somewhere between the downtown area and Santa Monica. One of the biggest benefits is the thriving venture capital communities in the downtown and Pasadena areas, which is an understated-but-critical component to any truly substantial claim to 'Silicon [noun]'. (Strong venture capital communities also come with excellent startup support, commonly in some form of incubator.) You also get some great synergies between the educational institutions in the area (USC, UCLA, CalTech, Claremont colleges...), ongoing technology business efforts, and the parallel (not-as-mighty-as-it-once-was-but-still-substantial) aerospace community. We're fortunate to have a new (at least moderately) technically literate mayor, who's been pushing this 'Silicon Beach' idea quite a bit.
But that's the end of the good news. Here's why TFA totally misses the mark, and (most likely unintentionally) buys into one of the latest political fads here in southern California.
First, the MySpace influence is strongly overrated. Businesses fail and shrink all the time, and--surprise--when they do, talented engineers will go off and do other things. The article paints a picture that implies that MySpace was this huge supergiant of a tech star, which went nova and whose subsequent remnants collected to spawn a whole new constellation of stars. In reality, MySpace was never really that big of a tech phenomenon or local influence. It's nowhere near as substantial as the unprecedented collapse of the southern California aerospace community (a PDF--page 11 is most interesting) after the Cold War. If you don't want to click the link, here's a summary: southern California employed 271,700 aerospace jobs in 1990; that number dropped by 57% by 2000, and continues to plummet (88,400 in 2011). It really makes MySpace look like a drop of piss in a thunderstorm.
Second, it's easy to underestimate the fact that southern California--even just 'Los Angeles'--is a really, really big place. The Silicon Valley is ~46km long (I'm measuring from San Mateo to downtown San Jose--the width, of course, is mere miles), and Wikipedia puts the population between 3.5 and 4 million. By comparison, Los Angeles county alone is 76km (Santa Clarita to Long Beach) by 74km (Santa Monica to Claremont), with a population of 10 million souls. Why is that relevant? It shouldn't be a surprise that, in a really big area, there are going to be a few winning tech companies. Few people can even agree on what 'Silicon Beach' constitutes. Is it supposed to be Playa Vista, with the new Fox technical studios (a la MySpace), Electronic Arts offices, and a few new offices in newly-remodeled aircraft hangars? Is it sort of the west side in general, where Google has recently consolidated a new office (in Venice Beach), and Activision-Blizzard is headquartered (Santa Monica)? Is it the general downtown vicinity, including North Hollywood and other light industrial areas, where established tech businesses have high-rise offices and new startups are renting out old movie studios for a steal of a rate? Is it the city of Los Angeles in general, with a new tech-friendly mayor, or the county, including tech-friendly Pasadena (CalTech and JPL, plus a lot of venture capital organizations)? Or does it also include Orange County, host of a whole slew of tech-sector ecosystems centered around U.C. Irvine (including the Blizzard campus itself)?
In my opinion, one of the biggest reasons Los Angeles (let's say we're talking about the county here) will never really be a serious competitor to Silicon Valley is the cost of living. Yes, it's expensive in Silicon Valley communities--even more so than most places in Los Angeles. But in L.A., cost of living is very inelastic. If you don't mind living in the hood (and I mean the Real Honest-to-God Hood), with bars on your windows and regular gunshots across the street, you might be able to rent an apartment for a reasonable rate. However, if you're working in one of the west-side or downtown tech businesses, aren't already a millionaire, and have aversions to hood housing (perhaps you have kids for whom you need to consider safety and schooling), your options are extremely limited. You either suck it up and shell out $2000+/month for a family apartment, or you live in your car--and I guarantee you that L.A. freeways (and L.A. public transit options) are worse than you have heard or imagined. In comparison, places like San Diego (which is historically also a high cost-of-living area) at least have flexibility and choices that don't involve risking your life, and effective ways to get around.
Another factor, which is less quantifiable, is the lack of a true tech startup culture. In short, this means that finance sources are highly aggressive in performance expectations and demand unrealistic returns on their investment (think Hollywood--spend $20M for a summer blockbuster that rakes in $40M despite dismal cliches and poor reviews). In a culture where new businesses are given room to let concepts thrive and grow before proving or justifying their technologies, you have more patient expectations--and that isn't the culture here. You also don't have a geek-dominant business culture or social climate in Los Angeles, which translates to a strong lack of facilitators that are taken for granted in established environments like Silicon Valley (think workforce expectations, technological infrastructure, resource availability, etc.)
In my experience, what the 'Silicon Beach' concept really comes down to is political marketing. This isn't a bad thing in and of itself, but it should be recognized in order to properly consider and evaluate statements like those in TFA. As previously mentioned, we have a new tech-friendly mayor who's very eager to capitalize on the relative advantages of southern California business and living when compared to Silicon Valley. There's also concern about the flight of California businesses to more business-friendly climates like Texas, and pointing out the spurts of growth here and there is one way to counter that narrative. This is also a way for a new mayoral administration to rally constituents around a platform plank that involves new jobs, future-oriented growth, shiny business opportunities, and (hopefully) new tax revenue down the road.
I'm probably coming off as overly cynical. I should point out that there is another claimant to the 'Silicon Beach' title, which shares very few of the aforementioned disadvantages: San Diego. There's a strong and thriving biotech sector, with plenty of engineers from the shrinking defense industry looking for new opportunities. There's also a thriving tech/software/developer scene, including active communities and a small number of tech incubators. Probably the biggest shortfall is the lack of an established, sizable, and experienced venture capital community, but that could easily change with the advent of perceived opportunities. It's not a sure thing, but I know I'd much rather be working at a startup in San Diego than here in Los Angeles. Of course, the grass is always greener on the other side!
Johnson's biography is quite a good read--it's not going to win a Nobel Prize in Literature any time soon, but the content is a delight for aerospace engineers like me. Johnson WAS Skunk Works, for many, many years.
Funny note about Rule 4: he's advocating version control for engineering projects. If he were alive today, I suspect Mercurial would give him a hard-on.
Rule 4: A very simple drawing and drawing release system with great flexibility for making changes must be provided.
Last point: It's funny how many of the 14 Rules are anathema to modern management practice, particularly as implemented by the dominant aerospace firms. I'm not saying you can solve all of the industry's problems by requiring adherence to the 14 Rules, but you'd come close. The parent post already mentioned Rule 14, and implied its laughable contradiction with current pay scales in the engineering industry; Rule 5 directly contradicts the micro-managed, hyper-documented approach of modern systems engineering standards. Rule 12 has been repeatedly blown away by deeply-ingrained contractor dishonesty w.r.t. pricing and scheduling estimates, and by contractees' fanatical devotion to requirements creep and abrupt project changes (although in fairness, the budgeting environment doesn't help things); as a result, it's hard to imagine that this kind of trust will ever again exist between the government and the large aerospace contractors. Rule 10 is also a victim of this phenomenon.
I try to avoid becoming involved with
It's my conclusion, from experience, observations, and a lot of thought on the topic, that this is a good approach. It forces highly-intelligent technical people to spend significant time outside of their comfort zone, which in itself is a valuable experience. What's more, because so many of these highly-intelligent technical people are able to follow Hum topics that actually interest them (you are required to choose an area of concentration for at least four of your eleven courses), there's a lot of critical thinking going on between the students and Hum professors. This prevents the latter group from being too fringe-ideological (you know what I'm talking about--the avowed communist economics teacher who tells engineers that it is immoral for them to make a profit from their work, for one arbitrary example); professors actually have to stay on their toes, and some of them even enjoy dropping marginal / controversial / partisan topics into their courses every now and then just to watch the students react and critically tear it apart. Lots of fun discussions occur.
That having been said, TFA is completely the wrong approach. This idea of 'science-vs-non-science' is absurd, but even worse than the condescending elitism shown by the scientific side of the discussion is when you find gentlemen like this author, who somehow manage to convince themselves that they are the Secret Guardians of the Superior Non-Rational Secrets to the Universe, which mere reason (and scientists who use it) can never hope to comprehend. (Don't tell me you haven't met anyone like this--there are at least four in every Coffee Bean at any given point in time, reading their latest Jenny McCarthy blog post and trying to pass head shots off to anyone who looks like an acting agent.) This isn't skepticism (which is a profoundly scientific trait, by the way, despite what TFA tries to advocate); this is blind contrarianism with a dash of well-read, modern pseudo-intellectualist voodoo.
Social changes are hard to enforce--the most reliable way is to thoroughly screen who you hire and, reciprocally, who you work for, both of which are moot points for situations like this. Therefore, I am going to enumerate three easy, albeit brute-force, steps that should help change the way you and your engineers work and think. Devious, yes--but wait on your judgement until you've read through the list; they're not all that bad.
- * Start managing a trunk repository, and require all engineers to branch and merge as needed. Nearly every version control tool worth a darn nowadays can handle distributed repositories, and most are easy to learn.
- * Document, document, document. Interns are great for this, as it's also a good way to learn a new code base. It's also easier to track and sort new feature requests when you know where they stand with respect to your current version. This doesn't have to be Word DOCs--two easy and useful alternatives are wikis and PowerPoint (a lot easier to edit and communicate with the former; a lot easier to diagram and illustrate with the latter).
- * Establish an expectation of workflow--specifically, bug and feature pipelines. A substantial part of this duty falls on your shoulders, as you must always know (or be able to look up) who is working on what, and where that piece fits into the big puzzle. Particularly important is the feature pipeline, as this is your interface to managers and others who interact with your code base.
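To illustrate what I mean by always knowing (or being able to look up) who is working on what, here's a toy sketch in Python--the IDs, owners, and version numbers are all hypothetical, and a real team would obviously grow into a proper tracker:

```python
# Minimal feature/bug pipeline: enough to answer "who owns what, and
# which release does it land in?" at a moment's notice.
pipeline = [
    {"id": "BUG-101", "owner": "alice", "kind": "bug",     "target": "2.3.1"},
    {"id": "FEAT-17", "owner": "bob",   "kind": "feature", "target": "2.4.0"},
    {"id": "FEAT-18", "owner": "alice", "kind": "feature", "target": "2.4.0"},
]

def work_for(owner):
    """Everything currently on one engineer's plate."""
    return [item["id"] for item in pipeline if item["owner"] == owner]

def release_contents(target):
    """Everything slated for a given release--your manager interface."""
    return [item["id"] for item in pipeline if item["target"] == target]

print(work_for("alice"))          # ['BUG-101', 'FEAT-18']
print(release_contents("2.4.0"))  # ['FEAT-17', 'FEAT-18']
```

The point isn't the data structure--it's that both questions are answerable in one lookup, which is exactly what managers and fellow engineers will ask you for.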
I'm assuming, of course, that you haven't already done these things. If you have, there are many ways to take it to the next level: start using Bugzilla (or a similar tool) to track bugs and features; organize your documentation into User Guides, ICDs, and ADDs; define a small but standardized set of processes for complex challenges; etc. You get the idea--incremental is good, as you can't simply dictate, from top down, the behavior and practices of your engineers (unless... are you on a management track?). Of course, if you're in a CMMI-4 organization in which each person has their own HG clone of the primary code base, with your own QA and support staff, these are hopefully already in your rear-view mirror.
One additional practice I would recommend is periodic release cycles--that is, release a major version every year, minor every month (or some variation thereof), and include in each release only the features and updates that are ready. If you can get to a point where your schedule and release cycle are no longer at the mercy of your requirements creep problems or an almost-done set of patches, you will find the atmosphere becomes much more productive and stress-free; quality will skyrocket as a result.
Last but not least... I'm sure there will be many fine suggestions from knowledgeable slashdotters on this page. Pick and choose the ones you think will work (feel free to ignore all of mine, except this one) for your team / company / project--and when you sit down to implement them, don't expect to get away with 100% of your changes. Do what you can, what you are allowed, and what keeps you productive. Too many idealistic junior managers or engineers-turned-managers try to bring their 'brilliant ideas' into an environment where they simply don't work, and the lack of pragmatism leads to total failure of all of their efforts, when they could have gotten away with 60% (which is better than 0%). Throw in the vast world of methodologies, process monkeys, and paradigm acolytes, and you'll see why it's worth being cautious and flexible.
As a long-time contributor to Wikitravel, I'm very glad to see Wikivoyage managed by Wikimedia. Internet Brands, the organization that took over Wikitravel some time ago, has been turning their site into a classic example of ham-handed monetization; compare intrusive travel booking banners and horrendously limited search to their respective alternatives. For a while, they were even several versions behind the MediaWiki platform itself. I abandoned contributing to Wikitravel last year, and I'm very happy to have a new place to which I can contribute content. More importantly, I suspect I'm not the only one.
It seems like a lot of contributors here are missing the point. Discussions like this--and particularly the analysis that provokes them, regardless of how perfect it may be--are highly valuable and productive, for the same reason as project post-mortems. Too often, debates about development (whether it's styling, methodologies, or rules of thumb) are done before coding begins, and as a result, several things invariably happen:
- * Contributions are made by people with little or no knowledge of the topic
- * Contributions are made by people with little or no experience
- * Too much time is spent debating theory and not practice
- * Minor and unimportant topics absorb the majority of pre-development time and energy
- * Development planning becomes convoluted, and / or ends up being thrown out as actual development begins and unforeseen issues arise
This, of course, is a reflection of the principle that a good engineer learns from his mistakes, while a great engineer learns from other people's mistakes. Both pictures and hindsight are worth a thousand pre-emptive, ill-informed words. I'm particularly pleased with the focus on coding quality, which (in post-mortems and hindsight analysis like this) takes second place to project and management practices. I will take, and have learned infinitely more from, 10
You missed the part where he said it was a deliberate feature of the Prod build:
A program that doesn’t run uses a lot less CPU.