
Comment Re:Grasshopper/DC-X design issue (Score 1) 71

Even in a hex losing 2 adjacent struts would cause failure, because the rockets only fire downwards.

Yeah, you're right. To allow for loss of adjacent struts, you'd need an octagon so that the CM was still within the bounding area of the surviving struts. I think quadrupling the gear weight would be a problem.

Something to consider might be a vertical column into which a damaged craft could descend and remain (mostly) upright. Five struts is barely enough to keep the CM covered after a single failure; six seems about right.
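
For the curious, here's a minimal sketch of the tip-over arithmetic (a toy model of my own, not anything from SpaceX or the DC-X program): put the struts at the vertices of a regular n-gon with the CM directly over the center. After losing k adjacent struts, the CM stays strictly inside the footprint of the survivors only if the arc the lost struts leave behind -- (k + 1) steps of 360/n degrees -- is under 180 degrees.

    #include <stdio.h>

    /* Toy model: struts at the vertices of a regular n-gon, CM over the
     * center. Losing k adjacent struts opens an arc of (k + 1) * 360/n
     * degrees; the CM is strictly inside the surviving footprint iff
     * that arc is under 180 degrees. */
    static int stands(int n, int k_adjacent_lost)
    {
        double gap = (k_adjacent_lost + 1) * 360.0 / n;
        return gap < 180.0;
    }

    int main(void)
    {
        const int ns[] = { 4, 5, 6, 8 };
        for (int i = 0; i < 4; i++)
            printf("%d struts: lose 1 -> %s; lose 2 adjacent -> %s\n",
                   ns[i], stands(ns[i], 1) ? "stands" : "tips",
                   stands(ns[i], 2) ? "stands" : "tips");
        return 0;
    }

The marginal cases land exactly on the 180-degree line (a square losing one strut, a hex losing two adjacent), which is the CM-on-the-boundary failure described above; an octagon keeps a real margin.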

Comment Grasshopper/DC-X design issue (Score 1) 71

The failure that resulted in the destruction of the DC-X vehicle was a single hydraulic line that had not been connected properly.

Perhaps we could learn from this and use 6 struts instead of 4 in designs like the Grasshopper, so that if we lose a strut, or even up to 2 adjacent or 3 non-adjacent struts, the vehicle can still land safely?

Comment Re:Again, it's not 3D. It's stereovision. (Score 0) 120

I'm not trying to be mean. But you don't own a "3D TV"; you own a 2D, stereoscopic television that marketers *called* 3D, although it clearly isn't anything of the sort.

A 3D display will allow you to see behind the actors. If you move, your point of view will shift. If someone moves closer, it won't be the camera that has to re-focus to follow; it will be your eyes, because the objects in the display will actually be nearer or farther away. You'll be able to look down on a tennis match on your display as if from the sky, from any side of the court, or up at it from (the players') shoe level. Your true 3D display will not take up a flat surface. It will take up a 3D volume, and within that volume, truly volumetric objects will seem to exist.

What you have right now is basically a View-Master that changes images really fast. It's not 3D.

Comment Do they actually have the legal right? (Score 1) 219

Oracle clearly has the legal right to do what they are doing, and there is no morality in business, so that is the only right that matters.

Do they actually have the legal right? I contributed patches to BDB 1.0; I don't remember being asked for an assignment of rights that would let them legally change the license. The SleepyCat license only applied to the newer code added by Margo; if you wanted to use the newer code, you accepted the license on the aggregate work, and if not, you could excise the new work by using an older version.

It's not clear to me from TFA exactly what the license change means, or if this is merely hand-wringing, since so far it has not changed the tarball contents, and therefore the license declaration within the tarball. However, if their intent is to relicense *all* the code, not just the SleepyCat portion of the new code, then that's a problem.

Comment Having this would be extremely useful (Score 2) 128

On the other hand, keeping works that are an author's old shame under wraps might help keep those works from tarnishing the reputation of the same author's newer works. Case in point: Disney's Song of the South.

Having this would be extremely useful to teach courses on cultural history.

https://en.wikipedia.org/wiki/Cultural_history

Similarly, it would be useful to have access to the Popeye cartoons referring to "Eugene the Jeep", as well as the various Bugs Bunny cartoons of the WWII era, which have been censored in the name of political correctness.

In fact, I would say that having this information available is critical to future cultural historians' understanding of our current era, and the whole "Political Correctness" phenomenon.

Comment Re:Ministro (Score 1) 86

And it doesn't exist on mobile phones.

That's a problem with mobile devices, not with Qt. Mobile phones are still pretty weak computing platforms, but give it time, and we'll be doing more and more serious things with them. We'll have higher resolution (and probably projected or holographic) displays, more compute power, more available power, better charging regimes, lots more memory of all kinds (working RAM and longer-term storage), better input mechanisms (speech, for one), and so forth.

Mobile devices started out as pretty weak platforms -- Palm, etc. Today, they're much more powerful, but they're not even *close* to a desktop, and pretending they are does no one any good. More to do; more to come. And that's a good thing, really.

Comment Re:Jump Ship (Score 1) 86

It's not a bad thing, but it's not even accurate. .NET isn't going to produce cross-platform apps. Qt will (with some limitations, but it's like 98% there; I write some *big* apps in Qt, and have been fairly successful at the cross-platform thing, though I do all my development under OSX).

Mr. Troll is, to be blunt, bewildered.

Comment Software engineers are not plumbers (Score 1) 467

No, I'm not. I was part of the 1987 DOL study that resulted in the classification of Software Engineering/Programming as a primarily creative endeavor. Until a skilled practitioner A and a skilled practitioner B tend to come up with the same solution to a given problem more often than not, it is an art as much as it is a discipline requiring a high degree of training and skill.

That may be what the DOL says; it's not what happens in practice. The chance for creative work is rare unless you get to create something new; as every good software developer should know, the vast majority of time is spent maintaining an existing code base & making small tweaks, or dealing with planning & meetings. The time for creativity is very small indeed.

Nevertheless, they are classified "exempt". This means (1) if they agree to a fixed-price contract, it doesn't matter how many hours they have to work, and (2) they are predominantly salaried, so there is no such thing as "overtime" or "hourly pay". You might as well be trying to unionize the management at GM.

[...] Other engineering disciplines also have trade unions. Why should software engineering permit somebody without the proper training to write code professionally? How often are lives made inconvenient (or worse) because of rookie/amateurish design flaws?

I think the part that you are failing to grasp here is that a lot of training can make someone a better plumber, and there is a measurable correlation you can use to justify this position. A lot of training isn't necessarily going to make someone a better software engineer. There's not a specific set of skills you can inculcate, nor can you test that someone has these skills and is able to apply them. Plumbing is easy to test: is the candidate cleaning both sides of a copper fitting and applying flux before using an acetylene torch to solder the pipe to the fitting? So is TIG welding: you can pass/fail a weld with a ball-peen hammer test after it first passes visual. You cannot test whether or not a software engineer will write good software. Tests are not predictive of performance, as they are in the trades.

Leadership is a trainable skill & the military knows this very well. Some people are born naturals, sure, but most everybody else can be trained. Likewise with management.

Leads and management with both the leadership skills & the software engineering background are an all-too-rare combo.

And here you miss the difference between "leader" and "tech lead". A tech lead in software engineering is someone whose direction other people follow because they have demonstrated that they know what they are doing: if you follow them, the problem is going to get solved, and it will happen on time.

Someone with military training is frequently an asset; however, their value is predominantly in project management roles, and, to a lesser extent, in people management roles. My statement of the people management role being "to a lesser extent" is made both thoughtfully and reluctantly. In the best companies I have worked with and for, the best managers were those who had been technical themselves. It's also frequently the case that these managers do not have a lot of top-down authority for anything other than human problems. For example, in Google, there is fierce internal recruiting, and because of this, if you don't like your manager, you can tell them "take a hike" and go work elsewhere within the company. Note that this is a contributing factor in Google tending to cancel products, and in them not finishing things to the point of productization; nevertheless, it's the reality of the situation that managers are reluctant to play "800 pound gorilla" to drive projects to completion, and people tend to work on whatever they find interesting (which isn't productization, among other things).

I've seen developers pigeonhole themselves into obscure work like maintaining Z80 assembler code (this was in the 80s and early 90s). Is it their fault? Sure. But why not give developers a helping hand here anyway?

Even if they'd learn C--which I think all software engineers should know--they would've been ok. But alas...

You're, again, assuming that training is meaningful in this regard. But first, let me answer your first question: "Yes, it is their fault, at least according to most of us in the industry." The training you get at a university will not be meaningful in terms of teaching you a new programming language. Forget for a second that if you can't generalize and pick up a new language to the point of being productive in about a week, then you are incapable of generalizing CS concepts apart from language bindings, and probably deserve pigeonholing. Universities don't teach computer languages, and haven't since the accreditation rules changed in 1987. What they teach are things like "databases using C++", and you are expected to pick up the language on your own. Would some people benefit from taking language-centric courses? Undoubtedly. But unless you are teaching these in the first or second year of a degree program, the people who would benefit are the people who can't generalize, and they aren't the people you want working for/with you.

Interviews are not unidirectional, as they are in most union shops: they are about the technical employee interviewing the company as much as they are about the company interviewing the technical employee. In a good company, the company is asking if you lied on your resume, and if you have a track record, and if you would be a good team fit, and if you and your prospective manager take an instant dislike to one another or not. In a good employee, the employee is asking if it'd be a good work environment, if they would be a good team fit, if they and their prospective manager take an instant dislike to one another or not, and if the work will be meaningful/fulfilling or bullshit like "we want to build another Zynga!".

How so? Are you saying non-technology offices shouldn't get software engineers?

Short answer: "yes". Long answer: if the work environment is terrible for an engineer, and their skills actually exist and are in demand, they can find a better place to work. Typically you'll end up with mediocre engineers in those sorts of positions, and they will produce mediocre code.

I may be a small business; and I know I need a developer to make X. Given that even tech shops don't have a full grasp on what makes a good hire, what chance do I have? Hire via a head-hunting firm? Look for a consulting firm like ThoughtWorks? Ask around in a college/University?

Whereas I don't have this dilemma if I need to make, say, some structural changes to my home.

Your assumption that you "need a developer to make X" is fundamentally broken. You don't need that, and if you are smart, you don't want that. What you have is a problem Y, and you, as a software non-professional, believe that "a developer making X" will "solve problem Y".

If you need structural changes to your house, do you tell the contractor whether they should use lath-and-plaster construction or sheetrock, or do you let them solve the larger problem of "I need this wall moved 3 feet back"?

You don't have the dilemma because you aren't micromanaging the solution. Frankly, if you are unable to design anything but the UI you want to see yourself, you should hire a contractor to do the software work. To put it another way: if you want to hire a software engineer, are you prepared and qualified to be a software engineering manager and software project manager? Software engineers do not operate in a vacuum.

How do you hire a software contractor for a job? Well, how do you hire a building contractor? Do you just look in the phone book for a building contractor, the way you looked in a search engine for ThoughtWorks, or do you actually go by word of mouth and reputation among your colleagues and peers, and look at their portfolio of previous projects, sometimes even visiting a previous client's home to see if they did a good job on the grout for the tile in the bathroom?

Exactly.

There's a world of development outside of Silicon Valley. This isn't unheard of in financial services & investment banks, for example.

I'd be more impressed with this anecdote if there were evidence, and even more impressed with the evidence had the financial services industry not just melted down due to bad judgement. If I can't trust them to show good judgement when looking after their primary business focus, how am I to trust their judgement when it comes to hiring criteria for a field of endeavor where they are ill-suited to either manage or specify projects?

Comment It's the promises you have to look out for (Score 1) 86

I use Qt extensively, and while there are numerous reasons to sing its praises, the project has a severe problem in the form of not fixing bugs (like file open dialogs dumping huge amounts of console errors) or limitations (like a Windows audio sample rate limited to 48 kHz) before proceeding to new versions.

What that causes is applications that are unstable either because an existing bug hasn't been fixed, or because they get moved to an arbitrary new version with many changes -- and very few applications are as extensively tested against the new version as they were against the old during the initial design and implementation phase.

On the one hand, Qt has enabled me to make apps that (mostly) work the same using the same code under Windows and OSX. But it also causes me a lot of angst trying to explain to my users why the OSX version works so much better than the Windows version. It's not that I want it to; the OSX version of Qt is simply better.

Qt isn't the only project or organization guilty of wanting to make new versions before fixing the existing version. The lure of new APIs and such is strong, and the urge to fix bugs apparently not so much, right across the software spectrum. Apple does it: bugs persist for many OS revisions while APIs come (and go... Apple is not shy about telling you to stop using something). Microsoft does it: I remember the glyph rotation bug (CW on some platforms, CCW on others) and the how-many-files-you-can-multi-select bug managing to survive not only OS revisions, but different OS platforms when the MIPS and Alpha versions were being offered. In the interim, Microsoft changed a great deal about window metrics across various OS levels, affecting all manner of interface issues, while OSX inflicts such obnoxious "favors" as not supporting new cameras except with a new OS level (while still happily selling you a version of their image-editing software that will work on your older OS), and breaking such *NIX components as cron. UDP sockets still don't work correctly under OSX (broadcast reception sockets where only one can exist on a machine... yeah, that's a good idea... not).
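
To make the UDP gripe concrete, here's roughly the setup that goes wrong (a sketch using the standard BSD socket options, not a tested OSX reproduction; broadcast_listener is my own toy helper, not any Qt or OSX API): two sockets on one machine that both want to receive broadcasts on the same port.

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    /* Open a UDP socket intended to receive broadcasts on `port`.
     * Without SO_REUSEADDR (and SO_REUSEPORT on BSD-derived stacks),
     * a second bind() to the same port fails outright; even with them,
     * whether more than one socket actually sees each broadcast
     * datagram is stack-dependent. */
    static int broadcast_listener(unsigned short port)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int on = 1;
        struct sockaddr_in addr;

        if (fd < 0)
            return -1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);
    #ifdef SO_REUSEPORT
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof on);
    #endif
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY); /* wildcard catches broadcasts */
        addr.sin_port = htons(port);
        if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

Even when both bind()s succeed with these options set, which socket actually gets each broadcast datagram is up to the stack -- that's the "only one can exist on a machine" behavior I'm complaining about.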

Basically what I'm bitching about and advocating is that if you produce software, you shouldn't somehow magically get to ignore the fact that it doesn't do what you told the end user it would do just because you've released a new version you want them to buy, or even just use. I have NO problem with charging for new features. I don't even have a problem with adding new features to the current version while also fixing bugs, as long as you take care to not break pre-existing functionality. I have a HUGE problem with charging to fix something that was supposed to be working in the first place, or even, as is the case with Qt, not charging, but simply abandoning in place software that is significantly less than it was purported to be. I find adding new features to be insufficient cause to excuse leaving known bugs in place.

And frankly... if we would just seriously commit to fix the stuff we have before we move on, moving on would be a much better experience overall; the codebase would be more stable, the customers happier, and if we could couple that with a sense of responsibility that left existing APIs in place (once you tell someone to use an API, I think it's just awful to tell them they have to stop), I think we'd have a better process, a better end user experience, and a great deal less agonized tech support. When your "it's-our-fault" buglist hits zero, that's when you should start thinking about changes that might involve moving on from the previous version. I see it as an obligation to the end user. Unfortunately, at least thus far, Qt does not.

Comment Re:The 50 employee limit (Score 1) 600

Why doesn't your asshole friend give his employees health coverage in the first place? If he had, he wouldn't have a problem now.

Because he couldn't afford to be in business at all if he were to offer coverage. He runs a cash-flow business, as do all functional small businesses that aren't someone's spouse's hobby funded by the other spouse. He's literally two payrolls from going out of business, due to the current regulatory environment for small businesses being dictated by large businesses to lobbyists, and thence to the congressmen that the big businesses own. Big business does not like competition.

So, in other words your friend runs a barely viable business that can not survive another economic head wind no matter where it comes from. He is probably going out of business soon anyway but he wants to score political points by blaming it on Obamacare.

There is a slight difference between an "economic head wind" and setting up wind machines to funnel an increased amount of money to the health insurance industry middlemen instead of going to single payer. Health insurance should come from the income tax general fund, not a tax on being an employer, or you are going to end up with nothing but large companies and their peasant wage slaves.

Comment A craftsman must know his tools. (Score 2) 138

When are people going to realize "coding" != "computer science"? (or <>, or ! .equals(), or ne, etc. depending on your flavor).

I think it'll happen when you read past the Slashdot title to the summary, and realize that the article is talking about computer science and that the whole "Who Will Teach U.S. Kids To Code?" thing is a fabrication of the submitter/editors.

Nothing against Java devs, but IT needs a little more than programmers in language X. There are millions speaking English, Spanish, etc., but not that many of them churn out bestsellers, or even mundane but usable prose.

This is probably the most elegant argument I have ever seen for teaching algorithms, big O notation, and other theory topics instead of languages, which is a switch most U.S. universities made after the decision was forced by the accreditation change in the late 1980s.

And it's absolutely, totally, completely wrong.

A craftsman must know his tools.

You need to learn at least one language deeply enough that you have a feel for the calculus of the language, and how to force it to represent any CS concept you want it to represent. Ever since schools stopped teaching languages as a subject, because they could no longer offer credit hours for doing so, the U.S. has been mostly turning out crap coders, except at schools like Brown that have self-directed programs where you can actually still learn a language from an expert practitioner.

A secondary consideration is that, when programming in a high-level language, you should know what's happening under the covers. Java and other "pointerless" languages are an incredibly poor choice for that. So I agree with your condemnation of teaching the Java language, but for a reason other than the one you cite. The best way to understand what's going on under the covers is to use a high-level language that compiles to assembly, and stop after the compilation step; "cc -S" does this, although C isn't the only language in this category. Which brings me to the second point: apart from learning a single high-level language to great depth, it's also necessary for the student to learn an instruction set for a particular architecture, which, given prevalence, is going to mean either x86 or ARM these days. If you can't look at the high-level language and know what assembly is likely to be generated from it, you do not understand your high-level language. Without a grounding in assembly language, you are never going to have an intuitive grasp of memory layout or pointers, and if you learn "pointerless" languages, particularly ones with garbage collection, you may be able to program in them, but you aren't going to be able to write a runtime for them yourself, having no intrinsic idea of memory layout, reference, or management.
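
As a concrete example of the stop-at-assembly exercise (hypothetical file names; any C compiler should do): put a trivial function in swap.c, run "cc -S swap.c", and read the generated swap.s against the source.

    /* swap.c -- small enough that the generated assembly stays readable.
     * "cc -S swap.c" stops after code generation and leaves swap.s;
     * compare its loads, stores, and register use against the source. */
    void swap(int *a, int *b)
    {
        int t = *a; /* load through a */
        *a = *b;    /* load through b, store through a */
        *b = t;     /* store through b */
    }

If you can predict, before opening swap.s, roughly which loads and stores have to appear, you understand what the high-level language is doing; if you can't, you have more digging to do.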

Which leads us to our third consideration, and the one that gets in the way of problem solving using computers as a tool, which is really what we are trying to teach when we talk about teaching "computer science" or "coding": you need to repeat the process with a second high-level language that also does not target a virtual machine. Only by being able to contrast the calculi of the two languages are you going to be able to generalize in terms of an arbitrary computer language being applicable to a problem set. There's a reason that mathematics courses teach students both Newton- and Leibniz-style calculus, and it has nothing to do with not being able to solve problems with one that you can solve with the other. It has everything to do with the ability to generalize theory across systems.

The fourth and final consideration is algorithms, big O notation, and the other theory topics, and how to represent them in your high-level language. Yes, you need to learn this, but you need to learn it in the context of a language calculus, since throughout your career, if you end up coding, you will be called upon to translate these concepts into the local calculus of whatever programming language you are using at the time. After that, language doesn't/shouldn't matter, but up to that point, it matters desperately. After that point, you should also feel free to use interpreted languages running in a virtual machine, such as Python or Ruby or Java (yes, I know most JVMs JIT, but unless your Java programmer can explain how, and into what, and why, they are not a computer scientist or software engineer; they are merely a programmer).
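
As a toy illustration of what "translating a concept into the local calculus" means (my sketch, not anything from TFA): the abstract claim "binary search is O(log n)" only becomes usable once it lands in concrete language mechanics -- indices, half-open ranges, overflow behavior.

    #include <stddef.h>

    /* O(log n) binary search over a sorted int array, expressed in C's
     * calculus of indices and half-open ranges. Returns the index of
     * key, or -1 if it is absent. */
    static long find(const int *a, size_t n, int key)
    {
        size_t lo = 0, hi = n; /* invariant: key, if present, is in [lo, hi) */
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2; /* avoids (lo + hi)/2 overflow */
            if (a[mid] < key)
                lo = mid + 1;
            else if (a[mid] > key)
                hi = mid;
            else
                return (long)mid;
        }
        return -1;
    }

The halving of [lo, hi) each pass is the O(log n) part; everything else -- size_t, the overflow-safe midpoint, the half-open bound -- is the local calculus of C.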

So back to your original analogy:

There are millions speaking English, Spanish, etc., but not that many of them churn out bestsellers, or even mundane but usable prose. You're certainly not going to make good or even adequate writers by (only) teaching a language.

Neither will you become a best-selling author if you don't know how to spell, or if you can't use a pen, pencil, or keyboard to put your words down. Output first, then grammar, then high concept.

Comment Re:Where is the service? (Score 1) 133

It would be ride share if his response was "Nah, I was going to Bruno's".

No, it wouldn't -- because YOU are going to Bruno's, and so you wouldn't go with him; you'd get your paid service from someone willing to provide it. There are plenty of taxi situations where the driver will tell you "no, I don't go there."

The distinctive element here isn't what doesn't happen; it's what does happen. As I said, I'm not arguing for regulation (nor am I claiming any one way is better than another... that strikes me as highly situational); but the common element here is that (a) transport costs money, (b) you don't have transport, (c) you pay someone to provide it, and (d) they do so.

Comment Re:Bottom line: how would a union help me? (Score 1) 467

Physical presence?

Yes. Unless your job requires physical presence, there is zero cost for a corporation to move it from a union area to a non-union area, or completely offshore, as long as they can find sufficient talent that their cost per unit of work is cheaper wherever they relocate it to. Without a physical presence requirement, there is zero leverage for a strike.

Creativity? Are you fucking kidding me?

No, I'm not. I was part of the 1987 DOL study that resulted in the classification of Software Engineering/Programming as a primarily creative endeavor. Until a skilled practitioner A and a skilled practitioner B tend to come up with the same solution to a given problem more often than not, it is an art as much as it is a discipline requiring a high degree of training and skill.

Why do screen writers maintain their unions? Screen writing is highly creative & it requires even less physical presence than software engineering work.

Predominantly? To keep the available talent pool limited in order to inflate their price through artificial scarcity. Try joining the WGAW or WGAE with less than 24 units of writing credit, and try getting employed as a screenwriter without being a member of the WGAW or WGAE.

But here are some potential benefits to all parties, since you need to be spoon-fed:
- Professional trade certifications; most of the certification courses of the past 15 years have been completely meaningless or too geared towards specific technologies.

These are useless. They are like an acting degree from a prestigious university: worthless compared to a track record as an actor/actress. You can have the best certifications in the industry, but you aren't going to find work if you can't act -- or if you do find work, you aren't going to keep it very long, since hiring you was a mistake.

This should include technical management & lead skills for those that are thinking of it.

I'm pretty sure the first is spelled "M B A", and the second is something you get on merit, rather than because some idiot certified you as having had training in being a tech lead. Unless you can do the work, again, the certifications are all BS. Technical fields are all meritocracies, and have zero to do with "time in grade" or other things that typically matter for career tracks in unions.

- Carving out a genuine, non-management professional career track. I'm a reasonably good developer; yet I know I'd be a terrible manager & team lead.

This already exists at successful technical companies: Novell, Apple, IBM, and Google all have it. From personal experience, IBM has had it since at least the late 1990s. The companies that don't have it are out of business because no one good wants to work there, or they are stagnant in their growth because no one of a higher skill level wants to work there, and they get people who can "get by".

- Genuine, best-of-breed continuing education to keep good software developers *relevant* while filtering out the fads & buzzwords.

I'm going to call BS on this. "Relevant" is recruiter code for "has a resume containing the buzzwords we are looking for this week". You should also be aware that most software engineering reduces to a language calculus, and there are only a couple of these in common use; once you've learned the underlying principles devoid of a language binding, the language bindings really don't matter to anyone other than recruiters. A good engineer can pick up enough of a new language to be productive, as long as it matches one of the calculi with which they are already familiar, in a week or less. I don't need some stupid MCSE or other BS certification from a certifying authority to make me able to do the job.

OK, let me point out something that tends to bug the hell out of me about this particular "benefit": the typical behaviour of a union would be to limit the number of such certifications issued in a given time period, to keep the number of people certified for such work smaller than the demand.

The only thing I can see resulting from this would be increased outsourcing, since there's no leverage for a strike... your entire team goes on strike? Fine, outsource all of them. It's not like the existing active web pages on your eCommerce site are going to suddenly stop working if the idiots go out on strike, and you can limp along without a revamp to marketing's idea of a "new look and feel" until you can get replacements (outsourced or outside union jurisdiction; either works).

- *Encouraging* the idea to employees that the fads & buzzwords are really the least important qualities, compared to the fundamentals. Employers really are overpaying.

I'm going to assume that the first "employees" was intended to be "employers", since employees already know this. Frankly, an employer who doesn't already know this can put up as many billboards as they want on 101 between San Francisco and San Jose, and while they might have people interview with them, especially if they offer over-market benefits and/or salary, they aren't going to get a lot of people working for them. It's pretty easy to see in an interview when the people interviewing you are clueless, and when they are, that's going to be a terrible place to work long term.

Interviews are not unidirectional, as they are in most union shops: they are about the technical employee interviewing the company as much as they are about the company interviewing the technical employee. In a good company, the company is asking if you lied on your resume, and if you have a track record, and if you would be a good team fit, and if you and your prospective manager take an instant dislike to one another or not. In a good employee, the employee is asking if it'd be a good work environment, if they would be a good team fit, if they and their prospective manager take an instant dislike to one another or not, and if the work will be meaningful/fulfilling or bullshit like "we want to build another Zynga!".

- Enforcing maximum working hours to keep good software developers from burning out. Asking developers to work 60-80 hour weeks consistently is a great sign of burn & churn; such places should be called out for it.

This is typically only applicable to startups, where it translates to sweat equity. After that, it's only typical where the work environment is preferable to home anyway, or where the problem set is so compelling you lose track of time. Other than Facebook, which lights a neon sign when they expect their employees to "burn the midnight oil", there's not much in the way of forced hours. At Apple, you'd occasionally get it to meet a product deadline, and mostly if you were the person who broke whatever it was that was in the way of the deadline. At Facebook, it's peer pressure when the light is on; maybe they should be called on it. At Google, it was almost always compelling work. At IBM, it pretty much didn't happen, except with contractors.

I'm still not seeing a benefit, as a technology worker, to joining a union.

The biggest draw I've seen so far is that you could start a Programmers Union and put "P.U." after your name on your business cards, but in most places you can do what you want, within reason, on your business cards anyway (I listed my job title on my Apple business cards as "Conspiracy Theorist" for a couple of years, for example).
