What would you define as enterprise, out of interest? I think it'd be hard to argue that the financial services and defence firms I've worked for aren't big businesses.
"How are you supposed to know the intricate details of another company's codebase and development process to be able to judge if they are really similar or not? You can only guess and hanging someone's job on a guess is pretty crappy."
I don't think the intricate details matter. I've worked for enough different companies to realise that the idea that some company is a special snowflake is an incredibly rare and unlikely thing. The odds that your company is in such a fundamentally different place that you're seeing drastic differences in delivery time are negligible, especially once you take a number of samples.
But apart from that, many of the big boys actually do blog about their issues and the state of their codebases, so the problems you describe are all incredibly well understood. You wouldn't realistically throw someone onto a project and judge them immediately anyway. There's always going to be a bedding-in period with an employee, and it's within this period that a lead will be taking on the issues within the team and the business, raising them and tackling what they can - few leads get to jump into entirely problem-free companies and attain maximum efficiency straight away. The question is how they perform once those problems have been sorted. Again, you're really conflating business issues with developer competence here; if you don't get business issues sorted then of course developer competence is going to be irrelevant.
"I fix code and I make users happy (because I fix the code and simplify their interactions), but that all costs time and pain and management typically just sees "not much movement"."
I hear you; I'm not pretending all businesses are perfect. But this wasn't about how we survive in terrible businesses (hint: you don't, you leave them), it's about how we deal with problems of developer competence in general, and my point is that we do that by having sufficiently competent and talented leads to weed out the bad and help the good rise up.
"Welcome to the world of a real job working for a real company. Most companies have fucked up processes and policies."
I've worked in companies like those you mention and, as I say, no amount of ability to gauge competence will help them. The fundamental problem you're talking about is not developer competence, it's bad business practices. In that environment someone will always be looking for someone else to blame, and even if there's an objective measure showing you're the greatest developer in the world, there will still be people who ignore it because they don't want the latest failed delivery to be their fault. You can't resolve that as a developer, and being measured fairly won't fix or change it. A good lead might be able to change such a company, but only if there are people in said company willing to fix problems - you're describing companies where that's obviously not the case - as you said, "we hear you, but
I learnt very early on in my career as a developer that you can't sit around in those companies hoping things will magically fix themselves. Those companies aren't looking after you, so you have no obligation to look out for them; don't feel you have to stay out of some loyalty they're not willing to pay back. If you are as talented as you believe you are, then just move until you find a good employer. Then you can worry about measuring competence, because then you'll be somewhere that wants to get that right - and that's the company that needs the good lead.
I think I've always been quite lucky in that respect, whether working at small companies or large, public sector or private, I've always found that development's opinion has at least been deeply respected. We've always been recognised as the money makers so there's always been an inherent fear about interfering with us unnecessarily.
I wonder what the difference is? I've worked as a developer in a few fields - engineering, defence, medical, and finance so I don't think it's a field specific issue. Maybe cultural or location based?
I've always looked on in disappointment when I've seen stories here and elsewhere that there is no shortage of companies that treat developers as disposable assets so the problem you describe is certainly an issue in a number of places.
I'm biased of course, but I've always figured that a company dependent on software treating its developers as disposable assets is like a restaurant treating its chefs, or an army treating its soldiers, in the same way - a restaurant won't get far with no one to cook, and an army won't win any wars without any soldiers. If you're commercially dependent on a group of people, it doesn't seem like a smart move to do anything other than everything you reasonably can to help them do a good job.
I don't disagree, I think this is where the problem with most companies struggling with development is - they just don't have the talent sufficient to judge whether they have the talent.
Time and time again the companies that I've seen excel at development have either found a 1 in 100 developer out of sheer damn luck who just happened to be looking for some aspect of the role in question (i.e. maybe it's a small town and they just wanted to live near their family regardless of career impact), or they've decided to think outside the corporate box of fixed salary bands and have paid for someone at that level even if that meant paying that person more than their boss.
But what is clear in my experience is that when you get past that point of filling those roles based on ability to blag, time spent at company as you point out, and other such nonsense metrics then the rest sorts itself out. The problem is that most companies like you say fail at this very first hurdle, and so we get questions such as in the summary where they're trying to fight the symptoms, not the root cause.
The problems you must be having in this industry to be so angry about this topic might be because you don't take criticism well.
Not sure what gives me that impression but you know, just saying.
No, I think someone's skill is based on how successful they are at doing the job, as demonstrated by their track record. What they tell you is irrelevant; what they've done, and what can be backed up by research, references, and a competent interview process, is really what matters.
"In bumfuck, USA, there is nothing but small developers, even the ones that charge 6 figures."
I've no doubt, but how is having an effective way to measure competence going to help that? If you don't have the local talent available you either bring it in from outside by paying relocation, you relocate your business, or you open a satellite office for development. Just as you can't build a dev team where there's no talent to do so, you can't run a successful fishing fleet in the middle of a dry desert. That is unfortunately the nature of reality.
That's precisely why you need to find a similar case, or ideally, many similar cases.
In terms of distractions, if you're in a lead development role then it's your responsibility to raise, at the highest levels of management, the fact that your team just isn't being given sufficient time to actually write software - and documenting interruptions as evidence isn't difficult. Similarly, taking a poor specification up the chain and explaining why it's poor also isn't difficult - pointing out deficiencies in a specification is fairly easy to do if they're present. These are all traits of a good lead, and any lead not able to do these things is precisely the sort of junior to mid-level developer wedged into a senior role that I talked about - a good lead has to know how to get things done in the business world as much as they know development.
But most of these issues aren't about measuring developer competence, they're about tackling fundamental problems within a business - that's a separate issue. If you can't tackle those then you're fucked as a business regardless of how good or bad your developers are. You could have the most perfect measurement of developer competence in the world, but if the rest of your business is broken then you're fucked anyway. It doesn't matter how good the product is if the product is based on a specification too dysfunctional to sell by a sales team too busy pestering development to make any actual sales.
No, stupid. Pay enough to get sufficiently competent people from the marketplace in the first place - people with a track record of leadership in software development, who command good pay because they do a good job. Many companies fail at this: they pay below competitive levels, fill their senior jobs with junior/mid-level staff, then wonder why their software development department isn't competitive.
Of course throwing money at someone who is incompetent wont magically make them competent, why would anyone ever think that would be the case?
Jokes aside I'd argue there's actually a more fundamental question that stems from the question being asked.
Why aren't your senior/lead developers weeding these people out? That's their job. They'll spot bad developers by working with them, based on a number of metrics - readability of code, number of bugs, performance, amount of useful code produced and so on - not just a single metric.
If your senior/lead developers aren't doing this then you've probably paid too little and ended up putting a low- to mid-level developer in what you've called a senior role.
If it's your senior/lead you're concerned about, then you should just be able to go on track record - do they have a history of delivering high-quality software in a reasonable timeframe? Not sure what a reasonable timeframe is? Look at other software on the market: how long does it take Microsoft to release a new version of Word? Adobe a new version of Acrobat? And so on. There is plenty of evidence out there as to how long it takes a successful company to release a piece of software - find something of a similar scale to what you're doing and see how rapidly they release. If you're paying competitively against those companies and your developer isn't delivering as rapidly, then they're underperforming. If you're not paying as much as they are, then don't expect them to produce as much quality in as little time. And if they're overperforming on quality and delivery speed while you're paying them less than the big boys, then recognise that you've found a fucking star, spend a lot of time thanking them for giving you more for less money, and do everything you can to make them happy and keep them - because if they leave, you might not be so lucky next time.
On a conceptual level, when talking about object-oriented analysis, I think the argument for single inheritance makes sense - I believe it's always the case that a thing is a sub-type of another thing, and that remains true until you boil down to the fundamental building blocks of the universe - the base particles, which would, if you were building the entire universe in code, presumably be our base classes (I'm not a physicist; I'm sure one might correct me).
The difficulty again is in figuring out what that parent type is. As we've seen, mapping casual English to an object-oriented design runs into problems, so we have to be very careful about determining what that parent type actually is, rather than what we initially think it is. Sometimes that parent type can be an entirely abstract concept - I would argue this is what we have in the case of Honda. It's a brand, but a brand in itself is just an abstract concept defined by a series of other items such as a trademark, a name, a logo and so forth. If we really wanted to get theoretical, we might argue that a brand is actually just, say, an idea, and create an "Idea" parent type, which is in turn just an organised set of thoughts, which are in turn just an organised set of electrical impulses in the brain, and so on - there nearly always seems to be something deeper you can make the parent type. Even if we go back to the idea of particles, one might argue that element A is made of particle X and particle Y, so we could just inherit those particles with multiple inheritance. But even here it's not an ideal solution, because there's more to it than that: we probably need a more abstract concept to inherit from instead, such as a ParticleSystem, that defines not just the particles that comprise the element but their relation and the number of each (what if it's made up of two of particle X, for example? We can't inherit it twice).
That's not to say there aren't examples where maybe multiple inheritance does make sense - if we invent a car-plane, is it not reasonable to inherit from Car and Plane? That's probably a reasonable argument conceptually. Practically, though, it may cause issues - the behaviour of wheels on the car and wheels on the plane when taxiing may be completely different and confuse each other, so even here you'd probably be better off just implementing a new CarPlane type that inherits from Vehicle. It of course gets messy if some of your planes are seaplanes and land on floats.
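To make the CarPlane point concrete, here's a minimal Java sketch (all class and method names are hypothetical, invented for illustration). Java forbids multiple class inheritance, which forces exactly the decision described above: CarPlane extends a single Vehicle base and composes the two conflicting wheel behaviours instead of trying to inherit both.

```java
// Hypothetical sketch: CarPlane inherits once, and the two clashing
// wheel behaviours are held as components, so they never collide.
abstract class Vehicle {
    abstract String describe();
}

// Road wheels and taxiing wheels behave differently, so each is its
// own component rather than an inherited method with the same name.
class RoadWheels {
    String roll() { return "rolling on the road"; }
}

class TaxiingWheels {
    String roll() { return "taxiing on the runway"; }
}

class CarPlane extends Vehicle {
    private final RoadWheels roadWheels = new RoadWheels();
    private final TaxiingWheels taxiWheels = new TaxiingWheels();
    private boolean airborneMode = false;

    void switchMode(boolean flying) { airborneMode = flying; }

    @Override
    String describe() {
        // No ambiguity: CarPlane explicitly chooses which behaviour
        // applies, something diamond-style inheritance leaves unclear.
        return airborneMode ? taxiWheels.roll() : roadWheels.roll();
    }
}
```

The same trick scales to the seaplane variant: floats would just be another component the class selects between, rather than yet another parent.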
Of course you're right though, there are other ways to solve these sorts of problems than single inheritance, and I don't want to make it sound like I'm pretending single inheritance is a magic bullet. It gets misused every day, and junior devs still often struggle to properly understand its use and associated keywords, especially when you have to explain the difference between abstract and virtual and so on - they certainly don't grasp that sort of thing conceptually as easily as something like the humble if statement.
The real benefit of inheritance is that it allows encapsulation of reusable code, we're effectively saying that this code in the parent class is reusable, but only by certain other types - it's not relevant to everything, just these few things. This means we only have to write that code once, but it also means that it's not floating around for people who just don't care about those methods in that part of the application to see. It's of course not just methods we capture in this way, but properties/fields also.
I have tried a no-class-inheritance route, using only interfaces, as some profess is the ultimate clean design, and I inevitably find it just ends up resulting in more code duplication. In part this is probably more a limitation of various languages (C#, Java etc.) than an inherently intractable problem. So whilst I'm not against the concept in principle, I just don't find it practical in practice. That isn't to say it's not prudent to at least minimise inheritance by following that route, only introducing base classes where it becomes necessary to remove duplication - that will typically result in a far cleaner design than jumping in with complex class hierarchies from the get-go.
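Here's a small Java sketch of the duplication argument (the repository names and the validation rule are hypothetical, chosen only for illustration). With interfaces alone, every implementation would have to repeat the validation; a single abstract base class lets that shared code live in one place, visible only to the types that need it.

```java
// Hypothetical sketch: shared validation captured once in a base class.
interface Repository {
    void save(String record);
}

// The abstract base holds the reusable code; only subclasses see it.
abstract class BaseRepository implements Repository {
    protected void validate(String record) {
        if (record == null || record.isEmpty()) {
            throw new IllegalArgumentException("empty record");
        }
    }
}

class FileRepository extends BaseRepository {
    @Override
    public void save(String record) {
        validate(record);          // inherited, not duplicated
        // ... write to disk (omitted)
    }
}

class MemoryRepository extends BaseRepository {
    private final java.util.List<String> store = new java.util.ArrayList<>();

    @Override
    public void save(String record) {
        validate(record);          // same single copy of the check
        store.add(record);
    }
}
```

With an interface-only design, each implementation would carry its own copy of that null/empty check (or delegate to some free-floating helper visible to the whole codebase), which is exactly the trade-off described above.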
There's obviously quite a few ways of resolving that such as simply allowing a button to have a shape (i.e. a rectangle) which in turn allows it to be swapped out for a circle if you want round buttons for example. The danger with inheritance is that you're effectively fixing that behaviour.
Clickable I'd argue is more suited to an interface, interfaces are quite good at representing behaviour.
But modern GUIs have obviously become a lot more complex than all that, in a good way - it means a steeper learning curve, but much more maintainable and testable code. We typically try to separate these things to a much greater degree now, as we really don't want visual design to impact business logic. If you look at something like WPF you'll see that you basically have a button that inherits from a control, which in turn inherits from a UI element (because not all UI elements need to be controls). The base UI element class has members that define how the element should be rendered - because in WPF you're not limited to rectangles; you may want a star-shaped button or god knows what else if you're creating some weird UI for a game or something. It also has properties for things like size, width, font, colour, fade-in/fade-out effects and so on. Effectively the UI element class allows anything inheriting from it to look like anything, so an inheriting button class might implement some default appearance, such as a rectangle, which can in turn be further overridden by a user. Ultimately, how the button looks slots in through a number of has-a relationships - has a colour, has a font, has a shape, has a size and so on.
Things like clickable are simply exposed via events - a Control (which inherits from the above-mentioned UIElement for display) will expose a number of common events such as OnClick. In C#, events are basically just a form of callback, so effectively once you create a button you just assign callback handlers to those events, which are again present in has-a relationships: a button has an on-click event, a button has a resize event, and so on.
What this means is that a button is really just an abstract concept for the most part - how it looks is really about how you set its look-and-feel properties, and how it behaves is defined by whatever you hook up to none, some, or all of its many events.
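The has-a event model above can be sketched in a few lines of Java (the Button class here is hypothetical, not any real toolkit's API). The button doesn't inherit its click behaviour; it holds a list of callbacks and fires them, roughly as C# events wrap multicast delegates.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: behaviour as a has-a relationship with callbacks.
class Button {
    private final List<Runnable> clickHandlers = new ArrayList<>();

    // Analogous to subscribing to OnClick in C# (button.Click += handler).
    void addClickHandler(Runnable handler) {
        clickHandlers.add(handler);
    }

    // Would be invoked by the GUI framework when the user clicks;
    // every subscribed handler runs, just like a multicast delegate.
    void click() {
        for (Runnable h : clickHandlers) h.run();
    }
}
```

Usage is then just wiring: `button.addClickHandler(() -> save())` - the button class itself never needs to know or care what clicking it actually does.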
Now I'm not arguing that WPF is some example of perfect OO design, not by any measure; I'm citing it as an example of one way to design a GUI library without needing multiple inheritance. I believe WPF's composition is actually quite messy - personally I think a UIElement should have a "visual style", and that visual style should in turn encapsulate all the visual elements. If you look at the interface, it's massive and messy, though to be fair a lot of this is because Windows is still dragging a fuck ton of legacy along behind it.
For simpler designs you may just decide that all UI elements are in fact controls and merge UI element and control into one base class (which is basically what you have in WinForms). Essentially the goal of WPF is to separate concerns - you don't have to worry about someone having checked the button out to write code defining how it looks whilst someone else wants to check it out to define what it does, because the button is just abstract: someone can define how it looks in one file, someone else can define what it does in another, and one of them just has to wire it up without being concerned about the implementation details. It kind of works. WPF has an XML GUI definition called XAML, not too different from the way websites have their visual elements defined in HTML, with the idea being that the whole interface can be defined by someone who doesn't even know code, whilst developers write the business logic behind it oblivious to what any of it is going to look like at the end. It certainly works well, but again, the learning curve is much steeper as a result of this added complexity when compared to, say, a more classical GUI library as I suspect you're alluding to.
Hopefully this helps highlight that most of these problems can in fact be resolved by composition rather than inheritance. "A button is a rectangle and clickable" is how many people would phrase it in day-to-day conversation, but it doesn't help us find a good object-oriented hierarchy. Saying "a button has a rectangular shape and triggers an event when clicked" is an equally valid description, and one that decomposes into an object-oriented design without inheritance much more easily. This is really one of the key problems people face with object-oriented design: both sentences mean the same thing to most English speakers in practice, but how they're written and analysed can cause much more work for an analyst trying to break a customer's description down into a design. The process is therefore as much about working out what people really mean as it is about accepting the shorthand they've written. Technically a button isn't a rectangle, because a rectangle is a shape; a button has a rectangular shape. It is clickable, so it could reasonably inherit from Clickable while having a shape - but clickable is probably just a property of a control, so giving Control a clickable property and a shape, and inheriting Button from Control, typically makes decent sense from a design point of view.
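The "has a rectangular shape and triggers an event when clicked" decomposition can be sketched directly in Java (all type names here are hypothetical). Shape is a has-a relationship, so a round button is just a property change, while clickable is behaviour and so sits naturally in an interface.

```java
// Hypothetical sketch: composition for appearance, an interface for behaviour.
interface Shape {
    double area();
}

class Rectangle implements Shape {
    private final double w, h;
    Rectangle(double w, double h) { this.w = w; this.h = h; }
    public double area() { return w * h; }
}

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

// Behaviour expressed as an interface, as argued above.
interface Clickable {
    void onClick();
}

class Button implements Clickable {
    private Shape shape;                  // has-a: swap rectangle for circle
    Button(Shape shape) { this.shape = shape; }

    void setShape(Shape shape) { this.shape = shape; }
    double footprint() { return shape.area(); }

    public void onClick() { /* handler wiring omitted */ }
}
```

Had Button inherited from Rectangle instead, making round buttons would mean reworking the hierarchy; here it's a one-line property change.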
I guess the only question one can ask themselves when considering this sort of problem is "Is it really?" - "Is a button really a rectangle or is it a UI element?", "Is a car really a Honda or is it a vehicle?", you can only try and dig down to what something really is, rather than what short-hand descriptions we use for it.
But Honda is a brand, so by inheriting it you're saying it's a sub-brand of Honda, but a car isn't a brand, it's a car.
You could say that Pepsi Max is a sub-brand of Pepsi, but not that a can of Pepsi Max is a sub-brand of Pepsi, it's not, it's a can, branded with the brand Pepsi Max.
Time and time again, most people who argue that we need multiple inheritance from classes demonstrate that the real problem is that they're not that great at breaking things down into classes in a logically sound manner - and you're falling into that trap here. I'm not trying to insult you with that; frankly there isn't an awfully high percentage of developers who are actually good at it. I've read your other comments here with interest, and you're genuinely knowledgeable on a whole lot of other stuff. No developer should feel defensive or embarrassed for not fully understanding absolutely everything, because as soon as you go down that path you become immune to learning anything, and that way lies redundancy in this fast-moving market.
You have to be quite pedantic about the English language to get it right. Really, when people say "My car is a type of Honda", what they're saying is merely shorthand for "My car is a type of vehicle made by the company with the brand Honda" - because technically no, your car isn't a type of Honda; Honda is a brand. Your car has a brand, but it isn't a brand; it's a car. This pedantry is pointless and irrelevant in everyday life - everyone looks at you like you're a dick when you make such pedantic statements about common usage of English - but it does help when you're trying to break some real-world set of entities down into a high-quality object-oriented design.
If you get good at that, it rapidly becomes clear that multiple class inheritance is entirely unnecessary and that multiple interface inheritance is sufficient. I do personally believe that single class inheritance is helpful though, as it can drastically reduce duplicate code - whilst I accept that the interface-only approach has its benefits in terms of implementing a clean design on paper, it doesn't in terms of code maintainability IMO.
Your car has a make, Honda, and a registration, that in turn has a registration state of New Jersey. Why do you need to inherit these things when you can just reference them?
You're effectively saying that your car is a type of a brand (No. It has a brand, but it isn't a type of brand.) and that it's also a type of New Jersey (What?).
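The "reference, don't inherit" point reads naturally as code. In this hypothetical Java sketch, Honda and New Jersey are values the car points at, not types the car derives from - exactly the has-a relationships described above.

```java
// Hypothetical sketch: make and registration as references, not parents.
class Make {
    final String name;
    Make(String name) { this.name = name; }
}

class Registration {
    final String plate;
    final String state;        // e.g. "New Jersey" - data, not a base class
    Registration(String plate, String state) {
        this.plate = plate;
        this.state = state;
    }
}

class Car {
    final Make make;                  // has a brand, is not a brand
    final Registration registration;  // has a registration, is not a state
    Car(Make make, Registration registration) {
        this.make = make;
        this.registration = registration;
    }
}
```

Inheriting from Make or from some NewJersey type would claim the car *is* one of those things; referencing them says only that the car carries them, which matches reality.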
Is the US tax system really so archaic that they can't just produce stats on this sort of thing automatically?
In the UK the majority of employees are paid through a system called PAYE, where tax information is automatically passed to the tax man. There are of course people outside this system, but not enough to meaningfully alter the stats.
This means it's trivial for our statistics body, the ONS, to produce stats on things like average salaries by field, which are then used to inform government policy on the rare occasions we have ministers willing to engage in evidence-based policy rather than doing what they want regardless of what the data says.
If we've learnt anything from politics in 2016 it's that partial truths can be as damaging (or effective, depending on your point of view) as outright lies.
So whilst I agree we shouldn't force companies to publish things they don't want to, there should at least be some guarantee that they don't mislead.
As such, allowing reviews but hiding the negative ones is grossly misleading, as it gives people a false impression of your product. It implies the product is well received by customers, based on customer feedback, when that simply isn't true.
So I think companies should be forced to publish negative reviews if they're going to have the option of letting users create reviews in the first place. If they can't take negative reviews then they shouldn't have reviews at all. If they want to give that impression then do what movies do with the whole quote thing where it's clear that it's marketing, not a balanced view of actual customer opinion.
"Dump the condiments. If we are to be eaten, we don't need to taste good." -- "Visionaries" cartoon