Sadly(?) English doesn't keep the original pronunciation, though UK English is closer than US English. I mentioned the reason in another post: it's that damned Great Vowel Shift that makes English stand out among European languages.
Well, that's maybe relevant for those coming from another European language or reading older English texts, but to users only interested in contemporary English it's more of a historical curiosity. Their challenge is that the rules aren't consistent, which is often traceable to the language's historic roots. For example, take the word steak: it's a loanword from Old Norse steik, which is why the "ea" in steak sounds different from that in peak, leak, beak, weak or freak. Of course every language has a few foreign words that don't follow the normal rules, but English has it dialed up to 11.
And the next thing he knew, he woke up in an alley. His wallet, keys, phone and shoes were missing. For the life of him, he could not figure out why they didn't take his cool new toy.
It's a photo/video camera that might have been on; not even stupid crooks would leave that potential evidence behind.
I do not believe English has had the same done to it. Otherwise you would not end up with something like:
English keeps the pronunciation of the language each word was taken from, which means it's a smattering of Brittonic (~Welsh, pre-450), Anglo-Saxon ("English", 450-1066), Norman (~French, from 1066) and Gaelic (~Scottish, ~Irish), with some Norse from Scandinavia, and through the British Empire it has picked up words from most of the world's languages by now. While "English" has pronunciation rules, unless you're a professor of etymology (the history of words) it's easier to just learn each word than to try to find a pattern.
Or in banking terminology, GNOME is too big to fail. Sorry, but ever since Qt went LGPL in 2009 I've wished they'd go away so you could actually build a modular desktop. As long as there are two competing languages it's almost impossible to build common components without resorting to awkward workarounds like D-Bus. Not even the kernel would work well with kernel modules written in C++, Java and Python; there's nothing wrong with them as languages, but they don't work as modules to a C program. Otherwise I expect the in-fighting will continue until Google pulls an Android and leaves GNOME, KDE, XFCE etc. to be a Nokia N900-style niche in the desktop market. Not because it's technically the best solution, but because Google has a certain Steve Jobs effect too: if they tell everyone desktop Android is the next big thing, then devices, developers/applications and users will follow.
Well, first of all, since OpenSSL is an open source project, I doubt staying anonymous was an option: you can go back and check the git logs and mailing lists.
Dr. Seggelmann said the error he introduced was "quite trivial", but acknowledged that its impact was "severe". (...) After he submitted the code, a reviewer "apparently also didn't notice the missing validation," Dr. Seggelmann said.
So the takeaway here is that OpenSSL has a review process that lets "quite trivial" bugs in the input validation of a high-security product through. That's comforting.
Seggelmann said it might be "tempting" to assume the bug was inserted deliberately by a spy agency or hacker. "But in this case, it was a simple programming error in a new feature, which unfortunately occurred in a security relevant area," he said, according to the newspaper report. "It was not intended at all, especially since I have previously fixed OpenSSL bugs myself and was trying to contribute to the project."
If you were a spy agency trying to get a vulnerability into OpenSSL, do you think it'd be in your first patch? Fix some insignificant bugs, get trusted, then introduce seemingly innocent but deeply flawed code and trust that it gets rubber-stamped through. He is the first of three authors on the Heartbeat extension, which for some reason includes an arbitrary-size, arbitrary-content data block where a simple PING/PONG would confirm the connection is still alive. I'm not saying he is a plant, but I am saying that everything he says is exactly what a plant would say to excuse his backdoor as an honest mistake. I mean, could you do it any better if you tried? Create a side channel by passing large chunks of data back and forth between the client and server, then create a flaw that passes the state buffer instead. It smells to high heaven.
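To show why the missing validation was so consequential, here is a minimal C++ sketch of the class of bug, not OpenSSL's actual code: the `Heartbeat` struct and function names are made up for illustration. The flaw is that the responder trusts the sender's claimed payload length instead of checking it against the bytes actually received.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Simplified heartbeat record: the sender claims a payload length,
// and the responder echoes the payload back.
struct Heartbeat {
    uint16_t claimed_len;    // attacker-controlled length field
    const uint8_t* payload;  // bytes actually received
    size_t actual_len;       // how many bytes really arrived
};

// Buggy pattern: echoes claimed_len bytes without any check, so an
// overstated length reads past the real payload into adjacent memory.
std::vector<uint8_t> respond_buggy(const Heartbeat& hb) {
    std::vector<uint8_t> reply(hb.claimed_len);
    std::memcpy(reply.data(), hb.payload, hb.claimed_len);  // no check!
    return reply;
}

// Fixed pattern: the one-line validation the review missed.
// Returns false (discard the record) if the claim exceeds reality.
bool respond_fixed(const Heartbeat& hb, std::vector<uint8_t>& reply) {
    if (hb.claimed_len > hb.actual_len) return false;
    reply.assign(hb.payload, hb.payload + hb.claimed_len);
    return true;
}
```

Note how small the fix is: a single length comparison. That is exactly why "a reviewer apparently also didn't notice the missing validation" is both plausible and damning.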
Objects are generally passed by reference, so it should be MORE efficient than passing around 10 values. The problem arises if you are setting the object's values as you pass it around, which can lead to unexpected or hard-to-determine states.
If you have a natural owner that's just providing access to it, I'd agree: references (or const references) are great. But in this case I'd disagree. If it's, for example, an application form, the form itself is ephemeral but the information in it is not. If you submit it, I want the form to pass the information by value and then self-destruct, cleaning up after itself. Once it reaches some kind of data owner, that owner can pass the application by reference through the processing steps. For the same reason, references are not so good for display. Say you have a function to display an invoice: if some other process on the back-end deletes the invoice, you suddenly have a reference to nowhere, and it could crash as you try to fetch more details or view the next page. In short, don't pass a reference unless you know the source will outlive the reference.
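The lifetime point can be made concrete with a small C++ sketch; the `Invoice` type and function names are hypothetical, chosen only to mirror the example above. A by-value snapshot survives the owner deleting the record, while a pointer into it would dangle.

```cpp
#include <memory>
#include <string>

// Hypothetical invoice record, owned by some back-end store.
struct Invoice {
    std::string customer;
    double total;
};

// A view that copies the data it needs: pass-by-value semantics.
struct InvoiceView {
    std::string customer;
    double total;
    explicit InvoiceView(const Invoice& inv)
        : customer(inv.customer), total(inv.total) {}
};

// Demonstrates the lifetime difference: the by-value view stays valid
// after the invoice is deleted; the raw pointer does not.
InvoiceView snapshotThenDelete() {
    auto owner = std::make_unique<Invoice>(Invoice{"ACME", 99.5});
    InvoiceView view(*owner);          // copy the data out
    const Invoice* ref = owner.get();  // reference into owned storage
    owner.reset();                     // back-end deletes the invoice
    (void)ref;  // dereferencing ref here would be undefined behaviour
    return view;                       // still perfectly valid
}
```

The rule of thumb in the text falls out directly: `ref` is only safe while `owner` lives, whereas `view` carries its own copy and can outlive everything.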
There is also the argument that programming teaches logical thinking, much like learning Latin used to, but when I read Slashdot I'm not always sure that is the case.
Logical in some kind of binary-compulsive way. If you have some kind of fuzzy problem, like raising a child, where the answer is somewhere between "let them do everything" and "don't let them do anything", it makes geek heads hurt. Half our job is taking fuzzy requirements and turning them into rigorously defined, deterministic rules that define behavior down to the last bit; it's our job to take a round peg and squeeze it until it fits a square hole. You also see it in geeks trying to reduce everything to some oversimplified set of axioms, like free speech. Maybe we don't think threats, companies lying in commercials, or kiddie porn are okay, but some will take it all the way to a bizarro world where Hitler didn't kill any Jews unless he personally choked one to death; he was just exercising his free speech.
At least most geeks will agree there's a "street smart" too, maybe a little derisively, but it's also a recognition that not everything is in a book: being able to deal practically with situations as they happen, and to interact well with other people and your surroundings, is important for functioning in real life. Or maybe that's really two things: one is the practical side, like knowing how to survive in the wilderness versus having read a book on how to survive in the wilderness; the other is dealing with people and animals, which have emotions. Your computer is your obedient slave: you tell it what to do and it executes it. It doesn't need a "please". It doesn't need motivation. It doesn't need buy-in or an explanation for what it's doing. If you think "HR" degrades people, you should hear the wetware's opinion of IT...
Depends on the type of coder. I've met too many old coders who keep memory use low and performance high, but whose code complexity is terrible because it's all one giant spaghetti ball.
For example, at work I've created a system with a single master procedure( productionId, datasetId, stepId ), where NULL in the last two means all sets or all steps. I know some of the steps would be more efficient if merged, and I know some contain one-time setup (which is hard to extract out) that's repeated many times when I run them on all datasets, but for development it's bliss. I can rerun a single step for a single set, a single step for all sets, or all steps for a single set; I can easily time them (start and finish, per step, per set) and see what's making it choke; and if there's an error, it's in a narrowly defined piece of code, not in the many-thousands-of-lines script it's replacing. A coworker of mine who is starting to set up another production type on it loved the structure because it was so easy to grasp, even though he's only looked at a few steps.
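The NULL-as-wildcard dispatch described above can be sketched in C++ with `std::optional` standing in for NULL; the function and parameter names here are made up, not the actual system. The sketch returns the (dataset, step) pairs that would run, which makes the four combinations (one/all sets x one/all steps) easy to see.

```cpp
#include <optional>
#include <utility>
#include <vector>

// Returns the (datasetId, stepId) pairs the master procedure would
// execute. std::nullopt plays the role of NULL: "all sets"/"all steps".
std::vector<std::pair<int, int>>
plan(std::optional<int> datasetId, std::optional<int> stepId,
     const std::vector<int>& allDatasets,
     const std::vector<int>& allSteps) {
    std::vector<std::pair<int, int>> out;
    for (int d : allDatasets) {
        if (datasetId && *datasetId != d) continue;  // restrict to one set
        for (int s : allSteps) {
            if (stepId && *stepId != s) continue;    // restrict to one step
            out.emplace_back(d, s);
        }
    }
    return out;
}
```

With two datasets and three steps, `plan(std::nullopt, std::nullopt, ...)` yields all six pairs, `plan(2, std::nullopt, ...)` yields the three steps for set 2, and `plan(2, 1, ...)` reruns exactly one step for one set, which is the development convenience the text describes.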
Another feature I like is passing objects instead of individual values through layers. For example, say you have a form with a string and a radio button, and it needs another UI element added, let's say a checkbox. If you pass the values as ( string, radioButton ), you have to change signatures everywhere. If you have an object FormValues, you just add the checkbox and pick up its value where it's needed. Is that efficient? Probably not; I guess I'm often passing ten values around when I only need two. But it saves a lot of pointless coding time when I find out that, oh, I have to increase that from two to three. Defensive coding that makes it easy to expand or change functionality beats hardcoding every time.
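A minimal C++ sketch of the pattern, with hypothetical field and function names: because the intermediate layers forward one `FormValues` object rather than individual parameters, adding the checkbox later only touches the struct and the code that actually reads the new field.

```cpp
#include <string>

// All form state bundled into one object. Adding 'subscribed' later
// changes this struct and its consumers, but no pass-through signatures.
struct FormValues {
    std::string name;
    int radioChoice = 0;
    bool subscribed = false;  // the checkbox added later
};

// Intermediate layers just forward the object; their signatures never
// change no matter how many fields FormValues grows.
bool validate(const FormValues& v) { return !v.name.empty(); }
bool submit(const FormValues& v) { return validate(v); }
```

Compare this with `submit(std::string name, int radioChoice)`: adding the checkbox there would mean editing every caller in the chain.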
I started out with a C64, which had 64 kB of RAM, and I'm not going to do that if we're talking about a million or a billion objects. But there are still people stuck in a mode where every byte matters, and it just doesn't. Make code that's easy to work with (verbose where it aids clarity, with descriptive names, but compact through standard functions and generic code where possible), and about 95% of the time that will be worth more than trying to make it machine-efficient. A lot of "hardcore" developers dismiss abstractions as simplification for simpletons; real developers code right on the metal, maybe not in assembler anymore, but they kind of want to. It takes a real change of mindset to write code for coders, not code for the machine. Of course it must run in acceptable time with acceptable resource use, but that's often a low bar these days.
For example, each member of the House of Representatives is responsible for approximately 500,000 people. Assume they spend on average two hours a day talking to their constituents, with the rest spent in committees or on holidays (since we're talking about an average). That's 2,628,000 seconds per year, or around 5 seconds per constituent per year (10 seconds per term). If you want a five-minute conversation with a representative, you must find 60 people all willing to give you their time allocation. Or 300 all willing to give you 20% of theirs. If you want an hour-long meeting, that's 720 people who must give up their entire allowance, or 3,600 who must give up 20% (or any breakdown in between).
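The back-of-the-envelope arithmetic above checks out; here is the calculation written out, using the round figure of 5 seconds per constituent that the text uses:

```cpp
// Time budget per House member: 2 h/day of constituent time, every day.
constexpr long kSecondsPerYear = 2L * 3600 * 365;  // 2,628,000 s
constexpr long kConstituents = 500000;             // people per member

// Roughly 5.26 seconds of attention per constituent per year.
constexpr double kPerConstituentPerYear =
    static_cast<double>(kSecondsPerYear) / kConstituents;

// Donors needed for a five-minute (300 s) chat at ~5 s each.
constexpr int kDonorsForFiveMinutes = 300 / 5;  // 60 people
```

The hour-long meeting follows the same way: 3600 s at 5 s each is 720 full allocations, or 3,600 people at 20% each.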
It always amuses me when GPL'd software contains a clickthrough insisting that you press an "Agree" button, when the licence specifically says that no such agreement is necessary.
In fact, by placing the requirement that someone agrees to the license before using a derived work of the GPL'd software, they are violating the GPL...
We are experiencing system trouble -- do not adjust your terminal.