


Comment Re: Swift (Score 1) 337

No magic bullet :-) Also, I'd almost agree that the ternary is a shorthand for 'if', were it not for the fact that the ternary is an expression, and as such can be part of other expressions, and its constituents can be expressions too, while 'if' is a statement that forces imperative style. Even if it were a shorthand, we could say that C or Swift (I'm so on topic) or whatever is ultimately shorthand for machine code; it doesn't detract from the fact that one form is often preferable over another, while both are Turing-complete.
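A small sketch of that difference (the identifiers are mine, purely for illustration): the ternary yields a value that nests directly inside a larger expression, while the 'if' version forces a pre-declared variable and mutation.

```javascript
// Expression form: the ternary is a value, embedded in a larger expression.
const clampLabel = (x, lo, hi) =>
  'value is ' + (x < lo ? 'low' : x > hi ? 'high' : 'in range');

// Statement form: 'if' forces declare-then-mutate, imperative shape.
function clampLabelImperative(x, lo, hi) {
  let word;                       // must pre-declare, then assign in branches
  if (x < lo) {
    word = 'low';
  } else if (x > hi) {
    word = 'high';
  } else {
    word = 'in range';
  }
  return 'value is ' + word;
}
```

Both compute the same thing, but only the first can be dropped into another expression or passed along without a temporary variable.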

Re the distance metric: sure, in some cases you don't know the domain extent until after the fact, but sometimes you do, or you work with past, known data; you picked a specific case I didn't imply. Even in that case, the 'business requirement' or API spec says nothing about interim values, so I don't violate the contract by opting for an implementation detail that helps me avoid conditionals. The API would be tested against its contract and/or there might be an assertion, or you can turn the API-supplied enumerative [start, end] into a continuous measure purely internally. It's your freedom, and this kind of thinking can eliminate some of the manual code path dispatching (manipulation of the PC register) that the 'if' statement essentially is.

There are so many things you can do even at the basic level, like taking the max of two values rather than if(a > b) ... or filtering a set before doing something on the elements (IOW conditionally including/excluding elements without if or ternary) that are good for a linear flow. Maybe I got into this habit when using vectorized R / Matlab and making SMC particle filters or MCMC for CUDA execution; these give you tremendous power if you're willing to leave relics behind.
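A minimal sketch of those two basic moves (function names are invented for illustration):

```javascript
// Taking the max instead of branching:
const larger = (a, b) => Math.max(a, b);   // rather than if (a > b) ...

// Filtering a set before acting on it, i.e. conditionally including
// elements without 'if' or ternary, keeping the flow linear:
const doublePositives = xs => xs.filter(x => x > 0).map(x => x * 2);
```

The loop-with-'if' equivalent would bury the condition inside a mutating loop body; here the condition is a declarative step in a pipeline.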

But again, you must have enough SW dev experience, know about most alternatives and have formed your opinion. My purpose isn't to change it, though if your DSP software works in the pacemaker or car ABS or whatever I might need down the road, I prefer verifiable, safe code in it, and I think it's clear that conditionals, mutation (which the 'if' statement mandates, as opposed to ternary) and shallow abstractions over them (explicit loops etc.) make code more complex to reason about. I'm sticking to what has worked for me, within the constraints of the problem domain, language and coding conventions of the project.

Comment Re: Swift (Score 1) 337

I don't think it works like that. It's not like I take a problem, implement it in imperative style (what you term 'implementing an equivalent with if statements'), and then, or alternatively, root the 'if's out one by one.

A small pseudocode example:

Instead of

var circleColor;
if (hovered) {
    circleColor = 'blue';
} else {
    circleColor = 'black';
}

or its even more horrible (more surprising to the reader) version,

var circleColor = 'black'; ... // 20 lines down
if (hovered) {
    circleColor = 'blue';
}

I'd just use

style({fill: hovered ? 'blue' : 'black'})

Better yet, I'd not let 'hovered' stay a loose variable; it would be bound to some data, which then allows

function elementColor(d) {
    return d.hovered ? BASE_COLOR : LINK_COLOR;
}

Even better, if sensible, I'd not even do this, but go straight for CSS styling via a pseudoclass.

This is an atomic example, and higher level approaches include things like:
- vectorized code, or use of linear algebra libraries
- using Autolayout / cassowary, CSS constraints or other declarative tool instead of manually fiddling or special casing
- using functional programming idioms, and their asynchronous extensions, rather than imperative and MVC (not because of the 'if's; the 'if' is but a symptom of hard to trace code with lots of branches and a combinatorial explosion of code path permutations, i.e. unsafe and mentally burdensome practice)
- using declarative style and data binding (think Angular, React, d3, FRP) over the MVC imperative race condition fest
- for the Web, using isomorphic, idempotent style with such data binding (otherwise you need to conditionally create or remove elements etc.)
- using a lookup Map or array, or a pure function that contains a single, simple, switch/case
- avoiding magic numbers and values; relying on (I know, problematic) IEEE arithmetic (NaN / Infinity isn't always to be avoided like the plague)
- using mapReduce like constructs (map, filter, reduce, some / every) over loops with 'if's in the inside
- sometimes (when it's short and unambiguous) using logic, e.g. elemsExist = !!whatever.length over if(whatever.length > 0) {elemsExist = true;} else {elemsExist = false}
- it's often okay to multiply with zero (if a zero / one flag is meaningful) rather than if(foo === 0) {doThis} else {doThat} - whether we talk about scalar, vector or matrix
- bit masks aren't scary for me, though I almost never need to even hypothetically consider them
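Two of the bullets above, sketched concretely (the domain, the status names and the 10% discount are all hypothetical):

```javascript
// A lookup Map instead of an if / else-if chain:
const STATUS_COLOR = new Map([
  ['ok',      'green'],
  ['warning', 'amber'],
  ['error',   'red'],
]);
// Explicit default instead of a dangling 'else':
const statusColor = s => STATUS_COLOR.get(s) || 'gray';

// Multiplying by a 0/1 flag instead of if (memberFlag) {...} else {...}:
const price = (base, memberFlag) => base * (1 - 0.1 * memberFlag);
```

Adding a new status is a data change (one Map entry), not a new branch, which keeps the code path count constant as the domain grows.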

Also, I try to be conscious about why we'd need to introduce two edge cases, i.e. a binary condition, in the first place. Sometimes the problem can be recast such that it not only leads to the removal of the 'if', but also opens up possibilities for greater reuse and new usage patterns, or helps clarify requirements. For example, depending on the problem, it might be possible to move from an enumerative ['not started', 'arrived'] representation, which would lead to manual condition coding, to a completion ratio (0 means not started, 1 means arrived, 0.75 means that 75% of the length or time has elapsed). In that case, downstream calculations, or visual styling (translate etc.), can be done without conditional statements.
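A sketch of that recast (the 300px track is an invented example, not from any real project):

```javascript
// Continuous representation: a completion ratio in [0, 1].
// 0 means not started, 1 means arrived, 0.75 means 75% done.
const progress = (elapsed, total) =>
  Math.min(1, Math.max(0, elapsed / total));   // clamped without 'if'

// Downstream use, e.g. positioning a marker along a 300px track,
// needs no conditional statements either:
const markerX = (elapsed, total) => 300 * progress(elapsed, total);
```

The enumerative version would force every consumer to branch on the state; the ratio lets downstream math run unconditionally, and partial progress comes along for free.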

Things like this are often possible if your guiding principle is to shoot for deterministic code, which is most easily achieved with pure functions, data binding, FP in general, avoidance of mutation, and trying to stick to a linear data flow and a single code path. Code should be easy to follow by visually reading a file or function top to bottom; clicking on a symbol should jump to its function definition or lexical binding, and that should be sufficient for understanding what VALUE or function is being referenced. It should also be easy to put in a log statement or breakpoint, because you KNOW you assigned the value when you declared the variable, and there are no other assignments to hunt for inside if() curly blocks.

Comment Re:Yes, more people is better (Score 1) 337

Hmm I don't know, my experience is that mediocre programmers will advance the project in mediocre ways, and at best, two things can be said: 1) they're less well-rounded and more easily replaceable, so you need more of what's a commodity, i.e. there's lower risk of single-person dependencies; 2) even mediocre programmers, if they're dedicated and hard-working, can save the day in the face of (typically not that rare) project mishaps, slippages, resource constraints.

But I've found that good programmers stick out like a sore thumb. Over a skunkworks weekend or maybe a few weeks, they often implement an initial or vastly superior solution to, or refactoring of, something problematic, to a version convincing enough to end the struggle with some former, ill-conceived way of tackling the problem, while typically also reducing the code size to a fraction of what it replaces.

Development leads, project managers, product managers and internal users take note immediately, and want that person to at least technically lead the effort or the next critical one.

My anecdotal evidence includes cases when I was the one to 'disrupt', in a positive way, previous routines, and cases when it was one of my several fantastic colleagues who did that. It also sometimes occurred that some developer (typically a researcher) who maybe wasn't superior with the code itself created something above and beyond the expected. I also worked in project management roles, and if there is a single thing a project manager has an eye for (besides project risks) and wants to cultivate, it's the otherwise rare combination of talent, dedication and time availability.

So if you find yourself to be a good programmer among mediocre ones and yet feel like just one among the many, then maybe it's just a natural bias to think you're better; or if you're indeed better, then maybe your current environment isn't conducive to becoming noticed; e.g. no new projects, or no visible impact of being better than the others, due to boredom, lack of motivation or whatever. In most cases, you're the one who can change: be more visible, find active projects or a new workplace, or reevaluate your judgment about how you compare to your peers.

Comment Re: Swift (Score 1) 337

While every programming construct can be abused, here's the benefit of the ternary over an if statement. The ternary is a value-generating expression. The result can be used in place, passed to a function, or assigned to a lexical variable. It fits well with functional programming, referential transparency and, in general, data flow traceability. I'm assuming that ternary conditional expressions are done for the value, rather than for side effect.

'if' statements differ from ternaries in the following ways:
- they're syntactically more verbose, which may be a desirable feature for some (WARNING! code path forking taking place) or can be seen as pollution
- they encourage, or only allow, imperative style (for example, binding alternative values to the same variable in both branches)
- by opening the door to imperative style, they promote, and are often the root of, all other imperative constructs, like early returns and terminations
- they also promote variable reassignment (traceable code ideally binds a value to a symbol once, and never touches the binding again)
- they orient the code toward statements rather than expressions
- in fact, they strongly invite adding a bunch of statements inside the branches
- run-on conditions are encouraged by convenient 'else if' constructs
- they typically fork the data flow AND the code path (ternaries fork just the data flow)
- they use ugly curly brackets, or, if the brackets are optional and omitted, can lead to bugs when somebody adds another statement to the branch

'if' is similar to ternary in terms of weaknesses:
- both can be deeply nested, and beyond about one layer of nesting, both become unreadable

'if' has overlapping functionality with switch/case constructs, or just Map or array lookup. Generally, the latter options yield clearer, more regular code with fewer edge cases.
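For completeness, here's what the 'pure function containing a single, simple switch/case' alternative can look like (the status codes and delays are illustrative, not from any real API):

```javascript
// Each case yields a value; one entry, one exit, and the default
// makes the remaining edge case explicit rather than implicit.
function retryDelayMs(status) {
  switch (status) {
    case 429: return 1000;  // rate limited: back off a second
    case 503: return 500;   // service busy: brief pause
    default:  return 0;     // everything else: don't retry
  }
}
```

Because every branch just returns a value, the function stays referentially transparent; the same table could equally be expressed as a Map lookup with a default.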

The largest sin of 'if' is that, as seen above, it biases the program toward an imperative style, and any nontrivial program will end up with a practically unlimited number of code path permutations, leading to untested, never encountered edge cases, i.e. a not very deterministic, unsafe style.

So on balance, 'if' is pretty bad, while ternaries should be avoided where possible (obviously not through overly smart, opaque numerical trickery, which just masks the same effect). My preference is not to use 'if', and I have some code made up of files each with up to hundreds of lines that don't contain a single 'if'. It wasn't for purity goals; it just happens as an unplanned-for side effect of trying to write tractable code. It doesn't imply that my code is for some specialized domain unlike most development tasks. Most of my current code is client side, on the complex end of UI (think CAD and interactive data visualization like functions), but when I worked on the server side (think R / numpy / scipy / GPGPU) it was the same story. Regular, linear code flows, parallelizable execution etc.

Obviously everyone needs 'if' in that ultimately, machine code is full of conditional jumps, but even this mindset helps: lots of 'if' usage potentially leads to cache misses and pipeline stalls. If you find you use 'if' a lot, then maybe there's a small abstraction out there which could help you avoid it.

The problems with 'if', to a large extent, are also applicable for 'for' and other loops, and imperative (side effecting) switch/case statements. It's safer, though often slower with large data sizes, to use map, filter, reduce etc. instead of a monster loop; or on the server side, using vectorized and linear algebra operations (R, BLAS, LAPACK, Armadillo, NumPy).
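A sketch of that contrast (hypothetical helper names):

```javascript
// Monster-loop version: mutation plus an 'if' buried in the body.
function sumAboveLoop(xs, threshold) {
  let sum = 0;
  for (let i = 0; i < xs.length; i++) {
    if (xs[i] > threshold) {
      sum += xs[i];
    }
  }
  return sum;
}

// Pipeline version: the condition becomes a filter, the accumulation
// a reduce; no mutation and a single linear flow.
const sumAbove = (xs, threshold) =>
  xs.filter(x => x > threshold).reduce((acc, x) => acc + x, 0);
```

The pipeline allocates an intermediate array, so it can indeed be slower for large data, which is the trade-off mentioned above; the win is that each stage is independently testable and side-effect free.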

The biggest utility of 'if' these days is that it's a code smell; it has some use, but it's just a concealed local GOTO.

Comment Where's the video? (Score 1) 116

The summary starts with:

"At OSCON this year, Red Hat's Tom Callaway gave a talk [...]"

The summary has a link, which points to the article, which says:

"At OSCON this year, Red Hat's Tom Callaway gave a talk [...] My favorite part of this talk was Callaway's passion for the items on the list."

So where is the video? The list felt a bit bland, so I also got the notion that the video would be more informative.

Comment Re:Narrowminded Fools (Score 1) 293

What you write makes sense, and is true. Perhaps this is a reason for the Fermi paradox: for a technological (= potentially detectable) civilization, technological progress brings the advances that can kill the civilization earlier than the safeguards; for example, we already have nuclear bombs, mutually assured destruction, religion, robotic warfare, climate change and ISIL-like uncontrolled minorities, who can nevertheless act and destroy on a global scale.

Then there is nobody left alive to make further advances toward safe practices, peace, self-sustaining populations on nearby planets and green economy.

The destruction tools are sophisticated enough to kill or reboot civilization, but not sophisticated enough to survive, proliferate and advance on their own. And we've used up all the easily accessible wood, coal and oil, so it may be impossible to ignite an industrial revolution the second time around.

Maybe safe practices can be implemented by complete surveillance and mind control (how do you otherwise stop a lunatic, a terrorist or criminal, who is set to destroy the World with the ever more powerful tools available to him?). Which sounds like tyranny, a dream of dictators, so it's a bit of a hard sell. Only a dozen or so memoirs of such suicidal civilizations could change minds (except those of the Moon landing deniers), which might be a practical purpose for SETI.

Comment Re:i haven't bought a car in a while... (Score 1) 252

There is an even more common reason for not wanting to share a car, which is the same reason you don't invite totally random people into your home to use your furniture. Most car owners, I guess, are happier using their car exclusively than they would be if they shared it with random people. I guess vomit, excretions, spilt alcohol, blood on the carpet etc. are pretty bad in the confines of a car, especially if it's hot and the previous user broke the air conditioner too.

Also, maybe the previous driver drove over some debris or into a nail, and the front tire is on the brink of rupturing. Maybe the 19 year old guy didn't plan to then use the car on the highway with two toddlers on board, but you do. He might know about the risk but you don't.

Sure, there is public transportation, taxi, and relatives' cars. But an unsupervised, or weakly supervised, shared thing is very different. Or maybe it's full of cameras and detectors to deter from or at least report use violations. But then maybe you prefer your own car over a slightly cheaper, randomly smelling, randomly beat up car that's recording video of the entire cabin from 8 different angles, all the time.

There is still a market (car rental, taxi) so I'm not disputing that car sharing has a future, but there are lots of wants and needs for private car ownership too.

Comment Re: ... and the hype for Windows 10 begins.... (Score 1) 405

I don't want this to turn into an 'I am right' contest, but you're making strawman arguments. First, it makes sense to take a snapshot and put it in a message (email, IM...) directly. Don't project your usage patterns onto others as exclusive or 'right'. Also, there are four-key shortcuts for screen capture; should my response be that you should CERTAINLY know about them? You emphasized the word GROUP as if it were something novel, but if you take a look, I also referred to the chords in the plural.

I've been using emacs, and have also used Linux, so I'm OK with key chords, mkdir and the like, but having a shortcut isn't an excuse for inconsistent design. I haven't used OS X that long, so while command-shift-N doesn't shock me, I hadn't known about it, so ultimately your message was informative.

An operating system is not just for techies, but also for people who just want to get something done and, in the process, create a folder, and maybe they don't even know what a keyboard shortcut is. I find it puzzling that the 'as List' and 'as Coverflow' views in a folder don't even have a context menu item for making a folder, while the 'as Icons' and 'as Columns' views do; and these four options are interleaved, so the logic of why it works this way eludes me, though I haven't analysed it. There might be some good reason, but as someone who has programmed since the 8-bit era, used old Macs and iOS devices, and bought into the hype about how great Apple design is, I definitely expected OS X to be more intuitive than my experience turned out to be.

Another example: if you minimize a window, then select the application with command-tab, it won't actually switch to the previously minimized window of the application. It takes extra steps to get it back. Someone who was an expert OS X user, and a developer, told me this when I asked how he handles it: 'I never minimize windows'. Interesting. Ah, and don't accidentally touch the mouse while doing the command-tab; it'll hijack the application selection.

Yet another example: you can't maximize a window. Yes, there is what used to be the green button (now just the rightmost of the three identical, unmarked circles), but it doesn't stretch the window edge to edge: it puts the desktop into some other 'presentation' mode, and the previous navigation modes will be all weird, especially with multiple monitors, multiple desk spaces and/or multiple documents within the same 'app'. Command-tabs will make windows zoom around, and it's all pretty haphazard and definitely not intuitive, but let's stick to screen maximization. I can manually adjust the edges to the side of the window. Also, if I previously double-click on the top bar of the window, it'll maximize it at least vertically.

So okay, I manually move the window edges to the sides of the desktop. By grabbing the window edge. This, of course, implies that when I want to use the scroll bar (yes, sometimes useful), I can't just flick the mouse all the way to the right side with a quick move, click and expect that it moves the scrollbar. Because, if I flick it to the right, it'll actually still be the window border. So I have to flick to the right, then MOVE BACK A LITTLE. The Mac is intuitive and efficient like that.

There is Fitts's law, explained here, for example: http://blog.codinghorror.com/f...
The above usability problem implies that the designers of OS X haven't considered it important, and that's OK, but there isn't a real alternative. You either have a dumb full-screen window, even if you have a 32 inch monitor, or you must resort to tweaking and adjusting window borders manually. In Windows, there is snap to the side, snap to top, etc., not to mention the split screen and other attempts.

I took a quick glance, and there seem to be a bunch of workarounds to solve what Apple hasn't solved: https://news.ycombinator.com/i...

Clicking on a promising link (called Optimal Layout; it's expressly for OS X), I landed on a page which wanted to present something to me via Flash (!). I can be wrong on a lot of things, but OS X and its 'ecosystem' isn't exactly the gold standard of UX, either for novices or for expert users.

Don't even ask about how intuitive iTunes or iCloud are...

On the other hand, some Apple software works great. Years ago, around iPhone 4 era, I shot some photos and vids, and managed to create a decent movie with music, intro etc., and with video cuts, photo transitions etc. - I think Bret Victor was responsible for how intuitive and cohesive it felt, even though it was content creation, rather than the simpler task of an 'OS desktop'. Too bad for Apple and their users that they let him go.

Comment Re:Can the brain live without the body? (Score 1) 60

Your point makes sense w.r.t. how the entire body is innervated, especially the sensors, and the guts have massive amounts of neurons. Also, pretty much all organs exist, in part, to sustain the brain.

However, the interface between the brain and the rest of the body feels tangible enough that we can say it's part interface (certainly a complex one), part resource and defense supply.

- Eyes: the retina and the optic nerves are part of the brain, or we could consider the optic nerve analogous to the cranial nerves
- The 12 cranial nerves: smell, head related sensing and motor
- Spinal cord

- Blood: brings oxygen, nutrients and defense against infection etc. to the brain, removes waste, and there are the hormones too (very low bandwidth)
- Cerebrospinal fluid

- Skull (encapsulation, protection, and mounting point for stuff)
- Some layers, dura mater etc.

So it only makes sense that in fiction, the assumption is that neural interfaces are pretty much the main bottleneck; i.e. if an interface for the 1-2cm^2 spinal bundle can be built, realistic simulation of the sensorimotor system (or part of it) can be provided, and the brain can be kept alive through the couple of veins and arteries, then what the GP asks about hypothetically makes sense. It would be hard to say whether the technology for these will arrive earlier or later than the technology necessary for full body interplanetary travel and long-term settlement.
