
Comment Re:Digital Rights? (Score 1) 63

Then you're only looking at mainstream, mass market, fixed content. A great deal of content created commercially isn't actually in that category.

Also, it makes a big difference what the "digital format" is. Sure, if you're providing fixed content that someone can play at home, then if nothing else you're vulnerable to the analog hole if you're willing to accept the drop in quality, and for the next Avengers movie or Taylor Swift album or whatever, someone among the millions of interested people is going to bother doing that. But there are online DRM schemes that are pretty effective at preventing casual copying at full quality these days, which is probably one of the reasons content creators are so keen to move in that direction for distribution.

Comment Re:Digital Rights? (Score 2, Insightful) 63

Most DRM isn't expected to prevent 100% of copies indefinitely. Usually it's intended to deter and/or delay casual copying, and in that, it is often quite successful these days. This is something that almost invariably gets overlooked in the "DRM never works" posts that will no doubt be filling this Slashdot discussion within minutes.

Comment Re:Digital Rights? (Score 3, Insightful) 63

Smart people don't care what it stands for. This issue is always going to be about balancing the rights of content providers and the rights of content consumers, or about balancing the restrictions on the same parties, depending on how you choose to look at it. What matters is finding a reasonable balance, whatever you call the technology or laws used to promote it.

Comment Re:DRM (Score 4, Insightful) 63

As with almost all technology, it depends on context.

DRM can be abused to lock up content far in excess of normal copyright protections.

DRM also makes new and useful business models practical, giving us modern replacements for old-school rental stores from the likes of Netflix and Spotify, which obviously work out well for a lot of people.

Comment Re:Use a liberal definition of planet (Score 2) 142

I actually really like this idea:
Define a Star as a body that has achieved a nuclear fusion reaction.
Define a Planet as a body that has enough mass to be spherical that orbits a star.
Define a Planetoid as a body that has enough mass to be spherical that does not orbit a star.
Define a Moon as a body that has enough mass to be spherical that orbits a planet.
Define an Asteroid as a body that does not have enough mass to be spherical that orbits a star.
Define a Natural Satellite (here's to you, potato shaped Phobos) as a body that does not have enough mass to be spherical that orbits a planet. Maybe call it a Moonoid?

Define Pluto and Charon as a binary planet, since they appear to orbit each other (and binary stars are already defined).
If this means Sedna and a few other bodies become planets -- fine. But at least the definitions are easy.

Comment Re:Lots of links to articles, phfft (Score 1) 231

I would not know why both can not be expressed with a set of small functions.

Well, what would those small functions be? Suppose we've got a data processing pipeline that essentially looks something like this:

valueA, valueB = step1(inputA, inputB, inputC)
valueC = step2(valueA, inputD)
valueD, valueE = step3(valueB, valueC)
...
resultA, resultB = step20(valueX, valueY, valueZ)

It probably makes sense to break out each individual step into its own function, but if your algorithm has 20 steps, each of which might need some input data from various earlier steps, what advantage is there in breaking this function down any more? It's just fundamentally a long sequence of operations.
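To make that concrete, here is a minimal, hypothetical sketch in Python; all the step names and the arithmetic inside them are invented purely for illustration. Even with each step extracted into its own small function, the driver remains a long, linear sequence of calls because it has to thread intermediate values between steps:

```python
# Hypothetical pipeline: each step is already its own small function,
# but the driver still has to pass intermediate values between steps,
# so the driver itself stays a long, linear sequence of calls.

def step1(a, b, c):
    # Combine the raw inputs into two intermediate values.
    return a + b, b * c

def step2(value_a, d):
    # Derive a third value from an earlier result plus fresh input.
    return value_a - d

def step3(value_b, value_c):
    # A later step that needs results from two different earlier steps.
    return value_b + value_c, value_b * value_c

def pipeline(a, b, c, d):
    value_a, value_b = step1(a, b, c)
    value_c = step2(value_a, d)
    value_d, value_e = step3(value_b, value_c)
    # ...steps 4 through 19 would continue threading values here...
    return value_d, value_e

print(pipeline(1, 2, 3, 4))
```

Splitting `pipeline` itself into smaller pieces would just mean passing the same bundle of intermediate values around between arbitrary fragments of the sequence.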

The other example I gave is a slightly different case, but a similar conclusion. The general subgraph isomorphism problem is NP-complete, but you probably know more about your underlying data in realistic situations, so matching algorithms often wind up being some sort of deeply nested search with the order of tests determined heuristically. The similarity to the previous example is that again each level of the search might need context from various outer levels. So again, what is gained by breaking out the inner loops into their own functions here? They have no inherent meaning without that context, so it's not as if they're going to be reusable or they're going to represent some useful abstraction with a meaningful name. It's just splitting up code that is fundamentally related to avoid some dogma about deep nesting being bad.
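As a rough illustration of that shape (a naive sketch, not any particular production algorithm), a subgraph match tends to look like a recursive search where each level consults the partial assignment built up by all the outer levels:

```python
# Naive subgraph-matching sketch: find an injective mapping from
# pattern nodes to graph nodes that preserves the pattern's edges.
# Each level of the search needs the partial mapping built by all
# the outer levels, which is why the inner loops carry little
# meaning on their own.

def match(pattern_edges, pattern_nodes, graph_edges, graph_nodes):
    def extend(mapping, remaining):
        if not remaining:
            return dict(mapping)  # complete assignment found
        node = remaining[0]
        for candidate in graph_nodes:
            if candidate in mapping.values():
                continue  # mapping must be injective
            mapping[node] = candidate
            # Check every pattern edge whose endpoints are both mapped.
            ok = all(
                (mapping[u], mapping[v]) in graph_edges
                for (u, v) in pattern_edges
                if u in mapping and v in mapping
            )
            if ok:
                result = extend(mapping, remaining[1:])
                if result is not None:
                    return result
            del mapping[node]  # backtrack
        return None
    return extend({}, list(pattern_nodes))

# Pattern: a -> b; graph: 1 -> 2 -> 3
print(match({("a", "b")}, ["a", "b"], {(1, 2), (2, 3)}, [1, 2, 3]))
```

The inner candidate loop is meaningless without `mapping`, the context accumulated by every enclosing level, which is exactly the property that makes extracting it into a standalone named function unhelpful.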

I don't know what your background is or whether you've ever worked in the kinds of fields where these sorts of situations tend to come up. A lot of programs only need very simple data structures and algorithms, and if that's the kind of code you tend to work on then maybe what I'm saying just looks like contrived examples. All I can say is that I've worked on many projects over the years that do need more sophisticated data structures and algorithms than, say, typical business management software, and I've seen plenty of code where having functions of 50-100 lines or longer is entirely reasonable. Obviously I'm not talking about all of the functions, or probably even most of them, but I would argue that the important point is whether each function provides some clean, coherent packet of functionality, not whether it's 5 lines long or 105.

Perhaps your mind works different than mine. For me it is obvious that a set of smaller functions is easier to understand and maintain than a big one.

From the discussions so far, I suspect we've just worked mostly on different types of software. In some cases, I do write a lot of quite small functions. But I don't do that because being small is good, I do it when it happens to need only a few lines of code to represent whatever concepts I'm working with at the time.

When I see people like Robert Martin making sweeping generalisations about keeping functions very short or keeping under N parameters or whatever, and particularly when those people also imply that anyone whose code doesn't follow their rules is somehow a bad programmer or unprofessional or whatever insult they're hurling around this week, I feel a bit like a guy modelling the aerodynamics for a supersonic jet plane whose ten-year-old kid thinks wing design doesn't really matter because he can get enough lift already with his paper plane.

Comment Re:Lots of links to articles, phfft (Score 1) 231

I never saw a reason for functions that long.

A data processing pipeline with more than a handful of steps?

A subgraph match that needs to check for more than a handful of nodes and edges?

Those are the first two that come to mind, because I happened to work on both of them today.

Emphasize on "can". But they should not. So you are arguing because you write bad code (functions with side effects) it is pointless to write shorter functions?

If you're advocating being careful about where side effects happen, you're preaching to the choir. However, I have not yet been made god-emperor of the universe, so sadly I don't get to impose my preferred design style on everyone else and sometimes have to work with code written by foolish mortals. Also, I have yet to encounter a mainstream programming language that is actually good at supporting this goal.

We don't do that because of an arbitrary limit, like 15 lines. We do it to improve readability and maintainability and on top of that: testability.

The first part of this, about readability and maintainability, is just begging the question.

As for the testing part:

A complex algorithm that is divided into several small(er) functions can be tested by testing the functions individually. A big function can only be tested as a block. And then show me your test and proof that every branch in the function is at least tested once ... if you are smart you are using a coverage analyzer for that ;D

Merely breaking out parts of the code into smaller functions doesn't reduce the complexity of the decision tree, so if you want to argue for comprehensive branch coverage then you need just as many tests either way unless you're immediately reusing your smaller functions (and I don't think we're talking about that sort of scenario here).
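A toy illustration of that point (invented for this discussion): whether two independent decisions live in one function or are extracted into helpers, exercising every path through the overall computation still takes the same four input combinations.

```python
# Two independent branches: four paths through the computation either way.

def combined(x, y):
    a = "pos" if x > 0 else "neg"
    b = "even" if y % 2 == 0 else "odd"
    return a, b

def sign(x):
    return "pos" if x > 0 else "neg"

def parity(y):
    return "even" if y % 2 == 0 else "odd"

def split(x, y):
    return sign(x), parity(y)

# Covering all four path combinations needs the same four cases for
# both versions: extraction moved the branches, it did not remove them.
cases = [(1, 2), (1, 3), (-1, 2), (-1, 3)]
for x, y in cases:
    assert combined(x, y) == split(x, y)
```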

In my experience, attempting comprehensive branch coverage as a strategy for writing unit test suites is usually futile and almost always inefficient anyway. There are many other ways to try to prevent errors that are usually more effective.

Comment Re:Lots of links to articles, phfft (Score 1) 231

That is a strawman. Nobody said very small. They simply should be small enough, that is all.

Unfortunately, that's not true. Plenty of people do openly and sometimes rather strongly advocate extremely short functions, including Robert Martin, whose book was mentioned back up the thread. Even the original point that was linked at the start of this thread suggests an upper limit of 15 lines, which is far too short for a lot of useful algorithms.

By "definition" a function only works their arguments and returns a result.

In a pure functional language, that is true. Almost anywhere else, it is not, because functions can have side effects, often including interacting with various forms of state kept outside the function.
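A trivial example of what I mean, in Python (names invented): a function that reads and mutates state outside itself, so its behaviour depends on call order rather than only on its arguments.

```python
# A function with a side effect: it mutates module-level state,
# so the same arguments can produce different results over time.

call_count = 0

def record_and_greet(name):
    global call_count
    call_count += 1  # hidden interaction with state outside the function
    return f"Hello, {name} (call #{call_count})"

print(record_and_greet("Ada"))
print(record_and_greet("Grace"))
```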

Usually, if it is a method in a class, it does not even manipulate the attributes of its associated object.

If it's not doing anything with the attributes of its associated object, why is it a method on that class in the first place? This makes little sense, unless you're talking about languages where all functions live on some class and so sometimes classes are basically just used as namespaces with no state.

The only relation functions have to each other is their call hierarchy, which is easy to figure and in an IDE trivial. Worst case use the debugger.

Again, that only really works in certain types of programming language, though. In languages where you can pass functions around like any other value, or even just C-style function pointers, you have a level of indirection that makes systematically identifying all potential calls to a function significantly harder. If your functions really do represent self-contained concepts with a useful amount of abstraction, the trade-off is often still worth it, but if you're just breaking a relatively long function into smaller parts because of some arbitrary limit, it's a different story.
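A small Python sketch of the problem (function names invented): once functions are passed around as values, the call site no longer names the function being called, so "find all callers" via text search or a static call-hierarchy tool becomes unreliable.

```python
# When functions are first-class values, the actual call site gives
# no hint of which function runs; that depends on what was passed in.

def double(x):
    return 2 * x

def increment(x):
    return x + 1

def apply_all(fs, x):
    # Nothing here mentions `double` or `increment` by name, so a
    # call-hierarchy tool cannot statically link this line to them.
    for f in fs:
        x = f(x)
    return x

print(apply_all([double, increment], 10))  # (2*10)+1 = 21
```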

Comment Re:Lots of links to articles, phfft (Score 2) 231

You write 10 small functions and then have another function that ties them all together. Problem solved, your system isn't any more complicated

Yes, it is. It's more complicated by now having 11 parts instead of one, and 10 extra relationships between them.

None of the rest of your claims follow automatically just from having more, smaller functions. Indeed, the point several of us are trying to make is that exactly the opposite may be true: breaking everything down into very small functions can make code harder to understand. While the individual functions are simpler, the number of potential relationships between them grows quadratically with the number of parts, and those relationships may not be as transparent once you've broken everything down into tiny pieces.

Comment Re:You missed the point. It's about relativity. (Score 1) 167

IMO, that recently UIs have swinged too much towards being good for touch use.

I agree with almost everything you wrote, but I think the above overlooks the real problem, which is that it is almost impossible to design a good UI for some tasks when you're constraining yourself to what works on small touchscreen devices. You can do OK if your presentation and interaction requirements are simple, and that's where those kinds of devices are useful: share a photo, write a short message, log a site visit. However, for more complicated systems, we have big screens with multiple information displays and space for direct controls, and we have keyboards and mice for relatively fast and precise input. Asking a UI designer to ignore all of that is like giving a painter nothing but thick, black paint and an inch-wide brush that they can only hold at the other end of its foot-long stem, and then asking them to create the next Mona Lisa.

Comment Re:You missed the point. It's about relativity. (Score 1) 167

I sincerely doubt the UIs are getting worse year after year. If that were the case, we would have unusable devices by now.

Lately, we are sometimes coming awfully close to that. The lack of affordances in modern flat, touch-friendly, most-things-hidden UIs is horrible for usability compared to the explicit menus, toolbars, and consistent conventions we had before. What some UI designers seem to think people understand or want, and what you actually hear and see when you observe real users trying to operate those UIs, seem to be in different universes at times.

What is really happening is that people are resisting change.

That is also true, but there is a good reason for that. In most cases, UIs are not starting from a blank slate. If you take two UIs for something that the user has never done before, maybe one of them will work a bit better than the other one. But if you take those exact same UIs for something the user has done before but only using one of them, the other one needs to be much better to be worth the time and effort to learn it given that the user could already do what they needed to.

Also, obviously not all change is necessarily good change, and bad changes will naturally be resisted.
