
Comment: Re:Host on your own website, consider (Score 1) 60

by MoonlessNights (#49406201) Attached to: Ask Slashdot: Options Beyond YouTube For An Indie Web Show?

You could consider delivering your shows cooperatively via BitTorrent, with magnet URLs posted to popular BitTorrent-based sharing sites, so the public can keep your shows downloadable even if you find hosting hard to come by.

This touches on an interesting idea which I wonder if anyone has built: a video distribution system built on BitTorrent. It seems like a clever way of using the strength of the network, instead of relying on monolithic companies that need massive resources to store and send all the data. It seems like it could work as long as you had enough nodes (especially ones offering the first few blocks, to improve quick-start behaviour), and it would scale with the popularity of the channel: a channel could just provide an RSS feed containing summary data and magnet links (essentially borrowing a page from the podcast world).
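A feed like that could be as simple as standard RSS with magnet URIs in the enclosures. As a rough sketch (the title, infohash, and size below are made-up placeholders), using only the Python standard library:

```python
# Sketch: a podcast-style RSS item whose enclosure is a magnet link
# instead of an HTTP URL. The title, infohash, and length are
# placeholder values, not real content.
import xml.etree.ElementTree as ET

def feed_item(title, infohash, length_bytes):
    item = ET.Element("item")
    ET.SubElement(item, "title").text = title
    magnet = f"magnet:?xt=urn:btih:{infohash}&dn={title.replace(' ', '+')}"
    # RSS enclosures normally carry an http URL; a magnet URI drops in
    # the same slot, leaving peers to serve the actual bytes.
    ET.SubElement(item, "enclosure",
                  url=magnet, type="video/mp4", length=str(length_bytes))
    return item

item = feed_item("Episode 1",
                 "c12fe1c06bba254a9dc9f519b335aa7c1367a88a",
                 700_000_000)
print(ET.tostring(item, encoding="unicode"))
```

A podcast client that understood magnet enclosures could then hand each item to a BitTorrent client instead of an HTTP downloader.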

It seems like _someone_ must have built such a thing since the hard parts (BitTorrent network and Magnet queries) are already solved by the network. Is anyone aware of such a tool?

Comment: (Score 1) 335

by MoonlessNights (#48842669) Attached to: Lies, Damn Lies, and Tech Diversity Statistics

... or that people are actually individuals and their genders are not their direct identities.

Sure, I am male, but that only matters in one situation (due to my handicap of being straight), and we aren't doing that right now, so I have no interest in your gender.

If you can help me work through the design of this idea without resorting to arguments relating to "where the braces will go", then I think this may be the beginning of a beautiful colleagueship.

Comment: Re:Some practical examples (Score 1) 153

by MoonlessNights (#48612431) Attached to: In IT, Beware of Fad Versus Functional

This is very true. While new ideas can be useful (or even great - everything was new at one point), the hype around a fad leads to tunnel vision where we only talk about how it will revolutionize everything.

The problem I have seen with the power of the fads is that they often become vague and redefined by everyone to fit what they are doing. "Cloud" is a great example: is it a common execution dialect, a remote storage system, or a flexible infrastructure virtualization system? "Agile" had the same problem a few years ago when everyone was doing it, even though their implementations were about as diverse as they were without it.

The "trendy" programming languages are frustrating, since they are justified as "great" because of their ability to solve small problems with concise (or even terse) expressions. Since few people actually deal with large systems, they don't realize that most of these languages are really only good for prototyping or other small problems, and that big things are still written in C, C++, or Java for very good reasons.

It is why "legacy" has come to mean "actually works".

Comment: OpenAutonomy and the big list of alternatives (Score 1) 88

by MoonlessNights (#48460575) Attached to: Revisiting Open Source Social Networking Alternatives

(Sorry for the shameless plug)

Personally, I created OpenAutonomy to solve this problem (and others) in an open, federated network (here is a video I did at FSOSS 2014 talking about this space). There is no centre of the network, nor is there much of a limitation on what it can actually do.

That said, most of the approaches to solving this problem focus on social networking, specifically, and there are tons of them!

The problem is figuring out a way to explain the vision to a non-technical audience and get them interested in something new/different. The problems aren't technical; they are about communication and marketing.

Comment: Sparse on Technical Details (Score 3, Informative) 125

I was interested in what the change-over was, what was causing the performance increase, and how old the existing system is. This information seems to be missing.

What is included actually sounds a little disappointing:
13x faster
12x as many CPUs
4x mass (3x "heavier")

I would have thought there would be either a process win (more transistors per unit area, and all that fun) or a technology win (switching to GPUs or other vector processors, for example), but it sounds like they are building something only marginally better per computational resource. I suppose the biggest win is just density (12x the CPUs in 4x the mass is pretty substantial), but I was hoping for a little more detail - or, given the shift in focus toward power and cooling costs, some sense of what impact this change will have on energy consumption relative to the old machine.
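A quick back-of-envelope calculation (mine, not from the article) makes the "marginally better per resource" point concrete:

```python
# Back-of-envelope check of the headline figures: a 13x speedup on
# 12x as many CPUs is only a modest per-CPU gain, while 12x the CPUs
# in 4x the mass is a 3x density improvement.
speedup, cpu_ratio, mass_ratio = 13.0, 12.0, 4.0

per_cpu_gain = speedup / cpu_ratio      # ~1.08x faster per CPU
density_gain = cpu_ratio / mass_ratio   # 3x more CPUs per unit mass

print(f"per-CPU gain: {per_cpu_gain:.2f}x, density gain: {density_gain:.1f}x")
```

So nearly all of the improvement is packing, not per-unit compute.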

Then again, I suppose this isn't a technical publication so the headline is the closest we will get and it is more there to dazzle than explain.

Comment: Re:These on XP? (Score 3, Informative) 83

by MoonlessNights (#48087785) Attached to: Infected ATMs Give Away Millions of Dollars Without Credit Cards

That isn't an operating system flaw but a hardware flaw: the boot process loads data from the device into memory and points the CPU at it.

What is actually surprising is that they don't use some kind of DRM-esque bootloader (much like you find in many phones) where it only boots an image with a matching signature.
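A signature-checked boot amounts to "hash the image, verify it against a trusted key, refuse to run on mismatch". Here is a deliberately simplified sketch: real secure-boot chains use asymmetric signatures with the public key baked into ROM, and every name and value below is an invented placeholder; an HMAC stands in only to keep the example stdlib-only.

```python
# Simplified sketch of signature-checked boot. Real secure-boot chains
# use asymmetric signatures (the verifying key is public, baked into
# ROM); an HMAC stands in here only to keep the example stdlib-only.
import hmac, hashlib

DEVICE_KEY = b"example-key-burned-into-hardware"  # hypothetical

def sign_image(image: bytes) -> bytes:
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def boot(image: bytes, signature: bytes) -> str:
    # Constant-time comparison; refuse to run anything unsigned.
    if not hmac.compare_digest(sign_image(image), signature):
        raise RuntimeError("refusing to boot: bad image signature")
    return "booted"

firmware = b"\x7fELF...legitimate ATM software"   # placeholder image
sig = sign_image(firmware)
print(boot(firmware, sig))          # the signed image boots
# boot(b"malware payload", sig)     # would raise RuntimeError
```

The point is that an infected disk image simply fails the check before the CPU ever jumps into it.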

Comment: Re:Multiplayer = Devoid of Content (Score 2) 292

by MoonlessNights (#47915467) Attached to: The Growing Illusion of Single Player Gaming

I definitely agree with this. Building a good game requires really good ideas (the game mechanics) and really great content (artwork and writing). These days, it seems common to sell a shell of a game and rely on multi-player to make it worth playing. Of course, to sound savvy, you just say you "crowd-sourced" it.

Many indie games have carved out a good niche for themselves by capitalizing on exceptionally creative game mechanics, which is definitely a great thing to see.

Comment: Re:Online only gives the illusion of accomplishmen (Score 4, Insightful) 292

by MoonlessNights (#47915453) Attached to: The Growing Illusion of Single Player Gaming

But in a truly single-player game, you are only cheating yourself, so you are probably just reducing your own fun and value.

If you want to cheat to "accomplish" things, then I don't really see the problem. It is just a different way of "playing" the game (albeit probably a less interesting one).

Comment: Re:Fully loaded 2U POWER8 for $2,000 USD, yes or n (Score 1) 36

by MoonlessNights (#47772147) Attached to: Slashdot Talks With IBM Power Systems GM Doug Balog (Video)

Having used GCC and XLC on AIX, I can tell you that XLC is definitely the superior compiler.

The difference is less dramatic on Linux, but it is still there.

The difference between the platforms is caused by some interesting knowledge the compiler has of how the OS does some things (readable zero page being the most obvious example).

Comment: False "new vs. old" dichotomies (Score 2) 826

by MoonlessNights (#47751335) Attached to: Choose Your Side On the Linux Divide

(FYI: I haven't followed the systemd saga but I have noticed this fight in a growing number of places)

This seems to be a VERY common problem in the modern computing environment: arguments are reduced to ad hominem labels of their supporters. The proponents of "new" are just "kids fascinated by the trendy at the expense of stability", or inexperienced "why maintain it when I can write something better?" types, while the proponents of "old" are just "out-of-touch old-timers who are afraid of the unknown", or people "only interested in their own job security".

Of course, the reality is a few bits of these straw men combined with massive doses of nuance. Both sides of the argument make more sense if they are reduced to actual concerns and interests, as opposed to "us versus them" camps.

The truth is that "change for change's sake" is a dangerous position, and the "legacy" moniker is slowly shifting from a negative term into one that means "has worked well for a long time". Alternatively, new ideas are sometimes beneficial because they are designed around current realities, as opposed to sometimes-extinct ones.

This whole notion of "choosing your side" doesn't help anyone, since it isn't actually a division but a conversation/argument. Sometimes stepping forward is correct, sometimes standing still is correct, and neither approach is "always correct". Maybe we would choose our next steps better if we worked together to choose them, instead of all lining up in our preassigned trenches.

Comment: Do people confuse them? (Score 1) 267

by MoonlessNights (#47691565) Attached to: Are Altcoins Undermining Bitcoin's Credibility?

I would assume that credibility pollution is not much of an issue, since I don't think people lump all cryptocurrencies together as "Bitcoin". However, I have no actual data, so maybe they do. I would assume that the users of these more exotic currencies are a smaller group who know that these are all different.

The bigger concern I have with these is that they seem rather redundant.

Also, does anyone _actually_ view Bitcoin as "an alternative to fiat currencies and central banks" or more as "a real solution to the problem kludged by PayPal"?

Comment: Re:How does this account of caching? (Score 1) 125

I really wonder about this, too. Perhaps they determined that the common case of a read is one they can statically re-order far enough ahead of the dependent instructions to run without a stall, but that doesn't sound like it should work too well in general. Then again, I am not sure what these idioms look like on ARM64.

The bigger question I have is why they didn't go all out with their own VLIW-based exposure of the hardware's capabilities. As I recall from the Transmeta days, their problem was that constrained memory bandwidth caused their compiler and the application code to compete for the bus (a problem this design may also have, unless their compiler is _really_ tight, which might be true for this low-ambition design), while the benefits of statically renaming registers and packing instructions into issue groups were still substantial.

Comment: Re:Real Programmers don't use GC (Score 1) 637

And those versions are few and far between, exception not the rule.

Not really. It depends on the environment and on things like expected application running time. Java, for example, uses this kind of collector, and it is used in production, so it shouldn't be excluded from the discussion - which means my statement still stands.

Define a steady-state. Not every application has one. This is why real-time stuff doesn't do that - they allocate memory/blocks on the stack at the application (global) level. If you can load the application then everything the application will ever need is allocated. If you cannot load the application, then that's it.

I think we are "having an agreement". If something other than dynamic allocation can be used (the size of something is known at compile time, for example), then it should be allocated using a different mechanism.

From a security point-of-view, you need to be able to validate that a pointer is valid beyond whether or not it is NULL. You need to know that your application issued the pointer and that the data it points to is valid and within your application space. And this needs to be in real applications, not debug mode.

What do you mean? Under what circumstances is this kind of pointer validation required? It sounds like an attempt to detect other bugs after the fact (reading uninitialized or overwritten memory, for example).

Which is a major draw back for using a GC as it now has to crawl everything periodicially.

Whether this is a problem is really the core of this conversation. The cost is the pause time, but the question is whether that is a real problem for your application in the general case, and whether other benefits exist to offset it.

So now you're adding indirect pointers for normal pointer usage...which again now means two calls for every pointer and now you've slowed down the whole application. Smart pointers do the same thing in a sense; as does the PIMPL design pattern - it can still be quite fast, but is still (provably) slower than directly using the pointers to start with.

I said nothing about indirect pointers at any point. The pointers still point directly to the memory being used. Managing the underlying memory slabs directly in no way invalidates this.

Except now you are again penalizing the performance by randomly moving the memory around at application run-time. So you are not just hitting the performance to remove unused memory, but to also "optimize" it. And in doing so you remove the ability of the application developer to run-time optimize the memory usage when necessary.

The application developer in managed runtimes has effectively no control over heap geometry. Technically, they aren't even allowed to think of object references as numbers, since they can only compare them for direct equality/inequality.

Also, I am still not sure what you mean by "remove unused memory". Remember that the unit of work, in a managed heap, is either the number of live objects or the number of live bytes. "Unused" (or dead or fragmented) memory is not a cost factor.

These optimization opportunities do a great job of actually improving performance of the application (check the benchmarks - there is a surprising win in both throughput and horizontal scalability).

Seriously, GCs are probably one of the biggest hits for performance of applications on Android. It's one of the many reason that Java as a language is SLOW.

Can you substantiate that claim? It sounds surprising. Their heaps aren't big enough to be seriously hurt by GC (unless they keep the heap right on the edge of full). Overall, Java is actually very FAST. The slowest part is generally VM bootstrap (just because it has a long path length and much of it can't be parallelized), followed by application bootstrap (which is not a Java problem per se, but many Java applications tend to be massive - application servers, Eclipse, etc). That speed is some combination of GC memory optimizations and, even more so, the "promise of the JIT", which gives them a pretty serious win.

Comment: Re:Real Programmers don't use GC (Score 1) 637

Not quite. GC are always built on top of malloc+free, not side-stepping them.

This is incorrect. High-performance GC implementations are typically built directly on top of the platform's virtual memory routines (on POSIX, think mmap or shmem). This avoids the waste of "managing the memory manager" and also gives the GC fine-grained control over the heap. On some platforms, it also provides selective disclaim capabilities, meaning the GC will actually give pages back to the OS when contracting the heap, whereas free() wouldn't.
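To make that concrete, here is a toy illustration (a sketch, not any real GC's code) of reserving a heap slab straight from the OS with an anonymous mmap and then bump-allocating out of it:

```python
# Toy illustration: reserve a heap slab directly from the OS with an
# anonymous mmap (no malloc involved), then hand out allocations by
# bumping an offset. Real collectors do this in native code and
# collect when the slab fills instead of raising.
import mmap

HEAP_SIZE = 1 << 20                      # 1 MiB slab
heap = mmap.mmap(-1, HEAP_SIZE)          # anonymous mapping from the OS
bump = 0                                 # next free offset in the slab

def alloc(nbytes: int) -> int:
    """Return the offset of a fresh block, or raise if the slab is full."""
    global bump
    nbytes = (nbytes + 7) & ~7           # round up to 8-byte alignment
    if bump + nbytes > HEAP_SIZE:
        raise MemoryError("slab exhausted; a real GC would collect here")
    offset, bump = bump, bump + nbytes
    return offset

a = alloc(24)
b = alloc(100)
print(a, b)   # prints "0 24": allocation is just a pointer bump
```

Allocation here is a couple of arithmetic instructions, which is the kind of fast path a free-list-based malloc cannot match.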

But that doesn't mean that your program should just crash because it failed; the program should degrade gracefully in those situations.

Agreed. To avoid this problem, the better pieces of software I have seen did no dynamic allocation once they reached a steady running state. This meant the failure states were easier to wrangle since you could only fail to allocate during bootstrap or when substantially changing running mode.

Of course, I'd go further and say the standard libs and operating systems should provide a method to validate pointers, but that's more a security concern and the hard part for that is figuring out if the object is valid and on the stack versus valid and on the heap.

The general approach is that a valid pointer is anything non-null. If you need to further introspect the memory mapping or sniff the heap to determine validity, you seriously need to re-think an algorithm. Debugging tools, of course, are exempt from this rule since they are often running on a known-bad address space.

Now in doing the reference counter in the GC, then yes - it becomes a lot more expensive to keep it and maintain it because it has no knowledge of the actual use of the pointer.

GCs do not store reference counts; that is a completely different kind of tracking. They determine liveness by reachability at GC time.
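A minimal sketch of what that tracing looks like (with a hypothetical object graph; real collectors walk actual heap references, of course):

```python
# Toy mark phase: liveness is decided by tracing reachability from the
# roots at collection time; no per-object reference counts exist.
def mark(roots, edges):
    """Return the set of objects reachable from `roots`.
    `edges` maps each object id to the ids it references."""
    live, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj in live:
            continue                     # already marked
        live.add(obj)
        stack.extend(edges.get(obj, ()))
    return live

# Hypothetical heap: A, B, C form a cycle reachable from the root;
# D and E reference each other's world but nothing reaches them.
heap_edges = {"A": ["B"], "B": ["C"], "C": ["A"], "D": ["E"], "E": []}
print(sorted(mark({"A"}, heap_edges)))   # ['A', 'B', 'C']; D and E are garbage
```

Note that the A-B-C cycle is correctly kept alive, and the unreachable D-E pair is correctly condemned - exactly the case where naive reference counting gets the answer wrong.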

GC allocation will never be faster than non-GC allocation because it relies on non-GC allocation underneath. Anything the underlying libraries and kernel do, it does as well.

This is incorrect. High-performance GCs manage their heap directly and can offer allocation routines based on that reality. The main reason they can be faster is that they retain the ability to move live objects later, so fragmentation doesn't need to be proactively avoided during allocation, which is normally what causes pain for malloc+free.

GCs will never improve cache performance as it is entirely unrelated and should not be randomly selecting objects. If anything it will decrease cache performance because it will be randomly hitting nodes (during its checks) that the application wants to keep dormant for a time.

This is incorrect. The GC can easily remove unused space between objects (by copying or compacting adjacent live objects together). Further, since a GC has complete visibility into the object graph, it can move "referencee" objects closer to their "referencer" objects (especially if there is only one). These two factors mean that the effective cache density is higher, and that the likelihood of successive accesses being in cache is higher.

For further information on this point, take a look at the mutator performance characteristics of GCs that can run in either copying or mark+sweep modes. Paradoxically, copying performance is generally much higher even though the GC is doing more actual work (this benefit can be eroded by very large objects, since the copy cost is higher and the relative locality benefits are smaller).
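The locality effect can be illustrated with a toy Cheney-style evacuation (a sketch over a hypothetical object graph): copying live objects breadth-first from the roots lays an object next to the objects it references, and garbage is simply never copied at all.

```python
# Toy Cheney-style evacuation: live objects are copied into a fresh
# space in breadth-first order from the roots, so an object and its
# referencees end up adjacent, and dead objects (plus the gaps they
# left behind) are never copied.
from collections import deque

def evacuate(roots, refs):
    """Return the to-space order of live objects (their new layout)."""
    to_space, seen, queue = [], set(), deque(roots)
    while queue:
        obj = queue.popleft()
        if obj in seen:
            continue
        seen.add(obj)
        to_space.append(obj)             # "copy" = place at next free slot
        queue.extend(refs.get(obj, ()))
    return to_space

# Hypothetical object graph; anything not reachable from "root" is
# garbage and never appears in to-space.
refs = {"root": ["x", "y"], "x": ["z"], "y": [], "z": []}
print(evacuate(["root"], refs))   # ['root', 'x', 'y', 'z']: dense, related objects adjacent
```

After evacuation, the survivors sit contiguously in reference order, which is where the cache-density win comes from.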

Of course, these statements are based on the assumption that this is a managed runtime, and not a native embeddable GC like Boehm.
