
Comment Re:Some practical examples (Score 1) 153

This is very true. While new ideas can be useful (or even great - everything was new at one point), the hype of the fad leads to tunnel vision where we only talk about how they will revolutionize everything.

The problem I have seen with the power of the fads is that they often become vague and redefined by everyone to fit what they are doing. "Cloud" is a great example: is it a common execution dialect, a remote storage system, or a flexible infrastructure virtualization system? "Agile" had the same problem a few years ago when everyone was doing it, even though their implementations were about as diverse as they were without it.

The "trendy" programming languages are frustrating since they are justified as being "great" because of their abilities to solve small problems with concise (or even terse) expressions. Since few people actually deal with large systems, they don't realize that most of these languages are really only good for prototyping or other small problems and big things are still written in C, C++, or Java for very good reasons.

It is why "legacy" has come to mean "actually works".

Comment OpenAutonomy and the big list of alternatives (Score 1) 88

(Sorry for the shameless plug)

Personally, I created OpenAutonomy to solve this problem (and others) in an open, federated network (here is a video I did at FSOSS 2014 talking about this space). There is no centre of the network, nor is there much of a limitation in terms of what it can actually do.

That said, most of the approaches to solving this problem focus on social networking, specifically, and there are tons of them!

The problem is figuring out a way to explain the vision to a non-technical audience and get them interested in something new/different. The problems aren't technical; they are related to communication and marketing.

Comment Sparse on Technical Details (Score 3, Informative) 125

I was interested in what the change-over was, what is driving the performance increase, and how old the existing system is. This information seems to be missing.

What is included actually sounds a little disappointing:
13x faster
12x as many CPUs
4x mass (3x "heavier")

I would have thought that there would be either a process win (more transistors per unit area and all that fun) or a technology win (switching to GPUs or other vector processors, for example), but it sounds like they are building something only marginally better per computational resource. I suppose that the biggest win is just in density (12x the CPUs in 4x the mass is pretty substantial), but I was hoping for a little more detail. Or, given the shift in focus toward power and cooling costs, some indication of what impact this change will have on energy consumption compared to the old machine.
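To put rough numbers on "marginally better per computational resource" (using only the figures from the summary above):

    #include <stdio.h>

    int main(void)
    {
        double speedup = 13.0;   /* 13x faster overall */
        double cpus    = 12.0;   /* 12x as many CPUs   */
        double mass    =  4.0;   /* 4x the mass        */

        printf("per-CPU speedup: %.2fx\n", speedup / cpus);  /* ~1.08x */
        printf("CPU density:     %.2fx\n", cpus / mass);     /*  3.00x */
        return 0;
    }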

Then again, I suppose this isn't a technical publication, so the headline is the closest we will get, and it is there more to dazzle than to explain.

Comment Re:These on XP? (Score 3, Informative) 83

That isn't an operating system flaw but a hardware flaw: the hardware loads data from the device into memory and points the CPU at it.

What is actually surprising is that they don't use some kind of DRM-esque bootloader (much like you find in many phones) where it only boots an image with a matching signature.
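For anyone curious what that looks like in practice, here is a minimal sketch of the idea (the helper names are hypothetical; real secure-boot chains verify a proper signature against a burned-in key rather than a bare hash, and use a constant-time comparison):

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical helpers for illustration only. */
    extern void sha256(const void *buf, size_t len, uint8_t out[32]);
    extern const uint8_t expected_digest[32];   /* baked into ROM/fuses */

    static int image_is_trusted(const void *image, size_t len)
    {
        uint8_t digest[32];
        sha256(image, len, digest);
        return memcmp(digest, expected_digest, sizeof digest) == 0;
    }

    void boot(void *image, size_t len)
    {
        if (!image_is_trusted(image, len))
            return;                        /* refuse to run an unsigned image */
        ((void (*)(void))image)();         /* jump into the verified image */
    }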

Comment Re:Multiplayer = Devoid of Content (Score 2) 292

I definitely agree with this. Building a good game requires really good ideas (the game mechanics) and really great content (artwork and writing). These days, it seems to be common to sell a shell of a game and rely on multi-player to make it worth playing. Of course, to sound savvy, you just say you "crowd-sourced" it.

Many indie games have carved out a good niche for themselves by capitalizing on exceptionally creative game mechanics, which is definitely a great thing to see.

Comment Re:Online only gives the illusion of accomplishmen (Score 4, Insightful) 292

But in a truly single-player game, you are only cheating yourself, so you are probably just reducing your own fun and value.

If you want to cheat to "accomplish" things, then I don't really see the problem. It is just a different way of "playing" the game (albeit probably a less interesting one).

Comment Re:Fully loaded 2U POWER8 for $2,000 USD, yes or n (Score 1) 36

Having used GCC and XLC on AIX, I can tell you that XLC is definitely the superior compiler.

The difference is less dramatic on Linux, but it is still there.

The difference between the platforms is caused by some interesting knowledge the compiler has of how the OS does some things (readable zero page being the most obvious example).
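A simplified example of the kind of transformation I mean: because page zero is readable on AIX, the compiler can speculate a load through a possibly-NULL pointer instead of keeping it behind the branch, which schedules much better.

    struct node { int value; };

    int value_or_default(struct node *n)
    {
        /* On a platform where dereferencing NULL faults, the load must stay
         * behind the NULL check.  Where page zero is readable, the compiler
         * can issue the load unconditionally and select the result after:
         *     tmp = n->value;              (safe even when n == NULL)
         *     return (n != NULL) ? tmp : 0;
         */
        return (n != NULL) ? n->value : 0;
    }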

Comment False "new vs. old" dichotomies (Score 2) 826

(FYI: I haven't followed the systemd saga but I have noticed this fight in a growing number of places)

This seems to be a VERY common problem in the modern computing environment: arguments are reduced to ad hominem labels of their supporters, where the proponents of "new" are just "kids fascinated by the trendy at the expense of stability" or inexperienced "why maintain it when I can write something better?" types, and the proponents of "old" are just "out of touch old-timers who are afraid of the unknown" or people "only interested in their own job security".

Of course, the reality contains bits of these straw men, combined with massive doses of legitimate concern. The truth is, both sides of the argument make more sense if they are reduced to actual concerns and interests, as opposed to "us versus them" camps.

The truth is that "change for change's sake" is a dangerous position, and the "legacy" moniker is slowly changing from a negative term into something which means "has worked well for a long time".
Alternatively, sometimes new ideas are beneficial since they are built around current realities, as opposed to sometimes-extinct ones.

This whole notion of "choosing your side" doesn't help anyone since it isn't actually a division, but a conversation/argument. Sometimes stepping forward is correct while sometimes standing still is correct and neither approach is "always correct". Maybe we would choose our next steps better if we worked together to choose them instead of all lining up in our preassigned trenches.

Comment Do people confuse them? (Score 1) 267

I would assume that credibility pollution is not much of an issue since I don't think people confuse all cryptocurrencies with "Bitcoin". However, I have no actual data, so maybe they do. I would assume that the users of these more exotic currencies are a smaller group who know that these are all different.

The bigger concern I have with these is that they seem rather redundant.

Also, does anyone _actually_ view Bitcoin as "an alternative to fiat currencies and central banks" or more as "a real solution to the problem kludged by PayPal"?

Comment Re:How does this account of caching? (Score 1) 125

I really wonder about this, too. Perhaps they determined that the common case of a read is one which they can statically re-order far enough ahead of the dependent instructions for it to run without a stall, but that doesn't sound like it should work too well in general. Then again, I am not sure what these idioms look like on ARM64.
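A toy example of the static re-ordering I have in mind, where independent work between the load and its use hides a potential miss:

    /* Toy example: the static scheduler hoists the load well ahead of its
     * use, so the independent arithmetic overlaps with a possible cache miss. */
    int scheduled(const int *p, int a, int b)
    {
        int v = *p;               /* load issued early...                   */
        int x = a * b + (a - b);  /* ...independent work hides the latency  */
        return x + v;             /* loaded value only needed here          */
    }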

The bigger question I have is why they didn't go all out with their own VLIW-based exposure of the hardware's capabilities. As I recall of the Transmeta days, their problem was related to constrained memory bandwidth causing their compiler and the application code to compete for the bus (which is a problem this design may also have unless their compiler is _really_ tight, which might be true for this low-ambition design) while the benefits of statically renaming registers and packing instructions into issue groups were still substantial.

Comment Re:Real Programmers don't use GC (Score 1) 637

> And those versions are few and far between, the exception, not the rule.

Not really. It depends on the environment and things like expected application running time. Runtimes like Java, for example, use this kind of collector. They are used in production so they shouldn't be excluded from the discussion, which means my statement is still correct.

> Define a steady-state. Not every application has one. This is why real-time stuff doesn't do that - they allocate memory/blocks on the stack at the application (global) level. If you can load the application then everything the application will ever need is allocated. If you cannot load the application, then that's it.

I think we are "having an agreement". If something other than dynamic allocation can be used (the size of something is known at compile time, for example), then it should be allocated using a different mechanism.

> From a security point-of-view, you need to be able to validate that a pointer is valid beyond whether or not it is NULL. You need to know that your application issued the pointer and that the data it points to is valid and within your application space. And this needs to be in real applications, not debug mode.

What do you mean? Under what circumstances is this kind of pointer validation required? It sounds like this is an attempt to detect other bugs, after-the-fact (reading uninitialized or over-written memory, for example).

> Which is a major drawback of using a GC, as it now has to crawl everything periodically.

Whether this is a problem is really the core of this conversation. The cost is the pause time, but the question is whether or not that is a real problem and whether other benefits exist to offset it, in the general case of your application.

> So now you're adding indirect pointers for normal pointer usage...which again now means two calls for every pointer and now you've slowed down the whole application. Smart pointers do the same thing in a sense; as does the PIMPL design pattern - it can still be quite fast, but is still (provably) slower than directly using the pointers to start with.

I said nothing about indirect pointers at any point. The pointers are still directly to the memory being used. Managing the underlying memory slabs, directly, in no way invalidates this.

> Except now you are again penalizing the performance by randomly moving the memory around at application run-time. So you are not just hitting the performance to remove unused memory, but to also "optimize" it. And in doing so you remove the ability of the application developer to run-time optimize the memory usage when necessary.

The application developer in managed runtimes has effectively no control over heap geometry. Technically, they aren't even allowed to think of object references as numbers since they can only compare them for direct equality/inequality.

Also, I am still not sure what you mean by "remove unused memory". Remember that the unit of work, in a managed heap, is either the number of live objects or the number of live bytes. "Unused" (or dead or fragmented) memory is not a cost factor.

These optimization opportunities do a great job of actually improving performance of the application (check the benchmarks - there is a surprising win in both throughput and horizontal scalability).

> Seriously, GCs are probably one of the biggest hits for performance of applications on Android. It's one of the many reasons that Java as a language is SLOW.

Can you substantiate that claim? It sounds surprising. Their heaps aren't big enough to be seriously hurt by GC (unless they keep the heap right on the edge of full). Overall, Java is actually very FAST. The slowest part is generally VM bootstrap (just because it has a long path length and much of it can't be parallelized), followed by application bootstrap (which is not related to Java, but many Java applications tend to be massive - application servers, Eclipse, etc). This performance is some combination of GC memory optimizations, but more so the "promise of the JIT", which gives them a pretty serious win.

Comment Re:Real Programmers don't use GC (Score 1) 637

> Not quite. GC are always built on top of malloc+free, not side-stepping them.

This is incorrect. High performance GC implementations are typically built on top of the platform's virtual memory routines, directly (on POSIX, think mmap or shmem). This avoids the waste of "managing the memory manager" and also allows the GC fine-grained control over the heap. On some platforms, this also provides selective disclaim capabilities meaning that the GC actually will give pages back to the OS when contracting the heap, whereas free() wouldn't.
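As a minimal sketch of that approach (POSIX, error handling omitted, sizes assumed page-aligned):

    #include <sys/mman.h>
    #include <stddef.h>

    /* The collector owns a contiguous heap reserved straight from the VM
     * layer, bypassing malloc/free entirely. */
    static void  *heap_base;
    static size_t heap_size;

    void heap_init(size_t bytes)
    {
        heap_size = bytes;
        heap_base = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    }

    /* On contraction, the physical pages go back to the OS; the virtual
     * range stays reserved for later re-expansion. */
    void heap_contract(size_t new_size)
    {
        madvise((char *)heap_base + new_size, heap_size - new_size,
                MADV_DONTNEED);
        heap_size = new_size;
    }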

> But that doesn't mean that your program should just crash because it failed; the program should degrade gracefully in those situations.

Agreed. To avoid this problem, the better pieces of software I have seen did no dynamic allocation once they reached a steady running state. This meant the failure states were easier to wrangle since you could only fail to allocate during bootstrap or when substantially changing running mode.
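A sketch of that pattern, where the only point that can fail to allocate is bootstrap:

    #include <stdlib.h>

    /* Everything the program will ever need is grabbed up front; after
     * bootstrap succeeds there is no dynamic allocation left to fail. */
    struct app_state {
        char   *io_buffer;
        double *samples;
    };

    int bootstrap(struct app_state *s, size_t io_bytes, size_t nsamples)
    {
        s->io_buffer = malloc(io_bytes);
        s->samples   = malloc(nsamples * sizeof *s->samples);
        if (!s->io_buffer || !s->samples) {
            free(s->io_buffer);
            free(s->samples);
            return -1;       /* fail (or degrade) here, before steady state */
        }
        return 0;            /* steady state: no further allocation */
    }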

> Of course, I'd go further and say the standard libs and operating systems should provide a method to validate pointers, but that's more a security concern and the hard part for that is figuring out if the object is valid and on the stack versus valid and on the heap.

The general approach is that a valid pointer is anything non-null. If you need to further introspect the memory mapping or sniff the heap to determine validity, you seriously need to re-think an algorithm. Debugging tools, of course, are exempt from this rule since they are often running on a known-bad address space.

> Now in doing the reference counter in the GC, then yes - it becomes a lot more expensive to keep it and maintain it because it has no knowledge of the actual use of the pointer.

GCs do not store reference counts since they are completely different from this kind of tracking. They determine validity by reachability at GC time.
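Roughly, the liveness question looks like this (a deliberately naive mark phase, nothing like a production collector):

    #include <stdbool.h>
    #include <stddef.h>

    /* Liveness is reachability from the roots at collection time; there are
     * no per-object reference counts anywhere. */
    struct object {
        bool            marked;
        size_t          nrefs;
        struct object **refs;      /* outgoing references */
    };

    static void mark(struct object *obj)
    {
        if (obj == NULL || obj->marked)
            return;
        obj->marked = true;
        for (size_t i = 0; i < obj->nrefs; i++)
            mark(obj->refs[i]);    /* real collectors use a work list, not recursion */
    }

    void collect(struct object **roots, size_t nroots)
    {
        for (size_t i = 0; i < nroots; i++)
            mark(roots[i]);
        /* a sweep/compact phase then reclaims everything left unmarked */
    }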

> GC allocation will never be faster than non-GC allocation because it relies on non-GC allocation underneath. Anything the underlying libraries and kernel do, it does as well.

This is incorrect. High performance GCs manage their heap directly and can offer allocation routines based on this reality. The main reason why they can be faster is that they have the ability to move live objects at a later time so fragmentation doesn't need to be proactively avoided in allocation, which is normally what causes pain for malloc+free.
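A rough sketch of why the allocation path can be so cheap when the collector owns the heap (bump-pointer allocation, minus real-world details like per-thread allocation buffers and triggering a collection on exhaustion):

    #include <stddef.h>
    #include <stdint.h>

    /* Bump-pointer allocation inside a GC-owned region: no free lists and no
     * fragmentation management, just advance a cursor. */
    static uint8_t *alloc_cursor;
    static uint8_t *alloc_limit;

    void *gc_alloc(size_t bytes)
    {
        bytes = (bytes + 7) & ~(size_t)7;     /* round up to 8-byte alignment */
        if (alloc_cursor + bytes > alloc_limit)
            return NULL;                      /* a real GC would collect here */
        void *result = alloc_cursor;
        alloc_cursor += bytes;
        return result;
    }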

> GCs will never improve cache performance as it is entirely unrelated and should not be randomly selecting objects. If anything it will decrease cache performance because it will be randomly hitting nodes (during its checks) that the application wants to keep dormant for a time.

This is incorrect. The GC can easily remove unused space between objects (by copying or compacting the adjacent live objects together). Further, given that a GC has complete visibility into the object graph, it can move "referencee" objects closer to the "referencer" objects (especially if there is only one). These 2 factors mean that the effective cache density is higher and that the likelihood of successive accesses being in cache is higher.

For further information regarding this point, take a look at the mutator performance characteristics of GCs which can run in either copying or mark+sweep modes. Paradoxically, the copying performance is generally much higher even though the GC is doing more actual work (this benefit can be eroded by very large objects, since copy cost is higher and relative locality benefits are smaller).
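To illustrate the locality point very roughly: when copying survivors, the collector can place a referencee immediately after its (sole) referencer, so chasing the pointer later tends to stay within the same cache lines. A simplified sketch:

    #include <string.h>
    #include <stdint.h>

    struct payload { char data[48]; };
    struct pair    { struct payload *ref; /* ... other fields ... */ };

    /* Copy a referencer and its referencee into to-space back to back, so a
     * later access to both touches adjacent memory. */
    uint8_t *copy_pair(const struct pair *old, uint8_t *to_space_cursor)
    {
        struct pair *new_pair = (struct pair *)to_space_cursor;
        memcpy(new_pair, old, sizeof *new_pair);
        to_space_cursor += sizeof *new_pair;

        struct payload *new_ref = (struct payload *)to_space_cursor;
        memcpy(new_ref, old->ref, sizeof *new_ref);
        to_space_cursor += sizeof *new_ref;

        new_pair->ref = new_ref;    /* referencer and referencee now adjacent */
        return to_space_cursor;
    }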

Of course, these statements are based on the assumption that this is a managed runtime, and not a native embeddable GC like Boehm.

Comment Re:Real Programmers don't use GC (Score 1) 637

By that definition of "wasted", even a basic malloc+free system will waste massive amounts of memory since free rarely actually removes the heap space it is using from the process. Most of the time, someone using a GC configures it to avoid returning this memory as well, since the environment is typically tuned against the maximum heap size, so giving it back would just mean it has to be requested again later. Of course, if this is a concern, we are getting more into the debate of whether dynamic allocation should be used at all (which is something I wish more people thought about - I also like that you mentioned it can fail, since so many people forget that).

I am not sure what you mean by "dynamic allocation time" in a GC, and reference counting is NOT "sufficiently quick". Actual allocation cost, when using a GC-managed heap, is generally incredibly cheap (because the allocator doesn't have to do heap balancing, etc). Reference counting can involve walking massive parts of the heap, and writing to it (sometimes many times). GC is generally very fast but, again, depends entirely on the number of live objects, so it becomes more expensive the larger the live set becomes.
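What I mean by reference counting "walking massive parts of the heap": dropping the last reference to the root of a large structure cascades through every object underneath it, touching and writing each one, right at that release point. A sketch:

    #include <stddef.h>

    struct rc_object {
        size_t             refcount;
        size_t             nrefs;
        struct rc_object **refs;
    };

    void rc_release(struct rc_object *obj)
    {
        if (obj == NULL || --obj->refcount > 0)
            return;
        for (size_t i = 0; i < obj->nrefs; i++)
            rc_release(obj->refs[i]);   /* cascades through the whole dead graph */
        /* actual reclamation of obj would happen here, synchronously */
    }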

Whether or not real-time characteristics matter in every situation, or not, is really just determined by the maximum total time lost to the collector within a given window. A real-time collector can give you a guaranteed bound, no matter the size of the live set, but other collectors are typically fast enough for their environments: if you can GC the whole heap in 20 ms, it is probably ok for a game whereas if you can do it in 1 second, it is probably ok for an application server (although that would be an oddly long GC), etc. The question of whether or not an occasional outlier can be tolerated in order to reduce average pause time is another which depends on environment. Reference counting schemes are also subject to these limits but the bounds are harder to fix since the size of a now-dead object graph at any release point is not always constant at said release point.

While I agree with you that doing any real-time work in an environment with dynamic allocation is not a good idea (the only times I have done real-time work, dynamic allocation wasn't even supported on the platform - and that was never a problem), there is some amount of interest in things like real-time Java (hence JSR1) so we have real-time collectors. I have never done any real-time Java programming, but I have seen evidence that it works well.

What do you mean by "as the system needs better and better performance GCs become less and less useful"? A good GC will actually increase the performance of the application as it can improve cache density of the application's live set (not to mention memory-processor affinity).

We do seem to be having 2 conversations here:
1) How does GC compare to other dynamic allocation approaches
2) How does any dynamic allocation approach fare against static or stack-oriented allocation approaches. In this case, I think we both agree that avoiding the issue altogether is preferable, where possible.

Comment Re:Real Programmers don't use GC (Score 1) 637

What do you mean by "wasted", in this case?

If you are generally referring to unpredictable pause times, then that is a real concern of GCs (and some general cases of reference counting and some cases of dynamic allocation). Of course, in the case of the GC, the pause time is a function of the live objects (and, in some cases, their size or topology) so I am not sure what you mean by "wasted".

Of course, there are GCs which offer predictable pauses but they are typically lower performance. They only matter in real-time environments so they are not often used.

Comment Re:Real Programmers don't use GC (Score 1) 637

Where do you think the deallocation cost appears in the collector? Are you specifically referring to heap management, finalization, heap contraction, etc? The actual cost of running the GC is a function of live objects, not dead ones or the number allocated since the last cycle.
