Comment Re:Wouldn't Java be a counterexample? (Score 5, Interesting) 394

wouldn't Java be a example of the contrary to this?

Yes, but not the best one. The best would be Oracle's database. Despite the fact that Oracle Database Server is not the result of a 'community-based development model,' the product has a long, ugly history of vulnerabilities. For some reason it fails to be composed of 'low-defect code,' despite apparently having all the best financial incentives. The list of vulnerabilities is long and grows regularly.

The only reason Oracle Database Server has never been the victim of a SQL Slammer type exploit is that it is so expensive that most instances exist only well behind corporate and government firewalls that, if not well maintained, at least exist. Many SQL Server admins apparently don't believe in firewalls.

However, [Solaris] is more of Sun's creation than Oracle's.

Likewise with Java.

Comment Re:the most basic data structures (Score 1) 598

Point of order: 32-bit ARM code doesn't even have stack instructions

ARM's generalization of the classic PUSH and POP instructions has always been admirable (at least until they made Thumb, which sadly does have those foul instructions), but the real world uses STMDB (store multiple, decrement before) and LDMIA (load multiple, increment after) to implement stacks, which is exactly why those instructions exist. 32-bit ARM provides a stack pointer register (R13, a.k.a. SP) and a return address register (R14). This is not merely software convention either; these registers are banked, with distinct values across processor modes, precisely to accommodate the classic call stack in the face of exceptions.

32-bit ARM is every bit as "stack oriented" as anything with explicit PUSH and POP instructions. There is no pretending otherwise.

Modern ISAs provide large numbers of registers specifically to avoid stack usage

Modern ISAs? Providing a large register file to avoid memory accesses goes back at least to Berkeley RISC-I (the inspiration for ARM, incidentally). However, what you end up with when executing real programs is merely a very limited stack inside the register file. From RISC I: A REDUCED INSTRUCTION SET VLSI COMPUTER:

Our approach is to break the set of window registers (r10 to r31) into three parts (Figure 7). Registers 26 through 31 (HIGH) contain parameters passed from “above” the current procedure; that is, the calling procedure. Registers 16 through 25 (LOCAL) are used for the local scalar storage exactly as described previously. Registers 10 through 15 (LOW) are used for local storage and for parameters passed to the procedure “below” the current procedure (the called procedure). On each procedure CALL a new set of registers, r10 to r31, is allocated; however, we want the LOW registers of the “caller” to become the HIGH registers of the “callee.” This is accomplished by having the hardware overlap the LOW registers of the calling frame with the HIGH registers of the called frame: thus, without moving information, parameters in registers 10 through 15 appear in registers 26 through 31 in the called frame.

What we have here is a hardware-accelerated stack based on a large banked register file. An optimization.

Stacks are a software things

If that's true, then it aligns pretty nicely with Genuinely Useful Ideas In Programming, no?

And if you go too far down that road, suddenly you're teaching FORTH.

Or the JVM instruction set, for something a tiny bit more relevant.

Comment the most basic data structures (Score 4, Insightful) 598

the heap, the hash table, and trees

There is nothing basic about these. Each is the subject of ongoing research, and implementations range from simplistic and poor to fabulously sophisticated.

An important basic data structure? Try a stack.

Yes, a stack. The majority of supposed graduates of whatever programming-related education you care to cite are basically ignorant of the importance and qualities of a stack. Some of the simplest processors implement exactly one data structure: a stack, from which all other abstractions must be derived. A stack is what you resort to when you can't piss away cycles and space on "better" data structures. Yet that feature pervades essentially all ISAs, from the 4004 to every one of our contemporary billion-transistor CPUs.
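
To be concrete, here is the whole concept in a few lines of Rust (the language comes up further down; this sketch is mine and nothing canonical): push, pop, peek, last in, first out. Call frames, expression evaluation, and undo histories are all this same shape.

    // A minimal stack: push, pop, peek. Everything else is built from this.
    struct Stack<T> {
        items: Vec<T>,
    }

    impl<T> Stack<T> {
        fn new() -> Self {
            Stack { items: Vec::new() }
        }

        fn push(&mut self, value: T) {
            self.items.push(value);
        }

        fn pop(&mut self) -> Option<T> {
            self.items.pop()
        }

        fn peek(&self) -> Option<&T> {
            self.items.last()
        }
    }

    fn main() {
        let mut stack = Stack::new();
        stack.push(4004);
        stack.push(8080);
        assert_eq!(stack.peek(), Some(&8080));
        assert_eq!(stack.pop(), Some(8080));
        assert_eq!(stack.pop(), Some(4004));
        assert_eq!(stack.pop(), None);
    }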

Comment Re:Scientific "break even", or practical "break ev (Score 2, Interesting) 429

I think this is a decent milestone. While the reactor design itself is unlikely to ever break even, hopefully they're at least learning enough about efficiently triggering a fusion reaction that they can apply it to more productive designs

This achievement opens the door for future designs. Inertial confinement works; it needs improvement, but we're no longer debating whether it's possible to maintain symmetry or any of the many other doubts the detractors dwelled on.

The haters of NIF (and there are many) won't permit a followup; they'll have it shut down no matter what. For them, the whole idea of seeking energy sources that don't demand energy poverty is inherently illegitimate, and they run the show now. But the work and the results won't die at LLNL; there are other people and other nations that haven't decided to turn themselves into a windmill-powered nature preserve.

So we'll have to let them take the ball and run with it. At least it will continue, now perhaps with far more enthusiasm.

Comment Re:Advatages of ZFS over BTRFS? (Score 0) 297

and it'll be as good / better than ZFS soon

No. Sorry.

There hasn't been a commit to the official BTRFS tree in over two months. There have only been five distinct contributors during the entire third quarter. The second quarter saw only 70 commits.

That pace is way too slow for a file system with so many 'to be implemented' features. BTRFS isn't dead, but at this rate it will never surpass ZFS in any notable way.

I'm sincerely sorry about that. Linux contributors just aren't getting it done wrt BTRFS, and that's a crying shame; other operating systems should be looking on in envy at marvelous Linux file systems.

And yes, I should be in there plugging away at it. So should you. But we're not.

That's not Oracle's fault, either. People just don't care enough to put in the effort. We're just here griping about Oracle and the ZFS license issue and posting about BTRFS being the answer, waiting for someone to do all that brutally hard work.

We're deluding ourselves.

Comment Re:Off the pig! Time to get rid of OSs on VMs. (Score 1) 335

The entire Linux kernel, virtual filesystem, daemons, user commands, etc, are just along for the ride.

A JVM relies heavily on a kernel for scheduling, storage (journalling, RAID, LVM, etc.), networking (IP stack, filtering, bridging, etc.) and virtual memory management, at least. All of those capabilities must exist; they weren't created because someone was naive. Either they land in some library used by the guest JVM or they land in the hypervisor.

This isn't to say the now 40-year-old IBM LPAR model is wrong. It clearly works, and VMware is independently evolving into the same thing. But there are some exaggerated claims of simplicity being offered here.

The fact is that recent x86 CPUs and chipsets have gained powerful virtualization capabilities, including hardware-accelerated I/O, MMU and interrupt virtualization. This stuff only began to appear in x86 hardware in late 2005, with important new capabilities such as VMCS Shadowing appearing as recently as Haswell (circa June of this year).

It isn't surprising that people are creating hypervisors to leverage this power. When hardware manufacturers give you a powerful virtualization platform, the question is whether you use a legacy OS retrofitted to utilize it [1] or adopt something supposedly better by virtue of being built with hardware virtualization as a given.

Stay tuned.

[1] FreeBSD 10 offers the bhyve type 2 hypervisor, which relies on VT-x + EPT. It can virtualize x86, like VMware could in the late '90s, but FreeBSD hasn't had to synthesize a virtual sandbox from scratch because the hardware provides the hard parts, and the end result is superior.

Comment Re:Movies used to be about the art, the story. (Score 5, Interesting) 1029

I came up with the exact same summation: too much Indiana Jones. Some parts were great. Bilbo and Gollum under the mountain were truly excellent; that scene really did the book justice. The trolls weren't bad. The dwarf backstory was OK, going far beyond the book and doing it well.

But damn... Radagast the rabbit-sledding superhero? The interminable goblin chase sequence? Wow. The whole mountain giant sequence was an exercise in excessive CGI combined with some unexplainable contempt for continuity. At some point during production someone had to think, "wtf is this?"

There are two more movies coming. It is conceivable they didn't propagate these mistakes to the remainder, but given that they've undertaken to stretch this relatively simple story over, what, 7.5 to 8 hours of film... we could be in for a lot more fail.

Comment Re:Netbeans! (Score 3, Interesting) 543

I concur with this. NetBeans is not attempting to be a generic GUI application platform; it is a mere IDE, so it weighs a lot less than Eclipse. I moved to NetBeans because Maven integration with Eclipse is still half-baked after all these years; with NetBeans you just open the Maven project and things work correctly. I stayed with NetBeans because it performs better and just has fewer hairs. Not having Eclipse spam .project and .classpath files all over the place is just fabulous as well.

It is Oracle, however. One day it might cost $6000 per "seat."

Comment Re:Bullshit (Score 2) 139

It seems improbable that a gas giant would

Does it seem improbable to you? Life on Earth evolved in a fluid.

Even if genesis is not possible in a gas giant atmosphere, large planets tend to have lots of moons and, therefore, lots of opportunities for primitive life to emerge. Extremophiles from such a moon could survive a short trip through space to a gas giant's atmosphere. Some small fraction of those would thrive and evolve in the new environment.

I suspect gas giant atmospheres may actually be very fertile. Life is good at producing the simple spherical shapes needed for buoyancy. There are probably gas giants with billions of tons of biomass drifting around.

Comment Re:Rust (Score 1) 407

* memory management is explicit [merriam-webster.com] -- what does this mean?

Quantifying the Performance of Garbage Collection vs. Explicit Memory Management
Automatic vs. Explicit Memory Management: Settling the debate

* deterministic [merriam-webster.com] -- what does this mean?

I thought it was self-evident. Here is a discussion of the matter.

* endemic [merriam-webster.com] use of a garbage collector... -- what does this mean?

Pervasive would be a better word: languages that make garbage-collected allocations for most or all things. For example, in Java, aside from primitives, all allocations conceptually occur on a garbage-collected heap.

reference-counted heap objects

Reference counting: counting the number of references to an object.
Heap: an arena of memory maintained by a memory allocator. Also, CPUs typically have no knowledge of how software manages heaps. You may be thinking of virtual memory.
Objects: object in the generic sense of some amount of memory managed on a heap. These lecture notes show the same usage. The editors of this page also use the word 'object' in exactly the same manner when discussing pointers. It's not that hard to follow.

Putting it together we have objects on a heap for which reference counts are maintained; reference-counted heap objects.
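
If a concrete picture helps, here is that mechanism in Rust, since that's the language under discussion (Rc is the current spelling; this is an illustrative sketch, not official documentation): the count goes up when a handle is cloned, and the allocation is freed the moment the count reaches zero.

    use std::rc::Rc;

    fn main() {
        // One heap allocation, with a reference count maintained alongside it.
        let first = Rc::new(vec![1, 2, 3]);
        assert_eq!(Rc::strong_count(&first), 1);

        // Cloning the Rc copies the pointer and bumps the count; the Vec
        // itself is not duplicated.
        let second = Rc::clone(&first);
        assert_eq!(Rc::strong_count(&first), 2);

        drop(second);
        assert_eq!(Rc::strong_count(&first), 1);
        // When `first` goes out of scope the count reaches zero and the
        // allocation is freed -- no garbage collector involved.
    }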

"exchange" heap -- what does this mean?
* "local" heap -- what does this mean?

The link I provided to Patrick Walton's blog would get you there. Also, there is documentation. Sorry if discussing a new programming language involves terms you haven't heard; computing can be like that sometimes.

(note: there is only one "heap" on most CPU architectures, so now we have added abstraction)

Now you are definitely confusing heaps and virtual memory. There are usually many, possibly thousands, of heaps on a system at any given time, with many distinct implementations of which the CPU is entirely ignorant. Memory allocators and virtual memory are different things.

* via an "owned" pointer -- what does this mean?

Similar to a C++ auto_ptr or unique_ptr. Again, the link I provided would get you there.
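
For the concrete flavor (a sketch in current Rust spelling, Box rather than the old ~ sigil; illustrative only):

    fn main() {
        // An owned pointer: exactly one owner, freed deterministically when
        // that owner goes out of scope, much like a C++ unique_ptr.
        let a = Box::new([0u8; 1024]);

        // Ownership moves; it is never silently shared.
        let b = a;
        // println!("{}", a[0]); // rejected at compile time: `a` was moved

        println!("{}", b[0]);
    } // `b` dropped here; the heap allocation is freed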

* wild pointers -- what does this mean?

Dangling pointer and wild pointer are synonymous.

Use of the exchange heap is exceptional and explicit yet immediately available when necessary -- what does this mean?

I provided a link directly to a discussion of this.

Memory "management" is reduced to efficient stack pointer manipulation -- uhh, what? the language sits around modifying content at %esp and %ebp along with some offsets? sounds far from efficient)

Incrementing and decrementing a stack pointer register is very efficient. Offsets are computed at compile time, and the instructions typically require one CPU cycle and no memory access, given a naive model of a CPU. These techniques are ancient and ubiquitous. Sorry you weren't familiar with them.
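
A trivial sketch (a hypothetical function of my own; exact codegen varies and some locals may live entirely in registers): every local below sits at a fixed, compile-time-known offset in the current frame, so "allocating" all of them typically costs one stack pointer adjustment on entry and one on return.

    // The locals here occupy fixed offsets in the current stack frame (or
    // registers). The compiler typically reserves the whole frame by
    // adjusting the stack pointer once on entry and releasing it on return:
    // no allocator, no GC, no per-variable bookkeeping at run time.
    fn checksum(data: &[u8]) -> (u32, u32) {
        let mut sum: u32 = 0;   // stack slot (or a register)
        let mut count: u32 = 0; // stack slot (or a register)
        for &byte in data {
            sum = sum.wrapping_add(byte as u32);
            count += 1;
        }
        (sum, count)
    }

    fn main() {
        println!("{:?}", checksum(b"hello"));
    }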

or simple, deterministic destruction -- what does this mean?

Others seem to have no difficulty with these terms. In particular, they are not compelled to link merriam-webster at each use, for some reason.

(note: 2nd use of word "deterministic")

Reusing words is an important feature of language.

Compile time checks preclude bad pointers and simple leaks so common with traditional systems languages -- what does this mean and how does this work, considering that the value stored at a pointer (or what it points to) can be manipulated at run-time, so how would the language "deterministically know" (see what I did there?) what's "bad" vs. "good"?

"bad", "wild" or "dangling" pointers are memory safely faults or violations. It is an feature inherent to Rust that they can't exist. Feel free to learn about it.

* ... that is productive, concise

Holy shit! An opinion on Slashdot? Say it isn't so.

Comment Re:Article itself is a waste of memory (Score 4, Insightful) 407

The market decided long ago that fewer programmer hours was better than users waiting a few seconds everyday for their device to GC.

No, actually, that's not what happened. As the summary and the story itself (both of which apparently went unread) point out, one of the most successful systems to emerge in the market recently, iOS, is not a GC environment.

Over here you may learn about iOS memory management. Without getting too far into that wall of text, one discovers the following:

If you plan on writing code for iOS, you must use explicit memory management (the subject of this guide).

OK, so your claim that GC is the only viable solution for contemporary application development is demonstrably false. Let's look at some other assertions:

programmers are inherently bad at memory management. Memory will leak [if programmers must manage it].

First, the vast number of iOS applications not leaking shows that a non-GC system doesn't necessarily have to leak. At least not badly enough to compromise the viability of the platform, which is the only meaningful criterion I can think of when it comes to the market.

Second, why assume programmers are inherently bad at a thing when that thing has traditionally been exposed via terrible, error-prone, demonstrably awful mechanisms? It seems to me that among widely used tools we leaped from 'systems' languages with truly heinous MM primitives (C/C++) directly into pervasive GC systems. Aside from Objective-C + ARC there just aren't enough good non-GC systems to make broad generalizations. Thus, you may be right about programmers, but you can't prove it, and I doubt it.

Finally, what proof is there that pervasive GC is better at not leaking than a good explicit MM system? Anyone with an Android system and a bunch of apps will quickly discover that pervasive GC does not eliminate leaks.

[some phone] comes with a whopping 2GB of RAM

Google Glass has 682 MB of RAM. There is always a new platform into which we must fit our software, and the new platform is usually resource constrained, so there will never be a day when questioning the cost of GCs is wrong. Maybe the wearable you eventually put on will have 8 GB of RAM. The computers you swallow or implant or sprinkle around the lawn probably won't. The fact that the next generation of phones can piss away RAM to greedy GCs just isn't particularly informative.

Comment Rust (Score 2) 407

Memory management is an issue that has me excited about Rust. Rust memory management is explicit, easy to use, deterministic, efficient and safe. The language designers understand that garbage collection is costly and that endemic use of a garbage collector limits applicability.

Although Rust does have reference-counted heap objects on the so-called "exchange" heap, memory is normally allocated on the stack or on a "local" heap (via an "owned" pointer) that has "destructor-based memory management," much like C++ objects but without the leaks and wild pointers.

The result is the vast majority of allocated memory is not managed by the garbage collector. Use of the exchange heap is exceptional and explicit, yet immediately available when necessary. Otherwise, memory "management" is reduced to efficient stack pointer manipulation or simple, deterministic destruction. Compile time checks preclude bad pointers and simple leaks so common with traditional systems languages.
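
Put concretely (a sketch of my own using the current spellings, Box and Rc rather than the sigils; illustrative, not official documentation):

    use std::rc::Rc;

    struct Sensor {
        name: String,
    }

    impl Drop for Sensor {
        // Deterministic destruction: runs exactly when the owner goes out
        // of scope, not whenever a collector gets around to it.
        fn drop(&mut self) {
            println!("closing {}", self.name);
        }
    }

    fn main() {
        // Plain stack allocation: no allocator call at all.
        let reading: [f64; 4] = [1.0, 2.0, 3.0, 4.0];

        // Owned heap allocation: freed (and Drop run) at end of scope.
        let owned = Box::new(Sensor { name: String::from("thermo") });

        // Shared ownership is the exception and is spelled out explicitly.
        let shared = Rc::new(Sensor { name: String::from("gyro") });
        let another = Rc::clone(&shared);

        println!("{} {} {}", reading[0], owned.name, another.name);
    } // `reading` popped with the frame; `owned` dropped; `shared` dropped
      // when the last Rc goes away -- all without a garbage collector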

There is a series of detailed blog posts about Rust memory management here.

Rust was inspired by the need for an efficient, AOT-compiled systems programming language that is productive, concise and at least as safe as contemporary "managed" languages. Its memory management scheme goes directly to the point of this story.
