Comment Re:I am not going to convert (Score 1) 245

I've used sccs, rcs, cvs, svn, and git [all in production environments], spanning a 30-year period. git is easier to use than svn, and more powerful [features that git pioneered have been backported into svn, so you're getting the benefits even if you don't realize it].

Ultimately, it's all about the merging, which is the premise that git is based on. See:
http://www.youtube.com/watch?v...
or
http://www.youtube.com/watch?v...

Comment I've already done it (Score 0) 245

I've already created Perl scripts to do this. I've already got the blob files and a full git repo for NetBSD. Yes, it takes days for these to run, but what's the big deal?

I did this because I needed the scripts to convert some of my old personal software from CVS/RCS to git. To debug the scripts, I thought that a true test would be to convert something massive like NetBSD. I'm not a snob, as I also configured for FreeBSD and OpenBSD but didn't run the scripts on those.

I did this on an old 8-core 2.5GHz 64-bit machine with 12GB RAM [and 120GB of swap space] and enough disk space. The full retail price on this was $3000 five years ago. The same specs can be had much cheaper today.

How many repos of what projects are going to be converted? 10? 100? 1000? Ultimately, there aren't enough projects to justify dedicating a machine at 100% usage for a five-year period.

I tried to post the script here but various /. posting filters tripped over the 800+ lines. So, here's the top comment section along with the comments for the various functions:

# gtx/cvscvt.pm -- cvs to git conversion
#
#@+
# commands:
# rsync -- sync from remote CVS repo
# cvsinit -- initialize local CVS repository
# co -- checkout CVS source
# agent -- do conversion (e.g. create blob files)
# xz -- compress blob files
# import -- import blob files using git-fast-import
# clone -- clone the bare git repo into a "real" one
# git -- run git command
#
# symbols:
# cvscvt_topdir -- top directory
# cvscvt_module -- cvs module name
# cvscvt_agent -- conversion agent (e.g. cvs2git)
#
# cvshome -- top level for most cvs files
# cvscvt_srcdir -- cvs work directory
# cvsroot -- cvs root directory (CVSROOT)
# cvsroot_repo -- cvsroot/CVSROOT
# cvscvt_rsyncdir -- cvsroot/cvscvt_module
#
# cvscvt_blobdir -- directory to store agent output blobs
# cvscvt_tmpdir -- temp directory
# cvscvt_logdir -- directory for logfiles
#
# git_top -- git files top directory
# git_repo -- git repo *.git directory
# git_src -- git extraction directory
# git_dir -- git [final] repo directory
# git_work -- git [final] working directory
#
# cvscvt_xzlimit -- xz memory compression limit
#@-
# cvscvtinit -- initialize
# cvscvtcmd -- get list of commands
# _cvscvtcmd -- get list of commands
# cvscvtopt -- decode options
# cvscvtall -- do all import steps
# cvscvt_rsync -- sync with remote repo
# cvscvt_tar -- create tar archive
# cvscvt_cvsinit -- create real repository
# cvscvt_co -- do checkout
# cvscvt_agent -- invoke conversion agent [usually cvs2git]
# cvscvt_cvs2git -- run cvs2git command
# cvscvt_xz -- compress blob files
# _cvscvtxz -- show sizes
# cvscvt_import -- run git fast-import command
# cvscvt_clone -- clone the bare git repo into a "real" one
# cvscvt_git -- run git command
# cvscvt_cvsps -- run cvsps command
# cvscvtblobs -- get blob files
# cvscvtshow -- show cvs2git results
# cvscvtshow_evtmsg -- get fake timestamp
# cvscvtshow_etarpt -- show amount remaining
# cvscvtshow_msg -- output a message
# cvscvteval -- set variable
# cvscvtexec -- show vxsystem error
# cvslogfile -- get logfile
# cvslogpivot -- rename logfile

Comment Strangers with candy (Score 1) 201

The problem with Android Lollipop [for developers] is [still] the "Android fragmentation" problem, which Google is trying to address with its Android One program. Lollipop has 5,000 new APIs, but developers have to program to the lowest common denominator, which is probably pre-4.0.

This is in contrast to Apple. Most devices get upgraded to the latest iOS in short order [3-6 months]. IIRC, an author writing an iOS developers' book stripped all pre-iOS 8 material from it because he felt that iOS 8 was just so much better. Whether he's right or wrong doesn't matter as much as the fact that he can do it because of the iOS upgrade cycle. This makes iOS development much easier than Android development.

The latest Linux runs quite well on older devices, so Android should too. This is just like a PC game that, during install, speed-tests the machine and backs off on things like resolution, anti-aliasing, etc. to make it run smoothly.

Android One needs even more teeth:
- Vendors _must_ upgrade old devices [even at a loss] unless they can prove [to Google] that it won't run due to memory, etc.
- Vendors shouldn't be able to force people to buy a new device just to get the latest Android by refusing to upgrade Android on "last year's device".

I have a Galaxy S3 and Samsung has upgraded it every six months. I really like the fact that they're not forcing me to upgrade the device just to get the latest/best Android OS. As a result, they've got my loyalty. When I do [eventually] upgrade my device [at a time of my choosing], Samsung's firmware upgrade policy will be a major factor in my staying with them.

If Google can't get vendors to cooperate [even better] on this, it should offer backports of Lollipop [APIs] to older versions via Google Play. This helps consumers with older devices, Android developers, Google, and even the [recalcitrant] vendors [even though they might vehemently disagree].

Comment Re:Great! (Score 1) 549

I use the keystore approach. Each of my devices has a unique private/public key pair. Each device has the public keys of all the others. I disable password based login [except for physical/console login].

Shouldn't be too hard for websites to implement this. Shouldn't be too hard to allow multiple public keys (e.g. just add them to the per-user "authorized_keys" file). Default this to off for users at the start, but allow it to be enabled on the account management page [with a place to paste in new public keys and menus to delete/modify the existing ones].
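
As a rough sketch [invented names; the actual challenge/signature verification isn't shown], the server-side bookkeeping is just a per-user key list:

#include <set>
#include <string>

// per-user public key list: the web equivalent of an authorized_keys file
struct account {
        std::set<std::string> authorized_keys;     // one entry per device public key

        void add_key(const std::string &pubkey)    { authorized_keys.insert(pubkey); }
        void remove_key(const std::string &pubkey) { authorized_keys.erase(pubkey); }
        bool key_allowed(const std::string &pubkey) const {
                return authorized_keys.count(pubkey) != 0;
        }
};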

Comment Re:Linked? (Score 1) 338

Thanks for the support. But, it seems my post is already going down in flames. Curious, since there have been many Slashdot articles about Ubisoft's militant attitude about [their] DRM. On such an article, it would probably get modded up. Or, perhaps, if I used a smiley face. Since I rarely get modded down for posts I make [and some are considerably more controversial], it makes me wonder if there aren't some astroturfing accounts at work. Sigh.

Comment Re:If Oracle wins, Bell Labs owns the world. (Score 4, Interesting) 146

The AT&T copyrights were the genesis of POSIX. Nobody could create a workalike Un*x, so POSIX was originally a "clean room" reimplementation of the Un*x API's [libc, programs, et. al.]. POSIX now serves as a standard, but that wasn't its original purpose.

Because the POSIX methodology has been around for 30 years, it provides some precedent/defense for Google [estoppel].

If Oracle's argument prevails, this kills all Linux, *BSD [OSX] workalike OSes. Also, because ISO copyrights the C/C++ specs [to charge a fee to have a copy], this means that nobody could program in C/C++ without a license from ISO.

The Oracle/Google decision by the appellate court is tantamount to conferring patent protections on a copyright. That is, it's as if, because Louis L'Amour copyrighted his western novels, nobody else could pen a western.

Comment Re:Why do people still care about C++ for kernel d (Score 2) 365

placement new doesn't work without nullifying a few things. Automatic cleanup on scope exit doesn't work for locks in the kernel. See below for these, and much more ...

placement new/delete are noexcept functions. But, they call std::terminate--not acceptable. The only thing that works is an alloc function that returns NULL (or (void *) -errno). Returning null is not fatal in the kernel. The caller must be able to deal with it (usually returning -ENOMEM). So, the [global] new/delete must be changed. Also, placement delete has problems [I've left off the backslashes for clarity]:

#define GETPTR(_ptr,_typ,_siz)
switch (_typ) {
case 0:
        _ptr = alloca(_siz);
        break;
case 1:
        _ptr = kmalloc(_siz,GFP_KERNEL);
        break;
case 2:
        _ptr = kmalloc(_siz,GFP_ATOMIC);
        break;
case 3:
        _ptr = slab_one(_siz);
        break;
case 4:
        _ptr = slab_two(_siz);
        break; // ...
}

void
myfnc(int typ)
{
        void *ptr;
        GETPTR(ptr,typ,23);
        class abc *x = new(ptr) abc(19,37);
}
At this point, a delete operator [even a placement version] has no idea which pool to release to, because there's no way to pass typ to it. You might be able to create a constructor abc(typ,19,37), but that means adding an extra member element to hold typ so the delete operator can get at it, which is additional overhead/complexity that C doesn't have. It might be possible to make it work by casting typ to void* and using that as the pointer:
    class abc *x = new((void *) typ) abc(19,37);
and have the class-specific new operator use GETPTR internally. I tested this and it works. However, I haven't yet been able to get the corresponding placement delete to work as a class-specific overload. In trying to find a way, I came upon:
http://www.scs.stanford.edu/~d...
It's fairly detailed and lays out a [pretty strong] case against using the new operator [more eloquently than I could do here].
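
For concreteness, here's a minimal userspace sketch of the class-specific new trick described above [names are illustrative; kmalloc/kfree/GFP_* are stubbed so it stands alone, and the pool codes are invented]:

#include <cstddef>
#include <cstdint>

extern "C" void *kmalloc(std::size_t size, unsigned flags);   // stand-in for the kernel allocator
extern "C" void kfree(const void *ptr);
#define GFP_KERNEL 0u
#define GFP_ATOMIC 1u

class abc {
public:
        abc(int a, int b) : a_(a), b_(b) {}

        // the "pointer" argument really carries the pool selector
        void *operator new(std::size_t siz, void *pool) noexcept {
                switch ((std::uintptr_t) pool) {
                case 1:  return kmalloc(siz, GFP_KERNEL);
                case 2:  return kmalloc(siz, GFP_ATOMIC);
                default: return nullptr;                       // caller must check for NULL
                }
        }

        // matching placement delete: only invoked if the constructor throws,
        // which is exactly the gap described above; an ordinary delete has
        // no way to recover the pool selector
        void operator delete(void *ptr, void *pool) noexcept {
                (void) pool;
                kfree(ptr);
        }

private:
        int a_, b_;
};

// usage, mirroring the post: x is NULL if the pool allocation failed
// class abc *x = new((void *) 1) abc(19, 37);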

A lot of kernel code puts definitions in the usual place [top of function body] for C. In C++, this invokes the constructors, which is not what you want. The reason: [say] 10 vars are defined, the function does a quick check on its args, and does an early return of -EINVAL. All that create/destroy is wasted. This may be harmful if the constructors have side effects such as lock acquisition. Note that doing a [wasteful] lock followed by an immediate unlock [to satisfy having a destructor do lock cleanup] is a non-starter in the kernel [you'll never get such code checked in/signed off on].

So, you'd have to go through every kernel function by hand [there are 16.9 million lines of source code] and move the definitions down:
{
        struct foo x;   // before: x is constructed even when we bail out immediately
        if (bad_news)
                return -EINVAL; ...
}
becomes:
{
        if (bad_news)
                return -EINVAL;
        struct foo x; ...   // after: the definition [and constructor] moved below the check
}

You can't put a lock release in a destructor because you'd need an extra member var that would have to be set/cleared when you acquire/release a lock. That's because the destructor has to have some way of knowing whether to suppress the lock release. So, you're adding an extra variable [that isn't needed in C] just to prevent an attempt to release a lock that was never acquired in the first place. More overhead and slower [and more complex] than its C counterpart.
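
Sketched out [invented names; spin_lock/spin_unlock stand in for the real primitives], the guard described above ends up looking like this:

struct spinlock_t;                               // opaque kernel lock type
extern "C" void spin_lock(spinlock_t *);
extern "C" void spin_unlock(spinlock_t *);

class cond_lock_guard {
public:
        explicit cond_lock_guard(spinlock_t *l) : lock_(l), held_(false) {}
        void acquire() { spin_lock(lock_);   held_ = true;  }
        void release() { spin_unlock(lock_); held_ = false; }
        ~cond_lock_guard() {
                if (held_)                       // the extra state that plain C never needs
                        spin_unlock(lock_);
        }
private:
        spinlock_t *lock_;
        bool held_;                              // exists only so the destructor knows whether to unlock
};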

In kernel functions, multiple different types of locks have to be acquired. Sometimes, it's:
get_lock_a();
x = find_object_in_a();
if (! x)
        goto release_a;
get_lock_b();
y = find_object_in_b();
if (! y)
        goto release_b;
// do stuff
release_lock_b();
// do more stuff
release_lock_a();
// do even more stuff
return 0;
release_b:
release_lock_b();
release_a:
release_lock_a();
return -EINVAL;

Although you can create a goto-less version, sometimes the gotos are done deliberately for speed.

Another common snippet:
get_lock_a();
x = find_object_in_a();
if (! x)
        release_lock_a();
return x; // return with object list locked if we found one

Here's another one:
void
myfnc(void)
{
        if (in_interrupt()) {
                if (! trylock()) {
                        schedule_work(myfnc);
                        return;
                }
        }
        else {
                getlock();
        } // ...
        release_lock();
}

I've been writing Linux device drivers for a living for the last 20 years. For 12 years before that, Unix. For 10 before that, other OSes. So, I've had to read an awful lot of kernel code.

These are just the smallest of examples [junior grade--I was in a hurry] of what would be required. There are many more. Try a different approach. Download the kernel source code and start reading it. You'll find out a few things:

(1) C isn't nearly as messy or anemic as most C++ programmers think it is.

(2) See what expert-level C programmers can actually do. The kernel is far cleaner than you probably suspect.

(3) Linus [and crew] don't want to use C++ merely because "they don't understand it". If it were truly beneficial in a kernel environment, they'd have switched long ago.

(4) Contrary to belief [on slashdot], Linus is a very reasonable guy. I met him personally a number of years back. Ignore the bombast in postings; he only does it to counter some strong egos, and it's done purely for shock effect to get stubborn [and wrong] programmers to do their jobs. Linus has had many discussions/battles where the others were saying "you just don't understand". That has usually been the gcc developers. In the end, he ends up being right [e.g. it really was a bug in the compiler and not a bug in Linus' understanding].

(5) The kernel is overhead to getting work done [an application]. Thus, it's designed to be fast--very fast. Other OSes have died because they forgot this. Mach, for example. [Clean] message passing microkernel architecture. Unfortunately, it [even after tweaks] was too slow for a production system and the project was sidelined.

(6) Linux is the basis for Android. Linux powers Google servers. Linux powers Facebook servers. Linux powers most zillion-core supercomputers. Considering all the diversity in arches, devices, etc., if Linux weren't already cleanly designed, it would have collapsed under the weight of maintaining all of the above.

(7) The kernel is "bare metal" programming. C is better suited to that than C++.

If you truly think the kernel will benefit from C++, read [a lot] of the code first [Repeating: 16.9 million lines of code]. Then join the project that started the discussion.

Comment Re:Why do people still care about C++ for kernel d (Score 2) 365

The importance of this is underestimated. With a sanely written C++ program (merely sticking to the modern approaches) memory and resource leaks are a thing of the past, but you still get the completely predictable and deterministic resource management of C.

Unfortunately, you can't use any of that in the kernel [overloading create/destroy new/delete operators won't cut it]. Spinlocks, rwlocks, RCU, slab allocation, per cpu variables, explicit cache flush, memory fence operations, I/O device mappings, ISRs, tasklets, kmalloc vs vmalloc, deadlocks, livelocks, etc. are the issues a kernel programmer has to deal with. Nothing in C++ will help with these and some C++ constructs are actually a hindrance rather than a help.

For instance, copy constructors must be disabled. This was part of a proposal a few years back to make a C++ subset suitable for realtime/embedded. It isn't acceptable to have "x = y" invoke an unexpected amount of code simply because you inadvertently invoked a copy constructor.
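
In C++11 terms, spelling out that restriction looks like this [class name is illustrative]:

class kobj_wrapper {
public:
        kobj_wrapper() = default;
        kobj_wrapper(const kobj_wrapper &) = delete;            // no accidental copy-construct
        kobj_wrapper &operator=(const kobj_wrapper &) = delete; // "x = y" now fails to compile
};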

Kernels by their nature are messy. Anybody writing kernel code must be fully aware of the implications of doing something and must be aware of the state they're being called in. Abstraction just makes this job harder not easier.

For example, all kernel code must be compiled with -mno-red-zone, because an interrupt can arrive at any time [even between two machine instructions] and the interrupt path [running on the same stack] would clobber any data the compiler had placed in the red zone below the stack pointer.

Linux already does a pretty fair job of keeping things clean. If you don't believe that, actually go read the kernel source code. And, if something ends up being crufty, it gets cleaned up. Even if that means that some 100 or so modules need corresponding changes.

Comment Re:Why do people still care about C++ for kernel d (Score 2) 365

Virtually all kernel functions return either NULL, true/false, or -errno for errors. No need nor desire for exceptions.
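
A typical shape of that convention [widget_create and struct widget are made-up names; kmalloc, GFP_KERNEL, and ENOMEM are real kernel names, stubbed here so the sketch stands alone]:

#include <cstddef>

extern "C" void *kmalloc(std::size_t size, unsigned flags);   // stand-in for the kernel allocator
#define GFP_KERNEL 0u
#define ENOMEM 12

struct widget { int id; };

static int widget_create(int id, widget **out)
{
        widget *w = static_cast<widget *>(kmalloc(sizeof(*w), GFP_KERNEL));
        if (!w)
                return -ENOMEM;   // failure travels back as a negative errno
        w->id = id;
        *out = w;
        return 0;                 // success; caller checks "if (rc < 0)"
}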

Just how would you do an exception inside an ISR, if you could even find a [credible/safe] way to implement them inside a kernel?

Uncaught exception === kernel panic?

Comment Re:Citation needed (Score 3, Interesting) 554

Overall, 64-bit has a 20% [or better] performance increase for most workloads. There are factors other than just the size of pointers.

Pointer size is not the major factor in cache footprint, since most of the cache is taken up by data items rather than pointers. These data items are more or less invariant across compilation modes.

64-bit compilers only use 64-bit fetches for non-pointers if you actually request them (e.g. long long). MS is the oddball and defines a "long" to be 32 bits even in 64-bit mode [contrary to the compilation models used by everyone else]. "int" suffices for most data. Where it doesn't, one will [have to] code "long long", and that is invariant across 32/64, except that the 32-bit code will be slower [generating 2-3 instructions for each 64-bit one].
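
A quick [userspace] way to see the model difference on any given compiler:

#include <cstdio>

int main()
{
        // LP64 (64-bit Linux/BSD/OS X):  int=4 long=8 long long=8 void*=8
        // LLP64 (64-bit Windows):        int=4 long=4 long long=8 void*=8
        std::printf("int=%zu long=%zu long long=%zu void*=%zu\n",
                    sizeof(int), sizeof(long), sizeof(long long), sizeof(void *));
        return 0;
}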

With x86_64, the first six integer/pointer arguments to a function are passed in registers and not on the stack (i.e. no wasteful push/pops for argument passing on entry/exit).

For a function that has a lot of automatic [stack] variables, in 32-bit, any non-trivial loop could spend a lot of time dumping a register to its stack frame solely to make room for another variable that needs the register. This is register pressure, and it is considerably higher in 32-bit mode.

Once an address has been loaded into a register, access relative to that base register is identical speed-wise between 64-bit and 32-bit.

64-bit has RIP-relative addressing, which allows data to be addressed as a small offset from the RIP [instruction pointer/program counter] register. Since it's relative to the RIP, two consecutive instructions that address the same data location will have slightly different offsets within each instruction.

You want a study? Try a google search on "performance 32 bit vs 64 bit".

Or, the easy reader version:
http://www.phoronix.com/scan.p...

Comment Re:Not so public disclosure (Score 2) 159

I agree.

Existing customers will already know about bugs as they're using the software. They'll want to know what's being done to fix it and will get some comfort if they can see the process (e.g. fix isn't yet out, but the problem is being diagnosed, test vectors generated, etc.).

In this particular case, since some of the customers are 3rd-party developers [programmers], their livelihood [selling their add-ons] depends on the core product being reliable. They absolutely want access. And, they can usually speed up the bug-fix process with their [knowledgeable] feedback.

Adding an NDA as a prerequisite for access to the issue tracker might be an idea. This prevents the info there from being used as ammo by a competitor.

Even if a competitor buys the product [merely] to get access, they can't use it as a marketing/sales weapon, as that would violate the NDA.

If the competitor does not go for direct access [does not buy the product and is not subject to the NDA] but gets info leaked by an employee of a legit customer, then the competitor would be getting proprietary information [which might be considered industrial espionage, theft of trade secrets, etc.].

In either case, it weakens the competitor's incentive to try to use the information from the issue tracker.

Further, the issue tracker being accessible can be a marketing/sales selling point: "We stand behind what we sell and we're confident enough to have our bug tracker in the open to prove it. Why doesn't our competitor? What are they trying to hide?"
