Comment: Re:First step is to collect data. (Score 1) 405

by Forever Wondering (#48382767) Attached to: Ask Slashdot: How To Unblock Email From My Comcast-Hosted Server?

All the rejection messages point to your systems being affected in some way. The "agent" may be establishing an SMTP connection that doesn't need authentication (e.g., it connects directly to Yahoo's inbound SMTP port to deliver a message to a Yahoo user, so it's not a relay as far as Yahoo is concerned).

It could be bypassing anything you've already set up [or co-opting it in some way that you don't yet understand]. If your systems have been compromised, all the authentication credentials are available to the agent. The best way I know of to prove/disprove this is to set up a sniffer/router/blocker.

The rejections are based on [too] high message frequency, which tends to indicate that you're not on a blocklist [yet]. It's also not likely to be a policy change at any one recipient ISP, since at least three started rejecting at more or less the same time.

Having the ability to log/monitor/analyze traffic in general might be a good thing. What if it weren't just emails but DDoS or other attacks [which carry considerably more liability for your business]?

If you can track down some of the messages that got sent and had complaints attached to them, the delivery envelope may have some clues--for example, the specifics of the SMTP parameters used (ordinary SMTP vs. ESMTP, etc.). Perhaps contacting the mail abuse departments of yahoo et al. and explaining what is happening may help. They could tell you how many messages are arriving from your IP address. Compare this against an estimate of what your users are doing. If your legit users haven't started sending many more messages recently, but the ISP is seeing a huge uptick, this will be telling.
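
As a rough illustration of the counting side [hypothetical, not your actual setup], here's a C++ sketch that tallies recipients per destination domain from a sendmail/postfix-style mail log. The log path and the "to=<user@domain>" line format are assumptions you'd adjust for your MTA:

// count_dests.cpp -- tally recipients per destination domain from a mail log.
// The log path and the "to=<user@domain>" line format are assumptions;
// adjust for your MTA (postfix, sendmail, etc.).
#include <fstream>
#include <iostream>
#include <map>
#include <regex>
#include <string>

int main(int argc, char **argv)
{
        const char *path = (argc > 1) ? argv[1] : "/var/log/maillog";
        std::ifstream log(path);
        if (!log) {
                std::cerr << "cannot open " << path << "\n";
                return 1;
        }

        std::regex to_re(R"(to=<[^@>]+@([^>]+)>)");
        std::map<std::string, long> counts;

        std::string line;
        while (std::getline(log, line)) {
                std::smatch m;
                if (std::regex_search(line, m, to_re))
                        ++counts[m[1].str()]; // per-domain recipient count
        }

        for (const auto &kv : counts)
                std::cout << kv.second << "\t" << kv.first << "\n";
        return 0;
}

Run it over logs from before and after the problem started; a domain whose count exploded is your smoking gun.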

Since you've got [and are paying the extra money for] Comcast business class, they should be able to help with the traffic logging/analysis. Also, if the targeted ISPs are limiting based on an IP range, Comcast may be able to help in dealing with the ISPs. You may have to escalate this a level or two within Comcast's support hierarchy. Be sure to get a trouble ticket filed [if you haven't already].

Comment: Re:First step is to collect data. (Score 1) 405

by Forever Wondering (#48382017) Attached to: Ask Slashdot: How To Unblock Email From My Comcast-Hosted Server?

Deferred: 421 RP-001 ...

Are you sure your systems haven't been compromised by spambots? Everything was fine two weeks ago [and had been for a while], so what's changed? ISP logs from before and after may show something.

Can you set up a new system [with a different OS like linux, netbsd, etc.] that acts as a gateway between your current systems and your router/modem [it would require a second ethernet port/card]? Have this system filter/monitor all traffic, looking for anything suspicious--something like the sketch below.
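
For instance, a minimal sketch of the monitoring piece using libpcap [the "eth0" interface name and the 1000-packet sample are assumptions; a real gateway would log far more than a count]:

// smtp_watch.cpp -- count outbound SMTP connection attempts with libpcap.
// Build: g++ smtp_watch.cpp -lpcap
// "eth0" and the 1000-packet sample are assumptions; adjust for your gateway.
#include <pcap.h>
#include <cstdio>

static long syn_count;

static void on_packet(u_char *, const struct pcap_pkthdr *, const u_char *)
{
        ++syn_count; // one new outbound connection to port 25
}

int main()
{
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *p = pcap_open_live("eth0", 128, 1, 1000, errbuf);
        if (!p) {
                std::fprintf(stderr, "pcap: %s\n", errbuf);
                return 1;
        }

        // match only TCP SYNs (new connections) destined for SMTP
        struct bpf_program prog;
        const char *filter = "tcp dst port 25 and tcp[tcpflags] & tcp-syn != 0";
        if (pcap_compile(p, &prog, filter, 1, PCAP_NETMASK_UNKNOWN) < 0 ||
            pcap_setfilter(p, &prog) < 0) {
                std::fprintf(stderr, "filter: %s\n", pcap_geterr(p));
                return 1;
        }

        pcap_loop(p, 1000, on_packet, nullptr); // sample 1000 matches, then report
        std::printf("outbound SMTP connection attempts: %ld\n", syn_count);
        pcap_close(p);
        return 0;
}

If the count dwarfs what your users could plausibly be sending, something on the inside is spewing.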

Comment: Re:I am not going to convert (Score 1) 245

by Forever Wondering (#48190981) Attached to: Help ESR Stamp Out CVS and SVN In Our Lifetime

I've used sccs, rcs, cvs, svn, and git [all in production environments] over a 30-year span. git is easier to use than svn and more powerful [features that git pioneered have been backported into svn, so you're getting the benefits even if you don't realize it].

Ultimately, it's all about the merging, which is the premise that git is based on. See:
http://www.youtube.com/watch?v...
or
http://www.youtube.com/watch?v...

Comment: I've already done it (Score 0) 245

by Forever Wondering (#48190617) Attached to: Help ESR Stamp Out CVS and SVN In Our Lifetime

I've already created perl scripts to do this, and I've already got the blob files and a full git repo for netbsd. Yes, it takes days for these to run, but what's the big deal?

I did this because I needed the scripts to convert some of my old personal software from CVS/RCS to git. To debug the scripts, I figured a true test would be to convert something massive like netbsd. I'm not a snob: I also set up configurations for freebsd and openbsd, but didn't run the scripts on those.

I did this on an old 8-core 2.5GHz 64-bit machine with 12GB of RAM [and 120GB of swap space] and enough disk space. The full retail price on this was $3000 five years ago. The same specs can be had much cheaper today.

How many repos of what projects are going to be converted? 10? 100? 1000? Ultimately, there aren't enough projects to justify running a machine at 100% usage for a five-year period.

I tried to post the script here but various /. posting filters tripped over the 800+ lines. So, here's the top comment section along with the comments for the various functions:

# gtx/cvscvt.pm -- cvs to git conversion
#
#@+
# commands:
# rsync -- sync from remote CVS repo
# cvsinit -- initialize local CVS repository
# co -- checkout CVS source
# agent -- do conversion (e.g. create blob files)
# xz -- compress blob files
# import -- import blob files using git-fast-import
# clone -- clone the bare git repo into a "real" one
# git -- run git command
#
# symbols:
# cvscvt_topdir -- top directory
# cvscvt_module -- cvs module name
# cvscvt_agent -- conversion agent (e.g. cvs2git)
#
# cvshome -- top level for most cvs files
# cvscvt_srcdir -- cvs work directory
# cvsroot -- cvs root directory (CVSROOT)
# cvsroot_repo -- cvsroot/CVSROOT
# cvscvt_rsyncdir -- cvsroot/cvscvt_module
#
# cvscvt_blobdir -- directory to store agent output blobs
# cvscvt_tmpdir -- temp directory
# cvscvt_logdir -- directory for logfiles
#
# git_top -- git files top directory
# git_repo -- git repo *.git directory
# git_src -- git extraction directory
# git_dir -- git [final] repo directory
# git_work -- git [final] working directory
#
# cvscvt_xzlimit -- xz memory compression limit
#@-
# cvscvtinit -- initialize
# cvscvtcmd -- get list of commands
# _cvscvtcmd -- get list of commands
# cvscvtopt -- decode options
# cvscvtall -- do all import steps
# cvscvt_rsync -- sync with remote repo
# cvscvt_tar -- create tar archive
# cvscvt_cvsinit -- create real repository
# cvscvt_co -- do checkout
# cvscvt_agent -- invoke conversion agent [usually cvs2git]
# cvscvt_cvs2git -- run cvs2git command
# cvscvt_xz -- compress blob files
# _cvscvtxz -- show sizes
# cvscvt_import -- run git fast-import command
# cvscvt_clone -- clone the bare git repo into a "real" one
# cvscvt_git -- run git command
# cvscvt_cvsps -- run cvsps command
# cvscvtblobs -- get blob files
# cvscvtshow -- show cvs2git results
# cvscvtshow_evtmsg -- get fake timestamp
# cvscvtshow_etarpt -- show amount remaining
# cvscvtshow_msg -- output a message
# cvscvteval -- set variable
# cvscvtexec -- show vxsystem error
# cvslogfile -- get logfile
# cvslogpivot -- rename logfile

Comment: Strangers with candy (Score 1) 201

by Forever Wondering (#48156897) Attached to: Google Announces Motorola-Made Nexus 6 and HTC-Made Nexus 9

The problem with Android Lollipop [for developers] is [still] the "android fragmentation" problem, which Google is trying to address with its Android One program. Lollipop has 5000 new APIs, but developers have to program to the lowest common denominator, which is probably pre-4.0.

This is in contrast to Apple: most devices get upgraded to the latest iOS in short order [3-6 mos]. IIRC, an author writing an iOS developers' book stripped all pre-iOS8 material from it because he felt that iOS8 was just so much better. Whether he's right or wrong matters less than the fact that he could do it at all, thanks to the iOS upgrade cycle. This makes iOS development much easier than Android development.

The latest Linux runs quite well on older devices, so Android should too. It could work like a PC game that, during install, speed-tests the machine and backs off on things like resolution, anti-aliasing, etc. to make it run smoothly.

Android One needs even more teeth:
- Vendors _must_ upgrade old devices [even at a loss] unless they can prove [to Google] that the new version won't run due to memory, etc.
- Vendors shouldn't be able to force people to buy a new device just to get the latest Android by refusing to upgrade the OS on "last year's device".

I have a Galaxy S3 and Samsung has upgraded it every six months. I really like the fact that they're not forcing me to upgrade the device just to get the latest/best Android OS. As a result, they've got my loyalty. When I do [eventually] upgrade my device [at a time of my choosing], Samsung's firmware upgrade policy will be a major factor in my staying with them.

If Google can't get vendors to cooperate [even better] on this, it should offer backports of Lollipop [API's] to older versions via Google Play. This helps consumers with older devices, Android developers, Google, and even the [recalcitrant] vendors [even though they might vehemently disagree].

Comment: Re:Great! (Score 1) 549

I use the keystore approach. Each of my devices has a unique private/public key pair, and each device has the public keys of all the others. I disable password-based login [except for physical/console login].

Shouldn't be too hard for websites to implement this, including support for multiple public keys (e.g., just add them to the per-user "authorized_keys" file, as in the sketch below). Default this to off for users at the start, but allow it to be enabled on the account management page [with a place to paste in new public keys and menus to delete/modify the existing ones].
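
A hypothetical sketch of the bookkeeping side [the path layout and one-key-per-line format are assumptions modeled on ssh; the actual challenge/response signature verification is a separate piece]:

// authkeys.cpp -- hypothetical per-user public-key list handling,
// modeled on ssh's authorized_keys: one key per line, append to add.
// The path layout is an assumption; the signature check itself is elsewhere.
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

static std::string keyfile(const std::string &user)
{
        return "users/" + user + "/authorized_keys"; // per-user key list
}

// load all of a user's public keys; at login, the server checks the
// client's signature against each until one verifies
static std::vector<std::string> load_keys(const std::string &user)
{
        std::vector<std::string> keys;
        std::ifstream in(keyfile(user));
        for (std::string line; std::getline(in, line); )
                if (!line.empty() && line[0] != '#')
                        keys.push_back(line);
        return keys;
}

// adding a key from the account-management page is just an append
static bool add_key(const std::string &user, const std::string &pubkey)
{
        std::ofstream out(keyfile(user), std::ios::app);
        return bool(out << pubkey << "\n");
}

int main()
{
        add_key("alice", "ssh-ed25519 AAAA...example laptop");
        for (const auto &k : load_keys("alice"))
                std::cout << k << "\n";
        return 0;
}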

Comment: Re:Linked? (Score 1) 338

Thanks for the support. But it seems my post is already going down in flames. Curious, since there have been many slashdot articles about Ubisoft's militant attitude about [their] DRM. On such an article, it would probably get modded up. Or, perhaps, if I used a smiley face. Since I rarely get modded down for posts I make [and some are considerably more controversial], it makes me wonder if there aren't some astroturfing accounts at work. Sigh.

Comment: Re:If Oracle wins, Bell Labs owns the world. (Score 4, Interesting) 146

by Forever Wondering (#48106933) Attached to: Google Takes the Fight With Oracle To the Supreme Court

The AT&T copyrights were the genesis of POSIX. Nobody could create a workalike Un*x, so POSIX was originally a "clean room" reimplementation of the Un*x APIs [libc, programs, et al.]. POSIX now serves as a standard, but that wasn't its original purpose.

Because the POSIX methodology has been around for 30 years, it provides some precedent/defense for Google [estoppel].

If Oracle's argument prevails, it kills all the Linux and *BSD [OSX] workalike OSes. Also, because ISO copyrights the C/C++ specs [to charge a fee for a copy], nobody could program in C/C++ without a license from ISO.

The Oracle/Google decision by the appellate court is tantamount to conferring patent protections on a copyright. That is, because Louis L'Amour copyrighted his western novels, nobody else could pen a western.

Comment: Re:Why do people still care about C++ for kernel d (Score 2) 365

by Forever Wondering (#48071029) Attached to: Object Oriented Linux Kernel With C++ Driver Support

placement new doesn't work without nullifying a few things. Automatic cleanup on scope exit doesn't work for locks in the kernel. See below ... Much more ...

placement new/delete are noexcept functions. But, they call std::terminate--not acceptable. The only thing that works is an alloc function that returns NULL (or (void *) -errno). Returning null is not fatal in the kernel. The caller must be able to deal with it (usually returning -ENOMEM). So, the [global] new/delete must be changed. Also, placement delete has problems [I've left off the backslashes for clarity]:

#define GETPTR(_ptr,_typ,_siz)
switch (_typ) {
case 0:
        _ptr = alloca(_siz); // on-stack
        break;
case 1:
        _ptr = kmalloc(_siz,GFP_KERNEL); // kernel heap, may sleep
        break;
case 2:
        _ptr = kmalloc(_siz,GFP_ATOMIC); // kernel heap, atomic context
        break;
case 3:
        _ptr = slab_one(_siz); // custom slab cache
        break;
case 4:
        _ptr = slab_two(_siz); // another slab cache
        break;
// ...
}

void
myfnc(int typ)
{
        void *ptr;
        GETPTR(ptr,typ,23);
        class abc *x = new(ptr) abc(19,37);
}
At this point, a delete operator [even a placement version] has no idea which pool to release to, because there's no way to pass typ to it. You might be able to create a constructor abc(typ,19,37), but that needs an extra member element to hold typ so the delete operator can get at it--additional overhead/complexity that C doesn't have. It might be possible to make it work by casting typ to void* and using that as the pointer:
    class abc *x = new((void *) typ) abc(19,37);
and have the class-specific new operator use GETPTR internally. I tested this and it works. However, I haven't [yet] been able to get the corresponding placement delete to work as a class-specific overload. In trying to find the way, I came upon:
http://www.scs.stanford.edu/~d...
It's fairly detailed and lays out a [pretty strong] case against using the new operator [more eloquently than I could do here].
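
To make the trick concrete, here's a user-space approximation of the class-specific new described above [the pool numbers and malloc stand-ins are hypothetical; in the kernel these would be the kmalloc/slab calls from GETPTR]:

// pool_new.cpp -- user-space sketch of the "typ as placement pointer" trick.
// The pool numbers and malloc stand-ins are hypothetical; in the kernel
// these would be kmalloc/slab calls as in GETPTR above.
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <new>

static void *pool_alloc(long typ, std::size_t siz)
{
        switch (typ) {
        case 1: return std::malloc(siz); // stand-in for kmalloc(_siz,GFP_KERNEL)
        case 2: return std::malloc(siz); // stand-in for kmalloc(_siz,GFP_ATOMIC)
        default: return nullptr; // unknown pool: new-expression yields nullptr
        }
}

class abc {
public:
        abc(int a, int b) : a_(a), b_(b) {}

        // class-specific "placement" new: the void* argument is really the
        // pool selector, cast by the caller
        static void *operator new(std::size_t siz, void *typ) noexcept
        {
                return pool_alloc((long) typ, siz);
        }
        // the matching placement delete is only invoked if the constructor
        // throws; an ordinary "delete x" can't recover typ, which is exactly
        // the problem described above
        static void operator delete(void *ptr, void *typ) noexcept
        {
                (void) ptr;
                (void) typ;
        }
private:
        int a_, b_;
};

int main()
{
        long typ = 1; // select pool, as GETPTR would
        abc *x = new ((void *) typ) abc(19, 37);
        std::printf("x=%p\n", (void *) x);
        std::free(x); // ok here only because the pools are malloc stand-ins
        return 0;
}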

A lot of kernel code puts definitions in the usual place for C [the top of the function body]. In C++, this invokes the constructors, which is not what you want. Say 10 vars are defined, and the function does a quick check on its args and returns -EINVAL early: all that construction/destruction is wasted. It may even be harmful if the constructors have side effects such as lock acquisition. Note that doing a [wasteful] lock followed by an immediate unlock [just to let a destructor do lock cleanup] is a non-starter in the kernel [you'll never get such code checked in/signed off on].

So, you'd have to go through every kernel function by hand [there are 16.9 million lines of source code] and move the definitions down:
/* before: constructor runs even on the early-error path */
{
        struct foo x;
        if (bad_news)
                return -EINVAL;
        // ...
}
/* after: nothing constructed until the args check out */
{
        if (bad_news)
                return -EINVAL;
        struct foo x;
        // ...
}

You can't put a lock release in a destructor without an extra member var that gets set/cleared as you acquire/release the lock, because the destructor has to have some way of knowing whether to suppress the lock release. So, you're adding an extra variable [that isn't needed in C] just to prevent an attempt to release a lock that was never acquired in the first place. More overhead and slower [and more complex] than its C counterpart--see the sketch below.
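
A minimal sketch of what such a guard has to look like [user-space, with a pthread mutex standing in for a kernel lock, purely for illustration]:

// guard.cpp -- a destructor-based unlock needs a "held" flag, sketched
// here with a pthread mutex standing in for a kernel lock.
#include <pthread.h>

class lock_guard_maybe {
public:
        explicit lock_guard_maybe(pthread_mutex_t *m) : m_(m), held_(false) {}

        bool try_acquire()
        {
                held_ = (pthread_mutex_trylock(m_) == 0); // set flag on success
                return held_;
        }
        void release()
        {
                if (held_) {
                        pthread_mutex_unlock(m_);
                        held_ = false; // clear flag so the destructor skips it
                }
        }
        ~lock_guard_maybe() { release(); } // suppressed if never acquired

private:
        pthread_mutex_t *m_;
        bool held_; // the extra member var C doesn't need
};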

In kernel functions, multiple different types of locks have to be acquired. Sometimes, it's:
get_lock_a();
x = find_object_in_a();
if (! x)
        goto release_a;
get_lock_b();
y = find_object_in_b();
if (! y)
        goto release_b;
// do stuff
release_lock_b();
// do more stuff
release_lock_a();
// do even more stuff
return 0;

release_b:
release_lock_b();
release_a:
release_lock_a();
return -EINVAL;

Although you can create a goto-less version, sometimes the gotos are done deliberately, for speed.

Another common snippet:
get_lock_a();
x = find_object_in_a();
if (! x)
        release_lock_a();
return x; // return with object list locked if we found one

Here's another one:
void
myfnc(void)
{
        if (in_interrupt()) {
                if (! trylock()) {
                        schedule_work(myfnc);
                        return;
                }
        }
        else {
                getlock();
        }
        // ...
        release_lock();
}

I've been writing linux device drivers for a living for the last 20 years; for 12 years before that, Unix; and for 10 before that, other OSes. So, I've had to read an awful lot of kernel code.

These are just the smallest of examples [junior grade--I was in a hurry] of what would be required. There are many more. Try a different approach. Download the kernel source code and start reading it. You'll find out a few things:

(1) C isn't nearly as messy or anemic as most C++ programmers think it is.

(2) See what expert-level C programmers can actually do. The kernel is far cleaner than you probably suspect.

(3) Linus [and crew] don't avoid C++ merely because "they don't understand it". If it were truly beneficial in a kernel environment, they'd have switched long ago.

(4) Contrary to belief [on slashdot], Linus is a very reasonable guy. I met him in person a number of years back. Ignore the bombast in postings; he only does it to counter some strong egos, purely for shock effect, to get stubborn [and wrong] programmers to do their jobs. Linus has had many discussions/battles where the others were saying "you just don't understand". This has usually been the gcc developers. In the end, he ends up being right [e.g. it really was a bug in the compiler and not a bug in Linus' understanding].

(5) The kernel is overhead to getting work done [by an application]. Thus, it's designed to be fast--very fast. Other OSes have died because they forgot this. Mach, for example, had a [clean] message-passing microkernel architecture, but [even after tweaks] it was too slow for a production system and the project was sidelined.

(6) Linux is the basis for Android. Linux powers Google servers. Linux powers Facebook servers. Linux powers most zillion-core supercomputers. Considering all the diversity in arches, devices, etc., if Linux weren't already cleanly designed, it would have collapsed under the weight of maintaining all of the above.

(7) The kernel is "bare metal" programming. C is better suited to that than C++.

If you truly think the kernel will benefit from C++, read [a lot] of the code first [Repeating: 16.9 million lines of code]. Then join the project that started the discussion.
