Comment Re:It's in the image (Score 2, Interesting) 187

The mayonnaise you like is the mayonnaise you grew up with ...

Films are shot at 24 fps but displayed [in theaters] at 48 fps; each frame is displayed twice: f0, black, f0, black, f1, black, f1, black, f2, ...
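
The double-flash pattern above can be sketched as a generator [a toy model of the projector's shutter sequence, not any real projection API]:

```python
def double_flash(frames):
    """Yield the display sequence for 24 fps film projected at 48 Hz:
    each frame is flashed twice, with a shutter-closed (black) interval
    after each flash."""
    for frame in frames:
        yield frame
        yield "black"
        yield frame
        yield "black"

# The opening of the sequence for frames f0, f1, f2:
print(list(double_flash(["f0", "f1", "f2"])))
# → ['f0', 'black', 'f0', 'black', 'f1', 'black', 'f1', 'black',
#    'f2', 'black', 'f2', 'black']
```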

According to one study, when test audiences were shown true 1-to-1 48 fps film, they actually preferred the 24 fps version.

The same is true for audio. Those who grew up on 128 kbps .mp3's preferred them over higher-fidelity formats.

The human optic nerve has [surprisingly] low bandwidth. I worked for a company that developed a [now shipping] video product that models the human optic system and removes detail that the human eye would not see. This allows better compression without sacrificing video quality. In A/B testing of original [uncompressed] video sources vs. the detail reduced video, test audiences preferred the detail reduced video. It was considered "cleaner" and "more pleasing".

Comment Re:First step is to collect data. (Score 1) 405

All the rejection messages point to your systems being affected in some way. The "agent" may be establishing an SMTP connection that doesn't need authentication (e.g. it connects directly to yahoo's inbound SMTP port for a message to a yahoo user. Thus, it's not a relay as far as yahoo is concerned).

It could be bypassing anything you've already set up [or co-opting it in some way that you don't yet understand]. If your systems have been compromised, all the authentication credentials are available to the agent. The best way I know of to prove/disprove this is to set up a sniffer/router/blocker.
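
One cheap way to start the sniffer approach: watch for machines opening SMTP connections straight to outside mail exchangers instead of going through your relay. A sketch [the connection-table format and all addresses here are hypothetical; in practice you'd feed it output from netstat/ss or a tap on the gateway]:

```python
def direct_smtp_peers(conn_lines, relay_host="192.0.2.10"):
    """Given netstat-style lines ('proto local remote state'), return the
    remote hosts contacted directly on port 25, excluding the legitimate
    relay. A desktop talking straight to Yahoo's inbound MX is a strong
    sign of a spambot on that machine."""
    suspects = set()
    for line in conn_lines:
        parts = line.split()
        if len(parts) < 4 or parts[0] != "tcp":
            continue
        host, _, port = parts[2].rpartition(":")
        if port == "25" and host != relay_host:
            suspects.add(host)
    return sorted(suspects)

sample = [
    "tcp 10.0.0.5:51824 98.136.96.74:25 ESTABLISHED",    # direct to a remote MX
    "tcp 10.0.0.5:51830 192.0.2.10:25 ESTABLISHED",      # our own relay: fine
    "tcp 10.0.0.7:44321 93.184.216.34:443 ESTABLISHED",  # ordinary HTTPS
]
print(direct_smtp_peers(sample))  # → ['98.136.96.74']
```

Run it periodically per internal host and the compromised machine identifies itself.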

The rejections are based on [too] high message frequency, which tends to indicate that you're not on a blocklist [yet]. It's also not likely to be a policy change at a given recipient ISP, since at least three started rejecting at more or less the same time.

Having the ability to log/monitor/analyze traffic in general might be a good thing. What if it weren't just emails but DDoS or other attacks [which carry considerably more liability for your business]?

If you can track down some of the messages that got sent that had complaints attached to them, the delivery envelope may have some clues. For example, the specifics of the SMTP parameters used (ordinary SMTP or eSMTP, etc.). Perhaps contacting the mail abuse departments of yahoo et al. and explaining what is happening may help. They could tell you how many messages are arriving from your IP address. Compare this against an estimate of what your users are doing. If your legit users haven't started sending many more messages recently, but the ISP is seeing a huge uptick, this will be telling.

Since you've got [and are paying the extra money for] Comcast business class, they should be able to help with the traffic logging/analysis. Also, if the targeted ISPs are limiting based on an IP range, Comcast may be able to help in dealing with the ISPs. You may have to escalate this a level or two within Comcast's support hierarchy. Be sure to get a trouble ticket filed [if you haven't already].

Comment Re:First step is to collect data. (Score 1) 405

Deferred: 421 RP-001 ...

Are you sure your systems haven't been compromised by spambots? Everything was fine two weeks ago [and had been for a while]. What's changed? ISP logs before and after may show something.

Can you set up a new system [with a different OS like linux, netbsd, etc.] that is a gateway between your current systems and your router/modem [this would require a second ethernet port/card]? Have this system filter/monitor all traffic, looking for anything suspicious.

Comment Re:I am not going to convert (Score 1) 245

I've used sccs, rcs, cvs, svn, and git [all in production environments] over a 30-year period. git is easier to use than svn, and more powerful [features that git pioneered have been backported into svn, so you're getting the benefits even if you don't realize it].

Ultimately, it's all about the merging, which is the premise that git is based on. See:
http://www.youtube.com/watch?v...
or
http://www.youtube.com/watch?v...
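
To make the merging point concrete, here's a toy line-level three-way merge [git's real merge first aligns lines with a diff and works per file; this sketch assumes equal-length inputs, purely to show the base/ours/theirs logic]:

```python
def merge3(base, ours, theirs):
    """Toy three-way merge: for each line, take whichever side changed it
    relative to the common ancestor; identical edits agree; divergent
    edits are marked as a conflict."""
    merged = []
    for b, o, t in zip(base, ours, theirs):
        if o == t:        # both sides agree (or neither changed)
            merged.append(o)
        elif o == b:      # only their side changed this line
            merged.append(t)
        elif t == b:      # only our side changed this line
            merged.append(o)
        else:             # both changed it differently: conflict
            merged.append(f"<<< {o} ||| {t} >>>")
    return merged

base   = ["a", "b", "c"]
ours   = ["a", "B", "c"]   # we edited line 2
theirs = ["a", "b", "C"]   # they edited line 3
print(merge3(base, ours, theirs))  # → ['a', 'B', 'C']
```

Non-overlapping edits merge cleanly with no human intervention, which is exactly what makes frequent branching practical.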

Comment I've already done it (Score 0) 245

I've already created perl scripts to do this. I've already got the blob files and a full git repo for netbsd. Yes, it takes days for these to run but what's the big deal?

I did this because I needed the scripts to convert some of my old personal software from CVS/RCS to git. To debug the scripts, I thought that a true test would be to convert something massive like netbsd. I'm not a snob as I also configured for freebsd and openbsd but didn't run the scripts on those.

I did this on an old 8-core 2.5GHz 64-bit machine with 12GB ram [and 120GB of swap space] and enough disk space. The full retail price on this was $3000 five years ago. The same specs can be had much cheaper today.

How many repos of what projects are going to be converted? 10? 100? 1000? Ultimately, there aren't enough projects to justify a machine for 100% usage for a five year period.

I tried to post the script here but various /. posting filters tripped over the 800+ lines. So, here's the top comment section along with the comments for the various functions:

# gtx/cvscvt.pm -- cvs to git conversion
#
#@+
# commands:
# rsync -- sync from remote CVS repo
# cvsinit -- initialize local CVS repository
# co -- checkout CVS source
# agent -- do conversion (e.g. create blob files)
# xz -- compress blob files
# import -- import blob files using git-fast-import
# clone -- clone the bare git repo into a "real" one
# git -- run git command
#
# symbols:
# cvscvt_topdir -- top directory
# cvscvt_module -- cvs module name
# cvscvt_agent -- conversion agent (e.g. cvs2git)
#
# cvshome -- top level for most cvs files
# cvscvt_srcdir -- cvs work directory
# cvsroot -- cvs root directory (CVSROOT)
# cvsroot_repo -- cvsroot/CVSROOT
# cvscvt_rsyncdir -- cvsroot/cvscvt_module
#
# cvscvt_blobdir -- directory to store agent output blobs
# cvscvt_tmpdir -- temp directory
# cvscvt_logdir -- directory for logfiles
#
# git_top -- git files top directory
# git_repo -- git repo *.git directory
# git_src -- git extraction directory
# git_dir -- git [final] repo directory
# git_work -- git [final] working directory
#
# cvscvt_xzlimit -- xz memory compression limit
#@-
# cvscvtinit -- initialize
# cvscvtcmd -- get list of commands
# _cvscvtcmd -- get list of commands
# cvscvtopt -- decode options
# cvscvtall -- do all import steps
# cvscvt_rsync -- sync with remote repo
# cvscvt_tar -- create tar archive
# cvscvt_cvsinit -- create real repository
# cvscvt_co -- do checkout
# cvscvt_agent -- invoke conversion agent [usually cvs2git]
# cvscvt_cvs2git -- run cvs2git command
# cvscvt_xz -- compress blob files
# _cvscvtxz -- show sizes
# cvscvt_import -- run git fast-import command
# cvscvt_clone -- clone the bare git repo into a "real" one
# cvscvt_git -- run git command
# cvscvt_cvsps -- run cvsps command
# cvscvtblobs -- get blob files
# cvscvtshow -- show cvs2git results
# cvscvtshow_evtmsg -- get fake timestamp
# cvscvtshow_etarpt -- show amount remaining
# cvscvtshow_msg -- output a message
# cvscvteval -- set variable
# cvscvtexec -- show vxsystem error
# cvslogfile -- get logfile
# cvslogpivot -- rename logfile
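
Under the hood, the pipeline those functions drive boils down to a handful of commands. A minimal Python sketch of the plan [the real script is perl and 800+ lines; the rsync URL and paths here are hypothetical, but the cat-into-fast-import step is standard cvs2git usage]:

```python
def cvscvt_plan(module, cvsroot="/cvs", blobdir="/tmp/blobs"):
    """Return the shell commands for one CVS-to-git conversion, in order."""
    bare = f"{module}.git"
    return [
        # sync from the remote CVS repo
        f"rsync -az rsync://anoncvs.example.org/{module}/ {cvsroot}/{module}/",
        # agent step: cvs2git emits blob and dump files
        f"cvs2git --blobfile={blobdir}/blob.dat "
        f"--dumpfile={blobdir}/dump.dat {cvsroot}/{module}",
        # compress blobs for archival (-k keeps originals for the import step)
        f"xz -k9 {blobdir}/blob.dat",
        # import into a bare repo, then clone it into a "real" one
        f"git init --bare {bare}",
        f"cat {blobdir}/blob.dat {blobdir}/dump.dat | "
        f"git --git-dir={bare} fast-import",
        f"git clone {bare} {module}",
    ]

for cmd in cvscvt_plan("netbsd"):
    print(cmd)
```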

Comment Strangers with candy (Score 1) 201

The problem with Android Lollipop [for developers] is [still] the "android fragmentation" problem, which Google is trying to address with its Android One program. Lollipop has 5,000 new APIs, but developers have to program to the lowest common denominator, which is probably pre-4.0.

This is in contrast to Apple. Most devices get upgraded to the latest iOS in short order [3-6 mos]. IIRC, an author writing an iOS developers' book stripped all pre-iOS8 from it, because he felt that iOS8 was just so much better. Whether he's right or wrong doesn't matter as much as the fact that he can do it because of the iOS upgrade cycle. This makes iOS development much easier than Android development.

The latest Linux runs quite well on older devices. So should Android. This is just like a PC game that, during install, speed tests the machine and backs off on things like resolution, anti-aliasing, etc. to make it run smoothly.
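
The benchmark-then-back-off idea can be sketched in a few lines [the preset names and fps thresholds are made up for illustration]:

```python
def pick_settings(benchmark_fps, presets):
    """Pick the richest preset the machine can sustain. Presets are
    ordered from most to least demanding, each tagged with the benchmark
    fps the machine must reach to run it smoothly."""
    for required_fps, preset in presets:
        if benchmark_fps >= required_fps:
            return preset
    return presets[-1][1]  # fall back to the least demanding preset

PRESETS = [
    (120, {"resolution": "4K",    "antialias": "8x"}),
    (60,  {"resolution": "1440p", "antialias": "4x"}),
    (30,  {"resolution": "1080p", "antialias": "off"}),
]
print(pick_settings(45, PRESETS))
# → {'resolution': '1080p', 'antialias': 'off'}
```

An Android release could do the same at install/boot time instead of abandoning older hardware outright.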

Android One needs even more teeth:
- Vendors _must_ upgrade old devices [even at a loss] unless they can prove [to Google] that it won't run due to memory, etc.
- Vendors shouldn't be able to force people to buy a new device just to get the latest Android by refusing to upgrade Android on "last year's device".

I have a Galaxy S3 and Samsung has upgraded it every six months. I really like the fact that they're not forcing me to upgrade the device just to get the latest/best Android OS. As a result, they've got my loyalty. When I do [eventually] upgrade my device [at a time of my choosing], Samsung's firmware upgrade policy will be a major factor in my staying with them.

If Google can't get vendors to cooperate [even better] on this, it should offer backports of Lollipop [API's] to older versions via Google Play. This helps consumers with older devices, Android developers, Google, and even the [recalcitrant] vendors [even though they might vehemently disagree].

Comment Re:Great! (Score 1) 549

I use the keystore approach. Each of my devices has a unique private/public key pair. Each device has the public keys of all the others. I disable password based login [except for physical/console login].

Shouldn't be too hard for websites to implement this. Shouldn't be too hard to allow multiple public keys (e.g. just add them to the per-user "authorized_keys" file). Default this off for users at start. But, allow it to be enabled on the account management page [with a place to paste in new public keys and menus to delete/modify the existing ones].
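
The account-page side of this is little more than list management. A sketch [function names are hypothetical; real entries would be full 'ssh-ed25519 AAAA... user@device' lines, and a site would persist them per user]:

```python
def add_key(keys, new_key):
    """Append a public key to a user's list (the moral equivalent of one
    line in ~/.ssh/authorized_keys), refusing duplicates."""
    if new_key in keys:
        raise ValueError("key already authorized")
    return keys + [new_key]

def remove_key(keys, old_key):
    """Delete a key, e.g. when a device is retired or lost."""
    return [k for k in keys if k != old_key]

# One keypair per device; losing one device revokes one key, not all.
keys = []
keys = add_key(keys, "ssh-ed25519 KEY1 laptop")
keys = add_key(keys, "ssh-ed25519 KEY2 phone")
keys = remove_key(keys, "ssh-ed25519 KEY1 laptop")
print(keys)  # → ['ssh-ed25519 KEY2 phone']
```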

Comment Re:Linked? (Score 1) 338

Thanks for the support. But, it seems my post is already going down in flames. Curious, since there have been many slashdot articles about Ubisoft's militant attitude about [their] DRM. On such an article, it would probably get modded up. Or, perhaps, if I used a smiley face. Since I rarely get modded down for posts I make [and some are considerably more controversial], it makes me wonder if there aren't some astroturfing accounts at work. Sigh.

Comment Re:If Oracle wins, Bell Labs owns the world. (Score 4, Interesting) 146

The AT&T copyrights were the genesis of POSIX. Nobody could create a workalike Un*x, so POSIX was originally a "clean room" reimplementation of the Un*x APIs [libc, programs, et al.]. POSIX now serves as a standard, but that wasn't its original purpose.

Because the POSIX methodology has been around for 30 years, it provides some precedent/defense for Google [estoppel].

If Oracle's argument prevails, it would kill all the Linux and *BSD [OSX] workalike OSes. Also, because ISO copyrights the C/C++ specs [to charge a fee for copies], nobody could program in C/C++ without a license from ISO.

The Oracle/Google decision by the appellate court is tantamount to conferring patent protections for a copyright. That is, because Louis L'Amour copyrighted his western novels, nobody else can pen a western.
