
Comment Re:Your milage may vary (Score 1) 163

A $1000 chair is most definitely worth having. My current one did 10 years at my previous employer and another 6 since I went remote (I'd started working from home 6 months before we were all made redundant, and was allowed to keep the chair ...).

It also made a big difference while recovering from lower back surgery - most chairs were impossible to sit in, but this one was usable with a pillow laid across the arms.

When this chair eventually dies I'll be investing in the current equivalent.

Comment Re:Let's go even further! (Score 1) 181

The secret to very good managers I've worked with is that they realize that the staff working under them does have more talent than the management does.

As someone who was managed by a very good manager earlier in my career, I'd modify the above slightly. Specifically, very good managers realise that the staff working under them have far more talent than they themselves do at the jobs those staff were hired to do.

OTOH, very good managers almost always have far more talent at the job they were hired to do (managing) than do the staff under them.

Comment No programmers' typeface (Score 4, Insightful) 175

They have a monospaced typeface, but it's not usable for programming - it doesn't even have a meaningful distinction between zero and capital O, let alone any other programmer-friendly features.

Since I presume they're going to want people at Google to use Noto as standard, it seems sensible to me that they create a programmers' version.

Comment Install to get the license, roll back (Score 1) 982

If you're not ready to commit to Windows 10 yet, I'd recommend getting your machine registered within the free upgrade period and then reverting to the previous version.

There are 2 ways to do this:

1. Upgrade your existing OS, then roll back. Obviously this has some risks in terms of drivers, etc.

2. Clean-install Win 10 on a different partition/drive, using the product key extracted from your existing Windows install (not sure whether this works for OEM systems). Then you can either dual-boot or simply go back to using the previous OS.

Comment Re:Interpreted languages should cease (Score 1) 222

The first release of pypy (well, technically pypy3) to support Python 3 came out over a year ago (February 2015). However, it still only implements Python 3.2.5 (3.3+ support is in the works).

Regarding the rest of your post, a common approach is to write the low-level access code in C and everything on top in Python. The C code can release the GIL, and only needs to reacquire it when calling back into the Python runtime.

Have a look at cffi.
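
To make the split concrete, here's a minimal sketch using cffi's ABI (dlopen) mode. The library name and function signature are hypothetical, but the pattern is the usual one, and cffi itself releases the GIL around calls into C, so other Python threads keep running while the C code works:

    # pip install cffi
    from cffi import FFI

    ffi = FFI()
    # Declare the C function we want to call (hypothetical signature).
    ffi.cdef("int process_buffer(const char *buf, size_t len);")
    # Load the (hypothetical) low-level shared library.
    lib = ffi.dlopen("./liblowlevel.so")

    def process(data: bytes) -> int:
        # cffi drops the GIL for the duration of the C call, so this can
        # run in parallel with other Python threads.
        return lib.process_buffer(data, len(data))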

Comment Re:Eliminate git, move back to cvs (Score 1) 87

In Mercurial I just create a new named branch (an optional step, but I prefer it), make in-progress commits, and change their phase to secret (so they can't be pushed to another repository until the phase is changed back to draft).

Once I've got the code where I want it, I modify and rebase the changesets until I have the series I want. It's basically the same process - secret commits provide essentially the same functionality as git's stash, but since they're just changesets in the graph, it's one fewer concept to learn. There are various extensions people like to use, but I find that plain rebase plus the Facebook-developed commit --amend --rebase extension gives me all the power I need.
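
For concreteness, that flow looks roughly like this (the branch name is made up; note that moving a changeset backwards from draft to secret needs --force, while moving secret back to draft doesn't):

    hg branch wip-widget                     # optional named branch
    hg commit -m "WIP: first cut"
    hg phase --force --secret .              # secret: can't be pushed anywhere
    ...                                      # more in-progress commits
    hg rebase -s <first-wip-rev> -d default  # reshape the series (bundled rebase extension)
    hg phase --draft "branch(wip-widget)"    # back to draft, ready to push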

If you enable changeset evolution you don't even need to worry about putting commits into the secret phase - you can push, pull, and modify history as you like, and let evolution propagate obsolescence markers to other repos. However, I'm not 100% ready to switch over to evolution yet - I still see too much churn around it on the mercurial-devel list for my taste.

Comment Re: Eliminate git, move back to cvs (Score 1) 87

I feel your pain. With a bit of work, it is possible to use Mercurial for your day-to-day work, and then ClearCase Remote Client (or whatever it's called now) to sync to ClearCase. I know - I've done it.

The biggest issue is ensuring that the timestamps are as CCRC expects - the .ccrc files (I think that's right - it's been about 5 years) hold the timestamps in a somewhat obfuscated format.

Comment Re:There is a cost with all that (Score 1) 51

Whilst this is currently true, the situation is improving rapidly. I've been periodically testing the OpenELEC Kodi Jarvis alpha builds on my Raspberry Pi 2.

The previous time I tested it (a month or so ago), 720p HEVC was only just playable - ~100% CPU on both cores, dropping the occasional frame. The time before that, it was unwatchable. But with build #1016 (which includes FFmpeg 2.8.1) I got smooth playback, averaging around 60% CPU on both cores.

HEVC will obviously never get its hardware requirements down to where h264's are now, but a lot of work is currently going into reducing them.

Of course, I'd much prefer that royalty-free codecs come to the fore.

Comment Failure to scale worse than crashing (Score 2) 285

We had a program doing session matching of RTP streams (via RTCP), and it had to be able to handle a potentially very high load.

Things had been going OK - development progressing, QA testing going well. And then one day our scaling tests took a nosedive. Whereas we had been handling tens of thousands of RTP sessions with decent CPU load, suddenly we were running at 100% CPU with an order of magnitude fewer sessions.

I spent over a week inspecting recent commits, profiling, etc. I could see roughly where the time was going, but couldn't pin down the precise cause. Then a comment from one of the other developers connected everything I'd been looking at.

Turns out that we had been using a single instance of an object to handle all sessions going through a particular server, but that resulted in incorrect matching - it was missing a vital identifier. So an additional field had been added to hold the conversation ID, and an instance was created for each conversation.

Now, that in itself wasn't an issue - but the objects were stored in a hash table. Objects for the same server but different conversations correctly compared non-equal ... but the conversation ID hadn't been included in the hashcode calculation. So all conversation objects for a particular server hashed the same while comparing unequal.

We had 3 servers and tens of thousands of conversations between endpoints. Instead of the objects being spread approximately evenly across the hash map, they were all crammed into a single bucket per server ... so instead of a nice amortised O(1) lookup, we effectively had an O(N) lookup for these objects - and they were being looked up a lot.
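
For illustration, here's the shape of the bug as a Python sketch (the original code wasn't Python, and all the names are invented): equality knows about the conversation ID, but the hash was never updated, so every conversation on a given server lands in the same bucket.

    class SessionKey:
        def __init__(self, server, conversation_id):
            self.server = server
            self.conversation_id = conversation_id

        def __eq__(self, other):
            # Equality correctly includes the conversation ID ...
            return (self.server == other.server and
                    self.conversation_id == other.conversation_id)

        def __hash__(self):
            # ... but the hash doesn't, so all of one server's keys
            # collide and lookups degrade from O(1) to O(N).
            return hash(self.server)

    # The fix was effectively one line:
    #     return hash((self.server, self.conversation_id))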

The effect was completely invisible under low load and in unit tests. The unit tests deliberately didn't assert that the hash codes differed, because two objects that hash differently today could legitimately end up hashing the same under a new version of the compiler/library/etc.
