
Comment Re: He sounds like an idiot (Score 1) 332

Experience counts for naught. Most developers with 10 years of experience have just repeated their first year ten times over. Many big tech companies confirm this with their talent-search troubles. Statistically, there is virtually zero difference in skill between someone with 6 months of experience and someone with 20 years. Most people quickly reach their limits.

In my many dealings with world-class technology consulting firms, I've found they are horrible at consulting, but they do make great human-interfaceable reference books. I usually spend about 5 minutes reading a wiki article about a given technology before jumping into a meeting with a specialist, poke holes in their logic until their ego is bruised enough to get them to be quiet, then start asking my questions and finally get somewhere. Their logic and understanding are almost always horribly flawed, but they do know a lot. Their opinions are pretty much useless.

They may know more facts than I do and have dealt with more issues than I have, but I will have vastly more understanding of the domain than they ever will. Cargo cult, that is all.

Comment Re:order (Score 1) 332

One of my co-workers prototypes all the time; he likes empirical evidence. I am the polar opposite. It's not that I hate empirical evidence, but it lies too much for me. I mostly prototype in my head, and I seem to have very accurate mental models.

Empirical evidence will find local minima, but not global ones.

Comment Re:Has the lord and savior told you (Score 1) 332

TDD stands for Test-Driven Development, not Test-Driven Design. Architecture and design happen before development. Don't start writing code until you know WTF you're doing. Build some prototypes, but throw them away once you understand the problem. Same thing with agile: it is a development methodology, not an architecture or design methodology.

Comment Re: I'm always proud of my code (Score 1) 280

I've had the opposite experience. Projects that had been going on for years, where the code was such a mess that bugs took months to fix, I would re-write from scratch in a month or two and never have another reported bug. Messy code is unmanageable and does not scale; it works only for the simplest of projects. And many times these projects turn into full-time jobs, because they are important enough to keep working on, but messy enough that nothing short of a re-write will stop people from complaining.

Comment Re:Multithreading is a solved problem (Score 1) 497

I love using atomics, but the biggest issue I have with them is that the assumptions I make are based on x86/x64 memory-ordering guarantees. Please don't run my code on ARM.

The reason I like using my own atomic thread-sync code is that I write my threaded code to not require exact ordering where possible, as long as the result is the same. Sometimes this means duplicate work gets performed and one of the results is effectively discarded, but the reduction in locking overhead is a huge win for scalability.

Comment Re:Four hard problems in programming: (Score 1) 497

I've only had an issue with a race condition once, and that was when I had only a few weeks of programming experience. Write your code in a way that guarantees race conditions can only occur in certain locations and the problem is easy. I have not had to use a debugger to fix a race condition in years now.

My most recent project was extremely async and parallel for highly scalable IO. I told my manager it would take me at least 1 month to write; I was given a week. I slapped that thing together, stuck it in prod, and hoped for the best. Now, 6 months later, someone had an odd, difficult-to-reproduce issue. I looked at the stack-trace, got a bit perplexed for a few minutes, then realized the problem. Five minutes later I had it fixed. This pretty much describes every race condition I have ever had. Only a few times have I had to use a debugger, and that's because the issue actually existed in someone else's code, to which I did not have access.

My co-workers describe me as having a super-human intuition for debugging code. I seem to have a knack for debugging non-reproducible errors in systems whose code I have never seen and whose architecture I don't know. Based solely on the characteristics of the issue, I can infer the architecture and the nature of the problem. I've never understood other people's inability to debug these issues. I just think of many possible mental models of the system, then pick the one that would produce the same symptoms being described. Nearly every time, the mental model I choose very closely matches the actual design of the system. People just need to get better at creating viable mental models.

Comment Re:Buffers (Score 1) 497

Because there is no such thing as a buffer really, it's an abstraction on memory

And there's no such thing as color, it's just an abstraction of the relation among different optical inputs
And there's no such thing as thought, it's just a complex interaction of chemical reactions
And there's no such thing as random, the Universe is deterministic
And there's no such thing as life, just atoms moving around

Everything we know of in this world is just a collection of characteristics that describe an abstract idea.

Comment Re:Buffers (Score 1) 497

Async isn't meant to help CPU-intensive workloads. Generally, most computers have too much CPU power and not enough IO or scaling. If you're getting "synchronous freezing" from "CPU intensive" work that needs to be done prior to your IO, you have an easy problem on your hands: get more cores. If modern CPUs are not enough for your workload, it's probably because you're horrible at coding.

Comment Re:Closures? (Score 1) 497

I'm not sure about Go, but .Net has some interesting deadlock situations with async when not all of your code is 100% async, which is annoying because most open-source .Net libraries are not async. I had to help a co-worker with some GCC Go pseudo-deadlock issues many years back. I found it rewarding to solve a deadlock issue in a language I had zero experience with. It turned out to be an implementation detail of how GCC handled "async" at the time, via threads: when the thread-pool ran out of threads to handle goroutines, the producer and consumer could end up on the same thread and block each other. Took me about 15 minutes. I say "pseudo" because if a routine blocked too long, the scheduler would change which routine was running, which "fixed" the issue after some hesitation. You'd get this strange jittering that got worse as more routines were running, quickly reaching tens of seconds in our tests.
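The producer/consumer shape of that bug, reduced to a minimal Go sketch. A modern Go runtime schedules around this, so the code below runs fine; but the structural dependency is visible: every unbuffered send can only complete if the consumer goroutine actually gets scheduled on some thread, which is exactly what the old thread-pool exhaustion broke:

```go
package main

import "fmt"

func main() {
	data := make(chan int) // unbuffered: each send blocks until a receive is ready
	done := make(chan struct{})

	// Consumer. If the runtime could not place this goroutine on a
	// thread (e.g. an exhausted thread-pool, as in the old GCC Go
	// behavior), every send below would block indefinitely.
	go func() {
		for v := range data {
			fmt.Println("got", v)
		}
		close(done)
	}()

	// Producer.
	for i := 0; i < 3; i++ {
		data <- i
	}
	close(data)
	<-done
}
```

The same program on a scheduler that only reschedules after a long block would exhibit the jittery, hesitating behavior described above rather than a hard deadlock.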

Once you have a mental model of how concurrency works, it doesn't matter which platform you're on. There's only a few good ways to implement it.

Async allows for high scaling when dealing with lots of "messages" moving through the system. Context switching is crazy expensive, about 10,000 cycles on a modern CPU, and that's not including all the other contention it creates in the kernel. To put it simply, if you want a single server handling 100Gb of traffic with millions of network states, you HAVE to use async. And yes, a single server can handle 100Gb of traffic across 100,000 short-lived connections per second.

Comment Re:Closures? (Score 1) 497

Right there with you. I've been writing multi-threaded code for nearly a decade now. It's not that difficult. My first real-world application was threaded. I taught myself threading in 3 days and wrote my own synchronization code. Looking back, I cringe at what I did, but it worked perfectly and I have not had to touch the code in 8 years. Not bad for re-writing, in only three weeks, an existing program that had been under constant development for almost a year, fresh out of college, with about two weeks of programming experience, and having never written multi-threaded code before.

I recently had to look over my code again to turn it into a library, which took me less than a day. Well-written threaded code is well factored, with single-responsibility taken to the extreme. I don't like more than one piece of code modifying shared state. This lends itself well to being converted into a library, so most of my work was already done for me, though some old coding habits of mine were annoying to work around.

Comment Re:This stuff drives me nuts (Score 1) 166

Dedicated clusters can run through 90% of all passwords 8-16 characters in a matter of hours/days.

A 16 char password has nearly 10^32 combinations. If you had 100,000 computers, each with 100 cores running at 10GHz, that's 10^17 guesses per second even assuming only 1 clock cycle per comparison, so it would take about 10^15 seconds to go through all of the combinations. That's tens of millions of years. Please, let me know about this magical datacenter of yours.

Your tool obviously makes many assumptions, like the password is composed of words or common patterns.

Comment Re:What part of this is hard to understand? (Score 1) 183

Paying $20/m for 150/150 dedicated fiber, $50 after 6 months. I have my own personal fiber from my home to the CO, and the ISP says I get my bandwidth 24/7. 6ms-12ms to Chicago, depending on which trunk to Level 3 they're using. Zero peering games, no CDNs, just pure bandwidth. The company is 100+ years old, I've been a customer for 10+ years, and my bill has never gone up a single penny; recently it went down 50%. They got rid of their 20/20 for $30, 70/70 for $70, and 100/100 for $90, and replaced them all with 150/150 for $50. People on the lowest tier were grandfathered in, but if they ever want to change speeds, I guess their bill is going up. That said, they only created the lowest tier a few years ago; the decade prior, they only had the $70 and $90 tiers. If only I could afford their $300 1Gb/1Gb.

You've got to love their marketing

It’s dedicated symmetrical fiber so speeds never change.
150 Mbps Dedicated Symmetrical $49.99/m
Recommended for
Web hosting
Online gaming
Heavy data transfer
Cloud computing
Webinar hosting
Extremely large online backups

Their network policy is nice too

The Company does not favor or inhibit lawful applications or classes of applications.
The Company does not knowingly and intentionally block, impair, degrade or delay the traffic on its network
The Company does not use or demand “pay-for-priority” or similar arrangements that directly or indirectly favor some traffic over other traffic.
The Company does not prioritize its own content, application, services, or devices, or those of its affiliates.
The Company does not charge edge service providers of content, applications, services and/or devices a fee simply for transporting traffic between the edge service provider and its customers.

Comment Re:What part of this is hard to understand? (Score 1) 183

Forgot to add why anti-bufferbloat AQMs help with link utilization. They reduce the "thundering herd" caused by dumb tail-drop FIFO buffers, which leads to global TCP synchronization. If you look at a normal FIFO-based link with lots of flows, they typically max out a little over 80% utilization while latency, jitter, and loss skyrocket. TCP streams constantly swinging up and down cause bursts of backoffs, making the link flip violently between over-utilized and under-utilized. AQMs tend to have an elastic queue that works more like head-drop, biased toward dropping packets from the heaviest flows at that instant.

This lets links reclaim much of that leftover 20% while stabilizing packet quality at the same time.

Comment Re:What part of this is hard to understand? (Score 1) 183

Using modern stateless, nearly configuration-free buffer-management algorithms, you can have low latency, low loss, low jitter, and a mostly even distribution of bandwidth. High latency is caused by bufferbloat. Fix the bloat and you fix all of the issues you mentioned, with out-of-the-box simplicity and near-zero configuration (just set your bandwidth).

Some interesting statistics I read from bufferbloat research: only a very small percentage of flows are "hogs" at any given instant. They tested link rates from 133Mb/s all the way up to 10Gb/s, and they all showed the same numbers. Even with hundreds of thousands of active flows, at any given instant only about 100 flows had at least one packet in the link's buffer, and only about 10 flows had 2 or more packets in the buffer. This means you only need to track about 100 "states" of data, regardless of the link rate or saturation.

Many fair-queuing algorithms take advantage of this by just creating hash buckets. With only about 100 states in the buffer at any given time, if you have something like 1024 buckets, the chance of any two network flows colliding is relatively low, but not zero. Another algorithm extended this with "ways", where each bucket can hold up to 8 flows and round-robins them. It turns out that adding these few ways resulted in zero collisions across millions of flows over a relatively slow, congested link.

All this means a link can effectively isolate an unbounded number of flows while using only a small, fixed amount of memory. Next advancement: most of the time, when a packet enters the buffer, it quickly leaves it. When a packet arrives, it gets shoved into a bucket+way. If that bucket+way was empty, the packet gets prioritized over all other non-new packets. Because the vast majority of the time there are zero packets in the buffer for a given flow, this lets the non-hog traffic get scheduled immediately.

If a data flow suddenly starts to become a hog, then when its next packet arrives there will already be a packet in its bucket+way, so that packet does not get prioritized and instead backlogs. But remember, this situation only applies to about 10 flows at any given instant. All of this means that nearly every packet dequeues immediately with near-zero latency. The hogs get their additional packets delayed as they backlog, and if the link is too saturated, eventually a packet is dropped. When that happens, the sender backs off and is no longer a hog.
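That new-flow-first scheduling can be shown as a toy sketch, in the spirit of fq_codel's sparse-flow optimization but heavily simplified (the names and packet strings are made up):

```go
package main

import "fmt"

// fq sketches new-flow prioritization: a flow whose queue was empty
// when its packet arrived goes on the "new" list and is served before
// the backlogged "old" flows (the hogs).
type fq struct {
	backlog  map[string][]string // per-flow queued packets
	newFlows []string            // flows whose queue was empty on arrival
	oldFlows []string            // flows that have built a backlog
}

func newFQ() *fq { return &fq{backlog: map[string][]string{}} }

func (q *fq) enqueue(flow, pkt string) {
	if len(q.backlog[flow]) == 0 {
		// Empty bucket: this flow jumps ahead of every backlogged flow.
		q.newFlows = append(q.newFlows, flow)
	}
	q.backlog[flow] = append(q.backlog[flow], pkt)
}

func (q *fq) dequeue() (string, bool) {
	// Serve new (sparse) flows first, then round-robin the hogs.
	for _, list := range []*[]string{&q.newFlows, &q.oldFlows} {
		for len(*list) > 0 {
			flow := (*list)[0]
			*list = (*list)[1:]
			if pkts := q.backlog[flow]; len(pkts) > 0 {
				q.backlog[flow] = pkts[1:]
				if len(pkts) > 1 {
					// Still backlogged: it is a hog now.
					q.oldFlows = append(q.oldFlows, flow)
				}
				return pkts[0], true
			}
		}
	}
	return "", false
}

func main() {
	q := newFQ()
	q.enqueue("hog", "h1")
	q.enqueue("hog", "h2")
	q.enqueue("hog", "h3")
	q.enqueue("sparse", "s1") // arrives last, but its queue was empty
	for {
		pkt, ok := q.dequeue()
		if !ok {
			break
		}
		// sparse's packet goes out after just one hog packet,
		// not after the hog's whole backlog
		fmt.Print(pkt, " ")
	}
	fmt.Println()
}
```

The sparse flow's single packet is dequeued ahead of the hog's backlog even though it arrived last, which is exactly why the 99%+ of non-hog packets see near-zero queueing delay.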

Turns out that doing it this way actually results in better link utilization. You can run the link at something like 99% utilization while still maintaining sub-millisecond latencies, and average utilization is higher. Win-win-win. Zero downsides other than increased CPU usage. Right now these algorithms run on standard x86 routers/firewalls at close to the 40Gb range, with hopes that future network-stack optimizations will allow more. Assuming these algorithms get well tested, they will start to be implemented in hardware. DOCSIS 3.1 actually mandates PIE by default, which is a non-fair-queue, FIFO, anti-bufferbloat AQM; PIE is Codel-like but friendly to Cisco ASICs.
