Comment Ho ho ho (Score 3, Informative) 314

FWIW, I'm a PhD student at a reasonably large institution in the US.

Very little of this stuff sees the light of day. The vast majority of software is written simply as a proof of concept for some particular method/system/algorithm in order to get published. Good conferences/journals will typically want not only a well-thought-out idea, but an idea that you have implemented to some extent and shown to work. That having been said, most of what gets produced is complete and total garbage -- typically just enough code to prove that something runs correctly and in a given amount of time.

Personally, I have written a bunch of junk code during my time here. I'd like to think I know more or less how to write good code after all these years, but writing good, well documented, well tested code takes time we don't have -- writing code is simply a means to an end (publication) -- and so most of the code I write is hasty and ugly. This even applies to code that people say is for "wide distribution".

Before you go hounding on academia, however, I'd warn that writing "good code" isn't really the point of what we're doing -- the point is to produce a reasonable method of solving some particular problem or type of problem. In bioinformatics, for example, there are a whole bunch of problems that involve performing more efficient analysis of certain types of graphs. If a researcher discovers something along these lines, he/she will likely write some junk code to prove that the bare algorithm works, perform some analysis of it, publish it, and move on. This may or may not end up being a useful improvement -- if it is, then some implementer whose actual job it is to code whatever medical software might use this algorithm has a basic blueprint of how to proceed.

As for some examples of software from academia that have made it out, let me think...

Coverity -- static code analysis tool; started at Stanford, spun off into a startup, and is now quite successful
PostgreSQL -- originally from Berkeley
Bro (intrusion detection system) -- written by a researcher from Berkeley/ICSI; still somewhat "in academia", but I have heard of several production deployments

That's all I feel like coming up with right now, but I think the general pattern here is that if/when some piece of software produced in academia is seen to have value in its own right (i.e., apart from the original research/publication that spawned it), it typically gets spun off into a start-up, or a more concerted effort goes into its development, at which point one can actually spend the time to write good code.

Comment Re:Stupid workaround for stupid server code (Score 1) 151

"It breaks DNS" seems like a pretty strong claim to me, and I'm not following how exactly it's going to do this. If you have a local DNS installation (I assume you're talking about DNS /resolvers/ here?) that local machines use, there is absolutely no need for you to implement this, as any CDN basing a server selection choice on your local DNS installation will be well-guided. Your resolver won't send the applicable EDNS option, and the authoritative DNS server won't care that it's not there -- it'll just base its choice on the resolver's IP, as has happened for years.

If you're running an authoritative DNS server, then you're not going to get the EDNS field from google/opendns because they're not going to send it to you, and if you did get it, it would only be a problem if you had a backwards DNS server that pukes on EDNS.
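For concreteness, the EDNS option in question is just a small TLV (option code 8) inside the DNS message's OPT record, per RFC 7871. A minimal sketch of building one with only the standard library -- the address and prefix length here are made-up documentation values:

```python
import struct
import ipaddress

def ecs_option(client_ip: str, prefix_len: int) -> bytes:
    """Build an EDNS Client Subnet option (RFC 7871).

    Wire format: OPTION-CODE(2) | OPTION-LENGTH(2) | FAMILY(2) |
    SOURCE PREFIX-LEN(1) | SCOPE PREFIX-LEN(1) | truncated ADDRESS.
    Only the bytes covering prefix_len bits of the address are sent,
    so the resolver reveals a subnet, not the full client IP.
    """
    addr = ipaddress.ip_address(client_ip)
    family = 1 if addr.version == 4 else 2   # IANA address family numbers
    n_bytes = (prefix_len + 7) // 8          # truncate address to the prefix
    payload = struct.pack("!HBB", family, prefix_len, 0) + addr.packed[:n_bytes]
    return struct.pack("!HH", 8, len(payload)) + payload
```

A resolver that never emits this option is indistinguishable from one that predates it, which is why an authoritative server that ignores unknown EDNS options keeps working unchanged.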

How is it breaking things for you?

Comment Re:Stupid workaround for stupid server code (Score 1) 151

If you're running BIND for your local net, then you don't need this, as your DNS resolver is already located close to you. The problem arises when DNS resolvers are utilized that are not "close" to the clients they serve, and therefore CDNs will often end up picking a replica close to your resolver rather than close to you.

Obviously this problem grows with the distance between you and your resolver -- if you're using a huge resolving service like Google DNS or OpenDNS, then you are much more likely to be far from your resolver. If you're using your ISP's resolver, then it could be just a few hops up the network path, or it could be across the country (as some ISPs will just use a "bank" or two of resolvers).

This stuff is done in DNS for a variety of reasons. If you use intelligence at the HTTP layer, you:

1. Obviously have a non-optimized initial server choice, as once you're communicating over HTTP you're already talking to a specific replica. This will likely apply for each and every new CDN-ized domain you use.

2. Require the client to add significant intelligence to their website in order to make all the internal links point to a "good" server. Obviously, it's going to be harder to sell your services if the client has to rewrite a bunch of code and can't just repoint their main domain at your IPs.

3. Most importantly, IMO, remove the server selection choice from the sole control of the CDN provider. If this logic lives in the main HTTP page of the website, the CDN must expose its server selection strategy to the client, which is likely proprietary business knowledge. Furthermore, the server selection map is dynamic, changing rapidly with internet link congestion and server load, and this data would have to be pushed to the client website as well. Also, if you're thinking you could just point the initial IP at a CDN-hosted HTTP server and issue HTTP redirects from there, you've just eaten up two whole RTTs -- not a good way to speed up webpages.
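Point 3 is essentially why the mapping stays on the authoritative DNS side. A toy sketch of what that server-side selection might look like -- the networks and replica addresses are made-up RFC 5737 documentation ranges, and a real CDN's map would be far larger and constantly updated:

```python
import ipaddress

# Hypothetical map from client subnet to the "nearest" replica's IP.
REPLICAS = {
    ipaddress.ip_network("192.0.2.0/24"):    "203.0.113.10",
    ipaddress.ip_network("198.51.100.0/24"): "203.0.113.20",
}
DEFAULT_REPLICA = "203.0.113.1"  # fallback when the client matches no entry

def pick_replica(client_ip: str) -> str:
    """Answer the A-record query with the replica nearest the client subnet."""
    addr = ipaddress.ip_address(client_ip)
    for net, replica in REPLICAS.items():
        if addr in net:
            return replica
    return DEFAULT_REPLICA
```

Because the lookup runs entirely on the CDN's authoritative server, the map can change by the minute without the customer's website ever seeing it.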

Also, to those who say this aids censorship, I'd have to call BS. A country wishing to censor its own users can easily implement a "use our DNS resolvers only" policy with a simple firewall rule, and watch all the traffic or rewrite DNS responses anyway.

Comment Okay... (Score 1) 179

But what exactly does this get me over SSL Client Certificates?

Frankly, I don't entirely understand why the world hasn't started using SSL Client Certificates, and I wonder what will make people use this scheme, when client certificates have lain unused for so long.
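For comparison, the server side of SSL client certificates is only a few lines with Python's standard-library ssl module. A sketch -- the CA file path is hypothetical and should point at whatever CA signed the client certs you're willing to accept:

```python
import ssl

def require_client_cert(ctx: ssl.SSLContext, ca_file=None) -> ssl.SSLContext:
    """Configure a server-side TLS context to demand a client certificate.

    ca_file (hypothetical path, e.g. "clients-ca.pem") holds the CA that
    signed acceptable client certificates; without loading it, no client
    cert can validate at handshake time.
    """
    if ca_file is not None:
        ctx.load_verify_locations(ca_file)
    ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a client cert
    return ctx

# Typical setup (paths are placeholders):
#   ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
#   ctx.load_cert_chain("server.pem", "server.key")
#   require_client_cert(ctx, "clients-ca.pem")
```

The mechanism has been there for ages; the unused part is the user-facing side -- generating, distributing, and managing the certs in browsers.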

Comment Re:This is a big deal for me. :-( (Score 2) 459

I remember once upon a time when I was first setting up my mail server I experienced this exact problem. As I recall, there was some kind of hotmail-ish website I went to that helped me get its IP allowed by their system.

Here are some great resources on sending email to hotmail:

http://mail.live.com/mail/troubleshooting.aspx (generic troubleshooting page for sending to hotmail)

https://postmaster.live.com/snds/ (Signing up here lets you see what hotmail thinks of a specific IP, assuming you control RDNS for it. This might have been what I did once upon a time)

Finally, if none of those help, you can ask them directly here:

https://support.msn.com/eform.aspx?productKey=edfsmsbl&ct=eformts&st=1&wfxredirect=1

Regards,

Anom

Comment The Moon is a Harsh Mistress, anyone? (Score 1) 332

Any kind of launch system on the moon will require less energy to use, due to the diminished effect of gravity and the lack of atmosphere. While getting any such system to the moon obviously has its difficulties, lobbing rocks/missiles/whatever from the moon is going to be way easier than doing the same from the Earth. Furthermore, the moon simply has more room to house a base, "ammunition" for any type of weapons system, etc., than any station one could build in space.
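The gravity argument is easy to put numbers on. A back-of-the-envelope comparison of surface escape velocities (ignoring atmospheric drag, which only makes Earth look worse, and planetary rotation):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

BODIES = {
    # name: (mass in kg, mean radius in m)
    "earth": (5.972e24, 6.371e6),
    "moon":  (7.342e22, 1.737e6),
}

def escape_velocity(body: str) -> float:
    """Surface escape velocity v = sqrt(2GM/r), in m/s."""
    mass, radius = BODIES[body]
    return math.sqrt(2 * G * mass / radius)

def escape_energy_per_kg(body: str) -> float:
    """Kinetic energy per kg of payload, v^2 / 2, in J/kg."""
    return escape_velocity(body) ** 2 / 2
```

This gives roughly 11.2 km/s for Earth versus about 2.4 km/s for the moon -- around 22 times less kinetic energy per kilogram of payload, before even counting what Earth's atmosphere costs you.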
