Submission + - MIT's "Hot or Not" Site for Neighborhoods Could Help Shape the Future City (vice.com)

Daniel_Stuckey writes: Researchers from the MIT Media Lab may have found a way to measure the "aesthetic capital" of cities with their website Place Pulse, a tool that crowdsources people's perceptions of cities by having them judge digital snapshots: a sort of "hot or not" for urban neighborhoods.

Some 4,000 geotagged Google Street View images and 8,000 participants later, the team found that digital images plus crowdsourced feedback can accurately quantify the diverse vibes within a city, which in turn can help us better understand issues like inequality and safety.

Submission + - Supercomputer Becomes Massive Router for Global Radio Telescope (slashdot.org)

Nerval's Lobster writes: Astrophysicists at MIT and the Pawsey supercomputing center in Western Australia have discovered a whole new role for supercomputers working on big-data science projects: they've figured out how to turn a supercomputer into a router. (Make that a really, really big router.) The supercomputer in this case is a Cray Cascade system with a top performance of 0.3 petaflops, to be expanded to 1.2 petaflops in 2014, running on a combination of Intel Ivy Bridge, Haswell and MIC processors. The machine, which is still being installed at the Pawsey Centre in Kensington, Western Australia, and isn't scheduled to become operational until later this summer, had to go to work early after researchers switched on the world's most sensitive radio telescope on June 9.

The Murchison Widefield Array is a 2,000-antenna radio telescope located at the Murchison Radio-astronomy Observatory (MRO) in Western Australia, built with the backing of universities in the U.S., Australia, India and New Zealand. Though it is the most powerful radio telescope in the world right now, it is only a precursor to the Square Kilometre Array, a collection of low-frequency antennas with a combined collecting area of roughly one square kilometre, to be spread across sites in Australia and Southern Africa. The SKA will be 50 times as sensitive as any other radio telescope and 10,000 times as quick to survey a patch of sky.

By comparison, the Murchison Widefield Array is a tiny little thing, stuck as far out in the middle of nowhere as Australian authorities could find, to keep it as far away from terrestrial interference as possible. Tiny or not, the MWA can look farther into the past of the universe than any other human instrument to date. What it has found so far is data: lots and lots of data. More than 400 megabytes per second stream from the array to the Murchison observatory, then across 500 miles of Australia's National Broadband Network to the Pawsey Centre, which gets rid of most of it as quickly as possible.

Comment Re:Poor review (Score 2) 81

There is pretty much nothing you can do by hand that Puppet can't also do, and it often takes just as much time to update a single Puppet config file and run the Puppet update process as it would to SSH into the server and make the change manually.

Another advantage: what might otherwise go into traditional documentation is now just a Puppet configuration. Oh, fuck, this server crashed? Just roll another in five minutes. Who cares about the old one?

And this is the flaw in your argument. There seems to be an assumption that if it's not under the control of Puppet or Chef, then it's manual. This is completely untrue. Any competent admin automates their administration; I've been doing it for more than a decade.

Second, it's not the host OS and host configuration that make the servers distinct; it's the data. You can't automate ten years' worth of data entry and workflow modules. I suppose it would be unfair of me to hold against you the fact that you don't know anything about my operations, but we're not an internet-based company. We're doing stuff other than serving up a bunch of vanity GoPro videos. We have several large data centers, but we also have hundreds of offices around the world, and those offices have their own IT infrastructure. Anyone can stand up a server in 10 minutes in their own data center. How long will it take you to stand one up in Chengdu, given that your primary data centers are in the US and Europe and your network link to the remote facility is 512 Kbps?

The absurdity of the proponents of CFEngine, Puppet, Chef, et al. is that they assume no one has ever solved these problems before. What problems that I have are these products going to solve for me? The emphasis is on "problems that I have." It's not sufficient to tell me what a product does; what matters is whether it solves my problems.

You are right: there is nothing you can do with Puppet that you can't do with SSH, and 10 years ago things like Puppet didn't even exist, so it makes total sense for you to be in the situation you're in, and it wouldn't make a lot of sense for you to switch just for the sake of using Puppet.

But it's 2013. If you're starting off new, why would you roll your own when many solutions already exist that have been thoroughly tested and extended to a rich feature set you probably wouldn't have time to develop as a day-to-day devop? Furthermore, Puppet supports modules: people write generic versions of "apache" or whatever, and often you don't even have to write the configuration yourself; you can just clone a module off GitHub and customize it to your liking.

Your question about standing up servers is silly. Why would a more manual solution be affected any differently by inbound bandwidth than a Puppet solution? Puppet allows for a client/server architecture, and if bandwidth were your big concern, you could set up a local Puppet master in Chengdu and build servers in that data center from it.
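To make that concrete, here's a minimal sketch of what pointing a remote office's agents at a local master looks like in puppet.conf; the hostname is hypothetical, not anyone's actual setup:

```ini
; /etc/puppet/puppet.conf on an agent in the remote office.
; Catalogs are pulled from a master on the local LAN, so the
; slow WAN link only carries occasional code syncs to that master.
[main]
server = puppetmaster.chengdu.example.com

[agent]
; check in every 30 minutes
runinterval = 1800
```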

I worked in a largish data center in 2002 and wrote a lot of SSH scripts to manage those servers. It works, but it's tedious. The benefits of using something like Puppet are enormous. The company I work for now used to employ a full-time sysadmin; now the devs just update Puppet as needed, and it hardly impacts our workload given how easy it is to maintain.

Comment Re:Poor review (Score 1) 81

That was a pretty poor review. Giving a summary of the table of contents isn't a review. Additionally, it doesn't seem like the reviewer recognizes that devops and duplicative administration don't fit a lot of data-processing models. Many organizations have servers with a distinct purpose, and it doesn't make sense to envision them as just another clone system in "the cloud."

This is why Puppet has a very strong inheritance system. We have it broken down as a generic server class (two-factor/LDAP configs, Nagios configs, etc.), then apache_servers, which builds out the basic web infrastructure, and then more specialized configs for one-off servers (admin server versus production web servers).
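A minimal sketch of that kind of layering in Puppet's DSL; the class, package, and service names here are hypothetical stand-ins, not the poster's actual manifests:

```puppet
# Baseline applied to every node: monitoring agent and LDAP client.
class base_server {
  package { ['nagios-nrpe-server', 'openldap-clients']:
    ensure => installed,
  }
}

# Web tier builds on the baseline.
class apache_server {
  include base_server

  package { 'httpd':
    ensure => installed,
  }
  service { 'httpd':
    ensure => running,
    enable => true,
  }
}

# One-off specialization layered on the web tier.
class admin_server {
  include apache_server
  # admin-only firewall rules, extra auth config, etc. would go here
}
```

Each node then just declares its most specific role, and everything beneath it comes along for free.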




Comment Re:I'm going to assume that was hipster irony. (Score 3, Informative) 91

1. jQuery core is hardly bloated; it's 32 KB minified and gzipped. If you're willing to drop support for older IE, you can use jQuery 2.0, which is even more streamlined than it's ever been.

2. Most of the fancy animation effects are not part of jQuery core (which only ships basics like .animate() and fadeIn()); they are part of jQuery UI, which is a totally separate library and which, I agree, sucks.

3. The native JS syntax for Ajax is convoluted:


$.ajax({
    url: '/some/url',
    success: function (o) {
        // handle the response here
    }
});


is much more maintainable to me than writing several lines of new XMLHttpRequest() boilerplate every time you need to make an Ajax call.
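For comparison, here's a rough sketch of the same request written against the raw browser API (same placeholder URL as above, and assuming the server returns JSON):

```javascript
// Raw XMLHttpRequest version of the $.ajax() call above.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/some/url');
xhr.onreadystatechange = function () {
    // readyState 4 means the request is done; check for a 2xx
    // status before trusting the body.
    if (xhr.readyState === 4 && xhr.status >= 200 && xhr.status < 300) {
        var o = JSON.parse(xhr.responseText);
        // handle the response here
    }
};
xhr.send();
```

All of that bookkeeping (ready states, status codes, parsing) is what $.ajax() folds into one options object and a callback.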

4. jQuery has a lot of powerful stuff that lets you write less code much of the time, such as .on(). Much of the bad JavaScript I come across is from people who try to write it all themselves and have onclick handlers hard-coded into tags in a giant unmaintainable mess. Unobtrusive JavaScript is for the win, and you'll save yourself a lot of headache using jQuery to write it.
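A minimal sketch of the unobtrusive style with .on(); the selectors (.item-list, .delete) are made-up names for illustration:

```javascript
// Instead of <a onclick="deleteRow(42)"> scattered through the markup,
// one delegated handler on the container covers every .delete link,
// including rows added later via Ajax.
$(function () {
    $('.item-list').on('click', '.delete', function (e) {
        e.preventDefault();
        $(this).closest('li').remove();
    });
});
```

Because the handler lives on the container rather than on each element, the markup stays clean and dynamically inserted rows need no extra wiring.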

Comment Re:That explains things (Score 3, Interesting) 91

No kidding. JQuery Mobile is ridiculously slow.

You'd be crazy to use an inefficient and overweight library like jQuery anyway. Adding jQuery Mobile to that is just asking for trouble.

Let's face it: jQuery has long outlived its utility. It's not even viable for dealing with old browser compatibility issues on the desktop.

Just learn JavaScript. Your users will thank you. I'll bet you'll even ultimately save time and effort, as you'll spend less time trying to squeeze acceptable performance out of Resig's kludge, and less time trying to debug the nasty one-liners you're forced to write to get those tiny improvements.

This is dumb for a variety of reasons. jQuery lets you abstract away a ton of browser inconsistencies. It also makes you a lot more productive, because you don't have to constantly reinvent wheels; JS by itself is extremely tedious syntax-wise. (Do you really like document.getElementById riddling your code to the point of unreadability? I'm sure you'd say, "Oh, I'll just write a helper." Well, congrats, there you go reinventing wheels.) Some of the worst code I run across on the web always seems to come from the guy who insists on doing everything with pure JavaScript: tons of onclick handlers directly on tags, and all sorts of other crap that makes sites completely unmaintainable.
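A small sketch of the difference in day-to-day code; the class name is a hypothetical example:

```javascript
// Plain DOM: highlight every element with class "warning".
var els = document.querySelectorAll('.warning');
for (var i = 0; i < els.length; i++) {
    els[i].style.backgroundColor = 'yellow';
}

// The same thing with jQuery: implicit iteration over the whole set.
$('.warning').css('background-color', 'yellow');
```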

Comment Re:You can't win. (Score 3, Informative) 303

I've dealt with similar situations in my professional career. Rackspace's DDoS protection isn't worth it: past 3 Mbps they null-routed our box, because the attack was so large it was saturating their uplink capacity.

Prolexic has a cool approach: you proxy your site through them (either as a web proxy, or they can announce BGP routes for you), and they have massive data centers that do nothing but scrub packets for you.

The downside is that their service is very, very expensive ($60K+ a year).

Comment Re:1st vote? (Score 2) 503

Political discussion is an activity for fools. It's like arguing over which brand of anal lube is best, or whose servant of Satan is better.

Not all personal lubricants are the same. Silicone-based lubes are more effective in the short run but have the nasty side effect of hanging around for days; water-based lubes are easier to wash off but need to be reapplied more frequently.

Even within those categories, the lubes differ widely between manufacturers; some are too viscous, others too thin.

So I must wholeheartedly disagree that putting thought into your choice of anal lubricant is a fool's task.
