Government

Submission + - Wisconsin considers forcing UofW off of Internet2 (chronicle.com)

Laxitive writes: In a set of surprise changes to a budget bill, the government of Wisconsin has included resolutions calling for UofW to return $39-million in federal grant money awarded to build out high-speed internet access in the state. From the article:

"The plan would also require all University of Wisconsin institutions to withdraw from WiscNet, a nonprofit network cooperative that services the public universities, most of the technical and private colleges in Wisconsin, about 75 percent of the state’s elementary and high schools, and 95 percent of its public libraries, according to David F. Giroux, a spokesman for the university system."

Submission + - Company eyes "reboot" of Blade Runner "franchise" (hollywoodreporter.com)

Laxitive writes: Alcon, the company behind the movie "The Blind Side," is eyeing a "reboot" of Blade Runner, according to this article. To quote the co-heads:
"This is a major acquisition for our company, and a personal favorite film for both of us... We recognize the responsibility we have to do justice to the memory of the original with any prequel or sequel we produce. We have long-term goals for the franchise, and are exploring multiplatform concepts, not just limiting ourselves to one medium.”

Comment Re:Canada? (Score 1) 208

An AC replying to my post pointed this out, correcting some factual errors in what I wrote. It was an Australian company (BHP) doing the bidding, and that bid was scuttled. The executives, however, were not happy with the BHP bid (which was hostile), and were trying to arrange a more lucrative deal with a Chinese company.

The federal action scuttled both potential deals. Anyway, the point is that China buys a LOT of potash from Canada, and has strategic interests in that resource.

-Laxitive

Comment Re:Canada? (Score 1) 208

You are right. I didn't have my facts straight... thanks for the correction. So yeah, BHP was bidding, as was a Chinese company. The BHP bid was scuttled, and it seems the Chinese offer went down with it. As for why the execs liked the deal, it's because they would have been greatly enriched by the sale.

Quoth the CBC, in an editorial (http://www.cbc.ca/canada/story/2010/10/01/f-vp-newman.html):

The executives at Potash Corp., who will benefit from a huge payout if the company is sold, are reportedly trying to organize a rival bid involving a Chinese government-owned company to drive up the sale price.

But ultimate ownership by a company from China, which is one of the biggest buyers of Saskatchewan potash, would have even greater implications for the value of the product than a sale to BHP Billiton.

So, regardless of who makes the stronger bid, the answer from both Ottawa and Saskatchewan should be the same: "Sorry. No sale."

Comment Re:Canada? (Score 3, Informative) 208

God no. We keep that shit in a bunker underneath the Canadian Shield, disconnected from the internet. You don't leave national secrets like that just lying around.

On a serious note, China's main interest is in Canada's natural resources. As they grow and industrialize, they need to import massive amounts of raw resources to fuel their economy and people.

For example, Saskatchewan has the largest natural deposits of potash in the world. The whole province is basically potash... dig anywhere and you'll hit it. Potash is what fertilizer is made from. Not too long ago, a Chinese firm wanted to acquire Potash Corp., Saskatchewan's potash producer. There was a big ruckus raised about it internally, and eventually the sale was stopped by the federal government, after the extremely popular provincial premier went on the warpath about Saskatchewan's natural resources being sold to foreign interests.

I don't disagree with that move (it'd be idiotic to sell off the rights to your own land's bounty)... but China really doesn't like not being able to get what it wants. While it's not proven that the Chinese government was behind these attacks, my suspicion is that it was (Occam's razor). There's a well-known effort by China to influence the Canadian government and people, and it was brought up in the national media not too long ago.

-Laxitive

Comment What's the control group? (Score 1) 810

I'm assuming you're earnest about this... so on that premise, here's what'll happen:

You'll put together a set of measurements from this place. Then you'll try to interpret them with no reference point. You have no baseline measurements. Have you tested 20, or even a handful, of regular, non-haunted houses to establish a control you can compare against? Chances are you'll pick up SOME noise in SOME measurements that may or may not be construed as paranormal. Who knows.

What are your predictions? Is there a set of particular things you'll be looking for? Can it be summed up as more-or-less "anything that seems weird in the measurements"?

I'm not trying to dissuade you from doing it. Just don't call it scientific and then do bad science. It could be a very cool movie project, and a lot of fun, so it may well be worth your time. If it seems cool, go for it... but please don't slap a "scientific" label onto it frivolously.

Submission + - Producer/Consumer VMs Using Instant Live-Cloning (gridcentriclabs.com)

Laxitive writes: GridCentric just posted an article and demo on using live-cloning of VMs across a network to implement a producer/consumer system, where the pool of producer and/or consumer VMs can be scaled from a single running VM to dozens in a few seconds, with a single click.

The article demonstrates scaling from 1 VM to more than a dozen (across multiple physical hosts on a network) in just a few seconds, putting the scaling performance of existing cloud architectures to shame.

Submission + - DDOS-in-a-box: VM swarm in a dozen lines of shell (gridcentriclabs.com)

Laxitive writes: We (GridCentric) just posted a couple of interesting videos demoing a load-testing use case on top of our freely available Xen-based virtualization platform, Copper. In both videos, we use live-cloning of VMs to instantly create a swarm of worker VMs that act as clients to a webapp. The ability to clone is exposed as an API call to the VM that wants to clone itself, meaning that in a dozen lines of shell we can script the automatic creation and control of dozens of VMs across multiple physical computers.

Creating a clone VM in Copper is similar in function and complexity to forking a process in Unix, and carries all the same assurances: your new VMs are near-exact copies of the original VM, start running within seconds of the clone command being invoked, and are "live" — meaning that all programs running on the original VM remain running on the clone VM.
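To make the fork() analogy concrete, here is the process-level version of the same pattern in Python (this is plain Unix fork(), shown only to illustrate the semantics; it is not Copper's API):

```python
import os
import time

# Plain Unix fork(): the child starts as a live copy of the parent and
# resumes from the same point, just as a cloned VM keeps every program
# that was running on the original VM running on the clone.
pid = os.fork()

if pid == 0:
    # Child: a near-exact copy of the parent at the moment of the fork.
    print("clone: running as pid %d" % os.getpid())
    time.sleep(1)
    os._exit(0)
else:
    # Parent: continues as the "original", aware of its clone's pid.
    print("original: spawned clone %d" % pid)
    os.waitpid(pid, 0)
```

Substitute "VM" for "process" and you have the cloning model: the clone call returns in both the original and the copy, and each side checks which one it is.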

The more we play with it, the more it feels like live-cloning is one of those core capabilities that is at once powerful and easy to leverage in designing distributed applications and services. And today, when the cloud is at the top of everyone's mind, seems like the right time to have a real discussion about what the APIs, architecture, and features of this new class of distributed operating systems should be.

We hope this demo spurs some of that discussion...

Submission + - Mathematics: The Most Misunderstood Subject (fordham.edu) 1

Lilith's Heart-shape writes: Dr. Robert H. Lewis, professor of mathematics at Fordham University in New York, offers in this essay a defense of mathematics as a liberal arts discipline, not merely part of a STEM (science, technology, engineering, mathematics) curriculum. In the process, he discusses what's wrong with the manner in which mathematics is currently taught in K-12 schooling.

Comment Re:LiveSQL (Score 1) 78

I should have thought the stddev example out a bit better, and realized that it does indeed have a reasonable closed form. Good catch.
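For completeness, the closed form just means carrying the count, the running sum, and the running sum of squares; a point mutation then updates the stddev in O(1) without rescanning. A minimal Python sketch (the names are mine):

```python
import math

class RunningStddev:
    """Maintain stddev incrementally via count, sum, and sum of squares."""

    def __init__(self):
        self.n = 0    # number of values
        self.s = 0.0  # running sum of values
        self.q = 0.0  # running sum of squared values

    def insert(self, x):
        self.n += 1
        self.s += x
        self.q += x * x

    def mutate(self, old, new):
        # A point mutation is just a remove-plus-insert on the sums.
        self.s += new - old
        self.q += new * new - old * old

    def stddev(self):
        # Population stddev: sqrt(E[x^2] - E[x]^2); clamp for float error.
        mean = self.s / self.n
        return math.sqrt(max(self.q / self.n - mean * mean, 0.0))
```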

Complex data mining is hard everywhere, that's true. The problem is that even straightforward data mining gets hard once dataset sizes reach into the hundreds of millions, billions, or trillions of records (implying absolute dataset sizes of terabytes or more). For Google it's webpages; for biology labs it's sequences.

The big killer is the cost of transferring data, which is how traditional data systems are built: a remote host has some software set up, you send it some data, and it processes the data and returns the result to you. The distinction with Hadoop is that you keep the data on distributed hosts and send the code (which is typically a lot smaller) to the data.

The point stands that incremental update of queries on mutation is not a generally solvable problem: it would still require adding new constructs to SQL and limiting existing ones (e.g. ordering). Hadoop approaches the issue from the other end of the spectrum, focusing on a framework that models distributable algorithms directly using a small set of primitive operators (specifically, "map" and "reduce"); the word-count sketch below shows the shape of it.
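Here's the canonical word-count example in the Hadoop streaming style, where the mapper and reducer are plain scripts reading stdin and writing stdout (a sketch only; the job wiring is omitted):

```python
#!/usr/bin/env python
# mapper.py: emit ("word", 1) for every word seen on stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print("%s\t%d" % (word, 1))
```

```python
#!/usr/bin/env python
# reducer.py: Hadoop delivers mapper output sorted by key, so the
# per-word counts can be summed in a single streaming pass.
import sys

current, count = None, 0
for line in sys.stdin:
    word, n = line.rsplit("\t", 1)
    if word != current:
        if current is not None:
            print("%s\t%d" % (current, count))
        current, count = word, 0
    count += int(n)
if current is not None:
    print("%s\t%d" % (current, count))
```

Neither script knows anything about distribution; the framework handles partitioning, shuffling, and parallelism, which is exactly the inversion of focus described above.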

-Laxitive

Comment Re:Am I the only one who finds Hadoop unusable? (Score 1) 78

In situations where you are using Hadoop, your "primary" data store should BE the HDFS store you are using to analyze it. That's a big part of the actual efficiency proposition of Hadoop.

The big trick with the "big data" approaches is to recognize that you keep _everything_ distributed, _all the time_. Your input dataset is not "copied into the system" for some particular analysis task, it _exists in the system_ from the time you acquire it, and the analysis results from it are kept distributed. It's only at specific points in time (exporting data to send to someone external, importing data into your infrastructure) that you should be messing around with copying stuff in and out of HDFS.
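Concretely, the lifecycle looks like this (the hadoop fs commands are real, but the wrapper and paths are made up for illustration):

```python
import subprocess

def hdfs(*args):
    # Thin wrapper around the stock `hadoop fs` CLI.
    subprocess.check_call(["hadoop", "fs"] + list(args))

# Once, at acquisition time: the dataset now *lives* in HDFS.
hdfs("-put", "/incoming/reads.fasta", "/data/reads.fasta")

# Analysis jobs read from and write to HDFS paths directly; results
# stay distributed too (job submission elided here).

# Only at the boundary, e.g. shipping results to an external party:
hdfs("-get", "/data/results/summary.tsv", "/tmp/summary.tsv")
```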

-Laxitive

Comment Re:LiveSQL (Score 4, Informative) 78

There are some serious technical challenges to overcome when you think about actually implementing something like this.

Take something like "select stddev(column) from table" - there's no way to get an incremental update on that expression given the original data state and a point mutation to one of the entries for the column. Any change cascades globally, and is hard to recompute on the fly without scanning all the values again.

This issue is also present in queries using ordered results (as changes to a single value participating in the ordering would affect the global ordering of results for that query).

The issue that "Big Data" presents is really the need to run -global- data analysis on extremely large datasets, utilizing data parallelism to extract performance from a cluster of machines.

What you're suggesting (basically a functional reactive framework for querying volatile persistent data) would still involve a number of limitations relative to the SQL model: basically disallowing any truly global algorithm across large datasets. Tools like Hadoop get around these limitations by taking the focus away from the data model (which is what SQL excels at) and putting it on an expressive framework for describing distributable computations (which SQL is not so great at).

-Laxitive

Comment Re:Over commit is great (Score 2, Informative) 4

Well, not really. It's the same as operating systems 'overcommitting' memory by giving each process a full virtual address space and filling it in as it goes. Operating systems solve this problem by... well... using paging.

The paging approach works well for systems where you expect the in-memory working set to be tight. Mainly you'll see a graceful degradation in performance as you actually start hitting real memory limits and paging comes into effect.

Eventually, I think that can be resolved by taking a hybrid approach: wait until memory pressure builds and paging hurts performance more than you'd like, then auto-migrate machines off the host as necessary. You get the best of both worlds: oversubscription when resource usage is low and performance is unaffected, and on-demand resource allocation when resources are known to be needed.
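In pseudocode-ish Python, the policy I'm imagining is something like this (the monitoring and migration hooks are hypothetical, as are the thresholds):

```python
import time

SWAP_RATE_LIMIT = 500   # host pages/sec we tolerate (made-up threshold)
CHECK_INTERVAL = 30     # seconds between checks

def balance(host, other_hosts):
    # Hypothetical hooks: swap_rate(), vms(), page_in_rate(),
    # live_migrate(), and free_memory() are illustrative, not a real API.
    while True:
        if host.swap_rate() > SWAP_RATE_LIMIT:
            # Paging is hurting: live-migrate the VM that is paging
            # hardest onto the host with the most free memory.
            victim = max(host.vms(), key=lambda vm: vm.page_in_rate())
            target = max(other_hosts, key=lambda h: h.free_memory())
            victim.live_migrate(target)
        time.sleep(CHECK_INTERVAL)
```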

-Laxitive

Submission + - Extreme Memory Oversubscription for VMs (gridcentriclabs.com) 4

Laxitive writes: Virtualization systems currently have a pretty easy time oversubscribing CPUs (running lots of VMs on a few CPUs), but have had a very hard time oversubscribing memory. GridCentric, a virtualization startup, just posted on their blog a video demoing the creation of 16 one-gigabyte desktop VMs (running X) on a computer with just 5 GB of RAM. The blog post includes a good explanation of how this is accomplished, along with a description of how it differs from the major approaches in use today (memory ballooning, VMware's page sharing, etc.). Their method is based on a combination of lightweight VM cloning (sort of like fork() for VMs) and on-demand paging. Seems like the 'other half' of resource oversubscription for VMs might finally be here.
