12353944
submission
ChelleChelle writes:
Cloud computing has been generating a lot of buzz lately—yet is it really a revolutionary new concept or simply the industry's topic du jour? According to the author of this article (Dustin Owens of BT Americas), cloud computing is both evolutionary and potentially revolutionary, thanks to its elasticity. For Owens, once elasticity is combined with on-demand self-service capabilities, it could truly become a game-changing force for IT. As he states, “elasticity could bring to the IT infrastructure what Henry Ford brought to the automotive industry with assembly lines and mass production: affordability and substantial improvements on time to market.” While this sounds fantastic, several monumental security challenges come into play with elastic cloud computing. The bulk of this article is dedicated to examining those challenges.
11970180
submission
ChelleChelle writes:
The NTP (Network Time Protocol) system for synchronizing computer clocks has been around for decades and has worked well for most general-purpose timing uses. However, new developments, such as the increasingly precise timing demands of the finance industry, are driving the need for a more precise and reliable network timing system. Julien Ridoux and Darryl Veitch from the University of Melbourne are working on such a system as part of the Radclock Project. In this article they share some of their expertise on synchronizing network clocks. The authors tackle the key challenge, taming delay variability, and provide useful guidelines for designing robust network timing algorithms.
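One of the guidelines in this area rests on a simple observation: network queuing only ever adds delay, so the timing samples with the smallest round-trip time are the least contaminated. A minimal sketch of that filtering idea, with illustrative names and toy data not taken from the RADclock work:

```python
# Sketch of filtering delay variability in network timing: pick the
# minimum-RTT sample, since queuing noise can only inflate the RTT.
# All names and numbers here are invented for illustration.

def estimate_offset(samples):
    """samples: list of (t_send, t_server, t_recv) tuples in seconds.
    t_send/t_recv are local-clock timestamps; t_server is the server's.
    Returns the clock-offset estimate from the minimum-RTT sample."""
    best = min(samples, key=lambda s: s[2] - s[0])  # smallest RTT wins
    t_send, t_server, t_recv = best
    rtt = t_recv - t_send
    # Assume a symmetric path: the server timestamp maps to the midpoint.
    return t_server - (t_send + rtt / 2)

# Toy data: the true offset is +0.5 s; the third sample saw the least queuing.
samples = [
    (100.000, 100.530, 100.080),   # RTT 0.080
    (101.000, 101.525, 101.060),   # RTT 0.060
    (102.000, 102.510, 102.020),   # RTT 0.020  <- cleanest sample
    (103.000, 103.540, 103.100),   # RTT 0.100
]
print(round(estimate_offset(samples), 3))  # → 0.5
```

Real algorithms such as those the authors describe are considerably more robust (windowed filtering, asymmetry handling, drift tracking), but the minimum-delay principle underlies them.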
11655682
submission
ChelleChelle writes:
Today, many cloud end customers use price as their primary decision criterion when selecting a cloud provider. Due to a variety of factors (the reduced deployment costs of open source software, the perfect-competition characteristics of remote computing, etc.), cloud providers are expected to continuously lower their prices. While low prices may seem like a good thing, it is important to keep in mind the cost to performance. In order to provide cheap service, cloud providers are frequently required to overcommit their computing resources and cut corners on infrastructure, resulting in variable and unpredictable performance of the virtual infrastructure. As this article discusses, this is a situation that needs to change.
11291852
submission
ChelleChelle writes:
Using traces—an essential technique in emulator development—can be a useful addition to any programmer’s toolbox. This article examines how adding snapshots, tracing and playback to existing debugging environments can significantly reduce the time required to find and correct stubborn bugs. From the article, “Detailed CPU state traces are extremely helpful in optimizing and debugging emulators, but the technique can be applied to ordinary programs as well. The method may be applied almost directly if a reference implementation is available for comparison. If this is not the case, traces are still useful for debugging nonlocal problems. The extra work of adding tracing facilities to your program will be rewarded in reduced debugging time.”
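The reference-comparison technique the article quotes can be sketched very compactly: run the reference implementation and the implementation under test in lockstep, log the state after every step, and report the first point of divergence. The function names and the toy "bug" below are invented for illustration:

```python
# Sketch of trace comparison against a reference implementation:
# record a state trace from each version, then locate the first
# step at which they disagree. Names are illustrative only.

def trace(step_fn, state, n_steps):
    """Run n_steps of step_fn from state, logging state after each step."""
    log = []
    for _ in range(n_steps):
        state = step_fn(state)
        log.append(state)
    return log

def first_divergence(ref_log, test_log):
    """Index of the first differing state, or None if the traces match."""
    for i, (a, b) in enumerate(zip(ref_log, test_log)):
        if a != b:
            return i
    return None

# Reference doubles a value; the "optimized" version wraps too early.
ref_step = lambda x: x * 2
buggy_step = lambda x: (x * 2) % 1000   # injected bug

ref_log = trace(ref_step, 1, 12)
test_log = trace(buggy_step, 1, 12)
print(first_divergence(ref_log, test_log))  # → 9
```

In an emulator the "state" would be the full CPU register file rather than a single integer, but the payoff is the same: the bug is localized to one step instead of being hunted backward from a distant crash.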
11018802
submission
ChelleChelle writes:
“Visualization can be a pretty mundane activity: collect some data, fire up a tool, and then present it in a graph, ideally with some pretty colors.” As this interview reveals, however, all of this is changing. Thanks in large part to a new generation of collaborative visualization tools and the efforts of research scientists such as Martin Wattenberg and Fernanda Viegas (both from IBM’s Visual Communication Lab), the communicative potential of visualizations is becoming more widely recognized. In this interview Wattenberg and Viegas discuss their interest in the artistic component and social uses of visualizations, interests which are apparent in their many collaborative projects (for example, their most recent project, Many Eyes).
9946316
submission
ChelleChelle writes:
By now it has become evident that we are facing an energy problem—while our primary sources of energy are running out, the demand for energy is greatly increasing. In the face of this issue, energy-efficient computing has become a hot topic. For those looking for lessons, who better to ask than Steve Furber, the principal designer of the ARM (Acorn RISC Machine), a prime example of a chip that is simple, low power, and low cost. In this interview, conducted by David Brown of Sun’s Solaris Engineering Group, Furber shares some of the lessons and tips on energy-efficient computing that he has learned through working on this and subsequent projects.
8785522
submission
ChelleChelle writes:
Eric Saxe of Sun Microsystems recently posed an interesting question regarding power management: is there anything that software developers can do to address the issue of saving energy? While we typically think of power-management features as being synonymous with hardware, software's role in the efficiency of the overall system has become undeniable. According to Saxe, "Features to reduce power consumption of underutilized system resources have become pervasive in even the largest systems, and the software layers responsible for managing those resources must evolve in turn--implementing policies that drive performance for utilized resources while reducing power for those that are underutilized."
7082548
submission
ChelleChelle writes:
An effective enterprise information management strategy needs to take into account both internal and external data. Dealing with the latter, however, often proves to be a challenge. Today, companies have access to more types of external data than ever before. The question of how to integrate it most effectively looms large. While this article doesn’t provide a hard and fast answer, it does clearly assess the situation, providing valuable information on the pros and cons of integrating at different layers in the architecture.
6931436
submission
ChelleChelle writes:
According to Paul Vixie of Internet Systems Consortium, “DNS [Domain Name System] is many things to many people—perhaps too many things to too many people.” The main aim of his article, then, is to sharpen our understanding of what DNS is by differentiating it from what it is not. Vixie maintains that there are many misuses of DNS common today—or what he calls “stupid DNS tricks”—such as using it as a mapping service or as a mechanism for delivering policy-based information. Overall his point of view is well argued, yet the article raises the question of whether it is better to use DNS only for what it was originally intended, or whether innovation is the key.
6782708
submission
ChelleChelle writes:
“Everyone knows that maintenance is hard and boring, and avoids doing it.” So begins a recent article in acmqueue. The authors of the piece (Paul Stachour and David Collier-Brown), however, set out to prove the opposite. According to them, maintenance is not only easy but can also save you, and your management, time on the most frantic of projects. So what, then, is the best way to go about it? The majority of the article is devoted to answering this question. Stachour and Collier-Brown discuss the different approaches to software maintenance, pointing out the downsides of each before finally advocating an approach in which maintenance is built into the system from the ground up.
6580677
submission
ChelleChelle writes:
For quite some time computer simulation has played an integral role in the study of the structure and function of biological molecules, with parallel computers being used to conduct computationally demanding simulations and to analyze their results. Recently, advances in GPU processors and programming tools have opened up tremendous opportunities to employ new simulation and analysis techniques that were previously too computationally demanding to use. In many cases analysis techniques that once required computation on HPC (high-performance computing) clusters can now be performed on desktop computers, making them accessible to applications scientists lacking experience with clustering, queuing, and the like. This article examines the evolution of GPU processors and their use by those involved in biomedicine.
6269105
submission
ChelleChelle writes:
Since the U.S. government first announced its plan to put RFID (radio-frequency ID) chips in all passports issued from 2007 onwards, concerns have sprung up regarding the issues of privacy and security. As the RFID chips are used to transmit such identifying information as your full name, nationality, and even a digitized photo, it is feared that anyone with a high-powered antenna will be able to gain access to this information and potentially use it with malicious intent (for example, to create a new passport with which to gain access to your social security number and finances). Today, the question still remains whether RFID passports really make us more vulnerable to identity theft.
In this article five Harvard University students, led by Jim Waldo (of Harvard and Sun Microsystems), provide a threat analysis of the new passports, examining the plausibility of such attacks successfully taking place as well as what is being done to prevent them.
5779655
submission
ChelleChelle writes:
Participatory sensing technologies are greatly expanding the possible uses of mobile phones in ways that could improve our lives and our communities (for example, by helping us to understand our exposure to air pollution or our daily carbon footprint). However, with these potential gains comes great risk, particularly to our privacy. With their built-in microphones, cameras and location awareness, mobile phones could, at the extreme, become the most widespread embedded surveillance tools in history. Whether phones engaged in sensing data are tools for self and community research, coercion or surveillance depends on who collects the data, how it is handled, and what privacy protections users are given. This article gives a number of opinions about what programmers might do to make this sort of data collection work without slipping into surveillance and control.