17787806
submission
ChelleChelle writes:
Over the last ten years virtualization has been the source of much hype. Many have come to perceive virtualization as a panacea for a host of IT problems, ranging from resource underutilization to data-center optimization. Yet the question remains: can virtualization deliver on its promises? According to Evangelos Kotsovinos, a vice president at Morgan Stanley, yes it can, just not right out of the box. For virtualization to deliver on its promise, both vendors and enterprises need to adapt in a number of ways. This article cuts through the hype surrounding virtualization and focuses on the hidden costs and the complex, difficult system administration challenges that are often overlooked.
17420926
submission
ChelleChelle writes:
When most people think of cybercrime and the online theft of valuable, business-related information they tend to consider only the obvious information at risk—think banking codes or secret inventions. Today's criminals, however, have broadened their definition of high-value commercial information to include more mundane but valuable information such as manufacturing processes, suppliers, customers, factory layout, contract terms, employment data, and general know-how. This means that any business that shows leadership in any aspect of its industry is a potential target for attack. In this new age of cybercrime, past security wisdom is no longer valid. To address how the current threat environment has evolved and how businesses can seek to protect themselves, ACM initiated a roundtable discussion with some of the top minds in the industry.
16977928
submission
ChelleChelle writes:
As even a quick glance at this article will reveal, the author’s title was clearly intended to be tongue-in-cheek. Keeping bits safe is actually quite difficult to do. In fact, as storage systems grow ever larger, protecting their data for the long term is becoming more and more challenging. In this article David Rosenthal examines the claims of various storage system manufacturers regarding the reliability of their products, explores the different techniques used to prevent data loss, and then addresses some of the steps that should be taken in the future to handle the inevitable failures of long-term storage.
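To see why manufacturers' reliability claims deserve scrutiny, here is a toy sketch (not from Rosenthal's article) of the naive replication math: the probability of losing an object entirely, assuming each replica fails independently. The function name and the 1% annual loss rate are illustrative assumptions; the article's point is precisely that real failures are often correlated, so these numbers are best-case.

```python
def p_object_loss(p_replica_loss: float, n_replicas: int) -> float:
    """Probability that every one of n replicas is lost, assuming
    independent failures (an assumption that rarely holds in practice)."""
    return p_replica_loss ** n_replicas

# With a hypothetical 1% annual loss rate per replica:
print(p_object_loss(0.01, 1))  # 0.01
print(p_object_loss(0.01, 3))  # ~1e-06
```

Correlated failures (same firmware bug, same data center, same operator error) can make the real loss probability orders of magnitude worse than this independent-failure estimate.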
16458888
submission
ChelleChelle writes:
In this latest case study from acmqueue, Russell Williams (principal scientist, Adobe Photoshop) and Clem Cole (architect of Intel’s Cluster Ready program) discuss Photoshop’s long history with parallelism and what they now see as the main challenge. “Photoshop’s parallelism, born in the era of specialized expansion cards, has managed to scale well for the two- and four-core machines that have emerged over the past decade. As Photoshop’s engineers prepare for the eight- and 16-core machines that are coming, however, they have started to encounter more and more scaling problems, primarily a result of the effects of Amdahl’s law and memory-bandwidth limitations.” An interesting read, especially since any software engineer who has ever attempted to achieve parallelism in an application will recognize many of the problems and challenges the Photoshop team is now facing.
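The scaling wall the article mentions is easy to quantify. A minimal sketch of Amdahl's law (the parallel fraction of 90% below is an illustrative assumption, not a figure from the article) shows why eight and 16 cores yield diminishing returns:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Speedup predicted by Amdahl's law: the serial fraction
    is untouched, only the parallel fraction divides across cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 90% of the work parallelized, scaling stalls quickly:
for n in (2, 4, 8, 16):
    print(n, round(amdahl_speedup(0.9, n), 2))
# 2 cores -> 1.82x, 4 -> 3.08x, 8 -> 4.71x, 16 -> 6.4x
```

With a 10% serial fraction the speedup can never exceed 10x no matter how many cores are added, which is why the remaining serial code (and memory bandwidth) dominates the team's tuning effort.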
16337874
submission
ChelleChelle writes:
Acmqueue has a new article out that promises to make optimizing software performance easier. The key? A clearer understanding of a few fundamental principles.
16094462
submission
ChelleChelle writes:
Information technology has the potential to radically transform health care by providing a variety of advantages, ranging from a decrease in medical errors and paperwork to improved patient safety and care. Yet progress over the last several decades has been slow. In this article Dr. Stephen Cantrill discusses the history of HIT (health information technology), examining why so many efforts in this field have failed. In doing so he pinpoints some of the major challenges that still exist today in the application of medical informatics to the daily practice of health care. Foremost amongst these challenges are the issues of developing an effective human-machine interface as well as the reliability and availability of systems.
15086326
submission
ChelleChelle writes:
Errors, whether transient or permanent, are unfortunately a fact of life. To make sure a system can properly handle them, it is essential to test the error-detection and correction circuitry by injecting errors. This is the main topic of a recent article from acmqueue in which Steve Chessin of Oracle talks about injecting various types of errors (e-cache errors, memory errors) on the UltraSPARC-II.
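The idea is simple to demonstrate in software, even though the article is about hardware. Here is a toy analogue (not the UltraSPARC-II mechanism): deliberately flip one bit in a parity-protected word and verify that the detection logic actually notices.

```python
import random

def parity(word: int) -> int:
    """Even-parity bit over a word: 1 if the count of set bits is odd."""
    return bin(word).count("1") % 2

def inject_bit_error(word: int, bit: int) -> int:
    """Flip a single bit -- the software analogue of error injection."""
    return word ^ (1 << bit)

def detected(word: int, stored_parity: int) -> bool:
    """The 'detection circuitry': recompute parity and compare."""
    return parity(word) != stored_parity

data = 0xDEADBEEF
p = parity(data)
corrupted = inject_bit_error(data, random.randrange(32))
assert detected(corrupted, p)  # any single-bit flip breaks parity
```

The same principle scales up: without injection you can never be sure the checker itself works, because correct hardware never exercises the error path on its own.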
14867612
submission
ChelleChelle writes:
The advent of virtual machines and cloud computing has greatly changed the IT world, offering both new opportunities (making applications more portable) and new challenges (breaking long-standing linkages between applications and their supporting physical devices). Before data-center managers can take advantage of these new opportunities, they must have a better understanding of service infrastructure requirements and their linkages to applications. With this in mind acmqueue initiated a roundtable discussion, bringing together providers and users of network virtualization technologies from leading companies (including Yahoo!, Hewlett-Packard and Citrix Systems) to discuss how virtualization and clouds impact network service architectures.
14211714
submission
ChelleChelle writes:
Software developers regularly draw diagrams of their systems. Such diagrams, be they hastily sketched on a whiteboard or in high-quality poster format, are of great assistance to a developer’s daily work (helping developers examine and understand source code, explain existing code to a coworker, etc.). A group of researchers from Microsoft Research—Robert DeLine, Gina Venolia and Kael Rowan—feel, however, that software could improve this process. They are currently designing an interactive code map for development environments. As they see it, “making a code map central to the user interface of the development environment promises to reduce disorientation, answer common information needs, and anchor team conversations.”
12966816
submission
ChelleChelle writes:
Latency has a direct impact on performance—thus, to identify performance issues, it is essential to understand latency. With the introduction of DTrace it is now possible to measure latency at arbitrary points—the problem, however, is how to present this data visually in an effective manner. Toward this end, heat maps can be a powerful tool. When I/O latency is presented as a visual heat map, some intriguing and beautiful patterns can emerge. These patterns provide insight into how a system is actually performing and what kinds of latency end-user applications experience.
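The core data structure behind such a heat map is just a 2-D histogram: time on the x-axis, latency on the y-axis, sample count as color intensity. A minimal sketch (the function name and bucket sizes are illustrative, not from the article):

```python
from collections import Counter

def latency_heatmap(samples, time_bucket=1.0, latency_bucket=1.0):
    """Bin (timestamp, latency) samples into a 2-D histogram:
    key = (time bucket, latency bucket), value = sample count.
    Rendering maps each count to a color intensity."""
    grid = Counter()
    for t, lat in samples:
        grid[(int(t // time_bucket), int(lat // latency_bucket))] += 1
    return grid

# Three fast I/Os and one slow outlier:
samples = [(0.1, 2.5), (0.4, 2.7), (0.9, 9.8), (1.2, 2.6)]
grid = latency_heatmap(samples)
print(grid[(0, 2)], grid[(0, 9)], grid[(1, 2)])  # 2 1 1
```

Unlike an average-latency line graph, the full distribution is preserved, which is why outlier bands and multi-modal patterns become visible at a glance.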
12568632
submission
ChelleChelle writes:
The production of digital information is increasing at an astonishing rate. In order to put this information to good use, we need to find ways to explore, relate and communicate the data in a meaningful way. Hence visualization, which involves the principled mapping of data variables to visual features such as position, size, shape and color, is becoming an area of great interest. According to three scholars from Stanford University, Jeffrey Heer, Michael Bostock and Vadim Ogievetsky, “The goal of visualization is to aid our understanding of data by leveraging the human visual system’s highly tuned ability to see patterns, spot trends and identify outliers. Well-designed visual representations can replace cognitive calculations with simple, perceptual inferences and improve comprehension, memory and decision making.” In this article Heer, Bostock and Ogievetsky provide a survey of several powerful visualization techniques. As an added bonus, many of their visualizations are accompanied by interactive examples (created using Protovis, an open source language for Web-based data visualization).
12353944
submission
ChelleChelle writes:
Cloud computing has been generating a lot of buzz lately—yet is it really a revolutionary new concept or simply the industrial topic du jour? According to the author of this article (Dustin Owens of BT Americas), cloud computing is an evolutionary and potentially revolutionary concept because of its elasticity. For Owens, once elasticity is combined with on-demand self-service capabilities it could truly become a game-changing force for IT. As he states, “elasticity could bring to the IT infrastructure what Henry Ford brought to the automotive industry with assembly lines and mass production: affordability and substantial improvements on time to market.” While this sounds fantastic, there are several monumental security challenges that come into play with elastic cloud computing. The bulk of this article is dedicated to examining these challenges.
11970180
submission
ChelleChelle writes:
The NTP (Network Time Protocol) system for synchronizing computer clocks has been around for decades and has worked well for most general-purpose timing uses. However, new developments, such as the increasingly precise timing demands of the finance industry, are driving the need for a more precise and reliable network timing system. Julien Ridoux and Darryl Veitch from the University of Melbourne are working on such a system as part of the RADclock project. In this article they share some of their expertise on synchronizing network clocks. The authors tackle the key challenge, taming delay variability, and provide useful guidelines for designing robust network timing algorithms.
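One classic way to tame delay variability, which the NTP literature uses and the article builds on, is to trust the timestamp exchange with the smallest round-trip delay: minimum-delay packets suffer least from queuing noise. A minimal NTP-style sketch (this illustrates the general filtering idea, not the RADclock algorithm itself):

```python
def best_offset(samples):
    """Estimate clock offset from NTP-style timestamp exchanges.
    Each sample is (t1, t2, t3, t4): client send, server receive,
    server send, client receive. Pick the exchange with the smallest
    round-trip delay, since it is least distorted by queuing."""
    def rtt(s):
        t1, t2, t3, t4 = s
        return (t4 - t1) - (t3 - t2)
    def offset(s):
        t1, t2, t3, t4 = s
        return ((t2 - t1) + (t3 - t4)) / 2
    return offset(min(samples, key=rtt))

# A low-delay exchange and a queue-delayed one (hypothetical values):
samples = [(0.0, 10.001, 10.002, 0.004), (1.0, 11.003, 11.004, 1.010)]
print(best_offset(samples))  # uses the 3 ms-RTT sample, not the 9 ms one
```

Filtering by minimum delay assumes path asymmetry is small; handling asymmetric and drifting delays robustly is exactly the hard part the authors address.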
11655682
submission
ChelleChelle writes:
Today, many cloud end customers use price as their primary decision criterion when selecting a cloud provider. Due to a variety of factors (the reduced deployment costs of open source software, the perfect competition characteristics of remote computing, etc) cloud providers are expected to continuously lower their prices. While low prices may seem like a good thing, it is important to keep in mind the costs to performance. In order to provide cheap service, cloud providers are frequently required to overcommit their computing resources and cut corners on infrastructure, resulting in variable and unpredictable performance of the virtual infrastructure. As this article discusses, this is a situation that needs to change.
11291852
submission
ChelleChelle writes:
Using traces—an essential technique in emulator development—can be a useful addition to any programmer’s toolbox. This article examines how adding snapshots, tracing and playback to existing debugging environments can significantly reduce the time required to find and correct stubborn bugs. From the article, “Detailed CPU state traces are extremely helpful in optimizing and debugging emulators, but the technique can be applied to ordinary programs as well. The method may be applied almost directly if a reference implementation is available for comparison. If this is not the case, traces are still useful for debugging nonlocal problems. The extra work of adding tracing facilities to your program will be rewarded in reduced debugging time.”
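The comparison step the article describes, lining up a detailed state trace against a reference implementation, can be sketched in a few lines. This is an illustrative toy (the dictionary fields and function name are assumptions, not from the article):

```python
def diverge_at(trace, reference):
    """Return the index of the first step where two state traces
    disagree, or None if they match. Comparing an emulator's trace
    against a reference implementation pinpoints exactly where
    behavior first goes wrong, instead of where it finally crashes."""
    for i, (a, b) in enumerate(zip(trace, reference)):
        if a != b:
            return i
    return None

reference = [{"pc": 0, "acc": 0}, {"pc": 1, "acc": 5}, {"pc": 2, "acc": 10}]
buggy     = [{"pc": 0, "acc": 0}, {"pc": 1, "acc": 5}, {"pc": 2, "acc": 7}]
print(diverge_at(buggy, reference))  # 2
```

The payoff is nonlocal debugging: the symptom may surface millions of instructions after the cause, but the first trace divergence lands you at the faulty step directly.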