ChelleChelle writes: The time has come for software liability laws, or so Poul-Henning Kamp believes. Drawing on Ken Thompson’s famous Turing Award lecture, in which he stated, “You can’t trust code that you did not totally create yourself,” Kamp proposes a way to introduce product liability into the software world (http://queue.acm.org/detail.cfm?id=2030258). A thought-provoking read from an always contentious writer.
ChelleChelle writes: Sharing storage can be a huge pain, especially when one or two people have a tendency to use up all of the available disk space. Yet there is a quick and efficient way to clean up your storage space, and it’s outlined in a short, humorous article by acmqueue’s “Kode Vicious.” KV also weighs in on the perennial debate of debuggers vs. print statements for debugging.
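KV’s actual advice isn’t reproduced in the blurb, but the basic first step of any cleanup, totaling usage per directory and confronting the top offenders, can be sketched in Python (a hypothetical illustration, not KV’s own recipe):

```python
import os

def top_space_consumers(root, n=5):
    """Sum file sizes per immediate subdirectory of root and
    return the n largest consumers, biggest first."""
    totals = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        # Attribute each file to the top-level subdirectory it lives under.
        rel = os.path.relpath(dirpath, root)
        top = rel.split(os.sep)[0] if rel != "." else "."
        for name in filenames:
            try:
                path = os.path.join(dirpath, name)
                totals[top] = totals.get(top, 0) + os.path.getsize(path)
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

Running this against a shared volume quickly shows whose directory is eating the disk, which is the conversation starter the column is really about.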
ChelleChelle writes: Today’s data systems differ greatly from those of the past. While classic systems could offer crisp answers due to the relatively small amount of data they contained, today’s systems hold humongous amounts of data content—thus, the data quality and meaning is often fuzzy. In this article, Microsoft’s Pat Helland examines the ways in which today’s answers differ from what we used to expect, before moving on to state the criteria for a new theory and taxonomy of data.
ChelleChelle writes: On May 8th, 2007, the Web sites of Estonian banks, media outlets, and government agencies were the targets of a cyberterrorism attack. A little more than a year later, Georgia was subjected to similar cyberattacks in conjunction with the Russian incursion into South Ossetia in August 2008. This article from ACM Queue examines the two attacks to highlight key vulnerabilities in national Internet infrastructure and to suggest ways of establishing a more robust and defensible Internet presence.
ChelleChelle writes: Over the last ten years, virtualization has been the source of much hype. Many have come to perceive it as a panacea for a host of IT problems, ranging from resource underutilization to data-center optimization. Yet the question remains: can virtualization deliver on its promises? According to Evangelos Kotsovinos, a vice president at Morgan Stanley, yes it can, just not right out of the box. For virtualization to deliver on its promise, both vendors and enterprises need to adapt in a number of ways. This article cuts through the hype and focuses on the hidden costs and difficult system administration challenges that are often overlooked.
ChelleChelle writes: When most people think of cybercrime and the online theft of valuable, business-related information, they tend to consider only the obvious information at risk: think banking codes or secret inventions. Today's criminals, however, have broadened their definition of high-value commercial information to include more mundane but valuable data such as manufacturing processes, suppliers, customers, factory layouts, contract terms, employment data, and general know-how. This means that any business showing leadership in any aspect of its industry is a potential target for attack. In this new age of cybercrime, past security wisdom is no longer valid. To address how the current threat environment has evolved and how businesses can seek to protect themselves, ACM initiated a roundtable discussion with some of the top minds in the industry.
ChelleChelle writes: As even a quick glance at this article will reveal, the author’s title was clearly intended to be tongue-in-cheek. Keeping bits safe is actually quite difficult to do. In fact, as storage systems grow ever larger, protecting their data for long-term storage becomes ever more challenging. In this article David Rosenthal examines the claims of various storage system manufacturers regarding the reliability of their products, explores the techniques used to prevent data loss, and then suggests steps that should be taken in the future to handle the inevitable failures of long-term storage.
ChelleChelle writes: In this latest case study from acmqueue, Russell Williams (principal scientist, Adobe Photoshop) and Clem Cole (architect of Intel’s Cluster Ready program) discuss Photoshop’s long history with parallelism and what they now see as the main challenge: “Photoshop’s parallelism, born in the era of specialized expansion cards, has managed to scale well for the two- and four-core machines that have emerged over the past decade. As Photoshop’s engineers prepare for the eight- and 16-core machines that are coming, however, they have started to encounter more and more scaling problems, primarily a result of the effects of Amdahl’s law and memory-bandwidth limitations.” An interesting read, especially since any software engineer who has ever attempted to achieve parallelism in an application will recognize many of the problems and challenges the Photoshop team is now facing.
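The Amdahl’s-law wall the authors mention is easy to make concrete: if a fraction p of the work parallelizes perfectly over n cores, the overall speedup is 1 / ((1 - p) + p/n). A quick sketch:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when a fraction p of the work
    is perfectly parallelized across the given number of cores,
    and the remaining (1 - p) stays serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / cores)
```

Even with 90 percent of the work parallelized, 16 cores yield only a 6.4x speedup, and the ceiling as cores go to infinity is 10x, which is exactly the kind of diminishing return the Photoshop team is bracing for.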
ChelleChelle writes: Information technology has the potential to radically transform health care by providing a variety of advantages, ranging from a decrease in medical errors and paperwork to improved patient safety and care. Yet progress over the last several decades has been slow. In this article Dr. Stephen Cantrill discusses the history of HIT (health information technology), examining why so many efforts in this field have failed. In doing so he pinpoints some of the major challenges that still exist today in the application of medical informatics to the daily practice of health care. Foremost among these challenges are developing an effective human-machine interface and ensuring the reliability and availability of systems.
ChelleChelle writes: Errors, whether transient or permanent, are unfortunately a fact of life. To make sure that a system can properly handle them, it is essential to test the error-detection and correction circuitry by injecting errors. This is the main topic of a recent article from acmqueue in which Steve Chessin of Oracle talks about injecting various types of errors (e-cache errors, memory errors) on the UltraSPARC-II.
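Chessin’s injection targets are hardware-specific, but the underlying idea, deliberately corrupting protected data to prove the detection logic actually fires, can be illustrated in miniature with a parity-protected word (a simplified software analogue, not the article’s mechanism):

```python
def parity(word):
    """Even-parity bit for an integer word: 1 if the count of set bits is odd."""
    return bin(word).count("1") % 2

def store(word):
    """Store a word together with its parity bit."""
    return (word, parity(word))

def inject_bit_flip(stored, bit):
    """Simulate a transient fault by flipping one data bit,
    leaving the stored parity bit untouched."""
    word, p = stored
    return (word ^ (1 << bit), p)

def check(stored):
    """Return True if the stored word still matches its parity bit."""
    word, p = stored
    return parity(word) == p
```

Flipping a single bit makes `check` fail, confirming detection works; flipping two bits slips past single-bit parity, which is the kind of coverage gap that systematic injection testing is meant to expose.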
ChelleChelle writes: The advent of virtual machines and cloud computing has greatly changed the IT world, offering both new opportunities (making applications more portable) and new challenges (breaking long-standing linkages between applications and their supporting physical devices). Before data-center managers can take advantage of these new opportunities, they must have a better understanding of service infrastructure requirements and their linkages to applications. With this in mind, acmqueue initiated a roundtable discussion, bringing together providers and users of network virtualization technologies from leading companies (including Yahoo!, Hewlett-Packard, and Citrix Systems) to discuss how virtualization and clouds affect network service architectures.
ChelleChelle writes: Software developers regularly draw diagrams of their systems. Such diagrams, whether hastily sketched on a whiteboard or rendered in high-quality poster format, are of great assistance in a developer’s daily work (helping them examine and understand source code, explain existing code to a coworker, etc.). A group of researchers from Microsoft Research, Robert DeLine, Gina Venolia, and Kael Rowan, believe, however, that software could improve this process. They are currently designing an interactive code map for development environments. As they see it, “making a code map central to the user interface of the development environment promises to reduce disorientation, answer common information needs, and anchor team conversations.”
ChelleChelle writes: Latency has a direct impact on performance, so to identify performance issues it is essential to understand latency. With the introduction of DTrace it is now possible to measure latency at arbitrary points; the problem, however, is how to present this data visually in an effective manner. Toward this end, heat maps can prove to be a powerful tool. When I/O latency is presented as a visual heat map, some intriguing and beautiful patterns can emerge. These patterns provide insight into how a system is actually performing and what kinds of latency end-user applications experience.
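The heat-map construction itself is simple to sketch: bin (time, latency) samples into a two-dimensional grid of counts, with time on one axis and latency on the other, then color each cell by its count. A minimal illustration of the binning step (the article’s DTrace collection and rendering are not shown):

```python
def latency_heatmap(samples, time_bins, latency_bins, t_max, l_max):
    """Bin (timestamp, latency) samples into a time-vs-latency grid of
    counts; each column is a time slice, each row a latency band."""
    grid = [[0] * time_bins for _ in range(latency_bins)]
    for t, lat in samples:
        x = min(int(t / t_max * time_bins), time_bins - 1)
        y = min(int(lat / l_max * latency_bins), latency_bins - 1)
        grid[y][x] += 1
    return grid
```

Rendering each count as a color intensity is what makes the patterns (banding, outliers, bimodal latency) jump out in a way that averages never show.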
ChelleChelle writes: The production of digital information is increasing at an astonishing rate. To put this information to good use, we need ways to explore, relate, and communicate the data meaningfully. Hence visualization, which involves the principled mapping of data variables to visual features such as position, size, shape, and color, is becoming an area of great interest. According to three scholars from Stanford University, Jeffrey Heer, Michael Bostock, and Vadim Ogievetsky, “The goal of visualization is to aid our understanding of data by leveraging the human visual system’s highly tuned ability to see patterns, spot trends, and identify outliers. Well-designed visual representations can replace cognitive calculations with simple, perceptual inferences and improve comprehension, memory and decision making.” In this article Heer, Bostock, and Ogievetsky provide a survey of several powerful visualization techniques. As an added bonus, many of their visualizations are accompanied by interactive examples (created using Protovis, an open source language for Web-based data visualization).
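That “principled mapping of data variables to visual features” can be sketched as a plain function from data records to visual marks (a toy Python encoding rather than Protovis, with made-up normalization and size choices):

```python
def encode(records, x_key, y_key, size_key):
    """Map each record's data variables onto visual variables:
    x/y position normalized to [0, 1] and a marker size that
    grows linearly with the chosen value."""
    def norm(key):
        vals = [r[key] for r in records]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0  # avoid divide-by-zero for constant columns
        return {id(r): (r[key] - lo) / span for r in records}
    xs, ys, ss = norm(x_key), norm(y_key), norm(size_key)
    return [{"x": xs[id(r)], "y": ys[id(r)], "size": 4 + 36 * ss[id(r)]}
            for r in records]
```

The point of such declarative encodings, in Protovis as in this sketch, is that the designer states which variable drives which visual channel and lets the library handle scales and drawing.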