ChelleChelle2 writes: As Leslie Lamport once famously stated, "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." Given the complexity of distributed systems and the large set of possible failures, testing and verifying the systems you build is difficult yet incredibly important. Luckily, Caitie McCaffrey, tech lead for observability at Twitter, has provided a useful practitioner’s guide to verifying a distributed system.
ChelleChelle2 writes: From automated writing algorithms churning out articles on finance, sports, weather, and so on to fraud-detection systems for municipalities managing limited resources, the impact of algorithmic decision-making is being felt today throughout virtually every sector of industry and government. As a recent article from acmqueue argues, “It’s time to think seriously about how the algorithmically informed decisions now driving large swaths of society should be accountable to the public.” The article discusses how to act ethically and responsibly when empowering algorithms to make decisions.
ChelleChelle2 writes: In a recent article, Google site reliability engineer Stepan Davidovic and technical writer Kavita Guliani discuss Google’s implementation of a distributed cron service. Davidovic shares many of the valuable lessons his team learned from the experience, describing some of the problems distributed cron services face and outlining possible solutions.
ChelleChelle2 writes: Despite the development of the Network Time Protocol to synchronize clocks between systems on the Internet, achieving simultaneity in distributed systems remains a major issue today. Part of the problem, according to Justin Sheehy, is that there is no “now” in computer systems—“the idea of ‘now’ as a simple value that has meaning across a distance will always be problematic.”
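Because wall clocks can never supply a shared "now," distributed systems commonly fall back on logical time, which orders events by causality rather than by clock readings. The sketch below is a minimal Lamport clock in TypeScript; it is a standard technique in this space rather than code from Sheehy's article, and all names in it are illustrative.

```typescript
// Minimal Lamport logical clock: orders events by causality,
// not by any wall-clock notion of "now". Illustrative sketch only.
class LamportClock {
  private time = 0;

  // A local event: just advance the counter.
  tick(): number {
    return ++this.time;
  }

  // Attach a timestamp when sending a message.
  send(): number {
    return this.tick();
  }

  // On receive, jump past the sender's timestamp so the
  // receive event is ordered after the send event.
  receive(remoteTime: number): number {
    this.time = Math.max(this.time, remoteTime) + 1;
    return this.time;
  }
}

// Two nodes with no shared "now" still agree that
// the send happens before the receive.
const a = new LamportClock();
const b = new LamportClock();
const sent = a.send();       // 1
const got = b.receive(sent); // 2, strictly greater than 1
console.log(sent, got);
```

The point of the sketch is that node b can order the receive after the send without either machine ever agreeing on what time it is "now."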
ChelleChelle2 writes: Dynamic content management systems (DCMSes) such as WordPress and Drupal are becoming ever more popular today. Despite their many attractions, however, DCMSes can pose significant security risks, especially in comparison with static systems. In this article, Paul Vixie discusses why you should “go static or go home.”
ChelleChelle2 writes: In the late 1970s, when David L. Mills first began working on the problem of synchronizing time on networked computers and developed NTP (Network Time Protocol), the net was a much friendlier place than it is today. While the NTP codebase has long had an enviable record as far as security problems go, crackers have now discovered how to use it as a weapon for abuse. It is thus more important than ever to secure the Network Time Protocol.
ChelleChelle2 writes: In this recent article from acmqueue, Dave Long engages in a little “retro-computing,” implementing his own META II with the help of Dewey Val Schorre’s 1964 paper on the topic. The benefits of this exercise are many. From the article: “implementing your own META II will have not only the short-term benefit of providing an easily modifiable ‘workbench’ with which to solve your own problems better, but also a longer-term benefit, in that to the extent you can arrange for functionality to be easily bootstrappable, you can help mitigate the ‘perpetual palimpsest’ of information technology.”
ChelleChelle2 writes: Are we currently in the middle of a paradigm shift in software engineering? Ivar Jacobson certainly thinks so. In a recent article, Jacobson discusses the SEMAT (Software Engineering Method and Theory) initiative, an international effort dedicated to “refounding” software engineering. “As the name indicates, SEMAT is focusing both on supporting the craft (methods) and on building foundational understanding (theory).”
ChelleChelle2 writes: What do Facebook, Apple, Google, and Microsoft all have in common? In addition to being enormously successful, all four are software-based companies. Many hot up-and-coming companies, such as Uber and Tesla, are software-based as well. So why are software-based companies taking over the world today? According to a recent article, “The answer is simply that powering companies by software allows them to be responsive and data-driven and, hence, able to react to changes quickly.” The article then elaborates on how to transform into a responsive enterprise by embracing the “hacker way.”
ChelleChelle2 writes: Today, HTTPS is the de facto standard for secure Web browsing. However, within the past few years several highly visible security incidents—most notably OpenSSL’s Heartbleed—have made it clear that this crucial cybersecurity technology is fundamentally flawed. In both the US and abroad, policymakers and technologists are increasingly advocating various solutions to this problem. Unfortunately, a recent analysis of the proposed regulatory and technological solutions reveals that the “systematic vulnerabilities in this crucial technology are likely to persist for years to come.”
ChelleChelle2 writes: “Script injection vulnerabilities are a bane of Web application development: deceptively simple in cause and remedy, they are nevertheless surprisingly difficult to prevent in large-scale Web development.” Unfortunately, code inspection and testing are typically not enough to ensure the absence of XSS bugs in large Web applications. Luckily, the engineers at Google have developed practical software design patterns that make Web applications much more resistant to the inadvertent introduction of XSS vulnerabilities into application code.
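One pattern in this family is the “safe type” approach: a wrapper type whose only constructors escape untrusted data, so the type checker guarantees that nothing unescaped ever reaches an HTML sink. The TypeScript below is a minimal sketch of that idea, not Google’s actual API; SafeHtml, html, and render are illustrative names.

```typescript
// Sketch of the "safe type" pattern for XSS resistance: untrusted
// strings can only become SafeHtml by passing through an escaping
// factory. Names are illustrative, not a real library's API.
class SafeHtml {
  // Private constructor: the only way to build a SafeHtml is
  // through the vetted factory below.
  private constructor(private readonly value: string) {}

  toString(): string {
    return this.value;
  }

  // Template tag that escapes every interpolated value, so markup
  // written by the developer stays intact while user-supplied data
  // cannot inject tags or attributes.
  static html(parts: TemplateStringsArray, ...data: unknown[]): SafeHtml {
    let out = parts[0];
    for (let i = 0; i < data.length; i++) {
      out += SafeHtml.escape(String(data[i])) + parts[i + 1];
    }
    return new SafeHtml(out);
  }

  private static escape(s: string): string {
    return s
      .replace(/&/g, "&amp;")
      .replace(/</g, "&lt;")
      .replace(/>/g, "&gt;")
      .replace(/"/g, "&quot;")
      .replace(/'/g, "&#39;");
  }
}

// The sink accepts only SafeHtml, so the compiler rejects any
// attempt to pass a raw, unescaped string.
function render(html: SafeHtml): void {
  console.log(html.toString());
}

const userInput = '<img src=x onerror="alert(1)">';
render(SafeHtml.html`<p>Hello, ${userInput}!</p>`);
// Prints: <p>Hello, &lt;img src=x onerror=&quot;alert(1)&quot;&gt;!</p>
```

The appeal of this design is that safety no longer depends on reviewers spotting every missed escape: any code path that tries to hand a plain string to the sink simply fails to compile.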
ChelleChelle2 writes: Network reliability is an important issue in distributed computing: “the degree of reliability in deployment environments is critical in robust systems design and directly determines the kinds of operations that systems can reliably perform without waiting.” Unfortunately, the degree to which networks really are reliable in the real world is the subject of considerable and continued debate, and the discussion is complicated by a general lack of evidence. In this article, Peter Bailis (UC Berkeley) and Kyle Kingsbury (Jepsen Networks) take a first step toward a more open and honest discussion of real-world partition behavior by providing an informal survey of real-world communications failures.
ChelleChelle2 writes: The Association for Computing Machinery (ACM) was founded in 1947. Today it is considered one of the most prestigious scientific and educational computing societies in the world. For decades ACM membership was considered the mark of a professional; this is no longer the case, however, and many programmers today consider the ACM a purely academic institution of little use or relevance to professionals. In this article, Vinton Cerf—one of the “fathers of the Internet” and a past president of the ACM—asks how ACM can “adapt its activities and offerings to increase the participation of professionals.” Is there anything the ACM can do to better serve professional programmers? Join in the conversation.
ChelleChelle2 writes: If there’s anything that the Heartbleed fiasco has taught us, it’s that when it comes to free software you get what you pay for. Many free and open-source software (FOSS) projects are underfunded and thus badly staffed, creating the potential for bugs like Heartbleed to go undiscovered for years. So how can we generate funding for FOSS? In this article, Poul-Henning Kamp proposes a funding model based on his personal experience with FreeBSD and Varnish.