ChelleChelle2 writes: ACM Queue recently published an interesting article pondering what the author, Pat Helland, terms the “power of babble”—i.e., the importance of maintaining flexibility when communicating in an increasingly globalized world. From the article: “Metadata defines the shape, the form, and how to understand our data. It is following the trend taken by natural languages in our increasingly interconnected world. While many concepts can be communicated using shared metadata, no one can keep up with the number of disparate new concepts needed to have a common understanding.” And this is not necessarily such a bad thing, according to Helland: “As much as we would like to have complete understanding of each other, independent innovation is far more important than crisp and clear communication. Our economic future depends on the ‘power of babble.’”
ChelleChelle2 writes: In her column “The Soft Side of Software,” written for ACM Queue, Kate Matsudaira (formerly of Microsoft and Amazon) offers simple yet useful suggestions for acquiring new skills and growing your career.
ChelleChelle2 writes: As many software engineers are only too well aware, designing software for modern multicore processors can be quite a challenge. Traditional software designs (in which threads manipulate shared data) have limited scalability because synchronization of updates to shared data serializes threads and limits parallelism. However, alternative distributed software designs (in which threads do NOT share mutable data), while eliminating synchronization and offering better scalability, pose their own problems and are not a good fit for every program. Luckily, ACM Queue recently published a very useful guide describing advanced synchronization methods that can boost the performance of multicore software.
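To make the tradeoff concrete, here is a toy sketch (a hypothetical illustration, not code from the article) contrasting the two designs the summary describes: a lock-protected shared counter, where every update serializes on the lock, versus a share-nothing design in which each thread counts privately and the results are merged once at the end.

```python
import threading

N_THREADS = 4
N_INCREMENTS = 10_000

# Shared-data design: one counter protected by one lock, so every
# update contends on the same lock and the threads serialize there.
shared_total = 0
lock = threading.Lock()

def locked_worker():
    global shared_total
    for _ in range(N_INCREMENTS):
        with lock:  # synchronization point: all increments serialize here
            shared_total += 1

threads = [threading.Thread(target=locked_worker) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Share-nothing design: each thread owns a private counter; the only
# cross-thread step is a single merge after the threads have finished.
shards = [0] * N_THREADS

def sharded_worker(idx):
    count = 0  # thread-local state: no lock needed
    for _ in range(N_INCREMENTS):
        count += 1
    shards[idx] = count  # one write per thread, to its own slot

threads = [threading.Thread(target=sharded_worker, args=(i,))
           for i in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

sharded_total = sum(shards)
print(shared_total, sharded_total)  # 40000 40000
```

(Note that CPython’s GIL prevents true parallel execution of Python bytecode; the sketch shows the structure of the two designs, not a benchmark.)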
ChelleChelle2 writes: As Leslie Lamport once famously stated, “A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable.” Given the complexity of distributed systems and the large set of possible failures, testing and verifying the systems you build is difficult yet incredibly important. Luckily, Caitie McCaffrey, tech lead for observability at Twitter, has provided a useful practitioner’s guide to verifying a distributed system.
ChelleChelle2 writes: From automated writing algorithms churning out articles on finance, sports, weather, and so on to fraud-detection systems for municipalities managing limited resources, the impact of algorithmic decision-making is being felt today throughout virtually every sector of industry and government. A recent article from ACM Queue discusses how to act ethically and responsibly when empowering algorithms to make decisions: “It’s time to think seriously about how the algorithmically informed decisions now driving large swaths of society should be accountable to the public.”
ChelleChelle2 writes: In a recent article, Google site reliability engineer Stepan Davidovic and technical writer Kavita Guliani discuss Google’s implementation of a distributed cron service. Davidovic shares many of the valuable lessons his team learned from the experience, discussing some of the problems that distributed cron services face and outlining possible solutions.
ChelleChelle2 writes: Despite the development of the Network Time Protocol to synchronize clocks between systems on the Internet, achieving simultaneity in distributed systems remains a major issue today. Part of the problem, according to Justin Sheehy, is that there is no “now” in computer systems—“the idea of ‘now’ as a simple value that has meaning across a distance will always be problematic.”
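One standard response to the absence of a meaningful cross-machine “now” is to order events logically rather than by wall-clock time. Below is a minimal sketch of a Lamport logical clock (an illustration of the general idea, not code from Sheehy’s article):

```python
# Minimal Lamport logical clock. Instead of asking "what time is it now?",
# each process keeps a counter that guarantees only ordering: if event A
# causally precedes event B, then clock(A) < clock(B).
class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        # A message carries the sender's clock value at send time.
        self.time += 1
        return self.time

    def receive(self, msg_time):
        # On receipt, advance past both clocks involved.
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t_send = a.send()           # a's clock becomes 1
t_recv = b.receive(t_send)  # b's clock becomes max(0, 1) + 1 = 2
assert t_send < t_recv      # causal order holds without any shared "now"
```

The assertion holds no matter how skewed the machines’ physical clocks are, which is exactly why logical clocks sidestep the problem of simultaneity.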
ChelleChelle2 writes: Dynamic content management systems (DCMSes) such as WordPress and Drupal are becoming ever more popular. Despite their many attractions, however, DCMSes can pose significant security risks, especially in comparison to static systems. In this article Paul Vixie discusses why you should “go static or go home.”
ChelleChelle2 writes: In the late 1970s, when David L. Mills first began working on the problem of synchronizing time on networked computers and developed NTP (Network Time Protocol), the net was a much friendlier place than it is today. While the NTP codebase has long had an enviable security record, attackers have now discovered how to use it as a weapon for abuse, most notoriously as an amplifier for DDoS attacks. It is thus more important than ever to secure the Network Time Protocol.
ChelleChelle2 writes: In this recent article from ACM Queue, Dave Long engages in a little “retro-computing,” implementing his own META II with the help of Dewey Val Schorre’s 1964 article on the topic. The benefits of this exercise are many. From the article: “implementing your own META II will have not only the short-term benefit of providing an easily modifiable ‘workbench’ with which to solve your own problems better, but also a longer-term benefit, in that to the extent you can arrange for functionality to be easily bootstrappable, you can help mitigate the ‘perpetual palimpsest’ of information technology.”
ChelleChelle2 writes: Are we currently in the middle of a paradigm shift in software engineering? Ivar Jacobson certainly thinks so. In a recent article, Jacobson discusses the SEMAT (Software Engineering Method and Theory) initiative, an international effort dedicated to “refounding” software engineering. “As the name indicates, SEMAT is focusing both on supporting the craft (methods) and on building foundational understanding (theory).”
ChelleChelle2 writes: What do Facebook, Apple, Google, and Microsoft all have in common? In addition to being enormously successful, all four are software-based companies. Many hot up-and-coming companies such as Uber and Tesla are software-based as well. So why are software-based companies taking over the world? According to a recent article, “The answer is simply that powering companies by software allows them to be responsive and data-driven and, hence, able to react to changes quickly.” The article then elaborates on how to transform into a responsive enterprise by embracing the “hacker way.”
ChelleChelle2 writes: Today, HTTPS is the de facto standard for secure Web browsing. However, within the past few years several highly visible security incidents—most notably OpenSSL’s Heartbleed—have made it clear that this crucial cybersecurity technology is fundamentally flawed. In the US and abroad, policymakers and technologists are increasingly advocating various solutions to this problem. Unfortunately, a recent analysis of the suggested regulatory and technological solutions reveals that the “systematic vulnerabilities in this crucial technology are likely to persist for years to come.”
ChelleChelle2 writes: “Script injection vulnerabilities are a bane of Web application development: deceptively simple in cause and remedy, they are nevertheless surprisingly difficult to prevent in large-scale Web development.” Unfortunately, code inspection and testing are typically not enough to ensure the absence of XSS (cross-site scripting) bugs in large Web applications. Luckily, the engineers at Google have developed practical software design patterns that make Web applications much more resistant to the inadvertent introduction of XSS vulnerabilities into application code.
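The flavor of such a design pattern can be sketched in a few lines (a hypothetical illustration, not Google’s actual library): build the template API so that every interpolated value is HTML-escaped by default, making the safe path the path of least resistance.

```python
import html

def safe_html(template, **values):
    """Substitute values into template, HTML-escaping each one.

    Hypothetical helper for illustration: because escaping happens
    inside the API, a developer cannot forget to apply it.
    """
    escaped = {k: html.escape(str(v)) for k, v in values.items()}
    return template.format(**escaped)

user_input = '<script>alert("xss")</script>'
page = safe_html("<p>Hello, {name}!</p>", name=user_input)
print(page)  # the payload is rendered inert as escaped text, not executed
```

A production system must also escape differently depending on context (attribute values, URLs, embedded JavaScript); this sketch covers only HTML text content, but it shows the core idea of making the safe behavior the default.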