
+ - Recommended Practices for Developing Safety-Critical Software Systems->

heidibrayer writes: Our discussion of technical best practices for the software development of safety-critical (SC) systems has four parts. First, we set the context by addressing the questions "What are SC systems?" and "Why is their development challenging?" We then present three of the eight technical best practices for SC systems. Finally, we briefly address how an organization can prepare for, and achieve effective results from, following these best practices.
Link to Original Source

+ - Generating Code with AADL->

heidibrayer writes: Avionics and other safety-critical systems are becoming increasingly reliant on software. For example, the F-35 Lightning II is a fifth-generation fighter jet that contains more than 8 million lines of software code (LOC), four times that of the world’s first fifth-generation fighter, the F-22 Raptor. This upsurge in software reliance motivates the need to verify and validate requirements early in the software development lifecycle, since requirements errors are often propagated from the design phase to the implementation phase. The remainder of this blog post gives examples of how AADL is being used to generate code for software-reliant avionics systems.
Link to Original Source

+ - DevOps in the Federal Government - Where to Start? ->

heidibrayer writes: The federal government continues to search for better ways to leverage the latest technology trends and increase efficiency of developing and acquiring new products or obtaining services under constrained budgets. DevOps is gaining more traction in many federal organizations, such as U.S. Citizenship and Immigration Services (USCIS), the Environmental Protection Agency (EPA), and the General Services Administration (GSA). These and other government agencies face challenges, however, when implementing DevOps with Agile methods and employing DevOps practices in every phase of the project lifecycle, including acquisition, development, testing, and deployment. A common mistake when implementing DevOps is trying to buy a finished product or an automated toolset, rather than considering its methods and the critical elements required for successful adoption within the organization. As described in previous posts on this blog, DevOps is an extension of Agile methods that requires all the knowledge and skills necessary to take a project from inception through sustainment, along with the project stakeholders, to be contained within a dedicated team.
Link to Original Source

+ - Netflix and the Chaos Monkey ->

heidibrayer writes: DevOps can be succinctly defined as a mindset of molding your process and organizational structures to promote

- business value
- software quality attributes most important to your organization
- continuous improvement

As I have discussed in previous posts on DevOps at Amazon and software quality in DevOps, while DevOps is often approached through practices such as Agile development, automation, and continuous delivery, the spirit of DevOps can be applied in many ways. In this blog post, I am going to look at another seminal case study of DevOps thinking applied in a somewhat out-of-the-box way: Netflix.
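Netflix's Chaos Monkey makes this mindset concrete by randomly terminating production instances, forcing teams to build services that survive failure. A toy sketch of the idea, with all names and cluster details invented for illustration:

```python
import random

def chaos_monkey(instances, rng):
    """Terminate one running instance at random, as Chaos Monkey does in production."""
    victim = rng.choice(sorted(instances))
    instances.discard(victim)
    return victim

def service_available(instances):
    """A redundantly deployed service stays up as long as any instance survives."""
    return len(instances) > 0

rng = random.Random(7)
cluster = {"web-1", "web-2", "web-3"}
killed = chaos_monkey(cluster, rng)
# Redundancy keeps the service up despite the deliberate loss of one instance.
assert killed not in cluster and service_available(cluster)
```

The point is not the tool but the discipline: if failures are injected continuously, resilience becomes a routine design requirement rather than an afterthought.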

Link to Original Source

+ - Goto Fail & Heartbleed: 2 Case Studies in Software Assurance ->

heidibrayer writes: Mitre’s Top 25 Most Dangerous Software Errors is a list that details quality problems, as well as security problems. This list aims to help software developers “prevent the kinds of vulnerabilities that plague the software industry, by identifying and avoiding all-too-common mistakes that occur before software is even shipped.” These vulnerabilities often result in software that does not function as intended, presenting an opportunity for attackers to compromise a system. This blog post highlights our research in examining techniques used for addressing software defects in general and how those can be applied to improve security detection and management.
Link to Original Source

+ - Four Approaches for Shifting Software Testing to the Left ->

heidibrayer writes: One of the most important and widely discussed trends within the software testing community is shift left testing, which simply means beginning testing as early as practical in the lifecycle. What is less widely known, both inside and outside the testing community, is that testers can employ four fundamentally different approaches to shift testing to the left. Unfortunately, different people commonly use the generic term shift left to mean different approaches, which can lead to serious misunderstandings. This blog post explains the importance of shift left testing and defines each of these four approaches using variants of the classic V model to illustrate them.
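The most familiar of these shifts moves testing down the left leg of the V: developers write unit tests alongside, or even before, the code under test. A minimal stdlib sketch of that test-first pattern (the function and its contract are invented for illustration):

```python
import unittest

def parse_rate(text):
    """Parse a sample rate such as '44100 Hz' into an int; written after its test."""
    value, unit = text.strip().split()
    if unit != "Hz":
        raise ValueError(f"unsupported unit: {unit}")
    return int(value)

class ParseRateTest(unittest.TestCase):
    # Written before parse_rate existed: pinning the contract at coding time
    # surfaces defects immediately, rather than at integration or system test.
    def test_parses_value(self):
        self.assertEqual(parse_rate("44100 Hz"), 44100)

    def test_rejects_unknown_unit(self):
        with self.assertRaises(ValueError):
            parse_rate("44100 kHz")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseRateTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Catching the unit-mismatch defect here costs one failed test run; catching it at system test costs a debugging session.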
Link to Original Source

+ - An Enhanced Tool for Android App Analysis ->

heidibrayer writes: Each software application installed on a mobile smartphone, whether a new app or an update, can introduce new, unintentional vulnerabilities or malicious code. These problems can lead to security challenges for organizations whose staff uses mobile phones for work. In April 2014, we published a blog post highlighting DidFail (Droid Intent Data Flow Analysis for Information Leakage), which is a static analysis tool for Android app sets that addresses data privacy and security issues faced by both individual smartphone users and organizations. This post highlights enhancements made to DidFail in late 2014 and an enterprise-level approach for using the tool.
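DidFail itself analyzes Dalvik bytecode, but the intuition behind cross-app flow analysis can be shown with a toy model. The sketch below is entirely invented (it is not DidFail's algorithm or API): it propagates taint from a sensitive source through inter-component "intent" channels to a fixpoint, then flags any tainted value reaching an outbound sink.

```python
SOURCES = {"device_id"}             # sensitive data origins
SINKS = {"send_sms", "http_post"}   # channels where data leaves the device

def find_leaks(components):
    """Fixpoint taint propagation over inter-component channels."""
    tainted = set(SOURCES)
    changed = True
    while changed:
        changed = False
        for c in components:
            if c["reads"] in tainted and c.get("writes") and c["writes"] not in tainted:
                tainted.add(c["writes"])
                changed = True
    # Any component that reads tainted data and calls a sink is a leak.
    return {(c["reads"], c["sink"]) for c in components
            if c["reads"] in tainted and c.get("sink") in SINKS}

apps = [
    {"reads": "device_id", "writes": "intent_share", "sink": None},  # app A shares the id
    {"reads": "intent_share", "writes": None, "sink": "http_post"},  # app B posts it out
]
assert find_leaks(apps) == {("intent_share", "http_post")}
```

Neither app leaks on its own; only analyzing the app set as a whole, as DidFail does, reveals the composed flow.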
Link to Original Source

+ - Addressing the Detrimental Effects of Context Switching with DevOps ->

heidibrayer writes: In a computing system, a context switch occurs when an operating system stores the state of an application thread before stopping it and restoring the state of a different (previously stopped) thread so its execution can resume. The overhead incurred in storing and restoring this state negatively impacts operating system and application performance. This blog post describes how DevOps ameliorates the negative impacts that "context switching" between projects can have on a software engineering team’s performance.
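The cost the analogy rests on can be made concrete with a toy cooperative scheduler: every switch between tasks pays a fixed save/restore overhead, so interleaving two tasks step by step costs more total work than running each to completion. All names and cost figures below are invented for illustration:

```python
SWITCH_COST = 5  # invented units of work to save one task's state and restore another's

def task(steps):
    for _ in range(steps):
        yield 1                      # each step is one unit of real work

def run(tasks, interleave):
    """Drive generator-based tasks, counting work units including switch overhead."""
    total, switches = 0, 0
    if interleave:
        # Round-robin one step at a time, paying a switch after every step.
        pending = list(tasks)
        while pending:
            t = pending.pop(0)
            try:
                total += next(t)
                pending.append(t)
                switches += 1
            except StopIteration:
                pass
        total += switches * SWITCH_COST
    else:
        # Batch: drain each task fully before starting the next.
        for t in tasks:
            total += sum(t)
            switches += 1            # one hand-off per task
        total += (switches - 1) * SWITCH_COST
    return total

batched = run([task(10), task(10)], interleave=False)
swapped = run([task(10), task(10)], interleave=True)
assert batched < swapped  # same real work, far more switch overhead when interleaving
```

Substitute "engineer" for "thread" and "project" for "task" and the arithmetic is the post's argument: the real work is constant, but each project swap taxes the team with state-reloading overhead.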
Link to Original Source

+ - Developing with Docker ->

heidibrayer writes: In our last post, DevOps and Docker, I introduced Docker as a tool to develop and deploy software applications in a controlled, isolated, flexible, and highly portable infrastructure. In this post, I am going to show you how easy it is to get started with Docker. I will dive in and demonstrate how to use Docker containers in a common software development environment by launching a database container (MongoDB) and a web service container (a Python Bottle app), and configuring them to communicate, forming a functional multi-container application. If you haven’t learned the basics of Docker yet, you should go ahead and try out the official Docker tutorial before continuing.
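One detail that makes the multi-container wiring work is that the app container never hardcodes the database's address; it discovers it from its environment, which Docker populates when containers are linked or share a network. A stdlib-only sketch of that pattern (the variable names and defaults are invented, and pymongo/Bottle are omitted to keep it self-contained):

```python
import os

def mongo_uri(env=os.environ):
    """Build the MongoDB URI from the container environment, falling back to
    the linked container's hostname ('db') and Mongo's default port."""
    host = env.get("MONGO_HOST", "db")
    port = env.get("MONGO_PORT", "27017")
    return f"mongodb://{host}:{port}"

# Inside the web container, Docker's networking resolves 'db' to the database
# container, so the default works with no extra configuration at all.
assert mongo_uri({}) == "mongodb://db:27017"
assert mongo_uri({"MONGO_HOST": "10.0.0.7"}) == "mongodb://10.0.0.7:27017"
```

Because the address is injected rather than baked in, the same image runs unchanged on a laptop, in CI, and in production.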
Link to Original Source

+ - Android, Big Data, DevOps, Malware, and Agile - The Top 10 Blog Posts of 2014 ->

heidibrayer writes: In 2014, the SEI blog experienced unprecedented growth, with visitors in record numbers learning more about our work in big data, secure coding for Android, malware analysis, Heartbleed, and V Models for Testing. In 2014 (through December 21), the SEI blog logged 129,000 visits, nearly double the entire 2013 yearly total of 66,757 visits. As we look back on the last 12 months, this blog post highlights our 10 most popular blog posts (based on the number of visits). As we did with our mid-year review, we will include links to additional related resources that readers might find of interest. When possible, we grouped posts by research area to make it easier for readers to learn about related areas of work. This blog post first presents the top 10 posts and then provides a deeper dive into each area of research.
1. Using V Models for Testing
2. Two Secure Coding Tools for Analyzing Android Apps (secure coding)
3. Common Testing Problems: Pitfalls to Prevent and Navigate
4. Four Principles of Engineering Scalable, Big Data Systems (big data)
5. A New Approach to Prioritizing Malware Analysis
6. Secure Coding for the Android Platform (secure coding)
7. A Generalized Model for Automated DevOps (DevOps)
8. Writing Effective Yara Signatures to Identify Malware
9. An Introduction to DevOps (DevOps)
10. The Importance of Software Architecture in Big Data Systems (big data)

Link to Original Source

+ - Preventing Java Zero-Day Vulnerabilities ->

heidibrayer writes: A zero-day vulnerability refers to a software security vulnerability that has been exploited before any patch is published. In the past, vulnerabilities were widely exploited even when a patch was available, which means they were not zero-day. Today, zero-day vulnerabilities are common. Notorious examples include the recent Stuxnet and Operation Aurora exploits. Vulnerabilities may arise from a variety of sources, but most vulnerabilities are the result of simple coding errors. Consequently, developers need to understand common traps and pitfalls in the programming language, libraries, and platform to produce code that is free of vulnerabilities. To address this problem, CERT published The CERT Oracle Secure Coding Standard for Java in 2011. The book is version 1 of this standard and was written primarily for Java SE 6, but also covers features introduced in Java SE 7. This coding standard provides secure coding rules that help programmers recognize and avoid vulnerabilities in their products. Each rule provides simple instructions regarding what a programmer must and must not do. Each rule description is accompanied by noncompliant code examples, as well as compliant solutions that can be used instead. In this blog post, I examine a Java zero-day vulnerability, CVE-2012-0507, which infected half a million Macintosh computers, and consider how this exploit could have been prevented through adherence to two secure coding rules.
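CVE-2012-0507 and the post's analysis are Java-specific, but the underlying class of pitfall, letting untrusted input drive object reconstruction, is easy to demonstrate in Python, where unpickling attacker-supplied bytes lets the attacker pick a callable to invoke. The payload below is deliberately benign:

```python
import pickle

class Payload:
    # __reduce__ tells pickle how to rebuild this object - and an attacker
    # controls it: here it instructs the unpickler to call list("pwned").
    def __reduce__(self):
        return (list, ("pwned",))

blob = pickle.dumps(Payload())   # bytes an attacker might hand your program
result = pickle.loads(blob)      # the attacker-chosen call executes here
assert result == ["p", "w", "n", "e", "d"]
```

The secure coding rule is the same in both languages: never deserialize data from an untrusted source.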
Link to Original Source

+ - What is DevOps? ->

heidibrayer writes: Typically, when we envision DevOps implemented in an organization, we imagine a well-oiled machine that automates infrastructure provisioning, code testing, and application deployment. Ultimately, these practices are a result of applying DevOps methods and tools. DevOps works for all sizes, from a team of one to an enterprise organization.

DevOps can be seen as an extension of an Agile methodology. It requires all the knowledge and skills necessary to take a project from inception through sustainment to be contained within a dedicated project team. Organizational silos must be broken down. Only then can project risk be effectively mitigated.

While DevOps is not, strictly speaking, continuous integration, delivery, or deployment, DevOps practices do enable a team to achieve the level of coordination and understanding necessary to automate infrastructure, testing, and deployment. In particular, DevOps provides organizations a way to ensure

- collaboration between project team roles
- infrastructure as code
- automation of tasks, processes, and workflows
- monitoring of applications and infrastructure
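"Infrastructure as code" means the desired environment lives in version-controlled, machine-readable form, and tooling reconciles what is running against what is declared. A toy reconciler (invented names, not any real tool's API) shows the shape of the idea:

```python
def reconcile(desired, actual):
    """Compute the actions that bring the running environment to the declared state."""
    actions = []
    for host, services in desired.items():
        for svc in sorted(services - actual.get(host, set())):
            actions.append(("start", host, svc))   # declared but not running
        for svc in sorted(actual.get(host, set()) - services):
            actions.append(("stop", host, svc))    # running but not declared
    return actions

desired = {"web-1": {"nginx", "app"}}         # checked into version control
actual = {"web-1": {"nginx", "debug-shell"}}  # what is actually running
assert reconcile(desired, actual) == [
    ("start", "web-1", "app"),
    ("stop", "web-1", "debug-shell"),
]
```

Because the declaration is the source of truth, undocumented drift (like that stray debug shell) is detected and reversed automatically rather than discovered during an outage.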

Business value drives DevOps development. Without a DevOps mindset, organizations often find their operations, development, and testing teams working toward the short-sighted, siloed goals of building their own infrastructure, test suites, or product increments. Once an organization breaks down the silos and integrates these areas of expertise, it can focus on working together toward the common, fundamental goal of delivering business value.

Well-organized teams will find (or create) tools and techniques to enable DevOps practices in their organizations. Every organization is different and has different needs that must be met. The crux of DevOps, though, is not a killer tool or script, but a culture of collaboration and an ultimate commitment to deliver value.

Every Thursday, the SEI will publish a new blog post that offers guidelines and practical advice to organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.

Link to Original Source

+ - How Do Agile Software Teams Engage with Systems Engineering? ->

heidibrayer writes: Tension and disconnects between software and systems engineering functions are not new. Grady Campbell wrote in 2004 that “systems engineering and software engineering need to overcome a conceptual incompatibility (physical versus informational views of a system)” and that systems engineering decisions can create or contribute to software risk if they “prematurely over-constrain software engineering choices” or “inadequately communicate information, including unknowns and uncertainties, needed for effective software engineering.” This tension holds true for Department of Defense (DoD) programs as well, which historically decompose systems from the system level down to subsystem behavior and break down work for the program based on this decomposition. Hardware-focused views are typically deemed not appropriate for software, and some systems engineers (and most systems engineering standards) have not yet adopted an integrated view of the two disciplines. An integrated view is necessary, however, because in complex software-reliant systems, software components often interact with multiple hardware components at different levels of the system architecture. In this blog post, I describe recently published research conducted by me and other members of the SEI’s Client Technical Solutions Division highlighting interactions on DoD programs between Agile software-development teams and their systems engineering counterparts in the development of software-reliant systems.
Link to Original Source

+ - Protecting Organizational Data in a High-Risk Economy ->

heidibrayer writes: Earlier this month, the U.S. Postal Service reported that hackers broke into their computer system and stole data records including social security numbers for 2.9 million customers and 750,000 employees and retirees, according to reports on the breach. In the JP Morgan Chase cyber breach earlier this year, it was reported that hackers stole the personal data of 76 million households as well as information from approximately 8 million small businesses. This breach and other recent thefts of data from Adobe (152 million records), EBay (145 million records), and The Home Depot (56 million records) highlight a fundamental shift in the economic and operational environment, with data at the heart of today’s information economy. In this new economy, it is vital for organizations to evolve the manner in which they manage and secure information. Ninety percent of the data that is processed, stored, disseminated, and consumed in the world today was created in the past two years. Organizations are increasingly creating, collecting, and analyzing data on everything (as exemplified in the growth of big data analytics). While this trend produces great benefits to businesses, it introduces new security, safety, and privacy challenges in protecting the data and controlling its appropriate use. In this blog post, I will discuss the challenges that organizations face in this new economy, define the concept of information resilience, and explore the body of knowledge associated with the CERT Resilience Management Model (CERT-RMM) as a means for helping organizations protect and sustain vital information.
Link to Original Source

+ - DevOps and Agile ->

heidibrayer writes: Melvin Conway, an eminent computer scientist and programmer, coined Conway’s Law, which states: Organizations that design systems are constrained to produce designs which are copies of the communication structures of these organizations. Thus, a company with frontend, backend, and database teams might lean heavily toward three-tier architectures. The structure of the application developed will be determined, in large part, by the communication structure of the organization developing it. In short, form is a product of communication.

Now, let’s look at the fundamental concept of Conway’s Law applied to the organization itself. The traditional-but-insufficient waterfall development process has defined a specific communication structure for our application: Developers hand off to the quality assurance (QA) team for testing, QA hands off to the operations (Ops) team for deployment. The communication defined by this non-Agile process reinforces our flawed organizational structures, uncovering another example of Conway’s Law: Organizational structure is a product of process.

As the figure shown above illustrates, siloed organizational structures align with sequential processes, e.g., waterfall methodologies. The DevOps method of breaking down these silos to encourage free communication and constant collaboration is actually reinforcing Agile thinking. Seen in this light, DevOps is a natural evolution of Agile thinking, bringing operations and sustainment activities and staff into the Agile fold.

Every Thursday, the SEI Blog will publish a new blog post that will offer guidelines and practical advice to organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.

Link to Original Source
