Power and vulnerability in structural analysis

Walking through design offices and flipping through books and archives, we’re often struck by glimpses of images that aren’t immediately decipherable — the one above, for instance. Our What’s That? series tells their stories. 

 

Neither an avant-garde abstraction nor a QR code, the image above depicts a stiffness matrix, the basic building block of structural analysis software.

Modern analysis programs have empowered engineers to design increasingly ambitious structures. Recent decades have seen the realization of incredible buildings unachievable without advanced computational analysis. Landmark projects like Beijing’s CCTV Headquarters stand as testament to these programs’ capabilities.

CCTV Headquarters building, designed by OMA (architecture) and Arup (engineering)

The complexity of today’s structural analysis programs tends to obscure their inner workings, even to the engineers who use them every day. As a result, it can be difficult for design teams to critically assess the software’s performance.

In an attempt to bridge this knowledge gap, Serguei Bagrianski, an architectural designer in Arup’s Toronto office, recently coauthored a textbook with Princeton University’s Jean H. Prévost. Based on an undergraduate course that Dr. Prévost taught at Princeton for over 15 years, An Introduction to Matrix Structural Analysis and Finite Element Methods chronicles the evolution of the computer code behind modern structural analysis.

The history of structural analysis

To understand how stiffness matrices became so prominent in modern structural analysis, we need to go back more than three centuries.

Elasticity, a fundamental principle of structural mechanics, was first described by Robert Hooke in 1676 as an anagram: “ceiiinosssttuv.” Daring his contemporaries to decipher the code, Hooke revealed the answer in 1678: “Ut tensio, sic vis,” which translates from Latin to “as the extension, so the force.”

Excerpt from Robert Hooke’s 1678 ‘Lectures De Potentia Restitutiva’

In establishing a linear relationship between force and the deformation of a given object, Hooke’s law provided the basis for describing the mechanical performance of trusses, beams, and frames.
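In its simplest form, the law reads F = kδ: the force F in a spring, or in an axial bar with stiffness k = EA/L, is proportional to its change in length δ.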

Classical structural elements and exemplar structures: truss (Moscow’s 1922 Shukhov Tower), beam (Philadelphia’s 1951 Walnut Lane Memorial Bridge), and frame (1968 São Paulo Museum of Art)

Prior to the Industrial Revolution, builders had limited need for Hooke’s law. Traditional masonry and timber structural systems were designed using empirical knowledge and rules of thumb rather than analysis. But when mass production of iron began late in the 18th century, the material’s incredible strength attracted the interest of builders eager to explore more complex structures.

In order to realize these new systems, building designers developed a variety of rigorous analytical techniques. However, these techniques could be used to analyze only determinate systems: that is, structures with no redundancy, where every element is essential. Although builders of the day had limited appreciation for the benefits of redundancy, today’s engineers recognize that it is a prerequisite for safe, resilient infrastructure.

After the Industrial Revolution, designers created a range of complex iron structures, including the Eiffel Tower (shown under construction in 1887). However, structural complexity was still limited by the available analytical methods.

It wasn’t until the 1930s that a technique began to take shape for the analysis of indeterminate systems: redundant structures where some elements could be removed and the structure would remain stable. This period marked the emergence of matrix structural analysis (MSA), which allowed designers to analyze a complex structure by mathematically stitching together its simple component parts. MSA relied on the invention of stiffness matrices: mathematical entities that precisely express the elastic behavior of structural elements.

Assembly of structural stiffness from element contributions. The five element stiffness matrices (Ka through Ke) descend from the top right; the global stiffness matrix (KG) is shown on the bottom right.

It’s no coincidence that MSA emerged at the same time that Alan Turing was laying the foundations for the programmable computer. Assembling and manipulating stiffness matrices required an overwhelming number of repetitive operations. Although these calculations were too laborious for hand computation, they proved ideally suited for computers.

MSA made it possible to analyze structures of any complexity — as long as the contributing element stiffness matrices were available. (An element stiffness matrix describes the behavior of one element in a structure, whereas a global stiffness matrix describes the behavior of the entire structure.)
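As a rough sketch of how this assembly works (three springs in a line rather than a real structure, and not the code inside any particular analysis package), each element contributes a small stiffness matrix that is added into the rows and columns of the global matrix corresponding to the nodes it connects:

import numpy as np

# Three axial springs joined end to end: node 0 - node 1 - node 2 - node 3.
stiffnesses = [100.0, 200.0, 150.0]        # force per unit extension (arbitrary values)
connectivity = [(0, 1), (1, 2), (2, 3)]    # the two nodes each element connects

n_nodes = 4
K_global = np.zeros((n_nodes, n_nodes))

for k, (i, j) in zip(stiffnesses, connectivity):
    # Element stiffness matrix: Hooke's law for one spring, in matrix form
    K_element = k * np.array([[ 1.0, -1.0],
                              [-1.0,  1.0]])
    # "Stitch" the element into the global matrix at its two nodes
    for a, p in enumerate((i, j)):
        for b, q in enumerate((i, j)):
            K_global[p, q] += K_element[a, b]

# Fix node 0 as a support, pull on node 3 with a unit force, and solve K u = F
free = [1, 2, 3]
F = np.array([0.0, 0.0, 1.0])
u = np.linalg.solve(K_global[np.ix_(free, free)], F)
print(u)   # displacements of the free nodes

Real programs carry out the same bookkeeping with thousands of elements and multiple degrees of freedom per node, but the principle of stitching element stiffness into global stiffness is identical.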

Unfortunately, the elements that could be derived using classical mechanics were limited to trusses, beams, and frames — what engineers refer to as discretized systems. MSA was simply inadequate for analyzing continuous surfaces such as membranes, plates, and shells.

Acceptable approximation

The 1950s witnessed a critical “aha!” moment, as engineers recognized that the solution to this dilemma lay in living with error.

This breakthrough occurred with the development of the finite element method (FEM). FEM supplied the necessary complement to MSA’s universal computational framework: a general method for generating elements.

FEM extension of classical structural elements (and exemplar structures): membrane (Frei Otto’s 1972 tensile roof for the Munich Olympic Stadium), plate (Mies van der Rohe’s 1951 Farnsworth House), and shell (Félix Candela’s 2003 L’Oceanogràfic)

FEM effectively allowed a continuous structure to be analyzed as a mesh of discrete elements.

To understand how this works, it’s important to know that the behavior of continuous structures like the plate below is governed by something called continuum mechanics, which transforms the simple algebraic expressions of classical mechanics into partial differential equations (PDEs).
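For the thin plate shown below, for example, classical (Kirchhoff) plate theory reduces everything to a single fourth-order PDE, D∇⁴w = q, where w is the deflection, q is the distributed load, and D = Et³ / (12(1 − ν²)) is the plate’s bending stiffness (t being the thickness and ν Poisson’s ratio).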

Typical FEM problem: a simply supported plate subject to a distributed load

As any engineering student will confirm, solving PDEs is no easy task. Even for a simply supported plate, arriving at an equation for the deformed shape requires several dozen pages of algebra:

Analytical solution to the bending of a uniformly loaded plate, by Stephen Timoshenko, one of the founders of modern engineering mechanics

Any minor modification to the plate’s geometry — a duct opening or a skewed edge, for example — invalidates this solution, necessitating another volume of derivations. Consequently, analytical solutions are impossible (or at least impractical) to obtain for most engineering applications.

FEM bypasses this hurdle by solving PDEs approximately rather than exactly. Crucially, the error inherent in this approximation can be controlled: the more elements employed, the smaller the error.

In FEM, a more refined mesh (i.e., a higher number of elements) results in a better solution but takes a great deal longer to analyze.

Although using a fine mesh might seem like the obvious way to achieve better approximations, the computational burden grows dramatically with the number of elements used. Even with today’s computational power, analysis of complex structures typically takes hours and sometimes even days. The art of finite element analysis lies in finding the optimal balance between accuracy and computation time.
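To see that trade-off in miniature, here is a deliberately toy sketch (a one-dimensional tapered bar rather than a plate or a building, with made-up numbers): the finite element answer for the tip displacement creeps toward the exact value as the mesh is refined, at the cost of a larger system of equations to solve.

import numpy as np

# Toy convergence study: a tapered bar fixed at x = 0 and pulled by a tip load P.
# The cross-section varies as A(x) = A0 * (1 + x / L), so the exact tip
# displacement is P * L / (E * A0) * ln(2).
E, A0, L, P = 200e9, 0.01, 2.0, 1000.0       # material and geometry (arbitrary)
u_exact = P * L / (E * A0) * np.log(2.0)

for n_elements in (1, 2, 4, 8, 16, 32):
    h = L / n_elements
    K = np.zeros((n_elements + 1, n_elements + 1))
    for e in range(n_elements):
        x_mid = (e + 0.5) * h                 # element midpoint
        k = E * A0 * (1 + x_mid / L) / h      # axial stiffness of this element
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0],
                                             [-1.0, 1.0]])
    F = np.zeros(n_elements + 1)
    F[-1] = P                                 # load at the free tip
    u = np.linalg.solve(K[1:, 1:], F[1:])     # node 0 is fixed, so it is removed
    rel_error = abs(u[-1] - u_exact) / u_exact
    print(f"{n_elements:3d} elements: relative tip error = {rel_error:.1e}")

In this example, each halving of the element size roughly quarters the error while roughly doubling the number of equations; in a three-dimensional building model, the same refinement multiplies the cost far faster.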

The engineer’s perspective

Therein lies the power and vulnerability of modern analysis. MSA and FEM, respectively, contribute the computational framework and element library for a universal method of analysis. Yet the approximation that enables this power necessarily introduces the vulnerability of error.

Modern analysis software has enabled the design of extremely complex structures, but it hasn’t lessened the engineer’s burden of ensuring the competency of the analysis.

Unfolded analysis map of the CCTV structural braced tube

 

Comments or questions for Serguei Bagrianski or Sarah Wesseler? Contact serguei.bagrianski@arup.com or sarah.wesseler@arup.com.

 

 
