The Fundamentals Still Matter

By: Mark H Goodrich – Copyright © 2008

[This presentation was developed for an audience of government test pilots – flying for the USAF, USN and NASA – and presented to a Flight Test Safety Workshop at Edwards AFB on 16 Jul 2008.]


For reasons that are corporate, regulatory, economic – or sometimes just fashionable – there are many influences upon the design, manufacture, modification, and maintenance of aircraft. And – no surprise to the people in the engineering flight test community – we will see substantially all of those influences at some point or points between concept and certification. Indeed, flight test is often the first to bring the harsh light of what is practical to the process. The concept people can be pretty esoteric. The marketing people most often want to believe that the laws of physics can be suspended. And, the user community is often the functional equivalent of the fellow who owns a $10,000.00 home entertainment system, but cannot set its clock. Flight test is often the bridge between these elements, responsible for reconciling the theoretical and the practical, checking and balancing design validity, and ensuring that minimum regulatory standards are met or exceeded.

Here at DFRC, and in other government-sponsored research facilities, one might be encouraged to think, “this is the purest of the environments in which design research and testing is conducted, and we are free from those ‘other influences’ that exist in the corporate-sponsored world”. That has a nice ring to it, but is – of course – more theory than it is reality. Is it more of a pure environment? Certainly. Is it free from influences that can distract from both the science and safety? Absolutely not.

Human nature is still a given in every circumstance, and that prehistoric hard-wire programming that we carry in our on-board computer – the human brain – taints our best efforts to be clinical with those “oh so hard to ignore” emotional inputs. Budgets are still influential and can color judgment about what is necessary for validity, and prudent for safety. And, of course, the fears, prejudices, and ambitions that are so basic to people generally create the same obstacles for you as for your more corporate cousins.

For those of us approaching the end of our careers, one undeniable reality is that we are once again at a crossroads where the guard is changing. In a recent aviation industry survey, 73% of respondents said that a shortage of trained personnel was their principal problem. We have seen an influx of new people before, but this time it is different in several material respects. When my peers were moving into the business – that stone-age period now known as “the 60s” – rock and roll was still “slow” music with words, television was black and white (for those households that had succumbed to its lure and purchased a set), we could fill the Chevy with gasoline for three dollars, and many from the more experienced generation before us were on-site to mentor our learning. The institutional memory of the agencies, services and corporations was passed from one generation to the next both formally and as tribal lore – around the water cooler, the coffee pot, the conference table and in the shade of a wing. We did not have to reinvent all of the wheels, and to the extent that we were wise enough to at least consider their counsel, our older brothers in arms allowed us the opportunity to profit from their scar tissue without need to suffer all of the same experiences.

And, here we are again – a tidal change in personnel accompanied by quantum leaps in the technologies available to our profession. A new group of young people is filling the chairs around us. They are every bit as bright as their predecessors, every bit as educated, and just as green to start.

New realities of the structures and budgets at both business and government have resulted in more attention to the analysis of cost over benefit in most projects and project components. Retirements from the “boomer” generation, furloughs, terminations and the mass buyout of pension and health benefits for those older, more expensive and more experienced hands – all of these factors have resulted in the outsourcing of functions traditionally maintained in-house. And, significantly, the ability of this new generation to enjoy the benefits of mentoring has thus been seriously eroded. Transition periods have been dramatically shortened, and outsourcing has created new challenges in quality assurance for the process.

This all results in a very direct challenge to the engineering flight test community. We have always been at the very center of the conscience of the aeronautical design and manufacturing business, but it is now ever more important for us to maintain the professional standards and responsibilities inherent in the work that we do. We must work – often outside our traditional reporting lines – to fill the gaps where others no longer enjoy the mentoring help of former generations.

We must work to avoid what has become all too commonplace – the “us and them” view of our workplace. In the airline business worldwide, we have been witness to an increasing isolation between flight crew, maintenance and management. In design and manufacturing, this is also extant – design, test, manufacturing and marketing functions in many cases do not see themselves as a team with a common goal, but rather in tension. And, spreading functions across the globe makes this more problematic.

It is important that we bring a steady and reasoned approach to our interactions with those who are new to the agency, new to the management in corporations with which we must interface, and new to the service and regulatory agencies, for they too must learn on the job, just as we did and do. It is important that we maintain the safety and standards of our work, and also help to build the next community of those who will carry on the work that we have enjoyed so much and for so long. It is time to “pay it forward”.

But it is also a time fraught with danger. Nothing is more basic to this business than that the fundamentals themselves play out over and over. It is usually right after people begin to believe that there is a “new paradigm”, and “the old rules no longer apply”, that one of those old rules rises up to bite us on our collective hind parts.

Winston Churchill said “there is nothing new – just history that one does not know” – true then and still true now. And so it is incumbent upon flight test to reassert itself as a primary check and balance within the design, certification and manufacturing business, both north and south from the flight line. The following are several modest observations, some of which I learned from my mentors before me, and others that I have paid for with my own scar tissue.

Fundamental 1: Falsified and inaccurate source data exists – everywhere.

One feature of operating within a government-sponsored program is the absence of pressure to cheat on the documentation. But, that does not insulate one from the effects, because so much of our work requires the use of source data for baseline purposes.

Your design and testing work is only as good as the source data and assumptions that underlie it. While budget constraints often produce pressures to make assumptions as to baseline data, this can be risky when the original engineering was less than valid. It also raises the potential for much higher costs when a program is halted in mid-stream, and source data must be purchased or baseline testing added in order to start once again with a set of assumptions that are valid.

The true facts are that even using “minimum design standards” as an assumed baseline is far less than predictable. Sometimes errors are made, but masked in service by safety margins. Sometimes marketing pressures cause shortcuts – or worse – to work their way into the design process. Sometimes issues with the supply of materials result in a manufactured product that does not match up with the blueprints.

Some years ago, we were asked to consult with a “re-engine” project for a popular jet transport. The original manufacturer refused to provide source data regarding a variety of issues from airflow volume through the cowlings and associated ducting to vibratory frequency and harmonic analyses. Obviously, baseline testing for these issues was an incredibly daunting and expensive proposition. The client decided to make the following assumption regarding induction airflow volumes: “Since no aviation product may be designed and manufactured below the ‘minimum design standards’ of the Federal Aviation Regulations, one can reasonably infer that use of those minimum standards as baseline assumptions is a conservative approach to design.” The regulator agreed.

Unfortunately, when testing began some months later, engine performance quickly revealed induction volume issues. The project was halted, and the originally recommended baseline testing was completed to validate volumes on an unmodified exemplar. As you may guess by now, the evaluation revealed the original product to have been certified at far less than minimum design standards, with very large and unanticipated costs incurred to essentially start over with the modification design plans. Some time later I had the opportunity to meet an engineer who had worked on the original product and asked the question – he confirmed that the pressures to certify and begin production essentially superseded the design and test program.

In another case, we were – through our associated accident investigation group – looking at a spate of inflight breakups for a popular high-performance single-engine airplane. When it became apparent that the tails were failing, we sought and ultimately obtained original engineering and test data. The data revealed a simple mathematical error some 15 years earlier to be the culprit. Safety margins were sufficient to mask the problem until – as the airplane became longer, heavier and equipped with larger engines – the margin was reduced to a failure point.

I was talking about that with a good friend of many years, who had been a senior engineer with a number of manufacturers between 1930 and 1980, or so. He related a story from his time as chief engineer for a general aviation manufacturer in the early 1950s, when an accident investigation revealed that a fairly new airplane was equipped with a tail assembly from an earlier model. Records revealed no evidence of any maintenance or repair, and he thus visited the assembly line to review the drawings being used by those who were building the new airplanes. The blueprints being used had been superseded several times, and when asked to explain, the section supervisor said, “If we changed the way we did things every time you guys sent down a new set of blueprints, we’d never get anything done.”

In another case, I had refused to sign off stability testing on a jet transport. Under production pressures, the company sought and obtained an administrative determination that an equivalent level of safety existed, and that compliance with minimum standards was not necessary. This determination was not readily discernible from any publicly available documents, and the manufacturer claimed that its design and test data was protected as a “trade secret”.

Most manufacturers operate under the so-called Delegation Option Authorization – DOA – which means that there is little or no review outside the manufacturer itself as to regulatory compliance for design, materials, testing or manufacture.

The lesson is that reliance on source data and assumptions can be risky. The laws of physics are not just a set of good ideas – they are called ‘laws’ for a reason, and those who engage in the folly of self-delusion about their applicability will ultimately be revealed. But, the revelation often comes at the expense of those who rely on the data for baseline purposes, only to find that all their testing must be accomplished a second time and after new baseline data is developed.

Fundamental 2: Regardless how many graduate engineers are assigned to a project, at least one member of the design validity team must see reality with the eyes of a blacksmith.

It seems that one result of the computer age has been the assumption by many that anything printed on computer paper carries a presumption of validity. Remember that, for all of the benefits that computers have brought to the aeronautical design and manufacturing business, they are still just dumb adding machines, dependent for their accuracy upon the sophistication and correctness of the programming and data input processes. They make mistakes, too, and at the speed of electron flow.

I remember well an older engineer’s observation some thirty years ago about a thin-walled austenitic stainless exhaust system that was the pride of the young computer-assisted design team. Management wanted to avoid the time and expense of normalizing the components between welding and installation. Having looked at the computer-generated predictions, and having listened to all of the arguments about how the material had been successfully used on engines producing ten times the horsepower of that planned for the instant application, he looked it over and opined that, “None of those engines were turbocharged. Their exhaust port studs were more substantial. You have no intercooler, you are using excess fuel to cool the top end, and you have told the computer that the gases will expand and cool after passing the exhaust valves, which is a questionable assumption. I don’t care what your computer says, the material is going to heat right through its intermittent service temperature, bulge and crack right there”. Thirty-five of the first fifty units bulged, cracked and failed within an inch of his designated spot, the computer engineering data notwithstanding.

We see much the same type of presumed reliance upon computational fluid dynamics programs, handling quality simulations, and a variety of other computer-generated information. That it is a computer printout, or that we have generated a flow diagram in yellow, red and blue, does not render the data valid.

We were asked to evaluate and fly a test program. Because the airframe manufacturer was not a part of the program, there was no original test data available. An engineering consulting firm had been hired to develop an analysis to support the approval of the test plan, and issuance of an experimental airworthiness certificate. One thousand sheets of paper with color graphs, flow patterns and equations equal to the final in a 300-level engineering mechanics class were presented. It was just that, absent any actual data for a starting point, the entire analysis was constructed on assumptions. It was all very pretty, but it was not real. When our requests for a more realistic analysis and the installation of instrumentation to gauge airframe responses were denied – too expensive and not required – we declined to participate in the program. The report from a flight engineer who resigned after the first test flight was that outboard engines were doing “figure-eights” on their pylons and wing bending was sufficient to breach a fuel tank.

I often reflect on one of the less obvious benefits of having completed engineering school before the advent of hand-held calculators. For five years, every single day included a slide rule class in which one hour of problem solving was scored on the basis of how many correct answers were calculated during the period. For those of you who have never used a slide rule, the first consideration is always the order of magnitude. The answer is always displayed as an interpolated numerical value, and the operator must insert the decimal point. Whether the answer is .01750, 1.750, or 1,750 is left to reasoning. Thus, the starting point of every calculation is to understand the order of magnitude of the ultimate answer. On a very regular basis, I now see younger engineers offer up a solution that cannot possibly be correct. They have seen it printed out on the hand-held, and presume validity, when taking a step back and thinking for a moment would reveal the answer to be impossible by orders of magnitude.
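
The slide-rule habit can even be captured in code. The sketch below is a minimal, hypothetical sanity check – the function name, the tolerance, and the dynamic-pressure example are my own illustrations, not anything from a real test program – that compares a computed result against a back-of-the-envelope estimate before the result is accepted.

```python
import math

def magnitude_check(computed, rough_estimate, tolerance_decades=1.0):
    """Return True when `computed` agrees with a back-of-the-envelope
    estimate to within `tolerance_decades` powers of ten."""
    decades_off = abs(math.log10(abs(computed)) - math.log10(abs(rough_estimate)))
    return decades_off <= tolerance_decades

# Dynamic pressure at ~154 m/s (about 300 knots) at sea level: q = 0.5 * rho * V^2
q = 0.5 * 1.225 * 154.0 ** 2        # roughly 14,500 Pa
print(magnitude_check(q, 10_000))   # estimate of "a few tens of kPa" agrees
print(magnitude_check(q, 100))      # a two-decade decimal slip is flagged
```

The point is not the particular formula, but the discipline: decide the expected order of magnitude first, and treat any answer that disagrees by decades as a programming or input error until proven otherwise.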

Remember that the computer only knows what it has been told – nonsense in means nonsense squared out. Think about where the calculation is likely to take you, and bring the common sense of a blacksmith to bear as a part of your validity analysis.

Fundamental 3: One test equals one-thousand expert opinions.

At Edwards some four decades and more ago, I remember seeing that written on a sign. At the time I was too young to appreciate its fundamental truth, but a copy of that sign now hangs behind my desk. In fact, it hangs right next to a sign that says, “How come everyone wants to be a pilot, but no one wants to fly?”

No one knows better than the people in this room how easily things can be rationalized. In the face of test program time and expense, with an expert opinion validated by people up and down the hallway, it is all too easy to climb on board the “rationalization express”, and accept opinions in lieu of testing.

It was no less than Albert Einstein who – when asked why he was not more positive about the ultimate correctness of his theory of relativity – observed that, “In theory, theory and practice are always the same, but in practice, theory and practice are never the same”. Those are wise words from the wisest of men.

Budget constraints – as popular in the government sector as anywhere else – are often the fly in the ointment of deciding whether a test is necessary. Beware the “mob effect”, where people begin to make rationalizations about the need for testing. In terms of getting along, it is always easier to agree with the building consensus. In some cases, expressing a different opinion may even pose a career risk – we’ve all worked for that guy at some point. And, when that train of consensus begins to move out of the station, there is always pressure to climb aboard. Try to maintain an independent evaluation and make your case. There are many times when the best course of action is to just salute and say, “Yes sir”, but there are a few where you must hold your professional ground.

Fundamental 4: Minimum regulatory standards can seldom be legitimate design targets.

I am always amazed to arrive on the first day of a design and development project to see the black or white board at the end of the room with the words “project targets” at the top, and a recitation of applicable minimum regulatory standards thereunder. You may think that – given the more pure approach to research that defines the mission at DFRC – this is not a terribly relevant issue. I disagree, because the following approach has been used for the original design and testing of products that you are operating.

History instructs that test results present as data scatter – hopefully not too scattered – both above and below the target values. Therefore, every data point below the target is, by definition, less than the minimum standard. This leaves the engineers to try and rationalize – there is that word again – that it is a legitimate engineering practice to average the data. The obvious rhetorical question that must be asked is, “If four airplanes test below the minimum, but six test above it, are those that test below it in fact actually above it in regulatory terms?” Try making that argument to an accident investigation board, or to a judge and jury.
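
The averaging fallacy is easy to demonstrate with made-up numbers. The values below are invented purely for illustration – ten hypothetical test articles against an arbitrary minimum of 100 – but the arithmetic shows how a comfortable fleet average can coexist with several articles that individually fail the standard.

```python
MINIMUM = 100.0  # hypothetical regulatory minimum for some tested quantity

# Illustrative scatter from ten test articles: four below the minimum, six above.
results = [96.0, 97.5, 98.0, 99.0, 101.0, 102.0, 103.5, 104.0, 105.0, 106.0]

mean = sum(results) / len(results)
below = [r for r in results if r < MINIMUM]

print(f"fleet mean = {mean:.1f}")   # the average comfortably "passes"
print(f"{len(below)} of {len(results)} articles below minimum: {below}")
```

The mean here is 101.2 – above the minimum – yet four of the ten articles are, by definition, non-compliant. Averaging does not change that.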

The true facts are that proper engineering practice requires the design target be based on how and where the product will be used. For example, a wing design for a low-altitude observation profile must take into account far more in terms of turbulence and gust-loading than one for high-altitude flights averaging twelve hours and more in duration, but the applicable minimum standards comprehend neither extreme very well.

I remember well some of the early problems when turbo-charging first provided the ability to operate at high altitudes for single-engine general aviation airplanes. Flutter suppression had been accomplished in the low-altitude regime by high levels of aileron cable tension. Suddenly, flutter was an issue, because the design team had not previously had to consider the effects on stainless steel cable tension when the airframe shrinks more than the control cables under the very cold temperatures at high altitude. Neither the minimum design standards nor the myopic view and limited experience of the design team addressed the crucial issue.

Also significant is the “over-design rule” that teaches one must balance the weight and costs of a modest over-design against the inherent benefits that usually result in terms of continuing airworthiness, reduced fatigue damage and reduced maintenance costs.

Finally, reflect on the following quotation from FAA Order 2100.13 (dated 01 June 1976, and incorporating language from its predecessor, FAA Order 2100.1 dated 18 May 1962). Noting that “minimum standards” are established as adequate to meet basic requirements, the order states unequivocally that such minimum standards “do not constitute the optimum to which the regulated manufacturers should strive” in the design, materials and manufacture of their aviation products.

Standards are not specifically developed for individual products and operating environments. That one standard may be adequate for a trainer does not imply that it is also adequate for a multi-engine cabin-class airplane that will operate under instrument conditions and at high altitudes. That one standard may be adequate for a business jet that will operate up to 350 hours a year at altitudes up to FL330, does not imply that it is adequate for a 900,000 pound airliner that will operate 350 hours a month at altitudes up to FL450.

The lesson for manufacturers is to design for the application and then test to ensure that minimum regulatory standards have been met or exceeded. The lesson at DFRC is to question design validity when you are relying upon conformance with at least minimum design standards in the original product.

Fundamental 5: Problems more often have a simple answer than one that is complex.

I marvel at the modern trend to throw fast answers or replacement parts at a problem, presuming that it must be some technologically complicated issue, rather than thinking about it using a methodology we used to call “lowest common denominator trouble-shooting”.

I recently arrived to perform some post-modification tests on a large two-engine transport type that had been converted from passenger to cargo configuration. I was told that they had been trying to resolve a pressurization problem for ten days and twelve flights, but without success. Two pressurization controllers had been replaced, an outflow valve had been replaced, door seals had been replaced, two pack valves had been replaced, and the back-pressure isolation valve on the APU bleed line replaced, as well. I asked if they had checked the small motor-driven valve used for ground avionics cooling. Answer, no. With a ten-minute ground test, I verified that a $120.00 motor was the problem.

An even more dramatic example was a recent experience with a four-engine transport type coming out of a heavy check. In doing the ground checks, I noted a dual failure of the total air temperature source inputs. When I inquired about the fact that it was an unresolved issue, I was told that they had replaced both TAT sensor units, traced wiring through to the air data computers, and finally replaced both air data computers, but without any apparent effect on the problem. Since both inputs were failed, and since there is only one place on the airplane where both systems pass through a common switch – the nose gear extension switch that prevents overheating while on the ground – I inquired as to whether the switch had been checked for both condition and operation. Answer, no. I led the team to the nose gear, borrowed a putty knife and scraped one quarter inch of crud off the proximity plate, wiped it down with some methyl-ethyl-ketone, and fixed the problem with zero dollars in parts and 10 minutes of labor.

Consider the simple answers first, and the esoteric things only if required thereafter.
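
That ordering discipline can be sketched in a few lines. The entries below are hypothetical – the check names are loosely modeled on the two anecdotes above, and the dollar and labor figures are invented for illustration – but they show “lowest common denominator” trouble-shooting as a simple sort: run the cheapest, quickest checks first, escalating only when they pass.

```python
# Hypothetical diagnostic steps with invented parts costs ($) and labor (minutes).
checks = [
    ("replace both air data computers",              40000.0, 480),
    ("replace TAT sensor units",                      1500.0, 120),
    ("ground-test the avionics cooling valve motor",   120.0,  10),
    ("scrape and clean the proximity switch plate",      0.0,  10),
]

# "Lowest common denominator" ordering: cheapest first, then quickest.
for name, dollars, minutes in sorted(checks, key=lambda c: (c[1], c[2])):
    print(f"${dollars:>8.2f} / {minutes:>3} min  {name}")
```

Sorted this way, the ten-minute, zero-dollar proximity-plate check runs before anyone touches a $40,000 air data computer – which is exactly the order in which the real problems above were eventually solved.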

Fundamental 6: Remember the distinctions between academics and the real world.

In school, we were taught lessons, and then given a test. In the real world, we are often given the test first, and from the way in which we do or do not handle it, learn the lesson. In that second example, the lives of people are at stake.

Fundamental 7: Manufacture and operate behind the technological state of the art.

History in the aviation and other manufacturing businesses instructs over and over that one should manufacture in volume well behind the leading edge of technology, in order to gain experience slowly and on a limited scale with new materials, design concepts and technologies. Yet, each new generation of managers and engineers wants to test that lesson of history.

When counseling that some experience with the new technology should be obtained through its limited use on collateral components over time, I am frequently told that new design and manufacturing paradigms using modern computer technologies have eliminated the need for such unnecessary caution.

The extraordinary expansion in the use of composites is a premier example. In service, we see myriad composite failures on airplanes with modest times in service – delaminations, hinge anchor separations, loss of stiffness, and surface erosions. Despite the absence of dependable and accurate non-destructive testing capabilities for many of these materials, their incorporation into products has left the station like an express train. Our advice to more slowly incorporate these materials into designs, allowing for time in service to measure their real-world performance and gain experience in their use and repair, goes unheeded for the most part.

I am often told that I am a “dinosaur”, and that my cautionary advice is too conservative.  Maybe, but over four decades in this business has taught me – above all else – that the fundamentals still matter.

Mark H Goodrich – Copyright © 2008

The Fundamentals Still Matter was presented 16 Jul 2008  to the NASA Dryden Flight Research Center Safety Workshop at Edwards AFB, California