I’d like to obtain diverse, recent, experience-based perspectives on
a thesis that I first encountered in the mid-1990s:
A book about “developing software for high-integrity and
safety-critical systems” (1) suggested that, according to the best
evidence then available, extensive “tool support” can compensate for
the deficiencies of an unsafe programming language and make it
suitable (indeed, positively preferable) for use in unforgiving
that demand exceptionally high-quality software.
Since then, there has been considerable improvement in the analysis
methods underlying software tools for unsafe languages such as C and C++.
However, there have also been countervailing trends, e.g.,
improvements in optimizing compilers that can expose latent defects
in software that previously “worked” (accidentally) but runs afoul
of increasingly strict readings of the language standard. Recently I
studied one major variety of unsafety thoroughly enough to document
traps for the unwary.
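To illustrate the kind of trap I mean, here is a minimal example of
my own (not from the book), assuming a modern optimizing compiler
such as gcc or clang:

    #include <limits.h>
    #include <stdio.h>

    /* This overflow check "worked" under older, less aggressive
     * compilers, but x + 1 is undefined behavior when x == INT_MAX,
     * so a modern optimizer may assume the overflow never happens
     * and fold the comparison to 0, silently deleting the check. */
    int will_overflow(int x) {
        return x + 1 < x;
    }

    int main(void) {
        /* Typically prints 1 at -O0 (two's-complement wraparound)
         * but 0 at -O2, where the comparison is optimized away. */
        printf("%d\n", will_overflow(INT_MAX));
        return 0;
    }

In my experience, compiling this with -fsanitize=undefined (UBSan)
produces a “signed integer overflow” runtime error at the call,
which is exactly the kind of tool support I have in mind.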
Furthermore, I have experience writing business-critical production
code in both Java and C/C++, but I’m seeking broader knowledge that
may include other safe languages.
My question is: To what extent have trends in software analysis
tools, for both safe and unsafe languages, undermined (or amplified)
the inherent attractions of safe languages, particularly for writing
“high-integrity” software? I’m interested in both static and dynamic
analysis tools.
I ask because I’m planning projects with high-integrity requirements
and I want to choose languages based on current expert knowledge.
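To make the static/dynamic distinction concrete with the snippet
above: gcc’s -Wstrict-overflow can report at compile time that it is
assuming the overflow away, while UBSan flags it only on a run that
actually passes INT_MAX. Either way, the conforming rewrite such
tools steer you toward looks roughly like this (a sketch of my own;
the function name is hypothetical):

    #include <limits.h>

    /* A standards-conforming overflow predicate: test the bounds
     * *before* adding, so the check itself involves no undefined
     * behavior and no optimizer may legally remove it. */
    int add_would_overflow(int a, int b) {
        return (b > 0 && a > INT_MAX - b) ||
               (b < 0 && a < INT_MIN - b);
    }

The price is that every arithmetic site needs this ceremony by hand,
which is precisely the ergonomic cost that safe languages amortize.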
Many thanks in advance for your thoughts.
(1) Les Hatton, Safer C: Developing Software for High-Integrity and Safety-Critical Systems. McGraw-Hill, 1994. ISBN 0-07-707640-0.