For Good Measure: Curves of Error
One hears often enough that the error rate for software is so many flaws per thousand lines of code or the like. A fraction of those flaws turn out to create vulnerabilities. A fraction of those vulnerabilities get exploited. And "we" learn about a fraction of those exploits. Let's call it:
S * F * V * E * P
In other words, we create S lines of new code, F of which are wrong, V of which are vulnerabilities, E of which are weaponized, and P of which come to our attention. Let's stipulate one thing: arguing about what constitutes a line of code is irrelevant. While we're at it, let's stipulate that everything here is subject to argument about definitions and what goes in what set.
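As an added example (not code from the column), here is the funnel worked through in Python with constructed sample values: it computes the count surviving each stage of S * F * V * E * P, inverts the last stage to estimate weaponized exploits from the ones we observe, and shows why the multiplicative form matters, namely that a relative error in any one fraction compounds through the whole product.

```python
"""Funnel model S * F * V * E * P.

S is lines of new code; F, V, E, P are the fractions named in the column
(wrong, vulnerable, weaponized, observed). All sample values below are
constructed for illustration, not measurements.
"""
from math import prod


def funnel(S, F, V, E, P):
    """Return the count surviving each stage of the funnel."""
    flaws = S * F                 # lines that are wrong
    vulns = flaws * V             # wrong lines that are vulnerabilities
    weaponized = vulns * E        # vulnerabilities that get exploited
    observed = weaponized * P     # exploits that come to our attention
    return {"lines": S, "flaws": flaws, "vulnerabilities": vulns,
            "weaponized": weaponized, "observed": observed}


def observed_to_weaponized(observed, P):
    """Invert the last stage: estimate the weaponized exploits that exist
    from the ones we actually saw, given the observation fraction P."""
    return observed / P


def compounded_error(relative_errors):
    """Relative error of the product when each factor is off by e_i.

    Exact: (1 + e_1)(1 + e_2)... - 1.  First order: e_1 + e_2 + ...
    The errors add, they do not average out, so a 10% miss in each of
    four fractions is a roughly 40% miss in the final count.
    """
    exact = prod(1 + e for e in relative_errors) - 1
    first_order = sum(relative_errors)
    return exact, first_order


if __name__ == "__main__":
    # Constructed values: 1M lines, 2% wrong, 10% of those vulnerable,
    # 5% of those weaponized, half of those observed.
    stages = funnel(1_000_000, 0.02, 0.10, 0.05, 0.50)
    for name, count in stages.items():
        print(f"{name:16s} {count:12.1f}")
    print("weaponized estimated from observed:",
          observed_to_weaponized(stages["observed"], 0.50))
    print("error compounding, 10% off in each of 4 fractions:",
          compounded_error([0.1, 0.1, 0.1, 0.1]))
```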