Hey! I'm David, the author of the Real-World Cryptography book. I'm a crypto engineer at O(1) Labs working on the Mina cryptocurrency; previously I was the security lead for Diem (formerly Libra) at Novi (Facebook), and a security consultant for the Cryptography Services practice of NCC Group. This is my blog about cryptography, security, and other related topics that I find interesting.

# CFF22 trip report posted July 2022

This Monday I was at the Crypto Finance Forum 2022 in Paris. It was my first time going to a crypto conference that revolves around traditional finance and regulations (aka the current banking system). It was a really interesting experience, and here's a list of things that stood out to me during the event. Note that it was hard to aggregate things, as panels were often going in all sorts of directions:

Dress code. Am I going to be the only one in shorts and flip flops? Is everyone going to be wearing suits? Thankfully, no! There were a number of underdressed people, in addition to the suits of course.

Government-approved. A minister showed up. That’s the closest I ever got to the French government :D

Open banking. Some talks about how open banking laws in Europe created a boom of innovation and a fintech sector, and similarities with cryptocurrencies. (me: Indeed, cryptocurrencies seem to allow services to directly interact with the financial backbone and for users to share data across services more easily). Employees of banks at the time were leaving for fintech companies, now we're seeing a shift: they're leaving for crypto companies.

> Open Banking emerged from the EU PSD2 regulation, whose original intent was to introduce increased competition and innovation into the financial services sector. PSD2 forces banks to offer dedicated APIs for securely sharing their customers' financial data for account aggregation and payment initiation. (bankinghub.eu)

AML. A panelist mentioned that cryptocurrencies have better tooling for AML (anti money laundering) and customer protection. (me: I guess currently banks will do their best to collect data and perhaps report to government agencies if they think they should. Not ideal compared to cryptocurrencies like Bitcoin, but that's ignoring privacy coins like Zcash.)

MICA. Throughout the conference, MICA kept being mentioned. It's a European regulation to "protect investors and preserve financial stability by creating a regulatory framework for the cryptoassets market that allows for continued innovation and maintains the attractiveness of the crypto currencies sector which is evolving quickly". It apparently was proposed at a time when NFTs were not really that big, so it explicitly excludes NFTs from its scope. It applies to the 27 states of the EU, which is quite practical, as service providers won't have to comply with many different laws to serve many jurisdictions (although some panelists said that MICA would still throttle crypto innovation in Europe). Of course, the UK seems to be going a different way. Interestingly, while some of the US regulations were mentioned, nothing was said about Asian regulations (but I guess the audience did not care about that).

Stable coins. A euro stablecoin was mentioned, with projects like EUROC from Circle. People deplored the fact that a US company was at the source of this new stablecoin, citing it as one more example of the issues with Europe's slow adoption of yet another technology.

Traditional players. A panelist mentioned that some traditional players, like Paypal and Visa, were investing in their own stablecoin projects, although banks are still in observation mode (me: talking to some bankers at the event, it seems like banks have tiny crypto teams that have been investigating the field for years, but nothing concrete has been approved. What is changing is that more and more clients are asking to invest in crypto, and so banks have to look into it.)

Not part of the discussion. Interestingly, the term "layer one" was never uttered during the whole event. There was one panel that had a regulation specialist working at NEAR, but that's it. It seemed like cryptocurrencies were seen as black boxes that regulations could not touch; only service providers can be regulated. It seems really weird to me that there's no talk of having cryptocurrencies meet regulations in the middle, as there's a lot that a cryptocurrency can do at the protocol layer (perhaps because no regulations exist for currencies, as one of the panelists mentioned?).

For example, if you try to off-ramp, and convert some crypto into fiat at your favorite exchange (called a CASP in MICA, and a VASP in FATF, of course they had to choose a different name), they will surely ask you where the funds came from. You might have got the crypto by selling a ticket for an event (apparently a lot of the tickets for EthCC were paid in USDC), and that sounds good enough, but what about the buyer? Who says that they didn't get their tokens selling weapons or something?

In the current banking system, it is assumed that every transaction goes through a bank and so things are recorded eventually (with the exception of cash and other hard-to-move-lots-of-money ways). But in crypto, that's not true anymore. So it seems to me that regulators just don't understand this, or are OK with it (which seems weird to me, considering how much they seem to want to know).

The interesting thing is that you can probably think of schemes that work and are privacy-enhanced by using zero-knowledge proofs. For example, a layer one could force every transaction to provide a zero-knowledge proof that some approved entity has recorded the transaction and its metadata. (Perhaps a signature from that entity is enough though.)

(Smart contracts were rarely talked about as well.)

Reconciliation. One of the big topics of the event was reconciliation between the current/traditional financial world and crypto. This is of course a very interesting topic to me, as I worked on the Diem (Libra) project, which was heavily betting on that (before it got shut down). In crypto conferences I go to, I mostly see updates about projects, but I seldom hear about banks or payment networks and how they could switch to using crypto. Interesting remark from a panelist: it will be on crypto actors to train and learn about the financial world and its regulations, and on the regulators to try to adapt to cryptocurrencies.

Environmental impact. There was a lot of confusion and misunderstanding about the environmental impact of cryptocurrencies. I felt like some of the panelists sounded like Bitcoin maximalists, repeating many of the false narratives you can hear from the Bitcoin community (proof-of-stake is less secure and decentralized than proof of work, proof of work is special because it directly converts energy into money, with proof of work there is a real cost to mine, proof of work actually needs to be compared to the consumption of the current financial system (greenwashing?), etc.)

(me: there are many problems with proof of work: it's a race to the top, you're constantly incentivized to use more energy; it's slow; it is less secure (51% attacks have happened regularly whereas proof of stake hasn't had any attacks as far as I know); it hinders interoperability between blockchains or light clients; etc.)

"Impact investing" was the term du jour. It seems to refer to investments "made into companies, organizations, and funds with the intention to generate a measurable, beneficial social or environmental impact alongside a financial return" (wikipedia).

Keywords. "metaverse", "web3", "NFTs", "gaming", "supply-chain". People seem to want to appear in-the-loop. Even the minister mentioned web3. It's interesting as I would have expected bankers and others to be more skeptical about some of these and focus more on the fast, secure, and interoperable settlement capabilities.

The need for regulation in Europe. Interestingly, it was mentioned several times that even though Europe has a lot of good engineers, regulations have already slowed down crypto adoption and innovation. Some panelists debated exactly how much of the world's total assets are in crypto. It sounded like it was negligible, and that no issues in crypto could trigger a global crisis, and so it was not the right time to regulate. A panelist even said that it was "questionable" to do so, as nobody has any idea what they are trying to regulate today. We are lucky to witness the burst of a new ecosystem, and like every new ecosystem the beginning is pure chaos. We need to make sure that we have the chance to experiment and fail.

One of the panelists said that Terra (the stablecoin) failing now was good news, as it would have been worse if a stablecoin had failed while being used at a massive scale in the world economy. (Although, building confidence takes years, and losing trust takes days.) Another panelist argued that Terra was centralized, and that this is why it failed (me: that's not my understanding), and then mentioned Tether resisting attacks (me: it seemed like people have confidence in Tether, which is interesting). Panelists also mentioned DAI, saying that regulators had to understand these new types of algorithmic stablecoins (not backed by actual fiat money, but referencing the price of fiat). Someone argued that without liquid assets to back a stablecoin, you're facing a crisis.

Another interesting remark came from an economist who didn't seem to like stablecoins or crypto in general: he said he liked crypto when it was used for ICOs. I'd really like to know why that specific use-case was interesting to him.

Some of the discussion focused on how all of the largest banks and payment networks in the world are American. "In Europe, we are always wondering what is the best way, in the US they're wondering if something works". (It seems like Europe is still debating if they need crypto or not.)

Stablecoins. The recent crashes around stablecoins have led to a lot of regulatory attention. The FSB (Financial Stability Board, an international body with many member jurisdictions), for example, said this month:

> Stablecoins should be captured by robust regulations and supervision of relevant authorities if they are to be adopted as a widely used means of payment or otherwise play an important role in the financial system. A stablecoin that enters the mainstream of the financial system and is widely used as a means of payments and/or store of value in multiple jurisdictions could pose significant risks to financial stability in the absence of adequate regulation. Such a stablecoin needs to be held to high regulatory and transparency standards, maintain at all times the reserves that preserve stability of value and meet relevant international standards. (https://www.fsb.org/2022/07/fsb-issues-statement-on-the-international-regulation-and-supervision-of-crypto-asset-activities/)

A panelist said that the ECB (European central bank) had failed its main goal: stability of prices, and that it should not be able to regulate currencies. There was then some question about legal tender, and if stablecoins could be considered legal tender at some point. Crypto was defined as a partial currency, not a full currency, because in most countries it is still not legal tender.

> Legal tender is anything recognized by law as a means to settle a public or private debt or meet a financial obligation, including tax payments, contracts, and legal fines or damages. The national currency is legal tender in practically every country. A creditor is legally obligated to accept legal tender toward repayment of a debt. (...) Legal tender laws effectively prevent the use of anything other than the existing legal tender as money in the economy. (https://www.investopedia.com/terms/l/legal-tender.asp)

Central banks and regulators are in the same boat here, and they're going to resist stablecoins and crypto. The role of a central bank is to preserve financial stability (the US Federal Reserve was given as an example, as it was created right after a crisis). Blockchain as a technology can be decoupled from blockchain as cryptocurrencies, and countries (central banks?) could adopt the technology.

There was some talk about MICA putting all stablecoins in the same basket, which was apparently annoying. Someone said that tokenizing a money market fund would make the best stablecoin (me: what's a money market fund?)

CBDCs. CBDCs are central bank digital currencies. They are governments' and their central banks' answer to private currencies (libra was mentioned several times as something that could have been THE private currency, a currency not directly backed by a government).

It seems like Banque de France (France's central bank) was already working on a wholesale CBDC, but not one for retail use (me: retail always refers to normal people using the digital coin directly, like with metamask). It was mentioned that this week, Russia's central bank was launching a digital coin. China is also working on a CBDC (me: probably inspired by the prevalence of wechat, which has taken over digital payments in China (you can't pay in cash anymore in China)). The upside, from China's perspective, is being able to track its citizens more and mix social score (their new way to tame their citizens) with money, and also having a way to make the yuan more present in the world's economy. The US, on the other hand, already has many stablecoins running on blockchains.

Panelists seemed worried that Europe could be left behind, once again. (Someone even mentioned Europe not having been bullish on the technology of the Internet fast enough.)

A lot of the discussion focused on how the USD was THE dominant currency in the world, probably because the US was the first mover. 60% of central bank reserves are in USD (2% for the yuan). Apparently there always tends to be a single dominant currency internationally. China is trying to be the first mover in the CBDC space, and perhaps will take advantage of that to get a bigger slice of the world currency pie.

There were also some concerns that central banks might want to replace crypto with their own CBDC solutions, which might be bad for privacy and be used as surveillance tools.

The debate then focused on the definition of a currency (a means of payment, a medium of exchange, a unit of account, a store of value, ...). One of the panelists argued that the euro was not a store of value anymore, and that the ECB had abandoned its mission to issue a store-of-value object (and that people in general had stopped using currencies as stores of value).

Again, there were a lot of worries that the US was going to take too much of an advantage, and that Europe would end up using US services built on crypto (you can't talk to a pension fund in the US today that hasn't invested in crypto, apparently). Interesting phrasing from a panelist: "When you're thinking of investing on a thesis, you shouldn't wait for an antithesis to invest. If you wait for it to be a success then it's too late to invest".

Finally, someone mentioned that nobody talked about the future of cash. But that was at the end of a panel.


# What are zkVMs? And what's the difference with a zkEVM? posted July 2022

I've been talking about zero-knowledge proofs (the "zk" part) for a while now, so I'm going to skip that and assume that you already know what "zk" is about.

The first thing we need to define is: what's a virtual machine (VM)? In brief, it's a program that can run other programs, usually implemented as a loop that executes a number of given instructions (the other program). Simplified, a VM could look like this:

```rust
let stack = Stack::new();
loop {
    let inst = get_next_instruction();
    apply_instruction(inst, stack);
}
```
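To make this concrete, here's a minimal runnable version of such a VM, with a made-up two-instruction set (the instruction names and the `run` function are illustrative, not taken from any real VM):

```rust
// A tiny stack-based VM: a loop that fetches and applies instructions.
#[derive(Clone, Copy, Debug)]
enum Inst {
    Push(u64), // push a constant onto the stack
    Add,       // pop two values, push their sum
}

fn run(program: &[Inst]) -> Vec<u64> {
    let mut stack: Vec<u64> = Vec::new();
    // the VM loop: fetch the next instruction and apply it to the stack
    for inst in program {
        match inst {
            Inst::Push(x) => stack.push(*x),
            Inst::Add => {
                let b = stack.pop().expect("stack underflow");
                let a = stack.pop().expect("stack underflow");
                stack.push(a + b);
            }
        }
    }
    stack
}

fn main() {
    // the program is an input to the VM, not part of the VM itself
    let program = [Inst::Push(2), Inst::Push(3), Inst::Add];
    assert_eq!(run(&program), vec![5]);
}
```

The key point is that `run` itself is fixed, while the program it executes is just data fed to it.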

The Ethereum VM is the VM that runs Ethereum smart contracts. The original list of instructions the EVM supports, and the behavior of these instructions, was specified in 2014 in the seminal yellow paper (that thing is literally yellow) by Gavin Wood. The paper seems to be continuously updated so it should be representative of the current instructions supported.

A zkVM is simply a VM implemented as a circuit for a zero-knowledge proof (zkp) system. So, instead of proving the execution of a program, as one would normally do in zkp systems, you prove the execution of a VM. As such, some people say that non-VM zkps are part of the FPGA approach, while zkVMs are part of the CPU approach.

Since programs (or circuits) in zkp systems are fixed forever (like a binary compiled from some program, really), our VM circuit actually implements a fixed number of iterations of the loop (you can think of it as unrolling the loop). In our previous example, it would look like this if we wanted to support programs of 3 instructions tops:

```rust
let stack = Stack::new();
let inst = get_next_instruction();
apply_instruction(inst, stack);
let inst = get_next_instruction();
apply_instruction(inst, stack);
let inst = get_next_instruction();
apply_instruction(inst, stack);
```

EDIT: Bobbin Threadbare pointed out to me that with STARKs, as there is no preprocessing (unlike plonk), there's no strict limit on the number of iterations.
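For programs shorter than the fixed bound, a common trick (an assumption on my part here; the details vary from system to system) is to pad with no-op instructions. A runnable sketch:

```rust
// Sketch of the "unrolled" VM: the loop runs for a constant number of
// iterations, and shorter programs are padded with a Noop instruction.
const MAX_INSTS: usize = 3; // this circuit supports at most 3 instructions

#[derive(Clone, Copy)]
enum Inst {
    Push(u64),
    Add,
    Noop, // padding instruction: does nothing
}

fn run_fixed(program: &[Inst]) -> Vec<u64> {
    assert!(program.len() <= MAX_INSTS, "program too large for this circuit");
    let mut stack: Vec<u64> = Vec::new();
    // the loop bound is a constant, mirroring the unrolling above
    for i in 0..MAX_INSTS {
        let inst = program.get(i).copied().unwrap_or(Inst::Noop);
        match inst {
            Inst::Push(x) => stack.push(x),
            Inst::Add => {
                let b = stack.pop().expect("stack underflow");
                let a = stack.pop().expect("stack underflow");
                stack.push(a + b);
            }
            Inst::Noop => {}
        }
    }
    stack
}

fn main() {
    // a 1-instruction program still "runs" for MAX_INSTS iterations
    assert_eq!(run_fixed(&[Inst::Push(7)]), vec![7]);
    assert_eq!(run_fixed(&[Inst::Push(2), Inst::Push(3), Inst::Add]), vec![5]);
}
```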

This is essentially what a zkVM is; a zk circuit that runs a VM. The actual program's instructions can be passed as public input to that circuit so that everyone can see what program is really being proven. (There are a number of other ways to pass the program's instructions to the VM if you're interested.)

There are a number of zkVMs out there; I know of at least three important projects:

• Cairo. This is used by Starknet and I highly recommend reading the paper which is a work of art! We also have an experimental implementation in kimchi we call turshi.
• Miden. This is a work-in-progress project from Polygon.
• Risczero. This is a work-in-progress project that aims at supporting the set of RISC-V instructions (a popular standard outside of the blockchain world).

All of them support different instruction sets, so they are not compatible.

A zkEVM, on the other hand, aims at supporting the Ethereum VM. It seems like there is a lot of debate going on about the actual meaning of "supporting the EVM", as three zkEVMs were announced this week.

So one can perhaps divide the field of zk VMs into two types:

• zk-optimized VMs. Think Cairo or Miden. These types of VMs tend to be much faster as they are designed specifically to make it easier for zkp systems to support them.
• real-world VMs. Think RiscZero supporting the RISC-V instruction set, or the different zkEVMs supporting the Ethereum VM.

And finally, why would anyone try to support the EVM? If I had to guess, I would come up with two reasons: if you're Ethereum, it could allow you to create proofs of the entire state transition, from genesis to the latest state of Ethereum (which is what Mina does); and if you're not Ethereum, it allows your project to quickly copy/paste projects from the Ethereum ecosystem (e.g. uniswap) and steal developers from Ethereum.


# Panel: how ZKPs and other advanced cryptographic schemes can be deployed to solve hard problems in decentralized systems posted July 2022

I spoke at a panel at Privacy Evolution with Aleo and Anoma, which coincidentally was my first time ever speaking at a panel :D It was actually a lot of fun and I'm thinking that I should do this again. The topic was "how ZKPs and other advanced cryptographic schemes can be deployed to solve hard problems in decentralized systems".

You can watch the whole thing, or skip directly to the panel I was in at 4:10


# Smart contracts with private keys posted July 2022

In most blockchains, smart contracts cannot hold a private key. The reason is that everyone on the network needs to be able to run the logic of any smart contract (to update the state when they see a transaction). This means that, for example, you cannot implement a smart contract on Ethereum that will magically sign a transaction for Bitcoin if you execute one of its functions.

In zero-knowledge smart contracts, like zkapps, the situation is a bit different in that someone can hold a smart contract's private key and run the logic associated to the private key locally (for example, signing a Bitcoin transaction). Thanks to the zero-knowledge proof (ZKP), anyone can attest that the private key was used in a correct way according to the contract, and everyone can update the state without knowing about the private key.

Only that one person can use the key, but you can encode your contract logic so that they can respond to requests from other users. For example:

• some user calls smart_contract.request(): "Hey, can you sign this Bitcoin transaction?"
• the key holder calls smart_contract.response(): "Here's the signature"

The elephant in the room is that you need to trust the key holder not to leak the key or use it themselves (for example, to steal all the Bitcoin funds it protects).

The first step towards a solution is to use cryptographic protocols called multi-party computation (MPC) (see chapter 15 of Real-World Cryptography). MPC allows you to split a private key between many participants, thereby decentralizing the usage of the private key. Thanks to protocols like distributed key generation (DKG), the private key is never fully present anywhere, and as long as enough of the participants act honestly (and don't collude) the protocol is fully secure. This is what Axelar, for example, implements to allow different blockchains to interoperate.

This solution is limited, in that it requires a different protocol for every different usage you might have. Signing is one thing, but secret-key cryptography is about decrypting messages, encrypting to channels, generating random numbers, and much more! A potential solution here is to mix MPC with zero-knowledge proofs. This way, you can essentially run any program in a correct way (the ZKP part) where different parts of the program might come from different people (the MPC part).

A recent paper (2021) presented a solution to do just that: Experimenting with Collaborative zk-SNARKs: Zero-Knowledge Proofs for Distributed Secrets. As far as I know, nobody has implemented such a concept onchain, but I predict that this will be one of the next big things to unlock for programmable blockchains in general.


# The Web PKI 2.0 posted June 2022

The Web Public Key Infrastructure (PKI) is what's behind the green lock in your browser's URL bar. Actually, as I'm writing this, I realize that it's not even green anymore.

Now, instead of having a green lock that stands out, you get a "Not Secure" that stands out if you visit a non-HTTPS (plain HTTP) website.

In this post I will briefly explain what this lock means, what the foundations for the security of the web are, and how blockchain technology can turn this into a better world.

## The web PKI, in brief

When you talk to a website like Google, under the hood, your browser uses a protocol called TLS to secure the connection (and display the lock icon).

Part of that protocol is simply about making use of good old cryptography. Your browser performs a key exchange with Google's public key, and then encrypts the connection with some authenticated encryption algorithm. (If these words don't mean much to you, I introduce these concepts as well as TLS in my book Real-World Cryptography.)

The other part of the protocol is about identities. As in: "how can I trust that this secure connection I just established is really with Google?" The only way we found out how to create trust between people's browsers and random websites on the web is by having a number of organizations manage these identities. These organizations are called "Certificate Authorities", and they are in charge of verifying the owner of a domain (for example, google.com) before signing their public keys. The artifact they produce is usually referred to as a certificate.

Your browser trusts a number of these Certificate Authorities by default. They are hardcoded in the software you downloaded. When you connect to google.com, not only do you create a secure connection with some public key, but you also verify that this public key is signed by one of these Certificate Authorities that you trust.

Without this, you'd have to have all the public keys of all the websites on the internet baked in your browser. Not very practical.
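To illustrate the structure of that check (and only its structure), here's a toy sketch; `toy_sign`, the keys, and the names are all made up for this example and bear no resemblance to real X.509 verification:

```rust
// Toy model of the browser-side check. Real browsers parse X.509
// certificates and verify real signatures; `toy_sign` is a made-up
// stand-in, not an actual signature scheme.
use std::collections::HashMap;

struct Cert<'a> {
    domain: &'a str, // e.g. "google.com"
    public_key: u64, // stand-in for the site's real public key
    issuer: &'a str, // which Certificate Authority vouched for it
    signature: u64,  // stand-in for the CA's signature over the cert
}

// stand-in "signature" over (domain, public key) with the CA's key
fn toy_sign(ca_key: u64, domain: &str, public_key: u64) -> u64 {
    domain
        .bytes()
        .fold(ca_key ^ public_key, |acc, b| acc.wrapping_mul(31).wrapping_add(b as u64))
}

// the browser only trusts certificates issued by CAs in its hardcoded store
fn browser_accepts(trust_store: &HashMap<&str, u64>, cert: &Cert) -> bool {
    match trust_store.get(cert.issuer) {
        Some(&ca_key) => toy_sign(ca_key, cert.domain, cert.public_key) == cert.signature,
        None => false, // unknown issuer: reject
    }
}

fn main() {
    let trust_store = HashMap::from([("Toy CA", 7u64)]);
    let cert = Cert {
        domain: "google.com",
        public_key: 42,
        issuer: "Toy CA",
        signature: toy_sign(7, "google.com", 42),
    };
    assert!(browser_accepts(&trust_store, &cert));

    // a cert from an issuer that's not in the trust store is rejected
    let rogue = Cert { issuer: "Rogue CA", ..cert };
    assert!(!browser_accepts(&trust_store, &rogue));
}
```

The takeaway: the browser never checks the site's key directly, only that someone in its (small, hardcoded) trust store vouched for it.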

This system, called the web PKI, is chaotic. Your browser ends up trusting hundreds of these authorities, and they sometimes misbehave (and sign certificates they should not).

When a Certificate Authority misbehaves, you sometimes have to revoke a number of the certificates it has signed. In other words, you need a way to tell browsers (or other types of clients) that the certificate they're seeing is no longer valid. This is another can of worms, as no list of all the currently valid certificates exists anywhere. You'd have to check with the Certificate Authority themselves whether they've revoked the certificate (and if the Certificate Authority itself has been banned... you'll need to update your browser).

## Detecting attacks, Certificate Transparency to the rescue

To manage this insanity, Certificate Transparency (CT) was launched: an append-only log of certificates that relies on users (e.g. browsers) reporting what they see and gossiping with one another to make sure they all see the same log. Websites (like google.com) can use these logs to detect fraudulent certificates that were signed for their domains.

While Certificate Transparency has had some success, there are fundamental problems with it:

• it relies on clients (for example, browsers) to do the right thing and report what they see
• it is useful only to those who use it to monitor their domain (paranoids and large organizations who can afford security teams)
• it can only detect attacks, not prevent them

With the advance of blockchain-related technologies, we can take a different look at Certificate Transparency and notice that while it is very close to what a blockchain fundamentally is, it does not rely on a consensus protocol to ensure that everyone sees the same state.

## Preventing attacks, blockchain to the rescue

If you think about it, a blockchain (touted as "a solution in search of a problem" by some technologists) solves exactly our scenario: it allows a set of organizations to police one another in order to maintain some updatable state. Here, the state is simply the association between websites and their public keys.

In this scenario, everyone sees the same state (there's consensus), and clients can simply own their state without going through a middle man (by being the first to register some domain name, for example).

There are many technical details to what I claim could be the web PKI 2.0. A few years back, someone could have simply retorted: "it'll be too slow, and energy inefficient, and browsers shouldn't have to synchronize to a blockchain".

But today, the latest consensus systems like Bullshark and consensus-less systems like FastPay are not only green, but boast 125,000 and near-infinite transactions per second, respectively.

Not only that, but zero-knowledge proofs, as used in cryptocurrencies like Mina allow someone to receive a small proof (in the order of a few hundred bytes to a few hundred MB, depending on the zero-knowledge proof system) of the latest state of the blockchain. This would allow a browser to simply make a query to the system and obtain a short cryptographic proof that the public key they're seeing is indeed the one of google.com in the latest state.

Again, there are many details to such an implementation (how do you incentivize the set of participants to maintain such a system, who would be the participants, how do you prevent squatting, how do you prevent spam, etc.), but it'd be interesting to see if such a proof of concept can be realized in the next 5 years. Even more interesting: would such a system benefit from running on a cryptocurrency, or would the alternative (in cryptocurrency lingo: a permissioned network based on a proof of authority) be fine?

# ZK FAQ: What's a trusted setup? What's a Structured Reference String? What's toxic waste? posted June 2022

In proof systems, provers and verifiers rely on a common set of parameters, sometimes referred to as the common reference string (CRS).

In some proof systems (for example, the ones that rely on pairings), a dangerous setup phase produces these common parameters. Dangerous, because it generates random values, hides them inside the parameters, and then must get rid of the random values so that no one can ever find out about them. The reason is that knowing these values would allow anyone to forge invalid proofs, invalid proofs that verifiers would accept. Such values are sometimes referred to as toxic waste, and because the individuals performing the setup have to behave honestly, we call the setup a trusted setup.

By the way, since this common set of parameters has some hidden structure to it, it is usually referred to as structured reference string (SRS).

In the past, ceremonies (called powers of tau ceremonies) have been conducted where multiple participants collaborate to produce the SRS. Using cryptographic constructions called multi-party computations (MPC), the protocol is secure as long as one of the participants behaves honestly (and destroys the random values they generated as part of the ceremony).
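To give an idea of what such an SRS looks like, here's a toy sketch of a single-participant "powers of tau" setup; the modulus, generator, and secret below are all made up for illustration, and real ceremonies work over elliptic-curve groups, not a small prime field:

```rust
// Toy "powers of tau" in the multiplicative group of F_101.
// Purely illustrative and utterly insecure.
const P: u64 = 101;

// modular exponentiation by squaring
fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

fn setup(g: u64, degree: u32) -> Vec<u64> {
    // the dangerous part: sample a random secret tau
    // (fixed here only so the demo is reproducible)
    let tau: u64 = 42; // this is the toxic waste
    // publish g^(tau^0), g^(tau^1), ..., g^(tau^degree): the SRS
    let srs = (0..=degree).map(|i| pow_mod(g, tau.pow(i))).collect();
    // ...then tau must be destroyed: anyone who knows it can forge proofs
    srs
}

fn main() {
    let srs = setup(2, 3);
    // the string is "structured": each element is the previous one raised
    // to the power tau, but recovering tau from the SRS is a discrete log
    println!("srs = {:?}", srs);
}
```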

It seems to be accepted in the community that such ceremonies are a pain to run. When mistakes happen, new ceremonies have to take place, which is what infamously happened to Zcash.


# What's two-adicity? posted May 2022

Some elliptic curves (related to zero-knowledge proofs I believe) have been claiming high 2-adicity. But for some reason, it seems a bit hard to find a definition of what this term means. And oddly, it's not a complicated thing to explain. So here's a short note about it.

You can see this being mentioned for example by the pasta curves:

> They have the same 2-adicity, 32, unlike the Tweedle curves that had 2-adicity of 33 and 34. This simplifies implementations and may assist in square root performance (used for point decompression and internally to Halo 2) due to a new algorithm recently discovered; 32 is more convenient for this algorithm.

Looking at the definition of one of its fields in Rust, you can see that it is defined specifically via a trait related to FFTs:

```rust
impl FftParameters for FqParameters {
    type BigInt = BigInteger;

    // a multiplicative subgroup of size 2^32 exists in this field
    const TWO_ADICITY: u32 = 32;

    // a generator of that subgroup: a primitive 2^32-th root of unity
    #[rustfmt::skip]
    const TWO_ADIC_ROOT_OF_UNITY: BigInteger = BigInteger([
        0x218077428c9942de, 0xcc49578921b60494, 0xac2e5d27b2efbee2, 0xb79fa897f2db056,
    ]);
}
```

So what's that? Well, simply put, a two-adicity of 32 means that there's a multiplicative subgroup of size $2^{32}$ in the field (more precisely, in its multiplicative group). And the code above also defines a generator $g$ for it, such that $g^{2^{32}} = 1$ and $g^i \neq 1$ for $i \in [[1, 2^{32}-1]]$ (so it's a primitive $2^{32}$-th root of unity).

Lagrange's theorem tells us that if we have a group of order $n$, then the orders of its subgroups divide $n$ (and since our subgroup is cyclic, there is exactly one subgroup for each divisor). So in our case, we have subgroups of every power-of-2 order, up to the 32nd power of 2.

Finding any of these subgroups is pretty straightforward as well. Notice that:

• let $h = g^2$, then $h^{2^{31}} = g^{2^{32}} = 1$, and so $h$ generates a subgroup of order $2^{31}$
• let $h = g^{2^2}$, then $h^{2^{30}} = g^{2^{32}} = 1$, and so $h$ generates a subgroup of order $2^{30}$
• and so on...
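To make this concrete, here's a toy example in the small prime field $\mathbb{F}_{97}$ (chosen purely for illustration), whose two-adicity is 5 since $97 - 1 = 96 = 2^5 \cdot 3$:

```rust
// Toy example in the prime field F_97: its two-adicity is 5,
// because 97 - 1 = 96 = 2^5 * 3.
const P: u64 = 97;

// modular exponentiation by squaring
fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

fn main() {
    // 5 generates the full multiplicative group (order 96), so
    // g = 5^(96/32) = 5^3 is a primitive 2^5-th root of unity
    let g = pow_mod(5, 3);
    assert_eq!(pow_mod(g, 32), 1); // g^(2^5) = 1
    assert!(pow_mod(g, 16) != 1);  // ...and no smaller power of 2 works

    // squaring g halves its order: h = g^2 generates the subgroup of order 2^4
    let h = pow_mod(g, 2);
    assert_eq!(pow_mod(h, 16), 1);
    assert!(pow_mod(h, 8) != 1);
    println!("g = {}, h = {}", g, h); // g = 28, h = 8
}
```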

In arkworks you can see how this is implemented:

```rust
let size = n.next_power_of_two() as u64;
let log_size_of_group = ark_std::log2(usize::try_from(size).expect("too large"));

// start from the generator of the largest power-of-2 subgroup, and square it
// until it generates the subgroup of the desired (power-of-2) size
let mut omega = Self::TWO_ADIC_ROOT_OF_UNITY;
for _ in log_size_of_group..Self::TWO_ADICITY {
    omega.square_in_place();
}
```

This allows you to easily find subgroups of different power-of-2 sizes, which is useful in zero-knowledge proof systems, as FFT optimizations apply well to domains that are powers of 2. You can read more about that in the mina book.

# Are system thinkers right? And why I left security posted April 2022

Niall Murphy recently wrote about The Curse of Systems Thinkers (Part 1). In the post he made the point that some people (people who he calls "System thinkers" and who sound a lot like me to be honest) can become extremely frustrated by chaotic environments, and will seek to better engineer them as they engineer code.

He ends the post with a pessimistic take (which I disagree with):

If you can't get the ball rolling on even a small scale because no-one can see the need or will free-up the required resources, then you're free: they're fucked. Give yourself permission to let the organization fail

A while ago, Magoo suggested I read The Phoenix Project, which is a book about engineering companies. Specifically, it's a novel that seeks to teach you lessons through an engaging story instead of a catalogue of bullet points. In the book, the analogy is made that any technology company is like an assembly line, and thus can be made efficient by using the lessons already learned decades ago by the manufacturing industry.

The book also contains a side story about the failure of the security lead, which at the time really spoke to me. The tl;dr is that the security person was too extreme (like many security engineers who have never worked on the other side) and could not recognize that the business needs were more urgent and more important than the security needs at the time. The security person was convinced he was right, and that the others didn't care enough (reminiscent of Niall Murphy's blogpost), and consequently he lived a miserable life.

The point I'll be trying to make here is that it's all the same. Security, devops, engineering... it's all about trade-offs and about finding what works well at a given time.

Ignoring yak shaving (which everyone does, and thus needs to be controlled), how much time and effort should be spent specifying protocols, documenting code, and communicating ideas? How much time and effort do we really need to spend writing clean code and refactoring?

I don't think there's a good or bad answer. The arguments for both sides are strong:

Moving slow. Maintaining your own code, or having people maintain and extend your code, becomes harder and harder for the team over time. You will switch projects, then come back to code you haven't seen in a while. As the team grows, and as people come and go, the situation amplifies. Obviously some people are better than others at reverse engineering code, but it's generally a hard problem.

Another argument is that some people on the team are not necessarily good programmers, or perhaps don't even know how to code, so it becomes hard/impossible for them to review or contribute in different ways. For example, by writing proofs with formal analysis tools or with a pen and paper, or to discuss the design with you, etc.

Complexity and rushed code obviously lead to security issues as well. That's undeniable.

Moving fast. On the other hand, you can't spend 90% of your time refactoring and doing things the_right_way™. You need to ship at some point. Business needs are often more important, and companies can go bankrupt by taking too much time to launch products. This is especially true during some stages of a company, in which it is in dire need of cash.

Furthermore, there are a ton of examples of companies growing massively while building on top of horrible stacks. Sometimes these companies can stagnate for years due to the amount of spaghetti code and complexity they're built on, and due to the fact that nobody is able to make changes effectively. But when this happens, codebases get rewritten from scratch anyway, which is not necessarily a bad thing. This is what happens with architecture, for example, where we tend to leave houses and buildings the way they are for very long periods of time, and demolish and rebuild when we really want to make significant changes.

Eventually, the decision to move faster or slower is based on many factors. Many people work well in chaos and the system engineers might have to adapt to that.

That being said, extremes are always bad, and finding the right balance is always the right thing to do. But to find the right balance, you need extremists who will push and pull the company in different directions. Being a fanatic is a consuming job, and this is why you get a high turnover rate for such individuals (or blogposts like Niall Murphy telling you to let the organization fail).

This is the reason I personally left security.

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man. -- George Bernard Shaw, Maxims for Revolutionists

comment on this story