The ‘Web Proof Techniques’ series covers the technical implementations of various Web Proof protocols. The first article in the series was dedicated to TLSNotary, and this article focuses on Origo. A future article will cover TEEs.
Welcome back to the Pluto blog, where today, we’ll delve into a topic that may be keeping your local cryptographer up at night:
What is a Web Proof proxy protocol, and how does the Origo protocol achieve performant Web Proofs?
This post may be particularly interesting for readers looking to develop a deeper understanding of the underlying machinery involved in proving the contents of TLS sessions.
Up first, we'll run through a quick context refresher on why you would want to prove the contents of TLS sessions at all, then we'll cover the basic idea of how this can be achieved with a proxy server. Finally, we'll explore the technical details of the Origo Web Proof protocol, which we at Pluto are putting into production, providing cryptographic infrastructure that enables developers to efficiently prove web content.
Readers already conceptually familiar with Web Proofs may find it expedient to skip to the Proxy Web Proofs section.
Why Web Proofs?
But first, why would you want to prove the contents of a TLS session?
If you were to share the contents of your TLS transcript with a third party, that party would have no way to detect whether you forged that transcript. Web Proofs are proofs of Data Provenance, allowing users to prove the contents of their TLS session.
Web Proofs have broad potential applications:
- Smart contract developers rely on centralized oracles to post data on-chain. Some machine has to post that data on-chain; that machine is an extra point of failure for hackers to compromise. Web Proofs remove this point of failure, allowing the posting machine to prove the data source and authenticity.
- To demonstrate reputation between marketplaces; for example, allow a seller with good reputation on eBay and Etsy to bootstrap their reputation on Amazon Marketplace.
- Proofs of Personhood are reputation scores aggregated over social network presence, and other online measures of unique human behavior. PoPs may become an important filter against spam, botting, and fraud.
- Allow users to demonstrate commonalities (e.g. connections in common) to one another, without relying on an intermediating application.
- Gray markets, like video game marketplaces and secondary ticketing marketplaces - e.g. allow a user to prove that they bought their ticket directly from an artist, and not from a scalper.
- In peer-to-peer networks, it is currently difficult or impossible for a node to prove that it has correctly forwarded a received message. Web Proofs have the potential to play a transformative role in peer-to-peer network arbitration games.
Aside: Web Proofs made suspiciously easy
Cryptographically speaking, Web Proofs are incredibly simple to construct, with just a tiny change to the TLS protocol. For the cryptographically inclined, take a moment to consider how you might change TLS to allow a client to prove the contents of a server response.
Digital signatures are an asymmetric cryptographic primitive for exactly this purpose. Given a message m, the server could simply return (m, σ), where σ = Sign(sk, m) is a signature over m under the server's private key sk.
Then, any verifier could obtain the server's public key from a certificate authority, verify the signature, and we're done.
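As a concrete (if hypothetical) sketch of this idea, here is what signing and verifying a response might look like using Ed25519 via the pyca/cryptography package. The message and key handling here are illustrative assumptions, not part of any real TLS flow:

```python
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical: the server's long-term signing key (in reality this would be
# the key attested to by a certificate authority).
server_private_key = ed25519.Ed25519PrivateKey.generate()
server_public_key = server_private_key.public_key()

# The server signs its response m and returns (m, signature).
m = b'HTTP/1.1 200 OK\r\n\r\n{"balance": 42}'
signature = server_private_key.sign(m)

# Any verifier holding the server's public key can check the pair.
# verify() raises cryptography.exceptions.InvalidSignature on a forged message.
server_public_key.verify(signature, m)
print("signature verifies")
```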
Why doesn't the TLS specification require that servers sign messages?
Even if changing TLS were an option, it would be incredibly expensive to add generally unnecessary cryptographic operations. Modern CPUs and fast signature schemes like Ed25519 can compute a signature in around 100k CPU cycles (see the Appendix for a script timing an OpenSSL Ed25519 signature).
On a 3GHz machine:

100,000 cycles ÷ (3 × 10⁹ cycles/second) ≈ 33μs

That is, about 33μs to sign a message. That's way faster than computing a zero-knowledge proof, but an impossibly large computational overhead for all web traffic.
Putting 33μs in context: in 2021, AWS IAM handled 400 million requests per second, so let's conservatively ballpark total global requests at 1 billion per second. Requiring even the most efficient signatures on all TLS messages would then add at least 33,000 seconds of additional global cryptographic server computation every second!
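To spell out that back-of-envelope arithmetic in a few lines of Python (the request-rate figure is the rough ballpark above, not a measured number):

```python
# Back-of-envelope estimate of the global cost of signing every TLS response.
cycles_per_signature = 100_000        # fast Ed25519 implementation, roughly
cpu_hz = 3e9                          # a 3GHz machine
seconds_per_signature = cycles_per_signature / cpu_hz   # ~33 microseconds

global_requests_per_second = 1e9      # conservative ballpark from above
extra_compute_per_second = global_requests_per_second * seconds_per_signature

print(f"{seconds_per_signature * 1e6:.0f} us per signature")
print(f"{extra_compute_per_second:,.0f} CPU-seconds of signing per wall-clock second")
```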
If we narrow our scope from "change TLS" to "change what my server does", the future is here! Unfortunately, we've arrived at the land of possible-yet-useless, at least for Web Proofs. Web Proofs can't rely on every web server to change just because we want them to; we can't rely on servers to do additional work just in case someone wants to create a Web Proof.
Proxy Web Proofs
Recall that a Web Proof allows the client to prove the contents of a server response.
It would be excellent if we could achieve that without introducing a third party into the system, but for reasons explored in the prior section, insisting that all servers change the way they communicate is a non-starter. So we introduce a third party to the TLS flow: a proxy server.
client <-> proxy <-> server
only the best of diagrams for this audience
TLS background speedrun
We won't go into depth about TLS here, but it helps to have a rough outline of the TLS handshake (for a more detailed guide, see the excellent Illustrated TLS 1.3):
- Client Hello - The client says hello to the server, telling the server which protocol versions and cipher suites the client prefers, along with the client's public key(s)
- Server Hello - The server responds with a public key and protocol and cipher suite information.
- Then, a few messages irrelevant to us are passed, relating to backwards compatibility with TLS 1.2
Both parties may now compute a shared secret, from which they derive symmetric keys used for further encryption and authentication between client and server.
Note that, since symmetric cryptography is used for authentication and encryption, a malicious client can forge encrypted and authenticated messages and claim that the server sent them. This is what introducing a proxy aims to solve.
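To make that concrete, here is a minimal sketch (using AES-GCM via the pyca/cryptography package, with made-up keys and messages) showing that anyone who holds the symmetric key can produce a ciphertext that authenticates perfectly well; nothing in the record layer distinguishes a client-forged record from a genuine server record:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical shared record-layer key: both the client and server hold it.
key = AESGCM.generate_key(bit_length=128)

# A "forged" record created by the client, never sent by the server.
forged_plaintext = b'{"balance": 1000000}'
nonce = os.urandom(12)
forged_record = AESGCM(key).encrypt(nonce, forged_plaintext, None)

# Decryption and tag verification succeed, exactly as for a real server record.
assert AESGCM(key).decrypt(nonce, forged_record, None) == forged_plaintext
print("forged record authenticates under the shared key")
```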
Introduce a proxy
The proxy's role is to forward traffic passing between client and server, and record the ciphertext it observes. Because the proxy saves ciphertext content, the client can no longer forge arbitrary ciphertext to claim as the server response.
That might make more sense in the context of what happens next. The basic picture of creating a Web Proof looks like this:
- Client-server handshake communication: the client begins passing handshake messages to the server through the proxy. The proxy saves the ciphertext.
- Client-server record layer communication: the client and server exchange whatever record-layer messages they need. The proxy saves the ciphertext.
- Client computes a Web Proof: the client computes a proof that the plaintext P is the correct decryption of the observed ciphertext C.
- This proof may simply disclose all of P, or prove something slightly more involved; the client may only want to prove some section of the entire message P, so the client may instead prove that a subset P' is the decryption of a particular section C' of C, where P' ⊆ P and C' ⊆ C.
- Technical quibble: as we discuss in the next section, the proof must also demonstrate that the client provided the correct key in their proof.
- Proxy signs the proof: if the ciphertext C observed by the proxy is identical to the ciphertext provided in the proof, the Proxy signs the proof.
- Simplification: in cases where the proof consumer would accept simply a signature from the Proxy, rather than checking the proof themselves, the Proxy may instead verify the proof itself and publish a signature over the hash of the client's proof (see the sketch after this list).
- Verify the proof: the proof and signature (or in the simple case, just the signature) may be published by the client as a Web Proof, to be verified for any further application.
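As a rough sketch of the simplified flow above, assuming an Ed25519 proxy key and a placeholder verify_proof function standing in for whatever proof system is used (both are illustrative assumptions, not Pluto's implementation):

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519


def verify_proof(proof: bytes, observed_ciphertext: bytes) -> bool:
    """Placeholder for verification of the client's zero-knowledge proof
    against the ciphertext the proxy recorded."""
    raise NotImplementedError("stand-in for the real proof system")


def proxy_attest(proxy_key: ed25519.Ed25519PrivateKey,
                 proof: bytes,
                 observed_ciphertext: bytes) -> bytes:
    """If the client's proof checks out against the ciphertext the proxy saw,
    return the proxy's signature over the hash of the proof."""
    if not verify_proof(proof, observed_ciphertext):
        raise ValueError("proof does not verify against observed ciphertext")
    proof_digest = hashlib.sha256(proof).digest()
    return proxy_key.sign(proof_digest)


# A downstream consumer who trusts the proxy only needs the proxy's public key:
#   proxy_public_key.verify(attestation, hashlib.sha256(proof).digest())
```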
In broad strokes, this is how a proxy might be implemented to allow a client to produce a Web Proof.
Origo efficiently implements a Web Proof proxy
The client must convince the proxy of the following (sketched schematically after the list):
- Server Authenticity (Certificate Authority check): the server's claimed public key must be accompanied by a CA signature, or else the identity of the server may be spoofed. The CA signature check is expensive to compute in-circuit and, via an optimization discussed below, may be computed entirely out-of-circuit. That is, the CA signature check does not need to be included in the client's zk-proof, and is instead verified out-of-circuit by the proxy.
- Correct Key Derivation: the proof must demonstrate that the record-layer key the client supplies as a private input is the correct key. This is proven by disclosing key information from the client-server handshake, which is used to prove the correct derivation of the record-layer key. The zk-proof maintains the privacy of the client's session keys, while allowing the client to prove that those keys were derived correctly from the handshake.
- Correct Decryption: finally, the ciphertext observed by the proxy must be consistent with the claimed plaintext, via the key derived in the previous step (rather than some arbitrary key chosen by a cheating client).
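Putting those three requirements together, the overall statement has roughly the following shape. This is only a schematic sketch of the public and private inputs as described in this post, with hypothetical names, not Origo's actual circuit definition:

```python
from dataclasses import dataclass


@dataclass
class PublicInputs:
    """Values the proxy (and any later verifier) can see."""
    ciphertext: bytes          # record-layer ciphertext observed by the proxy
    shts: bytes                # Server Handshake Traffic Secret, disclosed to the proxy
    h2: bytes                  # transcript hash used for the handshake traffic secrets
    h3: bytes                  # transcript hash used for the application traffic secrets
    claimed_plaintext: bytes   # the (sub)section of plaintext the client discloses


@dataclass
class PrivateInputs:
    """Values that stay hidden inside the zero-knowledge proof."""
    handshake_secret: bytes    # HS: binds the disclosed SHTS to this TLS session


# The statement, informally: the record-layer key derived from HS (consistent
# with the public SHTS, h2, h3) decrypts `ciphertext` to `claimed_plaintext`.
# The CA check on the server certificate happens out-of-circuit at the proxy.
```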
Correct key derivation
One approach to proving correct key derivation would be to prove the correct derivation of the Diffie-Hellman key-exchange parameters and the DHE shared secret. This step, in addition to verification of the server certificate signature in the zk-proof, would be expensive; fortunately, neither is necessary to compute in-circuit.
The Key Independence Property in the context of TLS 1.3 ensures that the leakage of one key does not compromise the security of other keys derived within the same protocol run, and may be exploited to simplify the key derivation circuit. Specifically, each key is derived using the HMAC-based Key Derivation Function (HKDF) with a unique context and input, ensuring that no two keys are directly related or can be used to infer one another.
The significance of the Key Independence Property is that the client may disclose the Server Handshake Traffic Secret (SHTS) to the proxy, giving the client a shortcut for proving the correctness of further keys without compromising the Handshake Secret (HS) or further traffic secrets. The Proxy may use the disclosed SHTS to verify the Server Finished message (SF). This reduces the client's proving burden.
That is, the Proxy computes out of circuit (a code sketch of this check follows the list):
- fk_S = HKDF-Expand-Label(SHTS, "finished", "") - the server finished MAC key
- SF = HMAC(fk_S, H_t) - the expected Server Finished message, where H_t is the transcript hash of the handshake up to that point
- SF == SF_observed - a check that the server finished message is as expected
- Verify(pk_CA, cert_S) - a check that the server certificate is valid.
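Here is a minimal, non-circuit sketch of that check in Python, using only the standard library. The HKDF-Expand-Label and HMAC steps follow RFC 8446 (assuming a SHA-256 cipher suite); the SHTS, transcript hash, and observed Server Finished values are placeholders, not real session data:

```python
import hashlib
import hmac
import struct


def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """HKDF-Expand (RFC 5869) with SHA-256."""
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]


def hkdf_expand_label(secret: bytes, label: str, context: bytes, length: int) -> bytes:
    """TLS 1.3 HKDF-Expand-Label (RFC 8446, Section 7.1)."""
    full_label = b"tls13 " + label.encode()
    hkdf_label = (struct.pack(">H", length)
                  + bytes([len(full_label)]) + full_label
                  + bytes([len(context)]) + context)
    return hkdf_expand(secret, hkdf_label, length)


def proxy_checks_server_finished(shts: bytes, transcript_hash: bytes,
                                 observed_sf: bytes) -> bool:
    """Out-of-circuit check: recompute the Server Finished MAC from the
    disclosed SHTS and compare it to the one observed in the handshake."""
    fk_s = hkdf_expand_label(shts, "finished", b"", 32)   # server finished key
    expected_sf = hmac.new(fk_s, transcript_hash, hashlib.sha256).digest()
    return hmac.compare_digest(expected_sf, observed_sf)


# Placeholder inputs, for illustration only:
shts = bytes(32)
transcript_hash = hashlib.sha256(b"handshake messages so far").digest()
print(proxy_checks_server_finished(shts, transcript_hash, bytes(32)))
```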
And the Client proves in-circuit, with HS as a private input:

Key Derivation(HS; H2, H3, SHTS)

That is, the client proves that the disclosed SHTS, and the application traffic secrets from which the record-layer keys are derived, are correctly computed from HS via the TLS 1.3 key schedule, using the public transcript hashes H2 and H3.
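For reference, here is what those key-schedule relations look like natively (outside any circuit), again using only the Python standard library and assuming a SHA-256 cipher suite; the handshake secret and transcript hashes below are placeholders:

```python
import hashlib
import hmac
import struct


def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869) with SHA-256."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()


def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]


def derive_secret(secret: bytes, label: str, transcript_hash: bytes) -> bytes:
    """TLS 1.3 Derive-Secret = HKDF-Expand-Label over a transcript hash."""
    full_label = b"tls13 " + label.encode()
    info = (struct.pack(">H", 32)
            + bytes([len(full_label)]) + full_label
            + bytes([len(transcript_hash)]) + transcript_hash)
    return hkdf_expand(secret, info, 32)


# Placeholder inputs: the handshake secret (the client's private input in the
# proof) and the public transcript hashes H2 and H3.
hs = bytes(32)
h2 = hashlib.sha256(b"ClientHello..ServerHello").digest()
h3 = hashlib.sha256(b"ClientHello..server Finished").digest()

shts = derive_secret(hs, "s hs traffic", h2)          # disclosed to the proxy
dhs = derive_secret(hs, "derived", hashlib.sha256(b"").digest())
ms = hkdf_extract(dhs, bytes(32))                     # master secret
cats = derive_secret(ms, "c ap traffic", h3)          # client app traffic secret
sats = derive_secret(ms, "s ap traffic", h3)          # server app traffic secret
# The record-layer keys and IVs are then expanded from cats / sats.
print(shts.hex(), cats.hex(), sats.hex(), sep="\n")
```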
Finally, the client may prove the consistency of the decryption of the observed record-layer ciphertext with the claimed plaintext.
In summary
In this post we discussed:
- what Web Proofs are, and a brief survey of practical applications
- how Web Proofs could be made trivially easy by asking the server to sign its responses, and why that doesn't work
- how introducing a proxy between client and server allows a client to construct a Web Proof
- some details of what precisely goes into the proof, via the Origo protocol
Unfortunately, the cost to corrupt even a well-secured proxy server is not infinite. We are exploring techniques to architect a range of Web Proof infrastructure options, capable of meeting requirements at all scales.
Pluto is on a mission to solve the challenges of an increasingly exploitative, centralized, and plutocratic internet. If you believe in this mission and would like to help build and deploy applied cryptography tools at scale, check out our open positions. You can also DM us on Twitter or Telegram.
If you are interested in building with Pluto, please see our developer docs and our open-source repositories on Github.
Appendix: timing an OpenSSL Ed25519 signature in seconds and cycles
This script generates an Ed25519 keypair, signs a message 1000 times, logs how long that took, and verifies the signature to observe that the signature was correctly constructed. Divide the logged time by 1000 to get the time for one signature.
Log your CPU clock speed with `lscpu | grep "MHz"`, or `lscpu -e=cpu,mhz` for a report of each core's current clock speed.

```bash
# Generate an Ed25519 key pair
openssl genpkey -algorithm ED25519 -out private.pem

# Extract the public key
openssl pkey -in private.pem -pubout -out public.pem

# Create a sample message file
echo "This is a test message for Ed25519 signing" > message.txt

# Perform signing operation multiple times and measure time
echo "Performing Ed25519 signing benchmark..."
time for i in {1..1000}; do
  openssl pkeyutl -sign -inkey private.pem -rawin -in message.txt -out signature.bin
done

# Verify the signature (optional, to ensure it works)
openssl pkeyutl -verify -pubin -inkey public.pem -rawin -in message.txt -sigfile signature.bin

# Clean up
rm private.pem public.pem message.txt signature.bin
echo "Benchmark complete."
```
My output:
```
real    0m3.816s
user    0m2.504s
sys     0m1.283s
```
My laptop's median core clock speed is around 2000MHz, so we compute the cycles per signature:

3.816 s ÷ 1000 signatures ≈ 3.8 ms per signature
3.8 ms × 2 × 10⁹ cycles/second ≈ 7.6 million cycles per signature
The OpenSSL script above is single-threaded, so this is a reasonable estimate of how many cycles my not-particularly-fast laptop takes per signing invocation. Note that each loop iteration also pays the cost of starting a new OpenSSL process, so the raw signing operation itself is considerably cheaper than this figure.