Casablanca: The Web of Trusted Updates

01 Mar 2016

2017 Preface

The following is a description of a project idea that came out of the 2016 Apple vs. FBI events. I thought about exploring it as the last project in my thesis, but ultimately decided not to. However, it looks like some other researchers were motivated along similar lines and produced CHAINIAC (I can’t say whether it was the same events that spurred this research or not, although the Apple vs. FBI story is cited as one source of motivation). My understanding of CHAINIAC is that it is pretty similar to what I outline below, except it uses blockchain ideas instead of the classic web-of-trust model, and provides source-to-binary assurances. And, of course, CHAINIAC actually exists and has explored, specified, and solved all the implementation details of a system like this: I really hope it gets deployed.

1 Example of Use

My encrypted messaging app, Signal, has made an update to its Android app and pushed it out. How do I know my phone is receiving the same update as everyone else, or that everyone else is even receiving an update? I could be getting a targeted malicious update that compromises my secure messaging, with Signal having been hacked or forced to sign the update by a governing authority! But it’ll be okay, because I have Casablanca. With Casablanca, my phone automatically downloads the Signal update I was pushed, but before it can apply it and update the app, Casablanca intercedes. Casablanca pauses the update operation before it affects my Signal app, and pulls the new data aside. It hashes the new data and signs it with my private key, then sends out the hash to a list of other trusted Casablanca users, my web of trust. Within a few hours, the majority of these trusted users have responded to my update hash by saying that yes, they also received an update with the same hash; or they respond saying that they never received an update, or they received an update that was different in some way from mine. In the former case, Casablanca assumes the update is okay since many others also received it, and releases it back to my Signal app to be applied as normal. In the latter case, Casablanca is suspicious of the update because some other users didn’t get it, or they got a different form of update. In that case, Casablanca continues to quarantine the update and alerts the user that it might be suspicious.

2 The World Today

(Skip this if you don’t care about/already have motivation) Over the past two decades, updates and patches to software have transitioned from being rare and often ignored by users to being frequent and ignorable by users. Automatic updates have arguably been the greatest win for the security of end-user systems in the past decade. In the past, users were expected to go out and seek a patch or update if they started experiencing problems, or were sent an email based on having registered the product. Now, users are continuously pushed updates and can expect to receive a patch within a day of the author pushing it out. This can make the time a vulnerability exists in the wild a matter of hours or days, where previously it could persist indefinitely in systems that no one actively maintained or thought to patch. This has been a huge security win, and is arguably vital for maintaining any kind of secure software so long as our software continues to have bugs. Once we routinely write programs in some verified manner that accounts for all the various forms of attack that could be applied, then we can discuss whether automatic updates are truly helpful. Until that time, automatic updates seem to be one of the key components of keeping our software secure.

3 The Problem

(Mostly more motivation) Unfortunately, automatic updates currently all rely on a single point of failure: being signed by the authors of the software. We can count ourselves lucky that, as far as we know, the cryptographic algorithms for signing code are effective and guarantee that whoever signed an update had access to the private key that signed it. This still leaves two major opportunities for failure: (1) stealing a private key; (2) coercing the keyholder into signing something. (1) is a clear problem that is mitigated to some extent by the keyholder’s ability to revoke the private key if they know it is stolen. So long as the company can secure its key fairly well, and can monitor its potential theft even better, this threat is probably manageable, and in any case does not seem avoidable. (2) is a more intriguing problem. It can safely be assumed that most users of software do not want malicious updates being pushed to their applications, and it can be assumed that most companies want to keep their users happy. Therefore, it is fairly safe to assume no company would push a malicious update without some sort of coercion. However, many companies could be subject to various forms of coercion to push such a malicious update. In particular, we will take the case of a government entity coercing a company into signing a malicious update.

3.1 The Court Order

(Specific motivation, almost a threat model?) In the current case between Apple and the FBI, the FBI has obtained a court order to compel Apple to create what amounts to a malicious update, sign it, and push it to an iPhone the FBI wants unlocked. In this case, and in most cases in which the U.S. government wishes to gather evidence or intelligence on a particular subject of investigation, the order is very narrow, targeting only a single individual; in this case, that individual’s smartphone. This is important, because forcing a company to push malicious updates to allow surveillance of all its users would be extremely broad, and probably not allowed under multiple different constraints of the legal system. It would require surveillance of all users when only one is actually subject to an investigation, and thus would violate the 4th Amendment rights of the rest of the users. Furthermore, it would be much more burdensome on the company pushing the update, since it would be much more likely to cause users to abandon that software. Thus, it seems likely that, absent legislation requiring some kind of general backdoor, court orders like the one the FBI has served to Apple will continue to be highly targeted at a particular subject of investigation.

4 The Solution

This narrow targeting is the fundamental assumption behind the idea of using a web of trust for software updates. Rather than blindly accepting any update pushed to a program so long as it is signed with the correct private key, a web-of-trust-enabled program will perform additional checks. For the rest of this article, we define a program called Casablanca which handles web-of-trust software updates for the device it is installed on. On this device, it hooks into each app it is enabled for, in order to intercept and manage updates to these apps. When an update for an app arrives, Casablanca first checks that the update is properly signed by the expected private key. Then, instead of applying the update, it creates a hash of it and signs this hash with a private key unique to this Casablanca user and device. It then creates a set of encrypted messages; each message contains the signed hash of the update and is encrypted with the public key of one member of Casablanca’s web of trust. Casablanca then awaits a response from $n$ of these members. If these responses indicate those users have also received the exact same update for their apps, then Casablanca will assume the update is safe, because it was pushed to many other users of the app. If other users indicate that no update, or an update containing different code, was sent to them, then Casablanca will flag this update as suspicious and decline to install it.
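As a concrete sketch, the check above might look like the following Python. This is illustrative only: SHA-256 stands in for whatever hash Casablanca would actually choose, HMAC is a stand-in for a real asymmetric signature scheme, and the per-member encryption step is elided.

```python
import hashlib
import hmac

def prepare_update_query(update_bytes, device_key, trusted_members):
    """Hash an intercepted update, sign the hash, and build one message per
    web-of-trust member. HMAC stands in for a real asymmetric signature; a
    real implementation would also encrypt each message with the
    recipient's public key."""
    digest = hashlib.sha256(update_bytes).hexdigest()
    signature = hmac.new(device_key, digest.encode(), hashlib.sha256).hexdigest()
    return digest, [(member, digest, signature) for member in trusted_members]

def decide(my_digest, responses, n):
    """Accept only if at least n members report receiving the same hash;
    a None response means that member saw no update at all."""
    matching = sum(1 for r in responses if r == my_digest)
    return "accept" if matching >= n else "quarantine"
```

Note the asymmetry: acceptance requires a positive quorum, while any shortfall, whether from different hashes or from members who saw no update at all, leaves the update quarantined for the user to inspect.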


For Casablanca, the web of trust consists of a set of public keys associated with other Casablanca users whom it knows it can trust. Bootstrapping this trust can be challenging depending on the threat model assumed; one method is to distribute public keys of individuals backed by affirmations made on their social media accounts. Casablanca is also less fragile than most uses of the web of trust, because it can tolerate some members being dishonest, so long as $n$ out of $m$ members are honest. The size of the web of trust is important to Casablanca’s efficacy and usability. In particular, members of the web of trust are only useful if they also have the particular app that is being updated, so a Casablanca user will need to ensure their web of trust is sufficiently large for each app they use. This could potentially be mitigated by two methods:

1. Simply requesting that a few members of their web of trust also use these apps, solely for the purposes of Casablanca’s security.

2. Having Casablanca spoof the presence of an app on a device that does not actually have it installed, and thus obtain a copy of the update. This could, however, allow the spoofing to be detected, and a malicious update pushed only to fake users arguably imposes no real burden, undermining the broadness argument above.
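Since members can only vouch for apps they actually have, a user might want to audit their web of trust per app. A minimal sketch, assuming a hypothetical mapping from member to installed apps:

```python
def coverage_report(web_of_trust, my_apps, n):
    """web_of_trust maps member id -> set of apps that member has
    (a hypothetical structure). Returns the apps for which an update
    check could never reach the n-response threshold, with the number
    of members who could actually respond for each."""
    under_covered = {}
    for app in my_apps:
        holders = [m for m, apps in web_of_trust.items() if app in apps]
        if len(holders) < n:
            under_covered[app] = len(holders)
    return under_covered
```

A report like this could drive either mitigation: prompting the user to recruit members for under-covered apps, or deciding where spoofed installs would be needed.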


Casablanca is, of course, just another piece of software. This raises the question: what keeps Casablanca from being compromised? There is not a truly satisfactory concrete solution, but several factors can be used to create a reasonable argument for the security of Casablanca itself.


In accordance with its fundamental philosophy of providing software transparency, Casablanca will be open source, and ideally could have a streamlined build process that would allow it to be locally built on a device to ensure the source code is what actually goes into the app. Failing this, the standard procedure of signing a hash of the app would be used, to allow users to verify their version of Casablanca matches the code that is purported to be the real Casablanca. Ultimately, the authenticity of an instance of Casablanca must be determined by a user via out-of-band methods such as comparing hashes of the app with signed hashes provided by the authors of Casablanca and other users. Unfortunately this step cannot be trusted to be performed automatically by Casablanca. Once an instance of authentic Casablanca is installed, Casablanca can use its web of trust to verify updates to itself.


Casablanca is a program with a single purpose: verifying updates to other programs via a web of trust with other instances of Casablanca. We are optimistic that keeping the scope of Casablanca small and very specific will allow its codebase to remain small and tractable, so that it can be audited frequently and easily by security professionals. Ideally, the code would be written using verification tools to categorically eliminate as many bugs and vulnerabilities as possible.

5 How Does My Web Connect?

Since Casablanca is intended to operate on many different kinds of devices, a strictly peer-to-peer option would be unreliable, particularly for mobile devices. Instead, Casablanca uses a distributed database to store signed update hashes. This database is distributed across multiple servers for both performance and redundancy reasons. In addition, any volunteer can mirror any amount of the database if they wish. This distributed database stores data in one of two ways, depending on the privacy settings of the Casablanca user.
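A sketch of what a record in this database might hold, with an in-memory store standing in for the real distributed database. The field names are illustrative, not a proposed wire format:

```python
class UpdateHashStore:
    """In-memory stand-in for the distributed signed-hash database."""

    def __init__(self):
        self.records = []

    def submit(self, app_id, signer_id, signed_hash, mode="public", ciphertext=None):
        self.records.append({
            "app_id": app_id,        # which application the update targets
            "signer_id": signer_id,  # which Casablanca instance signed it
            "mode": mode,            # "public" or "private" sharing mode
            # Public records carry the signed hash in the clear; private
            # records carry only per-recipient ciphertexts.
            "signed_hash": signed_hash if mode == "public" else None,
            "ciphertext": ciphertext,
        })

    def query(self, app_id, signer_id):
        return [r for r in self.records
                if r["app_id"] == app_id and r["signer_id"] == signer_id]
```

Because any volunteer may mirror the data, nothing in a record can be secret; privacy has to come from the encryption applied before submission, as described next.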


In private sharing mode the signed update hashes Casablanca submits are encrypted using a shared key that is established between two Casablanca users that agree to mutually trust each other. For each such shared key, a separate version of the signed update hash is encrypted and submitted to the database. Then, when Casablanca wants to check the signed update hash of a private user, it queries for the record in the database, and then decrypts it using the shared key.
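A toy illustration of private sharing mode, where the same signed hash is encrypted separately for each trusted peer under their pairwise shared key. The SHA-256-based XOR keystream below is purely illustrative; a real implementation would use an established authenticated cipher (e.g. AES-GCM) with the negotiated shared key.

```python
import hashlib

def _keystream(shared_key, length):
    """Toy keystream: SHA-256 in counter mode. For illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(shared_key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_for_peer(shared_key, signed_hash):
    """Encrypt one copy of the signed update hash for one trusted peer."""
    ks = _keystream(shared_key, len(signed_hash))
    return bytes(a ^ b for a, b in zip(signed_hash, ks))

# XOR stream ciphers are symmetric: decryption is the same operation.
decrypt_from_peer = encrypt_for_peer
```

One copy per shared key means storage in the database grows linearly with the number of mutually trusting pairs, which seems acceptable given how small a signed hash is.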


In public sharing mode the signed update hashes Casablanca submits are not encrypted in any way when submitted to the database. Thus, any other Casablanca instance can query the database for these publicly shared records and check the signed update hash. In theory, if many users set themselves to public sharing mode the security of Casablanca might be improved, because it increases the number of users to compare an update against. However, if it is assumed that an adversary may deploy fake Casablanca instances with falsified signed update hashes, then these additional publicly shared records cannot be trusted, and so cannot be relied upon to make a decision about an update. These fake users could lie to say a malicious update is fine when it is not, and convince a Casablanca user to install the update. Or, conversely, the fake users could lie to say a normal update is suspicious, and thus prevent the user from installing it, a form of denial of service. In addition to weak theoretical gains, public sharing mode also exposes information about what apps the user has installed. In many cases this is not a problem, but in some cases it may be. It is possible that Casablanca will implement public/private sharing mode on an app-by-app basis, rather than all-or-nothing, to allow users to prevent sharing their use of sensitive apps but allow sharing of nonsensitive ones. Additionally, Casablanca could implement separate sets of trusted users for individual apps in private sharing mode, so that a user could choose which other users they share the presence of a particular app with.

6 Practical and Extensible Proof of Concept Implementations

This section seeks to lay out ideas for how Casablanca should be implemented.

6.1 Practical Considerations

Casablanca needs to interact with the updating of programs; how do we do that when programs can update themselves in arbitrary ways? Ideally, Casablanca should not have to reverse-engineer how each program updates itself in order to interpose. Casablanca should therefore obviously be implemented for program ecosystems with uniform update mechanisms. In particular, the apt package manager on Linux and the Android Play Store update mechanism seem to be obvious targets.

6.2 Potential Impact of Casablanca

I see two main paths to implementing a Casablanca-type system. The first is an all-or-nothing OS-level fork of existing systems. This shows the ideal potential of the system, because it avoids all the compromises of trying to create a solution that must seek adoption in the real world. If this system can convince a major software vendor that it is a good idea, then the all-or-nothing approach is validated, and a more timid approach would have been a waste of effort that potentially resulted in less overall impact. As a research project, if this approach fails to get the right kind of traction, it essentially falls by the wayside as yet another instance of “academia did that 10 years ago but still no one in the real world can/will use it.” The second approach is best described as opportunistic: an implementation that leaves individual users or developers the choice to use the system, and ideally tries to make its use as low-friction as possible, in order to encourage more people to voluntarily adopt it. This approach can place restrictions on the strength of the system and the guarantees it can provide. However, it also allows for partial success. It could slowly grow in acceptance; it could be distributed and picked up as an open-source project and slowly be built up by additional developers until it eventually reaches a number of users large enough to be considered a success, or large enough to validate its motivations to a major software vendor who then builds their own system because they see that a demand has been created.

6.3 All-or-Nothing: Operating System Level Implementation

6.3.1 LINUX

In Linux, Casablanca could be implemented in the kernel to ensure user-level programs could not circumvent it. From this vantage point, the kernel could ensure Casablanca is employed (or the user is otherwise warned) for all program installations and updates, even those performed outside the package manager. I am not terribly familiar with SELinux, but I imagine it may have stricter controls on program installation; it would probably make sense to build Casablanca as an extension to SELinux.


6.3.2 ANDROID

In Android, Casablanca could be implemented in the Android OS, giving it root-level access to apps and resources. This would allow Casablanca to easily intercept all app installations and updates and check them against the web of trust before allowing their use.

6.4 Opportunistic: Userland Implementation

6.4.1 LINUX

The obvious point of use for Casablanca in Linux is the package manager. It seems reasonable to implement Casablanca solely for the apt package manager, and put other programs out of scope (although further development could certainly incrementally expand support to other package managers such as pip, npm, etc.).


6.4.2 ANDROID

Without rooting the device or building Casablanca into the Android OS, there is no way for Casablanca to forcibly interpose itself into other apps’ update processes. Therefore, implementing Casablanca as a normal, functioning Android app would require active cooperation from other Android apps. This cooperation could take the form of a simple blob of code, hopefully not even as large as a library, included in the update path of the app. This blob would simply forward the update, or a hash of the update, to Casablanca for checking. While this does rely on the app implementing the cooperation properly, as long as the app does it once, any future change that breaks this cooperation with Casablanca would presumably be a malicious change, so Casablanca’s threat model and security guarantees hold: if a malicious update is issued, it will be detected if it is not issued to all members of the web of trust.
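The cooperation blob might amount to very little code: hash the pending update, hand the hash to Casablanca over whatever IPC the platform allows, and honor the verdict. A Python sketch, where the message fields and the `send_to_casablanca` callback are hypothetical stand-ins for that IPC:

```python
import hashlib

def casablanca_hook(update_bytes, send_to_casablanca):
    """The 'blob' an app includes in its update path: hash the pending
    update and forward the hash to the local Casablanca instance.
    Returns True only if Casablanca approves applying the update."""
    digest = hashlib.sha256(update_bytes).hexdigest()
    verdict = send_to_casablanca({"app": "org.example.app", "sha256": digest})
    return verdict == "accept"
```

Keeping the blob this small matters for the trust argument above: the less cooperating code an app carries, the easier it is to notice an update that silently removes or subverts it.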

6.5 Conclusion

In my opinion, it makes the most sense to develop Casablanca from a userland perspective that allows small-scale adoption. The web-of-trust mechanism is far from easy in the current world, and is most definitely not friendly to average users. Given this, it seems highly unlikely any major software vendor would be interested in implementing a web-of-trust-based update mechanism like Casablanca. However, as an independent application that can be opted into by individual users or developers, Casablanca, or projects built off of it, has a chance to become an important component of some communities. For example, I could imagine secure messaging and storage applications being interested in supporting Casablanca for some of their more privacy-conscious users. As for which environment to implement Casablanca for: the Linux apt package manager is the stronger position, because Casablanca could probably be implemented without any support required from package creators. However, the mobile space seems more compelling to many, so an Android implementation may be considered more impactful even though it would require developers to opt in to Casablanca for their apps.