So RubyGems was compromised. Tony Arcieri has written about the details and how signing gems could have made a big difference. I fully agree. I have been dreaming about signed gems for a long time, and I already talked about why I believe it to be a good thing at RubyConf over a year ago. I guess nobody would disagree if I say that gems are a really cool feature and that RubyGems and package managers in general are awesome. Just run bundle install and you’re set to go - I mean, how cool is that? Besides RubyGems and Bundler, tools like Maven, Gradle, pip, Leiningen and sbt all hugely simplify dependency management in day-to-day work, and I assume nobody would like to go back to the status quo before those tools arrived.

Just pause for a second and think of the many, many places where these tools are used today - from hobby programmers to professional programmers and small, medium and large businesses. I believe they deserve to be called critical infrastructure these days. Just look at all the whining that started when RubyGems was down for a bit. And to be honest, I myself was thinking “Damn, where do I get my gems from if RubyGems is not up again soon? … O_o okay, actually it ain’t that hard”. (Yes, I was thinking O_o) But that thought serves to prove a point: we are so spoilt by the comfort of our package managers that we have started to forget how we could do without them.

Big surprise, then, that this would become a primary target for hackers. If I wanted to piss off the Ruby community as badly as I could, I could see no better way than messing with gems. Enter the doomsayers, who “saw this coming”, “told you Ruby kids to stick with something battle-tested in the enterprise” and “knew that Ruby sucked and was slow anyway”. My only answer is to point them towards the problems that Java, for example, had over the last year. Let’s keep calm and focused in the face of these events, let’s thank all the hard-working people dealing with these issues, and let’s make a combined effort to improve the situation. Real software has real problems; there is no use in ignoring or over-emphasizing this fact.

Let’s have a look at some popular code signing solutions that already exist, including thoughts on their particular advantages and disadvantages. Smart people have designed them, so why not let ourselves be inspired by them, taking them as blueprints for a solution tailored towards RubyGems. I’ll cover the first example in much more detail than the rest, because I have worked with it extensively in the past and I believe it showcases a lot of good ideas, but also how good intentions can easily go wrong. Many of the problems that appear there also show up, in one form or another, in the other examples.

Java: Signed .jar files

This one has been around for quite some time and is quite similar in functionality to Microsoft’s Authenticode. Java code signing amounts to signing the contents of a jar file and then putting the resulting signature inside the jar file as well. Because the signature value is not known until the signature has been computed, it is not possible to sign the whole jar file at once, e.g. by just signing the bytes of the file. Ultimately, you still need to embed the signature file in the jar - the signature cannot “sign itself” before knowing its value. If you look at the specs, you will notice that the signature is not computed over the individual files directly, but over a manifest. Using a manifest serves several purposes. First of all, it makes it possible to sign only parts of the jar file, say just a few classes individually. This is quite dubious in practice, because it causes a lot of confusion over what is signed and what is not, or, later on, over what is to be trusted. A much more useful feature is that in order to validate the signature you only need to compute the digest of the manifest itself. Technically, you don’t need to recompute the digests of each individual file to tell whether the signature value is sound. Of course, you do need to verify the individual file digests to finally establish their integrity. But still, adding the manifest as an intermediate layer is really useful.

Imagine there was a power outage in your data center and you now want to do a quick sanity check of your data. An excellent way to do so would be to check all the signatures on the files. This could take a very long time if the files themselves are large, because you would have to compute the digest of each and every file, megabyte by megabyte. Using manifests speeds this process up quite a bit: all you digest now is the manifest itself, which is usually in the kilobyte range rather than the megabyte range. In both cases you wouldn’t even have to validate the signature itself, as your only concern at this point is data integrity, and it would suffice to match digest values. Luckily for us, the CMS-style signatures used for signed jars contain the message-digest attribute, which stores the digest of the signed content.
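To make the manifest idea concrete, here is a minimal Ruby sketch of the two-step digesting scheme. The file names are made up, and a real jar manifest of course has its own format; this only illustrates the principle:

```ruby
require 'digest'

# Hypothetical file list - a real jar manifest lists every archive entry.
files = ['lib/core.rb', 'lib/util.rb']

# Step 1: record a digest for each individual file in a manifest.
manifest = files.map { |f| "#{f}: #{Digest::SHA256.file(f).hexdigest}" }.join("\n")

# Step 2: the signature covers only the small manifest, not the files.
manifest_digest = Digest::SHA256.hexdigest(manifest)

# A quick integrity check only needs to re-digest the manifest and match
# it against the digest stored with the signature; the per-file digests
# are verified separately when full integrity needs to be established.
```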

From a security point of view, jar signatures seem quite OK - that is, if you believe in X.509 PKIs. The signature in a jar is made using an X.509 certificate, with either RSA, DSA or even elliptic curve keys. As such, the signature validation process not only requires establishing the validity of the signature itself, but also the validity of the associated certificate. This step is absolutely mandatory, because it is the source of our trust in the signature. Commercial X.509 certificates are tightly bound to the identity of some person or corporation. It is the responsibility of the authorities issuing such certificates to establish the identity of the certificate holder prior to issuing the certificate to them. This is why you can’t go to a CA (certificate authority) and ask them for a certificate that says Martin Boßlet or Google or “My father is my uncle” on it. They will require some form of identification, and they will only issue you certificates with your identity on them.

“Now anybody can issue a certificate that says Google on it - just use OpenSSL!” you say? And you are right, but in return for your money, a CA is part of the global PKI network, and your homemade Google certificate is not. This is what the certificate validation process tries to determine: it obviously can’t judge whether you are Google or not, but it can see whether the certificate was issued by a trustworthy CA. To do so, it will look up the certificate of that CA itself and see whether that was also issued by another trustworthy CA, and to verify that certificate it will… infinite recursion? No, because there are “trusted root authorities” - their certificates form our trust axioms. If a chain of certificates eventually leads up to one of these “axiomatic certificates”, we simply believe that all is well. The trusted root certificates typically ship with the initial distribution of the software that performs certificate validation (as in Java), or the software uses the standard set shipped with your operating system.
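In Ruby’s OpenSSL binding, the chain-building part of this validation can be sketched as follows. The certificate file names are hypothetical, and this covers path building only - no revocation checking:

```ruby
require 'openssl'

# Pool of trust anchors - our "trust axioms". 'root.pem' is a placeholder.
store = OpenSSL::X509::Store.new
store.add_cert(OpenSSL::X509::Certificate.new(File.read('root.pem')))
# Alternatively: store.set_default_paths uses the roots shipped with the OS.

signer = OpenSSL::X509::Certificate.new(File.read('signer.pem'))
if store.verify(signer)
  puts 'certificate chains up to a trusted root'
else
  puts "rejected: #{store.error_string}"
end
```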

In order to sign a jar, you have two choices. You either buy a code signing certificate from a CA, or you issue your own (often in the form of self-signed certificates, where the issuer of the certificate is the certificate itself). The first option is quite expensive, but it guarantees that anyone can use your jar file right away. Not so in the second case: the trusted root certificates shipped with Java know nothing about your certificate, so your jar will be rejected. You may ask every user to include your certificate in their trusted certificate pool, but this is a really tedious procedure and certainly nothing the average user would want to do. Besides that, the approach doesn’t scale: while some users might import a certificate or two, as soon as a third person asks them to, they will tell that person to go to hell for sure.
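For illustration, here is roughly what “issuing your own” looks like in Ruby - a minimal self-signed certificate where issuer and subject coincide and the certificate is signed with its own key. Name and lifetime are made up:

```ruby
require 'openssl'

key  = OpenSSL::PKey::RSA.new(2048)
cert = OpenSSL::X509::Certificate.new
cert.version     = 2                     # an X.509v3 certificate
cert.serial      = 1
cert.subject     = OpenSSL::X509::Name.parse('/CN=Example Gem Author')
cert.issuer      = cert.subject          # self-signed: issuer == subject
cert.public_key  = key.public_key
cert.not_before  = Time.now
cert.not_after   = Time.now + 2 * 365 * 24 * 3600  # arbitrary two years
cert.sign(key, OpenSSL::Digest::SHA256.new)        # signed with its own key
```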

Another problem with code signing using X.509 certificates is certificate expiration. Typically, code signing certificates last one, two or four years, depending on how much money you are willing to spend. But as soon as such a certificate has expired, any jar file signed with it will “expire” too, and therefore be rejected by Java. Rightfully so, but it still sucks, because every couple of years you have to deliver new signed packages. Doesn’t sound too bad? Imagine you have a signed jar used by thousands of clients that don’t take updating their software too seriously. Still, your jar is critical for a module that interacts with their COBOL-driven conquer-the-world system. Now imagine it’s the day when your code signing certificate expires. Phone much?

While signed jars do allow adding an RFC 3161 timestamp, this doesn’t help all that much: the timestamp itself is a CMS signature, again with its own certificate attached to it. As soon as the timestamp certificate expires, your jar will again be rejected. To be fair, it is still possible to imagine a central service (like RubyGems) that hosts signed jar files and re-timestamps them as soon as the latest timestamp nears expiration. A timestamp also helps in cases where a code signing certificate needs to be revoked, for example due to key compromise. Without timestamps, any jar signed with the revoked key instantly becomes invalid, and every piece of software using this jar immediately has to replace it somehow. That sounds like a real mess because it is. Timestamps greatly improve the situation: a timestamp allows you to securely establish the time of revocation and, in addition, the time when a particular jar file was signed. If the jar signature was made before the key was compromised, then you’re still good to go using that jar. Only what was signed with the compromised key after the compromise needs to be rejected; anything signed with that key prior to the compromise can continue to be used safely.
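The trust rule a timestamp enables boils down to one comparison. A sketch with purely illustrative times - in practice the signing time would come from the RFC 3161 token and the revocation time from the CA:

```ruby
# A securely timestamped signature that predates the key compromise may
# still be trusted; anything signed afterwards must be rejected.
def signature_still_trustworthy?(signing_time, revocation_time)
  signing_time < revocation_time
end

signature_still_trustworthy?(Time.utc(2012, 6, 1), Time.utc(2013, 1, 30)) # => true
signature_still_trustworthy?(Time.utc(2013, 2, 1), Time.utc(2013, 1, 30)) # => false
```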

Still, if the signatures lack such a secure timestamp, there is a significant problem with key “roll-over”. It requires tremendous coordination and communication to keep a system running without downtime whenever a key or certificate critical to the infrastructure expires. This may work in a closed system like a company, but it is much harder to achieve in an open environment where we cannot assume that all participants are actively engaged in making the update a success. If you now consider that these events typically don’t all happen on the same day, you get a pretty good picture of how much sysadmins like the term “certificate expiry”.

There is also the issue that, in addition to code integrity and authenticity, Java tightly couples the granting of security privileges to signed code. Say you have an applet that needs to access the local hard disk: Java will reject this by default, because remote code cannot be trusted, or so they say. This sandboxing feature of Java is really cool, to be fair, but the trouble is that an applet can still request to use those features forbidden by default. To do so, it needs to be signed, and in addition to the signature and certificate validation process, the user needs to manually accept a popup dialog that asks something along the lines of “Do you trust ‘company x’?”. Even worse, there’s the option to trust that entity by default, which makes the annoying popup go away for good. This is the ancient battle of security vs. usability right there: instead of strictly rejecting anything that looks even remotely suspicious, there’s a fallback that allows you to circumvent the security feature. Ironically, this has led to a “social engineering” exploit in Metasploit. What attackers do now is sign their jar files with names like “Oracle”, “Deutsche Bank” etc. The popup, combined with a question like “Do you trust Deutsche Bank?”, catches users so completely off guard that all they can think of at that moment is “Hell yeah I trust Deutsche Bank, that’s where my money’s at!” - and they click the OK button.

In order to link permissions to whether code is signed or not, the JVM effectively needs to validate signatures at classloading time. Doing this right means you’d have to do the certificate validation, too. But since this takes a lot of time (think CRLs & OCSP) and could cause random failures if a CRL or OCSP responder is not available, the decision was to perform pure signature validation only. Authenticode is strict here, but the delay in load time seems to be unbearable for busy people, and again the feature can be circumvented. In Java, on the other hand, WebStart applications and applets do include certificate validation, but if you distribute a signed jar by any other means (the dreaded USB stick, for example), any self-signed certificate will be accepted. It is a bit ironic that this confusion has gone so far that signing code which does nothing “signworthy” is now considered an antipattern. But even if you try to do it right (that is, check both signature and certificate), funny things still might happen. Imagine you distributed a WebStart or ClickOnce app. Once downloaded, your clients may have that file conveniently placed as a link on their desktop. Now your app doesn’t do anything related to networking, and one day your client’s company decides to fiddle with the company’s proxy settings. Your client’s app, which has worked like a charm for months, suddenly stops working - claiming it couldn’t download something. “But there’s no networking in it, goddammit!” You might even be accused of having infiltrated their company with some sort of spyware. Imagine the fun of explaining the technical implications here: why your app, which apparently does nothing on the network, still needs network access in order to download revocation information to validate the code signing certificate. I mean, even I don’t understand that sentence after reading it again.

Linux distros: GPG/PGP with centrally distributed key

I promise to keep it shorter from now on. I’ll link to the Fedora process for details of how this works in general. This is a centralized signing approach: instead of the individual authors, the code packages are centrally signed by the authority that typically also distributes them to end users. This is not necessarily bound to PGP - you could think of something as simple as a central public/private key pair, where the public key ships with the initial distribution and the private key signs the downloadable packages, which are then verified after download using the public key. This is quite simple and effective, as long as it stays centralized.

If you are a Linux user who has tried to install, for example, VirtualBox from a package, you might remember that in order to do so, you had to import the Oracle public key first. See how the site that also lists the key fingerprint is served via HTTPS? This is absolutely essential! You need to obtain the fingerprint over a secure channel, otherwise you have no guarantee that the key you just imported is really the one you wished you were importing. This eventually leads to the dilemma with PGP and its web of trust. Say we wanted a code signing mechanism for RubyGems where each author signs their own gems instead of RubyGems signing them for us (because we maybe don’t want to trust RubyGems); then we’d have to import the keys of every publisher into our system. Beyond that, to make this secure, we would also have to compare the published fingerprint to the key we just imported, and that fingerprint needs to come from a secure channel. The more keys circulate, the more of a pain this becomes. Now let’s say we all used RSA-2048 keys, but in five years somebody can trivially break those - the whole game begins anew. Revocation also seems like a bit of a pain with PGP: unless you are constantly checking online, you will probably miss the notification of a revoked key. This puts us pretty much in the same boat as with X.509 certificates when it comes to revocation.
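Coming back to the simple centralized variant mentioned above: stripped of PGP specifics, it boils down to verifying every download against one pinned public key. A minimal Ruby sketch with hypothetical file names:

```ruby
require 'openssl'

# The public key ships with the initial distribution; the central
# authority signs each package with the corresponding private key.
public_key = OpenSSL::PKey::RSA.new(File.read('distro_signing.pub'))
package    = File.read('package.tar.gz')       # hypothetical names
signature  = File.read('package.tar.gz.sig')

if public_key.verify(OpenSSL::Digest::SHA256.new, signature, package)
  puts 'package is authentic and unmodified'
else
  puts 'reject: signature verification failed'
end
```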

Android: Stripped-down Java approach

The process is outlined here. Compare this to Java’s signed jar files from above: they’re basically the same, with a few details left out. First, you must create your own self-signed certificate. Why? Because you are asked to present a certificate that is valid until some time after October 22, 2033. Yes, after. No CA will issue you a certificate with that lifetime. Not that it would buy you anything - there is no PKIX validation (the X.509 certificate validation process) going on anyway. “But this is against all good practices!” you might say. As always, you are right. By making certificates effectively non-expirable, the whole system might fall apart if somebody finds a way to factor RSA easily - and most certificates right now probably use RSA. What will happen in that case? So why do they do it like that? I can only speculate, of course, but I guess the approach is pragmatic and feasible for a couple of reasons: with one central authority (the Play Store), it is possible to associate a given key with an identity. Since the key/certificate basically never expires, it’s relatively easy to keep up a one-to-one mapping of certificate to identity. They have some form of proof of identity when you apply for your Android developer license, so it’s not completely easy to act on somebody’s behalf. I also guess they’re willing to take the risk that RSA or any of the other algorithms currently in use might fall eventually - they probably hope that by then Android will either be long gone or an alternative will be ready.

Validating a signed app then takes nothing more than validating the signature itself and checking whether the associated certificate is the one registered for that author. It would be interesting to know what happens if you actually do want to revoke your certificate (unfortunately, I haven’t had the pleasure of finding out yet) and what becomes of the apps that were signed with it. Judging from this article alone, I assume you’re in for a ton of fun.
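Conceptually, that validation is little more than certificate pinning. A hedged sketch in Ruby - the registry, the app identifier and the file name are all made up:

```ruby
require 'openssl'

# A made-up registry mapping app identifiers to the fingerprint of the
# author's registered certificate - the role the Play Store plays.
registered = { 'com.example.app' => 'a1b2c3...' }  # hypothetical value

cert = OpenSSL::X509::Certificate.new(File.read('app_signer.pem'))
fingerprint = OpenSSL::Digest::SHA256.hexdigest(cert.to_der)

# After the signature itself has been verified, authenticity reduces to
# a single comparison against the registered fingerprint.
authentic = (fingerprint == registered['com.example.app'])
```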

“In general, if you follow common-sense precautions when generating, using, and storing your key, it will remain secure.”

This is also not very reassuring (found here) - I mean, why was revocation invented in the first place? Why do keys get compromised even at the CA level - they of all people should know how important it is to keep their keys secure! Again, I haven’t verified what happens in the event of revocation, so please take my assumptions with the necessary skepticism.

iOS: Like signed jars, but with an Apple root CA

Code signing for iOS looks similar to signed jars or the Android approach from above. In fact, it sits somewhere in between. Instead of relying on the global PKI network (Java) or choosing to ignore it (Android), here we have a PKI set up from scratch, with an Apple root CA at the top. Given the manpower they have to back this approach, this makes perfect sense. Custom PKIs can work well in a closed system, provided you have enough people to take care of the administration - both criteria are fulfilled here. In this setup we get all the good features, such as certificate revocation; all we have to do is put our faith in Apple and trust them to do their job right. I’m not familiar with what happens in case revocation information is not accessible at a given moment. This is typically a weak spot in any PKI. Browsers, for example, choose to silently ignore this: they treat certificates as valid if revocation information such as CRLs or OCSP responses is not accessible - security vs. usability again. The chance of a certificate actually being revoked is so small that we choose to skip the check instead of pissing off our users. It would be interesting to know how this is handled in the iOS ecosystem.

In return for putting your faith in them, they even up the ante by trying to ensure that “signed code” actually means “benevolent code”. Note that none of the other systems tries to make that guarantee, apart from Linux distros maybe. In general, “signed code” only means you have established the code’s integrity and the authenticity of its origin - it could still be the worst malware on the planet. Apple goes one step further here, and their goal is an admirable one, but also quite a challenging one: they need to analyze each app that is submitted to the store. In the case of Charlie Miller, he claimed to have submitted two identical apps to the store - one was rejected, the other was not. It’s a process involving humans, and as much as we try to automate it, the halting problem indicates that it probably can never be fully automated.

RubyGems: the existing solution

While not used very often, a code signing solution for Ruby gems already exists. It is very similar to signed Java jars in that it requires gem authors to sign their code. So why isn’t it used more? Wouldn’t it already solve the problem? Well, if we used “real” code signing certificates, this would indeed be a major step forward. But there is no true incentive for gem authors to buy such a certificate. We certainly can’t ask individuals to spend a significant amount of money just so we can use their gems securely. Come on, they spend their spare time developing gems and now you want them to pay for it, too? Never gonna happen, and rightfully so. Even if some were willing to pay for code signing certificates, what about the teenagers or people who cannot afford such certificates but still want to publish a gem?

OK, so what about self-signed certificates then? Couldn’t we all simply generate our own self-signed certificates and sign our gems with those, much like in the Android case? True, we could, but what would we gain? We don’t have a central authority to watch over all of us, some authority that could step in whenever there are conflicts. Unlike with the Play Store, I doubt any billion-dollar enterprise would voluntarily watch over RubyGems - there’s no money to be earned, and just doing a good deed is probably not a sufficient incentive. We can’t handle revocation in such an environment, and we have absolutely no guarantee that anyone really is who they claim to be on their certificate. I could create gems signed by “Yukihiro Matsumoto” and nobody would check my identity when I create that key and certificate, nor would anyone have the means to verify my identity later, when they download my gem. Let me be clear about it: in a world where all gems are signed with self-signed certificates and no authority is in place to supervise them, a signature on a gem degenerates to merely a fancy checksum. The integrity of a gem could still be established - but its authenticity cannot.

That’s why neither scenario - paid, “real” code signing certificates or self-signed certificates - is really ideal for Ruby gems in general.

Summary - or what we might want for RubyGems

The majority of the schemes we saw allow end-to-end security, meaning that the author of the software also signs it. This is desirable, because otherwise we need to put our faith in the central authority that signs the packages. While this is mostly fine, as in the case of Linux distros, you could still argue that the authority could mess with the software packages if it wanted to - it could change the author’s original code. There is a “trust gap” at the point where the software is handed over from author to central authority. That’s why we would prefer the author to sign the software directly.

We shouldn’t endorse the impression that “signed code” means “well-behaving code”. While it is probably impossible to automate this decision, there aren’t enough human reviewers either. We should be very clear about what “signed code” means: it verifies the integrity of the software (nobody has tampered with it since it was signed) and it additionally verifies its authenticity (we can determine who signed it). The rest of the trust decision is left to the user alone: do we believe that the author of the software means well and has put nothing bad in the code? I believe going beyond that is simply impossible right now.

All of the schemes relied on external services to some extent. PKIX validation requires access to CRLs and OCSP responders, and the GPG solution might too, if we want to determine revocation of a given key. In general, it is desirable to minimize external service dependencies, for the obvious reason that the inability to reach those services leads either to inevitable rejection (in the strict case) or to a weakening of the entire system (as the browser example has shown). A perfect solution would either not rely on external services at all or have some form of redundancy and failover mechanism in place. Perfection is hardly attainable, so a good solution should try to get as close to these ideals as possible.

We have seen that almost any time complexity is introduced into a well-working solution, it potentially breaks that solution. Partially signed jars and the omission of validation for performance reasons opened the door for social engineering and hard-to-detect corner cases in the Java case. Exceptions to the existing system for JIT and Mobile Safari enabled Charlie Miller to break iOS code signing. Of course, corner cases are often unavoidable, but we should try to be as strict as possible - changing any of the variables slightly is almost never a good thing. It should never be the case, as with signed jars in Java, that signed code is actually considered a bad thing, even if only in special cases. Signed code should always increase our confidence in it, be it for verifying its integrity or for rejecting it with more certainty.

Related to the previous argument, I would keep permission management separate from the signatures. Adding it to the mix multiplies the possibilities and makes the original task of signatures - proving integrity and authenticity of origin - significantly harder. Signatures should serve only to prove those, period. While a permission system is certainly desirable for something like RubyGems (something along the lines of, say, Android app permissions), I’d still design it to be completely orthogonal to the integrity/authenticity aspects. Cryptographic protocols are incredibly complicated as they are - every time their feature set is expanded, it potentially opens backdoors nobody thought possible, or thought of at all, before.

A solution like Java’s signed jars is - although pretty secure - unfortunately not really desirable for RubyGems. Its security rises and falls with the willingness of code authors to go and buy a “real” code signing certificate. While this is OK for closed source software that is sold for a price, it is really undesirable in an open source environment like Ruby gems. Off the top of your head, do you know of any gems you’d have to pay for? There is no incentive for developers to buy such a certificate, and it would certainly keep young people or hobby programmers from ever writing, or at least publishing, a gem.

While the mechanisms used for Android and iOS work pretty well in their given contexts, I think neither is applicable to RubyGems. There is enough manpower behind Android to deal with the security compromises they introduced, and there is again manpower behind iOS to run a full-fledged CA handling revocation and other tedious, time-consuming tasks. But the people dealing with these chores are paid well for it, and operating a CA is generally a lot of work. I doubt we could keep up such an infrastructure for RubyGems for long if it were all based on voluntary work.

I also have to warn anyone who designs a scheme that somehow involves revocation checking using CRLs or OCSP. The Ruby OpenSSL extension is currently not able to perform online revocation checks (and, to my knowledge, neither is any other Ruby library). The culprit is the messy way OpenSSL itself implements the PKIX algorithm: it doesn’t perform online checks by default. Fixing this would require significant effort; we’d basically have to circumvent the OpenSSL mechanism entirely. I’m working hard to get this done right in krypt one day, but it’ll take some time before we get there.
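To give an idea of what circumventing it would mean, a manual CRL check could look roughly like this sketch. The URL is a placeholder - in practice it would be extracted from the certificate’s crlDistributionPoints extension - and the check is deliberately incomplete:

```ruby
require 'openssl'
require 'open-uri'

cert = OpenSSL::X509::Certificate.new(File.read('signer.pem'))

# Placeholder URL - a real implementation extracts it from the
# certificate's crlDistributionPoints extension.
crl = OpenSSL::X509::CRL.new(URI.open('http://crl.example.com/ca.crl').read)

# Incomplete on purpose: a sound check must also verify the CRL's own
# signature with the issuing CA's key and check its validity period.
revoked = crl.revoked.any? { |entry| entry.serial == cert.serial }
puts revoked ? 'certificate has been revoked' : 'no revocation entry found'
```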

Conclusion

As a response to the RubyGems incident, rubygems-trust was created, and people have begun to write proposals on how to improve the situation.

None of the examples presented here seems to be a perfect solution for RubyGems; each of them would require a compromise of some sort. What now? Code signing is doomed and won’t work anyway? Back to publishing SHA-1 digests? No, please, let’s not do that - after comparing more than a handful of digests by hand, nobody bothers any longer anyway. Nevertheless, I firmly believe that we should think hard about how to improve the situation with a solution that is also sustainable in the long run, and I do believe that this solution includes digital signatures. It’s just not yet clear at this point what it should look like.

If you care about the security of gems, please read the proposals presented there and join the discussion. I’m convinced that we will not find the perfect solution - as always, security involves compromises - but I find it very important that we find something we can all agree upon and that will improve the security of RubyGems. A good security solution doesn’t try to be perfect; that’s just not possible. It should be designed under the assumption that compromise is possible, even certain, and it should focus on disaster recovery strategies as much as on initial security itself. For example, none of the solutions presented can effectively protect against the total compromise of a server within the infrastructure or of one of the clients - I’m not even sure any solution could. But still, some solutions make it easier to recover from such incidents than others.

What’s the point of this article if it just complains about existing stuff but doesn’t provide any answers? Good point, very good point indeed. It’s always easy to complain, isn’t it? I have given this a lot of thought in the past, and I might have an idea for a proposal, too - I’m still working on the details. I won’t even start to claim that it’s any better or worse. But it tries to require as little maintenance as possible and to have good disaster recovery characteristics, as I believe those to be quite important in the context of RubyGems. It also tries to avoid online revocation checking, which is currently not possible in Ruby. I’ve also recently been introduced to the ideas of others who apparently have put a lot of thought into the subject. Their work looks quite promising and is definitely worth a look. I consider this article a sort of introduction and motivation for what we might look for in a solution, and I thought it could be useful to analyze the solutions being proposed right now. Hope to see you around next time, when we will hopefully know which solution was selected and can discuss it - thanks for making it this far!
