WikiLeaks recently published documents detailing CIA attacks on common smartphone operating systems, including those on Samsung, Google, and Apple devices. WhatsApp and other “secure messaging apps,” it turns out, are not secure at all. The reason is simple: if the phone’s OS is itself compromised, nothing on the phone can be secure. It doesn’t matter how good your app’s encryption is; if the CIA can read everything on your phone, that includes unencrypted plaintext and private encryption keys.
The situation appears hopeless. For one thing, our devices are not secure. Furthermore, who’s to say the cell phone companies won’t voluntarily give the government your data, even if the operating systems themselves are fixed? If you want secure communication, it seems the only way to do it is to build your own cell phone, and then build your own network of cell towers.
Sending secure messages over untrusted channels is basically a solved problem. Encrypt the message with RSA and sign it with DSA, for example. (These, and similar algorithms, will probably be broken by quantum computers. But as far as we know, that hasn’t happened yet.)
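As a concrete (and deliberately toy) illustration of the math, here is textbook RSA with tiny hard-coded primes. This is for building intuition only, not for use: the primes are absurdly small, there is no padding, and a real system would use a vetted library.

```python
# Textbook RSA with tiny hard-coded primes -- illustration only, NOT secure.
p, q = 61, 53            # secret primes (far too small for real use)
n = p * q                # public modulus
phi = (p - 1) * (q - 1)  # Euler's totient of n
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e mod phi

m = 42                   # plaintext, represented as a number < n
c = pow(m, e, n)         # encrypt with the public key (e, n)
assert pow(c, d, n) == m # decrypt with the private key (d, n)
```

The asymmetry is the whole point: anyone holding (e, n) can produce ciphertext, but only the holder of d can recover the message.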
What makes the situation more interesting is that the broadcasting device itself can no longer be trusted. So, where does that leave us? Assuming I can’t patch the phone, I need a new device.
The new device would:
- randomly generate keys
- accept plaintext input from a keypad
- encrypt outgoing messages with the recipient’s public key
- decrypt incoming messages with its own private key
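The steps above could be sketched roughly like this (the class and method names are hypothetical, and the toy RSA numbers are reused purely for illustration). The point is the boundary: only public keys and ciphertext ever cross over to the untrusted phone, while plaintext and the private key never leave the device.

```python
# Hypothetical sketch of the device's interface. Only what these methods
# return is ever handed to the (untrusted) phone.
class CryptoDevice:
    def __init__(self):
        # Toy fixed primes stand in for real random key generation (NOT secure).
        p, q = 61, 53
        self._n, self._e = p * q, 17
        self._d = pow(self._e, -1, (p - 1) * (q - 1))  # private key: never exported

    def public_key(self):
        # Safe to broadcast via the phone.
        return (self._e, self._n)

    def encrypt_for(self, their_e, their_n, m):
        # Encrypt with the RECIPIENT's public key; result goes out via the phone.
        return pow(m, their_e, their_n)

    def decrypt(self, c):
        # Decrypt with our own PRIVATE key, entirely on-device.
        return pow(c, self._d, self._n)
```

A compromised phone relaying these values sees only the public key and the ciphertext, which is exactly the “untrusted channel” problem that is already solved.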
Since we still want to use the phone to send and receive public keys and encrypted messages (ciphertext), we would need to transmit these back and forth between the phone and the device. I drew a schematic:
Fig. 1 By hiding the plaintext and private key from CIA malware on Samsung and Apple devices, we can broadcast messages securely from these infected devices.
There are a few points you’d have to consider to make this thing work. After all, if they can hack my smartphone, why can’t they hack this thing?
First, it would have to be completely open-source from the circuit board up. That way, anybody could check for security vulnerabilities on their own.
Second, it would have to be simple. This way, a single user could understand the entire system and convince himself that there were no holes. Security flaws emerge when a product becomes too complicated for an individual engineer to understand in its entirety. You can’t hack a toaster! But you can hack an operating system, because it’s got a lot of “moving parts.”
Third, you would have to ensure that the encrypting device could not be infected via the connection to the smartphone. I would be wary of complex protocols like USB. In fact, one could go so far as to send and receive information over the headphone jack. (Microphone for output; left or right channel for input; ground for ground.) That way, you know that only data bits are being sent, not commands or metadata or any other crap. Unless I am mistaken, the only potential vulnerability would be buffer overflow attacks—and you can avoid those if you’re careful.
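A minimal framing scheme for such a link might look like the following sketch (this is an assumed scheme, not any standard: one length byte followed by the payload, flattened to raw bits as an audio signal would carry them). The hard cap on frame size is the “being careful” part that rules out the overflow.

```python
# Assumed framing for a bit-serial audio-jack link: [1-byte length][payload],
# flattened to individual bits (MSB first). The fixed maximum frame size means
# the receiver never reads past a bounded buffer.
MAX_FRAME = 255  # hard cap; the receiver allocates no more than this

def to_bits(data: bytes) -> list:
    """Frame the payload and flatten it to a bit stream."""
    if len(data) > MAX_FRAME:
        raise ValueError("frame too large")
    framed = bytes([len(data)]) + data
    return [(byte >> i) & 1 for byte in framed for i in range(7, -1, -1)]

def from_bits(bits: list) -> bytes:
    """Reassemble bytes, then use the length header to strip trailing noise."""
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    length = out[0]
    return bytes(out[1:1 + length])
```

Because the receiver only ever interprets a length and raw payload bytes, there is no command channel for the phone to abuse, which is exactly the appeal of the headphone jack over something like USB.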