Today, the European Union’s General Data Protection Regulation comes into force. For the past several months, technology companies have been racing to meet its demands, which can involve substantial engineering work to retrofit existing platforms.
My former colleagues at Apple have launched a new self-service Data and Privacy tool and a Privacy Enquiries support website that give customers visibility into (and control over) the information about them that Apple keeps. Recent software updates also added data usage disclosures to inform users when and how their data is being processed. No, The Verge, it has nothing to do with stopping phishing.
As journalists have been trying out these tools, they’ve reported that the company has been true to its word about its devotion to privacy. Jefferson Graham wrote,
The zip file I eventually received from Apple was tiny, only 9 megabytes, compared to 243 MB from Google and 881 MB from Facebook. And there’s not much there, because Apple says the information is primarily kept on your device, not its servers.
This is no accident. Apple engineers go to great lengths to design software that minimizes data collection and protects the user’s information. From end-to-end encrypted iMessage to feats like on-device photo classification and privacy-preserving telemetry, so many features took a more challenging path because it was the right thing to do. And all without apparent sacrifice to usability or capability.
So today, in thanks for their tireless work looking out for our privacy in an industry that, at best, doesn’t seem to care, I sent my friends on the Privacy Engineering team a treat in honor of GDPR Day:
Betsy Braun, the Bay Area’s most ebullient violinist and music instructor, designed, baked, and decorated this three-layer vanilla buttercream cake for the occasion. Let’s say the layers represent transparency, control, and consent—the delicious foundations of privacy protection. Thank you, Betsy!
It’s cherry blossom season in Japan, and everyone loses their collective minds over it. There are official forecasts of when the trees will bloom. There are different words to describe the progression of the flowers. Admirers flock to tree-lined parks for picnics during the day, and then return again at night for lantern-lit strolls. There are special sweets and even seasonal beer cans.
As I was drifting to sleep one night, I thought about how lovely it would be to watch the bloom arrive and recede in timelapse.
I remembered I had brought a cheap GoPro knock-off, an APEMAN-brand action cam, so on a lark I tested to see if it could act as a USB webcam—and indeed it can. If you start it up attached to a computer, it asks whether it should act as a video camera or mass storage device.
The SakuraCam began to take shape. I headed to the ¥100 store with the basic parts—the camera, a Raspberry Pi, and a 16000mAh battery pack—and played around with arranging everything in variously sized plastic organizer boxes, imagining how the cables would be dressed and assessing them for weather resistance. I settled on a shallow toolbox-style one with a handle and toggle latch.
I made a coarse cut to allow the camera lens to stick through the case, then sealed up the gaps with hot glue. (In retrospect, the mirror-image arrangement would have avoided some problems.) The camera’s USB connection doubled as its power supply, and the Raspberry Pi was in turn powered by the battery. A short script invoked fswebcam to capture a frame from the webcam at regular intervals, and purged the oldest frames when the SD card filled up. At one frame per minute, sped up to 24 fps, I had enough space to store about 7 minutes’ worth of photos.
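A minimal sketch of that kind of capture loop, assuming fswebcam is on the PATH; the frame directory, resolution, and frame cap are my own choices, not the original script's:

```python
import subprocess
import time
from pathlib import Path

FRAME_DIR = Path("/home/pi/frames")  # assumed location
INTERVAL = 60                        # one frame per minute
MAX_FRAMES = 10080                   # 7 days at 1 frame/min ≈ 7 minutes of video at 24 fps

def purge_oldest(frames, max_frames):
    """Return the frames to delete so that at most max_frames remain (oldest first).

    Timestamped filenames sort lexicographically in capture order.
    """
    frames = sorted(frames)
    excess = len(frames) - max_frames
    return frames[:excess] if excess > 0 else []

def capture_forever():
    FRAME_DIR.mkdir(parents=True, exist_ok=True)
    while True:
        name = FRAME_DIR / time.strftime("%Y%m%d-%H%M%S.jpg")
        # Grab one frame from the USB webcam without the timestamp banner.
        subprocess.run(["fswebcam", "-r", "1920x1080", "--no-banner", str(name)])
        for old in purge_oldest(list(FRAME_DIR.glob("*.jpg")), MAX_FRAMES):
            old.unlink()
        time.sleep(INTERVAL)
```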
Everything seemed to have fallen into place until, after a few minutes of testing at 5 fps, the camera reset. And then it reset again after another few minutes. Unluckily, upon reset it returns to the mass storage / video camera prompt, which requires physical interaction—a hard failure in the field.
With no other webcam to scrounge up, the project seemed unworkable. I thumbed through the camera’s settings, which include a time-lapse mode, but the interval can’t be set any longer than a few seconds. Then I noticed the Wi-Fi settings, and wondered whether I could use it as an IP camera.
In Wi-Fi mode, the camera creates a wireless network which you join from your smartphone, and then you are able to control the camera via an app. It’s not entirely clear whether there is an official app to do so, but CamKing seemed to be the closest thing, and while it is not the most well-crafted app in existence, it works. It allows remote configuration of some of the camera’s settings, such as the exposure value, and best of all, it can capture still frames at 5K resolution, far exceeding the 1080p I could get from the camera as a USB video device.
The only challenge now was figuring out the protocol for triggering a photo.
I pulled the Android APK for CamKing to decompile it, and found that it talks to a web server at 192.168.1.254 that serves a browsable directory index of the SD card, as well as a video stream on port 8192. Taking photos, changing settings, and so on are done by making a GET request with a corresponding command number:
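The requests look roughly like this. The address and the GET-with-command-number shape come from the decompiled app; the `custom`/`cmd`/`par` query parameters follow the convention seen on similar Novatek-based cameras, and the specific command number below is a placeholder, not necessarily this camera's real table:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

CAMERA = "http://192.168.1.254"

def command_url(cmd, par=None):
    """Build a command URL like http://192.168.1.254/?custom=1&cmd=NNNN[&par=N]."""
    params = [("custom", 1), ("cmd", cmd)]
    if par is not None:
        params.append(("par", par))
    return f"{CAMERA}/?{urlencode(params)}"

def send(cmd, par=None):
    """Issue a command via GET and return the response body
    (these cameras tend to reply with a short XML status document)."""
    with urlopen(command_url(cmd, par)) as resp:
        return resp.read()

# Hypothetical command number -- the real table was mapped out
# by sniffing CamKing's traffic.
TAKE_PHOTO = 1001
```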
Amazingly, the web server appears to be HFS, an open source web server for Windows. I was originally led to HFS by the HTTP headers, but dismissed it because it’s a GUI app, and, well, for Windows. Even when the API was returning paths starting A:\, I chalked it up to some confused developer. Then it dawned on me that HFS is running in Wine! Surely this was the most practical solution.
Another trick I learned is that rvictl -s [udid] on macOS will create an rvi# interface that taps the network connection of an iOS device (I wasn’t able to inject any packets, but I tried), a handy way to sniff the unencrypted traffic between CamKing and the camera as I mapped out the command numbers.
The APEMAN uses a digital camera SoC from Novatek, a fact that is not well hidden: photos on the SD card are stored in a directory called NOVATEK/, and the USB vendor ID belongs to them. I suspect the SoC is a clone of the Ambarella sports camera SoC, once used in the GoPro, and has found its way into most of the sub-$100 action cams and dashboard cameras with unheard-of brands like Campark and Crosstour. Steven Hiscocks’s web interface to the YI Dash Cam, for instance, uses some of the same command numbers and so likely works with these other devices.
My Python module for communicating with the API is published on GitHub, although the code is very much a rough draft.
Porting the time-lapse script over to the new API was painless. However, as the Raspberry Pi now needs to be on the camera’s Wi-Fi network (the camera AP does not support multiple clients), I lose SSH access to monitor its status. I made two improvements to help:
First, I was able to wrest control over the green activity LED, no small feat on the Raspberry Pi 3 Model B, to blink out a status report after each capture.
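On Raspberry Pi OS the activity LED is exposed through sysfs as led0, once it is detached from its default trigger. A sketch of the idea, with the status inputs and the blink encoding being my own invention:

```python
import time
from pathlib import Path

LED = Path("/sys/class/leds/led0")  # the green ACT LED, assuming the kernel exposes it here

def blink_counts(capture_ok, wifi_ok):
    """Encode a status report as a list of blink counts, one group per item.

    The encoding is arbitrary -- just something countable by eye:
    three blinks for good, one for bad."""
    return [3 if wifi_ok else 1, 3 if capture_ok else 1]

def report(capture_ok, wifi_ok):
    (LED / "trigger").write_text("none")  # detach the default mmc activity trigger
    for count in blink_counts(capture_ok, wifi_ok):
        for _ in range(count):
            (LED / "brightness").write_text("1")
            time.sleep(0.15)
            (LED / "brightness").write_text("0")
            time.sleep(0.15)
        time.sleep(0.6)  # pause between groups
```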
Second, I configured the device to automatically join the camera network when it is broadcasting, and rejoin the home network when it goes away. This way I can easily gain debug access simply by powering down the camera.
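With wpa_supplicant this can be expressed declaratively: list both networks in wpa_supplicant.conf and let priority prefer the camera whenever it is in range. The SSIDs, passphrases, and priority values here are placeholders:

```conf
network={
    ssid="APEMAN-CAM"        # the camera's AP (placeholder SSID)
    psk="camera-password"
    priority=10              # prefer this network whenever it is visible
}

network={
    ssid="home-network"
    psk="home-password"
    priority=1               # fall back here when the camera is off
}
```

Depending on the build, wpa_supplicant may need to be nudged into periodic scans to notice the camera’s AP reappearing while it is associated with the home network.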
I did not succeed at powering the camera off of the Raspberry Pi without triggering the USB mode selection menu. (It might be useful to know how in the future; de-authorizing the device using udev wasn’t enough.) But since the communication is now wireless, I was able to simply move the Raspberry Pi indoors and power the camera directly from the battery. This also pushed the battery life over 24 hours.
So, where’s the video? Ultimately, I didn’t capture the footage I had hoped for, and I decided to stop investigating histogram matching and tone mapping to improve the quality of the image. Hopefully I’ll be able to use what I’ve learned on another project.
DeepSound is a steganography utility that can hide data inside of audio files. The contents can optionally be protected with a password, in which case DeepSound advertises that it encrypts using AES-256.
Used incorrectly, the security of all cryptographic algorithms, including (or perhaps especially) the beloved AES, can be devastatingly eroded. I took a peek at DeepSound to see if I could find any weaknesses in the way it performs encryption that would allow me to recover the payload from a carrier file.
The first thing I noticed was that DeepSound will only prompt for a password when it is fed an audio file that actually does contain an encrypted payload. This ability to distinguish between encrypted and unencrypted payloads without first providing the password means that there is some metadata that should be easily sifted out of the carrier file. This was my first lead to investigate.
Since DeepSound is written using .NET and not obfuscated, it was possible to decompile the binary and just read the code. As a newcomer to reverse engineering C# apps, I found JetBrains dotPeek to be useful for exploring the decompiled code, and dnSpy to be a helpful debugger.
It was easy to understand from the decompiled code how DeepSound stores the header for its payload inside the audio file using a simple encoding mechanism. After it has located this header, it checks a flag to see whether the payload is encrypted and prompts for the password if so.
To validate the entered password, DeepSound computes the SHA-1 hash of some AES key—not the password directly—and compares it to a hash stored in the header. But it isn’t obvious here where this AES key came from; if it were generated with a good password-based key derivation function, for instance, then this scheme might be reasonably secure.
It turns out that the line this.Key=e.Key, which copies the entered password into an instance variable, does more than meets the eye:
A secure PBKDF was too much to hope for: the password is used directly as the AES key, and the SHA-1 of the password, unsalted and uniterated, is what’s written into the audio file.
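The stored hash can therefore be recomputed from a candidate password with a few lines of Python, a straight re-implementation of the scheme just described (the function name is mine):

```python
import hashlib

def deepsound_hash(password: str) -> str:
    """SHA-1 of the password truncated or NUL-padded to exactly 32 bytes,
    matching the hash DeepSound writes into the carrier's header."""
    buf = password.encode("utf-8")[:32].ljust(32, b"\0")
    return hashlib.sha1(buf).hexdigest()
```

This also makes the consequence concrete: any two passwords that agree in their first 32 bytes yield the same hash, and hence the same AES key.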
From here it was easy to write a script to locate the payload in a carrier file and extract the SHA-1 hash from its header. Then it should be possible to crack the password by running a tool like John the Ripper or hashcat, or sometimes just by searching Google.
Except that I overlooked something: The Key setter doesn’t compute the hash of the password directly; it copies it into a 32-byte buffer and computes the hash of that. In effect, it truncates or null-pads the password to a length of 32 bytes first, an idiosyncrasy that precludes the use of off-the-shelf tools.
I decided to contribute support for this flavor of SHA-1 hash to John the Ripper, a tool that already knows about the imaginative password hashing schemes used by dozens of software packages. The developers of John have realized that most of these schemes are small variations on one another, whether it’s md5(sha1(password)) or sha1(md5(md5(password))) or what have you. Optimizing each of these algorithms by hand is too time consuming, so they have made a clever system that allows these schemes to be expressed in terms of some primitive building blocks.
For instance, DeepSound’s hashing scheme can be expressed in terms of four of these primitives: First, zero out our buffer. Then copy the password to it. Set the length of the buffer to 32, regardless of how long the password was. Lastly, compute the SHA-1 of the buffer.
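As a sketch, those four steps map onto John’s dynamic primitives roughly like this in john.conf. The section name is a placeholder and I am paraphrasing the dynamic format syntax from memory, not quoting the merged patch:

```ini
[List.Generic:dynamic_9999]
Expression=sha1($p null-padded to 32 bytes) (DeepSound)
Func=DynamicFunc__clean_input
Func=DynamicFunc__append_keys
Func=DynamicFunc__set_input_len_32
Func=DynamicFunc__SHA1_crypt_input1_to_output1_FINAL
```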
Admittedly, finding the right sequence of primitives was not trivial, and there are a number of other switches to flip that I found a bit confusing. But in the end it took only 8 lines to teach John about the new hashing scheme.
My changes have been contributed back to the John the Ripper community edition, including the deepsound2john.py script for extracting hashes from carrier files. My thanks to Dhiru Kholia for the code review.
Unbeknownst to me, DeepSound was featured in a scene of Mr. Robot, which caught the attention of Alfonso Muñoz. Alfonso has a nice write-up of his black-box reverse engineering of the payload encoding, in which he noticed another bad flaw: the use of ECB mode for encryption. Even without the password you can see penguins.
I’ve published Thunderbolt 3 Unblocker, a macOS kernel extension that patches IOThunderboltFamily to disable peripheral compatibility checks. This permits the use of some unsupported peripherals such as the Razer Core external GPU enclosure.
Apple has chosen to prevent Thunderbolt 3 devices using currently available controller chips from Texas Instruments from enumerating and functioning on the 2016 MacBook Pros. … Thunderbolt 3 peripherals [released prior to November 2016] which use this controller chip are incompatible with the new 2016 Thunderbolt 3 MacBooks.
These existing devices use Intel’s Thunderbolt 3 chipset (Alpine Ridge) in combination with the first generation of TI USB-C chipset (TPS65982). Apple requires the 2nd generation TPS65983 chipset for peripherals to be compatible.
Of course, patching your kernel to make it do unsupported things is not the most cautious idea, and there is likely a reason why Apple decided to disable this older chipset in the first place.
Previously, Tian Zhang’s TB3 Enabler script could be used to patch the IOThunderboltFamily binary on disk. This technique required that the script be kept in sync with macOS releases. The patch would also need to be reapplied after every system upgrade, and reverting back could be difficult. The runtime patching technique of Thunderbolt 3 Unblocker addresses all of these shortcomings.
One of the contributions of Thunderbolt 3 Unblocker is xnu_override, the small static library that does the patching in the kernel. (A few people also wrote to me to mention Lilu, a larger project with similar goals.) One nice feature of xnu_override is that it can revert all patches when you unload the kext.