I wrote an Objective-C bridge for Node.js. Don't use it.
Clarification
In the article below I go pretty hard against FFIs in Node.js. To clarify, when I use that term in this article, I am mainly referring to general-purpose FFIs (those that allow calls to arbitrary foreign functions). Properly scoped and limited bindings (where specifically written functions, and only those functions, are callable) can be built and used relatively safely in Node.js (though there's still inherent risk whenever native code is involved). The main purpose of this article is to caution against the use of general-purpose FFIs in Node.js.
Introduction
I wrote a Node.js package that allows you to call Objective-C APIs from Node.js. I do want to note that this is not an original idea of mine, and there is prior work: Nathan Rajlich (former Node.js core contributor, now at Vercel) made his own Node-Objective-C bridge, NodObjC, years ago. His bridge is built on top of the now-long-dead ffi package (which he also heavily contributed to). I've (somewhat shamelessly) ripped off at least the name of his bridge. Mine is called nobjc. But you shouldn't use it.
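For context, here's roughly what this kind of bridge looks like in use. The sketch below follows NodObjC's documented API (the package name and calls come from its README); nobjc's own surface may differ:

```js
// Calling Objective-C from Node.js via a bridge, NodObjC-style.
// Each Objective-C message send becomes a JavaScript function call.
var $ = require('nodobjc');

$.framework('Foundation'); // load a framework's classes at runtime

var pool = $.NSAutoreleasePool('alloc')('init');
var str = $.NSString('stringWithUTF8String', 'Hello, Objective-C!');

console.log(str); // the bridge wires up toString() for NSString

pool('drain');
```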
Why not use it?
Having a foreign function interface (FFI) in Node.js may sound like a great idea to some. The ability to call functions written in an entirely different language from JavaScript sounds like it would open up a wealth of opportunities for interoperability. While that may be true, it can also open up an enormous attack surface that the supposed benefits do not justify, primarily on desktop platforms. More specifically: Electron.
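To make that attack surface concrete, here's a sketch of what a general-purpose FFI permits, using the ffi-napi package (a successor to node-ffi; the package choice is illustrative, not an endorsement). Because bindings are constructed by name at runtime, any exported symbol in any loadable library is reachable from JavaScript:

```js
// A general-purpose FFI: bind arbitrary native functions by name at runtime.
const ffi = require('ffi-napi');

// Passing null binds symbols from the current process (including libc).
const libc = ffi.Library(null, {
  getpid: ['int', []],          // harmless...
  system: ['int', ['string']],  // ...and not: runs an arbitrary shell command
});

console.log('pid:', libc.getpid());
libc.system('echo hello from native code');
```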
FFI is bad on Electron
There's a reason why I chose to write my bridge specifically around Objective-C. The only practical (for some meaning of that word) use case I see for nobjc is in macOS Electron applications (or similar desktop apps built on top of Node.js). But again: don't use it for that. As a security researcher who has dug deep into macOS's internals, I can say one thing fairly definitively: many Electron apps on macOS are less secure against code injection than native apps.
Code injection and Electron
Electron allows developers to write desktop applications in JavaScript. Most of the time, this JavaScript source code is shipped directly with the application and just sits on disk, ready to be executed. What's worse is that this code is often executed without integrity checks, meaning that any malicious application also running on the user's computer (one with write access to the code) can modify it and inject malware into it.
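As an illustration of how little this attack takes, here's a sketch that tampers with a hypothetical unprotected app's packaged JavaScript using the @electron/asar module (the path, entry point, and payload are all hypothetical):

```js
// Sketch: modifying the shipped JavaScript of an Electron app that performs
// no integrity checks. Everything here runs with ordinary user privileges.
const asar = require('@electron/asar');
const fs = require('fs');

async function inject(archive, payload) {
  asar.extractAll(archive, '/tmp/app');            // unpack the app's code
  fs.appendFileSync('/tmp/app/index.js', payload); // append code to the entry point
  await asar.createPackage('/tmp/app', archive);   // repack over the original
}

// Hypothetical target; the placeholder stands in for stealer code.
inject(
  '/Applications/Example.app/Contents/Resources/app.asar',
  '\n/* attacker-controlled code would run here on next launch */'
);
```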
This type of code injection has happened. As early as 2019, a trojan dubbed Spidey Bot was discovered modifying the JavaScript of the Discord client in an info-stealer attack. Later stealers such as AnarchyGrabber, NitroHack, PirateStealer, and Iluria Stealer have continued this trend of code-injection attacks on Discord. But Discord is not special here; any Electron app can be vulnerable.
Discord's stance on code injection
When I realized this attack vector existed in Discord, I reached out to their security team and received this response:
We do not consider physical/local attacks as valid security issues at this time. This stems from Chromium/Electron, the upstream software we use for our app, being vulnerable to this kind of attack. To read about why this is not considered as a vulnerability in their (and our) threat model you can check out https://chromium.googlesource.com/chromium/src/+/master/docs/security/faq.md#why-arent-physically_local-attacks-in-chromes-threat-model
The article Discord's team linked to has, at the time of writing, this to say:
We consider these attacks outside Chrome's threat model, because there is no way for Chrome (or any application) to defend against a malicious user who has managed to log into your device as you, or who can run software with the privileges of your operating system user account. Such an attacker can modify executables and DLLs, change environment variables like PATH, change configuration files, read any data your user account owns, email it to themselves, and so on. Such an attacker has total control over your device, and nothing Chrome can do would provide a serious guarantee of defense. This problem is not special to Chrome — all applications must trust the physically-local user.
Code injection and macOS
Now, if you are a Windows user (or someone who researches security on Windows), the above may seem like a fairly reasonable position to take. However, from my perspective as a macOS security researcher, the logic does not quite hold for the macOS platform. Many of the capabilities that the Chrome team says malicious users (or apps) have are not possible (or are severely restricted) on macOS.
DLL hijacking (dylib hijacking, in macOS terms) is pretty much dead on macOS thanks to the Hardened Runtime. Executables also cannot be easily modified and still work as expected, due to Apple's fairly strict code-signing requirements. Overall, it's extremely difficult to achieve code injection on modern versions of Apple platforms, and with Apple's introduction of Memory Integrity Enforcement, it's likely to become even harder.
But if an Electron app on macOS ships JavaScript that just sits on disk where other apps can access and potentially modify it (and does not perform integrity checks on the code before executing it), that leaves a wide-open attack surface for code injection into the app. This is what I meant earlier when I said that many Electron apps are less secure against code injection than native apps.
While Apple's App Sandbox can provide filesystem containerization, I don't know of any Electron apps that take advantage of it for their JavaScript code. Electron apps that store their JavaScript directly in their bundles may be better protected, as Apple's security features on macOS generally protect app bundles from outside tampering. However, in my own testing, I have found this protection to be inconsistent in practice.
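To make "integrity checks" concrete: the naive approach is for the app to hash its own code at startup and compare against a known-good value, as in the sketch below (the file name and hash are hypothetical). Note the fundamental weakness: if the check lives in the same modifiable JavaScript, an attacker who can patch the code can patch out the check too, which is why the check really needs to live in the signed native binary instead.

```js
// Naive startup integrity check: hash the shipped JS and compare.
// Weak by construction: this code is itself part of the modifiable payload.
const crypto = require('crypto');
const fs = require('fs');
const path = require('path');

// Hypothetical known-good SHA-256 of the app's bundled entry point.
const EXPECTED = '9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08';

const bundle = fs.readFileSync(path.join(__dirname, 'app.js'));
const actual = crypto.createHash('sha256').update(bundle).digest('hex');

if (actual !== EXPECTED) {
  throw new Error('app.js failed its integrity check; refusing to start');
}
```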
What should be done
Throughout my research, I also reached out to the Electron team about my concerns and got a response from Keeley Hammond. They informed me that Electron offers a feature called ASAR Integrity, which builds runtime integrity checks directly into applications to ensure that the JavaScript code on disk has not been tampered with. While, obviously, no security solution is perfect, the Electron team is actively patching bypasses of it. I'd encourage all Electron developers to use this feature.
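For reference, ASAR Integrity is controlled by Electron "fuses" flipped at package time. The sketch below uses the documented @electron/fuses API; the binary path is hypothetical, and flipping the fuse alone isn't sufficient, since your packager must also embed the expected ASAR header hash (on macOS, in the app's Info.plist):

```js
// Flip the fuses that enable ASAR Integrity in a packaged Electron binary.
const { flipFuses, FuseVersion, FuseV1Options } = require('@electron/fuses');

async function hardenBinary(binaryPath) {
  await flipFuses(binaryPath, {
    version: FuseVersion.V1,
    // Verify the app.asar header hash at runtime before loading code.
    [FuseV1Options.EnableEmbeddedAsarIntegrityValidation]: true,
    // Refuse to load app code from loose files outside the archive.
    [FuseV1Options.OnlyLoadAppFromAsar]: true,
  });
}

// Hypothetical path to the packaged app's main executable.
hardenBinary('/path/to/MyApp.app/Contents/MacOS/MyApp');
```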
Why did I write nobjc?
And now we circle back to nobjc. Why did I write it? There are many reasons, but primarily as a proof of concept of what not to do with Node.js. The security of what you're developing should always be top of mind, and FFIs in desktop Node.js applications often pose far too much risk to be worth it. If you're considering using an FFI in your Electron application: please don't use mine. And please use ASAR Integrity. And if you liked this article and want to see what I put out next: watch this space.