Notes on writing Rust-based Wasm

2026-03-08 · notes.brooklynzelenka.com

I’ve been writing an increasing amount of Rust‑based Wasm over the past few years. The internet has many opinions about Wasm, and wasm-bindgen is — let’s say — not universally beloved, but as I get more experience with it and learn how to work around its shortcomings, I’ve found some patterns that have dramatically improved my relationship with it.

I want to be clear up front about two things:

  1. I deeply appreciate the work of the wasm-bindgen maintainers.
  2. It’s entirely possible that there are better ways to work with bindgen than presented here; this is just what’s worked for me in practice!

I’ve seen excellent programmers really fight with bindgen. I don’t claim to have all the answers, but this post documents a set of patterns that have made Rust+Wasm dramatically less painful for me.

TL;DR

Unless you have a good reason not to:

  1. Pass everything over the Wasm boundary by &reference
  2. Prefer Rc<RefCell<T>> or Arc<Mutex<T>> over &mut
  3. Do not derive Copy on exported types
  4. Use wasm_refgen for any type that needs to cross the boundary in a collection (Vec, etc)
  5. Prefix all Rust-exported types with Wasm* and set the js_name/js_class to the unprefixed name
  6. Prefix all JS-imported types with Js*
  7. Implement From<YourError> for JsValue using js_sys::Error for all Rust-exported error types

Some of these may seem strange without further explanation. The sections below give the rationale in more detail.

A Quick Refresher

wasm-bindgen generates glue code that lets Rust structs, methods, and functions be called from JS/TS. Some Rust types have direct JS representations (those implementing IntoWasmAbi); others live entirely on the Wasm side and are accessed through opaque handles.

Wasm bindings often look something like this:

#[wasm_bindgen(js_name = Foo)]
pub struct WasmFoo(RustFoo);
 
#[wasm_bindgen(js_name = Bar)]
pub struct WasmBar(RustBar);

Conceptually, JS holds tiny objects that look like { __wbg_ptr: 12345 }, which index into a table on the Wasm side that owns the real Rust values.

The tricky part is that you’re juggling two memory models at once:

  • JavaScript: garbage‑collected, re‑entrant, async
  • Rust: explicit ownership, borrowing, aliasing rules

Bindgen tries to help, but it both under‑ and over‑fits: some safe patterns are rejected, and some straight-up footguns are happily accepted. Ultimately, everything that crosses the boundary must have some JS representation, so it pays to be cognisant of what that representation is.

flowchart TD
    subgraph JavaScript
        subgraph Foo
            jsFoo["{ __wbg_ptr: 42817 }"]
        end

        subgraph Bar
            jsBar["{ __wbg_ptr: 71902 }"]
        end
    end

    subgraph Wasm
        subgraph table[Boundary Table]
            objID1((42817)) --> WasmFoo

            subgraph WasmFoo
                arc1[RustFoo]
            end

            objID2((71902)) --> WasmBar
            subgraph WasmBar
                arc2[RustBar]
            end
        end
    end

    jsFoo -.-> objID1
    jsBar -.-> objID2

Should You Write Manual Bindings?

My take on most things is “you do you”, and this one is very much a matter of taste. I see a fair amount of code online that seems to prefer manual conversions with js_sys. This is a reasonable strategy, but I have found it to be time consuming and brittle. If you change your Rust types, the compiler isn’t going to help you when you’re manually calling dyn_into to do runtime checks. Bindgen is going to insert the same runtime checks either way, but if you lean into its glue (including with some of the patterns presented here), you can get much better compile-time feedback.

Names Matter

It’s an old joke that the two hardest problems in computer science are naming, caching, and off-by-one errors. Naming is extremely important for mental framing and keeping track of what’s happening, both of which can be a big source of pain when working with bindgen. As a rule, I use the following naming conventions:

IntoWasmAbi Types

Trait IntoWasmAbi […] A trait for anything that can be converted into a type that can cross the Wasm ABI directly.
Source

These are the primitive types of Wasm, such as u32, String, Vec<u8>, and so on. They get converted to/from native JS and Rust types when they cross the boundary. We do not need to do anything to these types.

Rust-Exported Structs Get Wasm*

This is where you’ll usually spend most of your time. Wrapping Rust enums and structs in newtypes to re-expose them to JS is the bread and butter of Wasm. These wrappers get prefixed with Wasm* to help distinguish them from JS-imported interfaces, IntoWasmAbi types, and plain Rust objects. On the JS-side we can strip the Wasm, since it will only get the one representation, and (if done correctly) the JS side generally doesn’t need to distinguish where a type comes from.

#[derive(Debug, Clone, Copy, PartialOrd, Ord, PartialEq, Eq)]
pub enum StreetLight {
 Red,
 Yellow,
 Green,
}
 
#[derive(Debug, Clone, PartialOrd, Ord, PartialEq, Eq)]
#[wasm_bindgen(js_name = StreetLight)]
pub struct WasmStreetLight(StreetLight);
 
#[wasm_bindgen(js_class = StreetLight)]
impl WasmStreetLight {
 #[wasm_bindgen(constructor)]
 pub fn new() -> Self {
 Self(StreetLight::Red)
 }
 
 // ...
}

On the JS side, there’s only one StreetLight, so the prefix disappears. On the Rust side, the prefix keeps exported types visually distinct from:

  • Plain Rust types
  • JS‑imported interfaces
  • IntoWasmAbi values

JS-Imported Interfaces Get Js*

Any interface brought into Rust via extern "C" gets a duck-typed interface (by default). These pass over the boundary without restriction, which makes them a very helpful escape hatch.

#[wasm_bindgen]
extern "C" {
 #[wasm_bindgen(js_name = logCurrentTime)]
 pub fn js_log_current_time(timestamp: u32);
}
 
#[wasm_bindgen]
extern "C" {
 #[wasm_bindgen(js_name = Hero)]
 type JsCharacter;
 
 #[wasm_bindgen(method, getter, js_name = hp)]
 pub fn js_hp(this: &JsCharacter) -> u32;
}
 
// Elsewhere
let gonna_win: bool = maelle.js_hp() != 0;

Duck typing is really helpful for cases where you want to expose a Rust trait to JS: as long as your Rust-exported type implements the interface, you can accept your Rust-exported type as a JS-imported type, while retaining the ability to replace it with actual JS-imported types. A concrete example: if you’re exporting a storage interface, you likely have a default Rust implementation, but want extensibility so downstream devs can give it an IndexedDB or S3 backend.
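In plain Rust terms, the swappable-backend idea looks like the sketch below (the `Storage` trait and `MemoryStorage` names are hypothetical, not from a real crate); on the Wasm side, the duck-typed JS import plays the role the trait plays here:

```rust
use std::collections::HashMap;

// Hypothetical interface: the default Rust backend and any JS-provided
// backend (IndexedDB, S3, ...) both satisfy the same shape.
pub trait Storage {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
    fn put(&mut self, key: &str, value: Vec<u8>);
}

// Default in-memory implementation shipped with the crate
#[derive(Default)]
pub struct MemoryStorage(HashMap<String, Vec<u8>>);

impl Storage for MemoryStorage {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }

    fn put(&mut self, key: &str, value: Vec<u8>) {
        self.0.insert(key.to_string(), value);
    }
}
```

Downstream code only ever talks to `Storage`, so swapping the backend doesn’t ripple through the rest of the API.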

We’re going to abuse this “duck typed JS-imports on a Rust-export” trick later for wasm_refgen.

The main gotchas with this approach are that 1. it’s brittle if the interface changes, and 2. if you don’t prefix your methods on the Rust-side with js_*, you can run into namespace collisions (hence why I recommend prefixing these everywhere by convention). As an added bonus, this makes you very aware of where you’re making method calls over the Wasm boundary.

Don’t Derive Copy

Copy makes it trivially easy to accidentally duplicate a Rust value that is actually a thin handle to a resource, resulting in null pointers. Just make a habit of avoiding it on exported wrappers. This can be a hard muscle memory to break since we usually want Copy wherever possible in normal Rust code.

Copy is only acceptable when the exported wrapper is around pure data that implements IntoWasmAbi, never for handles. I chalk this up as an optimisation; default to non-Copy unless you’re really sure it’s okay.
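A plain-Rust sketch of the failure mode (the names here are illustrative, not bindgen internals): the handle is Copy, the resource it points at is not, so freeing through one copy silently invalidates every other copy:

```rust
use std::collections::HashMap;

// Illustrative stand-ins for the JS-side handle and the boundary table
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct Handle(u32);

#[derive(Default)]
struct BoundaryTable(HashMap<u32, String>);

impl BoundaryTable {
    fn insert(&mut self, id: u32, value: &str) -> Handle {
        self.0.insert(id, value.to_string());
        Handle(id)
    }

    // Freeing through one copy of the handle...
    fn free(&mut self, h: Handle) {
        self.0.remove(&h.0);
    }

    // ...leaves every other copy dangling.
    fn get(&self, h: Handle) -> Option<&String> {
        self.0.get(&h.0)
    }
}
```

With `Copy`, `let h2 = h1;` duplicates the handle without any visible clone; after `table.free(h1)`, `table.get(h2)` comes back empty even though `h2` still “looks” alive.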

Avoiding Broken Handles

Try as it might, wasm-bindgen is unable to prevent handles breaking at runtime. A common culprit is passing an owned value to Rust:

#[wasm_bindgen(js_class = Foo)]
impl WasmFoo {
 #[wasm_bindgen(js_name = "doSomething")]
 pub fn do_something(&self, bar: Bar) -> Result<(), Error> {
 // ...
 }
 
 #[wasm_bindgen(js_name = "doSomethingElse")]
 pub fn do_something_else(&self, bars: Vec<Bar>) -> Result<(), Error> {
 // ...
 }
}

If you do the above, it will of course consume your Bar(s), but since this goes over the boundary you get no help from the compiler about how you manage the JS side! The object will get freed on the Rust side, but you still have a JS handle that now points to nothing. You might say something like “so much for memory safety”, and you wouldn’t be wrong.

Why would you find yourself in this situation? There’s a couple reasons:

  • Bindgen forbids &[T] unless T: IntoWasmAbi
  • Vec<&T> is not allowed
  • You just want the compiler to stop yelling

Types that are IntoWasmAbi but not Copy get cloned over the boundary (no handle), so they behave differently from both non-IntoWasmAbi and Copy types.

Prefer Passing By Reference (by Default)

If you take one thing from this post, take this:

Never consume exported values across the boundary unless you have a clear reason to do so and are going to manage the handle on the JS side.

This one is pretty straightforward: pass everything around by reference. Consuming a value is totally “legal” to the compiler, which will happily free the memory on the Rust side, but the JS-side handle will not get cleaned up. The next time you go to use that handle, it will throw an error. Unless you’re doing something specific with memory management, just outright avoid this situation: pass by &reference and use interior mutability.

This is a pretty easy pattern to follow: default to wrapping non-IntoWasmAbi types in Rc<RefCell<T>> or Arc<Mutex<T>>, depending on whether and how your code is structured for async. The cost of crossing the Wasm boundary easily eclipses an Rc refcount bump, so this is highly unlikely to be a performance bottleneck.

#[derive(Debug, Clone)]
#[wasm_bindgen(js_name = Foo)]
pub struct WasmFoo(pub(crate) Rc<RefCell<Foo>>);
 
#[derive(Debug, Clone)]
#[wasm_bindgen(js_name = Bar)]
pub struct WasmBar(pub(crate) Rc<RefCell<Bar>>);
 
#[wasm_bindgen(js_class = Foo)]
impl WasmFoo {
 #[wasm_bindgen(js_name = "doSomething")]
 pub fn do_something(&self, bar: &WasmBar) -> Result<(), Error> {
 // ...
 }
}

Avoid &mut

This one can be pretty frustrating when it happens: there are cases where taking &mut self can throw runtime errors due to re-entrancy. This pops up more frequently than I would have expected given that JS is single-threaded by default, but JS’s async doesn’t have to respect Rust’s compile-time exclusivity checks.

If you can’t prove exclusivity, don’t pretend you have it. Use the relevant interior‑mutability primitive for your concurrency model.
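As a sketch of the shape I reach for (a hypothetical counter; for async or multithreaded code, swap in Arc<Mutex<T>>):

```rust
use std::{cell::RefCell, rc::Rc};

// Methods take &self; mutation goes through RefCell at runtime
// instead of claiming compile-time exclusivity with &mut self.
#[derive(Debug, Clone, Default)]
pub struct Counter(Rc<RefCell<u64>>);

impl Counter {
    // &self, not &mut self: safe to call through a shared handle
    pub fn bump(&self) -> u64 {
        let mut n = self.0.borrow_mut();
        *n += 1;
        *n
    }

    pub fn value(&self) -> u64 {
        *self.0.borrow()
    }
}
```

Clones share the same Rc’d state, which mirrors multiple JS handles pointing at one Rust value.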

Ducking Around Reference Restrictions

As mentioned earlier, you can use extern "C" JS-imports to model any duck typed interface, including Rust-exports. This means that we are able to work around several restrictions in wasm-bindgen.

Owned Collection Restriction

Bindgen restricts which types can be passed across the boundary. The one folks often run into first is that &[T] only works when T is IntoWasmAbi (including JS-imported types) — i.e. usually not your Rust-exported structs. This means that you are often forced to construct a Vec<T>. This makes sense since JS is going to take control over the resulting JS array, and can mutate it as it pleases. It also means that when the type comes back in, you are unable to accept it as &[T] or Vec<T> unless the earlier IntoWasmAbi caveat applies.

A classic example of this is returning an owned Vec<T> instead of a slice when T is not a JS-managed type. What’s returned to JS are not a bunch of Ts, but rather handles (e.g. { __wbg_ptr: 12345 }) to Ts that live on the Wasm side.

On the other hand, we’re able to treat handles as duck typed objects that conform to some interface. Handles are far less restricted than Rust-exported types, and can be passed around more freely.

The workaround is fairly straightforward:

  • Make your exported type cheap to clone
  • Expose a namespaced clone method
  • Import that method via a JS interface
  • Convert with friendly ergonomics (.into)

// Step 1: make it inexpensive to `clone` (i.e. using `Rc` or `Arc` if not already cheap)
#[derive(Debug, Clone)]
#[wasm_bindgen(js_name = Character)]
pub struct WasmCharacter(Rc<RefCell<Character>>);
 
#[wasm_bindgen(js_class = Character)]
impl WasmCharacter {
 // ...
 
 // Step 2: expose a *namespaced* (important!) `clone` function on the Wasm export
 #[doc(hidden)]
 pub fn __myapp_character_clone(&self) -> Self {
 self.clone()
 }
}
 
#[wasm_bindgen]
extern "C" {
 type JsCharacter;
 
 // Step 3: create a JS-imported interface with that namespaced `clone`
 #[wasm_bindgen(method)]
 pub fn __myapp_character_clone(this: &JsCharacter) -> WasmCharacter;
}
 
// Step 4: for convenience, wrap the namespaced clone in a `.from`
impl From<JsCharacter> for WasmCharacter {
 fn from(js: JsCharacter) -> Self {
 js.__myapp_character_clone()
 }
}
 
// Nicely typed Vec
// Step 5: use it! vvvvv
pub fn do_many_things(js_characters: Vec<JsCharacter>) {
 let rust_characters: Vec<WasmCharacter> = js_characters.into_iter().map(Into::into).collect();
 // ... ^^^^^^^^^^
 // Converted
}

This still requires you to manually track which parts bindgen thinks are JS-imports and which it thinks are Rust-exports, but with our naming convention it’s pretty clear what’s happening. The conversion isn’t free, but (IMO) it makes your interfaces significantly more flexible and legible.

Use wasm_refgen

The above pattern can be a bit brittle — even while writing the boilerplate — since all of the names have to line up just so, and you don’t get compiler help when crossing the boundary like this. To help make this more solid, I’ve wrapped this pattern up as a macro exported from wasm_refgen.

use std::{rc::Rc, cell::RefCell, collections::HashMap};
use wasm_bindgen::prelude::*;
use wasm_refgen::wasm_refgen;
 
#[derive(Clone)]
#[wasm_bindgen(js_name = "Foo")]
pub struct WasmFoo {
 map: Rc<RefCell<HashMap<String, u8>>>, // Cheap to clone
 id: u32 // Cheap to clone
}
 
#[wasm_refgen(js_ref = JsFoo)] // <-- THIS
#[wasm_bindgen(js_class = "Foo")]
impl WasmFoo {
 // ... your normal methods
}

Here’s a diagram from the README about how it works:

 ┌───────────────────────────┐
 │ │
 │ JS Foo instance │
 │ Class: Foo │
 │ Object { wbg_ptr: 12345 } │
 │ │
 └─┬──────────────────────┬──┘
 │ │
 │ │
 Implements │
 │ │
 │ │
 ┌───────────▼───────────────┐ │
 │ │ │
 │ TS Interface: Foo │ Pointer
 │ only method: │ │
 │ __wasm_refgen_to_Foo │ │
 │ │ │
 └───────────┬───────────────┘ │
JS/TS │ │
─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─│─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┼ ─ ─ ─ ─ ─ 
 Wasm │ │
 │ │
 ┌───────────┼──────────────────────┼───────────┐
 │ ▼ ▼ │
 │ ┌────────────────┐ ┌────────────────┐ │
 │ │ │ │ │ │
 │ │ &JsFoo ◀────────▶ WasmFoo │ │
 │ │ Opaque Wrapper │ │ Instance #1 │ │
 │ │ │ │ │ │
 │ └────────────────┘ └────────────────┘ │
 └──────────────────────┬───────────────────────┘


 Into::into
 (uses `__wasm_refgen_to_Foo`) 
 (which is a wrapper for `clone`)



 ┌────────────────┐
 │ │
 │ WasmFoo │
 │ Instance #2 │
 │ │
 └────────────────┘

References passed over the boundary are handed to Rust by ownership by bindgen — but these handles grab the reference off the boundary table. Recall that our Into::into calls clone under the hood, so these are always safe to consume without breaking the JS handle!

pub fn do_many_things(js_foos: Vec<JsFoo>) {
 let rust_foos: Vec<WasmFoo> = js_foos.iter().map(Into::into).collect();
 // ...
}

Automatically Convert To JS Errors

There are a few ways to handle errors coming from Wasm, but IMO the best balance of detail and convenience is to turn them into js_sys::Errors on their way to JsValue. This lets us return Result<T, MyError> instead of Result<T, JsValue>.

For example, let’s say we have this type:

#[derive(Debug, Clone, thiserror::Error)]
pub enum RwError {
 #[error("cannot read {0}")]
 CannotRead(String),
 
 #[error("cannot write")]
 CannotWrite
}
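For reference, the #[error("…")] attributes above expand to a Display impl roughly like the following (a sketch, not thiserror’s exact output):

```rust
use std::fmt;

#[derive(Debug, Clone)]
pub enum RwError {
    CannotRead(String),
    CannotWrite,
}

// Roughly what thiserror derives from the #[error("...")] strings
impl fmt::Display for RwError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            RwError::CannotRead(what) => write!(f, "cannot read {what}"),
            RwError::CannotWrite => write!(f, "cannot write"),
        }
    }
}
```

That Display string is exactly what we’ll feed to js_sys::Error below.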

The fact that this is an enum is actually not a problem (the rest of the technique would work), but if you’re wrapping a different crate you’ll need a newtype wrapper:

// Important: no #[wasm_bindgen]
#[derive(Debug, Clone, thiserror::Error)]
#[error(transparent)] 
pub struct WasmRwError(#[from] RwError); // #[from] gets us `?` notation to lift into the newtype

We “could” slap a #[wasm_bindgen] on this and call it a day, but then we wouldn’t get nice error info on the JS side. Instead, we convert to JsValue ourselves with this final bit of glue:

impl From<WasmRwError> for JsValue {
 fn from(wasm: WasmRwError) -> Self {
 let err = js_sys::Error::new(&wasm.to_string()); // Error message
 err.set_name("RwError"); // Nice JS error type
 err.into() // Convert to `JsValue`
 }
}

Now you can return Result<T, WasmRwError>, including if you want to call the Wasm-wrapped function elsewhere in your code. It retains the nice error on the Rust side (at minimum types-as-documentation). You also get ? notation without needing to do in-place JsValue conversion everywhere this error occurs; bindgen will helpfully do the conversion for you.

  • Typed Rust errors
  • ? propagation
  • Real JS Error objects
  • Zero boilerplate at call sites

This works as a copy-paste template; I’ve considered wrapping it as a macro but it’s less than 10 LOC. I was actually surprised that something like #[wasm_bindgen(error)] wasn’t available (maybe it is and I just can’t find it; heck maybe it’s worth contributing upstream).
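The `?`-lifting that #[from] buys us can be sketched in plain Rust, with the From impl that #[from] would otherwise derive written out by hand (function names here are hypothetical):

```rust
#[derive(Debug)]
pub enum RwError {
    CannotWrite,
}

#[derive(Debug)]
pub struct WasmRwError(pub RwError);

// What #[from] derives for us: the conversion that `?` uses
impl From<RwError> for WasmRwError {
    fn from(e: RwError) -> Self {
        WasmRwError(e)
    }
}

fn write_inner() -> Result<(), RwError> {
    Err(RwError::CannotWrite)
}

// The exported function: `?` lifts RwError into WasmRwError here,
// and bindgen then converts WasmRwError to JsValue at the boundary.
pub fn write_exported() -> Result<(), WasmRwError> {
    write_inner()?;
    Ok(())
}
```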

Print Build Info on Startup

This is a quality of life improvement that has saved me many hours of grief: print the exact build version, dirty status, and Git hash to the console on startup. If you’re working on your Wasm project at the same time as developing a pure-JS library that consumes it, getting a JS bundler like Vite to pick up changes can be flaky at best.

This takes a bit of setup, especially if you’re in a Cargo workspace, but pays off. Here’s my current setup:

$WORKSPACE/Cargo.toml
[workspace]
resolver = "3"
members = [
 "build_info",
 # ...
]
$WORKSPACE/build_info/Cargo.toml
[package]
name = "build_info"
publish = false
# ...
$WORKSPACE/build_info/build.rs
use std::{
 env, fs,
 path::{Path, PathBuf},
 process::Command,
 time::{SystemTime, UNIX_EPOCH},
};
 
#[allow(clippy::unwrap_used)]
fn main() {
 let ws = env::var("CARGO_WORKSPACE_DIR").map_or_else(
 |_| PathBuf::from(env::var("CARGO_MANIFEST_DIR").unwrap()),
 PathBuf::from,
 );
 
 let repo_root = find_repo_root(&ws).unwrap_or(ws.clone());
 let git_dir = repo_root.join(".git");
 
 watch_git(&git_dir);
 
 let git_hash = cmd_out(
 "git",
 &[
 "-C",
 repo_root.to_str().unwrap(),
 "rev-parse",
 "--short",
 "HEAD",
 ],
 )
 .unwrap_or_else(|| "unknown".to_string());
 
 let dirty = cmd_out(
 "git",
 &["-C", repo_root.to_str().unwrap(), "status", "--porcelain"],
 )
 .is_some_and(|s| !s.is_empty());
 
 let git_hash = if dirty {
 let secs = SystemTime::now()
 .duration_since(UNIX_EPOCH)
 .unwrap()
 .as_secs();
 format!("{git_hash}-dirty-{secs}")
 } else {
 git_hash
 };
 
 println!("cargo:rustc-env=GIT_HASH={git_hash}");
}
 
fn cmd_out(cmd: &str, args: &[&str]) -> Option<String> {
 Command::new(cmd).args(args).output().ok().and_then(|o| {
 if o.status.success() {
 Some(String::from_utf8_lossy(&o.stdout).trim().to_string())
 } else {
 None
 }
 })
}
 
fn find_repo_root(start: &Path) -> Option<PathBuf> {
 let mut cur = Some(start);
 while let Some(dir) = cur {
 if dir.join(".git").exists() {
 return Some(dir.to_path_buf());
 }
 cur = dir.parent();
 }
 None
}
 
fn watch_git(git_dir: &Path) {
 println!("cargo:rerun-if-changed={}", git_dir.join("HEAD").display());
 
 if let Ok(head) = fs::read_to_string(git_dir.join("HEAD")) {
 if let Some(rest) = head.strip_prefix("ref: ").map(str::trim) {
 println!("cargo:rerun-if-changed={}", git_dir.join(rest).display());
 println!(
 "cargo:rerun-if-changed={}",
 git_dir.join("packed-refs").display()
 );
 }
 }
 
 println!("cargo:rerun-if-changed={}", git_dir.join("index").display());
 
 let fetch_head = git_dir.join("FETCH_HEAD");
 if fetch_head.exists() {
 println!("cargo:rerun-if-changed={}", fetch_head.display());
 }
}
$WORKSPACE/build_info/src/lib.rs
#![no_std]
pub const GIT_HASH: &str = env!("GIT_HASH");

…and finally where to get it to print in Wasm:

use wasm_bindgen::prelude::*;
 
// ...
 
#[wasm_bindgen(start)]
pub fn start() {
 set_panic_hook();
 
 // I actually use `tracing::info!` here,
 // but that's out of scope for this article
 web_sys::console::info_1(&format!(
 "your_package_wasm v{} ({})",
 env!("CARGO_PKG_VERSION"),
 build_info::GIT_HASH
 ).into());
}

Wrap Up

Rust+Wasm is powerful, but unforgiving if you pretend the boundary isn’t there. Be explicit, name things clearly, pass by reference, and duck-type around any (unreasonable) limitations bindgen places on you.

With any luck, that’s helpful to others! I may update this over time as I find myself using more patterns.


Comments

  • By mendyberger, 2026-03-08 15:08

    Things are slowly getting better in wasm world.

    The c-style interface of wasm is pretty limiting when designing higher-level interfaces, which is why wasm-bindgen is required in the first place.

    Luckily, Firefox is pushing an early proposal to expose all the web apis directly to wasm through a higher level interface based on the wasm component-model proposal.

    See https://hacks.mozilla.org/2026/02/making-webassembly-a-first...

    • By flohofwoe, 2026-03-08 15:58

      Tbh, apart from the demonstrated performance improvement for string marshalling I really fail to see how integrating the WASM Component Model into the browser is a good thing. It's a lot of complexity for a single niche use case (less overhead for string conversion - but passing tons of strings across the boundary is really only needed for one use case: when doing a 1:1 mapping of the DOM API).

      I really doubt that web APIs like WebGPU or WebGL would see similar performance improvements, and it's unclear how the much more critical performance problems for accessing WebGPU from WASM would be solved by the WASM Component Model (e.g. WebGPU maps WGPUBuffer content into separate JS ArrayBuffer objects which cannot be accessed directly from WASM without copying the data in and out of the WASM heap).

      • By josephg, 2026-03-08 18:45

        1. It’s not just one use case. I’m working on a product which makes heavy use of indexeddb from JavaScript for offline access. We’d love to rewrite this code into rust & webassembly, but performance might get worse if we did so because so many ffi calls would be made, marshalling strings from wasm -> js -> c++ (browser). Calling indexeddb from wasm directly would be way more efficient for us, too!

        2. It’s horrible needing so much JS glue code to do anything in wasm. I know most people don’t look at it, but JS glue code is a total waste of everyone’s time when you’re using wasm. It’s complex to generate. It can be buggy. It needs to be downloaded and parsed by the browser. And it’s slow. Like, it’s pure overhead. There’s simply no reason that this glue needs to exist at all. Wasm should be able to just talk to the browser directly.

        I’d love to be able to have a <script src=foo.wasm> on my page and make websites like that. JS is a great language, but there’s no reason to make developers bridge everything through JS from other languages. Nobody should be required to learn and use JavaScript to make web software using webassembly.

        • By azakai, 2026-03-08 19:43

          > Wasm should be able to just talk to the browser directly.

          Web APIs are designed for JavaScript, though, which makes this hard. For example, APIs that receive or return JS Typed Arrays, or objects with flags, etc. - wasm can't operate on those things.

          You can add a complete new set of APIs which are lower-level, but that would be a lot of new surface area and a lot of new security risk. NaCl did this back in the day, and WASI is another option that would have similar concerns.

          There might be a middle ground with some automatic conversion between JS objects and wasm. Say that when a Web API returns a Typed Array, it would be copied into wasm's linear memory. But that copy may make this actually slower than JS.

          Another option is to give wasm a way to operate on JS objects without copying. Wasm has GC support now so that is possible! But it would not easily help non-GC languages like Rust and C++.

          Anyhow, these are the sort of reasons that previous proposals here didn't pan out, like Wasm Interface Types and Wasm WebIDL bindings. But hopefully we can improve things here!

          • By wongarsu, 2026-03-08 20:11

            At least the DOM APIs are ostensibly designed to work in multiple languages, and are used by XML parsers in many languages.

            Some of the newer Web APIs would be difficult to port. But the majority of APIs have quite straightforward equivalents in any language with a defined struct type (which you admittedly do have to define for WASM, and whether that interface would end up being zero-copy would change depending on the language you are compiling to wasm)

            There is no solution without tradeoffs here, but the only reason JS-glue-code is winning out is because the complexity is moved from browsers to each language or framework that wants to work with wasm

            • By azakai, 2026-03-08 23:25

              > There is no solution without tradeoffs here, but the only reason JS-glue-code is winning out is because the complexity is moved from browsers to each language or framework that wants to work with wasm

              Correct, but this has been one of wasm's guiding principles since the start: move complexity from browsers to toolchains.

              Wasm is simple to optimize in browsers, far simpler than JavaScript. It does require a lot more toolchain work! But that avoids browser exploits.

              This is the reason we don't support the wasm text format in browsers, or wasm-ld, or wasm-opt. All those things would make toolchains easier to develop.

              You are right that this sometimes causes duplicate effort among toolchains, each one needing to do the same thing, and that is annoying. But we could also share that effort, and we already do in things like LLVM, wasm-ld, wasm-opt, etc.

              Maybe we could share the effort of making JS bindings as well. In fact there is a JS polyfill for the component model, which does exactly that.

      • By mendyberger, 2026-03-08 16:59

        It's not just about string performance, it's about making wasm a first class experience on the web. That includes performance improvements - because you don't need to wake up the js engine - but it's a lot more than that. Including much better dev-ex, which is not great as you can see in the OP.

        It would also enable combining different languages with high-level interfaces rather than having to drop down to c-style interfaces for everything.

        • By flohofwoe, 2026-03-08 18:00

          > Including much better dev-ex, which is not great as you can see in the OP.

          IMHO the developer experience should be provided by compiler toolchains like Emscripten or the Rust compiler, and by their (standard) libraries. E.g. keep the complexity out of the browser; the right place for binding-layer complexity is the toolchains, at compile time. The browser is already complex enough as it is and should be radically stripped down instead of throwing more stuff onto the pile.

          Web APIs are designed from the ground up for Javascript, and no amount of 'hidden magic' can change that. The component model just moves the binding shim to a place inside the browser where it isn't accessible, so it will be even harder to investigate and fix performance problems.

          • By bloppe, 2026-03-08 18:52

            The dev-ex issues largely occur at the boundaries between environments. In the browser, that's often a JS-Rust boundary or a JS-C++ boundary. On embedded runtimes, it could be a Go-Rust boundary, or a Zig-Python boundary. To bridge every possible X-Y boundary for N different environments, you need N^2 different glue systems.

            You're probably already thinking "obviously we just need a hub-and-spoke architecture where there's a common intermediate representation for all these types". That kind of architecture means that each environment only has to worry about conversions to and from the common representation, then you can connect any environment to any other environment, and you only need 2N glue systems instead of N^2. Effectively, you'd be formalizing the prior system of bespoke glue code generation into a standardized interface for interoperation.

            That's the component model.

            • By flohofwoe, 2026-03-09 10:18

              > "obviously we just need a hub-and-spoke architecture where there's a common intermediate representation for all these types"

              I'm perfectly happy with integers and floats as common interface types (native ABIs also only use integers and floats: pointers are integer-indices into memory, and struct offsets need to be implicitly known and compiled into the caller and callee).

              The WASM Component Model looks like a throwback to the 1990s when component object models were all the rage (COM, CORBA, and whatnot).

              • By bloppe, 2026-03-09 10:57

                > I'm perfectly happy with integers and floats as common interface types

                Most people at least want strings too. And once you add strings, you need to make sure the format is correct (JS uses UTF-16, C uses NULL-termination, etc). So even if you don't allow a complex object model, you would still need N^2 glue systems just for strings.

                Then you might as well add arrays too.

                Before you know it, you end up with the component model.

                • By flohofwoe, 2026-03-09 11:20

                  ...all way too high level for my taste, and those problems are not WASM specific yet have still been solved outside WASM via operating-system/platform ABI conventions which compilers and 'user code' have to adhere to without requiring a 'component model'.

                  Some operating systems might want their strings as UTF-8 encoded, some as UTF-16. It's the job of the caller to provide the strings in the right format before calling the OS function. In the end it's up to the caller and callee to agree on a format for string data. There is no 'middleman' or canonical standard format needed, just an agreement between a specific caller and callee.

                  The good and important part of such an agreement is that it is unopinionated. As long as caller and callee agree, it's totally fine to pass zero-terminated bytes, other callers and callees might find a pointer/size pair better. This sort of agreement also needs to happen when calling between native Rust and C code (or calling between any language for that matter). My C code might even prefer to receive string data as pointer/size pairs instead of zero-terminated bytes when all my string-processing code is built on top of strings as pointer/size pairs (e.g. apart from string literals there is not a single feature in the C language which dictates that strings are zero-terminated bags of bytes - it's mostly just a convention of the ancient C stdlib functions).

                  IMHO the WASM Component Model is solving a problem that just isn't all that relevant in practice. System ABIs / calling conventions don't need a 'component model' and so shouldn't WASM.

                  • By bloppe, 2026-03-09 12:44

                    Luckily, the CM exists as a well-decoupled layer on top of core modules, which are just basic numeric types. So people can pretty easily just ignore the whole thing if they don't like it.

                    But for the other 99% of devs who just want to exchange strings across various language boundaries without quadratic glue complexity, we have the CM.

                  • By swiftcoder 2026-03-11 17:11

                    > operating-system/platform ABI conventions which compilers and 'user code' has to adhere to without requiring a 'component model'

                    I don't think there's a huge distinction here - the component model is more or less just a (standardised) ABI convention.

          • By josephg 2026-03-08 19:00 (3 replies)

            How would a compiler toolchain ship a debugger for WebAssembly? It’s kind of impossible. The only place for a debugger is inside the browser. Just like we do now with dev tools for JavaScript, TypeScript, and WebAssembly languages.

            > The browser is already complex enough as it is and should be radically stripped down

            I’d love this too, but I think this ship has sailed. I think the web’s cardinal sin is trying to be a document platform and an application platform at the same time. If I could wave a magic wand, I’d split those two use cases back out. Documents shouldn’t need JavaScript. They definitely don’t need wasm. And applications probably shouldn’t have the URL bar and back and forward buttons. Navigation should be up to the developers themselves. If apps were invented today, they’d probably be done in pure wasm.

            > Web APIs are designed from the ground up for Javascript

            Web APIs are already almost all bridged into rust via websys. The APIs are more awkward than we’d like. But they all work today.

            • By azakai 2026-03-08 19:48

              > How would a compiler toolchain ship a debugger for webassembly?

              You can integrate external debuggers, like Uno documents here:

              https://platform.uno/docs/articles/debugging-wasm.html

              I assume that uses some browser extension, but I didn't look into the details.

              You can also use an extension to provide additional debugging capability in the browser:

              https://developer.chrome.com/docs/devtools/wasm

            • By flohofwoe 2026-03-09 0:22

              FWIW, it's possible to set up an IDE-like debugging environment with VSCode and a couple of plugins [1]. E.g. I can press F5 in VSCode, this starts the debuggee in Chrome and I can step-debug in VSCode exactly like debugging a native program, and it's even possible to seamlessly step into JS and back. And it actually starts a debug session faster than a native macOS UI program via lldb.

              [1] https://floooh.github.io/2023/11/11/emscripten-ide.html

            • By pjmlp 2026-03-09 8:21 (1 reply)

              Inside the browser hardly matters if it isn't maintained, Google has done almost nothing to their DWARF debugging tooling since it was introduced as beta a few years ago.

              • By flohofwoe 2026-03-09 10:22 (1 reply)

                > Google has done almost nothing to their DWARF debugging tooling since it was introduced as beta a few years ago

                WASM DWARF debugging works perfectly fine though?

                The 'debugger half' just moved from Chrome into a VSCode debug adapter extension where debugging is much more comfortable than in the browser:

                https://marketplace.visualstudio.com/items?itemName=ms-vscod...

                I use that all the time when working on web-platform specific code, with this extension WASM debugging in VSCode feels just like native debugging (it actually feels snappier on macOS than debugging a native macOS exe via LLDB).

                • By pjmlp 2026-03-09 10:23 (1 reply)

                  More like it mostly works.

                  • By flohofwoe 2026-03-09 10:44

                    ...'mostly works' describes pretty much every debugging experience outside Visual Studio. But I really can't complain, I've set up 'F5-debugging' in VSCode, pressing F5 starts a local web server, starts Chrome with the debuggee, and starts a remote debug session in VSCode and stops at the first breakpoint (and interestingly all this happens 'immediately', while debugging a native Cocoa macOS app via LLDB easily takes 5 to 10 seconds until the first breakpoint is reached).

                    TL;DR: It works just fine (early versions of the DWARF extension had problems catching 'early breakpoints', but that had been fixed towards the end of 2024).

      • By justinclift 2026-03-08 23:13

        > ... I really fail to see how integrating the WASM Component Model into the browser is a good thing.

        One of the common (mis-)understandings about WASM when it was released was that people could write web applications "in any language" that could output WASM (LLVM-based things, for example).

        That was clear over-selling of WASM, as in reality people still needed to additionally learn JS/TS to make things work.

        So for the many backend devs who completely abhor JS/TS (there are many), trying out WASM and then finding it was bullshit has not been positive.

        If WASM is made a first class browser citizen, and the requirement for JS/TS truly goes away, then I'd expect a lot of positive web application development to happen by those experienced devs who abhor JS/TS.

        That being said, that viewpoint is from prior to AI becoming reasonably capable. That may change the balance of things somewhat (tbd).

      • By throawayonthe 2026-03-08 19:22

        https://moq.dev/blog/to-wasm/ is relevant, I think

  • By flohofwoe 2026-03-08 11:57

    This reads like the same problems as using Emscripten's embind for automatically generating C++ <=> JS bindings, and my advice would be "just don't do it".

    It adds an incredible amount of complexity and bloat versus writing a proper hybrid C++/JS application where non-trivial work is happening in handwritten JS functions versus hopping across the JS/WASM boundary for every little setter/getter. It needs experience though to find just the right balance between what code should go on either side of the boundary.

    Alternatively tunnel through a properly designed C API instead of trying to map C++ or Rust types directly to JS (e.g. don't attempt to pass complex C++/Rust objects across the boundary, there's simply too little overlap between the C++/Rust and JS type systems).
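    A minimal Rust sketch of that idea (hypothetical names throughout): the host only ever sees an opaque handle plus a few flat functions, and the rich Rust type never crosses the boundary. A real Wasm export would also need `#[no_mangle]` (or `#[unsafe(no_mangle)]` on edition 2024); it is omitted here to keep the sketch self-contained.

```rust
// "Tunnel through a C API" sketch: an opaque pointer and flat
// extern "C" functions instead of mapping a rich object to JS.
struct Counter {
    n: u64,
}

// Create a Counter on the heap and hand back an opaque handle.
pub extern "C" fn counter_new() -> *mut Counter {
    Box::into_raw(Box::new(Counter { n: 0 }))
}

// Flat setter: bump the count behind the handle.
pub extern "C" fn counter_incr(c: *mut Counter) {
    unsafe { (*c).n += 1 }
}

// Flat getter: read the count behind the handle.
pub extern "C" fn counter_get(c: *const Counter) -> u64 {
    unsafe { (*c).n }
}

// Give ownership back to Rust so the Counter is dropped.
pub extern "C" fn counter_free(c: *mut Counter) {
    unsafe { drop(Box::from_raw(c)) }
}

fn main() {
    let c = counter_new();
    counter_incr(c);
    counter_incr(c);
    assert_eq!(counter_get(c), 2);
    counter_free(c);
}
```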

    The automatic bindings approach makes much more sense for a C API than for a native API of a language with a huge 'semantic surface' like C++ or Rust.

  • By Twey 2026-03-08 18:10 (2 replies)

    > FWIW I prefer `futures::lock::Mutex` on std, or `async_lock::Mutex` under no_std.

    Async mutexes in Rust have so many footguns that I've started to consider them a code smell. See for example the one the Oxide project ran into [1]. IME there are relatively few cases where it makes sense to want to await a mutex asynchronously, and approximately none where it makes sense to hold a mutex over a yield point, which is why a lot of people turn to async mutexes despite advice to the contrary [2]. They are essentially incompatible with structured concurrency, but Rust async in general really wants to be structured in order to be able to play nicely with the borrow checker.

    `shadow-rs` [3] bears mentioning as a prebuilt way to do some of the build info collection mentioned later in the post.

    [1]: https://rfd.shared.oxide.computer/rfd/0609
    [2]: https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#wh...
    [3]: https://docs.rs/shadow-rs/latest/shadow_rs/

    • By expede 2026-03-09 1:22 (1 reply)

      Author of the article here! I've actually come to agree with you since writing that article. I'm not a fan of mutexes in general and miss having things like TVars from my Haskell days. Just to shout out a deadlock-freedom project that I'm not involved in and haven't put in production, but would like to see more exploration in this direction: https://crates.io/crates/happylock

      • By Twey 2026-03-09 4:47

        Thanks for the article! As someone who writes a lot of Rust/JS/Wasm FFI it gave me some good food for thought :)

        Yes! Mutexes are much nicer in Rust than a lot of languages, but they're still much too low-level for most use-cases. Ironically Lindsey Kuper was an early contributor to the Rust project and IIRC at roughly the same time started talking about LVars [1]. But we still ended up with mutexes as the primary concurrency mechanism in Rust.

        [1]: https://dl.acm.org/doi/10.1145/2502323.2502326

    • By galangalalgol 2026-03-08 19:19 (2 replies)

      I try to avoid tokio in its entirety. There are some embedded use cases with embassy that make sense to me, but I have never needed to write something that benefited from more threads than I had cores to give it. I don't deny those use cases exist, I just don't run into them. I typically spend more time computing than on i/o but so many solid libraries have abandoned their non-async branches I still have to use it more often than I'd like. I get this is a bit of a whine, I could fork those branches if I cared that much. But complaining is easier.

      • By Twey 2026-03-09 0:08 (1 reply)

        I think the dream is executor-independence. You shouldn't really need to care what executor you or your library consumer is using, and the Rust auto traits are designed so that you can in theory be generic over it. There are a few speed bumps that still make that harder than it really should be though.

        I'm not sure what you mean by ‘more threads than I had cores’, though. Unless you tell it otherwise, Tokio will default to one thread per core on the machine.

        • By galangalalgol 2026-03-09 0:30 (1 reply)

          When you are compute-bound, threads are just better. Async shines when you are I/O-bound and need to wait on a lot of I/O concurrently. I'm usually compute-bound, and I've never needed to wait on more I/O connections than I could handle with threads. Typically all the output and input IP addresses are known in advance and in the Helm chart. And countable on one hand.
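          For the compute-bound case, plain scoped OS threads cover it with no async runtime at all -- a minimal sketch:

```rust
// Compute-bound work split across plain OS threads via std::thread::scope;
// no executor, no .await, just one thread per chunk of work.
fn main() {
    let data: Vec<u64> = (1..=1_000).collect();
    let (lo, hi) = data.split_at(data.len() / 2);

    // Scoped threads may borrow `data` directly from the parent stack.
    let total: u64 = std::thread::scope(|s| {
        let a = s.spawn(|| lo.iter().sum::<u64>());
        let b = s.spawn(|| hi.iter().sum::<u64>());
        a.join().unwrap() + b.join().unwrap()
    });

    assert_eq!(total, 500_500);
    println!("{total}");
}
```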

          • By Twey 2026-03-09 4:41

            Oh, right, sure. In Rust the async code and async executor are decoupled. So it's your _executor_ that decides how/whether tasks are mapped to threads and all that jazz.

            Meanwhile the async _code_ itself is just a new(ish), lower-level way of writing code that lets you peek under an abstraction. Traditional ‘blocking’ I/O tries to pretend that I/O is an active, sequential process like a normal function call, and then the OS is responsible for providing that abstraction by in fact pausing your whole process until the async event you're waiting on occurs. That's a pretty nice high-level abstraction in a lot of cases, but sometimes you want to take advantage of those extra cycles. Async code is a bit more powerful and ‘closer to the metal’ in that it exposes to your code which operations are going to result in your code being suspended, and so gives you an opportunity to do something else while you wait.

            Of course if you're not spending a lot of time doing I/O then the performance improvements probably aren't worth dropping the nice high-level abstraction — if you're barely doing I/O then it doesn't matter if it's not ‘really’ a function call! But even so async functions can provide a nice way of writing things that are kind of like function calls but might not return immediately. For example, request-response–style communication with other threads.
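            That request-response pattern can be sketched with plain std channels: each request carries its own reply channel, so the caller gets something shaped like a function call that may not return immediately.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Requests are (input, reply-channel) pairs.
    let (req_tx, req_rx) = mpsc::channel::<(u64, mpsc::Sender<u64>)>();

    // Worker thread: serves "calls" arriving over the channel.
    let worker = thread::spawn(move || {
        for (n, reply) in req_rx {
            reply.send(n * n).unwrap();
        }
    });

    // Caller side: send a request, then block on its dedicated reply channel.
    let (reply_tx, reply_rx) = mpsc::channel();
    req_tx.send((7, reply_tx)).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), 49);

    // Dropping the request sender ends the worker's loop.
    drop(req_tx);
    worker.join().unwrap();
}
```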

      • By IshKebab 2026-03-08 22:54

        I agree. Async makes sense for Embassy and WASM. I'm skeptical that it really ever makes sense for performance, even if it is technically faster in some extreme cases.

HackerNews