Intro
At the time, our frontend platform looked like a classic micro-frontend setup: multiple teams owned separate applications, but the host dashboard still had to be rebuilt and redeployed whenever one of those micro-frontends changed.
That coupling hurt most on small changes. A one-line visual fix in a single micro-frontend could still trigger a long host-app build, a long staging wait, and a rollback process that depended on rebuilding the whole shell again.
What the npm package approach gave us
Each micro-frontend was published as a private npm package. The main app consumed those packages, bundled them together, and shipped the result to the CDN as a single deployable dashboard.
That model kept ownership boundaries clear, but it pushed release control back into the host application. Even when only one team needed to ship, the whole top-level app became the bottleneck.
- Build time scaled with the number of micro-frontends in the shell.
- Small staging changes still required a host rebuild.
- Rollback meant changing package versions and rebuilding again.
- Static asset handling stayed awkward because package code did not know its final public asset URL.
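The coupling is visible in the host's own manifest: each micro-frontend arrived as a pinned dependency, so shipping any of them meant bumping a version here and rebuilding. An illustrative (not actual) excerpt:

```json
{
  "name": "dashboard-host",
  "dependencies": {
    "@messagebird/flowbuilder": "1.4.2",
    "@messagebird/developers": "2.0.1",
    "@messagebird/integrations": "3.7.0"
  }
}
```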
Why import maps won
We evaluated two runtime-integration directions: Webpack Module Federation and import maps. Module Federation was still early, tied us to Webpack, and did not behave reliably enough in our proof of concept at the time.
Import maps kept the runtime model simpler. Instead of embedding every micro-frontend into the host build, we could resolve named modules in the browser and point each name to a separately deployed bundle.
How the runtime model changed
Because browser support for import maps was incomplete in 2020, we used SystemJS to make the approach practical across modern browsers. Each micro-frontend pipeline started outputting SystemJS-compatible bundles and publishing them directly to the CDN.
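Concretely, that meant each pipeline's bundler emitted SystemJS modules with hashed filenames. A minimal sketch of the relevant settings, assuming a webpack 4.30+ setup (the filename pattern is illustrative):

```javascript
// Illustrative webpack output settings for a SystemJS-compatible bundle.
const config = {
  output: {
    // Emit a System.register module so the shell can System.import() it by name.
    libraryTarget: 'system',
    // The content hash in the filename keeps CDN caching safe across deploys.
    filename: 'messagebird-example.[contenthash].js',
  },
}

module.exports = config
```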
Alongside that, we introduced a separate repository for the import map itself. Its job was intentionally narrow: update a JSON file so the module name pointed to the new bundle filename, commit that change, and publish the updated map.
- Micro-frontend pipelines built and uploaded their own hashed bundles.
- A dedicated import-maps repository managed the runtime module map.
- Deployment and rollback became import-map updates instead of host-app rebuilds.
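Mechanically, a deploy in that repository is a pure JSON edit: point a stable module name at the freshly uploaded hashed bundle. A minimal sketch in TypeScript (the function name and file layout are illustrative, not the actual repo):

```typescript
interface ImportMap {
  imports: Record<string, string>
}

// Point a stable module name at a newly uploaded hashed bundle URL.
// Returns a new map so the caller can diff and commit the change.
function updateImportMap(
  map: ImportMap,
  moduleName: string,
  bundleUrl: string,
): ImportMap {
  return { imports: { ...map.imports, [moduleName]: bundleUrl } }
}

// The pipeline would read the committed JSON, apply the update and write it back:
// const map: ImportMap = JSON.parse(fs.readFileSync('mfes.json', 'utf8'))
// fs.writeFileSync('mfes.json', JSON.stringify(updateImportMap(map, name, url), null, 2))
```

Rollback is the same edit pointed at a previously uploaded hash, which is why it no longer requires any rebuild.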
How the host application loaded modules
The host HTML declared the import-map type for SystemJS and referenced the remote JSON file. At runtime, the shell could call a module by name instead of embedding all micro-frontend bundles into the shell bundle itself.
<meta name="importmap-type" content="systemjs-importmap">
<script type="systemjs-importmap" src="https://static.messagebird.com/import-maps/mfes.json"></script>

The import map itself stayed small and explicit. Stable module names pointed to hashed CDN URLs for the active bundle versions.
{
"imports": {
"@messagebird/flowbuilder": "//static.messagebird.com/mfes/@messagebird/flowbuilder/messagebird-flowbuilder.9f544594e16f089c026c.js",
"@messagebird/developers": "//static.messagebird.com/mfes/@messagebird/developers/messagebird-developers.2e56ce54b98984a4302f.js",
"@messagebird/integrations": "//static.messagebird.com/mfes/@messagebird/integrations/messagebird-integrations.a3b75369872348817097.js",
"@messagebird/dashboard-conversations": "//static.messagebird.com/mfes/@messagebird/dashboard-conversations/messagebird-conversations.f5db1861c49c7473ae7f.js"
}
}

The shell-side loading code also stayed narrow. The host resolved the requested module name through SystemJS and handled loading state and error reporting around that runtime import.
/** Resolve mFE in-browser module at runtime */
// React, SystemJS's global `System`, the `traceCounter` metrics helper and the
// `Application` type are imported elsewhere in the shell.
export function useMfeModule(
  mfeName?: string,
): [Application | null, boolean, Error | null] {
  const [isLoading, setIsLoading] = React.useState(false)
  const [error, setError] = React.useState<Error | null>(null)
  const [mfeModule, setMfeModule] = React.useState<Application | null>(null)

  React.useEffect(() => {
    if (!mfeName) {
      return
    }
    setIsLoading(true)
    // SystemJS resolves the name through the import map and fetches the bundle.
    System.import(mfeName)
      .then((appModule) => {
        setMfeModule(appModule)
        traceCounter('mfe_loading_success', { mfeName })
      })
      .catch((error) => {
        traceCounter('mfe_loading_error', { mfeName })
        console.error('failed to load mFE module', mfeName, error)
        setError(error)
      })
      .finally(() => setIsLoading(false))
  }, [mfeName])

  return [mfeModule, isLoading, error]
}

Operational results
The biggest improvement was delivery speed. A team could build and ship its own micro-frontend without waiting for the host dashboard pipeline. Staging updates that previously took many minutes dropped to seconds because the import-map change itself was tiny.
Rollback also got faster. Since older hashed bundles were still available in storage, reverting meant repointing the import map to the previous version instead of rebuilding the platform shell.
Caching and tradeoffs
The previous host-bundled model made JavaScript caching fragile because any micro-frontend change could invalidate the large composed bundle. With separate hashed bundles, browsers could keep reusing modules that had not changed.
The downside was duplication. Once micro-frontends were built independently, some dependencies appeared in more than one bundle. We extracted a few heavy shared packages such as React, but long shared-dependency lists can become their own maintenance burden.
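One way to extract a shared package, sketched under the assumption that bundles are built with webpack externals: the dependency is left out of each micro-frontend bundle and resolved through the import map at runtime instead.

```javascript
// Illustrative: keep React out of each micro-frontend bundle.
const config = {
  output: { libraryTarget: 'system' },
  // With SystemJS output, externals become runtime imports that the
  // import map must resolve to a shared CDN copy.
  externals: ['react', 'react-dom'],
}

module.exports = config
```

The import map then needs matching entries for "react" and "react-dom"; serving React in a SystemJS-consumable form (for example the UMD build via SystemJS's AMD extra) is a separate choice, and every extra shared name lengthens exactly the list the paragraph above warns about.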
Takeaway
Import maps gave us a cleaner release boundary between the shell and the product surfaces owned by individual teams. For our constraints at the time, that boundary mattered more than squeezing every last byte out of the combined asset size.
If the core deployment pain is that independent teams still depend on a single host rebuild, the architecture probably is not truly independent yet. Runtime composition can be worth it when it fixes that operational bottleneck directly.
