[Diagram: a web app flowing through a corporate proxy with constrained features]

I built a full web app that works on locked down networks, and here is what broke

December 18, 2025

I wanted to build a web app that anyone could open on a company laptop, behind a strict proxy, with the browser locked down by policy. The goal sounded simple: give people a reliable way to save, organize, and share links without installing anything. The reality was not simple. Every assumption I had about modern web apps met the hard edges of enterprise networks and fleet management.

This is a story about constraints, failures, and the boring solution that finally shipped. If you are designing for reliability first and convenience second, this will save you time. I also included a tiny demo repo that shows the patterns in minimal form.

The constraints

Before writing code I listed the rules that would probably apply in a locked down environment. Most of these came true in testing.

  • Single origin only: everything must come from the same domain and scheme. No subdomains, no third party CDNs.
  • No WebSockets in practice: many proxies intercept or drop upgrade requests. Even if they work, they fail often enough to hurt trust.
  • Strict CSP and network filters: cross origin requests, inline scripts, and exotic headers are blocked or rewritten. An example policy is sketched below.
  • SSL inspection and proxies: certificates get replaced, some ciphers differ, and unusual protocols can break.
  • Limited storage: aggressive policies can clear or block IndexedDB and sometimes Service Worker caches. Cookies may be short lived or blocked by partitioning.
  • Slow or intermittent connections: think high latency, timeouts, and periodic resets. Every retry path must be gentle.
  • No extensions or installers: users cannot change the machine. Everything must be pure web.
  • Minimal font and asset budgets: corporate caches may not have your fonts. Fallbacks must be good enough.

These constraints shaped every choice. Most developer defaults assume a friendly network and a permissive browser. Locked down environments assume the opposite.
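
To make the single origin and strict CSP rules concrete, this is roughly the policy shape I aim for. Treat it as a sketch rather than a copy of any production config: the helper just sets one header from the same Node server that serves the app, and the directive list should match whatever your app actually loads.

// csp.js (illustrative): a single origin Content-Security-Policy helper
function setStrictCsp(res) {
  res.setHeader('Content-Security-Policy', [
    "default-src 'self'",     // same origin only: no CDNs, no subdomains
    "script-src 'self'",      // no inline or third party scripts
    "style-src 'self'",
    "connect-src 'self'",     // fetch, SSE, and polling all stay on one origin
    "img-src 'self'",
    "font-src 'self'",        // system fonts are the fallback anyway
    "frame-ancestors 'none'",
  ].join('; '));
}

module.exports = { setStrictCsp };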

What broke first

I started with a standard Next.js stack and a simple starter API. Then the failures arrived.

  • WebSockets: connections upgraded sometimes, then stalled, then got cut mid stream. In some offices they did not upgrade at all. In others, an upgrade succeeded but relay proxies silently dropped frames. Presence indicators and live updates were unreliable.
  • Third party assets: any request to a CDN produced mixed results. Some proxies cached old versions, others blocked the host. Font files were the worst. Custom fonts disappeared or caused waterfalls that ended in timeouts.
  • Cross origin fetch: even with CORS configured correctly, a subset of machines rewrote headers, denied preflight, or cached error pages. Diagnostics were hard because errors looked like normal network flakes.
  • Service workers: on paper they worked. In practice fleet policies turned them off in some departments. Offline caching could not be counted on. When enabled, different machines had different cache lifetimes, which made updates unpredictable.
  • Cookie based sessions: partitioning and strict SameSite handling caused sign in to appear to work, then fail on the next navigation. Some machines had session cookies cleared aggressively.
  • HTTP streaming: some proxies buffered responses until the entire body arrived. Server Sent Events lost their benefits when a middlebox decided to flush only at the end.

After a week, the list grew. Every clever trick felt fragile when a proxy decided to be “helpful.”

The boring solution that shipped

At this point I stopped trying to be clever and started writing for the least friendly path that still felt modern. The boring solution has a few pillars.

  • One origin, one path: the app, assets, and API live under the same host and scheme. There is no CDN and no subdomain. All links are relative. This avoids most CSP conflicts, CORS issues, and upgrade oddities.
  • Progressive transport: the client tries streaming updates first, but happily falls back to long polling. Long polling always works, even through chatty proxies. Backoff is gentle and capped.
  • Minimal headers: requests avoid unusual headers and keep payloads small. The server does not rely on exotic cache control. Everything is simple and explicit.
  • No external fonts: the UI uses system fonts with careful spacing and weights. If a corporate cache strips your fonts, nothing breaks.
  • Static first: most views render static HTML and hydrate carefully. Lists avoid huge diffs. Dates use deterministic formatting. Client side state is kept small.
  • URL tokens over cookies: authenticated API calls pass a short lived bearer token via header or query (a sketch follows this list). If a policy erases cookies, the app still works after a clean page reload.
  • Optional Service Worker: the app works fine without it. If the browser enables it, caching helps; if not, nothing degrades.

None of this is exciting. All of it reduces the number of things that can break when a proxy is grumpy.
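
Here is what the URL tokens pillar looks like in practice, as a sketch rather than the app's real code. The token comes from wherever your sign in flow puts it; the /api/links path, sessionToken, and renderLinks names are illustrative and are not part of the demo repo below.

// apiFetch (illustrative): cookie free, single origin API calls
async function apiFetch(path, token, options = {}) {
  const res = await fetch(path, {
    ...options,
    // A relative path keeps the call on the single origin; the Authorization
    // header keeps working even if a policy clears or partitions cookies.
    headers: { ...(options.headers || {}), Authorization: 'Bearer ' + token },
  });
  if (!res.ok) throw new Error('API error ' + res.status);
  return res.json();
}

// usage (illustrative): apiFetch('/api/links', sessionToken).then(renderLinks);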

Tiny demo repo

I made a minimal server and client that demonstrate the fallback pattern. The client starts with Server Sent Events, then falls back to long polling if the network buffers or drops the stream. Everything runs on one origin.

repo: locked-down-web-demo

/server.js           # tiny Node server
/public/index.html   # single page client
/public/app.js       # progressive transport

Server code:

// server.js
const http = require('http');
const fs = require('fs');
const path = require('path');

const clients = new Set();

function sseHandler(req, res) {
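  // Write the stream headers immediately and keep the response open;
  // 'no-cache' discourages intermediaries from storing or delaying events.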
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
  });
  clients.add(res);
  res.write(`event: hello\ndata: ${JSON.stringify({ ok: true })}\n\n`);
  req.on('close', () => clients.delete(res));
}

function pollHandler(req, res) {
  // Respond quickly with the latest event. In a real app you would keep state.
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ ok: true, ts: Date.now() }));
}

const server = http.createServer((req, res) => {
  if (req.url === '/events') return sseHandler(req, res);
  if (req.url === '/poll') return pollHandler(req, res);
  if (req.url === '/' || req.url.startsWith('/public/')) {
    // serve static files from ./public; missing files fall through to 404
    const file = req.url === '/' ? 'index.html' : req.url.replace('/public/', '');
    const full = path.join(__dirname, 'public', file);
    try {
      const body = fs.readFileSync(full);
      const type = path.extname(full) === '.js' ? 'application/javascript' : 'text/html';
      res.writeHead(200, { 'Content-Type': type });
      return res.end(body);
    } catch (err) {
      // a missing or unreadable file should 404, not crash the server
      res.writeHead(404);
      return res.end('not found');
    }
  }
  res.writeHead(404);
  res.end('not found');
});

setInterval(() => {
  // broadcast a heartbeat event to SSE clients
  for (const res of clients) {
    res.write(`event: heartbeat\ndata: ${JSON.stringify({ ts: Date.now() })}\n\n`);
  }
}, 5000);

server.listen(3000, () => console.log('http://localhost:3000'));

Client code:

Client HTML (public/index.html):

<!doctype html>
<html>
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <title>Locked down demo</title>
  </head>
  <body>
    <h1>Progressive updates</h1>
    <pre id="log"></pre>
    <script src="/public/app.js"></script>
  </body>
</html>

Client JavaScript (public/app.js):

// public/app.js
(function () {
  const log = document.getElementById('log');
  function write(x) { log.textContent += `\n${x}`; }

  function trySSE() {
    try {
      const es = new EventSource('/events');
      es.onopen = () => write('SSE connected');
      es.addEventListener('hello', (e) => write('hello ' + e.data));
      es.addEventListener('heartbeat', (e) => write('beat ' + e.data));
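      // EventSource would normally retry on its own; closing it and dropping
      // to long polling on the first error is deliberate, so users are not
      // stuck behind a proxy that keeps killing the stream.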
      es.onerror = () => {
        write('SSE error, falling back');
        es.close();
        tryPoll();
      };
    } catch (e) {
      write('SSE failed early, falling back');
      tryPoll();
    }
  }

  function tryPoll() {
    write('Using long polling');
    const tick = () => {
      fetch('/poll')
        .then((r) => r.json())
        .then((j) => write('poll ' + JSON.stringify(j)))
        .catch(() => write('poll error'))
        .finally(() => setTimeout(tick, 5000));
    };
    tick();
  }

  trySSE();
})();

This demo keeps everything under one origin, uses simple headers, starts with streaming, and drops back to long polling when the network is unfriendly. It is intentionally plain. The goal is reliability, not style.
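
If you want to try it, the demo has no dependencies: run node server.js with any recent Node.js, open http://localhost:3000, and watch the log fill with heartbeat lines. Stopping and restarting the server mid stream is a quick way to see the client give up on SSE and switch to polling.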

How to test this without access to a corporate network

If you cannot reproduce enterprise constraints locally, you will ship optimistic code and only learn the truth from frustrated users.

Here are practical ways to simulate the most common failure modes:

  • High latency + packet loss: use Chrome Network throttling (DevTools → Network) and test cold loads plus first interaction.
  • Proxy buffering: run traffic through a local proxy tool (a debugging proxy) and watch whether streaming responses flush incrementally. A tiny buffering proxy is sketched after this list.
  • CSP strictness: set a strict CSP early and keep it boring. If your app relies on inline scripts or third-party assets, you’ll find out immediately.
  • Storage instability: test with storage cleared on every reload and confirm the app still works (login flows, state recovery, and error messages).
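
For the proxy buffering case, a few lines of Node are enough to fake the worst behavior. The sketch below is a rough stand-in for a middlebox, not a faithful simulation; it assumes the demo server from this post is running on port 3000.

// buffering-proxy.js (illustrative): forwards everything to localhost:3000 but
// holds each response body until the upstream finishes, which is roughly what
// a "helpful" proxy does to SSE and other streaming responses.
const http = require('http');

http.createServer((clientReq, clientRes) => {
  const upstream = http.request({
    host: 'localhost',
    port: 3000,
    path: clientReq.url,
    method: clientReq.method,
    headers: clientReq.headers,
  }, (upstreamRes) => {
    clientRes.writeHead(upstreamRes.statusCode, {
      'Content-Type': upstreamRes.headers['content-type'] || 'text/plain',
    });
    const chunks = [];
    upstreamRes.on('data', (chunk) => chunks.push(chunk));
    // Nothing reaches the browser until the upstream response ends, so an
    // open SSE stream never delivers a single event through this proxy.
    upstreamRes.on('end', () => clientRes.end(Buffer.concat(chunks)));
  });
  clientReq.pipe(upstream);
}).listen(3001, () => console.log('buffering proxy on http://localhost:3001'));

Browse to http://localhost:3001 while the demo server is running: pages and /poll still work, but the SSE stream delivers nothing, which is exactly the kind of silent failure that an error handler alone will not catch.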

The goal is not perfect simulation. The goal is to discover whether your “fallback path” is real.

What I would do earlier next time

  • Write for one origin from day one. Treat third party assets as optional sugar.
  • Start with long polling as the baseline. Add streaming only if it improves user experience without hurting reliability.
  • Keep CSP strict and simple. Avoid inline scripts. Avoid confusing header configs.
  • Assume storage can be cleared. Design sign in flows that tolerate cookie or cache loss.
  • Collect network traces early. Test with a proxy that buffers and rewrites. Make this part of CI.
  • Instrument fallbacks. Log how often you drop to polling, how long requests take, and what errors occur. A sketch follows this list.
  • Be boring on the UI layer. System fonts, simple spacing, predictable layout. No asset surprises.
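
On the instrumentation point, the sketch below is the sort of thing I mean. It assumes a same origin /metrics endpoint that accepts small JSON payloads; that endpoint and the event names are illustrative, not something the demo repo provides.

// reportTransportEvent (illustrative): minimal fallback telemetry on one origin
function reportTransportEvent(name, detail) {
  const body = JSON.stringify({ name, detail, ts: Date.now() });
  // sendBeacon survives navigations and never blocks the UI; fall back to a
  // keepalive fetch when it is missing, and swallow errors so reporting can
  // never break the app it is watching.
  if (navigator.sendBeacon && navigator.sendBeacon('/metrics', body)) return;
  fetch('/metrics', { method: 'POST', body, keepalive: true }).catch(() => {});
}

// usage (illustrative): reportTransportEvent('sse_fallback', { reason: 'error' });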

Final thought

Shipping beats being clever. In locked down networks the path that works is the path that users remember. If your app opens consistently, saves correctly, and keeps working after a reload, people will keep using it. That is the real win.
