
A short note on Browser Functionality

Welcome everyone! Recently I was working on a wasm application and realised how exciting and useful it is to know how the browser works under the hood. So let's see how it works, and hopefully we will be able to improve and optimize the web apps we build in the future.

The browser runs parsing, layout, and scripting together to paint the pixels on the screen; this is the browser processing pipeline.

A modern browser is a platform specifically designed for fast, efficient, and secure delivery of web applications. In fact, under the hood, a modern browser is an entire operating system with hundreds of components: process management, security sandboxes, layers of optimization caches, JavaScript VMs, graphics rendering and GPU pipelines, storage, sensors, audio and video, networking, and much more.

When designing a web application, we don’t have to worry about the individual TCP or UDP sockets; the browser manages that for us. Further, the network stack takes care of imposing the right connection limits, formatting our requests, sandboxing individual applications from one another, dealing with proxies, caching, and much more.

WebSocket:

WebSocket enables bidirectional, message-oriented streaming of text and binary data between client and server. It is the most versatile and flexible API, and the closest thing to a raw network socket, available in the browser. We simply pass a WebSocket URL to the constructor and set up a few JavaScript callbacks, and we are up and running; the rest is handled by the browser.
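To make this concrete, here is a minimal sketch of the WebSocket API; the endpoint URL is a placeholder, and a real application would also handle reconnects and errors more carefully.

```javascript
// Minimal WebSocket sketch; wss://example.com/updates is a placeholder endpoint.
const socket = new WebSocket('wss://example.com/updates');

socket.onopen = () => {
  // The connection is established; we can now send text or binary data.
  socket.send('Hello server!');
};

socket.onmessage = (event) => {
  // The browser delivers each message as a complete, framed payload.
  console.log('Received:', event.data);
};

socket.onerror = (err) => console.error('WebSocket error:', err);
socket.onclose = () => console.log('Connection closed');
```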

WebRTC:

It is basically a bag of standards, protocols, and JavaScript APIs, the combination of which enables peer-to-peer audio, video, and data sharing between browsers (peers). Instead of relying on third-party plug-ins, WebRTC turns real-time communication into a standard feature that any web application can leverage via a simple JavaScript API.
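As a rough illustration, a peer-to-peer data channel can be set up with just a few calls; the signaling step (exchanging offers, answers, and ICE candidates between the peers) is application-specific and only hinted at here, and sendToSignalingServer is a hypothetical helper.

```javascript
// Sketch of a peer-to-peer data channel; signaling is left to the application.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});

// Create a data channel for peer-to-peer messaging.
const channel = pc.createDataChannel('chat');
channel.onopen = () => channel.send('Hello peer!');
channel.onmessage = (event) => console.log('Peer says:', event.data);

// Produce an SDP offer that must be delivered to the remote peer
// over a signaling channel of our choosing (hypothetical helper below).
pc.createOffer()
  .then((offer) => pc.setLocalDescription(offer))
  .then(() => sendToSignalingServer(pc.localDescription));
```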

Different browsers implement different logic for when, and in which order, individual resource requests are dispatched. As a result, the performance of the same application can vary from browser to browser.

In any complex system, a large part of the performance optimization process is the untangling of the interactions between the many distinct and separate layers of the system, each with its own set of constraints and limitations.

There are three performance pillars: computing, rendering, and networking. The rendering and scripting steps follow a single-threaded, interleaved model of execution, so it is not possible to perform concurrent modifications of the resulting Document Object Model (DOM). Hence, optimizing how the rendering and script execution runtimes work together is of critical importance.
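One practical consequence: a long synchronous loop in JavaScript blocks layout and paint until it finishes. A common workaround, sketched below, is to split the work into chunks and yield back to the browser between them so rendering can interleave.

```javascript
// Process a large array in chunks so the single render/script thread
// is periodically freed up for layout and paint.
function processInChunks(items, processItem, chunkSize = 500) {
  let index = 0;
  function doChunk() {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      processItem(items[index]);
    }
    if (index < items.length) {
      // Yield to the event loop; the browser can render before the next chunk.
      requestAnimationFrame(doChunk);
    }
  }
  doChunk();
}

// Example: process 100,000 items without freezing the page.
processInChunks(Array.from({ length: 100000 }, (_, i) => i), (n) => n * n);
```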

So now let's talk about browser optimization. A modern browser is much more than a simple network socket manager, and browsers are getting smarter every day: pre-resolving likely DNS lookups, pre-connecting to likely destinations, pre-fetching and prioritizing critical resources on the page, and so on.

The core optimizations fall into two broad classes:

  1. Document-aware optimization: The networking stack is integrated with the document, CSS, and JavaScript parsing pipelines to help identify and prioritize critical network assets, dispatch them early, and get the page to an interactive state as soon as possible. This is often done via resource priority assignments, lookahead parsing, and similar techniques.

  2. Speculative optimization: The browser may learn our navigation patterns over time and perform speculative optimizations in an attempt to predict the likely actions by pre-resolving DNS names, pre-connecting to likely hostnames, and so on.

It feels important to understand how and why these optimizations work under the hood, because we can assist the browser and help it do an even better job at accelerating our applications. There are four techniques employed by most browsers:

  1. Resource pre-fetching and prioritization: Document, CSS, and JavaScript parsers may communicate extra information to the network stack to indicate the relative priority of each resource: blocking resources required for first rendering are given high priority, while low-priority requests may be temporarily held back in a queue.
  2. DNS pre-resolve: Likely hostnames are pre-resolved ahead of time to avoid DNS latency on a future HTTP request. A pre-resolve may be triggered through learned navigation history, a user action such as hovering over a link, or other signals on the page.
  3. TCP pre-connect: Following a DNS resolution, the browser may speculatively open the TCP connection in anticipation of an HTTP request. If it guesses right, it can eliminate another full roundtrip (TCP handshake) of network latency.
  4. Page pre-rendering: Some browsers allow us to hint the likely next destination and can pre-render the entire page in a hidden tab, such that it can be instantly swapped in when the user initiates the navigation (see the sketch after this list).
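We can feed these mechanisms explicit hints ourselves. The sketch below adds the standard resource-hint link elements from JavaScript; the declarative form in the document head is equivalent, the hostnames are placeholders, and support (especially for prerender) varies across browsers.

```javascript
// Add speculative resource hints programmatically; hostnames are placeholders.
function addHint(rel, href) {
  const link = document.createElement('link');
  link.rel = rel;
  link.href = href;
  document.head.appendChild(link);
}

addHint('dns-prefetch', '//cdn.example.com');      // pre-resolve a likely hostname
addHint('preconnect', 'https://api.example.com');  // DNS + TCP (and TLS) ahead of time
addHint('prefetch', '/data/next-page.json');       // low-priority fetch of a likely resource
addHint('prerender', 'https://example.com/next');  // hint to pre-render the likely next page
```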

We can assist the browser in these optimizations by taking care of the following:

  1. Critical resources such as CSS and JavaScript should be discoverable as early as possible in the document.
  2. CSS should be delivered as early as possible to unblock rendering and JavaScript execution.
  3. Noncritical JavaScript should be deferred to avoid blocking DOM and CSSOM construction (see the sketch after this list).
  4. The HTML document is parsed incrementally by the parser; hence the document should be periodically flushed for best performance.
  5. Reduce DNS lookups: Every hostname resolution requires a network roundtrip, imposing latency on the request and blocking the request while the lookup is in progress.
  6. Reuse TCP connections: Leverage connection keep-alive whenever possible to eliminate the TCP handshake and slow-start latency overhead.
  7. Minimize the number of HTTP redirects: HTTP redirects can be extremely costly, especially when they redirect the client to a different hostname, which results in additional DNS lookup, TCP handshake latency, and so on. The optimal number of redirects is zero.
  8. Use a Content Delivery Network (CDN): Locating the data geographically closer to the client can significantly reduce the network latency of every TCP connection and improve throughput.
  9. Eliminate unnecessary resources: No request is faster than a request not made. xD!
  10. Cache resources on the client: Application resources should be cached to avoid re-requesting the same bytes each time they are required. (Like we have on this website.)
  11. Compress assets during transfer: Application resources should be transferred with the minimum number of bytes; we should always apply the best compression method for each transferred asset.
  12. Eliminate unnecessary request bytes: Reducing the transferred HTTP header data (e.g., HTTP cookies) can save entire roundtrips of network latency.
  13. Parallelize request and response processing: Request and response queuing latency, both on the client and the server, often goes unnoticed, but it contributes significant and unnecessary latency delays.
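For point 3, here is one minimal way to defer a noncritical script so it does not block DOM and CSSOM construction; '/analytics.js' is a placeholder for any noncritical script, and the declarative equivalents are the defer and async attributes on a script tag.

```javascript
// Inject a noncritical script only after the page has fully loaded,
// so it cannot block initial parsing, layout, or paint.
window.addEventListener('load', () => {
  const script = document.createElement('script');
  script.src = '/analytics.js'; // placeholder for any noncritical script
  script.async = true;
  document.body.appendChild(script);
});
```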

Google's PageSpeed Optimization Libraries (PSOL) provide an open source implementation of over 40 "web optimization filters," which can be integrated into any server runtime and applied dynamically to any application. Backed by the PSOL libraries under the hood, the mod_pagespeed (Apache) and ngx_pagespeed (Nginx) modules can both dynamically rewrite and optimize each delivered asset based on the specified optimization filters, e.g., resource inlining, minification, concatenation, asset sharding, and many others. Each optimization is applied dynamically (and cached) at request time.

Thanks for taking the time to read and for making it all the way to the end!
