
Using WebWorkers to bring native app best practices to Javascript SPAs


Do you remember when jQuery exploded? Suddenly it seemed that for every feature you might want on your website, there was a jQuery plugin. All you had to do was include the script tag, and you were golden.

If you do much front-end development these days, you may have realized that the days of "just drop in a script tag" will soon be over, if they aren't already.

While jQuery brought about a revolution in Javascript usage, the front end still lacked organized code, dependency management, and the build tools required to ensure optimized web assets.

This transition, away from script tags and into full applications built with Javascript, has been confusing and difficult for developers not on the bleeding edge.

Enter Ember. Specifically, enter ember-cli. Ember-cli gives Ember a full asset pipeline built on Broccoli. Broccoli's power is easy to miss in its simplicity, and it changes what is expected of an asset pipeline.

Gone are the short-lived days of simple script concatenation with Grunt, Gulp, or the Rails asset pipeline: enter constant-time rebuilds with ES6 and ES7 transpilation. If this were all Broccoli and Ember-cli offered, it would be enough to draw many an experienced dev into the Ember ecosystem. But Broccoli actually enables Ember-cli to accomplish so much more.

If you haven't heard of Ember addons, you should take a peek. ember-observer and emberaddons both offer a convenient way to browse them. To use an Ember addon, you run ember install addon-name from the command line. That's it. You're done. You can now use that addon's features anywhere in your app, no additional work required. Gone are the days of wiring dependencies into your build system and figuring out how to expose them in your app. This is true plug-and-play.

Ember addons are what enable Ember to consistently build and offer the features we hope to see in the next generation of Javascript, nay programming, today.

ember-watson is attempting to enable self-maintaining code. liquid-fire brings a new paradigm and a new ease to animations and transitions. ember-cli-babel lets everyone write with ES6 and ES7, while ember-component-css lets you put the styles for your components in the same location as your component code.

Want to use a CSS pre-processor? AND have it work with your component CSS using ember-component-css? AND you want to use Stylus in your app but also utilize CSS from an addon that uses SASS? Ember addons can do this, and again, without any work beyond something as simple as ember install ember-cli-stylus.

Love that new "wormhole" or "tether" style of working with modals and popovers? Drop in ember-wormhole or ember-tether and you're done. Hopefully by now I'm beating a dead horse and you've gotten the point: Ember addons are the new, improved jQuery plugins.

Addons are also a breeding ground for new features for Ember as a framework. An early example of an addon likely to be absorbed into the core is ember-truth-helpers. Tonight, I want to introduce a new experiment; for the moment I'm calling it ember-lift.

Maybe you've used WebWorkers before, but odds are you're like most of the other web developers I know and have only tinkered with them for fun or nodded at their existence and moved on.

Maybe you've thought "this algorithm would work better threaded". That's the sort of demo/example/proof-of-concept code I usually see around. It doesn't seem as though many people have seen or embraced WebWorkers for their real potential: multi-threaded Javascript applications.

I don't mean using a little parallel processing, and I don't mean offloading a few heavy tasks to a worker; I mean a true multi-threaded application.
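For anyone who has only nodded at WebWorkers' existence, the mechanics are small: a worker is a separate script running on its own thread, and the two sides talk via postMessage. A minimal sketch (the file name and payload here are just illustrative):

// main thread: offload a computation and keep rendering frames in the meantime
var worker = new Worker('heavy-math-worker.js'); // hypothetical worker file

worker.onmessage = function(event) {
  console.log('result from the worker thread:', event.data);
};

worker.postMessage({ numbers: [1, 2, 3, 4, 5] });

// heavy-math-worker.js: runs on its own thread, with no DOM access
self.onmessage = function(event) {
  var sum = event.data.numbers.reduce(function(total, n) {
    return total + n;
  }, 0);
  self.postMessage(sum);
};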

I've been following Ember's evolution closely as another recent Ember addon was developed: ember-cli-fastboot. Fastboot is promising: it will enable Ember to ship prepared HTML to a browser and hook up a running Ember app instance to it later.

Fastboot's importance to me actually had very little to do with SEO or initial load times, the problems it was built to address. The curious (and critical) piece of Fastboot that fascinated me was being able to run a DOMless Ember. WebWorkers, as you may or may not know, don't have access to anything DOM related. Mull over those last two sentences, and you may see the Javascript future I'm about to argue for.

The barriers holding Javascript applications back from "Native" quality are shrinking.

  • WebViews are increasingly optimized and get lower-level access. Shells such as Electron, NW.js, and Cordova, utilizing WebKit, WKWebView, and ChromeView (via Crosswalk), continually offer deeper native integrations with higher-performing WebViews.

  • Rendering layers are gaining more access to the system's metal, whether through improved GPU acceleration, WebGL, or similar.

  • Projects mirroring the DOM onto canvas and WebGL are attempting to improve the rendering layer from the other end, and are compatible with the diffing approach and DOM layer used by Glimmer.

  • CSS is changing. The way we write CSS is changing. The way we use CSS is changing. CSS itself will eventually catch up, but until then the scope, creep, and rendering issues caused by untamed CSS are being curbed by projects that strongly scope CSS. The ideal is reusable, non-leaky, componentized CSS; we're nearly there, and it's a critical step in moving the Web toward a performance-rich future.

  • Sometimes less is a lot more: Ember addons are making occlusion and incremental rendering pluggable by anyone, bringing them into widespread use.

  • You may have heard this before, but highly optimized Javascript is only 17% slower than highly optimized C++. That number isn't because Javascript is fast, but because Javascript compilers and runtimes have been optimized to amazing levels. Using the type and lifespan knowledge revealed by ES6/ES7 syntax, those optimizations will improve, and we're likely to see the language's performance gap close significantly more.

  • The widespread use of specific minification and tooling libraries (such as Lodash and UglifyJS) is enabling Javascript compiler authors to write code-path-specific optimizations.

Basically, what's possible in an HTML5 App is changing rapidly. We are already, today, capable of building rich interactive near-native quality interfaces in Javascript. The combination of micro and macro optimizations above will only continue to make that story better.

But what do the above improvements have to do with Fastboot and WebWorkers? Only their shared goal: high performance applications. There's only one critical piece of the performance puzzle missing: threading. And threading is the macro optimization that stands to give us the most gain for the least effort in the near term.

To be clear, threading and performance aren't a 1:1 relationship. Utilizing a single thread efficiently and smartly can do much more for a program than using multiple threads; that's the power of Node's run loop, that's the power of Ember's run loop, and that's why non-blocking IO, async/await, and Promises are such powerful concepts and primitives.

But threading and performance are related.

The main problems with Javascript's current state arise when you attempt to build high-quality mobile applications in Javascript. You might look at me and say "you're crazy" for wanting to build a high-quality mobile app in Javascript, but I have my reasons, and the biggest of those is a pretty sweet blog post and tech talk I'll be producing at a later date revealing how maintainable and performant an all-platform, single-code-base application can be. All platforms means the web too, and to do web and mobile web in addition to native platforms you need performance-minded Javascript. Taking Javascript applications to the next level on iOS, Android, and Desktop requires a commitment to structuring your app with performance in mind.

Surprisingly, this isn't difficult with Ember, because Ember does a LOT of optimization for you, and efficient use of the run loop gets you most of the rest of the way. If you know a little about optimizing your HTML and CSS and finding choke points, you can build a quality all-platform app in Ember as a single dev in the same amount of time it would have taken you to build a quality single-platform app in Ember.

By quality, I mean that your app will be better than 90% of the other HTML5 applications out there. However, there will be a lot of times and areas in which your app won't feel native, and almost all of these are related not to the level of access your app has to the device, but to Javascript's single-threaded nature.

You see, the biggest differentiator between a native app and an HTML5 app is your screen transitions. When it comes time to judge your app's UX, it's that transition that defines the feel by which you are judged.

  • Does the transition respond to the position of your finger?
  • Is the transition smooth?
  • Does the next screen immediately render and display underneath the current one as you pull the current screen back?

What keeps Javascript applications from these kinds of transitions? Besides the fact that most Javascript frameworks don't currently enable you to pre-render the next route (this will change, soon), the answer is threading.

To make a transition, your app needs to load in new data for the next route, assemble DOM, render that DOM, set up the controllers/components/objects associated with that route, and perform any necessary computations to display complex properties. It needs to do all of this while it is animating the movement of the current screen, and the fade-in + movement of the new screen it is still assembling.

All your JS Apps Are Broke.

Broke. Not Broken.

Your app has one thread, and it's too busy to produce the native quality animations (of which it is capable).

With all this extra work interfering with rendering frames on our single thread, the animation, likely done via a combination of CSS and JS and utilizing GPU acceleration, becomes choppy or skips. Despite this, I proffer that the problem isn't Javascript, it's threading.

In native app development, a little bit of lag time while data is retrieved and processed is easily hidden by the transition animation. This works because data is retrieved and processed on non-ui threads.

In native app development, you can have multiple ui threads that pre-assemble screens for you and can keep screens on a stack to pull off / push in as necessary.

But wait! you say. The problem is threading, which means the problem IS Javascript.

Well....

well....

No.

You see, Javascript has had WebWorkers for a VERY long time; we just haven't been utilizing them to their full potential.

WebWorkers work in all evergreen browsers, they work in IE10+, or in other words: they work. Much like localStorage and WebSockets, which still seem to surprise developers with their compatibility matrix, WebWorkers are here, and it's time to use them.
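If you do need to guard against an older environment, detection is a one-liner. This is just a sketch (the worker file is hypothetical), and the fallback is whatever your app can tolerate:

if (typeof Worker !== 'undefined') {
  // workers are available: push the heavy lifting onto a background thread
  var backgroundWorker = new Worker('background-task.js'); // hypothetical worker file
} else {
  // no worker support: fall back to doing the work on the main thread
}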

Yes, Ember currently supports IE9+, but remember addons? If you need WebWorkers, you can install an addon for WebWorkers and lock your own support matrix to IE10+. With IE9's share being so low, that's feasible for most companies.

Ember's Fastboot work means that we can now run Ember instances within WebWorkers, unleashing a brightly threaded future.

Ideally, an application has four varieties of threads.

  • ui threads
  • data threads
  • services
  • helpers

With WebWorkers, DOMless Ember, and Javascript, the main thread becomes the ui-runtime, responsible for assembling the UI, handling user interaction, and doing most of the things we expect Ember to do for our interface.

Meanwhile, most of the heavy lifting occurs in workers. Our ember-data adapters, serializers, and data requests happen in one set of data-threads. Long running background services (such as location, various tracking, cleanup) happen in service-threads, expensive logic reaches out to helper-threads for parallelism, and finally, pre-rendering happens in ui-threads.
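A rough sketch of the data-thread idea, with made-up names standing in for whatever an addon would actually expose: the main thread posts a request, the worker does the adapter/serializer-style network work, and only the parsed payload crosses back.

// main thread: ask a data worker for a record instead of fetching it here
var dataWorker = new Worker('data-worker.js'); // hypothetical worker script

dataWorker.onmessage = function(event) {
  // event.data is the parsed payload, ready to be pushed into the store
  console.log('record payload', event.data);
};

dataWorker.postMessage({ type: 'findRecord', modelName: 'post', id: '1' });

// data-worker.js: the request and parsing happen off the main thread
self.onmessage = function(event) {
  var msg = event.data;
  if (msg.type === 'findRecord') {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/' + msg.modelName + 's/' + msg.id); // illustrative endpoint
    xhr.onload = function() {
      self.postMessage(JSON.parse(xhr.responseText));
    };
    xhr.send();
  }
};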

Let's talk about pre-rendering and ui-threads.

Instead of building a route's DOM when the user enters the route, with each screen load workers will break down the various possible routes a user can request from the current screen and, in top-to-bottom order, begin constructing String DOM for those routes on their own threads.

There's a lot of optimization (and probably a lot of developer choice) involved with deciding precisely what happens next. But via one mechanism or another, the main thread will request the payload for specific routes during downtime (or at the moment the route is needed).
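As a purely illustrative sketch of the downtime case (the worker script, the message shape, and the route list are all made up): once the current screen has rendered, the main thread could use the lull to ask a pre-render worker to start preparing the likely next routes.

var prerenderWorker = new Worker('prerender-worker.js'); // hypothetical worker script
var likelyRoutes = ['example/1', 'example/2'];           // routes reachable from this screen

// wait for the current render to settle, then spend the downtime pre-rendering
Ember.run.scheduleOnce('afterRender', function() {
  likelyRoutes.forEach(function(route) {
    prerenderWorker.postMessage({ type: 'prepare', route: route });
  });
});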

The current API I've sketched looks like this: your router becomes "stack aware", and is able to retrieve a documentFragment and a JS payload for rehydration when a route is requested. This portion of the process is entirely invisible to the developer. Currently, I'm planning on storing the stack in an iframe using the importNode and adoptNode APIs.
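importNode and adoptNode are standard DOM APIs; the iframe-as-stack part is just the plan above, so treat this as a sketch of how the pieces could fit together (the element ids and variables are illustrative):

// a hidden iframe acts as the stack's storage document
var stackFrame = document.getElementById('route-stack'); // hypothetical hidden iframe
var stackDocument = stackFrame.contentDocument;

// push: adoptNode moves an exited route's element into the iframe's document
// (exitedRouteElement stands in for the DOM of the route being left)
stackDocument.body.appendChild(stackDocument.adoptNode(exitedRouteElement));

// pop: importNode copies it back into the main document when the route is requested again
var restoredScreen = document.importNode(stackDocument.body.firstChild, true);
outlet.appendChild(restoredScreen); // 'outlet' is wherever the route's DOM belongs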

Routes that have previously been exited are always pushed onto the stack for later availability.

Routes that are signaled for pre-render have their DOM built by Ember running in a WebWorker. (WebWorkers can't yet transfer DOM objects or utilize the DOM, so the transfer back will be string based.) The String DOM and the associated JS payload for rehydration (via the Fastboot mechanisms) will be delivered to the stack, where a documentFragment is constructed from the string DOM and queued for use.
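On the main thread, turning that string DOM into a documentFragment is straightforward. A sketch, with the pre-render worker and the stack's queueing step standing in as made-up names:

// main thread: receive string DOM back from the pre-render worker
prerenderWorker.onmessage = function(event) {
  var range = document.createRange();
  var fragment = range.createContextualFragment(event.data.html); // string DOM in, documentFragment out

  // hand the fragment and the rehydration payload to the stack for later use
  stack.queue(event.data.route, fragment, event.data.payload); // 'stack.queue' is a made-up name
};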

Workers are injected similar to Services, with some workers (such as the stack) being available out of the box.

{
   stack: Ember.inject.worker('stack')
}

To prerender, you would supply the stack a URL.

this.stack.prepare('example/1');  

The stack introduces a new concept to routing: peek. Peeking at a route allows you to see the route's DOM, but does not hydrate the route or transition into it. This allows for transitions that appear to begin immediately as your finger swipes across the screen. As your finger passes an X% threshold, the transition would actually occur. (The transition, the animation, and where the threshold sits would all be done by you; the stack and peek would simply be primitives usable to compose these sorts of gesture-rich interfaces.)

To peek:

let nextScreen = this.stack.peek('example/1');  
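Building on that, a sketch of how peek might compose into a swipe gesture; previewLayer, currentScreen, and the 40% threshold are all illustrative, and the actual transition call is yours to wire up:

previewLayer.appendChild(nextScreen); // previewLayer: a hypothetical element sitting beneath the current screen

currentScreen.addEventListener('touchmove', function(event) {
  var progress = 1 - (event.touches[0].clientX / window.innerWidth);

  // drive the animation directly from the finger's position
  currentScreen.style.transform = 'translateX(' + (progress * -100) + '%)';

  if (progress > 0.4) {
    // past the threshold: hydrate the peeked route and perform the real transition
    // (how you trigger that transition is up to your app's routing code)
  }
});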

The power of offloading data, services, and helpers to workers is proven (many, many times over). The power and the actual mechanics of rendering this way, however, are experimental and subject to change. Largely this is a mental exercise, with limited JSPerf bins and some hasty work on a proof-of-concept addon constructed to support the feasibility of the approach.

For the pre-rendering to work, the Fastboot work will need to near completion, and Ember's internals will need to be updated to support loading up a route in a running app with JS and DOM provided from elsewhere. This will use the same mechanics as Fastboot, but done on a running app.

All this said, there is a bigger goal. If we can prove that threaded rendering works in JS, and via addon bring it to easy mass adoption, it puts increased pressure on browser vendors and the standards committee to enable DOM within workers, and to create transferableFragments as well as improve transferableObjects. At that point, the mechanics of this become dead simple: all routes are pre-rendered in workers and immediately available by request of the main thread, records, components, et al. included!