Most of us joined the business of website optimisation because we love making that subtle yet impactful change and seeing the performance skyrocket.
I am still surprised by how a seemingly small adjustment can change the way people interact with a website and dramatically improve its performance.
To implement these changes, most A/B testing platforms provide a WYSIWYG website editor, which is fine for simple edits; but attempt anything beyond the simplest of changes and you’ll quickly realise that making client-side changes this way limits what you are capable of doing.
In some circumstances this results in unstable, unreliable experiment builds that don’t play nicely across devices and browsers.
So, what is the way forward when you want to experiment beyond the rudimentary principles of optimisation?
Fortunately, there is a way to extend your testing capabilities beyond WYSIWYG.
Onboarding an experienced web developer who can code your experiences directly into your A/B platform will maximise the power and complexity of what you can test on a website. But with great power comes great responsibility (as Spider-Man’s uncle once said).
More complicated tests require more complex logic and code, and as the lines of code pile up you run the risk of hindering the performance of your test (and, for that matter, the performance of the website you are testing on).
This also greatly increases the likelihood of triggering the infamous ugly sister of A/B testing, the dreaded content flash/flicker effect – that noticeable delay of an experience loading after the website itself has loaded.
All of this aside, we work in the industry of optimisation, so none of this is insurmountable.
Let’s take a look at how the optimisation process itself can be optimised when it comes to building A/B tests, a crucial process to get to grips with, especially when your team starts to grow.
Usually there are several people responsible for taking a proposed test idea through its lifecycle to becoming a successful A/B test. At the development stage the experience may be looked at by a colleague developer or the team leader for internal sign-off.
When the experience is built and has passed the QA stage it will most likely be checked by your client’s development team.
Both internal and external code checks are performed by someone who is looking at the code for the first time, so the code should be easy to read and understand.
The main steps to achieve this are proper code indentation, logical variable naming, and concise, descriptive comments.
Even if your developer doesn’t care how their code looks, chances are that whoever signs it off does.
The easiest way to get code properly indented is to use any of the common code editors, such as Atom, Sublime Text or VS Code, and install one of the available JavaScript beautifier extensions. Pasting code into the editor and running the beautifier will indent it automatically and make it much easier to read.
Variables should have logical names that sum up their purpose. For example, if you use variables for text contained in <ul><li> elements, name them li1Text, li2Text or li1_text, li2_text. Don’t use overly descriptive names either (e.g. textForTheLineElement_2), as names like this can make the code even harder to read.
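For instance (an illustrative sketch; the ul.benefits selector is hypothetical):

```javascript
// Good: short names that state what each variable holds.
let li1Text = $('ul.benefits li').eq(0).text(); // 'ul.benefits' is a hypothetical selector
let li2Text = $('ul.benefits li').eq(1).text();

// Too cryptic:         let t1 = $('ul.benefits li').eq(0).text();
// Overly descriptive:  let textForTheLineElement_2 = $('ul.benefits li').eq(1).text();
```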
Always add concise and descriptive comments to the code. This will be helpful to anyone who is trying to understand what the code is doing.
This includes the original developer, who may know the code inside out now but, when revisiting it after a period of time, will be thankful for reminders of what does what.
A good example of this is when a winning test is rolled out on a 95% (experience) / 5% (control) split. This is fairly common practice: it allows for a final controlled check of the experience’s performance over an extended period of time, and acts as a temporary way to gain the benefits of that experience while it is being integrated into the core code of the website for a permanent rollout.
It could take weeks or months for this cycle to complete, so helpful comments will allow a developer to get back up to speed about a specific experience quickly.
Having good “housekeeping” rules is useful in two ways:
First, create clear naming conventions and incorporate the information in the test name itself:
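For example, a name along these lines (the exact fields here are illustrative, not prescriptive): “AB-042 | Homepage | Sticky add-to-basket CTA | v2”, combining a ticket or test ID, the page or template under test, a short description and the variation.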
A clear convention like this will allow everyone involved in your testing program (e.g. developers or analysts) to identify and easily find the experience in question.
Secondly, make sure that obsolete experiences are archived (if they may be useful later) or deleted if not needed at all. This helps with the point above and also lowers the overall testing platform file size, which is essential, as larger files lead to noticeable delays and content flashes.
Some A/B testing platforms separate live experiences from draft, paused and archived ones, so that the latter do not contribute to the file size in the live environment; most platforms, however, do not, so making sure there are no unused experiences is essential.
When it comes to performance, most developers will likely assume the code they’ve created is efficient, but there is always room for improvement.
Efficient code is key to providing a good user experience. The ever-improving bandwidth and speed of household internet connections mean that user expectations have increased accordingly.
In fact, Google believe that for a smooth experience the app must respond within the first 100ms of the user’s action [1]. If the user receives a response within 100ms, the result feels immediate. If the browser takes about 300ms (i.e. roughly a third of a second) to respond, the delay is subtle but noticeable.
Anything beyond 1000ms and your user may lose focus and become frustrated. Therefore, to get a successful A/B test out into the world, optimising your code to be as efficient as possible is a primary concern.
Whether experience code is written in vanilla JS or using a library like jQuery, caching the selectors is the easiest way to cut down on the workload your code puts on the browser. Imagine the following example:
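(A minimal sketch of that example follows; the markup is assumed: .theDiv is an absolutely positioned container holding the two inner divs described below.)

```javascript
// Unoptimised version: three separate DOM queries per mousemove event.
let count = 0;

function updateDiv(e) {
  count++;
  // Each $('.theDiv') call below re-queries the DOM for the same element.

  $('.theDiv').css('left', e.clientX);
  $('.theDiv').css('top', e.clientY);

  $('.theDiv').html(
    '<div class="count">Count: ' + count + '</div>' +
    '<div class="clientPos">X: ' + e.clientX + ' Y: ' + e.clientY + '</div>'
  );
}

$(window).on('mousemove', updateDiv);
```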
You can find the above code example on codepen.io. The updateDiv function is triggered any time the mouse cursor is moved anywhere in the main browser window.
The <div class="count"> container shows how many times the updateDiv function has been called so far, and the <div class="clientPos"> displays the mouse cursor’s X and Y positions relative to the top left corner of the window.
There are three major issues in this code example. First, the jQuery .css function is executed twice on the same element. Most jQuery methods allow for method chaining, which reuses the same initial selection without the need to re-instantiate the jQuery object and re-query the DOM.
There are also a few jQuery methods that allow you to set several attributes or properties at once. The .css method is one of them, so lines 8 and 9 can be joined into a single line: $('.theDiv').css({'left': e.clientX, 'top': e.clientY});
As mentioned before, most jQuery methods allow for method chaining, so we can chain the .html method used on line 11 to the .css method. We get a single line: $('.theDiv').css({'left': e.clientX, 'top': e.clientY}).html('<div class="count">…');
The above two steps optimise the code significantly. If you look at the code example on codepen.io, you’ll notice that the Count: x increases very quickly as you move the mouse around the window.
This means that with the original code, every time the user moved their mouse, jQuery queried the DOM three times to perform three actions on the same element. After the above two improvements the DOM is queried only once per updateDiv call.
The next step, which further improves code efficiency, is the most important one in this section: caching the selector. We query the DOM only once per updateDiv function call, but that is still hundreds of times as the user moves their mouse. We can cache the DOM element outside of the mousemove function.
This way we will only have to query the DOM once! Putting these steps together, the final optimised version of the example looks something like this (again a sketch, using the same assumed markup):
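```javascript
// Optimised version: the element is queried once, outside the handler.
let count = 0;
let $theDiv = $('.theDiv'); // cached jQuery object

function updateDiv(e) {
  count++;
  // One chained call on the cached object: no further DOM queries needed.
  $theDiv
    .css({ 'left': e.clientX, 'top': e.clientY })
    .html(
      '<div class="count">Count: ' + count + '</div>' +
      '<div class="clientPos">X: ' + e.clientX + ' Y: ' + e.clientY + '</div>'
    );
}

$(window).on('mousemove', updateDiv);
```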
The updated code example on codepen.io.
We create a variable, let $theDiv, outside of the mousemove callback and assign the jQuery object wrapping the '.theDiv' element to it. Now we never need to traverse the DOM to find this element again.
These code efficiency techniques are not specific to jQuery; the same principles apply to other libraries like Dojo and to vanilla JavaScript as well. For instance, the cached version of the example above could be sketched in plain JavaScript like this:
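```javascript
// The same cached-element technique in plain JavaScript.
let count = 0;
const theDiv = document.querySelector('.theDiv'); // cached once, outside the handler

window.addEventListener('mousemove', function (e) {
  count++;
  theDiv.style.left = e.clientX + 'px'; // plain JS needs explicit units
  theDiv.style.top = e.clientY + 'px';
  theDiv.innerHTML =
    '<div class="count">Count: ' + count + '</div>' +
    '<div class="clientPos">X: ' + e.clientX + ' Y: ' + e.clientY + '</div>';
});
```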
A note of caution, however, regarding the extensive use of AJAX on most websites. Although AJAX improves UX by allowing content to update in the background without a page refresh, it also means that if you cache an element and it is later replaced, the cached reference will no longer point to the element in the DOM.
At this point you would need to re-cache the element. On most eCommerce websites, for example, the mini bag (usually visible in the main menu) is updated via an AJAX request every time the user adds a new item to their bag. In such cases, instead of repeatedly re-caching the DOM element, find and cache the outermost element that is not replaced by the AJAX call.
For example, if you need to take the innerText value of a <span></span> element inside the mini bag, do not cache the span; rather, cache the outermost container that remains in the DOM after the mini bag refreshes. Then call .find('span') (in jQuery) or .getElementsByTagName('span') (in vanilla JS) on the cached element to get to the span, as sketched below.
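A minimal sketch, assuming a hypothetical .mini-bag-container wrapper that survives the AJAX refresh while its contents are replaced:

```javascript
// Cache the stable outer container once; its inner markup may be replaced by AJAX.
const $miniBagContainer = $('.mini-bag-container'); // hypothetical stable wrapper

function getBagCount() {
  // Re-find the span each time, as the previous one may have been replaced.
  return $miniBagContainer.find('span').text();
}
```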
A few notes on element caching specific to jQuery. If your code calls $(document) or $(window) several times you should cache these as well. This will save the browser having to recreate the jQuery object holding the document or the window objects.
On the same note, in event handler callbacks (e.g. .click(callback), .hover(callback) etc.), if $(this) is used more than once to get to the element on which the event was triggered, cache $(this) in a variable as well. This saves the browser re-instantiating a new jQuery object each time $(this) is called. An example of this can be found below and on codepen.io.
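A short sketch (the class names are hypothetical):

```javascript
// Cache $(document) and $(window) once instead of recreating the wrappers.
let $document = $(document);
let $window = $(window);

$('.product-tile').on('click', function () {
  // $(this) is needed more than once in this callback, so cache it.
  let $this = $(this);
  $this.addClass('is-selected');
  $this.find('.quick-view').show();
});
```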
A common approach to watching for DOM changes is polling, using setInterval or recursive functions calling setTimeout with a short delay. Neither is the optimal monitoring strategy.
If you have a very good reason to use one of these two methods, you should at the very least use the element caching technique described in the section above and cache all DOM elements that can be cached.
One of the methods we recommend for monitoring DOM changes is the MutationObserver API [2]. It is supported in all modern browsers and even IE11.
However, careful usage is advisable. Monitoring for changes in a DOM element and then modifying that same element in the MutationObserver callback is akin to sawing off the branch you’re sitting on. It results in an infinite loop where you react to a change in the DOM tree by making a change to the DOM tree which, yes, changes the DOM again.
The MutationObserver provides a .disconnect method which temporarily stops the MutationObserver instance from receiving notifications of DOM mutations. This allows you to make the required changes, after which you can call the .observe method on the same MutationObserver instance again, as sketched below.
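A sketch of this pattern (the observed element and observer options are assumptions):

```javascript
// Watch a container for changes, but pause the observer while making
// our own modifications so they do not re-trigger the callback.
const target = document.querySelector('.mini-bag'); // hypothetical element
const config = { childList: true, subtree: true };

const observer = new MutationObserver(function () {
  observer.disconnect();            // stop receiving notifications

  // ...apply the experience's own DOM changes here...

  observer.observe(target, config); // resume observing
});

observer.observe(target, config);
```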
Another good way to achieve the same goal, in cases where the DOM is modified after a jQuery AJAX call, is to attach the .ajaxComplete [3] event listener to the $(document) object. This will fire your callback every time a jQuery-initiated AJAX request completes, and it provides information about the request in the callback arguments.
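For example (the URL check is a hypothetical illustration):

```javascript
// Fires after every jQuery-initiated AJAX request completes.
$(document).ajaxComplete(function (event, xhr, settings) {
  // React only to the request we care about, e.g. an add-to-bag call.
  if (settings.url.indexOf('/cart/add') !== -1) { // hypothetical endpoint
    // Re-apply the experience's changes / re-cache replaced elements here.
  }
});
```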
It is inevitable that you will need to perform a task that takes rather longer than you’d ideally like. An example may be adding a product to the basket or fetching some product information upon user action via an AJAX request.
Often these requests take a second or more to complete, and the user is left wondering what is happening, probably thinking something along the lines of “Hmmm… Did I do this correctly?” or “OK. I wonder if I clicked it, or should I click it again?”. To avoid this sort of thought process, and a potentially negative impact on the test, providing feedback is essential.
As previously discussed, if a user does not receive a reaction within the first 100ms-300ms, the delay becomes noticeable. If, upon a user’s input, you are required to perform an action that takes 300ms or more, you can improve the user’s experience and keep them focussed on the task by providing progress indication.
Solutions may include dimming the CTA, or even the CTA’s entire parent container, and adding a “loading” spinner gif to its centre, as sketched below. This will assure the user of two things: that their action has been registered, and that the site is working on their request.
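A sketch of such a solution, assuming a hypothetical add-to-bag CTA, wrapper, spinner asset and endpoint:

```javascript
$('.add-to-bag').on('click', function () {
  // Dim the CTA's parent container and show a spinner while the request runs.
  let $container = $(this).closest('.product-actions'); // hypothetical wrapper
  $container
    .css('opacity', 0.5)
    .append('<img class="loading-spinner" src="/images/spinner.gif">'); // hypothetical asset

  $.post('/cart/add', { sku: $(this).data('sku') }) // hypothetical endpoint
    .always(function () {
      // Restore the UI whether the request succeeded or failed.
      $container.css('opacity', 1).find('.loading-spinner').remove();
    });
});
```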
Bear in mind that even if the AJAX requests complete very quickly while the experience is in development, this may not be the case when the experience goes live. Often when we are developing tests we are lucky enough to be working on super-fast 100-1000 megabit internet connections.
A request that takes 400ms while developing the experience may take 1500ms (1.5 seconds) for someone on an average home network, or even 4-5 seconds on a poor 3G mobile connection. Providing progress indication for the “heavier” tasks helps ensure the user does not get frustrated with an unresponsive site and proceeds to make their purchase.
The hope is that long, boring tasks will be perceived as shorter and more tolerable when the user can tell they’re making progress [1, 4].
References: