Webforms are typically the last thing a website visitor encounters before they become a successful “conversion”. They are your last opportunity to ruin a smooth customer journey, and all too often that is exactly what happens. All the effort spent getting a potential customer to a site and guiding them to the final gateway is wasted when a form has a terrible UX, asks unnecessary questions or fails to manage expectations properly.
And yet, businesses too often treat the form as a “black box”. They know how many people reach the form and they know how many end up successfully completing it. From that they can calculate a conversion rate, rub their hands and walk away.
What they don’t know is what happens in between: that period where the user is staring, clicking and typing away before abandoning in a fit of frustration.
This knowledge gap is what prevents businesses from truly optimizing the experience of their form visitors. Without that information they can’t pinpoint the friction points in their form, which means they can’t generate hypotheses on how to improve things and, ultimately, increase the volume of successful conversions through their checkout.
This article addresses this gap. It will take you through where to get the data you need to improve your form, what the key metrics to look out for are, and what they tell you about your form.
Where to get the data
The good news is that accessing the treasure trove of user behaviour data that your form generates is a relatively simple matter.
Your first instinct will probably be to reach for Google Analytics. It’s certainly possible to spend time tagging up all your form fields in GA to get some sort of analysis. However, this not only requires a lot of time but also the expertise needed to do a good job of it. Fortunately, there are out-of-the-box form analytics software products that are much easier to set up and do not require developer time to get up and running.
Full disclosure - we at Zuko are a form analytics provider, so we are a little biased on which product you should use, but there are a number of reputable data platforms that could do the job for you.
Once you’ve successfully got the software configured and pulling in your form data, the next step is to discover what it can tell you about your form and its users.
What to look for in the data
(i) Abandonment Data
Perhaps the most obvious metric, abandonment data shows you the form field that users interacted with last before leaving your site forever. By looking at where most of the audience drops off, you can identify the biggest pain points and focus your energy on them.
Having said this, you do need to be a little careful. There are a few things you need to think about before drawing your final conclusions:
Firstly, make sure you take account of abandonment rates rather than just looking at absolute volumes. Many forms have conditional pathways that show different questions depending on previous inputs. You will often find fields that have a low volume of total abandonments but a high abandonment rate. You need to be paying attention to these fields as well as those with high overall abandons.
For example, in the table below, Field A has the highest total abandon count. While this would need to be looked at, Field D should also be a priority as it has a huge abandonment rate despite making up less abandons in total than Field A or Field C.
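The rate-versus-volume point can be sketched in a few lines of Python. The field names and numbers here are invented for illustration (they are not the figures from the table above), but the ranking logic is the general technique: divide abandons by the sessions that reached each field, then sort by rate rather than raw count.

```python
# Hypothetical field-level data (numbers invented for illustration):
# "views" = sessions that reached the field, "abandons" = sessions that left there
field_stats = {
    "Field A": {"views": 10000, "abandons": 400},
    "Field C": {"views": 9500, "abandons": 250},
    "Field D": {"views": 600, "abandons": 180},  # conditional field, low traffic
}

def abandonment_rate(stats):
    """Abandons as a share of the sessions that reached the field."""
    return stats["abandons"] / stats["views"]

# Rank fields by abandonment rate rather than raw abandon count
ranked = sorted(field_stats, key=lambda f: abandonment_rate(field_stats[f]), reverse=True)
for field in ranked:
    s = field_stats[field]
    print(f"{field}: {s['abandons']} abandons, {abandonment_rate(s):.1%} rate")
```

Sorted by raw abandons, Field A would top the list; sorted by rate, the low-traffic conditional Field D jumps to the front, which is exactly the kind of field a volume-only view would miss.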
Secondly, be wary if you have a particularly involved field (think long text answers in university application forms as an example). Abandonment statistics track the last field interacted with. While this will often be the one causing the user friction, it isn’t always so.
In the case where your form has a particularly complex field, the user may be put off from starting it altogether. This pushes the “abandonment” trigger to the preceding field, artificially inflating its dropout rate. If your analytics data is indicating a strangely high abandonment rate for a “simple” field, check whether the next field could be scaring them away due to the complexity of its requirements.
Finally, it is pretty common to see the “Submit” button come up with a high abandonment rate. Of course, this doesn’t mean that the button itself is bad. More likely, the user has clicked on it and received a barrage of red error messages which has caused them to leave. If this pattern shows up in your data you’ll need to dig deeper (more on this later).
(ii) Field Returns / Corrections
One of the things that grinds customers’ gears is having to return to a field after they thought they had completed it, usually because of a bright red error message. This may be due to poor form instructions, unnecessary validation or a rogue autocorrect function (amongst other things), but you can guarantee the user’s cortisol is rising, making it more likely they will drop out. The more users return to a particular field, the more you’ll want to address it.
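How a field return is counted depends on your analytics tool, but a minimal sketch of the idea, assuming you have a per-session stream of field-focus events (field names here are made up), looks like this: any focus on a field the user has already visited counts as a return.

```python
from collections import Counter

# Hypothetical focus-event stream for one session: the order in which
# the user entered each field ("submit" = clicking the submit button)
events = ["name", "email", "password", "email", "password", "submit"]

# A "return" is any focus on a field the user had already visited and left
seen, returns = set(), Counter()
for field in events:
    if field in seen:
        returns[field] += 1
    seen.add(field)

# Aggregated across all sessions, high return counts flag problem fields
print(dict(returns))
```

In this invented session the user went back to both the email and password fields once each; summed over thousands of sessions, those counts show which fields are sending users back.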
(iii) Time Taken
In itself, the time taken to complete a field doesn’t necessarily indicate frustration. Some fields naturally take a long time to complete (such as longer text questions in university applications). You’ll need a point of comparison to be sure whether a long time spent in a field indicates a propensity to abandon.
One way to do that is by comparing yourself to benchmarks such as the ones for common fields below taken from Zuko’s benchmarking studies.
However, this can still get you into trouble. No two forms are the same, so you need to be careful when comparing yourself to others. A more reliable method is to follow the technique in section (iv) below.
(iv) Data segmentation - Abandoners Vs Completers
For both field returns and time taken data, it can be difficult to determine if there is a genuine issue or if a field naturally requires a lot of time or corrections. The best way to evaluate this is by analysing the difference in behaviour between those who successfully complete a form and those who end up abandoning. A significant difference will point to user friction that is directly causing abandonments.
As an illustration, the table below shows example data for a form: the proportion of user sessions where users returned to correct each field, plus the mean seconds spent completing it. The data is segmented between abandoned and completed sessions so we can run this type of analysis.
The raw data might indicate that Field E has a lot of friction as it has the highest return rates alongside a longer time needed to complete it.
However, the behaviour of abandoners and completers for this field is very similar so there is no indication that these factors are driving abandonment. Compare this to Field H - abandoners are much more likely to return to the field and to spend more time on it. This is a clear indicator of a likely issue with the field that you should spend time investigating.
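The comparison described above can be expressed as a simple "gap" calculation. This sketch uses invented numbers shaped like the Field E / Field H pattern discussed (similar behaviour across segments for E, a large divergence for H); the field names and figures are illustrative, not the article's actual table.

```python
# Hypothetical per-field behaviour, segmented by session outcome.
# return_rate = share of sessions that re-entered the field after leaving it;
# mean_time   = mean seconds spent in the field. All numbers invented.
segmented = {
    "Field E": {"completed": {"return_rate": 0.40, "mean_time": 25.0},
                "abandoned": {"return_rate": 0.42, "mean_time": 26.0}},
    "Field H": {"completed": {"return_rate": 0.10, "mean_time": 6.0},
                "abandoned": {"return_rate": 0.45, "mean_time": 19.0}},
}

def friction_gap(field):
    """How much worse abandoners behave on this field than completers."""
    a, c = segmented[field]["abandoned"], segmented[field]["completed"]
    return {
        "return_rate_gap": a["return_rate"] - c["return_rate"],
        "time_gap": a["mean_time"] - c["mean_time"],
    }

for field in segmented:
    print(field, friction_gap(field))
```

Field E’s gaps are near zero (it is hard for everyone, so probably just a naturally demanding field), while Field H’s large gaps single it out as a likely driver of abandonment.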
(v) Error Messages
By definition, when an error message is triggered something has gone wrong. If you can track the occurrence of these messages, you can work backwards to diagnose what went wrong and home in on the biggest issues on your form.
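Once error events are being captured, surfacing the biggest offenders is a simple tally. This is a sketch under the assumption that your tooling logs each validation error as a (field, message) pair; the field names and messages below are invented.

```python
from collections import Counter

# Hypothetical validation-error event log: one (field, message) pair
# per error shown to a user
error_events = [
    ("email", "Please enter a valid email address"),
    ("password", "Password must contain a number"),
    ("password", "Password must contain a number"),
    ("phone", "Invalid phone number format"),
    ("password", "Passwords do not match"),
]

# Tally errors by field to surface the biggest sources of friction
errors_by_field = Counter(field for field, _ in error_events)
for field, count in errors_by_field.most_common():
    print(f"{field}: {count} errors")
```

Tallying by message text as well as by field is often worthwhile: the same field can fail for several different reasons, and the fix differs for each.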
(vi) Post Submit Activity
We mentioned in point (i) that the submit button often triggers errors that cause user frustration and/or abandonment. We can use this to our advantage. If your data platform can show you what people did immediately after clicking submit, you can quickly identify what is causing the biggest headaches for people who are trying to give you their details but failing due to form-related issues.
The Sankey example below illustrates this. It shows us what users who ultimately abandon do immediately after clicking the submit button.
We can see that, while a proportion of submitters abandon immediately, a lot do go straight back to the password or mobile phone fields. This analysis gives us swift insight on the most pressing issues for users and allows us to focus our improvement efforts on the most problematic fields.
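The flows feeding a chart like that reduce to a "what came next" count. Assuming you can export ordered event sequences for abandoned sessions (the sequences below are invented), the first interaction after the submit click is all you need:

```python
from collections import Counter

# Hypothetical ordered event streams for sessions that ultimately abandoned
abandoned_sessions = [
    ["email", "password", "submit", "password"],
    ["email", "submit", "mobile_phone", "submit"],
    ["email", "password", "submit"],  # left immediately after submit
    ["email", "password", "submit", "password", "submit"],
]

def first_action_after_submit(events):
    """Field interacted with straight after the first submit click."""
    idx = events.index("submit")
    return events[idx + 1] if idx + 1 < len(events) else "abandoned"

# Distribution of post-submit destinations among abandoners
flows = Counter(first_action_after_submit(s) for s in abandoned_sessions)
print(flows)
```

In this toy data most abandoners bounce back to the password field after submitting, mirroring the pattern in the chart: the post-submit destinations are a ready-made priority list for fixes.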
Wrapping it Up
The clear takeaway is that you can’t optimize your form in the dark. There are various ways to gather the necessary data; if you are serious about improving your form’s conversion rate, you need to access this data and use it to drive the generation of your testing hypotheses.