November 10, 2017
Your marketing team is about to launch a new campaign that will steal a march on your more direct competitors. Thousands of tickets will go on sale at incredibly low fares. Advertising and social media campaigns will support the initiative...
Coming up next: thousands of visitors swarm to your website. Your booking engine crashes.
As a result, not only is your sales campaign affected, but you also lose some of your regular sales outside of it and deliver a bad user experience that damages your airline’s image.
Does it have to be this way? Not at all.
The good news is that the airline managers in the previous example could easily have avoided this issue by caching pricing data.
In fact, it can get a lot more granular than that.
By deploying an availability cache tool, you can filter and segment your visitor flow based on the channel they come from (your main booking site, secondary ad-hoc promotional sites, online travel agents…), the date range, or the booking window. You can then assign a different level of latency to the pricing data you show to each segment.
For example, you could offer real-time pricing to visitors who come directly to your site, but cache the pricing shown to those arriving from other sites, even ones you set up yourself for marketing purposes.
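To make the idea concrete, here is a minimal sketch (not a real product API, and the channel names and cache lifetimes are hypothetical) of how pricing requests could be routed to a cache or to the live booking engine depending on the visitor's channel:

```python
import time

# Each traffic channel is assigned a maximum acceptable age for cached
# pricing data. Direct visitors get live prices (TTL 0); promotional
# channels and travel agents tolerate progressively staler data.
CHANNEL_TTL_SECONDS = {
    "direct": 0,        # always query the booking engine in real time
    "promo_site": 900,  # e.g. promo.airlinexyz.com traffic: 15-minute cache
    "ota": 3600,        # online travel agents: 1-hour cache
}

_price_cache = {}  # route -> (price, timestamp of last live fetch)

def get_price(route, channel, fetch_live):
    """Return a cached price if it is fresh enough for this channel,
    otherwise make a real-time request via fetch_live."""
    ttl = CHANNEL_TTL_SECONDS.get(channel, 0)
    entry = _price_cache.get(route)
    if ttl > 0 and entry is not None:
        price, fetched_at = entry
        if time.time() - fetched_at <= ttl:
            return price  # served from cache: no hit on the booking engine
    price = fetch_live(route)  # real-time call to the pricing engine
    _price_cache[route] = (price, time.time())
    return price
```

With this setup, a burst of promo-site traffic generates at most one live pricing request per route per 15-minute window, while direct visitors are unaffected.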
If we go back to the hypothetical case at the beginning of this article, let’s say your marketing team has created a dedicated site promo.airlinexyz.com to funnel traffic from social media campaigns to your booking site. As there is a risk that sudden peaks of traffic may swamp your booking engine for limited periods of time, you may want to cache the data these users see. Granted, it is not going to be real time, but it will still be fresh enough to make no practical difference in the vast majority of requests.
In fact, for the small percentage of cases where the cached and real-time prices differ, you can run a quick price check once the user enters the regular booking workflow. This simple step filters out those who are just checking prices out of curiosity and concentrates resources on those showing serious purchase intent.
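That re-check could be sketched as follows (a hypothetical helper, with `fetch_live` standing in for whatever real-time pricing call the booking engine exposes): the live lookup happens only when a user actually starts booking, so casual browsers never trigger it.

```python
def start_booking(route, cached_price, fetch_live):
    """Run one real-time price check only when the user enters the
    booking workflow. Visitors who just browse cached prices never
    reach this point, so they generate no live pricing requests."""
    live_price = fetch_live(route)
    if live_price != cached_price:
        # The cached quote was stale: surface the updated fare
        # to the user before payment.
        return {"price": live_price, "price_changed": True}
    return {"price": live_price, "price_changed": False}
```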
In addition to these operational benefits, there are other positive effects that airlines can derive from caching data, namely, the reduction of their look-to-book ratios.
The look-to-book ratio is the number of price requests that users make to the Passenger Service System (PSS) relative to the number that end up booking a ticket. High look-to-book ratios can be an issue: with some PSSs, such as Amadeus’ Navitaire, the expenses add up quickly, as the airline is charged for each “look” even if no “book” ensues.
A sudden inflow of visitors with a low conversion rate can send look-to-book ratios through the roof, which is not very efficient from a financial point of view.
By caching data, you reduce the number of real-time pricing requests that your users send to your PSS, thereby limiting the associated costs.
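The arithmetic is straightforward. With hypothetical campaign numbers (50,000 fare searches, 500 bookings, and an assumed 90% cache hit rate), caching cuts the ratio by an order of magnitude:

```python
def look_to_book(pss_requests, bookings):
    """Price requests sent to the PSS per completed booking."""
    return pss_requests / bookings

# Without caching, every one of the 50,000 searches hits the PSS.
without_cache = look_to_book(50_000, 500)  # 100 looks per book

# With a 90% cache hit rate, only 5,000 searches reach the PSS;
# the bookings themselves are unaffected.
with_cache = look_to_book(5_000, 500)      # 10 looks per book
```

Since the PSS bills per “look”, that tenfold drop translates directly into lower fees for the same number of tickets sold.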
A degree of intelligence about market dynamics and the capacity to react rapidly to a fast-evolving situation are, of course, necessary to make the most of these capabilities. Our availability cache tool, for example, uses a smart algorithm that monitors inventory levels across routes, flights, and date ranges to determine the right data refresh speed. Think of it as the person directing passengers to one queue or another before airport security.
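The details of any such algorithm are proprietary, but the underlying idea can be illustrated with a toy heuristic (the thresholds and intervals below are invented for illustration, not taken from the product): refresh cached fares faster wherever prices are likely to move quickly.

```python
def refresh_interval(seats_left, days_to_departure):
    """Illustrative heuristic only: cached fares are refreshed more
    often when inventory is scarce or departure is close, since
    prices change most frequently in those conditions."""
    if seats_left < 5 or days_to_departure <= 3:
        return 60    # near sell-out or last minute: refresh every minute
    if seats_left < 20 or days_to_departure <= 14:
        return 600   # moderately constrained: refresh every 10 minutes
    return 3600      # ample inventory, far-out departure: hourly is enough
```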
The key is to make sure your booking website doesn’t die of success!