Land of Lost Revenue: What's Important to Measure in E-Commerce?

by Cliff Kurtzman
Chief Executive Officer, ADASTRO Incorporated.
 
 

November 23, 2004

"One accurate measurement is worth a thousand expert opinions."

-- Admiral Grace Hopper

In the September 15, 2004 issue of the Apogee, we reviewed a presentation from last summer's Emetrics Summit that covered tactics that Amazon.com uses to automate, optimize and enhance revenues from their online marketing activities. If you missed it, you can read it at:

Based on the feedback from that review, we are pleased to bring our readers a review of another outstanding session from the conference. This session, presented by User Interface Engineering founder Jared Spool, examined the things organizations do online, often without realizing it, that cause them to lose potential revenue from their online marketing practices.

Jared started out by stating that he had tried to put together a presentation that would look at emetrics issues from "the usability side of the world." He led off with a couple of disclaimers. The first was that, until a few days earlier, he knew absolutely nothing about emetrics (the science of measuring and improving the performance of your online marketing activities) or web analytics (the science of analyzing the performance of your web site). He also stated that he has never set up an emetrics strategy and that it is in fact a life goal of his to NEVER set one up--an objective he figures he is about halfway done fulfilling.

Jared also stated that while his presentation talked a good bit about The Gap's online marketing efforts, he wanted us to realize that The Gap was not a current or past client of his--and that after the presentation he was about to give, they would likely never be one in the future either!

Jared also warned the audience that his presentation contained a whole lot of statistics, and that one should keep in mind what John Thompson, president of BestBuy.com, had told him: "If you torture the data long enough, it will confess to anything you want."

Jared put up a screen shot from gap.com. He speculated that the people who run the gap.com site think about things like setting up a strategy to measure its performance, looking at all the same kinds of variables any of us would look at if we were trying to measure a similar site and make it better. Since gap.com is an e-commerce site, making it better should correlate directly with increasing the revenues it produces.

Jared speculated on the issues The Gap would face in trying to learn whether the strategy they put together for measuring their site's performance would actually tell them what is happening on the site, and whether it would reveal the problems inherent in the site that need to be solved to improve its performance.

He related a test his company had conducted with someone who came into their usability testing lab. His name was Keith, and he was an extremely smart 25-year-old male who wanted to buy a sweater for his girlfriend. In particular, Keith knew that he wanted to buy an "ecru ribbed sweater," and he knew that ecru was a color--something Jared noted they don't generally teach in "guy school." Keith knew that The Gap carried the sweater because he had been in a Gap store with his girlfriend when she saw it and indicated that she really liked it. He knew what it cost, and he knew his girlfriend was a dress size 6 (also something they don't teach in "guy school").

Keith went to Gap.com with the intention of buying the ribbed sweater. He found the product at the right price and color on their site without problem. He was just about to buy it... and then he stopped. He stopped because the sizes for the sweater that were listed ranged from extra-small to extra-extra-large. And another thing that they don't teach you in guy school is how to translate women's dress sizes into sweater sizes ranging from extra-small to extra-extra-large.

Fortunately on the gap.com site there was a size chart. So Jared asked Keith to tell him what he thought he would get if he clicked on the size chart. Keith told him he expected he would get a table that would show sizes ranging from extra-small to extra-extra-large, along with the equivalent dress sizes for each category.

And sure enough, Keith got a table, but it wasn't quite what he expected -- it had chest size rather than dress size. At this point Jared noted that one of the things that they DO teach you in guy school is to NEVER guess your girlfriend's chest size. Keith wasn't sure what to do at this point. Gap.com does give instructions for measuring chest size--it says "lift your arms and measure around the fullest part of your chest." Keith tried this, but measuring around the fullest part of his chest didn't quite produce a result that would help him buy a sweater for his girlfriend. And Keith knew that if he guessed the size of his girlfriend's chest and got it wrong, it would not only cost him the price of the sweater, but it would cost him flowers or jewelry as well. So in the end, he didn't buy the sweater.

It is easy for Gap.com to fix this problem if they realize it exists... it is just a matter of adding another column to the chart. Another way to fix it is to do what would happen in a store: if Keith were in a store and told the clerk that his girlfriend was a size 6 and asked what size sweater that equated to, the clerk would likely have had no trouble telling him the probable size, along with the magic words "if it doesn't fit, she can return it." Those magic words don't appear on the page that Keith looked at on Gap.com, nor is there any way to instantly query a sales assistant.

The real problem, Jared noted, is how Gap.com will ever learn that this problem exists. What in their emetrics/analytics strategy will discover it? (It is now several months later, and while the sweaters offered on Gap.com have changed, the size chart remains exactly as it was when Keith looked at it.) Gap.com doesn't know why the purchase didn't happen. They don't know how frequently the problem occurs. They don't know how much revenue the problem is costing them. This is because, even though they are measuring a lot of cool stuff about the site, the statistics they are measuring probably don't actually tell them about these kinds of problems.

Among the things that Gap.com probably IS measuring is conversion rate. Conversion rate is one of the most popular things that people seem to measure. Jared mentioned that he had heard a lot of speakers at the Summit talk about measuring conversion rate. Every client Jared talks to seems to have a goal of increasing conversion rates. But, Jared noted, this seems like a rather odd thing to measure. He went on to explain why.

While people may measure conversion rates differently, one of the most popular ways is to divide the number of purchasers by the number of visits to the web site. But there is a trap in thinking that what matters is simply increasing the conversion rate. Because conversion is a ratio rather than an absolute number, many folks treat it as if the denominator were fixed and well understood. In reality it is a ratio in which the numbers on both the top and the bottom can change.
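To make the trap concrete, here is a minimal sketch (in Python, with made-up visit and purchaser counts) showing how the same number of purchasers can produce very different conversion rates as the denominator moves:

```python
def conversion_rate(purchasers, visits):
    """Conversion rate as commonly defined: purchasers divided by visits."""
    return purchasers / visits

# Hypothetical numbers: purchasers stay constant while a traffic-driving
# campaign changes the number of visits (the denominator).
print(f"{conversion_rate(1_000, 50_000):.1%}")   # 2.0% before the campaign
print(f"{conversion_rate(1_000, 200_000):.1%}")  # 0.5% after -- same sales, "worse" rate
```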

You have to be careful... for example, should you be counting people who come to your site because of an errant marketing campaign? For a time, The Gap's commercials featured new artists singing new songs, and a whole bunch of people were coming to the site to find out who the artist was and what the song was, not to buy any clothes. If you count those people, your conversion rate will drop even though you haven't made any change to your site. Should those people be counted in the conversion rate? Should you also count people who come to the site not to purchase, but to research products for a wish list or to buy later in the store?

In his research lab, Jared has found a way to increase conversion rates that has had phenomenal results. It is a fast way to increase conversion, it is cost effective, and it works REALLY well. Basically, what you do is you stop marketing... just completely shut down your company's marketing department. The return on investment (ROI) on it is amazing.

What happens when you stop marketing is that no new visitors come to your site, because no one new hears about it. The people who continue to visit will mostly be longtime visitors who are in love with your brand, remember who you are, and have no incentive to visit other than to make purchases.

Your overall visitor counts will drop dramatically, but your percentage of visitors that purchase will increase substantially. So your conversion rate will increase (while your revenues decrease). If your goal is really to get your conversion rate as high as possible, this is a great way to do it.
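A back-of-the-envelope illustration of this effect, using hypothetical before-and-after numbers rather than anything from Jared's data:

```python
# Hypothetical numbers: shutting down marketing shrinks traffic far more
# than it shrinks purchases, so the conversion rate rises as revenue falls.
before = {"visits": 200_000, "purchasers": 2_400, "avg_order": 100}
after  = {"visits": 20_000,  "purchasers": 1_500, "avg_order": 100}

for label, d in [("with marketing", before), ("without marketing", after)]:
    rate = d["purchasers"] / d["visits"]
    revenue = d["purchasers"] * d["avg_order"]
    print(f"{label}: conversion {rate:.1%}, revenue ${revenue:,}")
# with marketing: conversion 1.2%, revenue $240,000
# without marketing: conversion 7.5%, revenue $150,000
```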

Jared then asked the audience a question... if you could choose between two options, which would you pick? Option number one is to have a huge increase in conversion rate for your site, but no increase in revenues. Option number two is to have a dramatic increase in revenue, but a decrease in conversion rate. Which do you pick? Everyone in the audience picked the second option. So Jared then asked us "Why are we talking about conversion rates? Why is it even in our lexicon? Nobody really wants conversion rate... they want revenue! So it baffles me as to why we are all so focused on it... all my clients are focused on it, and I don't understand it, I just don't get it!"

The next question becomes... if conversion rate is not what you should focus on, how do you best measure revenues?

Jared has found that the best way to measure performance is not to look at overall site revenue, because overall revenue gives you no way to figure out how your actual revenues relate to your total possible revenues. Instead, his team has found it beneficial to look at a quantity called "lost revenue." Lost revenue is the revenue that people would have spent on the site but, for some reason, didn't. When you enhance the design of your site and remove obstacles to making sales, your lost revenue should theoretically decrease. If we understand how to reduce lost revenue, we should see increases in sales, which should make everyone happy. This applies not only to new customers we are advertising to, but also to existing customers, who are less expensive to market to and deliver a greater ROI.
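As a minimal sketch of the concept (the dollar figures here are hypothetical, though the $660 echoes the Gap result reported later in this article):

```python
def lost_revenue(potential, actual):
    """Revenue shoppers were ready to spend on the site but did not."""
    return potential - actual

# Hypothetical: a shopper arrives ready to spend $1,000; the site captures $660.
print(lost_revenue(1_000, 660))  # 340 -- dollars left on the table by the design
```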

In Jared's testing lab, they measure lost revenue using something called the Seven Eleven Milk Experiment. The experiment works as follows: imagine that you have a magical device that goes off every time someone within a five-mile radius of you runs out of milk. You instantly drive to their house and find them sitting at their kitchen table with an empty carton of cereal and a dry bottle of milk. You then put them in your car and take them to the nearest Seven Eleven store. Just to make sure they purchase, you give them the cash for the milk.

What would you expect that the conversion rate would be for that customer at that point? It would probably be right near 100%--the Seven Eleven would really have to mess up to not sell milk to that customer at that moment. A Seven Eleven can sell 100% of the time, and a Seven Eleven is not exactly an optimally designed shopping experience!

What Jared's company does in online marketing tests is find people (like Keith in our Gap.com example) who need products. They then bring them to sites that they know carry the products, and they give them the cash to buy, just like in the Seven Eleven Milk Experiment. They should see 100% conversion, but in the lab they are having a good day when they get 30% conversion for an online purchase. 70% of the time, the site is refusing to sell products that the customer wants and has the money to purchase. And keep in mind that this is a very skewed setting... this isn't every potential customer who might visit the site, but rather a small subset of visitors who know what they want, are ready to buy, have the money to buy it, and want only to complete a transaction. If you can't sell to them more than 30% of the time, what is going to happen with the people who come to the site unsure of what they want to buy, or uncertain whether you even offer what they are seeking?

In a recent study on one client site, Jared's company was able to quickly identify more than 280 obstacles that prevented people from shopping. Jared talked about observing the "Customer Sieve" and applied it to their experience with people who had been given the online version of the Seven Eleven test. He gave an example in which, for every 100 people who came to the home page, 91 made it to the Department page, only 83 made it to the Gallery page, only 58 made it to the Product page, only 45 made it to the Checkout page, and only 34 ended up completing a purchase. In total, 66 people who wanted to make a purchase dropped off the site because it "refused" to let them go further.
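Those numbers form a simple funnel; this sketch just recomputes the stage-by-stage losses Jared described:

```python
# The "Customer Sieve" funnel from the example above.
funnel = [
    ("Home page",       100),
    ("Department page",  91),
    ("Gallery page",     83),
    ("Product page",     58),
    ("Checkout page",    45),
    ("Purchase",         34),
]

for (stage, n), (_, remaining) in zip(funnel, funnel[1:]):
    print(f"{stage:16s} -> lost {n - remaining} of {n} shoppers")
print(f"Total lost: {funnel[0][1] - funnel[-1][1]} of {funnel[0][1]}")  # 66 of 100
```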

His company ran a Seven Eleven type experiment on 13 apparel sites. For every $1000 they gave people to shop with, an ideal site should have captured the full $1000. Their results were not so ideal. The best two sites were The Gap's site, where people spent an average of $660 out of the $1000, and the Lands' End site, where they spent an average of $465. The worst two were the Macy's site, where people spent an average of $156, and the Newport News site, where people spent an average of only $63 out of every $1000 they were given to shop on the site.

There is a 10x difference in performance between the Gap site and the Newport News site... what causes that kind of difference? Jared's team collected a ton of data during this testing, and the data told them that one segment of the clickstream was critical to completing the sale: the interval between when the buyer reached the home page and the point where they clicked to add the item to their cart. Sales correlated strongly with the number of clicks required to reach the add-to-cart point--the more clicks, the lower the sales. On The Gap's site, an average of 12 clicks was required. On the Lands' End site, the average was 16 clicks. On the Macy's and Newport News sites, it was an average of 51 clicks.
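Using just the four sites reported above, a quick sketch confirms how strongly spend per $1000 tracks, inversely, with clicks to cart (the Pearson correlation is computed by hand to keep the sketch dependency-free):

```python
# Clicks to add-to-cart vs. average dollars spent per $1,000, as reported above
# (Macy's and Newport News both averaged 51 clicks).
data = {
    "Gap":          (12, 660),
    "Lands' End":   (16, 465),
    "Macy's":       (51, 156),
    "Newport News": (51,  63),
}

clicks = [c for c, _ in data.values()]
spend  = [s for _, s in data.values()]

n = len(clicks)
mean_c, mean_s = sum(clicks) / n, sum(spend) / n
cov   = sum((c - mean_c) * (s - mean_s) for c, s in zip(clicks, spend))
var_c = sum((c - mean_c) ** 2 for c in clicks)
var_s = sum((s - mean_s) ** 2 for s in spend)
print(f"Pearson r = {cov / (var_c * var_s) ** 0.5:.2f}")  # about -0.97
```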

So they next looked to see what was happening within the Macy's and Newport News clickstream that was not happening within the Gap and Lands' End clickstream.

On the Newport News site (http://www.newport-news.com/), products are grouped into very odd made-up categories, including: "Trends", "Lifestyle", "Jeanology", and "Shape fx". How many people have ever shopped for a "Lifestyle"? People clicked all over the site because they had no clue what they were looking at. Lands' End had a very different design for a similar kind of page. They took a department like swimwear and divided it into intuitive categories, such as "Hips Wider Than Shoulders", that made it easy for the site visitor to find the product that was right for their body type.

Jared's experiment identifies how much revenue is being lost due to problems at each stage of the buying process. The Newport News site actually worked better than the Gap's site once the site visitor found the product for which they were searching. The problem was in how difficult the Newport News site made it to find the desired product.

In the apparel study, Jared found that his client (the retailer that had commissioned the study of the 13 sites) performed half as well as the best site overall. The client ranked #4 of 13 in the ability to find a desired product, so this was not deemed the most critical area for improvement. But the client came out in the middle in effectiveness at placing the item into the cart, and Jared's team identified several sites that did it better and could be learned from. The client also came out second to last in the checkout process, leaving very large room for improvement in that aspect of the design.

Jared noted that this kind of testing can produce incredible results, with clients reporting revenue increases of 180% to 360% within twelve months. Still, it is labor intensive and expensive, and there are no automated tools that do it. It costs between $4000 and $8000 per shopper to conduct this kind of testing. He also noted that while this kind of testing works for some e-commerce sites, there are lots of cases where this technique won't work--for example, he mentioned they can't do anything with this kind of testing on eBay right now.

Jared then went back to conversion rates. He related that the large brands his company works with tend to be very happy to see 2% conversion rates on their sites. That means tremendous resources are often being expended on the 98 out of 100 visitors who will never purchase. He asked, "What would happen if we only focused on the people who purchase?" Jared then concluded his presentation with a case study of another of his clients, a Fortune 100 specialty retailer with 750 stores in North America, 100,000 employees, and "a trusted brand with very loyal customers."

In 2003 this company had $450,000,000 in web revenues (about 1/20th of the company's total revenue) and 121,250,000 web visitors. BizRate scores for this company show that its customers are very satisfied. Its online conversion rate was 1.6%, which is fairly typical for this category of retailer and equates to 1,940,000 purchasers.

To help the audience visualize what a 1.6% conversion rate means, Jared produced a ball of yarn that was 62.5 feet long, and with the aid of the audience holding it in the air they unrolled it around the conference room. If the total length of the yarn represented the entire 121.25 million visitors that went to his client's site, then only the very last foot of the 62.5 feet of yarn represented the 1.94 million that were actual purchasers.

Of that last foot of yarn, he noted that just 2.4 inches, representing 388,000 (20%) of the purchasers, accounted for $351 million in revenue (78% of total web revenue). Those 388,000 customers visit an average of six times per year and spend an average of $150 per visit.
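The yarn arithmetic checks out; here is a short sketch reproducing the figures from the case study:

```python
# Reproducing the case-study figures.
visitors   = 121_250_000
conversion = 0.016
purchasers = visitors * conversion
print(f"{purchasers:,.0f} purchasers")  # 1,940,000

yarn_feet = 62.5
print(f"{yarn_feet * conversion * 12:.1f} inches of yarn are purchasers")  # 12.0 (one foot)

top_spenders = 0.20 * purchasers
print(f"{top_spenders:,.0f} top spenders = {12 * 0.20:.1f} inches of yarn")  # 388,000 = 2.4 in
print(f"their share of web revenue: ${0.78 * 450_000_000:,.0f}")  # $351,000,000
```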

They realized that it might be much easier to get these 388,000 known customers to spend an additional $100 per visit (resulting in an additional $232,800,000 in revenue) than to add equivalent revenue by finding 2.3 million new customers who spend $100 each. They believed the web site was losing a lot of potential revenue on each sale. The problem was that the client had no way to identify whether someone was part of the 388,000 when they came to the site (or visited a store). The client also didn't know how to change the design of the site so as to no longer leave money on the table each time one of these people visited. So the team Jared has working for this client is now focused not on increasing conversion rates, but on questions such as how to segment the data by top spenders, how to learn what obstacles those customers encounter, and how often they encounter them.
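The comparison between the two paths to the same revenue increase is simple arithmetic, reproduced here from the figures in the case study:

```python
# Extra revenue from known top customers vs. acquiring new customers.
top_customers   = 388_000
visits_per_year = 6
extra_per_visit = 100  # dollars

upside = top_customers * visits_per_year * extra_per_visit
print(f"extra revenue from known customers: ${upside:,}")  # $232,800,000

new_customers_needed = upside / 100  # new customers spending $100 each
print(f"equivalent new customers needed: {new_customers_needed:,.0f}")  # 2,328,000
```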

Many of us might do well to adopt these strategies too!


During the course of the Emetrics Summit, we also heard case studies presented by InterContinental Hotels, SAP, Hewlett Packard, Avaya and SmartDraw. Jim Novo gave a great presentation on determining customer lifetime value, Terry Lund covered the topic of how to evaluate vendors of Emetrics software tools, and Eric Peterson of Jupiter Research talked about key performance indicators for web analytics. A panel of Emetrics software vendors gave briefings on the strengths of their products, and answered audience questions as conference attendees tried to sort through the various offerings.

If you have an interest in learning more, you can get a copy of the full handouts from the Summit along with audio recordings of the full sessions at: http://www.emetrics.org/summit604/proceedings.html

The 2005 Summits are being planned. They will be held in Santa Barbara June 1-3, 2005 and in London June 8-10, 2005.

Details are at: http://www.emetrics.org/summit605/index.html

 
 