InfoDesign newsletter

Jared Spool: The InfoDesign interview

By Dirk Knemeyer (April 2004)

Each month, InfoDesign interviews a thought leader in the design industry, focusing on people who are identified with, or show strong sensibilities toward, the design of information and experiences. This month, Dirk Knemeyer interviews Jared Spool.

Jared is one of the most important - and best-recognized - voices in the field of usability. User Interface Engineering (http://www.uie.com/), the firm that he founded in 1988, is the world's largest research, training and consulting firm specializing in website and product usability.

Dirk Knemeyer (DK): Jared, share with us what you've been thinking about and working on lately.

Jared Spool (JS): Well, lately, I've been thinking about chocolate chip cookies. I really like them. I probably don't get enough of them.

Oh, you mean about work. Ignore the cookie thing, then.

We've been working on lots of interesting stuff. It all ties in to having more information in each of the design stages.

For example, one project we've been working on is an informative-content framework. We're interested in information-rich designs, such as a site to help chemotherapy patients understand how to reduce symptoms or a technical support site that helps web services customers better implement and maintain those services.

When a design team has to tackle one of these designs, how do they know what content is required? They could do field studies (contextual inquiries and ethnography) to determine who the users are, what content they need, and when. However, that's an expensive, time-consuming process, and it comes at the beginning of a project, when resources and funds are extremely tight.

Therefore, we've been looking at ways to derive, at least as a first cut, the needs of users. For this, we turned to online discussion forums, where people discuss issues amongst themselves, and we've been studying the patterns in the questions and answers that people post.

We discovered that there are basically 14 types of questions, no matter what the subject matter. We're hoping these 14 types, which we're calling topic perspectives, can guide designers to plan and implement an initial information resource that is complete, helpful, and delightful to their users.

"We discovered that there are basically 14 types of questions, no matter what the subject matter."

Another research project we have under way has to do with selecting participants for studies, whether they be usability tests, focus groups, field studies, or surveys. We know that information-gathering techniques like these are extremely helpful to designers. However, nobody has really looked at what happens when you select poor-quality participants for a study.

If you select a poor-quality participant - a participant who doesn't really match the true target you want - you get distorted results that require more interpretation and often more participants to validate. In the worst case, you assume the information is valid and make critical design decisions that inevitably do not achieve your goals. Poor-quality participants really increase development costs dramatically.

Because we regularly conduct large studies with 30-80 participants, we can start to compare the information collected during the recruitment and screening phases with the behaviors and information we collect during the study. We can see whether what we learn during recruitment and screening predicts what we later observe, which helps us get better results from our studies.

For example, design teams often want to study the behaviors of both novice users and experts. They divide users into these two categories because they perceive the users will behave differently and they want to ensure that their designs meet the needs of both groups.

However, if you have a limited study budget and can only recruit, say, four participants from each category, how do you ensure that each participant truly belongs in his or her category? By looking at the actual behaviors of participants in our studies, we've learned exactly what questions to ask in the screening process to ensure that we're categorizing folks properly. It's a process that currently involves a lot of statistical modeling; however, we are now working on ways to make it cost-effective for every design team.
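To make this concrete, here is a minimal Python sketch of validating screener questions against observed behavior: score each past participant's screener answers, then find the cutoff that best reproduces the categories the study itself revealed. The questions, weights, and data are hypothetical illustrations, not UIE's actual statistical models.

    # Hypothetical sketch: score screener answers, then find the score
    # cutoff that best reproduces the categories observed in past
    # studies. UIE's real process uses far richer statistical models.

    past_participants = [
        # screener answers              what the study itself revealed
        {"years_online": 1, "buys_per_month": 0, "observed": "novice"},
        {"years_online": 2, "buys_per_month": 1, "observed": "novice"},
        {"years_online": 6, "buys_per_month": 0, "observed": "novice"},
        {"years_online": 7, "buys_per_month": 5, "observed": "expert"},
        {"years_online": 9, "buys_per_month": 8, "observed": "expert"},
    ]

    def screener_score(p):
        """Weighted sum of screener answers (weights are illustrative)."""
        return 1.0 * p["years_online"] + 2.0 * p["buys_per_month"]

    def best_cutoff(participants):
        """Pick the cutoff that best matches the observed categories."""
        best_cut, best_hits = None, -1
        for cut in sorted(screener_score(p) for p in participants):
            hits = sum(
                (screener_score(p) >= cut) == (p["observed"] == "expert")
                for p in participants
            )
            if hits > best_hits:
                best_cut, best_hits = cut, hits
        return best_cut, best_hits / len(participants)

    cut, accuracy = best_cutoff(past_participants)
    print(f"classify as expert when score >= {cut} (accuracy {accuracy:.0%})")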

All of our work is about giving designers confidence about the decisions they are making. If you analyze where design costs come from, they typically come from either poor decisions or the inability to settle on a decision because nobody is confident about the best alternatives. If we can deliver tools, frameworks, and techniques that give designers better confidence, we can have a positive influence on the designs they create.

OK, gotta go find a chocolate chip cookie.

User Interface Engineering

DK: You're working on really interesting stuff. To give our readers more context and understanding, talk about your company, User Interface Engineering (UIE). And, how have your focus and the services you offer changed since you founded the company in 1988?

JS: Our goal at UIE really is quite simple: We want to eliminate any frustration that comes from the introduction of new technology.

Designers design with the intention to delight. Yet, far too often, the result is not delight, but frustration and disappointment. Our goal is to understand why that is and give designers tools and techniques to achieve their intentions.

We think this is going to take a really long time. At least a hundred years. So, we're buckled in for the long haul.

Right now, we're in a stage where we're trying to understand the basics: Why do some designs frustrate some people some of the time? Why do others always frustrate everyone? Why do some designs, like Google, rarely frustrate people? What did the designers do in each situation? What works? What doesn't?

So, we're spending our available resources on trying to understand where frustration comes from. This is our primary activity.

"Certain designs, no matter what product they were implemented in, produced the same reaction in users."

Doing research like this isn't what we've always done. Back in 1988, we thought we could achieve our goals just by providing usability testing to any company that wanted it. We were one of the first outside usability testing services. We worked with all the big players.

As we conducted test after test, we realized we were seeing the same patterns in user behavior, over and over again. Certain designs, no matter what product they were implemented in, produced the same reaction in users.

We started documenting these patterns and talking about them with our clients. Soon we were giving presentations and courses. Designers were implementing what we'd found and their designs were getting better.

In 1996, we turned our attention to the web. Because of the web's evolutionary nature, it's easy for us to look for designs that work and those that don't. We've spent the last 8 years focusing on this medium and we think we've made a lot of progress.

This research can be very expensive. We fund most of it with money from our conferences, roadshows and publications. A small portion is funded privately through client consulting.

We've come a long way from our roots of being a usability testing service. We really don't do that anymore, primarily because our research has shown that the most successful design teams are those that do their own testing. Farming your testing out substantially reduces its effectiveness.

Instead, we help teams start and maintain their own internal testing process. We've helped hundreds of organizations start doing their own testing. Once they get going, it's easy for them to keep it up. And, without exception, they find it improves their development process dramatically and gives them more confidence in producing designs that delight their audience.

Approaching Clients

DK: Talk a little more about what goes into helping teams start and maintain their own internal testing process. And how broad is the scope: everything from providing basic guidelines to development of an in-house usability lab and everything in between? Or is the scope of your services a little more tightly defined than that?

JS: Our scope is fairly broad. In this case, the end goal of our work is to form a process where the design team is making confident, informed decisions about the design. We look at any activities that help them with these informed decisions, from testing to ethnography to studying how their own internal operations work.

We approach every client differently. We've never found two clients that are the same, therefore we don't have any set process or methodology that we train and instill. Instead, every project is always an adventure.

We start every engagement by learning what the client's organization is like and what they already know. We want to know: How did they get to where they are today? What research have they already done? How are decisions made now? Who is on the design team? How do they respond to change?

"Overly simplistic usability testing can produce too many issues, most of which will not have any desirable effect on the goals of the organization."

We take a broad notion of the design team. In our work, it consists of anyone who has influence over the design. Sometimes this is just people with the title of designer, but more often it includes many other titles, such as product manager, engineer, and even CEO. We look at people's actual behaviors in the design process more than we look at the designated titles.

We also need to know: What stage is their product, service, or site in? Everything goes through stages: we talk about this in my article Market Maturity (http://www.uie.com/articles/market_maturity/). Do they have a 5-10 year vision? Without a vision, you're just reactive in a market where the pace usually accelerates exponentially. Eventually, you can't keep up, no matter how good you are.

And the most critical information: What do they really know about their customers, users, or whatever they call the people they are designing for? How good are their hunches? How do they know if they've done a good job? What feedback systems do they have in place already?

Someone once referred to us as 'Chinese soup'. As I understand it, in traditional Chinese meals, they serve the soup last. This is so it can fill in the cracks.

We fill in the cracks of the existing design team. If they are in an early product-maturity stage, having gotten there on their hunches (which have probably been very good), we'll work with them to identify priorities and put in a feedback loop to validate those hunches.

If they are farther along, we'll look at the specific issues they are dealing with and help them with those. For example, we're currently working with a team to help them learn what not to fix.

Overly simplistic usability testing can produce too many issues, most of which will not have any desirable effect on the goals of the organization. As a result, teams can easily waste valuable resources fixing things that don't need fixing. In this team's case, we're working on introducing revenue-driven metrics (they are an e-commerce group) so that they only fix problems that will positively affect their bottom line.

To do this, we're using our compelled-shopping techniques to measure exactly how much money their current design is leaving on the table. We can get pretty accurate with this -- to the dollar, in fact. Using these techniques, we can get some very dramatic results.
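As a rough illustration of that kind of calculation, here is a back-of-the-envelope sketch in Python. Every number is invented; in a real compelled-shopping study the figures come from observed shopper behavior.

    # Back-of-the-envelope sketch of "money left on the table". All of
    # these numbers are invented for illustration; compelled-shopping
    # studies derive them from watching real shoppers try to buy.

    monthly_visitors = 500_000   # site traffic (hypothetical)
    intend_to_buy = 0.40         # visitors arriving with intent to purchase
    failure_rate = 0.25          # would-be buyers who fail (from testing)
    average_order = 85.00        # dollars per completed order

    lost_orders = monthly_visitors * intend_to_buy * failure_rate
    lost_revenue = lost_orders * average_order
    print(f"~{lost_orders:,.0f} failed orders/month, "
          f"~${lost_revenue:,.0f}/month left on the table")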

Based on our research, this client is now changing their checkout process. They confidently believe these changes will increase revenues by $210 million over the next year. We worked with them to generate these measures using existing internal sales data, usability tests, customer support data, site logs, and about a dozen other critical information sources.

The trick is that this isn't just a one-time study where we found some glaring error that happens to win big. We're changing the way these guys make decisions going forward: effectively utilizing the resources already available to them and enhancing those resources by feeding new information into the design process. This approach is likely to deliver them even bigger wins over the next 3-4 years.

We focus our clients on large, long-term results. When we do that, it helps them put everything into perspective and they come away with a dramatically more efficient process with far more confidence in their own abilities and results.

Present-Customer-Value and Lifetime-Customer-Value

DK: You talked a little about developing revenue-driven metrics, and in some cases measuring results to the dollar. One disconnect that I have found in existing metrics is that they only measure certain linear things. For example, a web experience may prove very usable and effective, resulting in an increase of short-term sales and positive metrics, but in accomplishing that increase the brand experience is compromised in a way that makes it less likely people will pay a premium for products in the future, resulting in a net negative. In your opinion, what are the limitations of ROI measurements and other current metrics, and how can measurement evolve in the future to be more informative and holistic?

JS: Excellent question.

Businesses have to always balance between present-customer-value and lifetime-customer-value. Present-customer-value is what you can get from that customer right now. Lifetime-customer-value is what you'll make from them during their lifetime.

The formula for ROI in most businesses is extremely complex. That's why we focus on revenue-based metrics. We start with metrics that are immediate (reflecting present-customer-value). After we've achieved baselines that satisfy our clients, we move more towards long-term metrics, such as how much a customer spends in a year or over five years. These reflect lifetime-customer-value.
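Here is a toy Python comparison showing how the two metrics can point in opposite directions. The figures are invented, purely for illustration.

    # Present-customer-value vs. lifetime-customer-value, with invented
    # numbers. A pushy upsell wins today but loses over the lifetime.

    order_value = 100.00      # average order (hypothetical)
    orders_per_year = 3

    # Patient approach: a normal sale, and the customer stays five years.
    pcv_patient = order_value
    lcv_patient = order_value * orders_per_year * 5

    # Pushy approach: a bigger sale now, but the customer leaves after a year.
    pcv_pushy = 140.00
    lcv_pushy = pcv_pushy + order_value * (orders_per_year - 1)

    print(f"patient: ${pcv_patient:.0f} today, ${lcv_patient:,.0f} lifetime")
    print(f"pushy:   ${pcv_pushy:.0f} today, ${lcv_pushy:,.0f} lifetime")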

It's very easy to sacrifice lifetime-customer-value to get higher present-customer-value. Sometimes, that's the right thing to do. Think of the ballpark hotdog vendor: They have a captive market. You either buy from them or you go hungry until after the game.

The hotdog vendor doesn't care how much money you'll spend with them over your lifetime. They just have something you really want right now and they are going to charge 400% more than it would cost you to eat at home.

"Brand experience is a long term investment."

Most businesses don't have this luxury. Instead, they have to make costly investments into activities that only pay off in the long-term, such as quality. In the 1950's through 1970's, American car manufacturers actively decided not to invest in quality. It was too expensive, for several reasons:

  1. Quality people have expensive salaries that are just overhead.
  2. When quality people found a problem with a car, it was usually after the car was assembled. That car couldn't be sold, therefore making production yields go down and lowering profits.
  3. Dealers made tons of money from servicing cars. If cars were better quality, their service revenues would take a huge hit.
  4. Because service bays were often attached to new car showrooms, dealers would regularly sell new cars to people who wanted to be rid of the service headaches from their existing car. Quality cars would mean fewer sales.

So, in those years, the American car manufacturers focused on present-customer-value, not lifetime-customer-value. They were just like the ballpark hotdog vendor, believing they had a captive market.

Of course, everything changed when Japan's car manufacturers made a huge investment (with government assistance) in vehicle quality. In the 1980's, people started buying Japanese cars because you could get 150,000 or 250,000 miles out of them without much trouble. This made the American car manufacturers take a serious look at their quality investments.

The design team has to be aware of these trade-offs. Brand experience is a long term investment. Ballpark hotdog vendors don't need to have a brand, because you have no choice. Brand only matters when people have choices.

Of course, I'm talking metaphorically about ballparks. In many of today's major league parks and arenas, fans are given multiple choices for food, often from major brands. But, that's a whole other discussion.

I recently sat through a briefing from a major pizza chain about their web experience. If you live in the US, you've probably ordered pizza from these guys at some point. Even if you don't like pizza, you would definitely have heard of them.

They have a test site in Cleveland, Ohio, where you can order pizzas online and then have them delivered to your house or pick them up at the local store. In the briefing, they told us that they are very happy with the site, because 8% of the people who visit the site are making purchases. They were very quick to point out that the 'e-commerce industry average' for conversions, as this ratio is called, is 4.6%, so they are doing well above average. 8% made them very happy.

How do people hear about this site? Well, existing customers get flyers when they order pizzas by phone or come into the store. Also, they are advertising in Cleveland newspapers and on the local radio, telling people to visit the site. So, the site must be good because they are getting almost twice the industry average for conversions, right?

I don't think so. 8% translates into roughly one pizza-buying customer for every twelve visitors. What are the other eleven visitors doing? Why did they come to the site in the first place? Did they have an objective *other* than buying a pizza? What was it? Why didn't they buy? Does anybody know?

Imagine if only one out of every twelve people who called their local pizza shop actually ordered a pizza. All night long, the pizza shop would get calls from people who aren't interested in ordering. Would the shop owner be happy?

In my opinion, before this pizza chain can think about lifetime-customer-value, they need to figure out why they aren't getting any present-customer-value from 92% of the site's visitors. What is happening with those visitors? Why aren't they purchasing?

This is typical of the web's current state. Most users are failing on most websites and nobody knows why. We don't even have a good handle on how to find out why. So, we basically ignore the problem.

Yet, design teams usually have limited resources. They need to balance investing these resources into maximizing present-customer-value with investing in maximizing lifetime-customer-value. The future can't be ignored, but neither can the present.

How do they achieve that balance? Well, fortunately, everything we've seen in our research tells us brand engagement dramatically increases when users have a good experience on a website. It dramatically decreases when they've had a bad experience. Understanding what makes a good experience is essential, which is why we've been studying just that.

What is the #1 contributor to the user having a good experience? Our research shows that users are most satisfied with a site when they complete their objectives. When they don't achieve their objectives, they become significantly dissatisfied with the site. Little else really matters beyond completing objectives.

So, right now we believe design teams can contribute most to the long-term brand experience by ensuring that every user achieves their goals. Once you've achieved that, you can focus on other metrics, such as referrals, loyalty, trust and devotion.

What is brand devotion? Ask any Apple customer. It's spending higher-than-market value on products because you are devoted to the brand and what it stands for. Customers with high brand devotion would believe the world would be a worse place if the brand vanished.

Design teams need metrics that focus on both present-customer-value and lifetime-customer-value. They need to decide what they want for acceptable baselines of each. With metrics for both, they can tell if designing for one is upsetting the other. And they can set goals and objectives that are measurable and attainable, using constant, continuous improvement techniques.

Working with Third Parties

DK: Based on your earlier description, it sounds like UIE typically works directly with client companies, instead of through other technology or branding firms. What challenges do you face in integrating your work with what other outside providers are doing? What approaches or processes provide smooth working relationships and implementation, particularly with other outside partners that may not share your vision or sensibilities?

JS: You are correct. We rarely work through a design firm or integrator. Most of the time, we work directly with the client.

It's easy to integrate our work since we don't actually do anything.

When we work with a client, we're typically interested in answering their questions about how people interact with their current design, their competitor's design, or some proposed designs they are considering. We work with the design team to identify what they need to know, then we go out and figure out clever ways to answer their questions.

When the client has third-parties working with them, we interact with them as we would interact with any other design team members. It's not unusual for us to forget that the third-party consultants aren't members of the client's company. We really don't distinguish between them.

Observations, Inferences, and Opinions

DK: Given that, talk about your basic process and approach to finding answers for your clients.

JS: We strongly believe in the separation of observation from inference and inference from opinion. When we conduct a study, we collect observations. For example, some observations might be that a particular user enlarged a product photo and then said, 'I wish I could get a sense as to how big this phone is.'

From those observations, we might infer that the user is making a purchase decision based on size. Our opinion might be that the site has to provide better information about product size.

Separating observation from inference allows for the possibility of alternative inferences. For example, it might be that the user actually wanted a sense as to how big the phone's buttons were. Maybe they had experiences with phones where the buttons were too small to read or press. If that were true, the total size of the phone isn't as important to the user as the button sizes are.

"Separating observation from inference allows that alternative inferences are possible."

Changing the inference would change our opinion, which would change any recommendations or design alternatives the team might consider. Allowing for alternative inferences makes you more rigorous in your data collection. You tend to ask more questions, trying, in real time, to test hypotheses formed from the incoming inferences.

This practice also keeps us honest. We know that there are many inferences for any collection of observations. We collaborate with the design team to explore all the possible inferences.
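One simple way to keep the three layers from blurring together in study notes is to record them in separate fields. The sketch below is a hypothetical structure for illustration, not UIE's actual tooling.

    # Hypothetical record structure that forces observation, inference,
    # and opinion to stay separate. Not UIE's actual tooling.
    from dataclasses import dataclass, field

    @dataclass
    class Finding:
        observation: str                 # what the user actually did or said
        inferences: list = field(default_factory=list)  # possible explanations
        opinions: list = field(default_factory=list)    # recommendations

    finding = Finding(
        observation="User enlarged the product photo, then said: "
                    "'I wish I could get a sense as to how big this phone is.'"
    )
    finding.inferences += [
        "The overall size of the phone matters to the purchase decision.",
        "Alternative: button size matters (bad experience with small buttons).",
    ]
    finding.opinions += [
        "Consider showing the phone next to a familiar object for scale.",
    ]
    print(finding)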

This is a mistake that I see many usability professionals making on a regular basis. They collect a bunch of observations, say from usability tests. Maybe the tests are well conceived and executed, maybe not. However, they then write a set of recommendations without ever really exploring whether there could be other explanations.

Teams that believe those recommendations can end up wasting money designing to them without seeing results. Often, the recommendations are so detached from the observations that the teams just stop believing the results and only act on the recommendations that coincide with what they already believed. Either way, precious resources go to waste and the team loses confidence in its usability process.

When we work with clients, we are very clear about when we're talking about an observation, our inferences, alternative inferences, and our opinions. We make it clear to the clients that our observations are the real meat that we bring to the table. The inferences we've formed are just side dishes. And the opinions are dessert -- something worth considering and occasionally worth ignoring.

It's interesting to note that the more we play down our opinions, the more clients beg us for them and the more seriously they consider them when making their decisions.

Terminology and Value

DK: Let's talk a bit about terms, labels and external communication. I think we can easily agree that usability, information architecture, visual design and a host of other disciplines all fall under the very broad umbrella of 'design'. However, there is little agreement within disciplines - let alone between them - as to how we all fit together and where boundaries begin and end. What do you think that we collectively - as active participants in the design industry - can do to establish common language and understanding, so we can focus on increasing our value and role in both business and society?

JS: Let me answer this question in parts. First, I'd like to talk about establishing a common language and understanding.

In many US medical schools, when you train to become a surgeon, you first learn general medicine. Then you do rotations in different parts of the field that have little to do with surgery, such as emergency medicine, radiology, or pediatrics. Then you do a surgical rotation.

The purpose of all these 'distractions' is to gain an appreciation for these different branches of medicine. When you're operating on a trauma patient, reviewing a CAT scan, or considering surgery on an infant, you've got the background to understand how the other members of the team are contributing.

I've often thought that we should have a similar notion of education and rotation in our design education process. Before you can really call yourself a usability professional, you should spend time doing the work of information architects, visual designers, software developers, product managers, and marketing professionals. You should gain perspective on what makes those jobs challenging and where their focus lies.

The disciplines we have under this umbrella of design are really just different viewpoints from which to attack the design problem. Information architects look at the world in terms of structure and navigation. Visual designers look at presentation and communication. Usability folks see the world from a user-frustration perspective. These aren't separate branches of knowledge. They are different viewpoints from which you attack the same problem: creating a successful design.

"The disciplines we have under this umbrella of design are really just different viewpoints to attack the design problem."

That's why, at our annual conference, we never separate things out by these disciplines. You'll find sessions on 'Cascading Style Sheets' because usability people, visual designers, and information architects need to know how web pages are formed and what is easy versus what is hard.

This year, Jeffrey Veen is giving an in-depth seminar on content management systems (CMS), because everyone needs to know how to prepare for a CMS. Ginny Redish is going into details on advanced usability testing. Hagan Rivers is spending an entire day talking about the state-of-the-art in web-based applications. All important topics that everyone in the field needs exposure to, even if they aren't directly working in that area today.

It's no accident that these topics are the focus of our work. They are the pressing topics that people are dealing with and everyone in the field needs the common language and understanding of the nitty-gritty details.

As for demonstrating our value to business and society, I think of the immortal words of Forrest Gump: "Value is as value does."

The entire field is currently on this Holy Grail search for the ultimate ROI data. For a group of people who rarely agree on anything, it's amazing how focused everyone is on uncovering this mystical data that will, once and for all, prove to the world that, indeed, we are valuable.

If we really want to prove to people that we're valuable, the most effective way is to be valuable. PDFs and PowerPoints aren't going to demonstrate that it's important to consider information architecture or conduct usability tests. Actually improving the product, service, or website in a reliable, measurable way that is critical to the organization will make the point.

Since UIE started as a consulting outfit, we had to learn quickly how to prove to people we were valuable. After all, when you're a consultant, you don't eat if nobody believes you're of value.

I learned quickly that business executives didn't care about usability testing or information design. Explaining the importance of these areas didn't get us any more work. Instead, we learned to talk about only five things when we're in front of executives:

  1. How do we increase revenue?
  2. How do we reduce expenses?
  3. How do we bring in more customers?
  4. How do we get more business out of each existing customer?
  5. How do we increase shareholder value?

Notice that the words 'design', 'usability', or 'navigation' never appear in these questions. We found, early on, that the less we talked about usability or design, the bigger our projects got. Today, I'm writing a proposal for a $470,000 project where the word 'usability' isn't mentioned once in the proposal.

When we work with teams, we teach them to follow the money and look for the pain. Somewhere in your organization, someone is feeling pain because they aren't getting the answers they want to one of the questions above. You need to find that person.

Once you find that person, you need to figure out what skills in your toolbox can help them achieve their objective. We teach teams to start small, reduce risk, and go for the 'low-hanging fruit'. Often, the first few simple exercises produce big wins, which turn the people whose pain you've just relieved into allies.

I'm reminded of the fable about the mouse who removes the thorn from the paw of the lion and they become friends for life.

This approach - start small; look for pain; focus on low hanging fruit - takes time. It can take years. However, as we've researched how the most successful user experience teams have attained their success, this is the one commonality we find. They all started this way.

The easiest way to convince people you're valuable is to actually be valuable to them.

State-of-Affairs in the Usability Industry

DK: This approach makes a lot of sense for us as individuals, or in the context of the companies we work for. But what can we do as an industry, collectively, to improve our opportunities? Or should we not even attempt to look for collective solutions and instead focus on our individual and direct spheres of influence?

JS: I think there are important things we can do collectively as an industry - though are we an industry? I think community is a better way to describe us - and I believe that we need to do them urgently.

First, I think there are plenty of opportunities out there. Everywhere I go, people understand the need for good design and easy-to-use interfaces. They understand how it makes a difference to their business and why it's important. Awareness isn't our problem.

The problem we're trying to solve here is much more insidious. It's an internal problem, not an external one. Our problem is that we believe in ourselves a little too much. And, as a result, we've stagnated.

Take the discipline of software usability. Software usability has been around, conceptually, since the late 1970's, though some aspects of it can be traced to the late 1960's and even earlier. So, it's been something people have been thinking about for almost 30 years.

In the 1970's and 1980's, we saw huge advancements. Graphical user interfaces, the notion of mental models, the discussion of affordances, and techniques like discount usability testing and ethnographic research were all part of the landscape.

Yet, in the last ten years, what new advancements have we seen? Virtually none. The techniques and foundations that we use today have remained the same for more than ten years.

Take the two cornerstone questions of usable design:

Users: Who will use your design?
Tasks: What do they want to accomplish with it?

Everyone can agree that understanding users and tasks are essential to good design. Yet, we're still using the same techniques today that we were using 15 years ago. And these techniques are deeply flawed. They don't correctly identify users and they don't bring out the tasks.

"If we want people to believe what we do isn't just people's opinions, we better come up with consistent results."

We base all the other things we do, in the usability world, on these two flawed cornerstones. We can't run a proper usability test if we can't figure out who to recruit or what to ask them to do. The results we produce are flawed, as a result. And we wonder why we have to fight to get people to pay attention to us!

And despite all of the mounting evidence to the contrary, we, as a community (or industry, as suggested), still believe these methods work. Even though the methods barely worked when the systems we were designing were only being used by, at most, a couple of hundred people, we're now trying to apply them to systems where millions of users log in each day.

We ignore the evidence, such as Rolf Molich's amazing CUE tests, where he asked different usability teams to simultaneously evaluate the same design. Not surprisingly, every team came up with completely different results, with remarkably little overlap. Yet, nobody is questioning this and asking why we're not consistent in our output.

If we want people to believe what we do isn't just people's opinions, we better come up with consistent results. How would you like to find out that your chest X-ray, when read by ten different radiologists, had ten completely different diagnoses with virtually no agreement? Which one should you act on?

We ignore the evidence that, in the last 10 years, there has been no discernible relationship between corporate investment in user-centered design practices and the regular production of usable products from those corporations. The companies that spend the most on UCD, such as Microsoft and IBM, are notorious for regularly producing unusable products, while companies that are wowing us, such as Amazon, Dell, and eBay, have very small UCD investments. To put things in perspective, Microsoft has more than 120 UCD professionals on staff, IBM has more than 200, Amazon has five, and Dell has two, last we checked. One of Amazon's UCD people just went on maternity leave, so they are actually running at 20% less than normal for now.

You can make a list of the 10 best designed products you can think of. If you don't want to make a list, you can use Don Norman's (http://www.jnd.org/GoodDesign.html) - who, apparently, is now the self-appointed guardian of good design. Did their design teams follow the standard processes that we promote? Nope, apparently not.

Take the iPod. Design played a huge role in its success. In fact, design is so important that the rest of the marketplace defines itself in relationship to the iPod's design. Everybody wants to be as good, if not better, than an iPod.

Yet, was the iPod's design process a standard one? Nope. Have we dissected the process, so that everyone in the field knows exactly how they did it? Nope. Can we explain why Apple is in the process of shutting down all their usability labs? Nope. Have we even tried to answer these questions? Nope.

There are plenty of opportunities awaiting people who can promise and deliver quality designs that meet the business objectives I mentioned before: increasing revenues, decreasing expenses, and increasing market share, customer contribution, and shareholder value. Yet, our 'industry' is incapable of making and keeping promises like that. How can we expect executives to take risks on us when we can't guarantee results? Nobody is asking those questions either.

The best way we can collectively improve our opportunities is to stop drinking the Kool-Aid and start questioning our own practices and beliefs. It shouldn't be a matter of faith that our practices produce good designs. It should be a matter of fact.

And it should be reproducible. If ten different design teams attack the same problem, they should come up with the same excellent result. We need to move past today, where our community is essentially a collection of talented craftspeople, each with a unique skillset, to a tomorrow where our practices and results rest on skills, expertise, and foundational knowledge with proven results.

Once we do that, the opportunities will be practically endless.

The Guidelines of Jakob Nielsen

DK: You mention Don Norman; there is a lot of attention paid to the East Coast-West Coast 'feud' between Jakob Nielsen and yourself. Share your perspective on the essential differences in approach and school of thought. And are you surprised it has evolved to the point where people are creating cartoons and rap songs (http://www.ok-cancel.com) about it?

JS: When someone sent me that link, I was quite taken by surprise to see myself as a comic strip character. I never imagined that would ever happen. I thought it was great. I now wear far more bling bling around the office.

People perceive there to be a tremendous split between Jakob's philosophies and mine. Frankly, I think there's a tremendous amount of common ground and really only minor differences in perspective and presentation.

Without putting words in Jakob's mouth, I think we agree that designers need to have users front-of-mind when they are conjuring up their designs. That an informed design process is better than an uninformed process. That usability problems can cost organizations more than they save by not building a usability process.

That being said, Jakob, since he left Sun a few years back, has built a different organization with NNG than we've built at UIE. I think there are some things that distinguish our two approaches.

Since we started 15 years ago, there's always been a huge demand for guidelines and rules. Our clients ask us all the time for them. I'm sure Jakob's clients ask him for them too. That's why he's created his guideline reports. His business philosophy is to supply guidelines and suggest that people try to follow them as much as possible.

"We've avoided the guideline approach, primarily because it's our philosophy that we are really still too ignorant to know what the right guidelines are."

We've avoided the guideline approach, primarily because it's our philosophy that we are really still too ignorant to know what the right guidelines are. We haven't found a set of guidelines that really matches what is happening in the real world.

Early on, we discovered that you could take any guideline and turn it into a hypothesis. For example, take the commonly-found guideline that you should ensure your pages load in 8 seconds or less. Well, we can test this guideline with the hypothesis that any site whose average page-load time is more than 8 seconds will be less usable than sites averaging less than 8 seconds.

The beauty of our work is that we collect a lot of data about a lot of sites. And we can test hypotheses like this extremely simply. Sure enough, when we look into our data, we find that, by any measure of usability we can come up with (such as task completion, user satisfaction, purchases, or lead generation), there is no difference between the fast-loading sites and the slow-loading sites. No matter how many sites we look at, we can't see any pattern that supports this guideline.
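As a minimal sketch of how a guideline becomes a testable hypothesis, consider the comparison below. The task-completion rates are invented; the real analysis runs over the many sites in UIE's studies and would add a significance test.

    # Turn the 8-second guideline into a hypothesis test: do sites with
    # slow average page loads show lower task completion? The rates here
    # are invented; a real analysis uses study data and a significance test.

    fast_sites = [0.78, 0.81, 0.74, 0.80, 0.76]  # task completion, loads < 8s
    slow_sites = [0.79, 0.75, 0.82, 0.77, 0.80]  # task completion, loads > 8s

    def mean(xs):
        return sum(xs) / len(xs)

    diff = mean(fast_sites) - mean(slow_sites)
    print(f"fast {mean(fast_sites):.2f} vs slow {mean(slow_sites):.2f} "
          f"(difference {diff:+.3f})")
    # A difference near zero, as here, fails to support the guideline.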

So, if there is no discernible difference, where did the guideline come from? Well, one could guess that if a site took two hours for each page to load, users would get frustrated. I have no doubt that's true. In fact, 30 minutes per page is still probably too long.

However, is 10 seconds too slow? How about 15 seconds? 30 seconds? 60 seconds? Why is the guideline for 8 seconds? Doesn't it depend on the context? What the user expects? What they are familiar with? How much they need this page?

I remember sitting in a test next to a guy who was using a 14.4 modem, watching an image take 180 seconds to load and noting that the user was quite pleased with the entire operation. There was no frustration at all for that user.

So, for us, guidelines such as this one really don't work. So much depends on the context, the users, and the tasks, that we think it's impossible to come up with a consistent set of guidelines that will be applicable no matter what.

If designers can't have guidelines, how do they know what to design? Our philosophy is to use an iterative approach. Take a design - any design - it doesn't matter. Put it in front of users. Change anything that doesn't work. Repeat.

Using an iterative approach will teach the design team who their audience is, what they are trying to do, and the context they are trying to do it in. From there, the team will know what to design.
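Reduced to its skeleton, that loop might look like the sketch below. The helper functions are stand-ins for real team activities (usability sessions, triage, redesign), not an actual API.

    # The iterative philosophy as a loop. The helpers are stand-ins for
    # real activities (usability sessions, triage, redesign), not an API.

    def put_in_front_of_users(design):
        """Stand-in for a usability session: return the steps that failed."""
        return [step for step in design if step["confusing"]]

    def iterate(design, max_rounds=10):
        for rounds in range(max_rounds):
            problems = put_in_front_of_users(design)
            if not problems:
                return design, rounds      # nothing left that doesn't work
            for step in problems:          # change anything that didn't work
                step["confusing"] = False  # stand-in for an actual redesign
        return design, max_rounds

    checkout = [{"name": "cart", "confusing": False},
                {"name": "shipping form", "confusing": True}]
    final, rounds = iterate(checkout)
    print(f"usable after {rounds} round(s) of test-and-fix")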

Some people don't like our approach. They want to just have a set of rules to work within. Jakob's guidelines philosophy will work great for them.

There's a reason why God invented both chocolate and vanilla, my father always used to tell me. Having a choice is a good thing. And if you like chocolate, I highly recommend the new Triple Chocolate Dove Bars. Just amazing.

Role Models and Influential People

DK: Who are some of the role models and/or mentors that had a meaningful impact on your thinking and professional development?

JS: Oh, dear. I think this is the most difficult question you've asked so far. There are just so many people. In just thinking about it, I can't believe how many different people have greatly influenced my work and thoughts.

Do you want to know about the people who got me started in usability? Gee, there was John Whiteside, Sandy Jones, Bill Zimmer, and Dennis Wixon, from my days at DEC. Before that there was Dick Goodman, who taught me the difference between programming as a hobby and programming as a profession. Then there was Charlie McCarthy, the person who, when I was 14 years old, believed in me enough to let me touch his computers.

Or maybe the people who helped me get going in business? Mike Wolf has been a brilliant, amazing mentor through the years. I learned a tremendous amount from Tom Peters, particularly how looking at the notion of success in business is critical to understanding what makes a good business. (He also taught me to use passion in presentations to keep an audience engaged.) Andy Bourland always has sage advice for me and knows when I need to be slapped back into reality.

"(...) I constantly surround myself with really smart people I admire."

Having not realized it before now, I guess I constantly surround myself with really smart people I admire. Carolyn Snyder, Tara Scanlon, Will Schroeder, Christine Perfetti, and Josh Porter are all people who I've learned a tremendous amount from and have had the great pleasure to work with. I learn something every day from the people who surround me.

Of course, I have an entire army of heroes that I admire and try to emulate. Bill Verplank, Laurie Vertelney, Kate Gomoll, Bill Buxton, Ben Shneiderman, JoAnn Hackos, Bill Horton, Ed Tufte, Nick Usborne, Gerry McGovern, Nathan Shedroff, Derek Powazek, Kim Goodwin, Alan Cooper, Stu Card, Peter Pirolli, and Ed Chi are the first people who come to mind.

Lately, I've been very influenced by Michael Lewis's book Moneyball, and a long-time influence on my work is Jim Collins' Built to Last. Both demonstrate how statistics can reveal things that a purely qualitative assessment would miss. They are must reading for anyone who is thinking about how to improve our community.

Some Advice from Jared Spool

DK: Do you have any specific advice, guidance or insight for students and young designers?

JS: First, learn where we've been and how we got here. Look at the external and internal factors. It's not an accident that we are where we are today.

Second, assume the future doesn't need to repeat the past. Ask the difficult questions.

Every time something succeeds, ask what you did to make it succeed. Ask what you'll need to do next time to make that project succeed too. Take careful notes.

Every time something fails, ask why you thought it would succeed and how your thinking has now changed. Ask what you'll need to do next time to make that project succeed instead of fail. Take careful notes.

And most importantly, set aside time every few months to go back and read your notes.

Lessons Learned

DK: What can we learn from our history to guide the future of design, information and experience?

JS: For me, it's that we need to make sure we're constantly innovating. I really feel that our knowledge has stagnated. We are using the same techniques now that we were using 15 years ago, and they just don't scale.

More than 15 years ago, we would work on network management applications for system administrators. Those users would use the programs approximately two hours per week. If the company was lucky and sold 5,000 licenses, that meant in the year between releases, there would be approximately 500,000 hours of usage across all customers.

How much design time was spent to achieve that 500,000 hours of usage? Let's look at just one metric: number of usability testing hours. On a good project, with open minded designers and developers, over the course of a one-year design and development cycle, we probably would get to run 16 two-hour usability tests, for a total of 32 user-testing hours. That's a ratio of 15,625 end-usage hours for each user-testing hour.

Today, we're working on a website where the development time is half that - six months. (And this is considered *long* in the web world.) Yet, that same site has 1,000,000 users a day who each spend 20 minutes on the site. That's 60,000,000 hours of usage between releases.

"If we look into our history, how do we start tackling these issues of scale?"

To have the same ratio of end-usage hours to user-testing hours, we'd have to have 3,840 hours of testing before launch. That's 32 hours of testing every day of development. (FYI, this project has a whopping 72 users in 3-hour sessions each, for a total of 216 user-testing hours. It will take a team of four people 7 weeks to collect and analyze the data. This is considered one of the largest commercial usability testing projects ever undertaken. Nowhere near enough to get the same ratio, though.)
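For readers who want to check it, the scaling arithmetic works out as follows. (The week and day counts are back-solved from the round figures above.)

    # The scaling arithmetic from the paragraphs above, made explicit.
    # Week and day counts are back-solved from the round numbers cited.

    # Circa-1988 network management application:
    usage_hours = 5_000 * 2 * 50         # licenses x 2 h/week x ~50 weeks = 500,000
    testing_hours = 16 * 2               # 16 two-hour usability tests     = 32
    ratio = usage_hours / testing_hours  # 15,625 usage hours per testing hour

    # Today's website, six-month development cycle:
    daily_usage = 1_000_000 * (20 / 60)  # 1M users x 20 minutes each, per day
    web_usage = daily_usage * 180        # ~6 months of days = 60,000,000 hours
    needed = web_usage / ratio           # 3,840 testing hours to match the ratio

    print(f"old ratio: {ratio:,.0f}:1; "
          f"testing needed today to match it: {needed:,.0f} hours")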

If we look into our history, how do we start tackling these issues of scale? How do we get the information into the design process that everyone needs to make the thousands of decisions that every project demands?

This to me is the big problem we need to solve. If we solve it, we will have a profound impact on how we design forever.

Near Future Projects and Activities

DK: Do you have any interesting projects or publications coming up that you'd like to share?

JS: Absolutely. Christine Perfetti and I are touring six cities over the next six months to give our two-day program. We'll be sharing our latest research on navigation, on advanced usability techniques, and on providing useful and usable content. Last year's Roadshow attracted more than 1,500 attendees. This year's is already filling up.

Also, we're very excited about our upcoming User Interface 9 Conference, which will be October 11-14 in Cambridge, MA. I can't say who the speakers will be. We're still nailing down the final details. However, we've lined up some really excellent speakers on the top issues facing designers today. It's going to be very cool. We'll be sharing the program with our UIEtips subscribers at the end of this month.

As our research continues, we continue to update our UIEtips subscribers. People who want to be kept abreast of what we're doing should subscribe at the UIE website (http://www.uie.com).

Final Thoughts

DK: Share some final thoughts that you would like for people to take away from this interview.

JS: Final thoughts sound so, well, final. Can we do something other than final thoughts?

Geez, I don't know what to say. I've been going on for quite a while now. It's hard to figure out if there is any more.

I guess I'll just say that we're in this for the long haul. We see this as a 100-year mission.

Some days, when things aren't going as quickly as we'd like, we just sit back and remember that good research just takes time. That patience is a required trait. And that our work, when we do it well, is very much appreciated.

We love getting challenged on our work. It keeps us honest. However, I can tell you that it really means something when people tell us they appreciate our work. Money funds the hard research, but appreciation, that's what drives it.

Thanks for encouraging our behavior.

Buy Jared's Book

Web Site Usability: A Designer's Guide

About Jared Spool

A software developer and programmer, Jared founded User Interface Engineering in 1988. He has more than 15 years of experience conducting usability evaluations on a variety of products, and is an expert in low-fidelity prototyping techniques.

Jared is on the faculty of the Tufts University Gordon Institute and teaches seminars on product usability. He is a member of SIGCHI, the Usability Professionals Association, the Association for Computing Machinery, and the IEEE. Jared is a recognized authority on user interface design and human factors in computing. He is a regular tutorial speaker at the annual CHI conference and at Society for Technical Communication conferences around the country.

About Dirk Knemeyer

Dirk is the Chief Design Officer at Thread Inc. One of the architects behind 'InfoDesign: Understanding by Design', Dirk is a prolific writer and frequent public speaker. He is a member of the Board of Directors for the International Institute for Information Design and the AIGA Center for Brand Experience. Dirk's primary interests include using Design as a catalyst to improve business and culture.
