Fundamentally, how hard a user interface is to use depends on the cognitive load of that interface–that is, how much thinking you have to do in order to use it.
Now “thinking” is one of those fuzzy things that really needs to be quantified.
As a proxy for user interface complexity (that is, how hard an interface is to use), some people count the number of buttons one has to press in order to accomplish a given task. But that is only a proxy: even though it takes 19 button presses (including the shift key) to type “Hello World!”, no one would argue that those 19 button presses are as hard as navigating through 19 unknown interface menus.
That’s because the button press is a proxy for a decision point. The real complexity, in other words, comes from making a decision–which goes right back to cognitive load, the amount of thinking you have to do to accomplish a task.
So decision points are clearly a sign of cognitive load.
Now think back to when you first saw a computer keyboard, and how mystified you were just to type a single letter: search, search, search, ah! the ‘h’ key! Success! Now, where is that stupid ‘e’ key? Oh, there it is, next to the ‘w’ key–why is it next to the ‘w’ key? Weird. Now, if I were the ‘l’ key, where would… oh, there it is, all the way on the other side of the keyboard. Weird. At least I know to press it twice. And now the ‘o’–ah, right there, next to the ‘l’ key.
Oops. I have a lower-case ‘h’. How do I back up? …
Clearly, then, along with decision points we have familiarity with the interface as a factor in how hard a user interface is to use: if we know how to touch type, typing “Hello World!” doesn’t even require thought beyond thinking the words and typing them. But for the uninitiated to the mysteries of the computer keyboard, hunting and pecking each of those keys is quite difficult.
Complexity, then, is cognitive load. And cognitive load comes from the difficulty of making the decisions needed to accomplish a task, combined with the unfamiliarity of the interface–the effort of finding what one needs to do to accomplish that task.
Now, of course, from a user interface design perspective there are a few things that can be done to reduce cognitive load by attacking the familiarity problem.
One trick is to have a common design language for the interface. By “design language” I’m referring, of course, to what you have to do to manipulate a particular control on the screen. If you always manipulate a thing that looks like a square by clicking on it–and the act of clicking on it causes an ‘x’ to appear in that square, or disappear if it was already there–then you know that squares on the screen can be clicked on.
And further, if you know that a square with an ‘x’ in it means the item is somehow “selected” or “checked” or “enabled” or whatever–and that unchecking it means the item is “unselected” or “un-checked” or “disabled”–then suddenly you have some familiarity: you can quickly see boxes and realize they are check-boxes, and checking them means “turn this on” and unchecking means “turn this off.”
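The checkbox contract above can be captured in a few lines. This is a toy sketch, not any real toolkit’s API–the `Checkbox` class and its method names are illustrative. The point it demonstrates is that one interaction rule (click toggles the ‘x’) covers every square on every screen:

```python
class Checkbox:
    """Toy model of the design-language contract: click toggles the 'x'."""

    def __init__(self, label, checked=False):
        self.label = label
        self.checked = checked

    def click(self):
        # One rule for every checkbox on every screen: clicking toggles state.
        self.checked = not self.checked

    def render(self):
        # A checked box shows an 'x'; an unchecked one shows an empty square.
        return ("[x] " if self.checked else "[ ] ") + self.label

box = Checkbox("Send me email")
print(box.render())  # [ ] Send me email
box.click()
print(box.render())  # [x] Send me email
```

Once the user has learned this rule from one checkbox, they have learned it for all of them–that is the payoff of a consistent design language.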
This idea of a design language can even extend to interfaces built strictly using text-only screens: if you see text that looks like [_________], with a blinking square on the first underscore, and typing types into that field–and hitting the tab key moves to the next [_________] symbol on the screen–then you know all you need to know to navigate through a form. Other text symbols can have other meanings as well: perhaps (_) acts like the checkbox example above, or like a radio button (a round thing you can select or unselect, which has the side effect of unselecting all other related round button thingies), or whatever.
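The text-screen form above can be sketched in the same spirit. This is a minimal illustration under stated assumptions–the `Field` and `Form` classes are hypothetical, standing in for whatever a real text-mode toolkit would provide. What it shows is that typing fills the focused field and tab always moves focus to the next one:

```python
class Field:
    """One [_________] on the screen: typed text fills the underscores."""

    def __init__(self, name, width=9):
        self.name = name
        self.width = width
        self.text = ""

    def render(self):
        # Pad the typed text with underscores out to the field's width.
        return "[" + self.text.ljust(self.width, "_")[: self.width] + "]"

class Form:
    """A screen of fields; the blinking cursor starts on the first one."""

    def __init__(self, fields):
        self.fields = fields
        self.focus = 0

    def type(self, ch):
        # Typing goes into whichever field currently has focus.
        f = self.fields[self.focus]
        if len(f.text) < f.width:
            f.text += ch

    def tab(self):
        # Tab always moves to the next field, wrapping around: one rule,
        # learned once, that navigates every form in the interface.
        self.focus = (self.focus + 1) % len(self.fields)

form = Form([Field("first"), Field("last")])
for ch in "Jane":
    form.type(ch)
form.tab()
for ch in "Doe":
    form.type(ch)
print(form.fields[0].render())  # [Jane_____]
print(form.fields[1].render())  # [Doe______]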
The point is consistency.
And this consistency extends beyond the simple controls. For example, if you have a type of record in your database that the user can add or remove from a screen, having the “add” and “delete” and “edit” buttons in the same place as on other screens where other records are added or deleted helps the user understand that yes, this is a list of records–and they immediately know how to add, delete, and edit them.
Visual language provides a way for a user to understand the unfamiliar landscape of a user interface.
The other trick is selective revelation: exposing the interface in a guided way, presenting the decisions that need to be made in an orderly sequence.
For example, imagine an order entry system where the type of order must first be selected, then the product being ordered, then product-specific information entered. This could be implemented as a wizard that shows only the controls that need to be filled out at each step of the process. And notice that unneeded information (such as the size of a clothing item, irrelevant when ordering a purse) can remain hidden.
The goal is to guide the decision-making process, gathering the information in the order the system needs it. And by guiding the decision-making process you reduce cognitive load: you ask only the questions that are needed, rather than overwhelming the user with a bewildering array of interrelated choices, some of which (such as the clothing size of a purse) are nonsensical.
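The wizard logic above can be sketched as a single function that, given the answers so far, returns only the questions that are still relevant. The product catalog here is hypothetical, just detailed enough to show a purse order never surfacing a size field:

```python
# Hypothetical catalog: which fields each product actually needs.
PRODUCT_FIELDS = {
    "shirt": ["size", "color"],
    "purse": ["color", "strap_length"],
}

def next_fields(answers):
    """Return the fields to show next, given the answers gathered so far."""
    if "order_type" not in answers:
        return ["order_type"]          # step 1: what kind of order?
    if "product" not in answers:
        return ["product"]             # step 2: which product?
    # step 3: only the product-specific fields not yet answered
    return [f for f in PRODUCT_FIELDS[answers["product"]]
            if f not in answers]

answers = {}
print(next_fields(answers))    # ['order_type']
answers["order_type"] = "retail"
answers["product"] = "purse"
print(next_fields(answers))    # ['color', 'strap_length'] -- no size field
```

Each step the user sees is small, and a nonsensical question (a purse’s clothing size) simply never appears.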
The problem with all these user interface tricks (and there are plenty of others: arranging the information on the screen, tips and hints that dynamically come up, on-line help for the first-time user, making the interface reflect a consistent cognitive model, reducing short-term memory load by segregating items into 7 +/- 2 items or groups, etc.) is that they all go towards tackling the familiarity problem of the interface. In other words, they only go towards reducing the cognitive load of the interface itself.
And, honestly, most of these design patterns are pretty well known–and only go towards reducing the cognitive load of the first-time user. Once someone has gained familiarity with an interface–even a very bad one–the cognitive load imposed by a poorly designed interface is like the cognitive load imposed by a computer keyboard: eventually you just know how to navigate through the interface to do the job. (To be clear, reducing familiarity cognitive load reduces training costs for an internal interface, and reduces consumer friction and dissatisfaction with an external interface–so it’s still important not to design a bad interface.)
Ultimately the cognitive load of a system comes from the decision points imposed by the interface. And a user interface can only present the information from the underlying system: ultimately it cannot make those decisions on behalf of the operator. (If the user interface could, then the decision wouldn’t need to be made by the operator–and the decision point really isn’t a decision point but an artifact of a badly designed system.)
What this means is that in order to simplify a product, the number of operator decision points must ultimately be reduced–either by prioritizing those decision points (noting which decisions are optional or less important to capture), or through redesigning the entire product.
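One concrete way a redesign removes a decision point is to let the system derive an answer it already has the data for, instead of asking the operator. The example below is hypothetical–a made-up rule that picks a shipping method from order weight–but it illustrates the shape of the fix: a question on the form becomes a computed default, and the operator only intervenes in exception cases:

```python
def shipping_method(weight_kg):
    """Derive the shipping method from data the system already has.

    Before the redesign, the form asked "shipping method?" on every
    order; after, the question is gone and the system decides.
    (The 20 kg cutoff is an invented business rule for illustration.)
    """
    return "freight" if weight_kg > 20 else "parcel"

print(shipping_method(2.5))   # parcel
print(shipping_method(40.0))  # freight
```

Every decision the system makes for itself is one less decision point on the screen–which is exactly the kind of reduction that no amount of interface polish can achieve.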
Remember: a user interface is not how a product looks. It’s how the product works.