Over the years I've described several algorithms and written several "papers" documenting those techniques, which I've now collected on a new page on my blog. This includes a detailed description of the LR(1) algorithm used by the OCYacc tool I recently open-sourced, as well as papers on computer graphics and computational geometry. Hopefully they will be of use to someone out there.
Designing a User Interface? First, create a visual language.
Reading through the comments in this Slashdot article, I've noticed a few things.
Everyone seems to agree that where Apple went off the rails was with iOS 7.
But why? No one seems to be able to agree.
In the original article linked to by Slashdot, Apple Is Really Bad At Design, the author seems to hinge his argument on the fact that the new operating system looks ugly:
In 2013 I wrote about the confusing and visually abrasive turn Apple had made with the introduction of iOS 7, the operating system refresh that would set the stage for almost all of Apple’s recent design. The product, the first piece of software overseen by Jony Ive, was confusing, amateur, and relatively unfinished upon launch. While Ive had utterly revamped what the company had been doing thematically with its user interface — eschewing the original iPhone’s tactility of skeuomorphic, real-world textures for a more purely “digital” approach — he also ignored more grounded concepts about user experience, systematic cohesion, and most surprisingly, look and feel.
Now it almost sounds like the original author was about to stumble on the truth.
But then, he fails:
“It’s not just that the icons on the homescreen feel and look like the work of a lesser designer. They also vary across the system. For instance, the camera icon is a different shape in other sections of the OS, like the camera app or the lockscreen,” I wrote at the time. “Shouldn’t there be some consistency?” While this may seem like obsessive nit-picking, these are the kinds of details that Apple in its previous incarnation would never have gotten wrong.
And, at the bottom of the stack, the essay seems inspired by the iPhone X, which is described repeatedly in the article as “ungainly and unnatural”, “bad design”, “a visually disgusting element.” And by extension the entire Apple environment is described as “fucking crazy.”
The comments in the Slashdot article also riff on the visually consistent or visually pleasing aspects of design:
The result is what you see now in Apple products – a muddled mess of different ideas that just don’t fit together right, and very little actual customer value.
And you know what? We see this focus on the beautiful in many other applications. You can see it in how we describe UX jobs and who gets hired for UI design: UI, UX: Who Does What? A Designer’s Guide To The Tech Industry.
UX designers are primarily concerned with how the product feels. A given design problem has no single right answer. UX designers explore many different approaches to solving a specific user problem. The broad responsibility of a UX designer is to ensure that the product logically flows from one step to the next. One way that a UX designer might do this is by conducting in-person user tests to observe one’s behavior. By identifying verbal and non-verbal stumbling blocks, they refine and iterate to create the “best” user experience. An example project is creating a delightful onboarding flow for a new user.
It’s worth reading the entire thing to understand the state of the industry, or the fact that sometimes
The boundary between UI and UX designers is fairly blurred and it is not uncommon for companies to opt to combine these roles.
You know what is missing here?
From the movie Objectified, a movie anyone who is a designer or interested in design must watch:
I think there are really three phases of modern design. One of those phases, or approaches, if you like, is looking at the design in a formal relationship, the formal logic of the object–the act of form-giving: form begets form.
The second way to look at it is in terms of the symbolism and content of what you’re dealing with: the little rituals that make up making coffee or using a fork and knife, or the cultural symbolisms of a particular object. Those come back to a habit and gives form, helps give guidance to the designer about how that form should be or how it should look.
The third phase, really, is looking at Design in a contextual sense, in a much bigger picture scenario. It’s looking at the technological context for that object. It’s looking at the human and object relationship.
The first phase you might have something fairly new, like Karim Rashid's Kone vacuum, which is for Dirt Devil. The company sells this basically, "so beautiful, you can put it on display." In other words you can leave it on your counter, it doesn't look like a piece of crap.
Conversely you can look at Dyson and his vacuum cleaners. He approaches the design of his vacuum in a very functionalist manner. But if you look at the form of it, it's really expressing the symbolism of function. The color introduced into it is–he's not a frivolous person–so it's really there to articulate the various components of the vacuum.
Or you can look at, in a more recent manifestation in a kind of contextual approach, would be something like the Roomba. There, the relationship to the vacuum is very different. First of all, there is no more human interaction relationship. The relationship is to the room it’s cleaning.
I think it’s even more interesting the company has kits that are available in the marketplace called “Create”. It’s essentially the Roomba vacuum cleaner kit that’s made for hacking. You put a really wacky–I mean, you can create things like “Bionic Hamster” which is attaching the kind of play wheel or dome a hamster uses as a driving device for the Roomba. So it is the ultimate revenge on the vacuum cleaner.
How I think about it myself is that design is the search for form. What form should this object take.
– Andrew Blauvelt, Design Curator, Walker Art Center (around the 20 minute mark)
First, if you are really interested in design, do yourself the favor of watching Objectified.
But the most important part about the section above–to me, the most valuable two and a half minutes of that movie (outside of the opening sequence, of course)–is the notion that design is not just making things “look pretty.”
When you really look at it, the Dyson vacuum is an ugly looking contraption of a machine.
What is important is considering design as a formal process. And much of this “form-giving” really involves the visual language used to express the design of an object, since it is that visual language which helps us understand the object, our relationship to the object, and how we interact with that object.
Look again at the Dyson vacuum. The use of color is deliberate. Specifically, in the photo linked, the yellow parts are all components which either articulate, or which can be moved, taken apart or put back together. The brush, for example, is the yellow thing at the bottom–and it can be disassembled easily and reassembled for cleaning. Even the bucket which stores dirt can be disassembled by separating the yellow and gray components, as can the pathway which brings air in from the hose. (If you look carefully you see a small yellow thing just above the wheel; this is a lever used to disassemble the components which bring air into the storage bucket and which periodically need cleaning.)
This use of yellow expresses not just the form or function of the device–but clearly articulates how to use the vacuum: how to remove the cord, how to turn it on, how to take it apart and put it together for maintenance.
Yellow, in this example, is part of the visual language, and the consistent use of yellow is used to express function–to guide the user, to make using the vacuum easy.
Perhaps you’ve seen the poster. Perhaps not. But it says:
A user interface is like a joke.
If you have to explain it, it’s not that good.
The Dyson Vacuum, through the consistent use of the color yellow, and through the carefully considered shapes of the yellow components, does not need to explain itself. It's clear how to take apart the air pathway. It's clear how to remove and clean the basket. It's even clear how to disassemble down to the replaceable filter. Just consider the yellow parts, and twist, press or pull as the shape suggests.
This lesson reveals a larger one: if you want your user to understand how to use your interface, design and use a consistent visual language.
This includes the consistent use of shape and of color, so that the same shape or visual design performs the same way consistently across your entire application. Further, the same shape or visual design does not have multiple behaviors, and behaviors are not hidden behind different shapes or colors.
The best user interfaces are like the Dyson Vacuum. Good design does not have to be beautiful, but it should be suggestive of functionality. Good design should not hide functionality, though it does not need to be overly obtrusive.
And this is where things start falling apart on later versions of iOS. 3D Touch, for example, allows you to 'hard press' an icon on the desktop of a later iOS device (like the iPhone 7) and have a pop-up preview or menu appear.
But how do you know to ‘hard press’ that icon or image?
How many of you who own iPhone 7’s even know you can do this?
You see the same thing with Android with the "long press", which was used in prior user interface guidelines to show a pop-up menu revealing further functions. (The new Material design tries to get away from this gesture–but how many people knew that the way to delete e-mails on earlier versions of Android was to long-tap the e-mail?)
But hell, even Apple knows these “hidden” gestures create a problem:
Adopt Peek and Pop consistently. If you support Peek and Pop in some places but not others, people won’t know where they can use the feature and may think there’s a problem with your app or their device.
And yet Apple never answers that most fundamental question: how does the user even know they can hard-press something in the first place? How do they know they can swipe left? How do they know they can swipe down? How do they know what they can do?
Now on iOS 6, at least some of these questions were answered: buttons had rings around them or were icons arrayed in a row.
But in the tension between visual clarity and being visually clean and visually beautiful, beauty won over understanding. We've moved away from the Dyson Vacuum–from the "functionalist" approach to deconstructing design and creating a consistent visual language (where checkboxes, radio buttons, drop down lists and the like are easily understood because we've consistently used similar art to express the same functionality)–towards a flat design which dispenses with any hints of functionality.
Today’s interfaces are as beautiful as users are befuddled.
But it gets worse. In dispensing with the logical deconstruction of the interface and the creation of consistent user interface "languages" (such as consistently using the same gesture to mean the same thing regardless of the context in which we operate), modern design is less about creating design manuals which express the visual language being used and how to consistently apply that language to solve design problems, and more about creating beautiful interfaces with absolutely no respect for usability. Usability has become, in today's world, an afterthought, the equivalent of cargo cult thinking where we echo form without understanding what gives form.
Just look at the core values listed first on Apple's human interface guidelines page: Aesthetic Integrity, Consistency and Feedback. The page never describes what these mean other than from an aesthetic perspective.
It's no wonder people in the original Slashdot article claim Apple can no longer design stuff.
Because to us, "design" is no longer about the formal relationship of the design, or about the symbolism and content, but about whether it looks pretty.
And thus, the notch on top of the iPhone X is considered "bad design" because it "looks ugly", without ever considering whether the design (and the design language) serves a purpose or provides functional usability.
Because we don’t have the tools to describe the formal relationship between a human and a computer, all we’re left with is the artistic qualities of that object. So we look at iOS 7 and don’t understand why it fails. We don’t see that in moving towards the “print page” look and feel of user design, iOS 7 has eschewed the very visual hints we need to know if a red line of text is a button, or just a highlighted passage.
We’re left not knowing if the yellow knob on the vacuum cleaner is a lever, or an immovable bit of plastic added for style that we will break if we attempt to twist it.
We don’t understand why iOS 7 (and later versions of iOS) fail, even though the rather ugly Dyson Vacuum succeeds.
And sadly we are left with the ultimate message of the Slashdot article: that iOS 7 (released in 2013) fails because the iPhone X (which has yet to ship) has a notch at the top.
Weirdly, in the process, our design becomes more artistically radical and yet more conservative, more entrenched. Design patterns we barely understand (such as the inverted “L” of web design, or the bottom tab buttons of a mobile device, or using grids to lay web pages out) are reused without understanding what motivates those choices, because they seem familiar to us. We no longer have the tools to create truly revolutionary designs because we no longer know what makes them work.
Worse, in our conservatism mobile device design becomes web page design but on a smaller scale. Web page design becomes mobile device design but on a larger scale. Desktop applications become mobile web page design but with menu bars and multiple windows.
All of it has become ritual without understanding why.
We’re dancing naked around a fire hoping the Gods will deliver rain to our crops.
You want to create good design?
Then first, you must create a consistent visual language. You must not start with a blank page and start drawing stuff on the page until it looks pleasing. Instead, you must start with a “design manual” for your application.
And you must answer some questions–because in today’s age we no longer have design guidance like we used to.
Questions like "How should I decompose my application?" "How should I consistently present printed information, photos and lists?" "How should I separate sections of information?"
And even more basically, "What does a button look like?" "How does that button behave?" "How can I tell the user that my icon is tappable?" (and that could be simply a question of consistent placement), or "How do I tell the user that this button will reveal multiple options?"
Even deeper than this, we must answer fundamental questions such as “what are the nouns–the objects–in my application?” “What are the verbs?” (That is, the actions which operate on those nouns.) “What are the adjectives?” (The modifiers which modify nouns, like color to suggest a value is negative or bad or out of bounds.)
In many ways, because your users have a formal relationship with your application, and because your goal is to clearly communicate how to use what is probably a much more complex and feature-rich application than “pick up vacuum, press button, suck up dirt”, you probably want to eschew the beautiful in favor of the formal. (But even the Kone suggests how it is to be used: the seam bisecting the cone shows the point where the base separates from the enclosed vacuum, the flat top reveals the on/off button.)
And if that means you have little gray rectangles and small dots in places which make your application look more cluttered than you’d like–consider if your user is then able to use your application.
Beyond this, while you should respect the media and the established conventions where your design appears, so long as you are consistent in your designs, you can explore new ideas and new gestures. And you can consider new problems few designers are considering–such as the problem that on larger mobile devices, users can no longer hold the device in one hand and use it with their thumb if your controls are placed at the top of the screen, out of reach.
Because that's the bottom line, isn't it? Not whether your application is pretty, but whether your users can use it–and continue to use it, and continue to help you generate revenue.
OhMyGoodness, getting rid of affordances makes it harder on users? Who knew?
As seen on /.: It’s official: Users navigate flat UI designs 22 per cent slower
Have you ever bonked your head on a glass door because you had no clue how to open the door–because the architects decided to make the design “clean” by getting rid of anything that ruined the clean lines of the door?
Yeah, that’s our modern UI environment.
I promise you this is a picture of a door. Do you see how to open it?
I mean, look at the examples provided here.
First, let’s dispense with the stupid items listed as “features of flat design”. They list, amongst the supposed advantages of a flat design “functionality”, “close attention to details” and “clear and strict visual hierarchy”–because before the invention of flat design, none of us wanted to deliver functionality, and most of us slopped our user designs together in the same way we slop pigs. (*eye roll*)
And let’s look at the supposed “advantages”: “simplicity of shapes and elements”, “minimalism” and “avoiding textures, gradients and complex forms.”
Which suggests to me the problem with the photo of my door above is that it contains a complex shape and unclear hierarchy which distracts from the functionality of the door.
Here. Let me fix that.
I know the difference is subtle, but to the purist, it makes the door much better looking. No more distractions from the pure essence of a door: a single unitary shape, a minimalist door free of visual distractions.
Right up until you face-plant yourself because you can’t open the god-damned thing.
I mean, look at the animated example they give:
Setting aside the cute (and distracting) animation of the weather icon to the side, how does the user know that by tapping and dragging he expands and shrinks a region? How does he know that it doesn't scroll the page instead? Or that tapping (instead of swiping) would expand or shrink an area? Or that tapping instead pulls up an hourly prediction?
How does he know that swiping left and right gives the previous and next day’s weather prediction?
And notice the design is not entirely free from complex shapes. The two icons in the upper right? Is that two separate buttons, or a single toggle (as the shading suggests)?
Or notice the location in the Ukraine. Is the location tappable? Can we pick a new location?
The key here is that the user does not have a fucking clue. And let’s be honest: there is no delight in a “discovery” which seems more designed to make the user feel like a stupid idiot.
I'm not even going to address the complex and superfluous animations which, while cute (and perhaps even demanded in some markets), exist only to say how great the application is, and provide absolutely no aid to user comprehension.
Look, I’m not asking for buttons and checkboxes and the like.
It’s not like you have to beat your users over the head; you can have clean lines and still use affordances which subtly guide the user on how to use your application. Just create a consistent visual language so that, for example, all shapes with a small dot in the corner can be resized by dragging.
But I am suggesting if the user needs to spend time figuring out how to open the door, they’re less likely to go through the door.
And you lose users. And revenue.
Some thoughts on designing a computer language.
Designing a computer language is an interesting exercise.
Remember first, the target of a computer language is a microprocessor or a microcontroller. And microprocessors or microcontrollers are stupid: they only understand integers, memory addresses (which are just like integers; think of memory as organized as an array of bytes), and if you’re lucky, floating point numbers. (And even there, they’re handled like integers but with a decimal point position. Stored, of course, as an integer.)
Because of that, most modern computer languages rely on a run-time library. Even C, which is as close to writing binary code for microprocessors as most of us will ever get, relies on a run-time library to handle certain abstract constructs the microprocessor can't. For example, a 'long' integer in C is generally assumed to be at least 32-bits wide–but if you're on a processor that only understands 16-bit integers, any 32-bit operation on a long integer must be handled with a subroutine call into a run-time library. And heck, some microcontrollers don't even know how to multiply numbers, which means a * b has to translate internally into __multiply(a,b).
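As a concrete illustration, here is a minimal sketch in C of what such a run-time multiply helper might look like. The name __multiply comes from the example above; the shift-and-add implementation is just one plausible approach, not the routine any particular compiler actually emits.

#include <stdint.h>

/* Hypothetical run-time helper: a 16-bit multiply built out of shifts and
   adds, for a controller that has no multiply instruction. */
uint16_t __multiply(uint16_t a, uint16_t b)
{
    uint16_t result = 0;
    while (b) {
        if (b & 1)          /* for every set bit in b...       */
            result += a;    /* ...add a, shifted into position */
        a <<= 1;
        b >>= 1;
    }
    return result;
}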
For most general-purpose programming languages (like C, C#, C++, Java, Objective-C, Swift, Ada and the like), the question becomes "procedural programming" or "object-oriented programming." That is, which paradigm will you support: procedures (like C) or objects (like Java)?
Further, how will you handle strings? How will you handle text like “Hello world?” Remember: your microprocessor only handles integers–not strings. And under the hood, every string is basically just an array of integers: “Hello world?” is stored in ASCII as the array of numbers [ 72, 101, 108, 108, 111, 32, 119, 111, 114, 108, 100, 63 ], either marked somewhere with a length, or terminated with an end of string marker, 0.
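To make that concrete, here is a quick C sketch (illustrative names only): the two declarations below describe the same bytes in memory, the second simply spelling out the ASCII codes along with the terminating 0 the compiler appends to the first.

char text[]  = "Hello world?";                  /* the compiler stores the bytes and a trailing 0 */
char bytes[] = { 72, 101, 108, 108, 111, 32,    /* the same bytes, written out by hand            */
                 119, 111, 114, 108, 100, 63, 0 };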
In C, a string is simply an array of bytes. The same is true in C++, though C++ provides the std::string class which helps manage the array. In Java, all strings are translated internally into an array of characters which is then immediately wrapped in a java.lang.String object. (It's why in Java you can write "Hello world?".length(): the construct "Hello world?" is turned into an object.) Objective-C turns the string declaration @"Hello world?" into an NSString, and Swift turns it into a String type, which is related to NSString.
Declarations also become interesting. In C, C++ and Objective-C, you have headers, which force your language to provide a mechanism for representing external linkage. Those three languages also provide a mechanism for representing abstract types, meaning for every variable declaration:
int *a;
which represents the variable a that points to an integer, you must be able to write:
int *
which represents the abstraction of a variable which points to an integer.
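To see where that abstract form actually gets used, here is a quick C sketch (function and variable names invented for illustration): the abstract declarator int * shows up on its own, with no variable name attached, in casts and in sizeof expressions.

#include <stddef.h>

void example(void)                  /* illustrative function, just to give the code a home */
{
    int value = 42;
    void *vp = &value;

    int *ip = (int *)vp;            /* a cast written with the abstract type "int *" */
    size_t size = sizeof(int *);    /* the size of the type "pointer to int" itself  */

    (void)ip; (void)size;           /* quiet unused-variable warnings */
}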
And for every function:
int foo(int a, int *b, char c[5]) {...}
You need:
extern int foo(int, int *, char[5]);
But Java does not provide headers, so it has no need for separate declarations–but it then adds the need to mark methods as "public", "protected" or "private", so we know the scope of methods and variables which, in C, can be hidden by simply omitting the declaration from the header.
This means Java’s type declaration system can be far simpler than C’s.
And while we’re at it, what types are you going to allow? Most languages have integer types, floating point types, structure or object types (which basically represent a record containing multiple different internal values), array types, and pointer or reference types. But even here there are differences:
C allows the use of unsigned values, like unsigned integers. Java, however, does not–but really, the only effective differences when performing math operations on signed versus unsigned integers are the right-shift and compare operations. And Java works around the former with the unsigned right shift ('>>>') operator.
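Here is a small C sketch of that difference (assuming 32-bit ints): right-shifting a negative signed value typically sign-extends, while the same bit pattern shifted as an unsigned value fills with zeros, which is the gap Java's >>> operator papers over.

#include <stdio.h>

int main(void)
{
    int s = -8;
    unsigned int u = (unsigned int)s;   /* same bit pattern, viewed as unsigned */

    printf("signed:   %d\n", s >> 1);   /* typically -4: the sign bit is copied in (implementation-defined) */
    printf("unsigned: %u\n", u >> 1);   /* 2147483644 with 32-bit ints: zeros are shifted in               */
    return 0;
}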
C also represents arrays as simply a chunk of memory; C is very low level this way. But Java represents arrays as a distinct fundamental type, alongside basic types (like integers or floating point values) and objects.
And pointers or references can be explicit or implicit: C++ makes this explicit by requiring you to indicate in a function if an object or structure is passed by value (that is, the entire object is copied onto the stack), or by reference (that is, a pointer is passed on the stack). This makes a difference because updating an object passed by value has no effect on the caller. But when passed by reference, changes to the object can affect the caller’s copy–since there really is only one copy in memory.
Java, on the other hand, passes objects and arrays by reference, always.
This passing by reference makes the ‘const’ keyword (or its equivalent) very important: it can forbid the function being called from modifying the object passed to it by reference.
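A quick sketch of the same idea, in C rather than C++ (the struct and function names are invented for illustration): passing a pointer lets the callee change the caller's data, and const takes that ability away.

#include <stdio.h>

struct Point { int x; int y; };

void move_right(struct Point *p)
{
    p->x += 1;                      /* modifies the caller's Point */
}

void print_point(const struct Point *p)
{
    /* p->x = 0; */                 /* would not compile: p points to const data */
    printf("(%d, %d)\n", p->x, p->y);
}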
On the flip side, Java does not have the concept of a ‘pointer’.
And let's consider for(...) loops. The C language introduces the three-part for construct:
for (initializer; comparator; incrementer) statement
which translates into:
initializer
loop:
    if (!comparator) goto exit;
    statement
    incrementer
    goto loop;
exit:
But Java and Objective-C also introduce different for loop constructs, such as Java's
for (type variable: container) statement
which iterates the variable across the contents of the container. Internally it is implemented using Java's Iterator interface, and translates the for loop above as:
Iterator<type> iterator = container.iterator();
loop:
    if (!iterator.hasNext()) goto exit;
    type variable = iterator.next();
    statement
    goto loop;
exit:
Of course this assumes container implements the Iterable interface. (Pro-tip: if you want to create a custom class which can be used as the container in a for loop, implement the Iterable interface.)
While we’re at it, if your language is object oriented, do you allow multiple inheritance, like C++ where an object can be the child of two or more parent objects? Or do you implement an “interface” or “protocol” (which specifies methods that are required to be implemented but provides no code), and have single inheritance, where objects can have no more than one parent object but can have one or more interfaces, such as in Java or Objective C?
Do you make exceptions a first-class citizen of your language, such as in Java or C++? Or are they provided by a library, such as C's setjmp/longjmp calls? Or are they even available? Early versions of Pascal did not provide for exception handling: instead, you had to either explicitly handle problems yourself, or check to make sure that things didn't go haywire: that you didn't divide by zero, for example.
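For the curious, here is a minimal sketch of exception-style control flow built on C's setjmp/longjmp (a toy example of my own, not a recommended error-handling strategy): setjmp marks the "try", and longjmp is the "throw" that jumps back to it.

#include <setjmp.h>
#include <stdio.h>

static jmp_buf recover;

static void divide(int a, int b)
{
    if (b == 0)
        longjmp(recover, 1);                  /* "throw": unwind back to the setjmp */
    printf("%d\n", a / b);
}

int main(void)
{
    if (setjmp(recover) == 0) {               /* "try": returns 0 on the initial call */
        divide(10, 0);
    } else {
        printf("caught: divide by zero\n");   /* "catch": reached via longjmp */
    }
    return 0;
}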
And we haven't even dived into more advanced features. We've just stuck with the stuff that most general purpose languages implement. Ada has built-in support for parallel processing by making threads and synchronization part of the language. (Languages like C or Swift require a library–generally based on POSIX Threads–for parallel processing, though the availability of multi-threaded programming in those languages is optional.)
Other languages have built-in handling of mathematical vectors and matrices, or of string comparison and regular expressions. Some languages (like Java or LISP) provide support for lambda functions. And other languages combine domain-specific features with general purpose computing–such as PHP, which allows general-purpose programs to be written but is designed for web pages.
Pushing farther afield, we have languages such as Prolog, a declarative language which defines the formal logic rules of a program without declaring the control flow to execute the rules.
(Prolog defines the relationships between a collection of rules, and performs a search through the rules in response to a query. Such a language is useful if we wish to, for example, provide a list of conditions that may be symptoms of a disease; a Prolog query would then list the symptoms, and after execution provide a list of diseases which correspond to those symptoms.)
But let’s ignore stuff like this for now, since my interest here is either procedural or object-oriented programming. (One could consider object-oriented programming as basically procedural programming performed on objects.)
The design of a programming language is quite interesting.
And how you answer questions like these (and other questions that may come up) really determines the simplicity of learning versus the expressive power of the language. Sadly, expressive power can become confusing and harm learning: just look at the initial promise of Swift as an easy and painless language to learn. A promise that has since been retracted, since Swift is neither a stable language (Swift 1 does not look like Swift 4) nor simple. Things like the type safety constructs ? (optional) or ! (forced unwrapping) are hard to understand, since they rely on the concept of "pointers" and the safety (or lack thereof) of dealing with null pointers (that is, pointers to memory address 0, which typically means "not initialized" or "undefined").
Or just look at how confusing the C type system can become to a beginner. I mean, it’s easy for a beginner to understand:
int foo[5];
That’s an array of 5 integers.
But what about:
char *(*(**foo[][8])())[];
Often you find C programmers avoiding the "expressive power" of C by using typedefs instead, declaring each component of the above as an individual type.
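Here is what that looks like in practice, a sketch with invented type names. Reading the declaration above from the inside out, foo is an array of arrays of eight pointers to pointers to functions returning a pointer to an array of pointers to char; each typedef below peels off one layer.

typedef char *StringPtr;            /* pointer to char                          */
typedef StringPtr StringTable[];    /* (open-ended) array of pointers to char   */
typedef StringTable *TableRef;      /* pointer to such an array                 */
typedef TableRef TableFunc();       /* function returning that pointer          */
typedef TableFunc *TableFuncPtr;    /* pointer to that function                 */
typedef TableFuncPtr *TableHandle;  /* pointer to a pointer to the function     */

TableHandle foo[][8];               /* the declaration above, now readable:     */
                                    /* an array of arrays of 8 TableHandles     */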
It is in large part C's "expressive power" (combined with its terse syntax) which allows contests like the International Obfuscated C Code Contest to exist: notice we don't see an "obfuscated Java contest".
Behold, a runner up in that contest.
But at least it isn’t APL, a language once described to me as a “write-only programming language” because of how hard it is to read, making use of special symbols rarely found on older computers:
(~R∊R∘.×R)/R←1↓⍳R
This is the Wikipedia example of an APL program which finds all prime numbers from 1 to R.
No, I have no clue how it works, or what the squiggly marks mean.
Simplicity, it seems to me, forgoes expressive power. Java, for example, cannot express the idea of an array of pointers to functions returning pointers to arrays–since Java does not have the concept of a pointer to a function (that's handled by the reflection API), nor does Java have the concept of pointers at all. Further, Java does not permit the declaration of complex anonymous structures: first, everything is a class; and second, classes are either explicitly named or implicitly named as part of an anonymous declaration. It's hard to declare something like the following C++ declaration; you're forced to break down each component into its own declaration.
struct Thing {
    struct { int x; int y; } loc;
    struct { int w; int h; } size;
};
And it’s just as well; this makes more sense if you were to write:
struct Point { int x; int y; };
struct Size  { int w; int h; };
struct Thing { Point loc; Size size; };
It becomes clear that “Thing” is a rectangle with a location and a size.
But then, people often complain that Java requires a lot more typing to express the same concept.
It’s a balance. It’s what makes all this so fascinating.
My comment to the FAA regarding a request for comments.
After reading about a request from the FAA regarding a redesign of the Embraer ERJ 190-300 aircraft on Schneier on Security, I took a moment to leave a comment with the FAA.
Note that from my reading of the original call for public comments, it seemed to me that what Embraer wants to do is put all equipment–from in-flight entertainment to some of the aircraft avionics–on a single network which includes off-the-shelf operating systems.
That is, I believe Embraer is not moving towards “security through obscurity” (though this may be an effect of what they are doing), but towards using more off-the-shelf components and towards using network security in place of physical security when designing in-flight electronics. (Current aircraft provide for network security by physically separating in-flight passenger entertainment systems from aircraft avionics.)
With that in mind, I left the following comment with the FAA.
Note that I did this without my usual first cup of coffee, so it’s not very polished.
(Oh, and as a footnote: if you choose to leave a comment with the FAA, be polite. And realize that, unlike most bureaucracies in the Federal Government, the FAA tends to attract people who love to fly or who love to be around airplanes. So if you leave a comment, realize you’re leaving a comment with a bunch of old pilots: very smart folks who are very concerned about aircraft safety, and who are trying to learn about new technologies to make sure the system remains the safest in the world.)
From the background information provided in FAA-2017-0239, it sounds to me like the new design feature of the Embraer Model ERJ 190-300 aircraft is the use of a single physical network architecture which would tie in equipment in the “passenger-entertainment” domain and the “aircraft safety” domain, and provide for separation of these domains through network security. That is, it would provide isolation in these domains through software, and would therefore require network security testing in order to verify the separation of these domains.
When considering computer security, it is useful to classify the potential problems using the “CIA triad”: confidentiality, integrity and availability.
In this context, confidentiality is the set of rules and the systems and mechanisms which limit information access to those who are authorized to view that information. Integrity is the set of measures used to assure the information provided is trustworthy and accurate. And availability is the guarantee that the information can be reliably accessed by those authorized to view it, in a timely fashion.
Taken individually we can then consider the types of attacks that may be performed from the passenger-entertainment domain against the aircraft security domain which may cause inadvertent operation of the aircraft.
Take attacks against availability, for example. Attacks against availability include denial of service attacks: one or more malfunctioning pieces of equipment flood the overall network with so much useless data that it chokes off the flow of information across the network. A similar attack, known as a SYN flood attack, attempts to abuse the underlying TCP protocol's three-way handshake to prevent a piece of equipment from responding to new network connections.
Embraer needs to demonstrate that, if they use a unified network architecture, the avionics and navigation equipment in their aircraft can survive such a denial of service attack against network availability. This means they need to demonstrate that map products and weather products are not adversely affected, that control services and auto-pilot functionality are not adversely affected, and that attitude, direction and airspeed information is not adversely affected during a denial of service attack. (If the test is to be passed by resorting to back-up indicators, the system must clearly visually indicate to the pilot that avionics are no longer reliable or available.)
Attacks against integrity include spoofing network packets–equipment plugged into the passenger-entertainment domain which spoof on-board data-packets sent from various sensors, and session hijacking: creating properly shaped TCP/IP packets which intercept a network connection between two pieces of equipment. (A sufficiently sophisticated attacker can, with a laptop connected to a network, create a seemingly valid IP packet that appears to come from anywhere in the network.)
Embraer needs to demonstrate that the network protocol they use to communicate between pieces of equipment is hardened against such spoofing. Hardening can be performed through end-to-end encryption and through the use of checksums (above those provided by TCP/IP) which validate the integrity of the encrypted data. Embraer will also need to have processes in place which allow these security encryption protocols to be updated during maintenance in the event a disgruntled employee leaks those security protocols.
Embraer also needs to demonstrate their software is hardened against “fuzzing”. Fuzzing is an automated testing technique that involves providing invalid, unexpected or random data to a system and making sure it doesn’t fail (either stop working or provide inaccurate information to the pilots). Fuzzing can be performed both by providing completely random inputs, and by varying valid packets by small amounts to see if it creates problems with the system being tested.
In proving that systems are hardened against such protocol attacks, Embraer needs to demonstrate not only that invalid information does not flow into the aircraft-safety domain, but they must also demonstrate that such attacks do not shut down communications between the aircraft-safety domain and various sensors sending vital data.
Confidentiality attacks include passive attacks: users sniffing all the data on a network and eavesdropping on the data being sent and received, and active attacks: gaining internal access to systems which they are not authorized to access. Confidentiality attacks can lead to attacks against availability (by, for example, locking out someone using a password change) and integrity (by, for example, corrupting IFR map products).
Embraer needs to demonstrate their software is hardened against unauthorized access, both to the flight sensor data being sent across the network (through end-to-end encryption) and by verifying that access to in-flight data products is restricted to authorized users (by using some form of access control, either through the use of a physical key, a password or biometric security: the "three factors" of authentication). For example, one way to satisfy this requirement for mapping products is to require that map data can only be changed from within the cockpit.
Of course, as always, Embraer needs to demonstrate that back-up attitude, directional and air-speed indicators continue to work in the event of a problem with the avionics, that the avionics provides a clear indication that the information they are providing may be invalid due to failure or due to a network attack. And Embraer needs to demonstrate the flight controls of the aircraft continue to operate in the event in-flight avionics are compromised. This includes making sure that all flight control systems are kept separate from the in-flight avionics, with the exception of the autopilot (which should not prevent the pilot from taking control with sufficient force).
I understand that this is an incomplete list of potential security attacks. But by classifying attacks in a networked environment along the “CIA triad”, it should be possible to create and maintain a comprehensive list of tests that Embraer can perform to assure the integrity of the aircraft security domain.
Thank you for your time.
– Bill Woody
One of my biggest complaints about the way we approach government is that we constantly complain about the lack of experts in government–but then, those of us who have some expertise never bother to publicly participate in what is supposed to be a democratic process.
So even if my comments are utterly worthless, they are far better than the expert in the field who never says anything.
A clever man-in-the-middle attack, which is why you should always receive reset instructions by e-mail.
How hackers can steal your 2FA email account by getting you to sign up for another website
tl;dr: When building a web site, NEVER create a reset password flow that asks security questions. Always send an e-mail with reset instructions.
In a paper for IEEE Security, researchers from Cyberpion and Israel’s College of Management Academic Studies describe a “Password Reset Man-in-the-Middle Attack” that leverages a bunch of clever insights into how password resets work to steal your email account (and other kinds of accounts), even when it’s protected by two-factor authentication.
Here’s the basics: the attacker gets you to sign up for an account for their website (maybe it’s a site that gives away free personality tests or whatever). The sign-up process presents a series of prompts for the signup, starting with your email address.
As soon as the attacker has your email address, a process on their server logs into your email provider as you and initiates an “I’ve lost access to my email” password reset process.
From then on, every question in your signup process for the attacker’s service is actually a password reset question from your email provider. For example, if your email provider is known to text your phone with a PIN as part of the process, the attacker prompts you for your phone number, then says, “I’ve just texted you a PIN, please enter it now.” You enter the PIN, and the attacker passes that PIN to your email provider.
Same goes for “security questions” like “What street did you live on when you were a kid?” The email provider asks the attacker these questions, the attacker asks you the questions for the signup process, and then uses your answers to impersonate you to the email provider.
This has some serious consequences with account sign-up and password reset flows that do not involve a secondary channel, such as sending an e-mail or replying to an SMS message.
Note that this attack is insidious because it appears you're simply answering sign-up questions on a third-party web site, while that third party is actually using your answers to attack a trusted account, such as your e-mail or bank account.
Do you want to know the reason why? Cocoapods.
The Size of iPhone’s Top Apps Has Increased by 1,000% in Four Years
According to Sensor Tower’s analysis of App Intelligence, the total space required by the top 10 most installed U.S. iPhone apps has grown from 164 MB in May 2013 to about 1.8 GB last month, an 11x or approximately 1,000 percent increase in just four years. In the following report, we delve deeper into which apps have grown the most.
Do you want to know why?
Poor software management, and the increasing reliance on libraries which add code bloat.
The former happens when designers and product managers try to fit more and more features into an app, and developers who are rushed to add those features wind up implementing the same functionality five or six different times.
And the latter–well, no one has ever been fired for using Cocoapods (or another library manager) and sucking in functionality from a third party library. Or twenty.
On one project I worked on a while back, the project manager didn't tell me he had someone else working on the project as well–and while I noticed the other developer's check-ins, I didn't think anything of it until one day two dozen libraries were suddenly checked in. I slow-walked my way out of that project starting that day, in part because the other developer replaced a very simple network interface (which worked reliably) with a link to a half-dozen libraries and a rewritten network interface which didn't work correctly. (For some reason he thought including AFNetworking v2 was better than simply hitting NSURLSession–despite the fact that AFNetworking v2 uses the older NSURLConnection class, and despite the more important fact that he was using AFNetworking wrong.)
So if you’re an iOS developer and you’re wondering why your app is bloated?
Look in a mirror.
Yes, Apple's tools have contributed to the problem somewhat. And yes, shipping artwork at multiple resolutions has contributed–though even there I'd argue one huge problem is that so many developers punt on gradients and other special effects by including a huge .PNG file rather than using CAGradientLayer and other API entry points.
But in the end, software bloat is coming from poorly built applications constructed by iOS developers who don’t know what they are doing being rushed by product management to deliver garbage.
A fascinating analysis of the navigation bar, why it must go away, and why I disagree somewhat.
All Thumbs, Why Reach Navigation Should Replace the Navbar in iOS Design
As devices change, our visual language changes with them. It’s time to move away from the navbar in favor of navigation within thumb-reach. For the purposes of this article, we’ll call that Reach Navigation.
Read the rest of the article for a fascinating analysis as to why we should redesign navigation in iOS applications to support “reach navigation” by placing important elements towards the bottom left of the screen, and by supporting certain gestures, such as swipe to move back on the screen.
Of course I have two problems with the idea of completely obliterating the UINavigationBar.
Discoverability.
One of the trends we see in iOS and on mobile devices in general is the loss of discoverability: the loss of the affordances which allow a new user to, for example, recognize that something can be tapped on, or that something reveals a drop-down list of items. You can see this in iOS 10 on the new iPhone 7: pressure-taps on home screen icons for some applications pop up a list of options–but you have no way of knowing this. On earlier versions of Android if you wanted to delete an item in a table, you would do a "long-tap": press and hold for some period of time. But is there anything in the table telling you this? Scrolling on both platforms gives no hint that something can be scrolled if the list happens not to have an item that is "clipped" by a boundary–which is why Android started adding fade effects, and why some designers on iOS were adding the same thing.
Affordances allow us to determine that we can do a thing. And the UINavigationBar, even in its stripped-down post-iOS 7 state (where only color indicates something can be tapped on, a problem for people who are color-blind), still communicates that the screen we're on can go back to a prior screen.
If we do away with the UINavigationBar, how will someone new to the platform know to swipe to cancel?
Further, if swipe to cancel becomes a standard gesture, how do we accommodate swiping left and right to browse a list of photographs? I know that Apple prefers swipe up and down for browsing and reserves swipe left and right for other gestures–but most designers working on the iOS platform don't understand such subtleties, and Western speakers generally think in terms of scanning left and right.
Lower usage controls.
Sometimes we need to provide controls which permit us to do an operation, but which we don’t expect the user to commonly use. They still need to be discoverable.
Formerly when we considered design as an inverted L (with the most visually important thing upper left, and the top being the important branding and navigation area), we’d place the least used controls at the bottom.
But just because in today’s world the easiest to reach controls are at the bottom doesn’t mean we don’t have need for those lesser-used controls–nor should they be accessed by a hidden gesture. (I imagine for Pagans that a custom swipe of a pentagram on the screen reveals a secret settings page. And for the advanced, invoking pentagram swipes invoke different settings: swiping to invoke water, for example, brings up a different screen than invoking spirit.)
Instead, with the notion of reachability navigation, the ideal place to put lesser used settings would be the upper right–where you would need two hands to reach.
Further, while the designer in this article may hate branding–it still is useful from a business perspective.
So use the UINavigationBar. But rethink your navigation. And try to design your application so the most important elements are within thumb’s reach–at the bottom of the screen, where you would normally dump functionality you don’t want to deal with.
Transitioning To WordPress
So I’m most of the way to transitioning the entire web site to WordPress. Really the only thing I was using this site for was blogging, with the occasional page referring to some software project or another. So I’ve decided to just go ahead with the switch.
Of course it doesn't help that my old hosting company was being a little annoying. I mean, who gives a business 24 hours' notice to rectify a problem, on a Sunday? Really?
If you can’t explain it to someone not in your field, you don’t understand it.
Richard Feynman’s ‘Prepare a Freshman Lecture’ Test
Feynman was a truly great teacher. He prided himself on being able to devise ways to explain even the most profound ideas to beginning students. Once, I said to him, “Dick, explain to me, so that I can understand it, why spin one-half particles obey Fermi-Dirac statistics.” Sizing up his audience perfectly, Feynman said, “I’ll prepare a freshman lecture on it.” But he came back a few days later to say, “I couldn’t do it. I couldn’t reduce it to the freshman level. That means we don’t really understand it.”
John Gruber continues, regarding Brian Merchant’s The One Device:
Schiller didn’t have the same technological acumen as many of the other execs. “Phil is not a technology guy,” Brett Bilbrey, the former head of Apple’s Advanced Technology Group, says. “There were days when you had to explain things to him like a grade-school kid.” Jobs liked him, Bilbrey thinks, because he “looked at technology like middle America does, like Grandma and Grandpa did.”
A couple of Apple folks who’ve had meetings with Phil Schiller and other high-level Apple executives (in some cases, many meetings, regarding several products, across many years) contacted me yesterday to say that this is pretty much standard practice at Apple. Engineers are expected to be able to explain a complex technology or product in simple, easily-understood terms not because the executive needs it explained simply to understand it, but as proof that the engineer understands it completely.
Based on what I’m hearing, I now think Bilbrey was done profoundly wrong by Merchant’s handling of his quotes.
I concur.
If you cannot explain it to a beginner in your field in the form of a “freshman lecture”, it means you do not understand it.
And if you do not understand it, it means whatever it is you’re producing will be an unmitigated disaster–because if you don’t understand it, chances are if you’re coding it you’re just piling up code until it seems right; if you’re designing it you’re just piling up images until you think you’ve dealt with the design; if you’re selling it you’re just using smoke and mirrors to avoid the gaps which make your product an inconsistent and incoherent mess.
Apple's strength, by the way, does not come from being the first mover in the market. Apple has never been a first mover in any market they now dominate. Apple did not invent the personal computer. Apple did not invent the smart phone. (They didn't even invent the touch screen smart phone.) Apple did not invent the music player. Hell, even iTunes started life as an acquisition; the current user interface still contains many of the elements from when it was SoundJam MP, a CD ripper app with music player synchronization features.
What Apple does, which has allowed it to dominate the mobile phone market, the mobile player market and the tablet computer market, and to make inroads in the mobile computing and desktop computing markets, is to make things simple. To create products that can be explained to Grandpa and Grandma.
To make things that work.
And if you want to beat Apple at its own game, make products which are beautiful but simple, whose form follows function and whose function is suggested by form. Make a product which is beautiful but obvious, and which is well designed from the hardware through the software to the user interface.
In other words, make a product which explains itself (through good design) to Grandpa and Grandma.
Sadly, given the amount of disdain shown by most people in the computer industry–from programmers who don’t understand what they’re doing and who refer to their user base as ‘lusers’ to designers who don’t realize they’re telling a story and are just drawing pretty pictures–Apple’s monopoly on good design won’t be challenged in a meaningful way anytime soon.
Not even by the likes of Google, Microsoft or Facebook–who may pay lip service to good design, and who may even have some really top notch designers on their staff–but who are still steeped in the mock superiority and arrogance of Silicon Valley.