About William Woody

I'm a software developer who has been writing code for over 30 years in everything from mobile to embedded to client/server. Now I tinker with stuff and occasionally help out someone with their startup.

The real problem: improper separation of concerns.

I just read the following article: Stop Using Function Parameters Now.

It was an article calling for the use of a Javascript feature–object destructuring–for passing parameters; in a language like Swift the same ends could be achieved using default parameter values and argument labels.

Here’s the example:

// Receives a name and returns a greeting message

// Typescript
function greet(name: string): string {
    return `Hello, ${name}`;
}

// Javascript
function greet(name) {
    return `Hello, ${name}`;
}

The contract is simple — provided a name, the function will return a greeting message. A name is all we’ve got and all we need at the moment.

In a few months though, when the function is already used all over the code base, we find ourselves in need of adding a last name to the greeting message.

We could, of course, just use the name parameter for the last name as well, but since we’re deeply aware of the differences between Internationalization and Localization we know that in some countries the last name can come first. We want our product to be the best fit for our users around the globe, so we want to support this use case.

Since the function is already used in many places across the code base we want to make sure it’s backwards compatible.

The article then goes on for a while talking about how messy this interface can be if you add every conceivable parameter (like first name, last name, honorific, surname) and how we can solve this by using a Javascript feature for passing arguments. It’s worth scanning the article to understand what the argument is.

Here’s the problem.

What is a name?

One of the things I appreciate about software development is that at some level you have to become a philosopher if you want to become a really great software developer. I don’t claim to be great, but I do find philosophy helps.

In this case, the relevant philosophical ideas are those around taxonomy, classification, and ontology–that is, how we categorize things and how we know what they are.

So again, the question is “what is a name?”

Is a name a string? Is it the tuple (first name, middle name, last name)? What about people without middle names? Is it a Unicode string? Does it have type attributes associated with marriage, degree, honorifics? Is there an optional “the Third, the Fourth, Junior” attribute?

When we start asking these questions, and start thinking of how to pass a name with all these attributes to our modified greeting function, we’ve lost our way completely.

The answer? A name is an attribute of a named thing. For our purposes in this application, a name is something we call a person.

This is why I really dislike the article linked at the top.

Because any time you find yourself looking at a function like greet above, and you find you’re trying to add additional attributes to handle different cases of what a ‘name’ is, you need to stop and ask yourself: “should I be passing an object that encapsulates the property I need here?” Don’t reach for some language trick to hide the parameters being passed. (And notice how the body of greet later in his article is completely glossed over–“some implementation” indeed!)

Because you know damned good and well it’s not just greet that needs the name. The name is being used everywhere else in your code–and I guarantee you some variation of the body of the function in greet is scattered throughout your source kit. Perhaps even in a few dozen places–cut-and-pasted from the original greet function, sometimes poorly maintained or even ignored as the requirements change.

This falls into the category of “separation of concerns.”

Consider the “model/view/controller” model of writing a user interface. You may not think of greet as fitting into this paradigm, but it most definitely does–as view code. That is, as a part of your application engaged in presenting a user interface to the user.

Sure, it’s not a table with multiple variables populated across several databases joined by a complex query. But it still is a user interface element–view code.

A user interface element which ideally takes a model representation, and which is invoked by controller code that calls greet when required by our business logic.

And your greet function has no business implementing what should be model code–that is, what should be code that determines the string representing the name of a model object “person”.

(As a note: any time you find yourself cutting and pasting code in your application in order to do the same thing, you’re dealing with a separation of concerns issue. The repeated code would probably be better off wrapped as a method, where it is expressed once, instead of in a half-dozen places where maintenance engineers will forget to fix them all when a bug surfaces.

Just as anytime you find yourself hacking a method like greet to process names, when the model should be processing the person’s name, I guarantee you, you’ll find copies of the body of greet scattered throughout your code. And all “object destructuring” is doing is giving us a way to hide our crimes.)

The solution?

It depends on your business logic, but in the past I’ve implemented the equivalent of:

// Swift
func greet(person: PersonRecord) -> String {
    return "Hello, \( person.informalName() )"
}

With the method “informalName” being a method which provides an ‘informal name’ for the person–in the case of an application I worked on, the concatenation of a first name (if present in the database) along with the first initial of the last name (again, if present). So for my name “Bill Woody” the function would return “Bill W.”
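A minimal sketch of that logic in C++ (the field and method names here are hypothetical; the real version pulled these values from a database, where either name part could be missing):

```cpp
#include <string>

// Hypothetical person record; in the real application these fields came
// from a database, and either could be empty.
struct PersonRecord
{
    std::string firstName;
    std::string lastName;

    // Informal name: the first name plus the first initial of the last
    // name, using whichever parts are actually present.
    std::string informalName() const
    {
        if (firstName.empty()) return lastName;
        if (lastName.empty()) return firstName;
        return firstName + " " + lastName.substr(0, 1) + ".";
    }
};

std::string greet(const PersonRecord &person)
{
    return "Hello, " + person.informalName();
}
```

The point stands regardless of the exact formatting rule: greet stays a one-liner, and the naming policy lives in one place.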

(No, I did not design the database’s first name/last name scheme; I have had several Korean and Vietnamese friends and I know damned good and well in some cultures the first name is the family name; the second is the given name. But that’s a rant for another day.)

Your requirements may be different.

But notice: determining what the proper name is is not the job of the controller code which calls greet, and formatting the proper name is not the job of the greet function. It is the job of the PersonRecord model, which understands what a person is, and how people in your system are named.

And because all of our name handling code is now in one place–in the PersonRecord method–rather than scattered throughout the source kit, as would be the case if we followed the original developer’s advice of not pausing to consider whether what we’re doing is even the right answer, we can tackle the “myths about names that programmers believe”.

All in one place.

Design Patterns: Parsing A File

One of the more common problems we encounter when writing software is the problem of parsing a file. Generally that involves parsing a text file–that is, turning an array or stream of characters into a data structure that can then be operated on. While this routinely comes up when designing a compiler, we see text files and parsing text files literally everywhere: in configuration files, in HTTP transactions where we receive a text response to an API query, even internally parsing commands.

There are several tools out there for parsing a file, but the general design pattern I want to talk about is the two-phase parsing process we learn about in Computer Science when talking about building a compiler.

Those two steps are “lexifying” or “tokenizing” a stream–turning raw arrays of characters into ‘tokens’ that indicate the type of thing we’re looking at (“word”, “number”, “string”, etc.)–and then parsing the stream of tokens to turn them into an internal data structure that we can then directly operate on.

This two-step process is performed when we compile a program. But we also see this two-step tokenization/parsing process happen when parsing a JSON file, when parsing XML, or when parsing our own configuration files.

As a side note, the idea that we’re employing here, of “chunking” big problems into little problems, is its own “design pattern” of sorts.

And “chunking” of big problems into more manageable parts is a common idea we see everywhere: in the implementation of a protocol stack for network communications, where each “layer” implements just one piece of the puzzle and is only responsible for solving one problem. In a graphics pipeline, where each state of computation is handled by a separate step which is responsible for handling only one part of the geometric transformation which results in pretty pictures on a screen. Even in the development of a computer operating system, which is generally assembled not as a single monolithic entity, but as a series of individual computer programs each responsible for solving only one specialized problem.

This idea of chunking a problem into smaller bits is also referred to as separation of concerns, and is worth understanding in its own right. “Separation of concerns” not only discusses chunking problems, but formalizes the idea, adding things like defining well-defined interfaces and building your code as a series of “black boxes” which can stand on their own.


The first step is tokenization–that is, taking the text file and discovering the “tokens” in that text file which represent our “language.”

Tokenization depends on what we’re parsing. For example, if we were interested in parsing JSON, the tokens we need to look for are specified in the JSON specification, which itself is a very small subset of Javascript. And we can use the specification on this page to build a tokenizer by hand.

The JSON specification itself describes three fundamental object types beyond the incidental one-character punctuation marks that are used to define objects: strings (which are surrounded by double quotes), numbers, and tokens (such as “true” or “false”). As JSON is based on the Javascript specification, I assume “tokens” are all symbols that start with a letter or an underscore and are followed by letters, underscores or numbers. (So, for example, NULL would be a token, and A134 is a token. Anything a modern computer programming language would consider a ‘variable’ would be a token.)

Now whenever we read a file for tokenization, we generally want to create a mechanism to “push back” read characters. The idea here is that sometimes we have to read the next character to know it’s not part of the current token. So, for example, if we see the following:

3.14a

We know after reading ‘3’, ‘.’, ‘1’ and ‘4’, ‘a’ is not part of the number. So we want to save ‘a’ for later use for the next token.

So we need routines which handle reading from our input stream and pushing back to our input stream. One way to handle this is with an array of pushed-back characters.

We define our tokenizer class in C++, ignoring the things we’re not talking about now:

/*  JSONLexer
 *      JSON Lexer engine
 */

class JSONLexer
{
    public:
                        JSONLexer(FILE *f);

    private:
        FILE            *file;
        uint32_t        line;

        uint8_t         pos;
        uint8_t         stack[8];

        int             readChar();
        void            pushChar(int ch);
};
That is, we create a lexer which tokenizes the contents of the file, and we define the push-back stack as the internal members pos and stack.

Our implementations of the internal readChar and pushChar methods–which follow our own notion of “chunking” from above (and thus are only responsible for reading and pushing characters from our file)–look like this:

/*  JSONLexer::readChar
 *      Read the next character in the stream. Note this reads as 8-bit bytes,
 *  and does not validate unicode as UTF-8 characters.
 */

int JSONLexer::readChar()
{
    int ret;

    if (pos > 0) {
        ret = stack[--pos];
    } else {
        ret = fgetc(file);
    }

    if (ret == '\n') ++line;
    return ret;
}

/*  JSONLexer::pushChar
 *      Push the read character back for re-reading again
 */

void JSONLexer::pushChar(int ch)
{
    stack[pos++] = (uint8_t)ch;

    if (ch == '\n') --line;
}

The push routine is easiest to explain: we simply add the character to our stack. (It’s limited to 8 characters, but we assume we won’t need to push back nearly that many characters during parsing.) We also keep track of the number of newlines we’ve read, for reporting purposes. (This isn’t strictly necessary, but becomes convenient later.)

Note that we’d write the same two routines regardless of the file we’re parsing.

That is because any text file we’re reading may require a few characters “look-ahead” to determine the token we’re currently reading. That is also true if we’re parsing the C language or parsing an XML file.
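To make the look-ahead concrete, here is a stripped-down, self-contained version of the same push-back idea, reading from a string rather than a FILE (the names here are mine, not the GitHub code’s), showing the “3.14a” case:

```cpp
#include <cctype>
#include <string>

// Stripped-down push-back reader over a string instead of a FILE, to
// illustrate the one-character look-ahead used when scanning "3.14a".
struct Reader
{
    std::string src;
    size_t      next = 0;
    int         stack[8] = {};
    int         pos = 0;

    int readChar()
    {
        if (pos > 0) return stack[--pos];       // pushed-back character first
        if (next >= src.size()) return -1;      // end of input
        return (unsigned char)src[next++];
    }
    void pushChar(int ch)
    {
        stack[pos++] = ch;
    }
};

// Scan a number; stops at (and pushes back) the first character that
// cannot be part of the number.
std::string scanNumber(Reader &r)
{
    std::string num;
    int c;
    while ((c = r.readChar()) != -1) {
        if (isdigit(c) || c == '.') {
            num += (char)c;
        } else {
            r.pushChar(c);      // 'a' is not part of "3.14"; save it
            break;
        }
    }
    return num;
}
```

After scanning, the pushed-back ‘a’ is the very next character the reader returns, exactly as if it had never been consumed.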

Now that we have our push-back stack, we can build the tokenizer itself. Essentially our tokenizer reads as many characters as possible that fit the pattern for “number”, “token” or “string”–and returns the text value of that object as well as the token ID for that object.

Basically think of this as an overblown version of ‘readChar’, except we read a token and return the token type as well as the contents of the token. (So, for example, our routine readToken may read “null” and return value “TOKEN” and the token string value for our token is set to ‘null’.)

Note that sometimes our parsers also need to read a token ahead. So we also need a flag that indicates we should return the last parsed token rather than read a new token.

The parts of our class that parse our tokens look like this:

/*  JSONLexer
 *      JSON Lexer engine
 */

class JSONLexer
{
    public:
                        JSONLexer(FILE *f);

        int             readToken();
        void            pushToken()
                            {
                                pushBack = true;
                            }

        std::string     token;

    private:
        bool            pushBack;
        int             lastToken;
};

That is, we track the last token we read, and we set a variable indicating if we should parse the next token or not. The text value of the token is in the field token.

Our tokenizer routine looks like the following. First, we determine if we’re just returning the last read token. If so, clear the flag, and return the token.

/*  JSONLexer::readToken
 *      Read the next token in the stream
 */

int JSONLexer::readToken()
{
    int c;

    /*
     *  If we need to, return last token
     */

    if (pushBack) {
        pushBack = false;
        return lastToken;
    }

Now we strip out whitespace.

Note that for most programming languages and configuration files, white space is basically syntax sugar which separates tokens; it’s what makes ‘ABC DEF’ two tokens instead of one, from the parser’s perspective.

(Some languages make white space semantically important, like Markdown. Which means you’d need to use a different tokenizer to track white space in those languages. But for JSON, white space is not semantically meaningful, so we can strip it out.)

If we hit the end of the file, we return the end of file marker value, -1.

    /*
     *  Skip whitespace
     */

    while (isspace(c = readChar())) ;
    if (c == -1) return -1;         /* At EOF */

Now comes the interesting part. Basically we read the next character and, based on that character, we determine the thing we’re reading. For most tokenizers the next character completely defines what we’re reading: if it’s a token, a number, or a string. Some languages may require us to read a few characters ahead to figure out what we’re doing–but for JSON it’s far simpler.

Stripped of the logic of reading the rest of the token, our tokenizer looks like this:

    /*
     *  Parse strings
     */

    if (c == '"') {
        ... Read the rest of the string ...
        return lastToken = STRING;
    }

    if (isalpha(c)) {
        ... Read the rest of the token ...
        return lastToken = TOKEN;
    }

    if (isdigit(c) || (c == '-')) {
        ... Read the rest of the number, or determine if the minus sign is
            just a minus sign ...
        return lastToken = NUMBER;
    }

    /*
     *  Return single character
     */

    return lastToken = c;
}

The full source code for the class (and the rest of the JSON parser and pretty printer) can be found on GitHub.


Now that we have a stream of tokens, we can carry out the second half of the problem, parsing the tokens and making some sense of what those tokens mean.

There are several ways we can handle the parsing problem. One conceptually easy way is through hand-rolling a recursive descent parser. The idea here is that at the top our language defines a ‘thing’ which our parser’s parsing method is responsible for parsing. As we uncover the next token we then descend into a method which parses that sub-thing, and sometimes that requires us to recursively call earlier routines to handle the parsing task.
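As a toy illustration of that recursive descent shape (not the article’s JSON code), here is a parser that computes the nesting depth of a string of balanced brackets; note how parseList calls itself whenever it encounters the start of a nested ‘thing’:

```cpp
#include <string>

// Toy recursive-descent parser: computes the maximum nesting depth of a
// string of balanced brackets, e.g. "[[][[]]]" has depth 3. parseList()
// recurses for each nested list -- the same shape the JSON parser's
// parseValue/parseObject/parseArray methods take.
struct DepthParser
{
    std::string src;
    size_t      pos = 0;

    // Parses one "[ ... ]" list; returns its depth, or -1 on syntax error.
    int parseList()
    {
        if (pos >= src.size() || src[pos] != '[') return -1;
        ++pos;                          // consume '['

        int deepest = 0;
        while (pos < src.size() && src[pos] == '[') {
            int d = parseList();        // recurse for each nested list
            if (d < 0) return -1;
            if (d > deepest) deepest = d;
        }

        if (pos >= src.size() || src[pos] != ']') return -1;
        ++pos;                          // consume ']'
        return deepest + 1;
    }
};
```

The method mirrors the grammar rule it parses; that one-to-one correspondence between grammar rules and methods is what makes recursive descent parsers easy to hand-roll.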

For JSON, there are fundamentally three language constructs: a “value” which can be a string, a number, “true”, “false”, “null”, or it can be an “object”, or an “array.” Objects and arrays in turn contain values themselves.

Our JSON language parser explicitly defines three methods, each of which parses a “value”, an “object” or an “array”. (Note that during the parsing process an internal data structure of the JSON object is built inside our class–but for the purposes of the discussion that’s not as important now.)

/*  JSONParser::parseValue
 *      Parse the value. This assumes we are at the start of a value to parse,
 *  and assumes the next token is either a start of object, start of array,
 *  true, false, null, a number or a string
 */

bool JSONParser::parseValue()
{
    ... Parse the value. If we see a '{', call 'parseObject'. If we see a '[', call 'parseArray' ...
}

/*  JSONParser::parseObject
 *      Parse the object. This is called after the '{' token is seen
 */

bool JSONParser::parseObject()
{
    ... Parse the object. When we need to parse a value, call 'parseValue' ...
}

/*  JSONParser::parseArray
 *      Parse the array object
 */

bool JSONParser::parseArray()
{
    ... Parse the array. Values in the array are parsed by calling 'parseValue' ...
}

Side note: One aspect of building our parser here is that we take a page from the playbook of the SAX parser for XML: instead of having our JSONParser object construct a data representation of the JSON object as we parse it, our parsing routines call several abstract methods indicating ‘events’, like “start object” and “start array.” This allows our parser to focus on just parsing the text, and defers the logic of what to do as we see different symbols to a separate class.

You can see this in the way we separate the concerns: the JSONParser class parses the file and fires the 11 separate abstract virtual methods in a specific order as we encounter values or objects. The separate JSONRecordParser extends our parser and implements the interface routines, using them to assemble a data structure which we can use elsewhere.

By separating the concerns in this fashion, in theory you could create your own class as a child class of JSONParser, intercepting the events and handling them in a different way.
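A sketch of what that event-driven split can look like (the method names here are illustrative; the actual JSONParser on GitHub declares its own set of 11):

```cpp
#include <string>

// Sketch of the SAX-style split: the base class parses and fires events;
// a child class decides what the events mean. (Method names are
// illustrative, not the actual JSONParser interface.)
class EventParser
{
    public:
        virtual ~EventParser() {}

        // Events fired by the parsing machinery as it walks the input
        virtual void startObject() = 0;
        virtual void endObject() = 0;
        virtual void stringValue(const std::string &s) = 0;
};

// One possible child class: ignores the values entirely and just tracks
// how deeply the objects nest.
class DepthCounter: public EventParser
{
    public:
        int depth = 0;
        int maxDepth = 0;

        void startObject() override
        {
            if (++depth > maxDepth) maxDepth = depth;
        }
        void endObject() override
        {
            --depth;
        }
        void stringValue(const std::string &) override
        {
        }
};
```

Swapping DepthCounter for a class that builds a parse tree requires no change to the parsing machinery at all; that is the payoff of the separation.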

Overall, then, our parsing process is invoked at the top level with the calls:

JSONLexer lexer(f);
JSONRecordParser parser;
JSONNode *node = parser.parse(&lexer);

And this is a hallmark of our overall two-step pattern: each step is represented as a separate class which handles its own operations. The second class then returns a data structure or object which represents what we parsed, and we can then use the contents of that item as we need to later in our code.


Whenever you see a text file that needs to be parsed, consider breaking the problem into a tokenization step where you break the text file into a series of tokens–and a separate parsing step which parses the tokens and makes sense of what the tokens represent. Usually the parser will wind up building an internal data structure: a parse tree, a document object model or an abstract syntax tree, though sometimes you may wind up employing other algorithms, such as the shunting yard algorithm to parse infix-notation mathematics, evaluating the expressions as they are parsed.
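For a taste of the shunting yard approach mentioned above, here is a minimal sketch that evaluates infix expressions as it parses them, under deliberately narrow assumptions: single-digit operands, the four basic operators, and no parentheses or whitespace:

```cpp
#include <cctype>
#include <string>
#include <vector>

// Minimal shunting-yard evaluator. Operators wait on a stack until an
// operator of lower or equal precedence arrives, at which point they are
// applied -- effectively evaluating the infix input in postfix order.
// Assumes: single-digit operands, operators + - * /, no parentheses.
int precedence(char op)
{
    return (op == '*' || op == '/') ? 2 : 1;
}

int apply(int a, int b, char op)
{
    switch (op) {
        case '+': return a + b;
        case '-': return a - b;
        case '*': return a * b;
        default:  return a / b;
    }
}

int evaluate(const std::string &expr)
{
    std::vector<int> values;
    std::vector<char> ops;

    // Pop one operator and its two operands, push the result
    auto popApply = [&]() {
        int b = values.back(); values.pop_back();
        int a = values.back(); values.pop_back();
        char op = ops.back(); ops.pop_back();
        values.push_back(apply(a, b, op));
    };

    for (char c: expr) {
        if (isdigit((unsigned char)c)) {
            values.push_back(c - '0');
        } else {
            while (!ops.empty() && precedence(ops.back()) >= precedence(c))
                popApply();
            ops.push_back(c);
        }
    }
    while (!ops.empty()) popApply();
    return values.back();
}
```

The tokenizer/parser split still applies; here the “tokens” are just single characters, which is what keeps the sketch this short.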

And there are tools which exist that make building such parsers easier–though they are not strictly required in order to build a parser. Sometimes building the parser and tokenizer by hand gives you greater flexibility in catching and correcting syntax errors in the input text.

Design Patterns: Introduction

A design pattern is defined as a “general, reusable solution to a commonly occurring problem within a given context in software design.” The idea is that when you encounter a problem, a design pattern provides a ready-to-use template to help solve it.

If you look at the Wikipedia article linked above, it then drops into a bunch of things which are called “design patterns.” And while they are useful, I have never really cared for the way we use the term “design pattern” in practice, because many of them are so “small,” in a sense. That is, a lot of design patterns seem applicable to user interface design–which is good, I suppose–but many of them aren’t more than a way to rearrange objects in an object-oriented system. (And as the criticism in the article notes, many of these “design patterns” are more a reflection of missing features in a programming language than they are true “patterns.”)

For me, the more interesting design patterns I have encountered in my life are far more complex than this. But I see them often enough in my coding that I resort to these “algorithmic” design patterns to solve far more complex problems than “how to unspaghettify my code.”

I think it’s also worth noting the difference between a “pattern” and an “algorithm.”

To me, an algorithm is something that can be built as a self-contained unit. For example, sorting an array can be written in most modern languages using things like Java Generics: define the object, specify the comparator which compares objects, toss to a sorting object with the generic specified as the object you’re sorting. Things like red/black trees and the like can similarly be coded in self-contained packages in such a way so that you simply specify the object you want stored–and you’re good to go.

The implementation details at that point, in a well-written library, become almost academic: you have no need to know how merge sort or red/black trees work; just pass to Arrays.sort() or create a TreeSet and call it a day.
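The same point in C++ terms (since the code in these essays is C++): supply the object and a comparator, and the library’s sort remains a black box:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Supply the object and a comparator; how std::sort works internally
// (introsort, in most implementations) is a detail we never need to see.
struct Person
{
    std::string name;
    int age;
};

void sortByAge(std::vector<Person> &people)
{
    std::sort(people.begin(), people.end(),
        [](const Person &a, const Person &b) { return a.age < b.age; });
}
```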

But some things we need to do are as much “design pattern” as they are “algorithm”; that is, they’re techniques which cannot exactly be packaged, but are larger than a pattern like the observer pattern.

So that is the point of this series of essays: to describe design patterns that are much bigger and more interesting than the ones which are arguably just missing features in a programming language. Though I may touch upon more interesting variations of existing design patterns as I encounter them. And I may discuss things that are not quite ‘algorithmic design patterns’ as much as they are ‘ways to solve problems’ beyond simple design patterns.

Mostly when I encounter a thing that’s interesting to me, I plan to describe it here.

Things to remember: compiler conditionals for MacOS/iOS/etc.

I’m putting this here so I have a place to look for this later. In macOS, iOS, tvOS, etc., there are a number of target conditionals defined in “TargetConditionals.h” in Xcode which allow you to detect what you’re compiling for.

A number of these constants will probably never be seen in the wild. Certainly you’re not going to see a PowerPC running macOS Big Sur anytime soon.

I pulled this directly out of the comments.

Those are:


These conditionals specify which microprocessor instruction set is being
generated. At most one of these is true, the rest are false.

  • TARGET_CPU_PPC – Compiler is generating PowerPC instructions for 32-bit mode
  • TARGET_CPU_PPC64 – Compiler is generating PowerPC instructions for 64-bit mode
  • TARGET_CPU_68K – Compiler is generating 680x0 instructions
  • TARGET_CPU_X86 – Compiler is generating x86 instructions for 32-bit mode
  • TARGET_CPU_X86_64 – Compiler is generating x86 instructions for 64-bit mode
  • TARGET_CPU_ARM – Compiler is generating ARM instructions for 32-bit mode
  • TARGET_CPU_ARM64 – Compiler is generating ARM instructions for 64-bit mode
  • TARGET_CPU_MIPS – Compiler is generating MIPS instructions
  • TARGET_CPU_SPARC – Compiler is generating Sparc instructions
  • TARGET_CPU_ALPHA – Compiler is generating DEC Alpha instructions


These conditionals specify in which Operating System the generated code will
run. Indention is used to show which conditionals are evolutionary subclasses.

The MAC/WIN32/UNIX conditionals are mutually exclusive.
The IOS/TV/WATCH conditionals are mutually exclusive.

  • TARGET_OS_WIN32 – Generated code will run under 32-bit Windows
  • TARGET_OS_UNIX – Generated code will run under some Unix (not OSX)
  • TARGET_OS_MAC – Generated code will run under Mac OS X variant
    • TARGET_OS_OSX – Generated code will run under OS X devices
    • TARGET_OS_IPHONE – Generated code for firmware, devices, or simulator
      • TARGET_OS_IOS – Generated code will run under iOS
      • TARGET_OS_TV – Generated code will run under Apple TV OS
      • TARGET_OS_WATCH – Generated code will run under Apple Watch OS
      • TARGET_OS_BRIDGE – Generated code will run under Bridge devices
      • TARGET_OS_MACCATALYST – Generated code will run under macOS
    • TARGET_OS_SIMULATOR – Generated code will run under a simulator
  +---------------------------------------------------------------------+
  |                            TARGET_OS_MAC                            |
  | +---+ +-----------------------------------------------+ +---------+ |
  | |   | |               TARGET_OS_IPHONE                | |         | |
  | |   | | +---------------+ +----+ +-------+ +--------+ | |         | |
  | |   | | |      IOS      | |    | |       | |        | | |         | |
  | |OSX| | |+-------------+| | TV | | WATCH | | BRIDGE | | |         | |
  | |   | | || MACCATALYST || |    | |       | |        | | |         | |
  | |   | | |+-------------+| |    | |       | |        | | |         | |
  | |   | | +---------------+ +----+ +-------+ +--------+ | |         | |
  | +---+ +-----------------------------------------------+ +---------+ |
  +---------------------------------------------------------------------+


These conditionals specify in which runtime the generated code will
run. This is needed when the OS and CPU support more than one runtime
(e.g. Mac OS X supports CFM and mach-o).

  • TARGET_RT_LITTLE_ENDIAN – Generated code uses little endian format for integers
  • TARGET_RT_BIG_ENDIAN – Generated code uses big endian format for integers
  • TARGET_RT_64_BIT – Generated code uses 64-bit pointers
  • TARGET_RT_MAC_CFM – TARGET_OS_MAC is true and CFM68K or PowerPC CFM (TVectors) are used
  • TARGET_RT_MAC_MACHO – TARGET_OS_MAC is true and Mach-O/dyld runtime is used
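A typical use of these conditionals is compile-time branching on platform. This sketch guards on __APPLE__ so it also compiles on non-Apple systems, where TargetConditionals.h doesn’t exist:

```cpp
#include <string>

// TargetConditionals.h only exists in Apple SDKs; the __APPLE__ guard
// lets this sketch compile anywhere. (Undefined macros evaluate to 0 in
// #if expressions, so the short-circuit below is safe.)
#if defined(__APPLE__)
#include <TargetConditionals.h>
#endif

std::string platformName()
{
#if defined(__APPLE__) && TARGET_OS_OSX
    return "macOS";
#elif defined(__APPLE__) && TARGET_OS_IOS
    return "iOS";
#elif defined(__APPLE__)
    return "other Apple platform";
#else
    return "not an Apple platform";
#endif
}
```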

I hate Swift.

I hate Swift.

I know this isn’t the sort of thing that, as a developer of macOS and iOS software I’m supposed to say. After all, Swift is new, Swift is great, Swift has idioms which prevent you from getting into trouble. And all things New And Improved!™ are supposed to be better.

Worse, if you’re a neanderthal like me who hates Swift, it’s because you’re a Bad Person. Bad people are terrible software developers, people who don’t know what they’re doing, people who don’t understand the way.

But I’m going to say it anyway.

I hate Swift.

Now don’t get me wrong; there are a lot of things Swift does really well. Swift handles null pointers and null references really well. I like the fact that a nullable type is a first-class object that requires explicit handling and explicit unboxing. I appreciate the ‘?’ operator for optional-chaining, and the ‘!’ operator for forced-value expression, and the ‘??’ operator for providing a default for a nullable variable. Granted all this took getting used to, but I appreciate them because they cause one to try to write better code, if used mindfully.

But Swift is a persnickety language, and it definitely has a “happy path.”

Meaning if you’re using Swift to string together some pre-existing views and controls and carry the data from those controls around to other places in your code–the 90% of the stuff you have to do in order to make a working macOS or iOS application–it’s fantastic. It’s great. The persnickety element sometimes blocks your flow and makes you think “hey, should I be doing this?” or “hey, shouldn’t I deal with this potential problem?” But it works.

For the happy path. For the 90% of your code.

But Swift definitely intrudes. It definitely has a way in which it wants you to do things.

Now let’s be clear: I personally prefer to catch as many errors as possible at compile time rather than at run time. And Swift’s optional-chaining/forced-value stuff surfaces null pointer errors as compile-time errors.

But Swift… weirdly it misses a few things.

Swift doesn’t have a way to define an abstract method or an abstract function. This makes certain idioms hard to write–and worse, Swift’s answer, used in parts of Apple’s frameworks, is to turn what could have been a compile-time error (an unimplemented abstract method, as you’d see flagged in Java or C++) into a run-time error (by creating an ’empty’ base declaration that throws an exception).

So Swift’s nannying is… incomplete. Sometimes woefully so.

And weirdly so, given how persnickety Swift can be sometimes. “Yes, you have to think through exactly if and when and how this variable may be null. But a method you needed to override? Meh, CRASH!”

And Swift’s persnickety behavior makes anything revolving around pulling apart a String and handling it as an array of Characters… challenging. It can be done, but you wind up wandering down a hierarchy of declarations–String; Substring, which ironically enough is not a String; Character, which looks like a string except it isn’t; weird things like UTF16View and UTF8View and arrays of things that look like integers; plus a whole bevy of ‘Unsafe’ pointer things whose declaration and usage seems to change every five minutes with language revisions–that makes writing a per-character lexical analysis program an exercise of digging through the hierarchy of Swift declarations to figure out the current ‘one true way’ to handle strings.

(And yes, I understand that Swift’s string handling is constrained by the Unicode standard, which itself is… oddly twisted in weird ways; the handling of characters in the U+10000 – U+10FFFF “supplementary planes” range is a pain in the ass, especially if you think “well, just encode it as an array of 16-bit unsigned integers”, as Java does. Simply saying “well, we can ignore these” doesn’t work if you ever want to write an emoji. 😀 But can’t I just get an array of 16-bit integers or 32-bit integers, manipulate that, and turn that back into a String without making a whole Federal production out of it?)
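For the curious, here is what the 16-bit encoding actually costs: any code point above U+FFFF must be split into a UTF-16 surrogate pair, so one ‘character’ is no longer one array element. A sketch of the encoding:

```cpp
#include <cstdint>
#include <vector>

// Encode a Unicode code point as UTF-16 code units. Code points above
// U+FFFF become a surrogate pair: subtract 0x10000, then split the
// remaining 20 bits across a high (0xD800-based) and low (0xDC00-based)
// surrogate. U+1F600 (the grinning-face emoji) becomes 0xD83D 0xDE00.
std::vector<uint16_t> toUTF16(uint32_t codePoint)
{
    if (codePoint <= 0xFFFF) {
        return { (uint16_t)codePoint };
    }
    uint32_t v = codePoint - 0x10000;
    return {
        (uint16_t)(0xD800 + (v >> 10)),     // high surrogate (top 10 bits)
        (uint16_t)(0xDC00 + (v & 0x3FF))    // low surrogate (bottom 10 bits)
    };
}
```

This is why “a Character is one array element” breaks down the moment an emoji shows up, and why Swift refuses to let you pretend otherwise.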

And what the hell was Apple thinking with the “Unsafe” pointer reference types? I mean, I pride myself on understanding the ins and outs of odd corners of a programming language–but really, what the hell, man; what the hell? At least Java dealt with it by making arrays of basic types first-class citizens, instead of returning unsafe blobs of memory that may or may not be represented by some genericized unsafe reference thing that may or may not be convertible to something useful.

All of this means that as soon as you wander off the happy path: as soon as you start (for example) implementing some complex algorithm involving anything more complex than a push-down stack or want to implement some string parsing system or build something using a complex array of data structures–or God help you if you need to handle memory in a way that is off the happy path of ARC–Swift definitely gets in the way.

And writing an algorithm, like building a variation of a convex hull algorithm in Swift, becomes akin to wrestling a bear.

Which I hate, because in implementing the algorithm I’m already wrestling a bear–and I don’t need Swift to intrude every five minutes with a “you can’t express walking down a stack with a pointer” or “you want to autoincrement what?” or “no, you can’t simply delete the middle five objects and return it as new object.”

It’s not that Swift makes implementing such an algorithm impossible. But Swift keeps wanting to poke itself into the conversation when I need it to shut the hell up for a minute so I can think about my algorithm instead!

So I hate Swift.

Honestly, I prefer using Objective C to Swift. Or heck, dropping into C or C++.

Hell, I’ll take Java over Swift if I need to do anything more complex than “glue button A to array B so table C can be populated with the sorted values.” Even though Java requires you to be incredibly verbose about everything–at least you can use the code refactoring functionality of a good Java IDE to track all those weird imports and type declarations. Though honestly I think Java could use some of Swift’s compiler mojo to eliminate some of the verboseness.

After all, do I need to write `thing = Enum.VALUE;` when the compiler already knows `thing` is of type `Enum`?

But yeah, I’d rather take verbose Java over concise and persnickety Swift for writing anything more complex than the plumbing that is 90% of modern app development.

Plumbing which, I will grant, Swift absolutely excels at.

Security. Not “Cyber Security.”

Answering a question found on Marginal Revolution, an economics blog which I periodically follow:

Long-time MR reader here. I have a question: what is the appropriate framework to think about incentives (economic or otherwise) for electric power utilities to beef up their cybersecurity?

Here are some random thoughts, in no particular order. If you think you already understand security processes, you can skip down to “incentives.”

First, you can’t understand the incentives of a thing without understanding the thing itself. So it’s worth drilling down into what we mean by “cybersecurity.”

Second, I wish the hell we’d get away from the term “cybersecurity” and call it what it really is: security.

Talking about “cybersecurity” is like focusing on the locks of a door: how many pins it has, the type of the lock, if the lock is electronic or driven by a physical key, if it’s brass plated or plated in steel–all the while the window right next to your front door stands wide open, inviting anyone who wants to climb right through.

Don’t believe me? The earliest hackers–the ones who hacked into the AT&T long-distance system in order to make free long-distance phone calls–discovered how the AT&T system worked when someone broke into AT&T’s offices and stole the specifications for their system.

And he did that by dressing up as a befuddled new employee wandering the offices on his first day of work, trying to figure out where his desk was.

Never underestimate social engineering. It doesn’t matter if you have the best firewalls in the business if someone can simply walk through the front door dressed as a janitor and abscond with a laptop computer containing all your company’s secrets.

Now security itself is a complex topic. But basically:

1. Security is not ‘salt’ which can be sprinkled into the recipe to make things taste better. Security is a corporate-wide principle that must be architected into all aspects of your business, your software, your processes, your practices. Security must start with the C-suite: if your CIO or CTO or CSO (Chief Security Officer) cannot understand security, cannot create policies and procedures to correctly implement security, or lacks the authority to implement those policies and procedures, then you will never have a secure company.

2. Security does not, however, require that your corporation be “paranoid.” You don’t need armed guards following around everyone in the building. But it does mean you should start at the front door–both virtually and physically–and check to make sure everyone entering has the proper authority to be there.

And note some forms of security can actually make things worse: requiring employees to change their passwords every 3 months and never use any duplicates means the person who has been there for 10 years would have had to create 40 separate secure passwords during his tenure–which implies he’ll be writing his passwords down on a sticky note somewhere on his desk. This form of paranoia has made things worse, not better.

3. Security also requires constant training. Computer security–including social hacks, e-mail attacks and hackers trying to guess passwords–requires its own training. If we were just as diligent about computer security training as we are about HR training for sexual harassment, most companies would be far more secure than they are now.

4. Once we get past the policies and procedures (making sure employees are properly identified at the front door–logging access as appropriate, making sure employees are properly trained not to open malicious e-mails, making sure employees leave sensitive documents at work, or at least use a separate corporate-controlled laptop when traveling abroad)–then we can talk about specific products. But even there, those products must be integrated properly throughout the system; otherwise, we’re back to discussing the best door lock for your front door while neglecting the open window right next to it.

All of this also applies to how we engineer software products: security is not a salt that gets sprinkled onto your software product in the hope that things get better. It must be architected into the system from the ground up.

Unfortunately most software developers don’t understand this. Worse, they think certain things–like certain protocols (such as SSL)–are sufficient to protect confidential information, as if somehow your browser only makes a secure connection after the user has logged in.

(For example, I once worked on a mobile product with a back-end component. It turned out the back end was not checking the credentials on each transaction; it relied on the mobile app to only make API calls the user was authorized to make. And it relied on SSL to keep people from ‘sniffing’ the packets to understand the API calls being made. The thing is, while SSL makes it harder (but not impossible) to sniff other people’s transactions, there are a number of products you can buy which let you sniff the packets between your own device, logged into your own account, and the back end. And that allows you to easily figure out the format of the packets–and create your own application which can use the back-end API.

Long story short: a dedicated hacker with a few hours of time could have easily hacked that system enough to create his own mobile app. And because our back end was not validating the security of each transaction, he could have easily hijacked our system as a result.

No firewall, no authentication manager, no third-party product could fix that glaring oversight.)

So when architecting a product:

5. Engineer the product for security at every phase of processing data. Every part of the system: the front-end, the back-end, the database side–all should have an opportunity to refuse to process data because of a failed security check. Think of it in the same way as building physical security into a building: if everyone has a key card, it’s easy to put key card readers throughout different areas of your building to make sure the person is allowed to be there.

6. Separate rules govern the processing of credit cards and other sensitive financial data. Basically anything that contains sensitive data needs to be put into its own secure enclave. Only certain people should be permitted to access that enclave for testing and development purposes–and they should be held to higher standards of trust (forced to sign separate contracts that contain specific penalties for violating that trust, such as criminal prosecution for stealing credit card data).

7. All transactions must be logged. That goes double for anything accessing secure financial data, and the logs should be easily accessed for security review. And they should be regularly reviewed by someone on the team for any financial or access irregularities.

8. And I can’t believe I have to add this, but I must: REST architecture (that is, “stateless back-end architecture”) is a fucking disaster if you don’t allow the back end to represent client security access. In other words, there are those who think that “REST” implies that all state–including client access permissions–should be represented in a state object on the client side. This is a fucking disaster–because it implies the client has the chance to change its own permissions illegally (see my story about the mobile app which was responsible for API security above)–and even if the “state object” is represented as an encrypted opaque object that you think a hacker can’t crack (*cough*), it does subject your API to replay attacks.
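To make point 8 concrete, here’s a minimal sketch–in Python, with an in-memory dict standing in for a real session store and a made-up ‘delete account’ handler–of what it looks like to keep client permissions in the back end, handing the client nothing but an opaque random token:

```python
import secrets

# Hypothetical in-memory session store; a real deployment would use a
# shared database or cache. The client never sees this state--only an
# opaque random token that indexes into it.
SESSIONS = {}

def log_in(user_id, role):
    """Issue an opaque token; the permissions stay on the server."""
    token = secrets.token_hex(16)
    SESSIONS[token] = {"user": user_id, "role": role}
    return token

def handle_delete_account(token, target_user):
    """Every request re-checks authority against server-side state."""
    session = SESSIONS.get(token)
    if session is None:
        return "401 Unauthorized"
    if session["role"] != "admin" and session["user"] != target_user:
        return "403 Forbidden"  # the client cannot grant itself this right
    return "200 OK"

admin_token = log_in("alice", "admin")
user_token = log_in("bob", "user")
print(handle_delete_account(admin_token, "bob"))   # 200 OK
print(handle_delete_account(user_token, "alice"))  # 403 Forbidden
```

Because the token carries no permissions of its own, there is nothing for the client to tamper with; the worst a captured token can do is impersonate that one session until the server invalidates it.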

So after this long wind-up, here’s the pitch about “incentives.”

Because “security” is a corporate process and not a magic salt that can be sprinkled onto your bland food to make it taste better, and because we never think about security until our house is broken into and our brand-new TV set is stolen–customers never think about security until something breaks.

You probably have never considered for a millisecond if the assembly plant that made the parts for your car has a lock or a gate or armed security patrolling the grounds; you only care that your car handles well. Until the moment the on-board computer goes haywire and blows up the engine–at which point you’re angry your car doesn’t work.

And you probably never considered the possibility that it blew up because some hacker acting like a befuddled employee swapped the software being programmed into the car’s onboard engine management system downstream in some parts supplier to your car company.

Worse: sometimes the software fails because the company’s own software developers were idiots. Meaning it didn’t even require a hacker at all; just poor quality assurance processes during the development cycle. (Don’t believe me? Consider Boeing’s latest 737 issues caused by a software glitch.)

All of this is to say that security is “invisible” to the average consumer.

And the incentive of every corporation is to sell a product to the consumer at the cheapest price point possible while providing the consumer a decent experience. (Meaning, from a manufacturing perspective, the balance is between “how cheap can I make it,” “how do I maximize sales,” and simultaneously “how do I minimize points of contact with customer service, and how do I minimize product returns.”)

Security does not factor into any of this at all.

So, given that customers don’t actually care about security until their credit cards are stolen, what can we possibly do to incentivize companies to improve their security?

Well, first, we can get away from calling it “cybersecurity” and get away from telling stories about Russian hackers creating DDOS attacks and the “threat to our power grid.”

A lot of this is bullshit being used to sell consulting time and software products from companies like Symantec (full disclosure: I used to work for them) to companies who don’t want to go through the process of a security audit, because their C-suite hasn’t a fucking clue.

Second, we need to treat security in the same way we currently treat sexual harassment. That is, we need to educate the C-suite as to the need to understand security as a process–and just as we strive to make women secure at the workplace from abusive men, we need to strive to make the plant or office secure from unwanted intrusions.

This means security training–and the C-suite needs to take security training as seriously as it now takes sexual harassment training. That includes training covering e-mail, phishing attacks, social engineering attacks, properly identifying employees, and securing access points through the building using card readers.

And that also includes a number of potential software products, depending on the line of business the company is in, including properly integrating two-factor authentication, and having an IT department properly manage corporate laptops and desktop computers. That also includes putting some teeth behind these requirements: being willing to reprimand employees for lax security practices in the same way we reprimand employees for sexual harassment.

Third, we software developers need to properly segregate software and consider “enclaves” of information, challenging data at every point in the system in the same way we put card readers at various points inside an office building. That includes special attention to mission-critical systems and systems containing financial data–and that “attention” includes physical security: limiting access to those systems, logging card readers, regular security audits and review.

Fourth: if the Federal Government wants to get into the business of incentivizing security at corporations such as power plants, then it should encourage ‘security audits’ of those corporations–including encouraging corporations which lack the proper skill set within their C-suite to at least hire a ‘director of security’ who has the authority to mandate corporate change for all physical and computer security, including security audits of all software used by that corporation or developed for it.

Unfortunately I don’t think we’ll get this from the Federal Government.

Instead, I suspect they’ll just throw money at the situation, subsidizing a bunch of high-priced security “experts” to recommend companies buy a bunch of security “toys”–like fancy front-door locks which look really nice, but fail to address that open window next to the front door.

Things to remember: passing in structure or class pointers in Objective C.

So in Objective C or Objective C++, if you pass in a pointer to something that is not a basic type (like ‘int’ or ‘double’ or ‘void’), the Objective C compiler assumes it’s an Objective C class. It needs to know this so it can perform automatic reference counting.

If you need to pass in a pointer to a class or a structure object, write this instead:

- (void)function:(class CPPClass *)classPtr;

or:

- (void)function:(struct CStruct *)structPtr;

Otherwise, if you don’t use the class or struct pointer qualifiers, Objective C++ (and presumably Objective C, for the struct keyword) will assume it’s an Objective C class, and compilation will fail.

Stating the obvious.

Just read another article telling us of yet another way to build a great product for yet another set of technologies. And I can’t believe I’m about to list some things that should be painfully obvious to any developer–things which were ignored by this article.

When building a web site or a mobile application which communicates with a back-end server:

Assume your wire protocol is completely insecure.

There are dozens of tools out there which allow a user of your app (your web site, your mobile app, your desktop app) to see the transactions going back and forth to your back-end server.

“But I’m using HTTPS, that’s secure.”

Sure, against third party snoopers–and not even there, if someone compromises the certificates, or if it’s a product designed to peek inside HTTPS packets. (I’m looking at you, Symantec.)

But it does not protect against someone with the right tools creating a proxy that allows them to decode the traffic–and see exactly how your API works.

Corollary: If your business logic is in the front end, you don’t control your business logic.

Once someone has decoded your API protocol, it’s easy for them to then make calls into your API protocol. And if your logic to determine what a user can and cannot do is contained in your app, it’s easy for them to bypass those checks and (oh, say) ship a thousand dollars worth of product to their front door without paying a dime.

Corollary: If your security checks are in the front end, your site is insecure.

That basically follows from the above.

Interestingly, we tend to forget this when attempting to implement ‘RESTful’ interfaces–that is, supposedly stateless server interfaces–by pushing security checks onto the client. But that subjects your app to a “replay attack”–where a bad guy snoops the web traffic, discovers the token representing security state, then fiddles with that security state to obtain control.

A secure pure REST interface is impossible, if only because when a user logs in, that login state (such as an OAuth token) must be generated and transmitted to the front-end. More importantly it must be invalidated and a new login state token generated the next time the user logs in. You can’t issue the same OAuth token each time the user logs in (say, by doing an SHA-256 hash of the user’s ID plus a salt token), because that subjects your system to a replay attack.
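A sketch of that difference, using Python’s hashlib and secrets modules (the salt value and function names here are made up for illustration): a token derived deterministically from the user’s ID is identical on every login–so once captured it replays forever–while a randomly generated token can be rotated, with the old one invalidated server-side:

```python
import hashlib
import secrets

SALT = "example-salt"  # hypothetical fixed salt

def deterministic_token(user_id):
    # Bad: the same user gets the same token on every login, so a
    # captured token can be replayed indefinitely.
    return hashlib.sha256((user_id + SALT).encode()).hexdigest()

def random_token():
    # Better: fresh randomness on every login; the previous token can
    # be invalidated server-side when a new one is issued.
    return secrets.token_hex(32)

print(deterministic_token("alice") == deterministic_token("alice"))  # True
print(random_token() == random_token())                              # False
```

The random token is useless on its own, of course; the server still has to track which tokens are currently live, which is exactly the bit of state a “pure” stateless REST design tries to wish away.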

That’s not to suggest that, outside of the authentication subsystem, your interface shouldn’t be stateless.

But never allow the perfect to get in the way of the good–because there are unintended consequences.

And for God’s sake, don’t send the user an encrypted data structure which contains the access control entries they have access to! It’s just a matter of time before someone figures out how to decode that data structure, change the ACEs, and become a superuser.

Instead, admit your RESTful interface is not completely stateless–you have to manage access control lists as state–and move on.

Given all this, your app can still be far more than a pretty presentation layer over a series of calls which return the contents of each app page as XML. There are times when it is appropriate, for example, to put some business logic in your front-end–but only to reduce the number of round-trips to the back end.

For example, if you have a page that accepts a credit card, you don’t need a network call just to check that the user entered a date, that the credit card number passes the card-number checksum, or that the user didn’t forget the security code.

But this does not alleviate the need to duplicate these checks on the back end.
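As an aside, the card-number checksum referred to above is the Luhn algorithm, which is cheap enough to run in the front end before any network call. A minimal Python sketch:

```python
def luhn_valid(number: str) -> bool:
    """Luhn check: double every second digit from the right, subtract 9
    from any doubled digit over 9, and verify the sum is a multiple of 10."""
    if not number.isdigit():
        return False
    total = 0
    for i, ch in enumerate(reversed(number)):
        digit = int(ch)
        if i % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # True: a well-known test number
print(luhn_valid("4111111111111112"))  # False: fails the checksum
```

A passing checksum only means the digits are plausible; per the point above, the back end still has to repeat the check–and actually authorize the card.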

Ultimately, there is no such thing as a free lunch. There is no theoretical model which does a better job than existing models–just tried and proven concepts we should keep implementing, refining them only when there is an honest-to-God problem. We certainly shouldn’t be scrapping it all for the next blue-sky concept; that’s just giving into The Churn, one of the most monumental wastes of everyone’s time.

A Post For A Friend.

This is in response to someone asking for possible interfaces for building complex and/or queries.

Years ago I worked on a program called “BugLink,” and it had an interface for building complex and/or queries. The user interface worked like this:

When you first opened the application, the top of the screen had a search field that looked like this:

Image 1

The idea is simple: there is a field selector which selects a pre-defined field from a pop-down menu. There is an operator selector which picks from a list of predefined operators from a pop-down menu–and the type of operators change depending on the type of field. (For example, strings may include a “is substring of” operator that a number field does not.) And “value” can either be a pop-down for booleans (true/false), or an edit field or selector allowing you to enter a value. (If you allow dates, then tapping on the value field may bring up a calendar picker.)

Now if you clicked “Add Field”, you get a new row. But you also get a new column, associated only with the first row in the list of rows: the operator you wish to apply to the list of fields.

Image 2

The idea here is that now, you can pick how you want the rows to work: do you want all of them to be true before the query works? Or do you want any of them to be true? And or Or.

As before you can set the value of each of the rows–the field, operator and value. And fields can repeat; that allows you to search in a date range.

You can of course add more rows:

Image 3

And you can also select rows:

Image 3S

Now here’s where the magic happens. You can also “group” and “ungroup” rows. When you do this, the selected rows indent to the right, and a new boolean operator appears:

Image 5

This allows you to create complex queries.

Naturally you need to be able to handle a bunch of fringe cases. For example, you need to decide how to handle grouping at multiple levels. (One possibility is to simply disable group and ungroup. Another is to pick an item–the first selected item or the deepest item–and move everything else into a group under that item, regardless of position in the tree.)

And you need to be able to handle other fringe cases that seem odd at first, such as having the same operator at different group levels:

Image 6

and having only one item in a group:

Image 4

But aside from handling the fringe case of grouping and ungrouping items that may be scattered across the query, there really are no fringe cases that can occur.

Further, I liked this user interface because it progressively reveals itself. The simplest query has no boolean operator. Adding a new row gives you a new option. And grouping and ungrouping reveals further complexity. (Of course you need a way to signal that queries can be grouped or ungrouped–one could handle this by showing a button that allows grouping or ungrouping. You can also explain there is a right-click pop-over menu that will show up–but I’ve never cared for pop-over menus because there is often nothing that suggests to the user that one could show up.)

But this was a fairly good interface for handling building complex boolean queries–and while it lacks a ‘not’ operator, one could theoretically add that to the operator pop-ups with a little extra work and a little extra consideration about how the user interface is to work. (For example, you could add “not” to the boolean popup if there is only one row in the group–or add a ‘not and’ and ‘not or’ if there are two or more items in that group.)
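For what it’s worth, the data model behind this kind of interface is just a tree: leaf nodes are (field, operator, value) rows, and interior nodes are the And/Or groups created by the group command. A minimal Python sketch, with made-up field and operator names, showing how such a tree might be represented and evaluated against a single record:

```python
# Leaf nodes are (field, operator, value) tuples; interior nodes are
# ("and" | "or", [children]) tuples, mirroring the indented groups in the UI.

OPERATORS = {
    "equals": lambda a, b: a == b,
    "contains": lambda a, b: b in a,
    "greater than": lambda a, b: a > b,
}

def evaluate(node, record):
    """Recursively evaluate a query tree against one record (a dict)."""
    if node[0] in ("and", "or"):
        connective, children = node
        results = (evaluate(child, record) for child in children)
        return all(results) if connective == "and" else any(results)
    field, operator, value = node
    return OPERATORS[operator](record.get(field), value)

# status equals "open" AND (priority greater than 2 OR title contains "crash")
query = ("and", [
    ("status", "equals", "open"),
    ("or", [
        ("priority", "greater than", 2),
        ("title", "contains", "crash"),
    ]),
])

bug = {"status": "open", "priority": 1, "title": "crash on launch"}
print(evaluate(query, bug))  # True
```

Grouping and ungrouping in the UI then become simple tree edits: wrap a set of sibling nodes in a new group node, or splice a group’s children back into its parent.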

OCTools Update

I’ve taken the liberty of making a number of changes to the OCTools library to prepare for a first 1.0 release. Amongst other things, I’ve updated the documentation, built sample parsers in Objective C and C++, and added support for generating Swift, along with an example Swift parser.

I’ve also produced an installation package, which can be downloaded to install the tools in /usr/local/bin.

The GitHub library can be found here. Full documentation is here, and the algorithms are described here.