Please resize your designs to test if they work!

*sigh*

I remember months ago making this rant on my Facebook account. Yet it still continues.

Okay, look: if you’re laying out a user interface for Android or for iOS, remember that on iOS the minimum feature size someone’s finger can reasonably touch is 44 x 44 pixels. (On “retina-resolution displays” this becomes 88 x 88 pixels.)

So stop designing what looks good on your gigantic 30″ monitor, and start designing for what will look good on a 7″ iPad Mini, an iPhone 5, or a 4″ Android phone.

If you really want to see what a design will look like to the user, take your 2048×1536 Photoshop mockup and resize it down to 628 x 471 pixels. (NO, not 1024 x 768.)

On a typical desktop monitor with 100dpi resolution, a 628×471 pixel image is approximately the same physical size as the iPad Mini with its 163 dpi screen.

If you resize your design down to that size and the design looks like a muddled mess–well, guess the fuck what? When the programmer is done implementing your design, it’s going to look like a muddled mess.

And as the designer, guess whose fault that is? The programmer’s? Think again.

The same goes for the iPhone 5 (196 x 348), the iPhone 4 (196 x 294), the Google Nexus 4 (405 x 243), the Nexus 7 (592 x 370) and the like.

Remember: your desktop screen is around 100dpi. The devices you are designing for, however, are not–and what may look spacious and free and open on your 30″ monitor at 100dpi will look tiny and cramped and cluttered on a device with only a 4″ or 7″ or 10″ diagonal…

Building a static iOS Library

I’m using the instructions that I found here: iOS-Framework

But here are the places where things deviated thanks to Xcode 4.6:

(1) At Step 2: Create the Primary Framework Header, for some reason (I suspect something changed), specifying target membership for a header file no longer appears to work. The notes suggest using the Build Phases “Copy Files” section to specify where and how to copy the files.

So what I’m doing is, for every publicly available header file, (a) make sure it’s inserted into the “Copy Files” list, and (b) make sure the destination is given as “Products Directory”, subpath include/${PRODUCT_NAME}.

(2) At Step 3: Update the Public Headers Location, note the script in step 5 uses “${PUBLIC_HEADERS_FOLDER_PATH}” to specify where files are to be copied from. So in Step 3, we need to make sure the public headers folder path is set to something more reasonable.

In this step, set the public headers folder path (and, for good measure, the private headers folder path as well) to “include/${PRODUCT_NAME}”.

These changes get me to the point where I can build the framework after step 5.


There was one other hitch: you cannot include the framework project (in the later steps) into the dependent project while the framework project is still open.

Learning to fly, and Jeppesen’s map data on Garmin is wrong.

It’s been a while since I’ve posted, I know.

But I have a good excuse. I’ve been learning to fly.

Flying is the coolest thing I’ve done in a long time. And in the past six months I’ve gone from my first lesson to being just a few weeks away from my check ride. My last flight was my long cross country from Whiteman Airport to Bakersfield and San Luis Obispo. And the view! There is nothing cooler than seeing Avila Bay from the front windscreen of an airplane under your control.

Now to justify spending all this money learning to fly (it ain’t cheap!) I’ve been spending some time building aviation-related software products. My first product was an E6B calculator which also includes methods for calculating maneuvering speed (that changes as the weight of your plane changes, and knowing it is vital when you hit turbulence).

My second product will be an EFB, a program which helps show you where you are on a map displaying airspace data, and also allows you to create a flight plan and file a flight plan with the FAA.

And it is in building the mapping engine for Android (my first targeted platform) where my current story starts.

Building a map and testing.

In order to make rendering on Android quick, I’ve built a slippy map engine that uses OpenGL. There are several advantages to this, the biggest being that if you scroll the map around you don’t have to redraw the entire screen. Instead, a few OpenGL translate or rotate calls–and you’re done. Add some code to detect when you need to rebuild your tiles, render the tiles in the background in a separate thread, and replace the tiles in the OpenGL instance once they’re done–this allows you to scroll around in real time even though it may take a second or so to render all the complex geometry.

And in testing my slippy map code, I noticed something.

My source of data for the shape of the airspace around Burbank, Van Nuys and Whiteman is the FAA FADDS (Federal Aeronautical Data Distribution System) database, updated every 56 days. I have the latest data set for this current cycle, and I’ve rendered it using my OpenGL slippy map engine in the following screen snapshot:

You can ignore the numbers; those are just for debugging purposes. The airspace being highlighted is the airspace at my altitude (currently set to 0); dim lines show airspace at a different flight level.

And I noticed something–interesting–when comparing the map with my Garmin Aera 796 with the Jeppesen America Navigation Data set:

There are a few differences.

Okay, let’s see what the printed VFR map shows for the same area:

I’ve taken the liberty of rotating the map to roughly the same orientation as the other maps.

And–there are errors.

Now the difference between the FADDS data set and the printed map is trivial: there is this extra crescent area that on the printed map belongs to Van Nuys:

The errors with the Jeppesen data set, however, are worse:

On this image I’ve superimposed the image of the terminal air chart for Van Nuys, Burbank and Whiteman on top of the Garmin’s screen. It may be hard to see exactly what’s going on, but if you look carefully you see three errors.

The first error is on the west side of Van Nuys’ airspace: it has been straightened out into a north-south line. On the chart, Van Nuys’ airspace curves to match Burbank’s class C airspace on the west.

The second error is the northern edge of Burbank’s C airspace over Whiteman. Whiteman’s class D airspace is entirely underneath Burbank’s class C airspace–but the border has been turned into a straight line on the Garmin.

The third error is a tiny little edge of airspace that shows Van Nuys and Whiteman’s class D airspaces overlapping. On the printed map, this little wedge belongs to Whiteman.

They say a handheld GPS device should only be used for “situational awareness” and not for navigation.

Well, one reason is simple: the airspace maps you’re looking at on the hand-held may be wrong.

I’ve also noticed similar errors around Oakland’s Class C airspace. Large chunks of the airspace have been turned from nice curving lines (showing a radius from a fixed point) into roughly shaped polygons.

Now my guess is this: the raw FADDS data from the FAA is similar to most mapping data: rather than specifying round curves, the georeferenced data is specified as a polygon with a very large number of nodes; some of the airspace curves are specified with several hundred points.

And somewhere during the conversion process, the number of points is being reduced to keep the file compact and rendering fast: if the length of one edge of the rendered polygon is less than a couple of pixels, there is no point keeping the nodes of that line; you can approximate the polygon with one that has fewer edges, with less than a single display pixel of error.

But somewhere along the path, too many polygon edges are being removed–turning what should be a curved line into a straight edge.

I don’t know if this is because there is a limit to the number of polygon lines permitted in the file format being used for export and import, or if this is because approximation code is going haywire.

But curved lines are being turned straight–which implies if you are skirting someone’s airspace, and relying on your hand-held Garmin–you could very well be intruding into someone’s airspace and not even know it.

WordPong is out!

So now I can talk about the reason why I cared what the top 50 words in the English Language are (at least by looking at word frequency from various Project Gutenberg free books): WordPong on the Apple App Store.

The reason for the frequency list is simple: in order to create a difficulty slider that goes from easy to hard game play, one thing I needed was a sorted dictionary of words, with the most often used words at the top of the list. Word frequency here is used as a proxy for word complexity: the more often a word is used, the more likely it will be familiar to younger members of the WordPong audience. And while I also use other parameter tweaks to simplify computer game play (such as how fast letters are picked by the computer and how likely the computer is to randomly flick a letter), word difficulty is one major aspect of the game.

It works quite well, by the way: at the first level the computer player limits itself to just the top 150 words in my sorted dictionary. At a middle level it’s restricted to the top thousand or so words: the theory being that the words should be almost instantly recognizable by the human player.

An interesting exercise.

So I’m developing this game in my spare time, and I needed a list of commonly used words. I downloaded the Project Gutenberg DVD of popular texts and an open source dictionary of words, and built a simple program which scanned the DVD, parsed out all the words, compared them against the dictionary, and updated the count of those words to arrive at a list of commonly used words.

And here’s the top 50 words I found, in order:

the
of
and
to
in
that
he
was
it
his
is
with
for
as
you
on
had
not
be
at
but
by
this
her
or
which
from
have
they
she
all
him
we
are
were
my
me
so
one
an
no
their
if
there
who
said
them
when
would
been

I don’t know if this is cool or… mundane…

*sigh*

And then some asshole hacked my blog. The only thing I lost was the theme I had tinkered into something resembling what I liked. Now I’m stuck with this until I can restore the old template formatting I liked.

How do I know what environment variables are on iOS?

Apple’s iOS operating system is built on Unix, which implies that you can take your Unix-based C code, wrap it around some Apple iOS UI goodness, and have an iPhone application.

(Well, it’s more complicated than that. But if you’re a crusty ol’ Unix hacker like myself, there is something gratifying being able to use fopen() and getenv() instead of Apple’s NS-wrapped calls to these same routines.)

So how do you know what the environment variables are that are available on your phone?

Simple: I wrote a simple iOS program which neatly displays all of the available environment variables.

And the variables available appear to be:

  • PATH: The path of available unix commands
  • TMPDIR: The temporary directory path (points to the temp folder in your sandbox)
  • SHELL: /bin/sh, natch.
  • HOME: The home directory of your application (points to the root of your sandbox)
  • USER: mobile, natch.
  • LOGNAME: mobile

and some odd ones I’ve never seen:

  • __CF_USER_TEXT_ENCODING = 0x1F5:0:0
  • CFFIXED_USER_HOME, which appears to be the same as HOME

In a debug environment I’m seeing the additional oddball variables:

  • CFLOG_FORCE_STDERR = YES
  • NSUnbufferedIO = YES
  • DYLD_INSERT_LIBRARIES = some path apparently on my host computer. (?)

Now this is on my iPhone running iOS 5.1; YMMV. Which is why I uploaded the program. That said, I would trust that both HOME and TMPDIR will be available and point to the right place, and constructing the paths to the Documents and Library folders is just a matter of concatenating onto the path string returned for HOME. So if you need to write a new file to the Documents folder in the home directory of your application you can write:

char buffer[256];
strcpy(buffer,getenv("HOME"));
strcat(buffer,"/Documents/myfile.txt");
FILE *f = fopen(buffer,"w");
...
fclose(f);

UDIDs are gone.

After warning from Apple, apps using UDIDs now being rejected

UDIDs are now gone.

But it’s easy enough, if you need a device unique identifier, to generate one and track the device that way. Of course, because you have your own unique identifier you cannot match numbers against other software makers, and you can’t guarantee the identifier will survive if the user uninstalls and reinstalls the application–but on the other hand, if you save the unique identifier in a file that gets backed up and the user upgrades his phone, the identifier will move to the new phone, following the user.

Step 1: Create a UUID.

You can create the UUID using Apple’s built in UUID routines.

	CFUUIDRef ref = CFUUIDCreate(nil);
	uuid = (NSString *)CFUUIDCreateString(nil,ref);
	CFRelease(ref);

Step 2: Write the UUID out to the Documents folder, so the UUID gets backed up with the phone.

Because under the hood iOS is basically Unix, we can use the C standard library to handle creating and writing the file for us:

	char buf[256];
	FILE *f;

	/* HOME is the sandbox root directory for the application */
	strcpy(buf,getenv("HOME"));
	strcat(buf,"/Documents/appuuid.data");

	f = fopen(buf,"w");
	fputs([uuid UTF8String], f);
	fclose(f);

Step 3: If the file is already there, use what’s in the file rather than generating a new UUID. (After all, that’s the whole point of this exercise: to have a stable UUID.)

Putting all of this together, we get the following routine, which can be called when your application starts up in the main UIApplicationDelegate when you load the main window:

- (void)loadUUID
{
	char buf[256];
	strcpy(buf,getenv("HOME"));
	strcat(buf,"/Documents/appuuid.data");
	FILE *f = fopen(buf,"r");
	if (f == NULL) {
		/*
		 *	UUID doesn't exist. Create
		 */
		
		CFUUIDRef ref = CFUUIDCreate(nil);
		uuid = (NSString *)CFUUIDCreateString(nil,ref);
		CFRelease(ref);
		
		/*
		 *	Write to our file
		 */
		
		f = fopen(buf,"w");
		fputs([uuid UTF8String], f);
		fclose(f);
	} else {
		/*
		 *	UUID exists. Read from file
		 */
		
		fgets(buf,sizeof(buf),f);
		fclose(f);
		uuid = [[NSString alloc] initWithUTF8String:buf];
	}
}

This will set the uuid field in your AppDelegate class to a unique identifier, retaining it across application invocations.

Now any place where you would need the UDID, you can use the loaded uuid instead. This also has the nice property that the generated uuid is 36 characters long, 4 characters narrower than the 40 character UDID returned by iOS; thus, you can simply drop the uuid into your back-end database code without having to widen the table column size of your existing back-end infrastructure. Further, because the UDID and uuid formats are different, you won’t get any accidental collisions between the old and new formats.

And then it turns upside-down in an instant.

Cartifact is in the business of making print maps for high end real-estate companies, and I was brought in to help build a technical team, to make Cartifact a tech-oriented company in order to position it for an acquisition.

But sometimes things don’t work out the way you plan.

At the same time we brought in a new company President who spent the last several months going around and talking to different companies with whom Cartifact had a pre-existing relationship. And after doing this for several months, the conclusion was that Cartifact made beautiful maps, and so Cartifact should continue making beautiful maps, increasing their exposure via better marketing and trying to capture more of the market for making hand-drawn beautiful maps.

This means the idea of building a technology team and positioning Cartifact–well, that went away. I’m not sure exactly what happened behind the scenes, but the feeling I get was that they didn’t want to continue bleeding red for something they had no intention of building. And so there was no more money for me.

It’s troubling when you find out the opportunity you left a rather secure job for no longer exists. And it’s troubling to discover this opportunity no longer exists with perhaps a half-hour’s notice. But that’s the way of the world sometimes.

And the important thing is to re-orient yourself, shake out your contacts, and see where you want to go.

Ten years ago, nearly to the month, I wound down In Phase Consulting, a consulting company I had run with my wife for 9 years, and took a job at Symantec for two reasons. One, right after 9/11, the market for freelancing was drying up thanks to the Dot Com crash and the worries over 9/11. And two, I found that I was bumping up against my limits: I had never managed a team, I had never hired people, and I had never really learned how to delegate tasks or break down larger projects into smaller components.

And I wanted to get that experience.

Now I’m back to where I was 10 years ago, but this time, the market has changed. No more picking up the phone and making cold calls; we now have Facebook and LinkedIn and blogs and mailing lists and MeetUps. It’s easier now than ever to circulate out there and see what projects are there. And while this does not alleviate the marketing process, it does provide better avenues for searching.

And I’ve changed: I’ve successfully built a team from scratch. I’ve successfully hired and brought people up to speed. I’ve successfully delivered large scale projects, and for the most part when I’ve had enough control to keep things from going haywire, I’ve managed to consistently bring projects in on time and under budget.

So a few hours ago I filed an S Corp app with LegalZoom and created a very basic web site, Glenview Software Corporation, along with ordering business cards. I figure I’ll give the freelance thing a try again, but this time not be afraid of going after the larger projects.

We’ll see where this lands.

If you need a highly qualified software developer with 25 years of professional experience in mobile, embedded, web, desktop and client/server architecture software, please send me an e-mail.

Meanwhile I’m going to go back to what I do best: creating damned good software by solving very hard problems. Like that LISP interpreter which I now have running embedded on an iPhone and an iPad, which I plan to turn into a symbolic programmable calculator for those architectures.

On the development of my own Lisp interpreter, part 3: revisiting closures.

I managed to solve my lisp closure problem outlined in this post by adopting the ideas behind this paper: Closure generation based on viewing LAMBDA as EPSILON plus COMPILE. The basic idea of the paper is that you can rewrite lambda expressions as “epsilon” expressions (that is, lambdas in which all variable state from outside the lambda block is passed in as arguments), and compile those functions with a special method which modifies the stack when calling the lambda function.

Let me explain. Suppose we have the following bit of Lisp:

    (define (funct a)
      (lambda (b) (+ a b)))

When we call this function with an argument, say (funct 5), this returns a function pointer to a function that takes one argument (b) and adds 5 to b. So:

> (define tmp (funct 5))
> (tmp 3)
8

See what happened here? By calling funct we created a new function (which we stored in tmp). We can then call that function, and get a result.

The idea of the paper is that in order to get proper closure rules, you can walk the expression (because the beauty of Lisp is that Lisp programs are just lists of lists that a Lisp program can easily parse), and perform a substitution to rewrite our function in terms of “epsilon”, or rather, lambda functions which only have local scope.

Thus, the lambda expression of our function funct above:

    (lambda (a)
        (lambda (b) (+ a b)))

which describes a function taking one argument that returns a function taking one argument, would be rewritten:

    (lambda (a) 
        ($$closure (lambda (a b) (+ a b)) a))

Now this special function $$closure is magic. It takes as its first argument a precompiled function, and as the tail of its arguments a list of parameters. It then creates a new function which pre-pends its arguments on the stack and jumps to the original function. Thus:

    ($$closure #<procedure:anon-1> 5)

In the above case, the above procedure would generate the following function:

	POP DS   ; Pop the marker giving the size of the stack
	PUSH 5   ; Push the parameter 5 onto the stack
	PUSH DS  ; Push the stack size marker back onto the stack
	JMP <procedure:anon-1>

That is, the function would simply pre-pend the list of arguments to the start of the parameter list (adjusting the stack frame as needed to properly mark the start and end of the stack frame), and then jump to the original procedure.

The code rewriter I built also tracks the current usage of variables so I properly ignore variables not used interior to a function. For example:

    (lambda (a b)
      (set! f1 (lambda (c) (+ a c)))
      (set! f2 (lambda (c) (- b c))))

This is rewritten as:

    (lambda (a b)
      (set! f1 
            ($$closure (lambda (a c) (+ a c)) 
                       a)) 
      (set! f2 
            ($$closure (lambda (b c) (- b c)) 
                       b)))

Meaning we run our $$closure function passing only the additional parameters that are needed by each of my interior lambda functions.

We also handle the case where you can create a closure that is only accessible from an interior method. For example, if we define a function foo:

    (define foo (lambda (a) 
                        (lambda (b) (set! a (+ a b)) 
                                    a)))

This defines a function which, once run, will store the value ‘a’ away in a variable that is only accessible from the function that is returned. Calls to that interior function can then add to the interior value, and return that value.

Thus:

> (define bar (foo 5))
> (bar 1)
6
> (bar 2)
8
> (bar -3)
5

This interior variable is of course not shared:

> (define bletch (foo 3))
> (bletch 1)
4
> (bar 1)
6

Our routine accomplishes this by detecting whether any of the interior lambdas perform a set! operation to update an interior variable held by closure. If so, we “wrap” the variable in a list, so in effect we use the variable by reference.

Compiling our routine with the lambda closure function, we get:

    (lambda (a) 
      (set! a ($$wrap a)) 
      ($$closure (lambda (a b) ($$store a (+ ($$fetch a) b)) 
                               ($$fetch a)) 
                 a))

The $$wrap function wraps the variable with a list. The function $$store sets the wrapped value, and $$fetch obtains the wrapped value. These three functions are in fact declared as:

(define ($$wrap v) (cons v '()))
(define ($$fetch v) (car v))
(define ($$store v val) (set-car v val))

The way my code works is by doing an analysis pass, which computes the variable usage across the various lambdas in my declaration, then a rewrite pass, which uses the information gathered about variable scope to determine how to rewrite my lambda functions.

The first method is $$eval-scope, which walks through the lambda function, maintaining a record of all the variables that have been encountered. The specific record that I track contains the following information:

1) An association list of the currently encountered variables, maintained as a “push-down” association list. When I encounter a new variable, the variable information is pre-pended to the list of variables. This variable state information contains the following information:

a) The variable name
b) A flag indicating if the variable was written to by an interior lambda (that is, by a lambda function not where this one was declared). This actually has three states: $$read (the variable was read), $$write (the variable was written) and $$unused (the variable was not used by an interior lambda).
c) A pointer to the lambda where this variable was declared,
d) A list of pointers to the lambdas where this variable was used.

Note that we only create records for the variables that we find inside declaration headers: that is, for an expression like (lambda (a b)), we only note ‘a’ and ‘b’. If we see a variable that hasn’t been declared in this lambda or an outer lambda, we assume it was a global variable and thus ignore it.

2) A flag next to the association list indicating if we’ve seen multiple lambdas in this pass. We can use this flag to ignore rewriting the function if we don’t see any embedded lambdas.

3) A push-down list of the lambdas that we’re currently examining. This is important because if we have three nested lambdas, but the middle lambda doesn’t use a variable declared in the outer that is used in the inner, we must pass the used variable through the middle lambda. So a lambda used in the interior is considered “used” by all of the lambdas between the current one we’re examining and the lambda where this variable was first declared.

4) A push-down list of pointers to our association list. This is used essentially to push and pop variable scope: when we enter a lambda function we push the old scope state, and when we’re done with the lambda we pop the scope state. This allows us to quickly unwind the current variable state once we’re done processing an interior lambda.

The $$eval-scope method walks through all of the declarations we’re currently examining, and updates the scope variable. We also, for convenience sake, rewrite lambdas we encounter (once we’ve processed them) with a special token ($$scope), which notes for each lambda we’ve encountered (a) the lambda method itself (transformed by $$eval-scope, naturally), (b) the un-transformed lambda expression (which we need because our scope variable uses pointers to the un-transformed lambda expression as an index), (c) the variables that are used outside the lambda function (that is, the state of the closure when the lambda was encountered), and (d) the variables created and used inside the lambda.

This can get quite complicated. For our example above:

    (define foo (lambda (a) 
                        (lambda (b) (set! a (+ a b)) 
                                    a)))

The function $$eval-scope returns:

($$scope
 (lambda (a)
   ($$scope
    (lambda (b) (set! a (+ a b)) a)
    (lambda (b) (set! a (+ a b)) a)
    ((a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a))))
    ((b $$unused (lambda (b) (set! a (+ a b)) a) (lambda (b) (set! a (+ a b)) a))
     (a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a))))))
 (lambda (a) (lambda (b) (set! a (+ a b)) a))
 ()
 ((a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a)))))

Ignoring the inner lambda, the outer lambda is:

($$scope
 (lambda (a)
   ($$scope ... ;; rewritten method)
 (lambda (a) (lambda (b) (set! a (+ a b)) a))
 ()
 ((a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a)))))

This indicates that our outer lambda method declares one variable a (see it in the fourth list but not in the third? It starts “((a $$write …”), which is written to. It is used by our outer and inner lambda functions, and declared in our outer lambda. (Let me reformat the association list so you can see it:)

 ((a $$write                                         ;; The variable is written to
     (lambda (a) (lambda (b) (set! a (+ a b)) a))    ;; Where the variable was declared
     (lambda (b) (set! a (+ a b)) a)                 ;; All the places where it was used...
     (lambda (a) (lambda (b) (set! a (+ a b)) a)))))

The interior lambda gets the same treatment:

   ($$scope
    (lambda (b) (set! a (+ a b)) a) ;; The transformed lambda (which needed no transformation)
    (lambda (b) (set! a (+ a b)) a) ;; The original declaration
    ;; Variable state outside this lambda
    ((a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a))))
    ;; Variable state inside this lambda
    ((b $$unused (lambda (b) (set! a (+ a b)) a) (lambda (b) (set! a (+ a b)) a))
     (a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a))))))

Our second pass, the $$rewrite-lambda function, takes the state information expressed in our rewritten lambda function, and does two things:

(1) We rewrite all variable accesses to variables that are ‘written’ to (marked $$write in the association list) by substituting all ‘set!’ operations with ‘$$store’, all read operations with $$fetch, and we add to the start of the declaring lambda function a $$wrap call to wrap our arguments. (Our particular dialect of Lisp doesn’t have macros (yet), so ‘do’ and ‘let’ are explicit objects within our language–and the rewrite engine handles those correctly as well, rewriting things like ‘(do ((x 1 (+ 1 x))…’ as ‘(do ((x ($$wrap 1) ($$wrap (+ 1 ($$fetch x))))) …’.)

(2) We rewrite all $$scope expressions as lambda + closure. That is, our rewrite rules examine the $$scope function and generates a ($$closure (lambda …) …), prepending the variables that are passed outside of our function in as new parameters (added at the start), and creating a $$closure declaration with those added variables, in the same order.

In our above example, the lambda declaration gets its final form (for our compiler):

    (lambda (a) 
      (set! a ($$wrap a)) 
      ($$closure (lambda (a b) ($$store a (+ ($$fetch a) b)) 
                               ($$fetch a)) 
                 a))

If this seems a little heavy-weight, it’s because it probably is: for the Lisp compiler I’m building, I’m not expecting to use the closure feature a lot–and so, having something that rewrites my functions during the compilation process seems a reasonable trade-off for dealing with closures. It also means that my Virtual Machine doesn’t need to understand closures at all; I can just build it as a normal stack-based VM engine with garbage collection.