REST is not secure.

Simple Security Rules

Security by obscurity never works. Assume the attacker has your source code. If you are doing some super cool obscuring of the data (like storing the account number in the URL in some obscured manner like the Citi folks apparently did), someone can and will break your algorithm and breach your system.

Excuse my language, but what on this God’s mother-fucking Earth were the Citi folks thinking? Really? REALLY?!?

But I can see the conversation amongst the developers at Citi now. “We really need to implement our web site using modern REST techniques, meaning we cannot store state on the servers, but have the complete state representation in the client, and use stateless protocols to communicate between the client and server.”

Hell, I’ve been part of this conversation at a number of jobs over the past few years.

But here are the problems with a truly stateless protocol.

Stateless protocols are subject to replay attacks.

If a request sent to the back end is stateless, then (unless there is a timestamp agreed to between the client and server which expires the transactions) it must necessarily follow that the same request, when sent to the server, will engage the same action or obtain the same results. This means if, for example, I send a request to transfer $100 from my checking account to pay a bill, that request, when sent again, will transfer another $100 from my checking account to the bill account. If sent 10 times, $1,000 will be transferred.

And so forth.

The biggest risk with replay attacks, of course, is a black-hat sniffing packets: theoretically, since the protocol is stateless, the black-hat doesn’t even have to be able to decrypt the stateless request. He just needs to echo it over and over a bunch of times to clean out your account.
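To make the exposure concrete, here's a minimal toy sketch in Java (names and structure are mine, purely for illustration): a server that remembers which request nonces it has seen can reject the echoed copies, while a purely stateless handler happily executes the same sniffed request every time.

```java
import java.util.HashSet;
import java.util.Set;

// Toy model: a transfer request carries a client-generated nonce. A stateless
// server has nowhere to record nonces, so the same request replayed N times
// performs N transfers. A server that keeps a set of seen nonces (state!)
// can reject the replayed copies.
public class ReplayDemo {
    static int balance = 1000;
    static final Set<String> seenNonces = new HashSet<>();

    // Stateless handling: the nonce is ignored (there is nowhere to store it),
    // so every identical request succeeds again.
    static boolean statelessTransfer(int amount, String nonce) {
        balance -= amount;
        return true;
    }

    // Stateful handling: a nonce may only be spent once.
    static boolean statefulTransfer(int amount, String nonce) {
        if (!seenNonces.add(nonce)) return false; // replay detected, reject
        balance -= amount;
        return true;
    }

    public static void main(String[] args) {
        // Attacker replays the same sniffed $100 transfer three times.
        for (int i = 0; i < 3; i++) statelessTransfer(100, "req-42");
        System.out.println(balance); // each replay drained another $100

        balance = 1000;
        for (int i = 0; i < 3; i++) statefulTransfer(100, "req-43");
        System.out.println(balance); // only the first copy ran
    }
}
```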

Stateless protocols are insecure protocols.

As we saw in the case of Citi, vital information specifying your account was part of the request. This means that, if the bad guys can figure out how the account information was encoded, they can simply resend the request using a different account and–voila!–instant access to every account at Citi.

Combine that with a reworked request to transfer money–and you can now write a simple script that instantly cleans out every Citi bank account and transfers the money to some off-shore account, preferably in a country with banking laws that favor privacy and no extradition treaties.

See, the problem with a stateless protocol is that if the protocol requires authentication, somewhere buried in every request is a flag (or a combination of variables) which says “yes, I’m logged into this account.”

Figure out how to forge that, and you can connect to any account just by saying “um, yeah, sure; I’m logged in.”

Stateless protocols expose vital information.

Similarly, a black-hat who has managed to decrypt a stateless protocol can now derive a lot of information about the people using a service. For Citi, that was bank account information. Other protocols may be sending (either in the clear or in encrypted format) other vital, sensitive information.

But we can just use SSL to guarantee that the packets are only being sent to our servers.

When was the last time you got a successful SSL handshake from a remote web server where you took the time to actually look at the certificate to see that it was signed by the entity you were connecting to? And when was the last time you logged into a secure location, got a “this certificate has expired” warning, and just clicked “ignore?”

SSL only works against a man-in-the-middle attack (where our black-hat inserts himself into the transaction rather than just sniffs the data) when users are vigilant.

And users are not.

But we can use a PKI architecture to secure our packets.

If your protocol is truly stateless, then your client must hold a public key to communicate with your server, and that public key must be a constant: the server cannot issue a key per client because that’s–well, that’s state. Further, you can’t just work around this by having the client hold a private key for information sent by the server; the private key is no longer private when a user can download the key.

So at best PKI encrypts half of the communications. And you have the key to decrypt the data coming off the server. And since all state is held by the client, that implies if you listen to the entire session, you will eventually hear all of the state that the client holds–including account information and login information. You may not know specifically what the client is sending–but then, you have the client, so with state information and detailed information about the protocol (contained in the client code), you’re pretty much in like Flynn.

My own take:

REST with a stateless protocol should be illegal when handling monetary transactions.

For all the reasons above, putting out a web site that allows account holders to access their information via a stateless RESTful protocol is simply irresponsible. It’s compromising account holders’ money in the name of some sort of protocol ideological purity.

Just fucking don’t do it.

You can eliminate most of these risks by maintaining a per-session cookie (or some other per-session token) which ties in with account information server-side.

It’s actually quite easy to use something like HttpSession to store session information. And if your session objects are serializable, many modern Java Servlet engines support serializing state information across a cluster of servers, so you don’t lose scalability.

Personally I would store some vital stateful information with the session, stored as a single object. (For example, I would hold the account number, the login status, the user’s account ID, and the like in that stateful object, and tie it in with a token on the client.) This is information that must not be spoof-able across the wire, and information that is only populated on a successful login request.

The flip side is that you should only store the session information you absolutely need to keep around, for memory considerations: there is no need to keep the user’s username, e-mail address, and other stuff which can be derived from a quick query using an integer user ID. Otherwise, if you have a heavily used system, your server’s memory may become flooded with useless state information.

And don’t just generate some constant token ID associated with the account. That doesn’t protect against replay attacks, and if your ID is small enough, it just substitutes one account number for another–which is, well, pretty damned close to worthless.
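As a sketch of the approach described above (class and method names are mine, not a prescription): generate a cryptographically random token at login, map it server-side to the sensitive per-session state, and hand the client nothing but the opaque token.

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: the client only ever holds an opaque random token; the account
// number and login status live server-side, keyed by that token.
public class SessionStore {
    // Hypothetical per-session state object, kept deliberately small.
    public static class SessionState {
        public final String accountNumber;
        public final boolean loggedIn;
        public SessionState(String accountNumber, boolean loggedIn) {
            this.accountNumber = accountNumber;
            this.loggedIn = loggedIn;
        }
    }

    private final ConcurrentHashMap<String, SessionState> sessions = new ConcurrentHashMap<>();
    private final SecureRandom random = new SecureRandom();

    // Called only after a successful login; returns the opaque token.
    public String createSession(String accountNumber) {
        byte[] raw = new byte[32];
        random.nextBytes(raw); // unguessable, unlike an encoded account number
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        sessions.put(token, new SessionState(accountNumber, true));
        return token;
    }

    // Every subsequent request resolves the token back to the real state.
    public SessionState lookup(String token) {
        return sessions.get(token);
    }

    public void invalidate(String token) {
        sessions.remove(token);
    }
}
```

A forged or expired token simply resolves to nothing, so there is no account flag on the wire to spoof.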

Use a two-state login process to access an account.

It’s easy enough to develop a protocol similar to the APOP login command in the POP3 protocol.

The idea is simple enough. When the user attempts to log in, a first request is made to the remote server requesting a token. This token can be the timestamp from the server, a UUID generated for the purpose, or some other (ideally cryptographically secure) random number. Client-side, the password is then hashed using a one-way hash (like MD5 or SHA-1) along with the token number. That is, we calculate, client-side: passHash = SHA1(password + "some-random-salt" + servertoken);

We then send this along with the username to the server, which also performs the same password hash calculation based on the transmitted token. (And please don’t send the damned token back to the server; that simply allows someone to replay the login by replaying the second half of the login protocol rather than requesting a new token for this specific session to be issued from the server.) If the server-side hash agrees with the client-side hash, then the user is logged in.
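Here's a minimal sketch of that two-step login in Java (the salt constant and method names are mine, for illustration). Note the server verifies against the token it issued, never one echoed by the client, so a captured hash is useless on the next login attempt.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.UUID;

// Sketch of the challenge-response login described above: the server issues
// a fresh token; both sides compute SHA-1(password + salt + token); only the
// hash travels over the wire, and a replayed hash fails against a new token.
public class ChallengeLogin {
    static final String SALT = "some-random-salt"; // shared constant, as in the text

    // Step 1: server generates a fresh, unpredictable token per login attempt.
    static String newServerToken() {
        return UUID.randomUUID().toString();
    }

    // Both sides: one-way hash of password + salt + token, hex-encoded.
    static String passHash(String password, String token) {
        try {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            byte[] digest = sha1.digest((password + SALT + token).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b & 0xff));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Step 2, server side: recompute from the stored password and the token
    // the server itself issued, and compare against the client's hash.
    static boolean verify(String storedPassword, String issuedToken, String clientHash) {
        return passHash(storedPassword, issuedToken).equals(clientHash);
    }
}
```

(SHA-1 appears here only because the text names it; a modern deployment would reach for something stronger, such as an HMAC over SHA-256.)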

Create a ‘reaper’ thread which deletes state information on a regular basis.

If you are using the HttpSession object, by default many Servlet containers will expire the object in about 30 minutes or so. If you’re using a different mechanism, then you may consider storing a timestamp with your session information, touching the timestamp on each access, and in the background have a reaper thread delete session objects if they grow older than some window of time.

In the past I’ve built reaper threads which invalidated my HttpSessions, to deal with older JSP containers which apparently weren’t reaping the sessions when I was expecting them to. (I’ve seen this behavior on shared JSP server services such as the ones provided by LunarPages, and better to be safe than sorry.)
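If you're rolling the mechanism yourself rather than relying on HttpSession, the timestamp-plus-reaper scheme might look something like this (a sketch with hypothetical names, not production code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the reaper idea: each session carries a last-touched timestamp,
// refreshed on access; a background task deletes anything idle past the window.
public class SessionReaper {
    static class TimedSession {
        volatile long lastTouched = System.currentTimeMillis();
        void touch() { lastTouched = System.currentTimeMillis(); }
    }

    final Map<String, TimedSession> sessions = new ConcurrentHashMap<>();
    final long windowMillis;

    SessionReaper(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Remove every session that has been idle longer than the window.
    void reap() {
        long cutoff = System.currentTimeMillis() - windowMillis;
        sessions.values().removeIf(s -> s.lastTouched < cutoff);
    }

    // Schedule the reaper to run periodically in the background.
    ScheduledExecutorService start(long periodMillis) {
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        exec.scheduleAtFixedRate(this::reap, periodMillis, periodMillis, TimeUnit.MILLISECONDS);
        return exec;
    }
}
```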

For God’s sake, just say “NO!” to REST. Especially when dealing with banking transactions. Especially if you’re the banking services I use: I don’t want my money to be transferred to some random technologically sophisticated mob syndicate, simply because you needed your fucking ideological purity of a stateless RESTful protocol and didn’t think through the security consequences.

Dragging and dropping objects in GWT

If you want to add a click-and-drag handler in GWT, so that (for example) if you click on an image object, you can move it around (and drag content logically associated with it), it’s fairly straightforward.

First, you need to implement a MouseDownHandler, a MouseMoveHandler and a MouseUpHandler, and attach them to your image. (Me, I like putting this into a single class, which contains the state associated with the dragging event.) Thus:

	Image myImage = ...
     
	EventHandler h = new EventHandler(myImage);
	myImage.addMouseDownHandler(h);
	myImage.addMouseMoveHandler(h);
	myImage.addMouseUpHandler(h);

Now the event handler needs to do some things beyond just tracking where the mouse was clicked, where it is being dragged to, and how the universe should be changed as the dragging operation takes place. We also need to trap the event so we can handle dragging outside of our object (by capturing drag events), and we also have to prevent the event from percolating upwards, so we get the dragging events rather than the browser.

This means that our event dragging class looks something like:

private class EventHandler implements MouseDownHandler, MouseMoveHandler, MouseUpHandler
{
	private boolean fIsClicked;
	private Widget fMyDragObject;

	EventHandler(Widget w)
	{
		fMyDragObject = w;
	}
		
	@Override
	public void onMouseUp(MouseUpEvent event)
	{
		// Do other release operations or appropriate stuff

		// Release the capture on the focus, and clear the flag
		// indicating we're dragging
		fIsClicked = false;
		Event.releaseCapture(fMyDragObject.getElement());
	}

	@Override
	public void onMouseMove(MouseMoveEvent event)
	{
		// If mouse is not down, ignore
		if (!fIsClicked) return;

		// Do something useful here as we drag
	}

	@Override
	public void onMouseDown(MouseDownEvent event)
	{
		// Note mouse is down.
		fIsClicked = true;

		// Capture mouse and prevent event from going up
		event.preventDefault();
		Event.setCapture(fMyDragObject.getElement());

		// Initialize other state we need as we drag/drop
	}
}

A FlexTable that handles mouse events.

Reverse engineering the GWT event handler code to add new events is simple. Here’s a FlexTable which also handles mouse events:

	/**
	 * Internal flex table declaration that syncs mouse down/move/up events
	 */
	private static class MouseFlexTable extends FlexTable implements HasAllMouseHandlers
	{
		@Override
		public HandlerRegistration addMouseDownHandler(MouseDownHandler handler)
		{
			return addDomHandler(handler, MouseDownEvent.getType());
		}

		@Override
		public HandlerRegistration addMouseUpHandler(MouseUpHandler handler)
		{
			return addDomHandler(handler, MouseUpEvent.getType());
		}

		@Override
		public HandlerRegistration addMouseOutHandler(MouseOutHandler handler)
		{
			return addDomHandler(handler, MouseOutEvent.getType());
		}

		@Override
		public HandlerRegistration addMouseOverHandler(MouseOverHandler handler)
		{
			return addDomHandler(handler, MouseOverEvent.getType());
		}

		@Override
		public HandlerRegistration addMouseMoveHandler(MouseMoveHandler handler)
		{
			return addDomHandler(handler, MouseMoveEvent.getType());
		}

		@Override
		public HandlerRegistration addMouseWheelHandler(MouseWheelHandler handler)
		{
			return addDomHandler(handler, MouseWheelEvent.getType());
		}
	}

Addendum:

If you want the location (the cell) in which the mouse event happened, you can extend MouseFlexTable with the additional methods:

		public static class HTMLCell
		{
			private final int row;
			private final int col;
			
			private HTMLCell(int r, int c)
			{
				row = r;
				col = c;
			}
			
			public int getRow()
			{
				return row;
			}
			
			public int getCol()
			{
				return col;
			}
		}
		
		public HTMLCell getHTMLCellForEvent(MouseEvent<?> event)
		{
			Element td = getEventTargetCell(Event.as(event.getNativeEvent()));
			if (td == null) {
				return null;
			}

			int row = TableRowElement.as(td.getParentElement()).getSectionRowIndex();
			int column = TableCellElement.as(td).getCellIndex();
			return new HTMLCell(row, column);
		}

On the development of my own Lisp interpreter, part 2: lambdas, fexprs and closures.

My thinking has been to make the keyword ‘lambda’ (and ‘nlambda’) represent entry-points to a byte-code compiler: the idea is that if we see an expression like:

(lambda (x y) (+ x y))

Evaluating this expression invokes the compiler and returns a procedure object which represents the bytecode that adds x and y together and returns a value. So internally, somewhere in my system, this would return a data structure like:

Procedure Definition

Also recall that my reason for using an ‘nlambda’ construct was to provide me a way to define fexprs, so I could easily write certain primitives without having to build those primitives into the interpreter. The idea was that if I could create an ‘nlambda’ declaration, then certain primitives could easily be written as fexprs, rather than having to complicate my interpreter or compiler. Thus:

(define quote (nlambda (x) x))

(define begin
  (nlambda ins
    (cond ((null? ins) '())
          (#t (eval (car ins))
              (begin (cdr ins))))))

The first is the definition of the ‘quote’ instruction, the second defines ‘begin’, a method for evaluating a sequence of instructions in a list of instructions.

Also recall that I decided that I wanted a lexical Lisp rather than a dynamically bound Lisp, for reasons given in the previous post.

And thus my problem with closures arises.

Let me spell it out.

Suppose we assume that the binding “closure” of our function is the stuff within the function itself: that is, when we compile our procedure above, the scope of the accessible variables is either (a) the variables declared within the procedure (defined by the lambda function), or (b) the global variables in our system.

This makes sense, since it means if we write:

(define add (lambda (x y) (+ x y gz)))  ;; gz is global

Then we execute add:

> (define gz 3)
> (add 1 2)
6

Which makes sense since 1 + 2 + 3 = 6.

It also means if we use our ‘add’ routine in another function:

(define foo (lambda (gz) (add gz 2)))

It’s pretty clear our intent is that ‘gz’ within ‘foo’ is a local variable, and if we call ‘foo’ with 1:

> (foo 1)
6

This behavior makes intuitive sense because while the variable ‘gz’ in ‘foo’ hides the global variable ‘gz’, it shouldn’t change the intent of ‘add’ simply because I called it from within another function which hides a global variable. Meaning the intent of ‘add’ was always to use the global variable ‘gz’, and we should expect 6 from ‘(foo 1)’, not 4 (1 + 2 + 1), which we would have gotten in, say, a dynamically bound Lisp environment.

But…

If we also assume that the lexical binding rules apply to ‘nlambda’, then what happens if we define a function ‘setq’:

(define setq (nlambda (a b) (set a (eval b))))

And suppose we call ‘setq’ from another function:

(define bar (nlambda (q) (setq gz q)))

Now think for a moment: nlambda is a function call with local variable scope. This means that when we get to the statement ‘(set a (eval b))’, well, ‘a’ evaluates to the variable ‘gz’. And ‘b’ evaluates to the variable ‘q’.

And so we call (eval q).

But remember: nlambda is a function in a lexical language: calling it pushes a new variable context. So when we call ‘eval’, here is what our stack looks like:

Stack Representation

And because we’re a lexical version of Lisp, our evaluator only looks at the head of the stack, and at the global variable list. Meaning our variable ‘q’ is not in scope.

So instead of having the intended effect, our setq will fail: it is unable to evaluate ‘q’.

It’s worse than that, however. Suppose we define the closure for all lambda functions based on the environment in which they are encountered. In other words, suppose we run our interpreter so that all new variables introduced by ‘lambda’, ‘let’ and other constructs are pushed onto a local environment stack, so that (for example) writing:

(let ((x 1)) (lambda (y) (+ x y)))

This tracks all of the variables being defined on a single local environment stack, and the lexical closure of the inner lambda includes both the variables x and y. But it still does not solve my problem: the lexical closure of my ‘setq’ function does not actually include q, so (eval b) will fail every time.

This problem does not occur in the original dynamically bound Lisp language invented in the 1960s simply because there was no closure. The entire stack of variables was searched in order to find the referring variable. Of course this makes variable resolution a non-trivial problem, because every time you access a variable you have to scan the entire state space for variables. You can’t just compile a variable name as a fixed offset within an array of variables and make variable access constant time.

At this point I think I need to make a few decisions.

(1) It’s pretty clear I need to build a full interpreter that is able to execute the language as well as a compiler within the lambda function.

(2) I’m unclear if I want to deal with closures right now. I suspect the correct answer is to bite the bullet and deal with closures.

(3) I’m going to have to bite the bullet and include a whole bunch of fundamentals (such as ‘let’, ‘setq’ or ‘set!’, ‘do’, ‘loop’, ‘prog’ or whatever other looping constructs that define local variable state) within my compiler, rather than defining them as fexprs. Fexprs don’t seem to work well for the purpose I want them for.

(4) Rethink my fexpr versus macro decision. Fortunately I can drop nlambda from my prototype, build in my primitives, and worry about macros later.

GWT and tall images revisited.

As a follow-up to Things to Remember: Why cells with an inserted image are taller than the image in GWT, the answer is:

VerticalPanel panel;
...
Image image = new Image("images/mydot.png");
DOM.setStyleAttribute(image.getElement(), "display", "block");
panel.add(image);

For whatever reason, the image object has ‘inline’ formatting by default, and when GWT assembles the table cell, the cell’s height is being derived from the font height rather than from the image height. Setting the image to block seems to resolve this issue.

Part 1: On the development of my own Lisp interpreter

Back when I was in high school (back in the early 1980’s) I learned Lisp. One of the things I really liked about the language was its expressiveness: because the language was such a simple one, it was fairly easy to build recursive pattern-matching and recursive search algorithms on arbitrary S-expressions, which made the development of things like a recursive descent algorithm for theorem proving relatively straightforward.

So my goal is to build an embedded Lisp interpreter/byte-code compiler that can run on a “small” footprint such as the iPhone.

(Note: I put “small” in quotes because, honestly, an iPhone is not a “small” device, CPU-wise or memory-wise, in the scope of the types of CPUs and memory footprints where Lisp was originally used. Back in the 80’s a computer as powerful as the iPhone with as much memory would have filled a room and cost millions.)

There are some language decisions that need to be made first.

Dynamic binding versus lexical binding

“Binding” expresses the relationship between a variable and its value. A variable “a” is bound to “1” when you write an expression “a = 1;” in C.

In the Lisp world there are two types of binding: lexical binding and dynamic binding.

To summarize, lexical binding is similar to C: there are global variables and variables within the scope of a function. A function defines the variables used within that function, and when a variable is resolved, the interpreter looks for variables either within the immediate function scope or in global scope.

Dynamic binding, first demonstrated in the original Lisp language, is different. Instead, there is nothing but a global stack of variables. A function call may declare new variables, or redeclare variables that already exist in the global pool; the new variables are added to the list of variables, and when the function returns, those variables that shadowed others already in the pool effectively have their old values put back.

One way to think of dynamic binding is to think of all of the variables in the current system in a linked list of variables:

Lisp Vars 1

This shows the variables a, b and c defined in our system. When we resolve a variable ‘a’, we start from the pointer ‘globals’, and walk the list until we find ‘a’; we then use that location for our variable binding.

Now suppose we call into a new function ‘foo’ defined as

(define (foo a) ...some function...)

When we invoke the function (foo 5) the invocation will push a new variable ‘a’ onto our global list of variables, saving the old value of the global pointer onto a stack:

Lisp Vars 2

When we then resolve the value of variable ‘a’, we walk the global list, and find the first variable ‘a’, which shadows the old variable ‘a’.
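The walk-the-list lookup just described can be sketched as a toy model in Java (names are mine, not the interpreter's actual code): binding a variable pushes a node onto the front of the list, so the newest binding shadows older ones, and a function return restores the saved list head.

```java
// Toy model of dynamic binding: the environment is one linked list of
// (name, value) nodes. A function call pushes new nodes onto the front;
// lookup walks from the head, so the newest binding shadows older ones.
public class DynamicEnv {
    static class Node {
        final String name;
        Object value;
        final Node next;
        Node(String name, Object value, Node next) {
            this.name = name; this.value = value; this.next = next;
        }
    }

    Node globals = null;

    // (define name value) and function entry both just push; the returned
    // saved head is what the caller keeps on its stack to "return" later.
    Node push(String name, Object value) {
        Node saved = globals;
        globals = new Node(name, value, globals);
        return saved;
    }

    // Function return: restore the saved head, dropping the shadow bindings.
    void popTo(Node saved) {
        globals = saved;
    }

    // Walk the list from the head until we find the first (newest) binding.
    Object lookup(String name) {
        for (Node n = globals; n != null; n = n.next)
            if (n.name.equals(name)) return n.value;
        throw new IllegalStateException("unbound variable: " + name);
    }
}
```

With a global ‘a’ bound to 1, invoking (foo 5) corresponds to push("a", 5): lookup("a") now finds 5 until popTo restores the old head on return.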

The upshot of dynamic versus lexical binding is that the variable value to use within a function depends on the current global context in a dynamic environment, while in a lexical environment the value is completely defined: it is either locally declared or a global (albeit possibly undefined) value.

So, in the following example:

(define a 1)
(define b 2)
(define c 3)

(define (foo a) (bar))
(define (bar) a)

Invoking (foo 5) in a dynamic environment would mask the value of a = 1 in our global stack, so when (bar) is invoked, a resolves to 5:

> (foo 5)
5

However, in a lexical environment, when bar is defined, the variable ‘a’ is bound to the global definition of ‘a’, as it would be in C. Thus, invoking (foo 5) does not mask the value of the global variable ‘a’ in bar, and invoking (foo 5) returns 1:

> (foo 5)
1

From my perspective, I would personally prefer a lexical LISP to a dynamic LISP, for two reasons:

(1) Because the variables are completely defined within the context of a single function call, it is easier to compile Lisp statements: as we process each statement, we know where in the list of global variables the variable will be. If we haven’t seen the variable, we can simply create a new undefined variable in the global variable space. This means our compiler can create a global array of variables, and each variable name resolves to a fixed index in that global array of variables.

The same can be said of local variables: as we invoke a function we create a new array of local variables: entering a function pushes a fixed size array onto an invocation stack, and all local variable references become a fixed index into that array.

Thus, variable resolution is constant in time.

(2) Functions are “well-defined” rather than depending on side effects, and we minimize accidentally breaking a function that relies on global variable state.

Poor function ‘bar’ would be in serious trouble if we depended on the global variable ‘a’ and it was aliased by accident by another function that calls ‘bar’.
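Point (1) above can be sketched as a toy in Java (class and method names are mine): the "compiler" resolves each name to a fixed slot index the first time it sees it, creating an undefined slot for names it hasn't seen, and run-time access is then a plain array read.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of compile-time name resolution under lexical binding: each name
// is resolved once, at compile time, to a fixed slot index. At run time a
// variable reference is just an array access -- no list walking.
public class SlotResolver {
    private final Map<String, Integer> indexByName = new HashMap<>();
    private final List<Object> slots = new ArrayList<>();

    // Compile time: resolve a name to its slot, creating an 'undefined'
    // slot on first sight so forward references still get a fixed index.
    public int resolve(String name) {
        return indexByName.computeIfAbsent(name, n -> {
            slots.add(null); // 'undefined' until assigned
            return slots.size() - 1;
        });
    }

    // Run time: variable access is a plain array read/write by index.
    public Object get(int index) { return slots.get(index); }
    public void set(int index, Object value) { slots.set(index, value); }
}
```

The same scheme applies to locals: function entry pushes a fixed-size array, and every local reference compiles to an index into it.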

Fexprs versus macros

Yes, I know supposedly this has been settled in favor of macros since the 1980s and the publication of Kent Pitman’s “Special Forms in Lisp”, where he condemned the use of fexpr functions because you cannot perform static analysis on the forms. (Paper)

But I’d like to revisit the whole thing for just a moment.

In Lisp, a function call such as (+ a b) first evaluates a, then b, then calls the function ‘+’. So if a = 5 and b = 3, then we evaluate a, evaluate b, then add them together to return 8.

So far so good. But what if you want to set the value of a to 4 via a hypothetical ‘set’ function? The natural way to write this would be (set a 4); however, this would first evaluate a (to 5), evaluate 4 (which is a special form that evaluates to itself), then call the function ‘set’ with the arguments 5 and 4–which is illegal.

To get around this, we could use the ‘quote’ “function”, a special form of “function” which returns the literal value of its parameter: (quote a) evaluates to the literal ‘a’, and then write (set (quote a) 4)–which will call the function ‘set’ with the arguments ‘a’ and 4.

Ideally, though, we’d like to create a new function, call it ‘setq’, which automatically quotes the first parameter, evaluates the second, then passes them to our ‘set’ function.

When Lisp was first invented in the 1960’s, this was done using a special form, the fexpr function, which does not evaluate its parameters prior to invoking the function. This allows us to then define our ‘setq’ function. Using ‘nlambda’ syntax, which indicates a fexpr function that does not evaluate its parameters, we could define setq as follows:

(define setq 
  (nlambda (var val) 
           (set var (eval val))))

This defines the symbol ‘setq’ as a lambda form which does not evaluate its parameters. Setq takes two parameters: ‘var’ and ‘val’. We then invoke our ‘set’ function with the value of ‘var’, but we evaluate the value in ‘val’. So when we invoke (setq a 5):

(setq a 5) ->
  [nlambda is invoked with var = 'a, val = '5, which expands to]
  (set 'a (eval '5)) ->
  (set a 5)

The beauty of being able to define our own fexprs via the nlambda mechanism is that we can quickly define a number of the special forms, such as quote:

(define quote (nlambda (a) a))

Now, reviewing the Kent Pitman paper, it’s clear the first complaint he has, under the section “The Compiler’s Contract”, in fact has nothing to do with fexprs, and everything to do with compiling in a dynamic environment. But since I’ve decided above to use a lexical binding environment in my form of Lisp, the problem outlined in this section just goes away.

The second problem with fexprs, however, boils down to the problem of compilation: when compiling a function we cannot know how to generate the correct code to invoke a function within our compiled function unless we know if the code we’re calling is a regular lambda function or a special ‘nlambda’ function. To illustrate, suppose we define our ‘lambda’ keyword to invoke a compiler to compile our S-expression into a compiled form. So, we write the function:

(define foo (lambda (a) (bar a 5)))

Now, assuming ‘lambda’ compiles the parameters and body of our program into machine pseudo-code, and if we know all functions are lambda functions, then we could easily generate the following code:

function_prefix(1,0);   -- Special form which pulls 1 local variable from the stack into 'a'

push(0);                -- Push a (variable 0) onto stack
push_const(5);          -- Push constant 5 onto stack
call(bar);              -- Call bar. Return value puts value of bar onto stack

function_postfix;       -- Unwind the local variables, leaving the return value alone.

Here’s the thing, though: If ‘bar’ was a special ‘nlambda’ function, this behavior is incorrect. Instead, we would have wanted to generate:

function_prefix(1,0);   -- Special form which pulls 1 local variable from the stack into 'a'

push(a);                -- Push the symbol a onto stack
push_const(5);          -- Push constant 5 onto stack
call(bar);              -- Call bar. Return value puts value of bar onto stack

function_postfix;       -- Unwind the local variables, leaving the return value alone.

That is, how we compile a call to bar requires us to know if bar is a lambda or an nlambda function.

There are three solutions to this problem.

  • Punt on fexpr forms and use macros instead.

The disadvantage of this solution is that we cannot define a number of very basic forms in Lisp; instead, our interpreter is complicated by having to implement a number of basic forms (quote and cond, for example) as function calls into a special interpreter. This significantly increases the cost of bootstrapping an interpreter, since we need to handle a whole bunch of special forms, rather than just writing those special forms in Lisp and compiling them with our Lisp compiler.

  • Require that any compiler we build compile against the full environment.

What this would mean is that rather than using the lambda and nlambda keywords to generate a compiled expression, we just use these keywords as markers, and our apply method (which evaluates lambda expressions) use those keywords to determine how it will interpret the expression. We then create a separate compile keyword which compiles functions based on the complete environment present at the time: at compile time we depend on all of the functions being available so we can make the determination if a function is a lambda or an nlambda form.

So our compile function would have the semantics of replacing a variable ‘a’ defined as (lambda vars prog) with a special #proc object containing byte code. It would fail if any of the functions invoked within the lambda procedure are not defined.

  • Presume any function not defined (and any variable not defined) are lambda form functions.

This third option allows us to define lambda and nlambda as entry points into our bytecode compiler, with nlambda also marking the procedure as a special fexpr function. If we don’t see the function declared already, we then assume the function is a lambda form.

Compilation does two things: it marks the type of the function, and it compiles the function. So if we have a situation where the definition of an fexpr form relies on a later fexpr form that hasn’t been defined yet, we can work around it by writing:

(define a (nlambda (b c) '()))   ;;; Dummy form

(define b (nlambda (i) 
   ...
   (a foo bar)                 ;;; Our call to a, knowing a is an fexpr
   ...))

(setq a (nlambda (b c) 
   ... the real code for a ... ))

I suspect this shouldn’t happen very often. We could even put in a special check in our assignment operation to complain if we change the type of our global variable assignment between a normal function and an fexpr style function.

My take is to use fexpr forms; they greatly simplify bootstrapping a Lisp interpreter. I also plan to structure the compiler with the third option: variables that haven’t been defined yet in the global namespace will be automatically defined as the special symbol ‘undefined’ (so we know its index in the global space), and will be presumed to be a normal lambda function if invoked. This allows me to use the ‘lambda’ and ‘nlambda’ keywords as entry points into my bytecode compiler.

This afternoon I’m going to see if I can’t hack a little on my interpreter in Lisp; as I go along I will update this category with my results.

GWT Window.Location values.

Arcane knowledge: I wanted to know what values are returned for GWT’s Window.Location class.

For the URL “http://127.0.0.1:8888/index.html?gwt.codesvr=127.0.0.1:9997&foo=555” the values are:

getHash:
getHost: 127.0.0.1:8888
getHostName: 127.0.0.1
getHref: http://127.0.0.1:8888/index.html?gwt.codesvr=127.0.0.1:9997&foo=555
getPath: /index.html
getPort: 8888
getProtocol: http:
getQueryString: ?gwt.codesvr=127.0.0.1:9997&foo=555

FYI…

GSM Feature codes that appear to work with the iPhone on the AT&T network.

Some random notes I cribbed together regarding GSM feature codes on the iPhone on the AT&T network. I tried some of them (including the #31# trick), but not all of them. YMMV.

GSM Feature Codes (that appear to work on the iPhone on AT&T)

http://www.geckobeach.com/cellular/secrets/gsmcodes.php

Feature codes 21, 67, 61, 62 appear to work. Note that changing the number for unanswered voice calls from the default that was plugged into the phone may make it harder to reset; meaning if you set up call forwarding to something other than the voice mail number (on my phone 1-253-709-4040), write down what it was before so you can re-enable voice mail.

Feature code 31 appears to work. (#31#number hides your phone number.)

http://cellphonetricks.net/apple-iphone-secret-codes/

All codes work. The two-digit feature codes all accept the same formats documented in the link above, I believe. (Note that some of these codes duplicate functionality that can be found in Settings.)

http://www.arcx.com/sites/GsmFeatures.htm

This notes that you can also set a timeout parameter for “Forward if not answered” so as to set the amount of time before your call forwards somewhere. I don’t know if this works. Note that the FIDO code (3436) to send to voice mail doesn’t appear to work on the iPhone. Typing in a full number (1 + area code + number) appears to work.