An interesting exercise.

So I’m developing this game in my spare time, and I needed a list of commonly used words. I downloaded the Project Gutenberg DVD of popular texts and an open-source dictionary of words, then built a simple program which scanned the DVD, parsed out all the words, compared them against the dictionary, and updated the counts to arrive at a list of commonly used words.

And here are the top 50 words I found, in order:

the
of
and
to
in
that
he
was
it
his
is
with
for
as
you
on
had
not
be
at
but
by
this
her
or
which
from
have
they
she
all
him
we
are
were
my
me
so
one
an
no
their
if
there
who
said
them
when
would
been

I don’t know if this is cool or… mundane…

Posted in Commentary | 1 Comment

*sigh*

And then some asshole hacked my blog. The only thing I lost was the theme I had tinkered with into something resembling what I liked. Now I’m stuck with this until I can restore the old template formatting I liked.

Posted in Uncategorized | Leave a comment

How do I know what environment variables are on iOS?

Apple’s iOS operating system is built on Unix, which implies that you can take your Unix-based C code, wrap some Apple iOS UI goodness around it, and have an iPhone application.

(Well, it’s more complicated than that. But if you’re a crusty ol’ Unix hacker like myself, there is something gratifying about being able to use fopen() and getenv() instead of Apple’s NS-wrapped calls to the same routines.)

So how do you know what the environment variables are that are available on your phone?

Simple: I wrote a small iOS program which neatly displays all of the available environment variables.

And the variables available appear to be:

  • PATH: The path of available unix commands
  • TMPDIR: The temporary directory path (points to the temp folder in your sandbox)
  • SHELL: /bin/sh, natch.
  • HOME: The home directory of your application (points to the root of your sandbox)
  • USER: mobile, natch.
  • LOGNAME: mobile

and some odd ones I’ve never seen:

  • __CF_USER_TEXT_ENCODING = 0x1F5:0:0
  • CFFIXED_USER_HOME, which appears to be the same as HOME

In a debug environment I’m seeing the additional oddball variables:

  • CFLOG_FORCE_STDERR = YES
  • NSUnbufferedIO = YES
  • DYLD_INSERT_LIBRARIES = some path apparently on my host computer. (?)

Now this is on my iPhone running iOS 5.1; your mileage may vary, which is why I uploaded the program. That said, I would trust that both HOME and TMPDIR will be available and will point to the right place, and constructing the paths to the Documents and Library folders is just a matter of concatenating onto the path string returned for HOME. So if you need to write a new file to the Documents folder in your application’s home directory, you can write:

char buffer[256];
strcpy(buffer,getenv("HOME"));
strcat(buffer,"/Documents/myfile.txt");
FILE *f = fopen(buffer,"w");
...
fclose(f);
Posted in Uncategorized | Leave a comment

UDIDs are gone.

After warning from Apple, apps using UDIDs now being rejected

UDIDs are now gone.

But if you need a device-unique identifier, it’s easy enough to generate one and track the device that way. Of course, because you have your own unique identifier, you cannot match numbers against other software makers, and you can’t guarantee the identifier will survive if the user uninstalls and reinstalls the application. On the other hand, because the identifier is saved with the device’s backup, if the user upgrades his phone the identifier will move to the new phone, following the user.

Step 1: Create a UUID.

You can create the UUID using Apple’s built in UUID routines.

	CFUUIDRef ref = CFUUIDCreate(nil);
	uuid = (NSString *)CFUUIDCreateString(nil,ref);
	CFRelease(ref);

Step 2: Write the UUID out to the Documents folder, so the UUID gets backed up with the phone.

Because under the hood iOS is basically Unix, we can use the C standard library to handle creating and writing the file for us:

	char buf[256];
     /* HOME is the sandbox root directory for the application */
	strcpy(buf,getenv("HOME"));
	strcat(buf,"/Documents/appuuid.data");

	FILE *f = fopen(buf,"w");
	fputs([uuid UTF8String], f);
	fclose(f);

Step 3: If the file is already there, use what’s in the file rather than generating a new UUID. (After all, that’s the whole point of this exercise: to have a stable UUID.)

Putting all of this together, we get the following routine, which can be called when your application starts up in the main UIApplicationDelegate when you load the main window:

- (void)loadUUID
{
	char buf[256];
	strcpy(buf,getenv("HOME"));
	strcat(buf,"/Documents/appuuid.data");
	FILE *f = fopen(buf,"r");
	if (f == NULL) {
		/*
		 *	UUID doesn't exist. Create
		 */
		
		CFUUIDRef ref = CFUUIDCreate(nil);
		uuid = (NSString *)CFUUIDCreateString(nil,ref);
		CFRelease(ref);
		
		/*
		 *	Write to our file
		 */
		
		f = fopen(buf,"w");
		fputs([uuid UTF8String], f);
		fclose(f);
	} else {
		/*
		 *	UUID exists. Read from file
		 */
		
		fgets(buf,sizeof(buf),f);
		fclose(f);
		uuid = [[NSString alloc] initWithUTF8String:buf];
	}
}

This will set the uuid field in your AppDelegate class to a unique identifier, retaining it across application invocations.

Now any place where you would need the UDID, you can use the loaded uuid instead. This also has the nice property that the generated uuid is 36 characters long, four characters narrower than the 40-character UDID returned by iOS; thus you can simply drop the uuid into your back-end database code without having to widen the table column size of your existing back-end infrastructure. Further, because the UDID and uuid formats are different, you won’t get any accidental collisions between the old and new formats.

Posted in iphone, Objective C++ | Leave a comment

And then it turns upside-down in an instant.

Cartifact is in the business of making print maps for high end real-estate companies, and I was brought in to help build a technical team, to make Cartifact a tech-oriented company in order to position it for an acquisition.

But sometimes things don’t work out the way you plan.

At the same time, we brought in a new company President, who spent the last several months going around and talking to different companies with whom Cartifact had a pre-existing relationship. The conclusion he reached was that Cartifact made beautiful maps, and so Cartifact should continue making beautiful maps, increasing its exposure through better marketing and trying to capture more of the market for hand-drawn maps.

This means the idea of building a technology team and positioning Cartifact for acquisition–well, that went away. I’m not sure exactly what happened behind the scenes, but the feeling I got was that they didn’t want to keep bleeding red ink for something they had no intention of building. And so there was no more money for me.

It’s troubling when you find out the opportunity you left a rather secure job for no longer exists. And it’s troubling to discover this opportunity no longer exists with perhaps a half-hour’s notice. But that’s the way of the world sometimes.

And the important thing is to re-orient yourself, shake out your contacts, and see where you want to go.

Ten years ago, nearly to the month, I wound down In Phase Consulting, a consulting company I had run with my wife for nine years, and took a job at Symantec, for two reasons. One, right after 9/11 the market for freelancing was drying up, thanks to the Dot Com crash and the worries following 9/11. And two, I found I was bumping up against my limits: I had never managed a team, never hired people, never really learned how to delegate tasks or break larger projects down into smaller components.

And I wanted to get that experience.

Now I’m back to where I was ten years ago, but this time the market has changed. No more picking up the phone and making cold calls; we now have Facebook and LinkedIn and blogs and mailing lists and MeetUps. It’s easier now than ever to circulate out there and see what projects are available. And while this doesn’t eliminate the marketing process, it does provide better avenues for searching.

And I’ve changed: I’ve successfully built a team from scratch. I’ve successfully hired and brought people up to speed. I’ve successfully delivered large scale projects, and for the most part when I’ve had enough control to keep things from going haywire, I’ve managed to consistently bring projects in on time and under budget.

So a few hours ago I filed an S Corp app with LegalZoom and created a very basic web site, Glenview Software Corporation, along with ordering business cards. I figure I’ll give the freelance thing a try again, but this time not be afraid of going after the larger projects.

We’ll see where this lands.

If you need a highly qualified software developer with 25 years of professional experience in mobile, embedded, web, desktop and client/server architecture software, please send me an e-mail.

Meanwhile I’m going to go back to what I do best: creating damned good software by solving very hard problems. Like that LISP interpreter which I now have running embedded on an iPhone and an iPad, and which I plan to turn into a symbolic programmable calculator for those architectures.

Posted in Commentary | Leave a comment

On the development of my own Lisp interpreter, part 3: revisiting closures.

I managed to solve my Lisp closure problem outlined in this post by adopting the ideas behind this paper: Closure generation based on viewing LAMBDA as EPSILON plus COMPILE. The basic idea of the paper is that you can rewrite lambda expressions as “epsilon” expressions (that is, lambdas in which all variable state from outside the lambda block is passed in as arguments), and then compile those functions with a special method which modifies the stack when calling the lambda function.

Let me explain. Suppose we have the following bit of Lisp:

    (define (funct a)
      (lambda (b) (+ a b)))

When we call this function with an argument, say (funct 5), this returns a function pointer to a function that takes one argument (b) and adds 5 to b. So:

> (define tmp (funct 5))
> (tmp 3)
8

See what happened here? By calling funct we created a new function (which we stored in tmp). We can then call that function, and get a result.

The idea of the paper is that in order to get proper closure rules, you can walk the expression (because the beauty of Lisp is that Lisp programs are just lists of lists that a Lisp program can easily parse), and perform a substitution to rewrite our function in terms of “epsilon”, or rather, lambda functions which only have local scope.

Thus, the lambda expression of our function funct above:

    (lambda (a)
        (lambda (b) (+ a b)))

which describes a function taking one argument that returns a function taking one argument, would be rewritten:

    (lambda (a) 
        ($$closure (lambda (a b) (+ a b)) a))

Now this special function $$closure is magic. It takes as its first argument a precompiled function, and as the tail of its arguments a list of parameters. It then creates a new function which prepends those arguments onto the stack and jumps to the original function. Thus:

    ($$closure #<procedure:anon-1> 5)

In the above case, $$closure would generate the following function:

	POP DS   ; Pop the marker giving the size of the stack
	PUSH 5   ; Push the parameter 5 onto the stack
	PUSH DS  ; Push the stack size marker back onto the stack
	JMP <procedure:anon-1>

That is, the generated function simply prepends the list of arguments to the start of the parameter list (adjusting the stack frame as needed to properly mark its start and end), and then jumps to the original procedure.

The code rewriter I built also tracks the current usage of variables, so variables not used inside an interior lambda are properly ignored. For example:

    (lambda (a b)
      (set! f1 (lambda (c) (+ a c)))
      (set! f2 (lambda (c) (- b c))))

This is rewritten as:

    (lambda (a b)
      (set! f1 
            ($$closure (lambda (a c) (+ a c)) 
                       a)) 
      (set! f2 
            ($$closure (lambda (b c) (- b c)) 
                       b)))

Meaning we run our $$closure function pushing only the additional parameters needed by each interior lambda function.

We also handle the case where you can create a closure that is only accessible from an interior method. For example, if we define a function foo:

    (define foo (lambda (a) 
                        (lambda (b) (set! a (+ a b)) 
                                    a)))

This defines a function which, once run, will store the value ‘a’ away in a variable that is only accessible from the function that is returned. Calls to that interior function can then add to the interior value, and return that value.

Thus:

> (define bar (foo 5))
> (bar 1)
6
> (bar 2)
8
> (bar -3)
5

This interior variable is of course not shared:

> (define bletch (foo 3))
> (bletch 1)
4
> (bar 1)
6

Our routine accomplishes this by detecting whether any of the interior lambdas perform a set! operation on a variable held by the closure. If so, we “wrap” the variable in a list, so in effect the variable is passed by reference.

Compiling our routine with the lambda closure function, we get:

    (lambda (a) 
      (set! a ($$wrap a)) 
      ($$closure (lambda (a b) ($$store a (+ ($$fetch a) b)) 
                               ($$fetch a)) 
                 a))

The $$wrap function wraps the variable in a list. The function $$store sets the wrapped value, and $$fetch obtains the wrapped value. These three functions are in fact declared as:

(define ($$wrap v) (cons v '()))
(define ($$fetch v) (car v))
(define ($$store v val) (set-car! v val))

The way my code works is by first doing an analysis pass, which computes variable usage across the various lambdas in the declaration, and then a rewrite pass, which uses the scope information gathered to determine how to rewrite the lambda functions.

The first method is $$eval-scope, which walks through the lambda function, maintaining a record of all the variables that have been encountered. The specific record that I track contains the following information:

1) An association list of the currently encountered variables, maintained as a “push-down” association list. When I encounter a new variable, the variable information is prepended to the list of variables. This variable state information contains the following:

a) The variable name
b) A flag indicating whether the variable was written to by an interior lambda (that is, by a lambda function other than the one where it was declared). This actually has three states: $$read (the variable was read), $$write (the variable was written), and $$unused (the variable was not used by an interior lambda).
c) A pointer to the lambda where this variable was declared,
d) A list of pointers to the lambdas where this variable was used.

Note that we only create records for the variables that we find inside declaration headers: that is, for an expression like (lambda (a b)), we only note ‘a’ and ‘b’. If we see a variable that hasn’t been declared in this lambda or an outer lambda, we assume it was a global variable and thus ignore it.

2) A flag next to the association list indicating whether we’ve seen multiple lambdas in this pass. We can use this flag to skip rewriting the function if we don’t see any embedded lambdas.

3) A push-down list of the lambdas that we’re currently examining. This is important because if we have three nested lambdas, but the middle lambda doesn’t use a variable declared in the outer that is used in the inner, we must still pass the variable through the middle lambda. So a variable used in the interior is considered “used” by all of the lambdas between the one we’re currently examining and the lambda where the variable was first declared.

4) A push-down list of pointers to our association list. This is used essentially to push and pop variable scope: when we enter a lambda function we push the old scope state, and when we’re done with the lambda we pop the scope state. This allows us to quickly unwind the current variable state once we’re done processing an interior lambda.

The $$eval-scope method walks through all of the declarations we’re currently examining, and updates the scope variable. We also, for convenience’s sake, rewrite lambdas we encounter (once we’ve processed them) with a special token ($$scope), which notes for each lambda we’ve encountered (a) the lambda method itself (transformed by $$eval-scope, naturally), (b) the un-transformed lambda expression (which we need because our scope variable uses pointers to the un-transformed lambda expression as an index), (c) the variables that are used outside the lambda function (that is, the state of the closure when the lambda was encountered), and (d) the variables created and used inside the lambda.

This can get quite complicated. For our above example:

    (define foo (lambda (a) 
                        (lambda (b) (set! a (+ a b)) 
                                    a)))

The function $$eval-scope returns:

($$scope
 (lambda (a)
   ($$scope
    (lambda (b) (set! a (+ a b)) a)
    (lambda (b) (set! a (+ a b)) a)
    ((a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a))))
    ((b $$unused (lambda (b) (set! a (+ a b)) a) (lambda (b) (set! a (+ a b)) a))
     (a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a))))))
 (lambda (a) (lambda (b) (set! a (+ a b)) a))
 ()
 ((a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a)))))

Ignoring the inner lambda, the outer lambda is:

($$scope
 (lambda (a)
   ($$scope ...))  ;; the rewritten interior lambda, elided
 (lambda (a) (lambda (b) (set! a (+ a b)) a))
 ()
 ((a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a)))))

This indicates that our outer lambda method declares one variable a (see it in the fourth list but not in the third? It starts “((a $$write …”), which is written to. It is used by our outer and inner lambda functions, and declared in our outer lambda. (Let me reformat the association list so you can see it:)

 ((a $$write                                         ;; The variable is written to
     (lambda (a) (lambda (b) (set! a (+ a b)) a))    ;; Where the variable was declared
     (lambda (b) (set! a (+ a b)) a)                 ;; All the places where it was used...
     (lambda (a) (lambda (b) (set! a (+ a b)) a))))

The interior lambda gets the same treatment:

   ($$scope
    (lambda (b) (set! a (+ a b)) a) ;; The transformed lambda (which needed no transformation)
    (lambda (b) (set! a (+ a b)) a) ;; The original declaration
    ;; Variable state outside this lambda
    ((a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a))))
    ;; Variable state inside this lambda
    ((b $$unused (lambda (b) (set! a (+ a b)) a) (lambda (b) (set! a (+ a b)) a))
     (a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a))))))

Our second pass, the $$rewrite-lambda function, takes the state information expressed in our rewritten lambda function, and does two things:

(1) We rewrite all accesses to variables that are written to (marked $$write in the association list) by substituting ‘$$store’ for all ‘set!’ operations and $$fetch for all read operations, and we add to the start of the declaring lambda function a $$wrap call to wrap the arguments. (Our particular dialect of Lisp doesn’t have macros (yet), so ‘do’ and ‘let’ are explicit objects within our language, and the rewrite engine handles those correctly as well, rewriting things like ‘(do ((x 1 (+ 1 x))…’ as ‘(do ((x ($$wrap 1) ($$wrap (+ 1 ($$fetch x))))) …’.)

(2) We rewrite all $$scope expressions as lambda + closure. That is, our rewrite rules examine the $$scope function and generate a ($$closure (lambda …) …), prepending the variables passed in from outside the function as new parameters (added at the start), and creating a $$closure declaration with those same variables in the same order.

In our above example, the lambda declaration gets its final form (for our compiler):

    (lambda (a) 
      (set! a ($$wrap a)) 
      ($$closure (lambda (a b) ($$store a (+ ($$fetch a) b)) 
                               ($$fetch a)) 
                 a))

If this seems a little heavyweight, that’s because it probably is: for the Lisp compiler I’m building, I’m not expecting closures to be used a lot, and so having something that rewrites my functions during the compilation process seems a reasonable trade-off for dealing with closures. It also means that my Virtual Machine doesn’t need to understand closures at all; I can just build it as a normal stack-based VM engine with garbage collection.

Posted in Lisp | Leave a comment

REST is not secure.

Simple Security Rules

Security by obscurity never works. Assume the attacker has your source code. If you are doing some super cool obscuring of the data (like storing the account number in the URL in some obscured manner like the Citi folks apparently did), someone can and will break your algorithm and breach your system.

Excuse my language, but what on this God’s mother-fucking Earth were the Citi folks thinking? Really? REALLY?!?

But I can see the conversation amongst the developers at Citi now. “We really need to implement our web site using modern stateless protocols to communicate between the client and the server.”

Hell, I’ve been part of this conversation at a number of jobs over the past few years.

But here are the problems with a truly stateless protocol.

Stateless protocols are subject to replay attacks.

If a request sent to the back end is stateless, then (unless there is a timestamp agreed upon between client and server which expires the transaction) it necessarily follows that the same request, when sent to the server again, will trigger the same action or obtain the same results. This means that if, for example, I send a request to transfer $100 from my checking account to pay a bill, that same request, sent again, will transfer another $100 from my checking account to the bill account. Sent 10 times, it will transfer $1,000.

And so forth.

The biggest problem with replay attacks, of course, is a black-hat sniffing packets: theoretically, since the protocol is stateless, the black-hat doesn’t even have to be able to decrypt the request. He just needs to echo it over and over a bunch of times to clean out your account.

Stateless protocols are insecure protocols.

As we saw in the case of Citi, vital information specifying your account was part of the request. This means that if the bad guys can figure out how the account information was encoded, they can simply resend the request using a different account and–voilà!–instant access to every account at Citi.

Combine that with a reworked request to transfer money, and you can now write a simple script that instantly cleans out every Citi account and transfers the money to some off-shore account, preferably in a country with banking laws that favor privacy and no extradition treaties.

See, the problem with a stateless protocol is that if the protocol requires authentication, somewhere buried in every request is a flag (or a combination of variables) which says “yes, I’m logged into this account.”

Figure out how to forge that, and you can connect to any account just by saying “um, yeah, sure; I’m logged in.”

Stateless protocols expose vital information.

Similarly, a black-hat who has managed to decrypt a stateless protocol can now derive a lot of information about the people using a service. For Citi, that was bank account information. Other protocols may be sending (either in the clear or in encrypted format) other vital, sensitive information.

But we can just use SSL to guarantee that the packets are only being sent to our servers.

When was the last time you got a successful SSL handshake from a remote web server where you took the time to actually look at the certificate to see that it was issued to the entity you were connecting to? And when was the last time you logged into a secure location, got a “this certificate has expired” warning, and just clicked “ignore”?

SSL only works against a man-in-the-middle attack (where our black-hat inserts himself into the transaction rather than just sniffing the data) when users are vigilant.

And users are not.

But we can use a PKI architecture to secure our packets.

If your protocol is truly stateless, then your client must hold a public key to communicate with your server, and that public key must be a constant: the server cannot issue a key per client, because that’s–well, that’s state. Further, you can’t just work around this by having the client hold a private key for information sent by the server; a private key is no longer private when any user can download it.

So at best PKI encrypts half of the communications. And you have the key to decrypt the data coming off the server. And since all state is held by the client, that implies that if you listen to the entire session, you will eventually hear all of the state the client holds–including account information and login information. You may not know specifically what the client is sending–but then, you have the client, so with the state information and detailed knowledge of the protocol (contained in the client code), you’re pretty much in like Flynn.

My own take:

REST with a stateless protocol should be illegal when handling monetary transactions.

For all the reasons above, putting out a web site that allows account holders to access their information via a stateless RESTful protocol is simply irresponsible. It’s compromising account holders’ money in the name of some sort of protocol ideological purity.

Just fucking don’t do it.

You can eliminate most of these risks by maintaining a per-session cookie (or some other per-session token) which ties in with account information server-side.

It’s actually quite easy to use something like HttpSession to store session information. And if your session objects are serializable, many modern Java Servlet engines support serializing state information across a cluster of servers, so you don’t lose scalability.

Personally I would store some vital stateful information with the session, stored as a single object. (For example, I would hold the account number, the login status, the user’s account ID, and the like in that stateful object, and tie it in with a token on the client.) This is information that must not be spoof-able across the wire, and information that is only populated on a successful login request.

The flip side is that, for memory considerations, you should only store the session information you absolutely need to keep around: there is no need to keep the user’s username, e-mail address, and other data which can be derived from a quick query using an integer user ID. Otherwise, if you have a heavily used system, your server’s memory may become flooded with useless state information.

And don’t just generate some constant token ID associated with the account. That doesn’t protect against replay attacks, and if your ID is small enough, it just substitutes one account number for another–which is, well, pretty damned close to worthless.

Use a two-state login process to access an account.

It’s easy enough to develop a protocol similar to the APOP login command in the POP3 protocol.

The idea is simple enough. When the user attempts to log in, a first request is made to the remote server requesting a token. This token can be the timestamp from the server, a UUID generated for the purpose, or some other (ideally cryptographically secure) random number. Client-side, the password is then hashed using a one-way hash (like MD5 or SHA-1) along with the token. That is, we calculate, client-side: passHash = SHA1(password + “some-random-salt” + servertoken).

We then send this hash along with the username to the server, which performs the same password-hash calculation using its own copy of the token. (And please don’t send the damned token back to the server; that simply allows someone to replay the login by replaying the second half of the login protocol rather than having to request a new token for this specific session from the server.) If the server-side hash agrees with the client-side hash, then the user is logged in.

Create a ‘reaper’ thread which deletes state information on a regular basis.

If you are using the HttpSession object, by default many Servlet containers will expire the object in about 30 minutes or so. If you’re using a different mechanism, then you may consider storing a timestamp with your session information, touching the timestamp on each access, and in the background have a reaper thread delete session objects if they grow older than some window of time.

In the past I’ve built reaper threads which invalidated my HttpSessions, to deal with older JSP containers which apparently weren’t reaping the sessions when I was expecting them to. (I’ve seen this behavior on shared JSP server services such as the ones provided by LunarPages, and better to be safe than sorry.)

For God’s sake, just say “NO!” to REST. Especially when dealing with banking transactions. Especially if you’re the banking services I use: I don’t want my money to be transferred to some random technologically sophisticated mob syndicate, simply because you needed your fucking ideological purity of a stateless RESTful protocol and didn’t think through the security consequences.

Posted in Uncategorized | 3 Comments

Sorry.

I started blogging my political thoughts here: Fuzzy Little Things That I Find Interesting

Problem is, MarsEdit 3 makes it easy to post blog posts to multiple blogs–and two of my political posts landed in the wrong place. My Chaos In Motion blog is all about technical issues that I find interesting, or for me to post items that I want to remember (and share).

Sorry about the politics.

Posted in Uncategorized | Leave a comment

Dragging and dropping objects in GWT

If you want to add a click-and-drag handler in GWT, so that (for example) when you click on an image object you can move it around (and drag content logically associated with it), it’s fairly straightforward.

First, you need to implement a MouseDownEvent, a MouseMoveEvent, and a MouseUpEvent handler, and attach them to your image. (Me, I like putting this into a single class which contains the state associated with the dragging event.) Thus:

	Image myImage = ...
     
	EventHandler h = new EventHandler(myImage);
	myImage.addMouseDownHandler(h);
	myImage.addMouseMoveHandler(h);
	myImage.addMouseUpHandler(h);

Now the event handler needs to do some things beyond just tracking where the mouse was clicked, where it is being dragged to, and how the universe should be changed as the dragging operation takes place. We also need to trap the event so we can handle dragging outside of our object (by capturing drag events), and we also have to prevent the event from percolating upwards, so we get the dragging events rather than the browser.

This means that our event dragging class looks something like:

private class EventHandler implements MouseDownHandler, MouseMoveHandler, MouseUpHandler
{
	private boolean fIsClicked;
	private Widget fMyDragObject;

	EventHandler(Widget w)
	{
		fMyDragObject = w;
	}
		
	@Override
	public void onMouseUp(MouseUpEvent event)
	{
		// Do other release operations or appropriate stuff

		// Release the capture on the focus, and clear the flag
		// indicating we're dragging
		fIsClicked = false;
		Event.releaseCapture(fMyDragObject.getElement());
	}

	@Override
	public void onMouseMove(MouseMoveEvent event)
	{
		// If mouse is not down, ignore
		if (!fIsClicked) return;

		// Do something useful here as we drag
	}

	@Override
	public void onMouseDown(MouseDownEvent event)
	{
		// Note mouse is down.
		fIsClicked = true;

		// Capture mouse and prevent event from going up
		event.preventDefault();
		Event.setCapture(fMyDragObject.getElement());

		// Initialize other state we need as we drag/drop
	}
}
Posted in GWT, Things To Remember | Leave a comment

A FlexTable that handles mouse events.

Reverse engineering the GWT event handler code to add new events is simple. Here’s a FlexTable which also handles mouse events:

	/**
	 * Internal flex table declaration that syncs mouse down/move/up events
	 */
	private static class MouseFlexTable extends FlexTable implements HasAllMouseHandlers
	{
		@Override
		public HandlerRegistration addMouseDownHandler(MouseDownHandler handler)
		{
			return addDomHandler(handler, MouseDownEvent.getType());
		}

		@Override
		public HandlerRegistration addMouseUpHandler(MouseUpHandler handler)
		{
			return addDomHandler(handler, MouseUpEvent.getType());
		}

		@Override
		public HandlerRegistration addMouseOutHandler(MouseOutHandler handler)
		{
			return addDomHandler(handler, MouseOutEvent.getType());
		}

		@Override
		public HandlerRegistration addMouseOverHandler(MouseOverHandler handler)
		{
			return addDomHandler(handler, MouseOverEvent.getType());
		}

		@Override
		public HandlerRegistration addMouseMoveHandler(MouseMoveHandler handler)
		{
			return addDomHandler(handler, MouseMoveEvent.getType());
		}

		@Override
		public HandlerRegistration addMouseWheelHandler(MouseWheelHandler handler)
		{
			return addDomHandler(handler, MouseWheelEvent.getType());
		}
	}

Addendum:

If you want the location (the cell) in which the mouse event happened, you can extend MouseFlexTable with the additional methods:

		public static class HTMLCell
		{
			private final int row;
			private final int col;
			
			private HTMLCell(int r, int c)
			{
				row = r;
				col = c;
			}
			
			public int getRow()
			{
				return row;
			}
			
			public int getCol()
			{
				return col;
			}
		}
		
		public HTMLCell getHTMLCellForEvent(MouseEvent<?> event) 
		{
			Element td = getEventTargetCell(Event.as(event.getNativeEvent()));
			if (td == null) {
				return null;
			}

			int row = TableRowElement.as(td.getParentElement()).getSectionRowIndex();
			int column = TableCellElement.as(td).getCellIndex();
			return new HTMLCell(row, column);
		}
Posted in GWT, Things To Remember | Leave a comment