Announcing a new version of SecureChat

I’ve just checked in a new version of SecureChat on the main branch at GitHub.

New features include:

  • A working Android client.
  • Various notification bug fixes.
  • Various iOS bug fixes.

Why am I doing this?

Even after all these months, since the Apple v FBI fight began, I’ve been hearing way too much stupidity about encryption. The core complaint I have is the idea that somehow encrypted messaging is the province of large corporations and large government entities, entities that must somehow cooperate in order to assure our security.

And it’s such a broken way to think about encryption.

This is a demonstration of a client for iOS, a client for Android and a server which together allow real-time encrypted chatting between clients. What makes the chat secure is the fact that each device generates its own public/private key pair, and all communications are encrypted against the device’s public key. The private key never leaves the device, and is stored in a secure keychain with a weak checksum that would corrupt the private key if someone attempts a brute-force attack against the device’s secure keychain.

Meaning there is no way to decrypt the messages if all you have is access to the server. Messages are stored only on each device, encrypted against the device’s public key–meaning a data dump of the device won’t get you the decrypted messages. And a brute-force attempt to decode the device’s keychain is more likely to corrupt the keychain than it is to reveal the private key.
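
To make the key model concrete, here is a minimal sketch (in Java, and emphatically not the SecureChat code itself) of the idea: the key pair is generated on the device, only the public half is ever shared, and anything encrypted against the public key can only be opened with the private key that never leaves the device. A real system would wrap a random symmetric key this way rather than encrypting the payload directly with RSA.

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class DeviceKeyDemo
{
    public static void main(String[] args) throws Exception
    {
        /* Generated once, on the device; the private half stays in the keychain */
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair deviceKeys = gen.generateKeyPair();

        /* Anyone, including the server, can encrypt to the device... */
        Cipher enc = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        enc.init(Cipher.ENCRYPT_MODE, deviceKeys.getPublic());
        byte[] wire = enc.doFinal("hello".getBytes("UTF-8"));

        /* ...but only the holder of the private key can read the result */
        Cipher dec = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        dec.init(Cipher.DECRYPT_MODE, deviceKeys.getPrivate());
        System.out.println(new String(dec.doFinal(wire), "UTF-8"));
    }
}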


Security is a matter of architecture, not just salt sprinkled on top to enhance the flavor. Which is why there are so many security breaches out there: most software architects are terrible at their job; they simply do not consider the security implications of what they’re doing. Worse: many of the current “fads” in designing client/server protocols are inherently insecure.

This is an example of what one person can do in his spare time to create a secure end-to-end chat system which cannot be easily compromised. And unlike other end-to-end security systems (where a communications key is generated by the server rather than on the device), this protocol cannot be subverted simply by compromising the code on the server.

Targeted broadcasting of multithreaded results.

Okay, so here’s a basic problem. You’re building an iOS application (or an Android application) which needs to download an image from a remote site for display in a view.

So you write code similar to the following:

- (void)setImageUrlTest:(NSString *)url
{
	/*
	 *	Request the download on a background thread. Once we've downloaded the
	 *	results, we kick off a new block in the main thread to update the
	 *	image for this image object, and then animate a fade-in
	 */
	
	dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
		NSURLResponse *resp;
		NSURLRequest *req = [NSURLRequest requestWithURL:[NSURL URLWithString:url]];
		NSData *data = [NSURLConnection sendSynchronousRequest:req returningResponse:&resp error:nil];
		if (data) {
			/*
			 *	We have the data from the remote server. Kick off into the main
			 *	thread the UI update
			 */
			
			dispatch_async(dispatch_get_main_queue(), ^{
				/*
				 *	Be a little tricky: fade in from transparent with a half
				 *	second delay
				 */
				
				UIImage *image = [UIImage imageWithData:data];
				self.alpha = 0;
				self.image = image;
				[UIView animateWithDuration:0.5 animations:^{
					self.alpha = 1;
				}];
			});
		} else {
			NSLog(@"An error occurred downloading image");
		}
	});
}

If we were doing this on Android, we could use a Java thread. (Ideally we’d want to create a thread queue, but for illustration purposes we will just create a new thread.)

    void setImageUrlTest(final String url)
    {
        final Handler h = new Handler();
        
        /*
         *  Request the download on a background thread. Once we've downloaded
         *  the results, we kick off a runnable in the main thread using the
         *  handler created above, animating a fade-in
         */
        new Thread() {
            @Override
            public void run()
            {
                try {
                    URL u = new URL(url);
                    URLConnection conn = u.openConnection();
                    InputStream is = conn.getInputStream();
                    final Bitmap bmap = BitmapFactory.decodeStream(is);
                    is.close();
                    
                    /*
                     * If we get here we have a bitmap. Post a runnable which
                     * will set the image in the main event loop; all UI calls
                     * must happen on the main thread
                     */
                    h.post(new Runnable() {
                        @Override
                        public void run()
                        {
                            setVisibility(View.INVISIBLE);
                            setImageBitmap(bmap);
                            AlphaAnimation a = new AlphaAnimation(0.0f, 1.0f);
                            a.setDuration(500);
                            a.setAnimationListener(new AnimationListener() {
                                @Override
                                public void onAnimationEnd(Animation animation)
                                {
                                }

                                @Override
                                public void onAnimationRepeat(Animation animation)
                                {
                                }

                                @Override
                                public void onAnimationStart(Animation animation)
                                {
                                    setVisibility(View.VISIBLE);
                                }
                            });
                            startAnimation(a);
                        }
                    });
                }
                catch (Throwable th) {
                    Log.d("DownloadImageView","Failed to download image " + url, th);
                }
            }
        }.start();
    }

In both cases we kick off a background task which downloads the image; then, using a reference to the original image view, we load the image into the image view (on the main thread, where all UI work needs to take place); and finally we trigger an animation which fades in the view.

Question: What happens if the image view goes away?

On Android and on iOS it’s fairly routine to have a slow internet connection, and the user may dismiss your view controller or activity before the image finishes downloading.

But there is a problem with that.

On iOS, the block object that you create has an implicit ‘retain’ on the UIImageView object. This means that, until the network operation completes, all the resources associated with the UIImageView cannot be released.

Things get worse on Android, which (ironically enough) gives a typical application a much smaller memory budget: not only can’t the ImageView object go away, but the ImageView object holds a reference to the activity that contains it. Meaning not only is the ImageView object retained by the anonymous thread declaration, but so is the containing activity–along with the entire rest of the view hierarchy and all the other resources associated with that activity.

Given that a network timeout can be up to 30 seconds, this means if the user is browsing in and out of different screens, you can very quickly run out of memory as memory is filled up with defunct views whose sole purpose is to exist as a target for a network activity that is no longer really necessary.

What to do?

Okay, the following is not a viable solution, despite my seeing it in multiple places:

	__weak UIImageView *weakSelf = self;
	dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
		NSURLResponse *resp;
		NSURLRequest *req = [NSURLRequest requestWithURL:[NSURL URLWithString:url]];
		NSData *data = [NSURLConnection sendSynchronousRequest:req returningResponse:&resp error:nil];
		if (data) {
			/*
			 *	We have the data from the remote server. Kick off into the main
			 *	thread the UI update
			 */
			
			dispatch_async(dispatch_get_main_queue(), ^{
				/*
				 *	Be a little tricky: fade in from transparent with a half
				 *	second delay
				 */
				
				UIImage *image = [UIImage imageWithData:data];
				weakSelf.alpha = 0;
				weakSelf.image = image;
				[UIView animateWithDuration:0.5 animations:^{
					weakSelf.alpha = 1;
				}];
			});
		} else {
			NSLog(@"An error occurred downloading image");
		}
	});

This doesn’t work reliably: without ARC (or with __unsafe_unretained) the weakSelf reference is left pointing to garbage when the image view goes away, causing a crash; and even under ARC, where weakSelf is simply zeroed, the network request still runs to completion only to have its result thrown away.

Broadcast to the rescue

One possible solution on iOS is to use NSNotificationCenter to send notifications when a network operation completes successfully. The advantage of using NSNotificationCenter (or an equivalent broadcast/receive mechanism on Android) is that the receiver is effectively detached from the broadcaster: a receiver can detach itself on release, and if the broadcast message has no receivers, the message is dropped on the floor.

And in our case this is the right answer: once the image view has gone away we don’t care what the resulting image was supposed to be.

There is a problem with this, though: the code which handles the incoming message is also separated logically from the code that makes the request. And unless you insert code into the broadcast receiver which explicitly detaches the image view from the NSNotificationCenter once a response is received, you can have dozens (or even hundreds) of broadcast receivers listening to each incoming image.

Another way: CMBroadcastThreadPool

By combining the semantics of a block notification system with a broadcast/receiver pair we can circumvent these problems.

Internally we maintain a map between a request and the block receiving the response. Each block also is associated with a ‘handler’ object; the object which effectively ‘owns’ the response. So, in the case of our image view, the ‘handler’ object is our image view itself.

When our image view goes away, we can notify our thread pool object using the removeHandler: method; this walks through the table of requests and response blocks, deleting those response blocks associated with the handler being removed:

- (void)dealloc
{
	[[CMNetworkRequest shared] removeHandler:self];
}

Note: CMNetworkRequest inherits from CMBroadcastThreadPool to implement the network semantics. More information below.

We can then submit a request using the request:handler:response: method; this takes in a request (in our case, the NSURLRequest for an image), the handler (that is, the object which will be receiving the response) and the block to invoke when the response is received.

- (void)setImageUrl:(NSString *)url
{
	CMNetworkRequest *req = [CMNetworkRequest shared];
	[req removeHandler:self];	/* Remove old handler */
	NSURLRequest *rurl = [NSURLRequest requestWithURL:[NSURL URLWithString:url]];
	[req request:rurl handler:self response:^(NSData *data) {
		UIImage *image = [UIImage imageWithData:data];
		self.alpha = 0;
		self.image = image;
		[UIView animateWithDuration:0.5 animations:^{
			self.alpha = 1;
		}];
	}];
}

The call to request:handler:response: stores the handler and response in association with the request, then executes the request in a background thread. Once the response is received, the CMBroadcastThreadPool object looks up the handler and block to invoke, and if present, invokes the block.

However, if the UIImageView has gone away, there are no blocks to invoke–and the network response is dropped on the floor.


Internally, our CMBroadcastThreadPool class on iOS invokes an internal method, responseForRequest:, to process the request. This method is invoked inside a block dispatched to a background queue via Grand Central Dispatch.

The class itself is presented in full here:

CMBroadcastThreadPool.h

//
//  CMBroadcastThreadPool.h
//  TestThreadPool
//
//  Created by William Woody on 7/20/13.
//  Copyright (c) 2013 William Woody. All rights reserved.
//

#import <Foundation/Foundation.h>

@interface CMBroadcastThreadPool : NSObject
{
	@private
		NSMutableSet *inProcess;
		NSMutableDictionary *receivers;
}

- (void)request:(id<NSObject, NSCopying>)request handler:(id<NSObject>)h response:(void (^)(id<NSObject>))resp;
- (void)removeHandler:(id<NSObject>)h;

/* Override this method for processing requests */
- (id<NSObject>)responseForRequest:(id<NSObject, NSCopying>)request;

@end

CMBroadcastThreadPool.m

//
//  CMBroadcastThreadPool.m
//  TestThreadPool
//
//  Created by William Woody on 7/20/13.
//  Copyright (c) 2013 William Woody. All rights reserved.
//

#import "CMBroadcastThreadPool.h"

/************************************************************************/
/*																		*/
/*	Internal Storage													*/
/*																		*/
/************************************************************************/

@interface CMBroadcastStore : NSObject
@property (retain) id<NSObject> handler;
@property (copy) void (^response)(id<NSObject>);
@end

@implementation CMBroadcastStore

#if !__has_feature(objc_arc)
- (void)dealloc
{
	[_handler release];
	[_response release];
	[super dealloc];
}
#endif

@end

/************************************************************************/
/*																		*/
/*	Thread pool															*/
/*																		*/
/************************************************************************/

@implementation CMBroadcastThreadPool

- (id)init
{
	if (nil != (self = [super init])) {
		inProcess = [[NSMutableSet alloc] initWithCapacity:10];
		receivers = [[NSMutableDictionary alloc] initWithCapacity:10];
	}
	return self;
}

#if !__has_feature(objc_arc)
- (void)dealloc
{
	[inProcess release];
	[receivers release];
	[super dealloc];
}
#endif

/*	request:handler:response:
 *
 *		Submit a request that will be sent to the specified handler, via the
 *	response block
 */

- (void)request:(id<NSObject, NSCopying>)request handler:(id<NSObject>)h response:(void (^)(id<NSObject>))resp
{
	@synchronized(self) {
		/*
		 *	Add this to the map of handlers for this request
		 */
		
		NSMutableArray *recarray = [receivers objectForKey:request];
		if (!recarray) {
			recarray = [[NSMutableArray alloc] initWithCapacity:10];
			[receivers setObject:recarray forKey:request];
#if !__has_feature(objc_arc)
			[recarray release];
#endif
		}
		
		CMBroadcastStore *store = [[CMBroadcastStore alloc] init];
		store.handler = h;
		store.response = resp;
		[recarray addObject:store];
#if !__has_feature(objc_arc)
		[store release];
#endif

		/*
		 *	Now enqueue a request in GCD. This only enqueues the item if the
		 *	item is not presently in the queue.
		 */
		 
		if (![inProcess containsObject:request]) {
			[inProcess addObject:request];
			
			dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
				
				/* Process for response */
				id<NSObject> response = [self responseForRequest:request];
				
				/* Get the list of receivers; if any exist, trigger response */
				@synchronized(self) {
					NSMutableArray *a = [receivers objectForKey:request];
					if (a) {
						dispatch_async(dispatch_get_main_queue(), ^{
							for (CMBroadcastStore *s in a) {
								s.response(response);
							}
						});
						[receivers removeObjectForKey:request];
					} else {
						NSLog(@"No receivers");
					}
					
					/* Remove from list of items in queue */
					[inProcess removeObject:request];
				}
			});
		}
		
	}
}

/*	removeHandler:
 *
 *		Remove the handler, removing all the receivers for this incoming
 *	message
 */

- (void)removeHandler:(id<NSObject>)h
{
	@synchronized(self) {
		NSArray *keys = [receivers allKeys];
		for (id request in keys) {
			/*
			 *	Remove selected handlers associated with this receiver
			 */
			
			NSMutableArray *a = [receivers objectForKey:request];
			int i,len = [a count];
			for (i = len-1; i >= 0; --i) {
				CMBroadcastStore *store = [a objectAtIndex:i];
				if (store.handler == h) {
					[a removeObjectAtIndex:i];
				}
			}
			if ([a count] <= 0) {
				/*
				 *	If empty, remove the array of responses.
				 */
				
				[receivers removeObjectForKey:request];
			}
		}
	}
}

/*	responseForRequest:
 *
 *		This returns a response for the specified request. This should be
 *	overridden by subclasses to do the actual processing
 */

- (id<NSObject>)responseForRequest:(id<NSObject, NSCopying>)request
{
	return nil;
}

@end

In Java

For Java I’ve done the same sort of thing, except I’ve also included a thread pool mechanism which manages a finite number of background threads. I’ve also added code which causes a request to be dropped entirely if it is not being processed. For example, if you’re attempting to download an image, but the image view goes away, and the request to download the image hasn’t started being processed by a background thread, then we drop the request entirely.

To use this on Android you need to override runOnMainThread to post the runnable onto the main thread. (This can be done using Android’s ‘Handler’ class; a sketch follows the class listing below.) You also need to provide the processRequest method.

BroadcastThreadPool.java

/*  BroadcastThreadPool.java
 *
 *  Created on Jul 20, 2013 by William Edward Woody
 */

package com.glenviewsoftware.bthreadpool;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Map;

/**
 * A thread pool which sends the response to a request to a broadcast list,
 * which can be disposed of at any time.
 */
public abstract class BroadcastThreadPool<Response,Request,Handler>
{
    /// The number of threads that can simultaneously run
    private int fMaxThreads;
    
    /// The number of threads currently operating
    private int fCurThreads;
    
    /// The number of waiting threads
    private int fWaitingThreads;
    
    /// The request queue; a queue of requests to be processed
    private LinkedList<Request> fRequestQueue;
    
    /// The list of requests that are being processed
    private HashSet<Request> fInProcess;
    
    /// The request response mapping; this maps requests to their responses.
    private HashMap<Request,HashMap<Handler,ArrayList<Receiver<Response>>>> fReceivers;
    
    
    private class BackgroundThread implements Runnable
    {

        @Override
        public void run()
        {
            ++fCurThreads;
            ++fWaitingThreads;
            
            for (;;) {
                /*
                 * Get the current request
                 */
                
                Request req;
                
                synchronized(BroadcastThreadPool.this) {
                    while (fRequestQueue.isEmpty()) {
                        try {
                            BroadcastThreadPool.this.wait();
                        }
                        catch (InterruptedException e) {
                        }
                    }
                    
                    /* When we get a request, note it's in process and mark thread in use */
                    req = fRequestQueue.removeFirst();
                    fInProcess.add(req);
                    --fWaitingThreads;
                }
                
                /*
                 * Process the response. If an exception happens, set the response
                 * to null. (Is this right?)
                 */
                
                Response resp;
                try {
                    resp = processRequest(req);
                }
                catch (Throwable th) {
                    /* Unexpected failure. */
                    resp = null;
                }
                
                /*
                 * Now send the response
                 */
                
                synchronized(BroadcastThreadPool.this) {
                    /* Remove request from list of in-process requests */
                    fInProcess.remove(req);
                    
                    /* Get map of receivers for this request and remove from list */
                    final HashMap<Handler,ArrayList<Receiver<Response>>> rmap = fReceivers.get(req);
                    fReceivers.remove(req);
                    
                    /* Send response to all receivers */
                    final Response respFinal = resp;
                    runOnMainThread(new Runnable() {
                        @Override
                        public void run()
                        {
                            for (ArrayList<Receiver<Response>> l: rmap.values()) {
                                for (Receiver<Response> rec: l) {
                                    rec.processResponse(respFinal);
                                }
                            }
                        }
                    });
                    
                    ++fWaitingThreads;
                }
            }
        }
        
    }
    
    /**
     *	Broadcast receiver. This is the interface to the abstract class which
     *  will receive the response once the message has been processed.
     */
    public interface Receiver<Response>
    {
        void processResponse(Response r);
    }
    
    /**
     * Create a new thread pool which uses a broadcast/receiver pair to handle
     * requests.
     * @param maxThreads
     */
    public BroadcastThreadPool(int maxThreads)
    {
        fMaxThreads = maxThreads;
        
        fReceivers = new HashMap<Request,HashMap<Handler,ArrayList<Receiver<Response>>>>();
        fRequestQueue = new LinkedList<Request>();
        fInProcess = new HashSet<Request>();
    }
    
    /**
     * This enqueues the request, adds the response to the list of listeners
     * @param r
     * @param h
     * @param resp
     */
    public synchronized void request(Request r, Handler h, Receiver<Response> resp)
    {
        boolean enqueue = false;
        
        /*
         *  Step 1: Add this to the hash map of handlers. If we need to
         *  create a new entry, we probably have to enqueue the request
         */
        
        HashMap<Handler,ArrayList<Receiver<Response>>> rm = fReceivers.get(r);
        if (rm == null) {
            /* Enqueue if we're not currently processing the request */
            enqueue = !fInProcess.contains(r);
            
            /* Add this receiver */
            rm = new HashMap<Handler,ArrayList<Receiver<Response>>>();
            fReceivers.put(r,rm);
        }
        
        /*
         * Step 2: Add this receiver to the list of receivers associated with
         * the specified handler
         */
        ArrayList<Receiver<Response>> list = rm.get(h);
        if (list == null) {
            list = new ArrayList<Receiver<Response>>();
            rm.put(h, list);
        }
        list.add(resp);
        
        /*
         * Step 3: If we need to enqueue the request, then enqueue it and wake
         * up a thread to process. Note we only enqueue a request if the same
         * request hasn't already been enqueued.
         */
        
        if (enqueue) {
            fRequestQueue.addLast(r);
            
            if ((fWaitingThreads <= 0) && (fCurThreads < fMaxThreads)) {
                /*
                 * Create new processing thread
                 */
                
                Thread th = new Thread(new BackgroundThread(),"BThread Proc");
                th.setDaemon(true);
                th.start();
            }
            
            notify();
        }
    }
    
    /**
     * This is called when my handler is being disposed or going inactive and
     * is no longer interested in the response. The handler will be removed
     * and all of the responses will go away
     * @param h
     */
    public synchronized void remove(Handler h)
    {
        Iterator<Map.Entry<Request,HashMap<Handler,ArrayList<Receiver<Response>>>>> iter;
        
        /*
         * Iterate through all requests, removing handlers. If the request
         * goes empty, remove
         */
        
        iter = fReceivers.entrySet().iterator();
        while (iter.hasNext()) {
            Map.Entry<Request,HashMap<Handler,ArrayList<Receiver<Response>>>> e = iter.next();
            
            HashMap<Handler,ArrayList<Receiver<Response>>> pm = e.getValue();
            pm.remove(h);
            if (pm.isEmpty()) {
                /*
                 * The request is no longer needed. If it is not in process,
                 * remove from the queue, as it doesn't need to be processed.
                 */
                
                Request req = e.getKey();
                if (!fInProcess.contains(req)) {
                    fRequestQueue.remove(req);
                    iter.remove();      // also drop the now-empty receiver map
                }
            }
        }
    }
    
    /**
     * runOnMainThread: override on an OS which requires responses on the
     * main thread. For now, this just executes directly
     * @param r
     */
    protected void runOnMainThread(Runnable r)
    {
        r.run();
    }

    /**
     * This is the abstract method executed in the thread queue which responds
     * with a specified response.
     * @param req
     * @return
     */
    protected abstract Response processRequest(Request req);
}
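
As noted above, to use the pool on Android you override runOnMainThread so that receivers always fire on the UI thread. A minimal sketch of what that subclass could look like follows; the class name is mine, not part of the original code:

package com.glenviewsoftware.bthreadpool;

import android.os.Handler;
import android.os.Looper;

/**
 * An Android-aware pool: responses are posted to the main looper, so receivers
 * always run on the UI thread.
 */
public abstract class AndroidBroadcastThreadPool<Response,Request,H>
        extends BroadcastThreadPool<Response,Request,H>
{
    /** A Handler bound to the main looper */
    private final Handler fMain = new Handler(Looper.getMainLooper());

    public AndroidBroadcastThreadPool(int maxThreads)
    {
        super(maxThreads);
    }

    @Override
    protected void runOnMainThread(Runnable r)
    {
        fMain.post(r);
    }
}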

Here is an example of using this class to receive data from a remote network connection. We would pass in a string and receive a byte array of the data from the remote server.

NetworkThreadPool.java

/*  NetworkThreadPool.java
 *
 *  Created on Jul 22, 2013 by William Edward Woody
 */

package com.glenviewsoftware.bthreadpool;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class NetworkThreadPool extends BroadcastThreadPool<byte[], String, Object>
{
    public NetworkThreadPool()
    {
        /* Max of 5 background threads */
        super(5);
    }
    
    private byte[] readFromInput(InputStream is) throws IOException
    {
        byte[] buffer = new byte[512];
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        int rlen;
        
        while (0 < (rlen = is.read(buffer))) {
            baos.write(buffer, 0, rlen);
        }
        
        return baos.toByteArray();
    }

    @Override
    protected byte[] processRequest(String req)
    {
        try {
            URL url = new URL(req);
            URLConnection conn = url.openConnection();
            InputStream is = conn.getInputStream();
            byte[] buffer = readFromInput(is);
            is.close();
            return buffer;
        }
        catch (Throwable th) {
            // Handle exception
            return null;
        }
    }

    private static NetworkThreadPool gNetworkThreadPool;

    public static NetworkThreadPool shared()
    {
        if (gNetworkThreadPool == null) gNetworkThreadPool = new NetworkThreadPool();
        return gNetworkThreadPool;
    }
}

We can then invoke this by writing:

        NetworkThreadPool.shared().request(imageUrl, this, new BroadcastThreadPool.Receiver<byte[]>() {
            @Override
            public void processResponse(byte[] r)
            {
                /* Convert to image bitmap and load */
                ...
            }
        });

and of course when the image view goes away, we can (on onDetachedFromWindow) write:

@Override
protected void onDetachedFromWindow()
{
    super.onDetachedFromWindow();
    NetworkThreadPool.shared().remove(this);
}

Disclaimer:

This code is loosely sketched together in order to illustrate the concept of using a thread pool married with a broadcast/receiver model to allow us to handle the case where the receiver no longer exists prior to a request completing in the background.

I believe the code works but it is not thoroughly tested. There may be better ways to do this.

Feel free to use any of the above code in your own projects.

And if you need a dynamite Android or iOS developer on a contract basis, drop me a line. 🙂

Please resize your designs to test if they work!

*sigh*

I remember months ago making this rant on my Facebook account. Yet it still continues.

Okay, look: if you’re laying out a user interface for Android or for iOS, remember that on iOS the minimum feature size someone’s finger can reasonably touch is 44 x 44 pixels. (On “retina-resolution displays” this becomes 88 x 88 pixels.)

So stop designing what looks good on your gigantic 30″ monitor, and start designing for what will look good on a 7″ iPad Mini or an iPhone 5 or a 4″ Android phone.

If you really want to see what a design will look like to the user, take your 2048×1536 Photoshop and resize it down to 628 x 471 pixels. (NO, not 1024 x 768.)

On a typical desktop monitor with 100 dpi resolution, a 628×471 pixel image is approximately the same physical size as the iPad Mini with its 163 dpi screen: the Mini’s 1024×768 points at 163 dpi work out to roughly 6.3″ × 4.7″, which at 100 dpi is about 628 × 471 pixels.

If you resize your design down to that size and the design looks like a muddled mess–well, guess the fuck what? When the programmer is done implementing your design, it’s going to look like a muddled mess.

And as the designer, guess whose fault that is? The programmer’s? Think again.

The same goes for the iPhone 5 (196 x 348), the iPhone 4 (196 x 294), the Google Nexus 4 (405 x 243), the Nexus 7 (592 x 370) and the like.

Remember: your desktop screen is around 100 dpi. The devices you are designing for, however, are not–and what may look spacious and free and open on your 30″ monitor at 100 dpi will look tiny and cramped and cluttered on a device with only a 4″ or 7″ or 10″ diagonal…

Learning to fly, and Jeppesen’s map data on Garmin is wrong.

It’s been a while since I’ve posted, I know.

But I have a good excuse. I’ve been learning to fly.

Flying is the coolest thing I’ve done in a long time. And in the past six months I’ve gone from my first lessons to being just a few weeks away from my check ride. My last flight was my long cross country from Whiteman Airport to Bakersfield and San Luis Obispo. And the view! There is nothing cooler than seeing Avila Bay from the front windscreen of an airplane under your control.

Now, to justify spending all this money learning to fly (it ain’t cheap!), I’ve been spending some time building aviation-related software products. My first product was an E6B calculator which also includes methods for calculating maneuvering speed (which changes as the weight of your plane changes, and knowing it is vital when you hit turbulence).

My second product will be an EFB (electronic flight bag): a program which shows you where you are on a map displaying airspace data, and which also allows you to create a flight plan and file it with the FAA.

And it is in building the mapping engine for Android (my first targeted platform) that my current story starts.

Building a map and testing.

In order to make rendering on Android quick, I’ve built a slippy map engine that uses OpenGL. There are several advantages to this, the biggest being that if you scroll the map around you don’t have to redraw the entire screen; a few OpenGL translate or rotate calls and you’re done. Add some code to detect when you need to rebuild your tiles, and you can render the tiles in a background thread and swap them into the OpenGL scene once they’re done. This allows you to scroll around in real time even though it may take a second or so to render all the complex geometry.

And in testing my slippy map code, I noticed something.

My source of data for the shape of the airspace around Burbank, Van Nuys and Whiteman is the FAA FADDS (Federal Aeronautical Data Distribution System) database, updated every 56 days. I have the latest data set for this current cycle, and I’ve rendered it using my OpenGL slippy map engine in the following screen snapshot:

You can ignore the numbers; they’re just for debugging purposes. The airspace being highlighted is the airspace at my altitude (currently set to 0); dim lines show airspace at a different flight level.

And I noticed something–interesting–when comparing the map with my Garmin Aera 796 with the Jeppesen America Navigation Data set:

There are a few differences.

Okay, let’s see what the printed VFR map shows for the same area:

I’ve taken the liberty of rotating the map to roughly the same orientation as the other maps.

And–there are errors.

Now the difference between the FADDS data set and the printed map is trivial: there is this extra crescent area that on the printed map belongs to Van Nuys:

The errors with the Jeppesen data set, however, are worse:

On this image I’ve superimposed the image of the terminal air chart for Van Nuys, Burbank and Whiteman on top of the Garmin’s screen. It may be hard to see exactly what’s going on, but if you look carefully you see three errors.

The first error is that the west side of Van Nuys’ airspace has been straightened out into a north-south line. On the chart, Van Nuys’ airspace curves to match Burbank’s class C airspace on the west.

The second error is the northern edge of Burbank’s C airspace over Whiteman. Whiteman’s class D airspace is entirely underneath Burbank’s class C airspace–but the border has been turned into a straight line on the Garmin.

The third error is a tiny little edge of airspace that shows Van Nuys and Whiteman’s class D airspaces overlapping. On the printed map, this little wedge belongs to Whiteman.

They say a handheld GPS device should only be used for “situational awareness” and not for navigation.

Well, one reason is simple: the airspace maps you’re looking at on the hand-held may be wrong.

I’ve also noticed similar errors around Oakland’s Class C airspace. Large chunks of the airspace have been turned from nice curving lines (showing a radius from a fixed point) into roughly shaped polygons.

Now my guess is this: the raw FADDS data from the FAA is similar to most mapping data: rather than specifying round curves, the georeferenced data is specified as a polygon with a very large number of nodes; some of the airspace curves are specified with several hundred points.

And somewhere during the conversion process, the number of points is being reduced to keep the file size compact and the rendering fast: if one edge of the rendered polygon is less than a couple of pixels long, there is no point keeping the nodes of that edge; you can approximate the polygon with one that has fewer edges while introducing less than a single display pixel of error.

But somewhere along the path, too many polygon edges are being removed–turning what should be a curved line into a straight edge.

I don’t know if this is because there is a limit to the number of polygon lines permitted in the file format being used for export and import, or if this is because approximation code is going haywire.

But curved lines are being turned straight–which implies if you are skirting someone’s airspace, and relying on your hand-held Garmin–you could very well be intruding into someone’s airspace and not even know it.
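
To make that guess concrete, here is a rough sketch (my own illustration, not anything from the FAA or Jeppesen toolchains) of the kind of vertex reduction being described: drop a vertex whenever it lies within some pixel tolerance of the edge joining the last kept vertex and the next point. Set the tolerance too generously and a gentle arc collapses into a long straight edge.

import java.util.ArrayList;
import java.util.List;

import android.graphics.PointF;

public class PolySimplify
{
    /* Distance in pixels from point p to the line segment a-b */
    private static float distanceToSegment(PointF p, PointF a, PointF b)
    {
        float dx = b.x - a.x, dy = b.y - a.y;
        float lenSq = dx*dx + dy*dy;
        if (lenSq == 0) return PointF.length(p.x - a.x, p.y - a.y);
        float t = ((p.x - a.x)*dx + (p.y - a.y)*dy) / lenSq;
        t = Math.max(0, Math.min(1, t));
        return PointF.length(p.x - (a.x + t*dx), p.y - (a.y + t*dy));
    }

    /* Keep a vertex only if it is farther than tolerancePx from the edge formed
       by the previous kept vertex and the following vertex */
    public static List<PointF> simplify(List<PointF> poly, float tolerancePx)
    {
        if (poly.size() < 3) return new ArrayList<PointF>(poly);

        List<PointF> out = new ArrayList<PointF>();
        out.add(poly.get(0));
        for (int i = 1; i + 1 < poly.size(); i++) {
            PointF prev = out.get(out.size() - 1);
            PointF next = poly.get(i + 1);
            if (distanceToSegment(poly.get(i), prev, next) > tolerancePx) {
                out.add(poly.get(i));
            }
        }
        out.add(poly.get(poly.size() - 1));
        return out;
    }
}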

Drawing scaled images in Android is expensive.

So I wrote a custom view which displays a list of bitmaps. The way it works is to draw a grid of images: load each image from disk, then draw it using Canvas.drawBitmap(Bitmap, Rect, Rect, Paint).

And during scrolling it was dog slow.

So what I did was pre-scale the images and cache them in a weak hash map:

    private WeakHashMap<String, Bitmap> fScaledBitmaps = new WeakHashMap<String, Bitmap>();

    private Bitmap getScaledBitmap(String url, int cellWidth, int cellHeight)
    {
        /*
         * Check our cache and return if it's present
         */
        Bitmap bmap = fScaledBitmaps.get(url);
        if (bmap != null) {
            if ((bmap.getWidth() == cellWidth) && (bmap.getHeight() == cellHeight)) return bmap;
            // size different; kill bitmap
            bmap.recycle();
            fScaledBitmaps.remove(url);
        }
        
        bmap = ... get our image from interior cache ...
        if (bmap == null) {
            // bitmap not present; return null
            return null;
        } else {
            // Bitmap loaded. Now grab the scaled version
            Bitmap scale = Bitmap.createScaledBitmap(bmap, cellWidth, cellHeight, true);
            bmap.recycle();

            fScaledBitmaps.put(url, scale);
            return scale;
        }
    }

And this sped up scrolling from sluggish (roughly one redraw per second) to quick and smooth.

Lesson: drawing a scaled image to a canvas is frighteningly expensive–like an order of magnitude slower than pre-scaling the bitmap and caching it behind a weak reference or in a weak hash map.

UI Performance

I just spent the weekend rewriting an Android application for performance.

When I first start learning a UI framework, be it for iOS, Android, Java Swing, GWT, MacOS, Windows, or X, the two questions I first want to answer are:

  • How do I build a custom view?

and

  • How do I build a custom view container and perform custom layout of the children within that container?

With those two bits of information you can rule the world. (Or at least the framework.)

The problem with most applications running like a dog, especially on mobile devices, is that most frameworks are inefficient at maintaining more than a couple of dozen views within a window. This isn’t a problem when you’re talking about putting up a dialog with a bunch of controls or have a relatively static display. But when you start talking about dragging and dropping objects, or when you are talking about scrolling items in a scroll view, things can go to hell very quickly.

To take a concrete example, I put together a view which contains a scrolling area, and inside the area each item in the list of items is represented by an image, a couple of buttons and a label of text. The natural way in Android to do this is to build a ListView, create a ListAdapter and in response to each request for a view, use a LayoutInflater (as needed) to construct a view hierarchy that contains a layout or three, representing the buttons as views, the image as a view, and the text as a view, all layered on other views. On the iPhone it’s the same story; a UITableViewCell can contain a hierarchy of other views which represent the contents of the cell.

For a list of 20 items, this translates into a hundred-something views, minimum.

And on both Android and iOS, dragging around all that crap takes forever.

My solution in each of these cases is to reduce the complexity. On the iPhone, make the UITableViewCell a single custom view which draws the buttons, widgets and components itself in its -drawRect: method. That way, on the screen you have 7 views, not over a hundred.

On Android the solution was even more radical: instead of a list view, I just used a ScrollView and created a custom view which draws the entire list. Use the Canvas’ getClipBounds() method in the canvas passed into onDraw() to determine what needs to be drawn, and draw it all in one view.

With this technique you eliminate manipulating a hundred views, and can easily make something go from impossibly jerky to smooth as silk, even on slower devices.
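
A rough sketch of what that looks like on Android follows; the class, the fixed row height and the plain text rows are all made up for illustration (a real view would also draw each row’s image and buttons, and hit-test them in onTouchEvent):

import java.util.ArrayList;
import java.util.List;

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Rect;
import android.view.View;

public class FlatListView extends View
{
    private static final int ROW_HEIGHT = 96;   // illustrative fixed row height

    private final Paint fPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private List<String> fItems = new ArrayList<String>();

    public FlatListView(Context context)
    {
        super(context);
    }

    public void setItems(List<String> items)
    {
        fItems = items;
        requestLayout();
        invalidate();
    }

    @Override
    protected void onMeasure(int widthSpec, int heightSpec)
    {
        // Tall enough for every row; the enclosing ScrollView does the scrolling
        setMeasuredDimension(MeasureSpec.getSize(widthSpec),
                fItems.size() * ROW_HEIGHT);
    }

    @Override
    protected void onDraw(Canvas canvas)
    {
        // Only draw the rows that intersect the visible clip
        Rect clip = canvas.getClipBounds();
        int first = Math.max(0, clip.top / ROW_HEIGHT);
        int last = Math.min(fItems.size() - 1, clip.bottom / ROW_HEIGHT);

        fPaint.setColor(Color.BLACK);
        fPaint.setTextSize(32);
        for (int i = first; i <= last; i++) {
            canvas.drawText(fItems.get(i), 16, i * ROW_HEIGHT + 56, fPaint);
        }
    }
}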

On Memory Leaks in Java and in Android.

Just because it’s a garbage collected language doesn’t mean you can’t leak memory or run out of it. Especially on Android where you get so little to begin with.

Now of course sometimes the answer is that you just need more memory. If your program is a Java command line program to load the entire road map of the United States to do some network algorithms, you probably need more than the default JVM configurations give you.

Sometimes it’s not even a full-on leak: a large chunk of memory simply isn’t being released in time, because some holder object hangs on to it longer than it should.

There are some tools that can help. With Android, you can use DDMS to get an idea of what’s going on, and you can even dump a snapshot of the heap by using the Dump HPROF File option. You can also programmatically capture uncaught exceptions on startup of your application or activity and dump an hprof file within the exception handler, like so:

public void onCreate(Bundle savedInstanceState)
{
...
    Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler()
    {
        @Override
        public void uncaughtException(Thread thread, Throwable ex)
        {
            try {
                File f = new File(Environment.getExternalStorageDirectory(),"error.hprof");
                String path = f.getAbsolutePath();
                Debug.dumpHprofData(path);
                Log.d("error", "HREF dumped to " + path);
            }
            catch (IOException e) {
                Log.d("error","Huh?",e);
            }
        }
    });
...
}

Of course once you have an .hprof file from Android you have to convert it to something that can be used by an application such as the Eclipse Memory Analyzer tool using the hprof-conv command line application included as part of the Android SDK; there is more information on how to do this and how to use the MAT tool here: Attacking memory problems on Android.

One place where I’ve been running into issues is with a clever little bit of code which loads images from a remote resource on a separate thread, and puts them into a custom view that replaces the ImageView class. This little bit of code creates a background thread which talks to a remote server to download images; once an image is loaded, a callback causes the custom view to redraw itself with the correct contents. A snippet of that code is below:

/*  Cache.java
 *
 *  Created on May 15, 2011 by William Edward Woody
 */

package com.chaosinmotion.android.utils;

import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.Map.Entry;

import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Handler;

public class Cache
{
    /**
     * Our callback interface
     */
    public interface Callback
    {
        void loaded(String url, Bitmap bitmap);
        void failure(String url, Throwable th);
    }
    
    /**
     * Item in the queue which is waiting to be processed by our network thread(s)
     */
    private static class QueueItem
    {
        String url;
        Callback callback;
        
        QueueItem(String u, Callback c)
        {
            url = u;
            callback = c;
        }
    }
    
    /// The handler to thread to the UI thread
    private Handler fHandler;
    /// The event queue
    private LinkedList<QueueItem> fQueue;
    /// The global cache object, which will be created by the class loader on load.
    /// Because this is normally called from our UI objects, this means our Handler
    /// will be created on our UI thread
    public static Cache gCache = new Cache();
    
    /**
     * Internal runnable for our background loader thread
     */
    private class NetworkThread implements Runnable
    {
        public void run()
        {
            // Start HTTP Client
            HttpClient httpClient = new DefaultHttpClient();
            
            for (;;) {
                /*
                 * Dequeue next request
                 */
                QueueItem q;

                synchronized(fQueue) {
                    while (fQueue.isEmpty()) {
                        try {
                            fQueue.wait();
                        }
                        catch (InterruptedException e) {
                        }
                    }

                    /*
                     * Get the next item
                     */
                    q = fQueue.removeLast();
                }
                
                /*
                 * Read the network
                 */
                
                try {
                    /*
                     * Set up the request and get the response
                     */
                    HttpGet get = new HttpGet(q.url);
                    HttpResponse response = httpClient.execute(get);
                    HttpEntity entity = response.getEntity();
                    
                    /*
                     * Get the bitmap from the URL response
                     */
                    InputStream is = entity.getContent();
                    final Bitmap bmap = BitmapFactory.decodeStream(is);
                    is.close();

                    entity.consumeContent();
                    
                    /*
                     * Send notification indicating we loaded the image on the
                     * main UI thread
                     */
                    final QueueItem qq = q;
                    fHandler.post(new Runnable() {
                        public void run()
                        {
                            qq.callback.loaded(qq.url,bmap);
                        }
                    });
                }
                catch (final Throwable ex) {
                    final QueueItem qq = q;
                    fHandler.post(new Runnable() {
                        public void run()
                        {
                            qq.callback.failure(qq.url,ex);
                        }
                    });
                }
            }
            
//            httpClient.getConnectionManager().shutdown();
        }
    }
    
    /**
     * Start up this object
     */
    private Cache()
    {
        fHandler = new Handler();
        fQueue = new LinkedList<QueueItem>();
        Thread th = new Thread(new NetworkThread());
        th.setDaemon(true);
        th.start();
    }
    
    /**
     * Get the singleton cache object
     */
    public static Cache get()
    {
        return gCache;
    }
    
    /**
     * Get the image from the remote service. This will call the callback once the
     * image has been loaded
     * @param url
     * @param callback
     */
    public void getImage(String url, Callback callback)
    {
        synchronized(fQueue) {
            fQueue.addFirst(new QueueItem(url,callback));
            fQueue.notify();
        }
    }
}

Now what this does is rather simple: we have a queue of items which are put into a linked list, and our background thread loads those items, one at a time. Once the item is loaded, we call our callback so the image can then be handled by whatever is using the service to load images from a network connection.

Of course we could make this far more sophisticated: we could save the loaded files to a cache, or collapse multiple requests for the same image so we don’t try to load it repeatedly. We could also make the thread management more sophisticated by creating a thread group of multiple threads all handling network loading.
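
As a sketch of the “collapse multiple requests” idea (the fWaiting field and the delivery change below are my additions, not part of the Cache class above, and would need the matching java.util imports): remember every callback waiting on a URL, but only put one QueueItem on the queue.

    /// Callbacks waiting on each URL that is currently queued or downloading
    private final HashMap<String, ArrayList<Callback>> fWaiting =
            new HashMap<String, ArrayList<Callback>>();

    public void getImage(String url, Callback callback)
    {
        synchronized(fQueue) {
            ArrayList<Callback> list = fWaiting.get(url);
            if (list != null) {
                list.add(callback);     // already queued or in flight; piggyback
                return;
            }
            list = new ArrayList<Callback>();
            list.add(callback);
            fWaiting.put(url, list);

            fQueue.addFirst(new QueueItem(url, callback));
            fQueue.notify();
        }
    }

The network thread would then hand the decoded bitmap to everyone waiting on that URL, rather than just to q.callback:

                    final QueueItem qq = q;
                    ArrayList<Callback> waiting;
                    synchronized(fQueue) {
                        waiting = fWaiting.remove(qq.url);
                    }
                    if (waiting != null) {
                        for (final Callback c: waiting) {
                            fHandler.post(new Runnable() {
                                public void run()
                                {
                                    c.loaded(qq.url, bmap);
                                }
                            });
                        }
                    }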

We can then use this with a custom view class to draw the image, drawing a temporary image showing the real image hasn’t been loaded yet:

/*  RemoteImageView.java
 *
 *  Created on May 15, 2011 by William Edward Woody
 */

package com.chaosinmotion.android.utils;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

public class RemoteImageView extends View
{
    private Paint fPaint;
    private Bitmap fBitmap;
    private String fURL;

    public RemoteImageView(Context context)
    {
        super(context);
        // TODO Auto-generated constructor stub
    }
    
    public void setImageURL(String url)
    {
        fBitmap = null;
        fURL = url;
        
        Cache.get().getImage(fURL, new Cache.Callback() {
            public void loaded(String url, Bitmap bitmap)
            {
                fBitmap = bitmap;
                invalidate();
            }

            public void failure(String url, Throwable th)
            {
                // Ignoring for now. Could display broken link image
            }
        });
    }

    @Override
    protected void onDraw(Canvas canvas)
    {
        if (fPaint == null) fPaint = new Paint();
        
        canvas.drawColor(Color.BLACK);
        if (fBitmap == null) return;        // could display "not loaded" image
        canvas.drawBitmap(fBitmap, 0, 0, fPaint);
    }
}

This is a very simple example of using the Cache object to load images from a background thread. We can make this far more sophisticated; we can (for example) display a “loading” image and an “image link broken” image. We can also alter the reported size during onMeasure to return the size of the bitmap, or we can center the displayed bitmap or scale it to fit. But at its core, we have a simple mechanism for displaying the loaded image in our system.
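
For instance, a sketch of that onMeasure override (the 64-pixel placeholder size is arbitrary, and you would also want to call requestLayout() when the bitmap arrives):

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec)
    {
        // Report the bitmap's size once we have it; 64x64 is just a placeholder
        int w = (fBitmap != null) ? fBitmap.getWidth() : 64;
        int h = (fBitmap != null) ? fBitmap.getHeight() : 64;
        setMeasuredDimension(resolveSize(w, widthMeasureSpec),
                resolveSize(h, heightMeasureSpec));
    }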

Can you spot the leak?

I didn’t, at first.

Here’s a hint: Avoiding Memory Leaks

Here’s another: the RemoteImageView, being a subclass of View, holds a reference to its parent, and on up the line until we get to the top-level activity, which holds a reference to–well–just about everything.

No?

Okay, here goes.

So when we call:

        Cache.get().getImage(fURL, new Cache.Callback() { ... });

The anonymous inner class we create when we create our callback holds a reference to the RemoteImageView. And that inner class doesn’t go away until after the image is loaded. So if we have a few dozen of these and a very slow connection, the user switches from one activity to another–and we can’t let the activity go, because we’re still waiting for the images to load and be copied into the image view.

So while it’s not exactly a memory leak–eventually the memory will be released–the class and all its associated resources can’t be let go of until our connection completes or times out. And that won’t be soon enough for our purposes. And so we crash.

So how do we fix this?

Well, we need to add two things. First, we need to somehow disassociate our view from the anonymous inner class so that, when our view no longer exists, the callback class no longer holds a reference to the view. That way, the activity can be reclaimed by the garbage collector even though our callback continues to exist. Second, we can remove the unprocessed callbacks so they don’t make a network call to load an image that is no longer needed.

To do the first, we change our anonymous inner class to a static class (that way it doesn’t hold an implicit reference to ‘this’), and explicitly pass a reference to our outer class to it, one that can then be cleared:

/*  RemoteImageView.java
 *
 *  Created on May 15, 2011 by William Edward Woody
 */

package com.chaosinmotion.android.utils;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

public class RemoteImageView extends View
{
    private Paint fPaint;
    private Bitmap fBitmap;
    private String fURL;
    private OurCallback fCallback;
    
    public RemoteImageView(Context context)
    {
        super(context);
        // TODO Auto-generated constructor stub
    }
    

    private static class OurCallback implements Cache.Callback
    {
        private RemoteImageView pThis;
        
        OurCallback(RemoteImageView r)
        {
            pThis = r;
        }
        
        public void loaded(String url, Bitmap bitmap)
        {
            if (pThis != null) {
                pThis.fBitmap = bitmap;
                pThis.invalidate();
                pThis.fCallback = null; // our callback ended; remove reference
            }
        }

        public void failure(String url, Throwable th)
        {
            // Ignoring for now. Could display broken link image
            if (pThis != null) {
                pThis.fCallback = null; // our callback ended; remove reference
            }
        }
    }

    public void setImageURL(String url)
    {
        fBitmap = null;
        fURL = url;
        
        fCallback = new OurCallback(this);
        Cache.get().getImage(fURL, fCallback);
    }

    @Override
    protected void onDraw(Canvas canvas)
    {
        if (fPaint == null) fPaint = new Paint();
        
        canvas.drawColor(Color.BLACK);
        if (fBitmap == null) return;        // could display "not loaded" image
        canvas.drawBitmap(fBitmap, 0, 0, fPaint);
    }

    @Override
    protected void onDetachedFromWindow()
    {
        // Detach us from our callback
        if (fCallback != null) fCallback.pThis = null;
        
        super.onDetachedFromWindow();
    }
}

The two biggest changes are the new static OurCallback class, which holds a reference to the view being acted on, and the reference to the callback itself, which is zeroed out when the callback completes, either on failure or on success. Then, in the onDetachedFromWindow callback, if we have a request outstanding (because fCallback is not null), we detach the view from the callback. Note that because all the calls in the callback are made on the UI thread, we don’t need to synchronize access.

This will now detach the view from the callback when the view goes away, so the activity that contains the view can be reclaimed by the memory manager.

Our second change is to remove the request from the queue, so we don’t use unnecessary resources. While not strictly necessary for memory management purposes, it helps our network performance. The change here is to explicitly remove our callback from the queue.

First, we change our onDetachedFromWindow() call to remove us (by callback) from the cache:

    @Override
    protected void onDetachedFromWindow()
    {
        // Detach us from our callback
        if (fCallback != null) {
            fCallback.pThis = null;
            Cache.get().removeCallback(fCallback);
        }
        
        super.onDetachedFromWindow();
    }

Second, we add a method to the cache to look for all instances of requests with the same callback, and delete the request from the queue. If it isn’t in the queue, it’s probably because the request is now being acted upon by our networking thread. (If we were particularly clever we could signal our networking thread to stop the network request, but I’m not going to do that here.)

So our method added to the Cache is:

    /**
     * Remove from the queue all requests with the specified callback. Done when the
     * result is no longer needed because the view is going away.
     * @param callback
     */
    public void removeCallback(Callback callback)
    {
        synchronized(fQueue) {
            Iterator<QueueItem> iter = fQueue.iterator();
            while (iter.hasNext()) {
                QueueItem i = iter.next();
                if (i.callback == callback) {
                    iter.remove();
                }
            }
        }
    }

This iterates through the queue, removing entries that match the callback.

I’ve noted this on my list of things not to forget because this (and variations of it) keeps coming up: holding references to Android View objects in a thread that can survive the destruction of an activity.

The basic model is when the view goes away (which we can detect with a callback to onDetachedFromWindow), to disassociate the callback from the view and (preferably) to kill the background thread so the view object (and the activity associated with that view) can be garbage collected in a timely fashion.

The things in Android that keep tripping me up.

android:layout_weight

When building a layout, the biggest thing that keeps going through my mind is “how do I get this object to lay itself out so it consumes only what is left in a linear layout flow?”

And the answer to that is android:layout_weight.

If you specify a layout and you want one of the controls to land at the bottom of the screen with a fixed height, then the other control in the LinearLayout should be set with height “match_parent”, and weight set to 1. This causes it to consume the rest of the space. (Bonus: you can split the view by having multiple controls with different weights, and you can even achieve an effect such as one control taking a third and the other two thirds, by using appropriate weights.)
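
A quick sketch of that layout built in code (this would live inside an activity’s onCreate; the 96-pixel bottom bar height is just an illustrative value):

    LinearLayout column = new LinearLayout(this);
    column.setOrientation(LinearLayout.VERTICAL);

    View content = new View(this);
    column.addView(content, new LinearLayout.LayoutParams(
            LinearLayout.LayoutParams.MATCH_PARENT,
            LinearLayout.LayoutParams.MATCH_PARENT,
            1f));                               // android:layout_weight="1"

    View bottomBar = new View(this);
    column.addView(bottomBar, new LinearLayout.LayoutParams(
            LinearLayout.LayoutParams.MATCH_PARENT,
            96));                               // fixed height, no weight

    setContentView(column);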

android:gravity

It’s the other one I keep forgetting about. It allows you to center something, or flush it to the right, or whatever. Apply android:gravity to a view or container to control how its contents are positioned inside it (android:layout_gravity is the one that positions the view within its container parent).

I also have some code lying around here which helps control multiple activities where a single activity would normally live, which I cobbled together by reading this post; he goes into how to extend an ActivityGroup to achieve multiple activities within the same tab group item. I think this principle can be extended to support other interesting effects, such as having a list view where each row in the list is its own activity. But that’s something I need to plug away at to see if I can make it work.

And then Apple changes the Rules. Again.

So I uploaded J2OC, and had lost interest in it. After all, who needs a second “let’s recompile Java into Objective C” in order to build iPhone and Android applications, if Apple isn’t going to allow it?

Then Apple does this: Statement by Apple on App Store Review Guidelines

In particular, we are relaxing all restrictions on the development tools used to create iOS apps, as long as the resulting apps do not download any code. This should give developers the flexibility they want, while preserving the security we need.

What I would ideally want is a Java VM kernel that can be linked into an iPhone application, one capable of running a jar file. Because ideally I’d like to write model code in Java–so I can port that model code to Android. Yet I don’t want UI bindings into the Apple API–I’d rather just build the UI twice, while the (more complicated) model code remains the same.

Thank you Apple. Maybe I’ll document J2OC better and provide some sample programs. It really is a cool little bit of technology. 🙂

Something Funny Happened To Me On The Way To Release.

So I started playing with parsing Java class files, creating a cross compiler capable of converting Java class files into Objective C files. I even had a sufficient amount of Apache Harmony running so I could use a good part of the java.lang and java.util classes; roughly in parity with the GWT cross compiler that can compile Java class files into Javascript.

Then Apple dropped the “no cross compiling” bombshell.

Now, keep in mind that I’m just me, tinkering on my spare time during weekends. I don’t have the desire or the time to go up against Apple. I’d rather allow the XMLVM project (which has a well established ecosystem, or so it seems) to decide to go (or not go) against Apple’s wishes.

Then time went by, and I sort of lost interest in this thing.

So I’ve taken the liberty of posting the source code here: the Java to Objective C Compiler sources, and the J2OC RTL, which contains a subset of the Apache Harmony project implementing the java.lang and java.util classes.

It’s been an interesting project, and hopefully in the next few weeks I’ll document how this all works–including the weirdnesses and pitfalls I came across in the Java VM while getting Apache Harmony to work. (Nothing like working through a very large collection of class files to find all the fringe cases.) The output code was intended to be human readable–but it really isn’t for some expressions.

But I’ll describe that in the next few weeks.

And at some point I’ll post an example iPhone application which includes Java code.

Note that my approach was different from the XMLVM project’s. Instead of providing Java bindings for the iOS libraries, my intent was only to allow the compilation of a computational kernel, with the user providing the UI elements separately for Android, the iPhone, the iPad, and whatever other target the code was to be compiled for.

So you won’t find a turn-key solution for recompiling Android code and have it run on the iPhone. You should really check out the XMLVM project instead.

All this code, by the way, is being published under a BSD style license: go ahead and use the code, but leave me out of it and don’t blame me if it goes haywire.


While I don’t intend to get into the functioning of the compiler, I will give a taste of how the code works. The bulk of the .class file parser, which reads and loads the .class file data into memory, is contained in the class ClassFile in com.chaosinmotion.j2oc.vm. This class takes in its constructor an input stream opened to the first byte of a .class file, and loads the entire class file into memory.

Once read, the entire class file can be accessed using the getters associated with that class. The bulk of the code contained inside the .vm (and subpackages within .vm) are used to represent the contents of the class file. The .vm.data classes contain the various data types used to store the meta data within a class file (such as the method names, the attributes fields, and the like), and the .vm.code classes contain a code parser to convert the code within the .class files into an array of processed instructions.

Once the instructions are parsed (by the vm.code.Code class), the code in a method is represented as an array of code segments: runs of instructions that begin at a jump target and terminate either at the end of the method or at a jump instruction. In other words, a CodeSeg (Code.CodeSeg class) is a section of instructions that is always entered at its first instruction and executes sequentially to its last. Additional information is noted as well, such as the list of variables in use when the segment is entered–that is, the state of the Java operand stack at the point the segment is entered.

Ultimately the code parser and class file reader represent the code in a .class file in an intermediate, in-memory form that can then be used to write Objective C with the WriteOCMethod class (com.chaosinmotion.j2oc.oc). A class, CodeOptimize (in the .oc package), provides utilities that determine whether code preambles must be written for memory management or for exception handling: the memory management preamble does not need to be written if the method never invokes another method. (This is the case for simple functions which return a field or do simple math.)

The theory is that in practice, it should be possible to replace the code writer method with a writer method capable of writing a different language, such as C++ or C.


In the future, when I have more time, I’ll write more about the J2OC project. But for now, if there are any segments or parts you want to use or play with, be my guest.