Things to remember: resize icon script for iOS

On iOS there seems to be about a billion different icon sizes, depending on whether you’re on iOS 7 or iOS 6, on iPhone or iPad, or generating search icons or whatever.

This is a script I created to automatically resize a master image into all the different icon sizes an app uses.

#!/bin/sh
#  Resize the master icon into every size iOS wants.
if [ $# -gt 0 ]; then
    sips "$1" -Z 29 --out icon-29.png
    sips "$1" -Z 58 --out icon-29@2x.png
    sips "$1" -Z 40 --out icon-40.png
    sips "$1" -Z 80 --out icon-40@2x.png
    sips "$1" -Z 50 --out icon-50.png
    sips "$1" -Z 100 --out icon-50@2x.png
    sips "$1" -Z 57 --out icon-57.png
    sips "$1" -Z 114 --out icon-57@2x.png
    sips "$1" -Z 60 --out icon-60.png
    sips "$1" -Z 120 --out icon-60@2x.png
    sips "$1" -Z 72 --out icon-72.png
    sips "$1" -Z 144 --out icon-72@2x.png
    sips "$1" -Z 76 --out icon-76.png
    sips "$1" -Z 152 --out icon-76@2x.png
    echo "Done."
else
    echo "You must provide the name of an image file to process."
fi

To use it, execute the script in the Terminal with the name of the .png file to resize; it will generate all the different icon sizes in the current directory.
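For example, assuming you’ve saved the script as resizeicons.sh (the name is arbitrary) next to a 1024×1024 master icon:

chmod +x resizeicons.sh
./resizeicons.sh Icon-1024.png

This leaves icon-29.png through icon-76@2x.png sitting in the current directory.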

How to suck down iOS memory without even trying.

If you build custom UIView or UIControl objects, it’s important to remember that all of your views are backed by a CALayer object, which is essentially a bitmap image of the rendered view object. (This is how iOS is able to very quickly composite and animate views and images, and it is the secret behind iOS’s UI responsiveness.)

However, this has some consequences on memory usage.

For example, if you create a view which is 2048×2048 pixels in size (say you are scrolling through a large image), and you don’t switch the backing layer to a CATiledLayer object, then for that view iOS will allocate 2048 × 2048 pixels × 4 bytes per pixel = 16 megabytes for the CALayer backing store. Given that the memory budget for the backing store on a small device (such as the previous generation iPod Touch) is only about 8 megabytes, your application will die an ignoble death pretty quickly.
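(Switching the backing layer, by the way, is just a matter of overriding +layerClass in your UIView subclass. A minimal sketch, with a made-up class name:)

#import <QuartzCore/QuartzCore.h>

@interface GSScrollContentView : UIView
@end

@implementation GSScrollContentView

/*	Back this view with a CATiledLayer instead of a plain CALayer, so iOS
 *	only allocates backing store for the tiles it actually needs on screen
 */

+ (Class)layerClass
{
	return [CATiledLayer class];
}

@end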

Now not all views are the same. If you create a view that is 2048×2048 pixels in size and set the background color to a flat color (like gray), but you don’t override the drawRect: method to draw your view’s contents, then the CALayer backing store is very small. (Internally I believe iOS is allocating odd-sized texture maps in the graphics processor–and if the view is a flat color, the graphics processor allocates a single-pixel texture and stretches it to fit the rectangle. Also, if you’re setting the background to one of the predefined textures, I believe it’s not allocating any memory at all, but instead setting up a texture map from a predefined texture and setting the (u,v) coordinates to make the texture fit.)

(An empty view, by the way, can be useful for laying out the contents inside that view; just override layoutSubviews, grab the bounding rectangle, and do some rectangle math to lay out the contents.)
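Something along these lines (a sketch; the two subview properties are inventions for illustration):

/* Lay out two hypothetical subviews side by side inside this container */
- (void)layoutSubviews
{
	[super layoutSubviews];

	CGRect left, right;
	CGRectDivide(self.bounds, &left, &right, self.bounds.size.width/2, CGRectMinXEdge);
	self.leftView.frame = left;
	self.rightView.frame = right;
}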

So if you need to create a large container view with a border around it, you may be better off allocating two UIViews: one filled with the border color, and a second filled with the background color, inset a pixel inside the first. Otherwise, the moment you override drawRect: to draw the frame, on an iPad with a retina display your container view–if it fills the screen–will require 2048 x 1536 x 4 bytes per pixel = 12 megabytes, just to represent a bordered container.
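A sketch of the two-view trick (the frame and colors are placeholders):

	/* The outer view is filled with the border color... */
	UIView *border = [[UIView alloc] initWithFrame:frame];
	border.backgroundColor = [UIColor blackColor];

	/* ...and the inner view, inset a pixel, is filled with the background
	   color. Neither overrides drawRect:, so both backing stores stay tiny */
	UIView *content = [[UIView alloc] initWithFrame:CGRectInset(border.bounds, 1, 1)];
	content.backgroundColor = [UIColor grayColor];
	[border addSubview:content];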

(Footnote: I have a feeling CAGradientLayer is doing something behind the scenes to cut down on memory usage; otherwise why have this class at all? And if that’s the case, in the event where you need to populate a bunch of controls over a background with a subtle gradient effect, you may want to use this as your view’s backing store rather than overriding the drawRect method and filling the entire view with a gradient.)

Targeted broadcasting of multithreaded results.

Okay, so here’s a basic problem. You’re building an iOS application (or an Android application) which needs to download an image from a remote site for display in a view.

So you write code similar to the following:

- (void)setImageUrlTest:(NSString *)url
{
	/*
	 *	Request the download on a background thread. Once we've downloaded the
	 *	results, we kick off a new block in the main thread to update the
	 *	image for this image object, and then animate a fade-in
	 */
	
	dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
		NSURLResponse *resp;
		NSURLRequest *req = [NSURLRequest requestWithURL:[NSURL URLWithString:url]];
		NSData *data = [NSURLConnection sendSynchronousRequest:req returningResponse:&resp error:nil];
		if (data) {
			/*
			 *	We have the data from the remote server. Kick off into the main
			 *	thread the UI update
			 */
			
			dispatch_async(dispatch_get_main_queue(), ^{
				/*
				 *	Be a little tricky: fade in from transparent with a half
				 *	second delay
				 */
				
				UIImage *image = [UIImage imageWithData:data];
				self.alpha = 0;
				self.image = image;
				[UIView animateWithDuration:0.5 animations:^{
					self.alpha = 1;
				}];
			});
		} else {
			NSLog(@"An error occurred downloading image");
		}
	});
}

If we were doing this on Android, we could use a Java thread. (Ideally we’d want to create a thread queue, but for illustration purposes we will just create a new thread.)

    void setImageUrlTest(final String url)
    {
        final Handler h = new Handler();
        
        /*
         *  Request the download on a background thread. Once we've downloaded
         *  the results, we kick off a runnable in the main thread using the
         *  handler created above, animating a fade-in
         */
        new Thread() {
            @Override
            public void run()
            {
                try {
                    URL u = new URL(url);
                    URLConnection conn = u.openConnection();
                    InputStream is = conn.getInputStream();
                    final Bitmap bmap = BitmapFactory.decodeStream(is);
                    is.close();
                    
                    /*
                     * If we get here we have a bitmap. Post a runnable which
                     * will set the image in the main event loop
                     */
                    h.post(new Runnable() {
                        @Override
                        public void run()
                        {
                            /* Touch the view only on the main thread */
                            setVisibility(View.INVISIBLE);
                            setImageBitmap(bmap);
                            AlphaAnimation a = new AlphaAnimation(0.0f, 1.0f);
                            a.setDuration(500);
                            a.setAnimationListener(new AnimationListener() {
                                @Override
                                public void onAnimationEnd(Animation animation)
                                {
                                }

                                @Override
                                public void onAnimationRepeat(Animation animation)
                                {
                                }

                                @Override
                                public void onAnimationStart(Animation animation)
                                {
                                    setVisibility(View.VISIBLE);
                                }
                            });
                            startAnimation(a);
                        }
                    });
                }
                catch (Throwable th) {
                    Log.d("DownloadImageView","Failed to download image " + url, th);
                }
            }
        }.start();
    }

In both cases we kick off a background task which downloads the image; then, using a reference to the original image view, we load the image into the image view (on the main thread, where all UI work needs to take place); and finally we trigger an animation which fades in the view.

Question: What happens if the image view goes away?

On Android and on iOS, it’s fairly routine to have a slow internet connection, and the user may dismiss your view controller or activity before the view finishes downloading.

But there is a problem with that.

On iOS, the block object that you create has an implicit ‘retain’ on the UIImageView object. This means that, until the network operation completes, all the resources associated with the UIImageView cannot be released.

Things get worse on Android, which (ironically enough) has a much smaller memory budget for typical applications: not only can’t the ImageView object go away, but the ImageView object holds a reference to the activity that created it. Meaning not only is the ImageView object retained by the anonymous thread declaration, but so is the activity the image view is contained in–along with the entire rest of the view hierarchy and all the other resources associated with the containing activity.

Given that a network timeout can be up to 30 seconds, this means if the user is browsing in and out of different screens, you can very quickly run out of memory as memory is filled up with defunct views whose sole purpose is to exist as a target for a network activity that is no longer really necessary.

What to do?

Okay, the following is not a viable solution, despite my seeing it in multiple places:

	__weak UIImageView *weakSelf = self;
	dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
		NSURLResponse *resp;
		NSURLRequest *req = [NSURLRequest requestWithURL:[NSURL URLWithString:url]];
		NSData *data = [NSURLConnection sendSynchronousRequest:req returningResponse:&resp error:nil];
		if (data) {
			/*
			 *	We have the data from the remote server. Kick off into the main
			 *	thread the UI update
			 */
			
			dispatch_async(dispatch_get_main_queue(), ^{
				/*
				 *	Be a little tricky: fade in from transparent with a half
				 *	second delay
				 */
				
				UIImage *image = [UIImage imageWithData:data];
				weakSelf.alpha = 0;
				weakSelf.image = image;
				[UIView animateWithDuration:0.5 animations:^{
					weakSelf.alpha = 1;
				}];
			});
		} else {
			NSLog(@"An error occurred downloading image");
		}
	});

This doesn’t work as intended. Under ARC, a __weak reference is zeroed when the image view goes away, so the weakSelf messages silently become no-ops; and without ARC (or with __unsafe_unretained, which older deployment targets force on you), the reference dangles and you crash. Either way, the download itself still runs to completion, and nothing coalesces duplicate requests for the same image.

Broadcast to the rescue

One possible solution on iOS is to use NSNotificationCenter to send notifications when a network operation completes successfully. The advantage of using NSNotificationCenter (or an equivalent broadcast/receive mechanism on Android) is that the receiver is effectively detached from the broadcaster: a receiver can detach itself on release, and if the broadcast message has no receivers, the message is dropped on the floor.

And in our case this is the right answer: once the image view has gone away we don’t care what the resulting image was supposed to be.
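In sketch form (the notification name and the imageLoaded: selector here are made up for illustration):

	/* When the image view is set up, start listening for completed downloads */
	[[NSNotificationCenter defaultCenter] addObserver:self
			selector:@selector(imageLoaded:)
			name:@"GSImageLoaded"
			object:nil];

	/* And when the image view goes away, detach; any in-flight response is
	   then dropped on the floor */
	[[NSNotificationCenter defaultCenter] removeObserver:self];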

There is a problem with this, though: the code which handles the incoming message is also separated logically from the code that makes the request. And unless you insert code into the broadcast receiver which explicitly detaches the image view from the NSNotificationCenter once a response is received, you can have dozens (or even hundreds) of broadcast receivers listening to each incoming image.

Another way: CMBroadcastThreadPool

By combining the semantics of a block notification system with a broadcast/receiver pair we can circumvent these problems.

Internally we maintain a map between a request and the block receiving the response. Each block is also associated with a ‘handler’ object: the object which effectively ‘owns’ the response. So, in the case of our image view, the ‘handler’ object is the image view itself.

When our image view goes away, we can notify our thread pool object using the removeHandler: method; this walks through the table of requests and response blocks, deleting those response blocks associated with the handler being removed:

- (void)dealloc
{
	[[CMNetworkRequest shared] removeHandler:self];
}

Note: CMNetworkRequest inherits from CMBroadcastThreadPool to implement the network semantics. More information below.

We can then submit a request using the request:handler:response: method; this takes in a request (in our case, the NSURLRequest for an image), the handler (that is, the object which will be receiving the response) and the block to invoke when the response is received.

- (void)setImageUrl:(NSString *)url
{
	CMNetworkRequest *req = [CMNetworkRequest shared];
	[req removeHandler:self];	/* Remove old handler */
	NSURLRequest *rurl = [NSURLRequest requestWithURL:[NSURL URLWithString:url]];
	[req request:rurl handler:self response:^(NSData *data) {
		UIImage *image = [UIImage imageWithData:data];
		self.alpha = 0;
		self.image = image;
		[UIView animateWithDuration:0.5 animations:^{
			self.alpha = 1;
		}];
	}];
}

The call to request:handler:response: stores the handler and response in association with the request, then executes the request in a background thread. Once the response is received, the CMBroadcastThreadPool object looks up the handler and block to invoke, and if present, invokes the block.

However, if the UIImageView has gone away, there are no blocks to invoke–and the network response is dropped on the floor.


Internally our CMBroadcastThreadPool class on iOS invokes an internal method responseForRequest: to process the request. This method is invoked from a block dispatched to a background thread via Grand Central Dispatch.

The class itself is presented in full here:

CMBroadcastThreadPool.h

//
//  CMBroadcastThreadPool.h
//  TestThreadPool
//
//  Created by William Woody on 7/20/13.
//  Copyright (c) 2013 William Woody. All rights reserved.
//

#import <Foundation/Foundation.h>

@interface CMBroadcastThreadPool : NSObject
{
	@private
		NSMutableSet *inProcess;
		NSMutableDictionary *receivers;
}

- (void)request:(id<NSObject, NSCopying>)request handler:(id<NSObject>)h response:(void (^)(id<NSObject>))resp;
- (void)removeHandler:(id<NSObject>)h;

/* Override this method for processing requests */
- (id<NSObject>)responseForRequest:(id<NSObject, NSCopying>)request;

@end

CMBroadcastThreadPool.m

//
//  CMBroadcastThreadPool.m
//  TestThreadPool
//
//  Created by William Woody on 7/20/13.
//  Copyright (c) 2013 William Woody. All rights reserved.
//

#import "CMBroadcastThreadPool.h"

/************************************************************************/
/*																		*/
/*	Internal Storage													*/
/*																		*/
/************************************************************************/

@interface CMBroadcastStore : NSObject
@property (retain) id<NSObject> handler;
@property (copy) void (^response)(id<NSObject>);
@end

@implementation CMBroadcastStore

#if !__has_feature(objc_arc)
- (void)dealloc
{
	[_handler release];
	[_response release];
	[super dealloc];
}
#endif

@end

/************************************************************************/
/*																		*/
/*	Thread pool															*/
/*																		*/
/************************************************************************/

@implementation CMBroadcastThreadPool

- (id)init
{
	if (nil != (self = [super init])) {
		inProcess = [[NSMutableSet alloc] initWithCapacity:10];
		receivers = [[NSMutableDictionary alloc] initWithCapacity:10];
	}
	return self;
}

#if !__has_feature(objc_arc)
- (void)dealloc
{
	[inProcess release];
	[receivers release];
	[super dealloc];
}
#endif

/*	request:handler:response:
 *
 *		Submit a request that will be sent to the specified handler, via the
 *	response block
 */

- (void)request:(id<NSObject, NSCopying>)request handler:(id<NSObject>)h response:(void (^)(id<NSObject>))resp
{
	@synchronized(self) {
		/*
		 *	Add this to the map of handlers for this request
		 */
		
		NSMutableArray *recarray = [receivers objectForKey:request];
		if (!recarray) {
			recarray = [[NSMutableArray alloc] initWithCapacity:10];
			[receivers setObject:recarray forKey:request];
#if !__has_feature(objc_arc)
			[recarray release];
#endif
		}
		
		CMBroadcastStore *store = [[CMBroadcastStore alloc] init];
		store.handler = h;
		store.response = resp;
		[recarray addObject:store];
#if !__has_feature(objc_arc)
		[store release];
#endif

		/*
		 *	Now enqueue a request in GCD. This only enqueues the item if the
		 *	item is not presently in the queue.
		 */
		 
		if (![inProcess containsObject:request]) {
			[inProcess addObject:request];
			
			dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
				
				/* Process for response */
				id<NSObject> response = [self responseForRequest:request];
				
				/* Get the list of receivers; if any exist, trigger response */
				@synchronized(self) {
					NSMutableArray *a = [receivers objectForKey:request];
					if (a) {
						dispatch_async(dispatch_get_main_queue(), ^{
							for (CMBroadcastStore *s in a) {
								s.response(response);
							}
						});
						[receivers removeObjectForKey:request];
					} else {
						NSLog(@"No receivers");
					}
					
					/* Remove from list of items in queue */
					[inProcess removeObject:request];
				}
			});
		}
		
	}
}

/*	removeHandler:
 *
 *		Remove the handler, removing all the receivers for this incoming
 *	message
 */

- (void)removeHandler:(id<NSObject>)h
{
	@synchronized(self) {
		NSArray *keys = [receivers allKeys];
		for (id request in keys) {
			/*
			 *	Remove selected handlers associated with this receiver
			 */
			
			NSMutableArray *a = [receivers objectForKey:request];
			int i,len = [a count];
			for (i = len-1; i >= 0; --i) {
				CMBroadcastStore *store = [a objectAtIndex:i];
				if (store.handler == h) {
					[a removeObjectAtIndex:i];
				}
			}
			if ([a count] <= 0) {
				/*
				 *	If empty, remove the array of responses.
				 */
				
				[receivers removeObjectForKey:request];
			}
		}
	}
}

/*	responseForRequest:
 *
 *		This returns a response for the specified request. Subclasses should
 *	override this to do the actual processing
 */

- (id<NSObject>)responseForRequest:(id<NSObject, NSCopying>)request
{
	return nil;
}

@end

In Java

For Java I’ve done the same sort of thing, except I’ve also included a thread pool mechanism which manages a finite number of background threads. I’ve also added code which causes a request to be dropped entirely if it is not being processed. For example, if you’re attempting to download an image, but the image view goes away, and the request to download the image hasn’t started being processed by a background thread, then we drop the request entirely.

To use this in Android you need to override runOnMainThread to put the runnable into the main thread; this can be done using Android’s ‘Handler’ class, as sketched below. You also need to provide the processRequest method.
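Something like this, in an Android subclass (a sketch assuming android.os.Handler and android.os.Looper):

    /* Deliver responses through a handler bound to the main looper */
    private final Handler fMainHandler = new Handler(Looper.getMainLooper());

    @Override
    protected void runOnMainThread(Runnable r)
    {
        fMainHandler.post(r);
    }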

BroadcastThreadPool.java

/*  BroadcastThreadPool.java
 *
 *  Created on Jul 20, 2013 by William Edward Woody
 */

package com.glenviewsoftware.bthreadpool;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Map;

/**
 * A thread pool which sends the response to a request to a broadcast list,
 * which can be disposed of at any time.
 */
public abstract class BroadcastThreadPool<Response,Request,Handler>
{
    /// The number of threads that can simultaneously run
    private int fMaxThreads;
    
    /// The number of threads currently operating
    private int fCurThreads;
    
    /// The number of waiting threads
    private int fWaitingThreads;
    
    /// The request queue; a queue of requests to be processed
    private LinkedList<Request> fRequestQueue;
    
    /// The list of requests that are being processed
    private HashSet<Request> fInProcess;
    
    /// The request response mapping; this maps requests to their responses.
    private HashMap<Request,HashMap<Handler,ArrayList<Receiver<Response>>>> fReceivers;
    
    
    private class BackgroundThread implements Runnable
    {

        @Override
        public void run()
        {
            synchronized(BroadcastThreadPool.this) {
                ++fCurThreads;
                ++fWaitingThreads;
            }
            
            for (;;) {
                /*
                 * Get the current request
                 */
                
                Request req;
                
                synchronized(BroadcastThreadPool.this) {
                    while (fRequestQueue.isEmpty()) {
                        try {
                            BroadcastThreadPool.this.wait();
                        }
                        catch (InterruptedException e) {
                        }
                    }
                    
                    /* When we get a request, note it's in process and mark thread in use */
                    req = fRequestQueue.removeFirst();
                    fInProcess.add(req);
                    --fWaitingThreads;
                }
                
                /*
                 * Process the response. If an exception happens, set the response
                 * to null. (Is this right?)
                 */
                
                Response resp;
                try {
                    resp = processRequest(req);
                }
                catch (Throwable th) {
                    /* Unexpected failure. */
                    resp = null;
                }
                
                /*
                 * Now send the response
                 */
                
                synchronized(BroadcastThreadPool.this) {
                    /* Remove request from list of in-process requests */
                    fInProcess.remove(req);
                    
                    /* Get map of receivers for this request and remove from list */
                    final HashMap<Handler,ArrayList<Receiver<Response>>> rmap = fReceivers.get(req);
                    fReceivers.remove(req);
                    
                    /* Send response to all receivers. (All of the handlers
                       may have been removed while we were processing.) */
                    final Response respFinal = resp;
                    if (rmap != null) {
                        runOnMainThread(new Runnable() {
                            @Override
                            public void run()
                            {
                                for (ArrayList<Receiver<Response>> l: rmap.values()) {
                                    for (Receiver<Response> rec: l) {
                                        rec.processResponse(respFinal);
                                    }
                                }
                            }
                        });
                    }
                    
                    ++fWaitingThreads;
                }
            }
        }
        
    }
    
    /**
     *	Broadcast receiver. This is the interface to the abstract class which
     *  will receive the response once the message has been processed.
     */
    public interface Receiver<Response>
    {
        void processResponse(Response r);
    }
    
    /**
     * Create a new thread pool which uses a broadcast/receiver pair to handle
     * requests.
     * @param maxThreads
     */
    public BroadcastThreadPool(int maxThreads)
    {
        fMaxThreads = maxThreads;
        
        fReceivers = new HashMap<Request,HashMap<Handler,ArrayList<Receiver<Response>>>>();
        fRequestQueue = new LinkedList<Request>();
        fInProcess = new HashSet<Request>();
    }
    
    /**
     * This enqueues the request and adds the response to the list of listeners
     * @param r
     * @param h
     * @param resp
     */
    public synchronized void request(Request r, Handler h, Receiver<Response> resp)
    {
        boolean enqueue = false;
        
        /*
         *  Step 1: Add this to the hash map of handlers. If we need to
         *  create a new entry, we probably have to enqueue the request
         */
        
        HashMap<Handler,ArrayList<Receiver<Response>>> rm = fReceivers.get(r);
        if (rm == null) {
            /* Enqueue if we're not currently processing the request */
            enqueue = !fInProcess.contains(r);
            
            /* Add this receiver */
            rm = new HashMap<Handler,ArrayList<Receiver<Response>>>();
            fReceivers.put(r,rm);
        }
        
        /*
         * Step 2: Add this receiver to the list of receivers associated with
         * the specified handler
         */
        ArrayList<Receiver<Response>> list = rm.get(h);
        if (list == null) {
            list = new ArrayList<Receiver<Response>>();
            rm.put(h, list);
        }
        list.add(resp);
        
        /*
         * Step 3: If we need to enqueue the request, then enqueue it and wake
         * up a thread to process. Note we only enqueue a request if the same
         * request hasn't already been enqueued.
         */
        
        if (enqueue) {
            fRequestQueue.addLast(r);
            
            if ((fWaitingThreads <= 0) && (fCurThreads < fMaxThreads)) {
                /*
                 * Create new processing thread
                 */
                
                Thread th = new Thread(new BackgroundThread(),"BThread Proc");
                th.setDaemon(true);
                th.start();
            }
            
            notify();
        }
    }
    
    /**
     * This is called when my handler is being disposed or going inactive and
     * is no longer interested in the response. The handler will be removed
     * and all of the responses will go away
     * @param h
     */
    public synchronized void remove(Handler h)
    {
        Iterator<Map.Entry<Request,HashMap<Handler,ArrayList<Receiver<Response>>>>> iter;
        
        /*
         * Iterate through all requests, removing handlers. If the request
         * goes empty, remove
         */
        
        iter = fReceivers.entrySet().iterator();
        while (iter.hasNext()) {
            Map.Entry<Request,HashMap<Handler,ArrayList<Receiver<Response>>>> e = iter.next();
            
            HashMap<Handler,ArrayList<Receiver<Response>>> pm = e.getValue();
            pm.remove(h);
            if (pm.isEmpty()) {
                /*
                 * The request is no longer needed; drop its (now empty)
                 * receiver map. If it is not in process, also remove it from
                 * the queue, as it doesn't need to be processed.
                 */
                
                iter.remove();
                
                Request req = e.getKey();
                if (!fInProcess.contains(req)) {
                    fRequestQueue.remove(req);
                }
            }
        }
    }
    
    /**
     * runOnMainThread: override on an OS which requires responses on the
     * main thread. For now, this just executes directly
     * @param r
     */
    protected void runOnMainThread(Runnable r)
    {
        r.run();
    }

    /**
     * This is the abstract method executed in the thread queue which responds
     * with a specified response.
     * @param req
     * @return
     */
    protected abstract Response processRequest(Request req);
}

Here is an example of using this class to receive data from a remote network connection. We would pass in a string and receive a byte array of the data from the remote server.

NetworkThreadPool.java

/*  NetworkThreadPool.java
 *
 *  Created on Jul 22, 2013 by William Edward Woody
 */

package com.glenviewsoftware.bthreadpool;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class NetworkThreadPool extends BroadcastThreadPool<byte[], String, Object>
{
    public NetworkThreadPool()
    {
        /* Max of 5 background threads */
        super(5);
    }
    
    private byte[] readFromInput(InputStream is) throws IOException
    {
        byte[] buffer = new byte[512];
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        int rlen;
        
        while (0 < (rlen = is.read(buffer))) {
            baos.write(buffer, 0, rlen);
        }
        
        return baos.toByteArray();
    }

    @Override
    protected byte[] processRequest(String req)
    {
        try {
            URL url = new URL(req);
            URLConnection conn = url.openConnection();
            InputStream is = conn.getInputStream();
            byte[] buffer = readFromInput(is);
            is.close();
            return buffer;
        }
        catch (Throwable th) {
            // Handle exception
            return null;
        }
    }

    /* The shared singleton instance */
    private static NetworkThreadPool gNetworkThreadPool;

    public static NetworkThreadPool shared()
    {
        if (gNetworkThreadPool == null) gNetworkThreadPool = new NetworkThreadPool();
        return gNetworkThreadPool;
    }
}

We can then invoke this by writing:

        NetworkThreadPool.shared().request(imageUrl, this, new NetworkThreadPool.Receiver<byte[]>() {
            @Override
            public void processResponse(byte[] r)
            {
                /* Convert to image bitmap and load */
                ...
            }
        });

and of course when the image view goes away, we can (on onDetachedFromWindow) write:

@Override
protected void onDetachedFromWindow()
{
    super.onDetachedFromWindow();
    NetworkThreadPool.shared().remove(this);
}

Disclaimer:

This code is loosely sketched together in order to illustrate the concept of using a thread pool married with a broadcast/receiver model to allow us to handle the case where the receiver no longer exists prior to a request completing in the background.

I believe the code works, but it is not thoroughly tested. There may be better ways to do this.

Feel free to use any of the above code in your own projects.

And if you need a dynamite Android or iOS developer on a contract basis, drop me a line. 🙂

OpenGL ES for iOS

I’m working on an application that needs to use OpenGL ES v2.0 on iOS and Android. And one problem I’m running into is getting a good in-depth discussion of OpenGL ES shaders on iOS: one book I have covers shaders well but sample code was written for Microsoft Windows. Another book was rated highly on Amazon–but the discussion seems to be geared to people who have never worked with OpenGL or with iOS before.

The immediate problem I’m running into is getting a bare-bones OpenGL ES app running at all. I finally have something, so I’m posting it here for future reference, and in case it works well for other people.

This basically marries the OpenGL ES sample code from both books, incorporating the shaders from one with the iOS base of the other.

This relies on GLKit; at this point, with iOS 6 on most devices and iOS 5 on most of the rest, there is no reason not to use GLKit. I’m only using GLKView, however; the types of applications I’m working on do not require constant rendering (like an OpenGL game), so I’m not using GLKViewController, which provides a timer loop that constantly renders frames for continuous smooth animation. (To plug in GLKViewController you just change GSViewController’s parent class to GLKViewController, and remove the delegate assignment to self.view in viewDidLoad.)

Also note I’m releasing resources on viewDidDisappear rather than on viewDidUnload; iOS 6 deprecates viewDidUnload.

GSViewController nib

This is actually very simple: the GSViewController nib contains one view: a GLKView. Not posted here because it’s so simple.

Note if you have other views and you want to move the GLKView to a different location in the hierarchy, modify the GSViewController.m/h class to provide an outlet to the view.

GSViewController.h

//
//  GSViewController.h
//  TestOpenGL
//
//  Created by William Woody on 6/12/13.
//  Copyright (c) 2013 Glenview Software. All rights reserved.
//

#import <UIKit/UIKit.h>
#import <GLKit/GLKit.h>

@interface GSViewController : UIViewController <GLKViewDelegate>
{
	EAGLContext *context;
	GLuint vertexBufferID;
	
	GLuint programObject;
}

@end

This implements the basic example out of the book OpenGL ES 2.0 Programming Guide. Note, however, that instead of creating a ‘UserData’ object and storing that in an ‘ESContext’ (which doesn’t exist on iOS, AFAIK), I keep the contents of the ‘UserData’ record (the programObject field) as instance variables, along with a reference to the EAGLContext (iOS’s equivalent of the ‘ESContext’) and a reference to the vertex buffer I’m using.

GSViewController.m

//
//  GSViewController.m
//  TestOpenGL
//
//  Created by William Woody on 6/12/13.
//  Copyright (c) 2013 Glenview Software. All rights reserved.
//

#import "GSViewController.h"

typedef struct {
	GLKVector3 positionCoords;
} SceneVertex;

static const SceneVertex vertices[] = {
	{ {  0.0f,  0.5f, 0.0f } },
	{ { -0.5f, -0.5f, 0.0f } },
	{ {  0.5f, -0.5f, 0.0f } }
};

@implementation GSViewController

GLuint LoadShader(GLenum type, const char *shaderSrc)
{
	GLuint shader;
	GLint compiled;
	
	shader = glCreateShader(type);
	if (shader == 0) return 0;
	
	glShaderSource(shader, 1, &shaderSrc, NULL);
	glCompileShader(shader);
	
	glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
	if (!compiled) {
		GLint infoLen = 0;
		glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
		if (infoLen > 1) {
			char *infoLog = malloc(sizeof(char) * infoLen);
			glGetShaderInfoLog(shader, infoLen, NULL, infoLog);
			NSLog(@"Error compiling shader: %s",infoLog);
			free(infoLog);
		}
		glDeleteShader(shader);
		return 0;
	}
	return shader;
}

- (BOOL)internalInit
{
	const char vShaderStr[] =
		"attribute vec4 vPosition;                                           \n"
		"void main()                                                         \n"
		"{                                                                   \n"
		"    gl_Position = vPosition;                                        \n"
		"}                                                                   \n";
	const char fShaderStr[] =
		"precision mediump float;                                            \n"
		"void main()                                                         \n"
		"{                                                                   \n"
		"    gl_FragColor = vec4(1.0,0.0,0.0,1.0);                           \n"
		"}                                                                   \n";
		
	GLuint vertexShader;
	GLuint fragmentShader;
	GLint linked;
	
	vertexShader = LoadShader(GL_VERTEX_SHADER,vShaderStr);
	fragmentShader = LoadShader(GL_FRAGMENT_SHADER, fShaderStr);
	
	programObject = glCreateProgram();
	if (programObject == 0) return NO;
	
	glAttachShader(programObject, vertexShader);
	glAttachShader(programObject, fragmentShader);
	glBindAttribLocation(programObject, 0, "vPosition");
	glLinkProgram(programObject);
	
	glGetProgramiv(programObject, GL_LINK_STATUS, &linked);
	if (!linked) {
		GLint infoLen = 0;
		glGetProgramiv(programObject, GL_INFO_LOG_LENGTH, &infoLen);
		if (infoLen > 1) {
			char *infoLog = malloc(sizeof(char) * infoLen);
			glGetProgramInfoLog(programObject, infoLen, NULL, infoLog);
			NSLog(@"Error linking shader: %s",infoLog);
			free(infoLog);
		}
		glDeleteProgram(programObject);
		programObject = 0;
		return NO;
	}
	return YES;
}

- (void)viewDidLoad
{
	[super viewDidLoad];
	
	GLKView *view = (GLKView *)self.view;
	NSAssert([view isKindOfClass:[GLKView class]],@"View controller's view is not a GLKView");
	context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
	view.context = context;
	view.delegate = self;
	[EAGLContext setCurrentContext:context];
	
	glClearColor(0.0f,0.0f,0.0f,1.0f);
	
	[self internalInit];
	
	// Generate, bind and initialize contents of a buffer to be used in GPU memory
	glGenBuffers(1, &vertexBufferID);
	glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
	glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
}

- (void)viewDidDisappear:(BOOL)animated
{
	[super viewDidDisappear:animated];
	
	GLKView *view = (GLKView *)self.view;
	[EAGLContext setCurrentContext:view.context];
	
	if (0 != vertexBufferID) {
		glDeleteBuffers(1, &vertexBufferID);
		vertexBufferID = 0;
	}
	
	view.context = nil;
	[EAGLContext setCurrentContext:nil];
	
	glDeleteProgram(programObject);
	programObject = 0;
	[context release];
	context = nil;
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
	glClear(GL_COLOR_BUFFER_BIT);
	
	glUseProgram(programObject);
	
	glEnableVertexAttribArray(GLKVertexAttribPosition);
	glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(SceneVertex), (const GLvoid *)offsetof(SceneVertex, positionCoords));
	glDrawArrays(GL_TRIANGLES, 0, 3);
}

@end

Note a few things. First, I’m setting up a GLKView for rendering; this is all handled in -viewDidLoad. I’m also setting up a vertex buffer in viewDidLoad; the OpenGL ES 2.0 Programming Guide example puts that initialization in Init() instead. The -viewDidLoad method also replaces some of the setup in the example’s main() method.

Also note that -(BOOL)internalInit replaces most of the rest of Init()’s functionality. Specifically we handle compiling the shaders and creating a program there.

I handle cleanup in -viewDidDisappear; keep in mind the example OpenGL ES application doesn’t do any cleanup. We do it here because our application may continue to run even after our view controller disappears, so we need to be a good citizen.

And our draw routine (glkView:drawInRect:) delegate doesn’t set up the viewport, nor does it need to call to swap buffers.


Yes, there are a lot of problems with this code. It’s a quick and dirty application that I’m using to understand shaders in OpenGL ES 2.0.

But I do get a triangle.

Please resize your designs to test if they work!

*sigh*

I remember months ago making this rant on my Facebook account. Yet it still continues.

Okay, look: if you’re laying out a user interface for Android or for iOS, remember that on iOS the minimum feature size someone’s finger can reasonably touch is 44 x 44 pixels. (On “retina-resolution displays” this becomes 88 x 88 pixels.)

So stop designing what looks good on your gigantic 30″ monitor, and start designing for what will look good on a 7″ iPad Mini, an iPhone 5, or a 4″ Android device.

If you really want to see what a design will look like to the user, take your 2048×1536 Photoshop comp and resize it down to 628 x 471 pixels. (NO, not 1024 x 768.)

On a typical desktop monitor with 100dpi resolution, a 628×471 pixel image is approximately the same physical size as the iPad Mini with its 163 dpi screen.

If you resize your design down to that size and the design looks like a muddled mess–well, guess the fuck what? When the programmer is done implementing your design, it’s going to look like a muddled mess.

And as the designer, guess whose fault that is? The programmer’s? Think again.

The same goes for the iPhone 5 (196 x 348), the iPhone 4 (196 x 294), the Google Nexus 4 (405 x 243), the Nexus 7 (592 x 370) and the like.

Remember: your desktop screen is around 100dpi. The devices you are designing for, however, are not–and what may look spacious and free and open on your 30″ monitor at 100dpi will look tiny and cramped and cluttered on a device with only a 4″ or 7″ or 10″ diagonal…

Building a static iOS Library

I’m using the instructions that I found here: iOS-Framework

But here are the places where things deviated thanks to Xcode 4.6:

(1) At Step 2: Create the Primary Framework Header, for some reason (I suspect because things changed in Xcode), specifying target membership for a header file no longer appears to work. From the notes, folks suggest using the Build Phases “Copy Files” section to specify where and how to copy the files.

So what I’m doing is, for every publicly available header file: (a) make sure it’s inserted into the “Copy Files” list, and (b) make sure the destination is given as “Products Directory”, with subpath include/${PRODUCT_NAME}.

(2) At Step 3: Update the Public Headers Location, note the script in step 5 uses “${PUBLIC_HEADERS_FOLDER_PATH}” to specify where files are to be copied from. So in Step 3, we need to make sure the public headers folder path is set to something more reasonable.

In this step, set the public headers folder path (and, for good measure, the private headers folder path as well) to “include/${PRODUCT_NAME}”.

These changes get me to the point where I can build the framework after step 5.


There was one other hitch: you cannot include the framework project (in the later steps) into the dependent project while the framework project is still open.

Learning to fly, and Jeppesen’s map data on Garmin is wrong.

It’s been a while since I’ve posted, I know.

But I have a good excuse. I’ve been learning to fly.

Flying is the coolest thing I’ve done in a long time. And in the past six months I’ve gone from my first lesson to being just a few weeks away from my check ride. My last flight was my long cross country to Bakersfield and to San Luis Obispo from Whiteman Airport. And the view! There is nothing cooler than seeing Avila Bay from the front windscreen of an airplane under your control.

Now to justify spending all this money learning to fly (it ain’t cheap!) I’ve been spending some time building aviation-related software products. My first product was an E6B calculator which also includes methods for calculating maneuvering speed (which changes as the weight of your plane changes; knowing it is vital when you hit turbulence).

My second product will be an EFB (Electronic Flight Bag): a program which shows you where you are on a map displaying airspace data, and also allows you to create a flight plan and file it with the FAA.

And it is in building the mapping engine for Android (my first target platform) that my current story starts.

Building a map and testing.

In order to make rendering on Android quick, I’ve built a slippy map engine that uses OpenGL. There are several advantages to this, the biggest being that if you scroll the map around you don’t have to redraw the entire screen: a few OpenGL translate or rotate calls, and you’re done. Add some code to detect when you need to rebuild your tiles, render the tiles in the background in a separate thread, and replace the tiles in the OpenGL instance once they’re done, and you can scroll around in real time even though it may take a second or so to render all the complex geometry.
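The heart of such a draw loop looks something like this sketch (the Tile class and the scroll/tile fields are stand-ins for illustration, using javax.microedition.khronos.opengles.GL10; the real engine also handles rotation and tile rebuilds):

    public void onDrawFrame(GL10 gl)
    {
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        gl.glLoadIdentity();

        /* Panning only changes the translation; no tile gets redrawn */
        gl.glTranslatef(-fScrollX, -fScrollY, 0);

        /* Each tile is a textured quad whose texture was rendered earlier
           in a background thread */
        for (Tile tile: fVisibleTiles) {
            tile.draw(gl);
        }
    }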

And in testing my slippy map code, I noticed something.

My source of data for the shape of the airspace around Burbank, Van Nuys and Whiteman is the FAA FADDS (Federal Aeronautical Data Distribution System) database, updated every 56 days. I have the latest data set for this current cycle, and I’ve rendered it using my OpenGL slippy map engine in the following screen snapshot:

You can ignore the numbers; they’re just for debugging purposes. The airspace being highlighted is the airspace at my altitude (currently set to 0); dim lines show airspace at a different flight level.

And I noticed something–interesting–when comparing the map with my Garmin Aera 796 with the Jeppesen America Navigation Data set:

There are a few differences.

Okay, let’s see what the printed VFR map shows for the same area:

I’ve taken the liberty to rotate the map to around the same orientation as the other maps.

And–there are errors.

Now the difference between the FADDS data set and the printed map is trivial: there is this extra crescent area that on the printed map belongs to Van Nuys:

The errors with the Jeppesen data set, however, are worse:

On this image I’ve superimposed the image of the terminal air chart for Van Nuys, Burbank and Whiteman on top of the Garmin’s screen. It may be hard to see exactly what’s going on, but if you look carefully you see three errors.

The first error is on the west side of Van Nuys’ airspace: it has been straightened out into a north-south line. On the chart, Van Nuys’ airspace curves to match Burbank’s class C airspace on the west.

The second error is the northern edge of Burbank’s C airspace over Whiteman. Whiteman’s class D airspace is entirely underneath Burbank’s class C airspace–but the border has been turned into a straight line on the Garmin.

The third error is a tiny little edge of airspace that shows Van Nuys and Whiteman’s class D airspaces overlapping. On the printed map, this little wedge belongs to Whiteman.

They say a handheld GPS device should only be used for “situational awareness” and not for navigation.

Well, one reason is simple: the airspace maps you’re looking at on the hand-held may be wrong.

I’ve also noticed similar errors around Oakland’s Class C airspace. Large chunks of the airspace have been turned from nice curving lines (showing a radius from a fixed point) into roughly shaped polygons.

Now my guess is this: the raw FADDS data from the FAA is similar to most mapping data: rather than specifying round curves, the georeferenced data is specified as a polygon with a very large number of nodes; some of the airspace curves are specified with several hundred points.

And somewhere during the conversion process, the number of points is being reduced to keep the file size compact and rendering fast: if the length of one edge of the rendered polygon is less than a couple of pixels, there is no point keeping the nodes of that line; you can approximate the polygon with one that has fewer edges with less than a single display pixel of error.
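This sort of reduction is commonly done with something like the Douglas-Peucker algorithm, which recursively drops points that lie within some tolerance of the chord joining the endpoints. A minimal sketch (my own illustration, using java.util.ArrayList and java.util.List; not Jeppesen’s actual pipeline):

    /** Perpendicular distance from point p to the line through a and b */
    private static double distance(double[] p, double[] a, double[] b)
    {
        double dx = b[0] - a[0];
        double dy = b[1] - a[1];
        double len = Math.hypot(dx,dy);
        if (len == 0) return Math.hypot(p[0]-a[0], p[1]-a[1]);
        return Math.abs(dx*(a[1]-p[1]) - (a[0]-p[0])*dy)/len;
    }

    /** Douglas-Peucker: drop points that lie within tol of the chord */
    public static List<double[]> simplify(List<double[]> pts, double tol)
    {
        if (pts.size() < 3) return new ArrayList<double[]>(pts);

        /* Find the point farthest from the line joining the endpoints */
        int index = 0;
        double dmax = 0;
        double[] first = pts.get(0);
        double[] last = pts.get(pts.size()-1);
        for (int i = 1; i < pts.size()-1; i++) {
            double d = distance(pts.get(i), first, last);
            if (d > dmax) { dmax = d; index = i; }
        }

        if (dmax > tol) {
            /* Farthest point matters; recursively simplify both halves */
            List<double[]> left = simplify(pts.subList(0, index+1), tol);
            List<double[]> right = simplify(pts.subList(index, pts.size()), tol);
            List<double[]> result = new ArrayList<double[]>(left.subList(0, left.size()-1));
            result.addAll(right);
            return result;
        } else {
            /* Everything between the endpoints is within tolerance; drop it */
            List<double[]> result = new ArrayList<double[]>();
            result.add(first);
            result.add(last);
            return result;
        }
    }

Pick the tolerance too large–say, larger than the real deviation of a curved arc–and the curve collapses into exactly the kind of straight edge seen on the Garmin.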

But somewhere along the path, too many polygon edges are being removed–turning what should be a curved line into a straight edge.

I don’t know if this is because there is a limit to the number of polygon lines permitted in the file format being used for export and import, or if this is because approximation code is going haywire.

But curved lines are being turned straight–which implies if you are skirting someone’s airspace, and relying on your hand-held Garmin–you could very well be intruding into someone’s airspace and not even know it.

UDIDs are gone.

After warning from Apple, apps using UDIDs now being rejected

UDIDs are now gone.

But if you need a unique device identifier, it’s easy enough to generate one and track the device that way. Of course, because the identifier is your own, you cannot match numbers against other software makers’ identifiers, and the identifier won’t necessarily survive the user uninstalling and reinstalling the application–but on the other hand, if you save the unique identifier with the device and the user upgrades his phone, the identifier will move to the new phone (via backup and restore), following the user.

Step 1: Create a UUID.

You can create the UUID using Apple’s built in UUID routines.

	CFUUIDRef ref = CFUUIDCreate(nil);
	uuid = (NSString *)CFUUIDCreateString(nil,ref);
	CFRelease(ref);

Step 2: Write the UUID out to the Documents folder, so the UUID gets backed up with the phone.

Because under the hood iOS is basically Unix, we can use the C standard library to handle creating and writing the file for us:

	char buf[256];
	FILE *f;

	/* HOME is the sandbox root directory for the application */
	strcpy(buf,getenv("HOME"));
	strcat(buf,"/Documents/appuuid.data");

	f = fopen(buf,"w");
	fputs([uuid UTF8String], f);
	fclose(f);

Step 3: If the file is already there, use what’s in the file rather than generating a new UUID. (After all, that’s the whole point of this exercise; to have a stable UUID.)

Putting all of this together, we get the following routine, which can be called when your application starts up in the main UIApplicationDelegate when you load the main window:

- (void)loadUUID
{
	char buf[256];
	strcpy(buf,getenv("HOME"));
	strcat(buf,"/Documents/appuuid.data");
	FILE *f = fopen(buf,"r");
	if (f == NULL) {
		/*
		 *	UUID doesn't exist. Create
		 */
		
		CFUUIDRef ref = CFUUIDCreate(nil);
		uuid = (NSString *)CFUUIDCreateString(nil,ref);
		CFRelease(ref);
		
		/*
		 *	Write to our file
		 */
		
		f = fopen(buf,"w");
		fputs([uuid UTF8String], f);
		fclose(f);
	} else {
		/*
		 *	UUID exists. Read from file
		 */
		
		fgets(buf,sizeof(buf),f);
		fclose(f);
		uuid = [[NSString alloc] initWithUTF8String:buf];
	}
}

This will set the uuid field in your AppDelegate class to a unique identifier, retaining it across application invocations.

Now any place where you would need the UDID, you can use the loaded uuid instead. This also has the nice property that the generated uuid is 36 characters long, 4 characters narrower than the 40 character UDID returned by iOS; thus, you can simply drop in the uuid into your back-end database code without having to widen the table column size of your existing back-end infrastructure. Further, because the UDID format and the uuid formats are different, you won’t get any accidental collisions between the old and new formats.

My e-mail bag: The Flowcover transformation matrix

I just downloaded your Flowcover library and it’s a fantastic piece of work, especially for a beginner who is trying to learn OpenGL like me. I have a couple of doubts about it.

In this piece of code.

	GLfloat m[16];
	memset(m,0,sizeof(m));
	m[10] = 1;
	m[15] = 1;
	m[0] = 1;
	m[5] = 1;
	double trans = off * SPREADIMAGE;
	
	double f = off * FLANKSPREAD;
	if (f < -FLANKSPREAD) {
		f = -FLANKSPREAD;
	} else if (f > FLANKSPREAD) {
		f = FLANKSPREAD;
	}
	m[3] = -f;
	m[0] = 1-fabs(f);
	double sc = 0.45 * (1 - fabs(f));
	trans += f * 1;
	
	glPushMatrix();
	glBindTexture(GL_TEXTURE_2D,fcr.texture);

	glTranslatef(trans, 0, 0);

	glScalef(sc,sc,1.0);


	glMultMatrixf(m);

How did you calculate the matrix m? Since I suppose m[0] and m[3] are in a column-major format, how did you calculate the math to use it to skew the objects?

Thanks and regards,
[name withheld]

http://www.opengl.org/resources/faq/technical/transformations.htm

“For programming purposes, OpenGL matrices are 16-value arrays with base vectors laid out contiguously in memory. The translation components occupy the 13th, 14th, and 15th elements of the 16-element matrix, where indices are numbered from 1 to 16 as described in section 2.11.2 of the OpenGL 2.1 Specification.”

Normally m[3] is not used directly in standard OpenGL operations, though transformations may result in m[3] being populated. It essentially adds x in the source (x,y,z,w) vector into w′ in the destination (x′,y′,z′,w′) vector, which is then used to divide through to get the final (x_r, y_r, z_r) = (x′/w′, y′/w′, z′/w′). So in this case, m[3] = -f and m[15] = 1, so w′ = 1 − f·x (since w = 1), which is then divided through x′, y′, z′ to give the final points.

In other words, I’m using the x position on the tile to divide through the points of the tile to give the perspective skewing effect.

I then multiply m[0] by 1 – fabs(f) to shorten the tile in x a little more.

Hope this helps.

– Bill

Static frameworks in iOS

Just a reminder for me: Universal Framework iPhone iOS (2.0)

This is a how-to to build a static iOS Universal framework for packaging reusable code.

Caveats:

(1) You really need to set “Generate Debug Symbols” to NO. But for whatever reason, “Generate Debug Symbols” doesn’t show up until you first build the product. So build it, then set debug symbols to NO, then build again.

(2) You need to fix up the precompiled headers to remove the reference to the cocoa headers. There is an earlier article which notes these steps: Creating Universal Framework to iPhone iOS.