Retention Policy in Java Annotations

Something I keep forgetting, which I’m recording here so I can remember in the future.

When creating a new annotation that is to be examined at runtime via Java reflection, you must change the retention policy with the @Retention annotation.

So, for example:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
public @interface MyCustomAnnotation {
...
}

This will keep the annotation around when checking annotations at runtime.
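Here’s a minimal, self-contained sketch of reading the annotation back at runtime (the annotation and class names are made up for illustration):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Without RUNTIME retention, getAnnotation() below would return null.
@Retention(RetentionPolicy.RUNTIME)
@interface MyCustomAnnotation {
    String value() default "";
}

@MyCustomAnnotation("tagged")
class Tagged {
}

public class RetentionDemo {
    public static void main(String[] args) {
        // Reflectively fetch the annotation instance from the class object.
        MyCustomAnnotation ann = Tagged.class.getAnnotation(MyCustomAnnotation.class);
        System.out.println(ann == null ? "not visible" : ann.value());
    }
}
```

With the default retention (RetentionPolicy.CLASS), the same lookup returns null, because the annotation is discarded by the VM before run time.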

For whatever reason I keep forgetting this.

GWT, ScrollPanel and the iPad.

GWT makes it easy to create a scrolling panel within an HTML-driven web site by using the ScrollPanel class. ScrollPanel itself expands into a div tag carrying the CSS overflow property; specifying which axes the scroll bars land on translates into overflow-x or overflow-y properties on the generated HTML DOM object.
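For reference, the generated markup looks roughly like this (a sketch; the exact inline styles GWT emits may differ):

```html
<!-- Roughly what a ScrollPanel becomes in the DOM (illustrative only) -->
<div style="overflow: auto; width: 400px; height: 300px">
  <!-- the ScrollPanel's child widget renders here -->
</div>
```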

And the iPad (along with the iPhone and iPod Touch) does not let you scroll within those overflow regions; the touch-drag gesture only scrolls the entire page.

What this means is that if you are putting together a GWT application you want to work on the iPad, the natural instinct–creating a root panel the full size of the screen and tossing a ScrollPanel into the lower right that resizes with the window–is incorrect. Instead, you are better off writing your GWT code so that it generates properly sized HTML, and scrolling around the GWT application using the window’s scroll bars.

Sometimes you have to own something completely pointless.

Mine arrived today: the USB Typewriter–and it worked like a charm. It’s a rather amazing accomplishment: a used typewriter retrofitted to work as a USB keyboard.

Me, I love old mechanical typewriters. I love old mechanical calculators. And the fact that this one can be used as a modern USB keyboard? *squeeeee!*

No; I have no intention of using this day to day. But it certainly is a conversation piece!

Building a Java RMI system.

Yeah, this is old technology from the stone age. But I wanted to set up a simple Java RMI system.

And it turns out every tutorial I encountered starts with:

        if (System.getSecurityManager() == null) {
            System.setSecurityManager(new SecurityManager());
        }

Then there is a ton of pages showing how we then customize the security manager settings through all sorts of security property files, or extending the SecurityManager class to give us explicit permissions or all of that stuff.

Of course all of this is nice. However, for just testing something it’s unnecessary: the SecurityManager (once set) prohibits access to various things in the JVM unless explicitly opened–and the suggestion to then grant all permissions just seems silly to me: I’m locking the door, then I’m holding it wide open.
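For the record, the “grant everything” policy file those tutorials then have you write looks roughly like this–locking the door, then propping it wide open:

```
grant {
    permission java.security.AllPermission;
};
```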

Just don’t lock the damned door.

The following seems to work for me; YMMV.

TestInterface.java

import java.rmi.Remote;
import java.rmi.RemoteException;

public interface TestInterface extends Remote
{
    String sayHello() throws RemoteException;
}

TestServer.java

import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

import com.atti.addel.test.springrpc.common.TestInterface;

public class TestServer implements TestInterface
{
    protected TestServer() throws RemoteException
    {
        super();
    }

    public String sayHello() throws RemoteException
    {
        return "Hello World.";
    }
    
    /**
     * Run this as an RMI object
     * @param args
     */
    public static void main(String[] args) 
    {
//        if (System.getSecurityManager() == null) {
//            System.setSecurityManager(new SecurityManager());
//        }
        
        try {
            LocateRegistry.createRegistry(1099);
        }
        catch (RemoteException e) {
            System.out.println("Registry exists.");
        }
        
        try {
            TestInterface engine = new TestServer();
            TestInterface stub = (TestInterface)UnicastRemoteObject.exportObject(engine,0);
            
            Registry registry = LocateRegistry.getRegistry();
            registry.rebind("TestServer",stub);
        }
        catch (Exception ex) {
            System.out.println("Unable to start up.");
            ex.printStackTrace();
        }
    }
}

TestClient.java

import java.rmi.Naming;

import com.atti.addel.test.springrpc.common.TestInterface;

public class TestClient
{
    /**
     * @param args
     */
    public static void main(String[] args)
    {
        try {
            TestInterface test = (TestInterface)Naming.lookup("rmi://localhost/TestServer");
            System.out.println(test.sayHello());
        }
        catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}

Once you run the server, a service “TestServer” is registered in the local RMI registry. The client then looks up that service, invokes the server’s sayHello method, and prints the answer.

Naturally more complex examples and interfaces can be bolted onto this. But this seems to be the bare-bones minimum–and it avoids things like rmic, policy files and security permissions.

I ran this on my Macintosh (Mac OS X 10.5) running JVM 1.6 in Eclipse 3.5 without any problems, outside of wasting an hour trying to figure out why I was getting a “java.security.AccessControlException”–which could be traced directly to setting up a security manager.

Punching Holes in the Schedule

Software developers take a long time to get rolling on a project, and once going, it takes a long time to complete a task. So when you set up a meeting, I really can’t start a task in the 30 minutes beforehand, and it takes about 30 minutes afterwards for me to get started again.

Calling a 9:45 am meeting–when I get in at around 9:15–and having it run until 30 minutes before my 11 am meeting means you’ve punched a hole in my entire damned morning.

Which makes it doubly-obnoxious when you punch a hole in my morning–then simply fail to show up.

*sigh*

If I don’t respond to requests about FlowCover, it’s because I’m busy with a day job that demands most of my time.

I’d be happy to clear up time for a consulting fee–but unfortunately my current job takes up all of my time, so I don’t even have time to consult for money. 😦

Thus is life. So if I don’t respond to your e-mails in a timely fashion, sorry…

Forestalling death marches.

This came up at work.

Now that I’m leading a team I’ve gotten suggestions from a couple of people about using external frameworks or how we should rearrange our code. Some of the suggestions have been made by people outside of my team, some have come from inside my team. And it makes sense: anyone with more than a few years of experience starts thinking of code reuse–and that leads them to either design patterns or to code frameworks that, in the short term, seem like they should save time.

But then, despite the best intentions of senior developers and architects and development managers, the whole thing turns into a cluster-fuck of biblical proportions. The projects that succeed do so because of a death march. The ones that fail do so because the programmers couldn’t make the death march. In fact, the idea is so ingrained in the collective psyche of the development community that we just assume all projects end with a death march.

In all the years I’ve been working on code, I’ve either worked on projects that shipped early and did not involve a death march, or I’ve worked on projects that turned into a death march for members of my team–and ultimately failed.

And I’ve noticed that the ones that do turn into death marches do so because they violate one of the following four principles:

Discoverability.

A software developer cannot keep an entire project inside his head. It’s just impossible. So all the software developers I know rely upon a variety of shortcuts, mnemonics and half-remembered memories of how they put their own code together to keep track of what is in there.

Software developers often want to decry other people’s work–and that makes sense as well: if you didn’t write it, it probably doesn’t make sense.

The reality is that various IDEs like Eclipse, Xcode and NetBeans contain all sorts of tools to help you figure out how someone else’s code works. It’s easy to find all the methods which call into a particular method, or to set a breakpoint in the code and see how it’s being called.

Assuming, of course, that you haven’t used a framework which uses Java reflection to invoke methods via a configuration file (I’m looking at you, Spring!), or haven’t used the anonymous id type in Objective-C to invoke methods whose meanings differ slightly depending on the type of object receiving the message.

Discoverability allows a new software developer to figure out what’s going on in some new code. It requires good documentation, both in the comments and alongside the code. And it requires that we not use development techniques that negate method-call analysis tools and class hierarchy browsers, or wrap such small pieces of functionality into individual libraries that we are effectively “programming” by using configuration files to plug together groups of a dozen lines of code or less.

The opposite of discoverability is opaqueness: the inability to look at existing code and understand what it is doing. When code is no longer discoverable, we as developers no longer feel comfortable reaching in and changing or modifying the existing code base, but instead resort to using the “Lava Flow” anti-pattern. And that contributes to code bloat, uncertainty in functionality, and increased development and testing costs.

Compilability

Related to discoverability and debugability, this is the ability to use an IDE (such as Eclipse) to check out someone else’s source code from a source control system and easily and quickly build the end product.

The key here is “easily and quickly.”

A lack of compilability means it is harder for new people on your team to build the product or understand the project. It makes it harder for an operations person to rebuild your product. It puts firewalls in the way of QA testing, since it means it is harder for someone to slipstream a new revision to resolve known bugs. And it generally leads to a failure in debugability below.

I once worked on a project that, on a very fast machine, took an hour and a half to compile. Worse, the project could not be hosted in an IDE for debugging; the only way you could debug the project was through inserting print statements. So I used to start my day scrubbing bugs and coming up with a strategy for debugging and fixing those bugs: since I only had time during the day to compile the product 6 times, I had to plan how I was going to fix multiple bugs in such a way as to maximize my six opportunities to build the product. And God help me if QA needed a new deployment: this involved rolling back my test changes, removing my print statements–and ate one of my six chances to build the project for testing purposes.

Debugability

This is extremely key: to me, this means the ability, once your environment is set up, to hit the “debug” button and see the entire project work, from the UI all the way down the stack through the client/server calls to the database calls made at the back-end.

GWT is wonderful in that, with the standard Eclipse project, you can write a client/server system and, on hitting “debug” and cutting and pasting the suggested URL into your browser, set breakpoints both in the front-end Java code that drives the UI, and in the back-end server calls, and see how everything works all the way down.

One of the people I was working with on our team was so wedded to Maven that he suggested we should sacrifice Eclipse, and sacrifice Debugability, in order to allow us to integrate within Maven. (The problem is that Maven, Eclipse and GWT didn’t interact very well together–while I’m sure things have improved, at the time it was “Maven, Eclipse, GWT: pick two.”)

Let’s just say it was hard for me not to visibly lose my temper.

Without debugability there is no discoverability. There is no way to set a breakpoint and know how the project works. There is no easy way to fix bugs. There is no easy way to do your work. It turns what should be an interesting and perhaps fun project into the death march from hell.

In the aforementioned project, because I only had six chances to build the product, and could only use print statements to inspect the internal state of the system, I had to be tremendously clever to figure out what was going on. I had to rely on code review–and there were a number of bugs which easily took me a week to resolve that–had I been able to set a breakpoint in Eclipse and run the entire product by hitting “debug”–would literally have taken minutes to fix.

Think of that: without debugability, a problem that should take minutes took hundreds of times longer to resolve.

Debugability is easy to lose. Use the wrong build system, one that is incompatible with a good IDE. Set up a project that is built across multiple projects, some of which are shipped as opaque libraries. Use a client/server system where one or both pieces contain large opaque elements. Many developers don’t even realize the value of debugability; they’re still used to writing code in vi or Emacs, and who wants to use an IDE anyway? It’s only when, after having not used an IDE, they turn to one to debug–and discover the build process they set up is impossible to alter to fit within that IDE–that they find they’re stuck with only six chances a day to find their bugs, and waste a week fixing what should have taken minutes.

Flexibility

I’m a huge believer, within UI systems, of the delegate design pattern and the principles behind the MVC system–though I can’t say I fully appreciate the MVC pattern as a cure-all for all that ails UI development. (I’ve seen, for example, the suggestion that Ruby on Rails implements web MVC: *shudder.* Unless you’re doing pure AJAX, what people call “MVC” really isn’t–so I tend to take a leery eye towards such things.)

I believe in them, because I believe it should be possible to take any component in your system, rip it out, replace it with a substitute component, and have minimal (if any) impact on the overall code base. You should, for example, be able to replace a GWT Button with a custom button built using GWT Label and not have to rewrite your entire user interface. You should be able to take your custom text editor panel and plug in the NSTextField and (aside from renaming a few methods to fit the delegate protocol) have no changes to your overall program. If your custom table code doesn’t work you should be able to plug in someone else’s custom table code and–again, with little change besides renaming a few interface methods, use the new custom table code without rewriting your entire application.

And, especially with a user interface, you should be able to rearrange the buttons, the controls, the layout of your system without having to do much work beyond perhaps changing a few observer calls.
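As a minimal Java sketch of the delegate idea (all names hypothetical): the table depends only on a small interface, so swapping in a different data source requires no change to the table itself.

```java
// Hypothetical sketch: the widget depends only on this small interface.
interface TableDataSource {
    int rowCount();
    String textForRow(int row);
}

class SimpleTable {
    private final TableDataSource source;

    SimpleTable(TableDataSource source) { this.source = source; }

    // Render rows by asking the delegate; the table never knows
    // where the data actually comes from.
    String render() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < source.rowCount(); i++) {
            sb.append(source.textForRow(i)).append('\n');
        }
        return sb.toString();
    }
}

public class DelegateDemo {
    public static void main(String[] args) {
        // Swapping the data source requires no change to SimpleTable.
        TableDataSource fixed = new TableDataSource() {
            public int rowCount() { return 2; }
            public String textForRow(int row) { return "row " + row; }
        };
        System.out.print(new SimpleTable(fixed).render());
    }
}
```

Replacing the anonymous data source with, say, a database-backed one touches nothing but the object handed to SimpleTable’s constructor.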

Without flexibility when product requirements change (as they inevitably do) it becomes difficult to make the changes needed without revamping large amounts of code. And if your code is not discoverable, those revamps often follow the “Lava Flow” antipattern. Worse, if your code is not debuggable, those changes become incredibly risky and problematic–and your entire project becomes a death march very quickly.

On my team I’m trying to communicate these four points. I don’t care what people do on my team–I assume all of them are extremely smart, capable, and self-motivated folks who are doing what they do because they want to do it, and because at some level they love doing it.

But these four points must be maintained. Without discoverability, compilability, debugability and flexibility, we will quickly sink into a death march–and not only does that mean long hours, it means no one will want to be at work. It is discouraging knowing you have a deadline in a month, five bugs to fix, and no way to fix those bugs faster than one bug a week.

Me; I’d rather fix those bugs in an hour–and take the next week off reading random development blogs.

It’s not done until you document your code.

I remember the original “Inside Macintosh.” I actually still have the loose-leaf binder version of “Inside Macintosh” that shipped with System v1.

The original Inside Macintosh documented the “Macintosh Toolbox” (the acronym “API” wasn’t in common use then), and, aside from two introductory chapters–one which described the OS and one which documented a sample application–each chapter followed the same formula. The first part of the chapter, running from one to a dozen pages, would provide an overview of that manager. For example, the “Resource Manager” overview describes what resources are, why resources are important, and how resources are stored on disk. The second part of the chapter would always be “Using the XXX Manager”–giving examples which generally followed the pattern of how you initialized that manager, how to create or manipulate its fundamental objects, how to dispose of those objects, and how to shut the manager down. This made up the bulk of the chapter. And the end of the chapter would be a summary–generally a summary of the header file for that manager.

It always struck me that such a model was a great way to handle documentation. Start with a 1 to 3 page introduction to whatever fundamental module you are documenting–a “module” consisting of a logical unit of functionality, which could consist of several classes. Then launch into a 20 page document showing how you use that module: how you start it up, how you use the major features, how you configure the major features, how you shut it down. And give snippets of code showing how these are done.

And the summary should point to either the generated JavaDocs or HeaderDoc generated documentation giving the specific calls and specific parameters for each call.

What is interesting about such a model is that it should be fairly easy to write for the technical person creating the toolset: he knows how he wants his class set to be used, so he should be able to craft documentation describing how to use it. For a class which presents a table, for example, the developer has a mental model of how tables are displayed: a delegate or data source is created which responds to certain calls and is attached to the table.
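In Javadoc terms, that overview-then-usage structure might look something like this (a hypothetical class, purely to illustrate the shape):

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Manages named resources for an application.
 *
 * <p><b>Overview.</b> A resource is a named blob of data. (This is a
 * hypothetical class, just to illustrate the overview/usage/summary
 * documentation structure.)
 *
 * <p><b>Using the resource manager.</b> Create a manager, store and
 * fetch resources by name:
 *
 * <pre>
 *   ResourceManager mgr = new ResourceManager();
 *   mgr.put("greeting", "hello");
 *   String s = mgr.get("greeting");
 * </pre>
 */
public class ResourceManager {
    private final Map<String, String> resources = new HashMap<>();

    /** Stores a resource under the given name. */
    public void put(String name, String value) { resources.put(name, value); }

    /** Returns the resource with the given name, or null if absent. */
    public String get(String name) { return resources.get(name); }

    public static void main(String[] args) {
        ResourceManager mgr = new ResourceManager();
        mgr.put("greeting", "hello");
        System.out.println(mgr.get("greeting"));
    }
}
```

The class-level comment carries the overview and the usage snippet; the per-method comments become the “summary” layer that the generated JavaDocs expose.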

It has always been my opinion that writing code is simply one part of a multi-step process which ultimately results in your code being used. After all, isn’t the whole point of creating code to get people to use it? Developers rail against “lusers” who refuse to learn how to use their computers–but I suspect it’s because developers know their fellow developers are more likely to work through the source kit than the average person, and it allows them the excuse not to write the documentation they should write.

Your code isn’t finished until it is well documented. And complaining about people who are confused without good documentation is simply shifting the blame.

iPhone OS Fragmentation.

Apple Lists iPhone OS 4 Compatibility, Excludes Original iPhone and 1st Gen iPod Touch

Up until now the biggest advantage of the iPhone ecosystem is that you could simply code for the latest OS version to ship, and unless you needed certain hardware (such as the camera in the iPhone), you just wrote your software and you could guarantee that you could run on whatever platform you wanted.

This represents a serious advantage to the iPhone development model over Android, where currently shipping phones contain every OS version from v1.6 to v2.1, or Windows Mobile which has a similar spread of currently shipping versions from 6.0 to 6.5.

The first crack in this came from OpenGL support–but that made sense: only the latest 3rd generation iPod Touch or iPhone 3GS would have the hardware necessary to support OpenGL ES v2.0 instead of v1.1. The differentiation could be handled (for the most part) during initialization. And really, this only affects 3D games.

The next crack came with the introduction of the iPad. When the iPad was announced as running iPhone OS 3.2, I naturally assumed that within a week of the iPad launch we’d see an iPhone OS 3.2 update come out for the iPhone and iPod Touch, which perhaps provided some minor APIs used to detect screen size, and maybe wrapped some of the new UI calls with stubs that alert the developer. Instead, Apple released build tools that make it possible to ship code that can run on v3.1 or v3.2, detecting at run time whether the symbols are available. The work is relatively simple: from my code I just wrote:

- (IBAction)doOptions:(id)sender
{
	// Load the options UI from its nib; the controller is the first object.
	NSArray *array = [[NSBundle mainBundle] loadNibNamed:@"OptionsViewMenu" owner:self options:nil];
	OptionsViewController *ctrl = [array objectAtIndex:0];
	ctrl.menu.delegate = self;
	
	if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
		if (popup != nil) [popup release];
		
		// Look the class up by name so the binary still loads on v3.1,
		// where UIPopoverController does not exist.
		Class uiPopoverController = NSClassFromString(@"UIPopoverController");
		
		popup = [[uiPopoverController alloc] initWithContentViewController:ctrl];
		ctrl.popup = popup;
		CGRect loc = ((UIView *)sender).bounds;
		[popup presentPopoverFromRect:loc inView:sender permittedArrowDirections:UIPopoverArrowDirectionAny animated:YES];
	} else {
		// On the iPhone, fall back to a modal flip presentation.
		ctrl.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal;
		[self presentModalViewController:ctrl animated:YES];
	}
}

My options user interface view controller now shows up in a pop-up on the iPad, and drills down (with a rotation animation) on the iPhone. To get the size right I simply included in my options view controller the following code in -viewDidLoad:

	if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
		CGSize size = self.contentSizeForViewInPopover;
		size.height = 44 * 10;
		self.contentSizeForViewInPopover = size;
	}

And the height is kept to a reasonable size.

Okay, I’m not a huge fan of conditional code, but honestly this only comes up in a few places, so it doesn’t affect the majority of my code base.

But now the final straw is coming with the iPhone OS v4 release in the summer. That final straw is simply this:

There will be phones in the iPhone/iPod Touch ecosystem, in use by real users, which cannot run the latest iPhone OS–and which must be supported by developers if they are to reach the entire Apple iPhone/iPod Touch ecosystem.

Sure, it’s not as bad as the Android situation, where v1.6 of the operating system is shipping in hardware being sold today. (The Cliq and G1, specifically.) But it’s still a problem: as a developer it means I need to maintain older hardware to validate that my software runs under v3.1.3, which will probably be the latest version of the iPhone OS that can run on the older devices.

And worse: Apple announced that multitasking will only run on the 3rd generation hardware. Which means all those applications that want to take advantage of multitasking will need code in place to handle hardware where the multitasking calls are not available, given the large numbers of iPhone 3G devices (such as mine) floating around out there.

Apple’s fragmentation perhaps could be seen as inevitable. But I don’t think so: at the very least, ship a version of iPhone OS v4 that runs on the basic hardware and has the appropriate calls to allow software to detect features that are not present, like multitasking. Provide a way in the simulator to easily test against older versions. And because the number of hardware combinations that need to be tested is growing substantially, provide an easier way to distribute beta test code. (If Apple won’t allow applications to be shipped outside of the App Store, then at least provide a “beta” section of the App Store that is not linked to the main App Store home page, with applications in beta testing that can only be reached via an explicit URL.)

Apple’s ecosystem is large and powerful–but it’s still a little young, and experiencing some growing pains. This is the biggest growing pain: how do developers deal with a non-homogeneous hardware and OS ecosystem? Once we adjust to the iPad and iPhone, the next step will be dealing with a third size form-factor, or different iPhone screen sizes. But that won’t be overcome until Apple can help reduce the testing burden–which is the biggest problem you have when you have a non-homogeneous target environment.