GSM Feature codes that appear to work with the iPhone on the AT&T network.

Some random notes I cribbed together regarding GSM feature codes on the iPhone on the AT&T network. I tried some of them (including the #31# trick), but not all of them. YMMV.

GSM Feature Codes (that appear to work on the iPhone on AT&T)

http://www.geckobeach.com/cellular/secrets/gsmcodes.php

Feature codes 21, 67, 61, 62 appear to work. Note that changing the number for unanswered voice calls from the default that was programmed into the phone may make it harder to reset; meaning if you set up call forwarding to something other than the voice mail number (on my phone, 1-253-709-4040), write down what it was before so you can re-enable voice mail.

Feature code 31 appears to work. (#31#number hides your phone number.)

http://cellphonetricks.net/apple-iphone-secret-codes/

All codes work. The two-digit call codes all accept the same formats documented in the link above, I believe. (Note that some of these codes duplicate functionality that can be found in Settings.)

http://www.arcx.com/sites/GsmFeatures.htm

This notes that you can also set a timeout parameter for “Forward if not answered,” to set the amount of time before your call forwards somewhere. I don’t know if this works. Note that the FIDO code (3436) to send to voice mail doesn’t appear to work on the iPhone. Typing in a full 11-digit number (1 + area code + number) appears to work.
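For reference, here’s the general GSM MMI syntax these codes follow, as I understand it from the GSM standard (the number below is my voice mail number from above; the timeout variant is the one the linked page describes, and I haven’t verified it myself):

```
**21*12537094040#        Register and activate unconditional forwarding
#21#                     Deactivate unconditional forwarding
*#21#                    Query forwarding status
**61*12537094040**30#    Forward if not answered, with a 30 second timeout
```

The same pattern should apply with 67 (forward if busy) and 62 (forward if unreachable) in place of 21 or 61.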

On Requirements Documentation.

After reading this article I just wanted to add my two cents.

  • Documentation takes time to write, read, and communicate. Meaning if you write a 100-page specification for a software product, the developers are going to have to read the 100-page specification for your software product, and you and your software developers are going to have to hold several meetings in order to make sure the intent of that 100-page document was properly communicated.
  • Documentation also takes time to maintain. If you write a 100-page document you are going to have to go through and revise that 100-page document as new facts on the ground develop: as you learn more about your customers and as you learn more about the product itself as it is being built. Each revision to the document must also be read by your software developers, and you’re going to have to communicate the effective impact of those changes in a series of meetings as well.
  • This implies a very simple fact: if it takes you a day to write a product specification, allocate two days for the specification to be read, and three days for meetings to effectively communicate the information in your specification to your developers. If it takes a month, allocate two and three months, respectively. Or resign yourself to the fact that your product specification document won’t be read–and the entire exercise in writing that document was at best a masturbatory waste of time.

I note these three facts because they are often forgotten by product managers. (And about that 100-page specification: I’ve seen them. Worse, I’ve seen them delivered three quarters through the development cycle, by proud product managers who believed that, after spending months writing them, we could then execute on these massive tomes with perhaps a day or two of reading, rewriting the parts of the software we had already developed as needed.)

Clearly the theory behind the Laffer curve also applies to specification documents. No documentation at all is bad; it means we don’t have any consensus as to what we’re building. Too much documentation is also bad: it means we can never find the time to develop a consensus–and that assumes that the documentation is not internally inconsistent. (Sorry, Product Managers–but in general you’re not trained Software Architects, so please don’t play them. I’ve seen product specifications which specified the algorithm to use, by Product Managers who flunked out of college math. Me, my degree from Caltech was in math, so just tell me what you want me to build and let me figure out how to build it, or tell you why it can’t be built as specified with the budget allocated.)

So there is clearly a sweet spot in specification documentation.

And the keyword (which I slipped by above, in case you didn’t see it) is consensus.

The software specification document is used to help build, communicate and maintain a consensus as to what we are going to build, with the Product Manager providing input from his interactions with the customer as to what the customer wants. (As a Product Manager you’ve identified and talked to the customer, right? Right?) The best way to build the consensus is to effectively communicate the needs clearly, while getting feedback from the developers as to what they believe they can and cannot build. (And if a developer tells you they can’t build it, listen to them–because it may be that while it can be done, they don’t know how. And remember: we don’t fight the war we want, we fight the war we have–and we fight it with the people we have. Which also implies that you should listen to the developers because they may know how to do something you thought was impossible which makes all the difference in the world.)

So in my opinion, a well built product specification is:

  • As short as possible while still communicating what is needed. (That way everyone can understand it quickly, and internal inconsistencies don’t creep in. Further, short is easier to maintain as the facts on the ground change.)
  • Clear about what is needed, not how it should be built. (A product manager who specifies how something should be built–what components, what algorithms, etc.–is either playing the job of software architect he is not qualified to play, or doesn’t trust his developers. Either case spells serious trouble for the team.)
  • As much a product of consensus building as of top-down management. (Otherwise the product manager is assuming capabilities and limitations that may not actually be true, and is demonstrating distrust of the development team.)

But ultimately this is about building a consensus: a consensus as to what the customer wants and needs, with the Product Manager as the go-between, communicating with both the customer of the product and with the development team building the product. Sometimes the product manager needs to push back on the customer or convince the customer that there is an alternate, better solution; sometimes the Product Manager needs to accept that the developers cannot build his vision and needs to accept a modified vision. But this also means the Product Manager has to accept his role as a member of a team communicating ideas and facilitating consensus building, rather than believing, as many product managers I’ve known seem to believe, that without any training whatsoever in software development, architecture or design, they are better architects than their software architects, better developers than their software developers, and better visionaries than Steve Jobs.

There was only one Steve Jobs. And even he listened to his developers–after all, according to reports he opposed an Apple App Store.

Go build a consensus instead.

Parsing the new OpenStreetMap PBF file format.

I’ve been playing with the new .PBF file format from OpenStreetMap for encoding their files, and thus far I’m fairly impressed. The new file format is documented here, and uses Google Protocol Buffers as the binary representation of the objects within the file. The overall file is essentially a sequence of objects written to a single data stream, with each element of the stream encoded using the Google Protocol Buffer wire format.

Here’s what I had to do to get a basic Java program up and running.

(1) Download the Google Protocol Buffers library and decompress.

(2) You will now need to build the Google Protocol Buffer compiler, in order to compile the .proto files for the OSM file format. To do this, cd into the directory where the protocol buffers were unpacked, and compile:

./configure
make
make install

Note that this will install Google’s libraries into your /usr/local directory. If you don’t want that, do what I did:

mkdir /Users/woody/protobuf
./configure --prefix=/Users/woody/protobuf
make
make install

(Full disclosure: I’m using Mac OS X Lion.)

(3) Download the protocol buffer definitions for OSM.

(4) Compile them.

(Full disclosure: I downloaded the above files into ~/protobuf, created in step 2 above.) When I did this, compiling the files looked like this:

bin/protoc --java_out=. fileformat.proto
bin/protoc --java_out=. osmformat.proto

(5) Compile the descriptor.proto file, stored in the protobuf-2.4.1 directory downloaded in step 1 at src/google/protobuf/descriptor.proto.

(Full disclosure: I copied this file from its location in the protobuf source kit into ~/protobuf, created in step 2.) I then compiled it with:

bin/protoc --java_out=. descriptor.proto

(6) Create a new Eclipse project. Into that project’s source tree copy the following:

(a) protobuf-2.4.1/java/src/main/java/*
(b) The product files created in steps (4) and (5) (~/protobuf/crosby…, ~/protobuf/com…)

(7) Test application

Now it turns out, from the description of the OpenStreetMap PBF file format, that the file is encoded as a 4-byte length giving the length of the BlobHeader record, the BlobHeader record itself (which contains the length of the contents that follow), and a Blob which contains a stream that decodes into a PrimitiveBlock. The map data is contained in the PrimitiveBlock, and there are multiple PrimitiveBlocks in a single file. So the file sort of looks like a sequence of:

Length (4 bytes)
BlobHeader (encoded using Protocol Buffers)
Blob (encoded using Protocol Buffers)

And the Blob object contains a block of data which is either compressed as a zlib-deflated stream, which can be inflated using the Java InflaterInputStream class, or stored as raw data.

And there are N of these things.

Given this, here is some sample code which I used to successfully deserialize the data from the stored file us-pacific.osm.pbf:

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.zip.InflaterInputStream;

import crosby.binary.Fileformat.Blob;
import crosby.binary.Fileformat.BlobHeader;
import crosby.binary.Osmformat.HeaderBlock;
import crosby.binary.Osmformat.PrimitiveBlock;

public class Main
{

	/**
	 * @param args
	 */
	public static void main(String[] args)
	{
		try {
			FileInputStream fis = new FileInputStream("us-pacific.osm.pbf");
			DataInputStream dis = new DataInputStream(fis);
			
			for (;;) {
				if (dis.available() <= 0) break;
				
				int len = dis.readInt();
				byte[] blobHeader = new byte[len];
				dis.readFully(blobHeader);	// read() may return fewer bytes than requested
				BlobHeader h = BlobHeader.parseFrom(blobHeader);
				byte[] blob = new byte[h.getDatasize()];
				dis.readFully(blob);
				Blob b = Blob.parseFrom(blob);

				InputStream blobData;
				if (b.hasZlibData()) {
					blobData = new InflaterInputStream(b.getZlibData().newInput());
				} else {
					blobData = b.getRaw().newInput();
				}
				System.out.println("> " + h.getType());
				if (h.getType().equals("OSMHeader")) {
					HeaderBlock hb = HeaderBlock.parseFrom(blobData);
					System.out.println("hb: " + hb.getSource());
				} else if (h.getType().equals("OSMData")) {
					PrimitiveBlock pb = PrimitiveBlock.parseFrom(blobData);
					System.out.println("pb: " + pb.getGranularity());
				}
			}
			
			fis.close();
		}
		catch (Exception ex) {
			ex.printStackTrace();
		}
	}
}

Note that we successfully parse the OSMHeader block and the PrimitiveBlock objects. (Each OSM file contains a header block and N self-contained primitive blocks.)

I’m still sorting out how to handle the contents of a PrimitiveBlock; my goal is to eventually dump this data into my own database, with my own schema, for further processing. But for now this gets one in the door to reading .pbf files.
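That said, the arithmetic for the dense nodes inside a PrimitiveBlock is straightforward. Per the PBF spec, the id/lat/lon arrays in DenseNodes are delta-coded, and a raw coordinate converts to degrees as 1e-9 * (offset + granularity * value). Here’s a self-contained sketch of that decoding; the arrays below are hypothetical sample values standing in for what DenseNodes.getIdList(), getLatList() and getLonList() would return from a real file:

```java
public class DenseNodeDecode
{
	// Convert an accumulated (delta-decoded) raw coordinate to degrees,
	// per the PBF spec: degrees = 1e-9 * (offset + granularity * raw).
	static double toDegrees(long offset, long granularity, long raw)
	{
		return .000000001 * (offset + granularity * raw);
	}

	public static void main(String[] args)
	{
		// Defaults from the PrimitiveBlock definition; a real block may
		// override granularity, lat_offset and lon_offset.
		long granularity = 100;
		long latOffset = 0;
		long lonOffset = 0;

		// Hypothetical delta-coded arrays, standing in for the contents
		// of DenseNodes.getIdList(), getLatList(), getLonList().
		long[] dId  = { 1000, 1, 1 };
		long[] dLat = { 378000000L, 1000, -2000 };
		long[] dLon = { -1224000000L, 500, 500 };

		long id = 0, lat = 0, lon = 0;
		for (int i = 0; i < dId.length; i++) {
			// Each stored value is a delta against the previous node.
			id  += dId[i];
			lat += dLat[i];
			lon += dLon[i];
			System.out.println(id + ": "
				+ toDegrees(latOffset, granularity, lat) + ", "
				+ toDegrees(lonOffset, granularity, lon));
		}
	}
}
```

With the defaults above, the first node lands at 37.8, -122.4, which is the sanity check I’d use before pointing this at real data.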

I hope this helps someone out there…

As an aside, I know there are more efficient ways to parse the file. This is just something to get off the ground with, with the proviso that the code is short, simple, and hopefully rather clear.