I didn’t know >>> existed as a Java operator to do an unsigned right shift.
It’s not like I haven’t been writing code for Java for nearly a decade…
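For my own future reference, here's a minimal sketch of the difference between the two right-shift operators (the class name is mine, purely for illustration):

```java
public class ShiftDemo {
    public static void main(String[] args) {
        int x = -8;  // 0xFFFFFFF8 in two's complement

        // >> is the arithmetic (signed) shift: the sign bit is copied in
        // from the left, so negative numbers stay negative.
        System.out.println(x >> 1);   // prints -4

        // >>> is the logical (unsigned) shift: zero bits are shifted in
        // from the left, so the result becomes a large positive number.
        System.out.println(x >>> 1);  // prints 2147483644
    }
}
```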
An additional thought on my Security Rules Of Thumb:
The DMCA’s clause making it illegal to create or investigate anti-circumvention devices exists because stupid people were building their own encryption algorithms without having a clue how to do it, and after the inevitable occurred (I mean, really: ZIP compression is more secure than CSS for DVD movies ever was–and ZIP compression isn’t even an encryption algorithm), they got their lawyers to pass a law to cover their stupidity.
If I were King for a day I would (a) repeal the DMCA’s anti-circumvention clause, and (b) force any idiot complaining about how it allows “hackers” to “break their security” (*cough* *cough*) to attend a week-long seminar on encryption algorithms taught by folks from the NSA who have a friggin’ clue.
At their own expense, of course.
So I landed in San Francisco, and took the BART to Market Street to go to my hotel on 3rd street. And I’m fully prepared, too: iPhone 3G, Garmin Legend GPS, laptop computer, Kindle, and a large bag full of other, more mundane things. (Pants, shirts, shampoo, toothpaste.) Bag is large and unwieldy, laptop bag is digging a trench into my shoulder, but I’m there at Market Street.
I pick what I think is the nearest BART exit to my hotel (a selection which was made by an uneducated guess), I go up the stairs, and I’m now on Market Street.
With the location of my hotel programmed into my cell phone and into my Garmin, I turn both on in order to get my current location and figure out which way I need to go.
The Garmin does its little satellite animation status thingy. I watch as it captures and syncs with one satellite, two, three,… finally, it knows where I am: near Pier 29.
Huh?
I’m standing here with a huge pile of stuff in a large bag–a cleaner version of the random homeless who have taken an interest in extracting my spare change from me–I’m tired, there are only 15 minutes left before early registration closes (because my flight was an hour late), and my Garmin thinks I’m two miles from where I know I am: somewhere on Market Street?!?
So on comes the iPhone. Same result.
Damn!

Here’s the problem: GPS works by triangulating your position from multiple satellites in orbit around the earth. Each GPS satellite is an ultra-precise atomic clock and a transmitter: the transmission signal carries the orbital elements of the satellite and the precise time to the nanosecond. (The time transmitted, by the way, is adjusted to the relativistic frame of the Earth’s surface. It turns out that the combined effects of gravity and the motion of the satellites mean that clocks in GPS orbit run about 38 microseconds per day faster than clocks on the surface of the earth. Which means the clocks on the GPS satellites have to be tuned to run slower, so that the transmitted time matches time on the surface of the Earth. Yes, that blows my mind too.)
So your receiver gets the ultra-precise time to the nanosecond from the satellite and the ultra-precise position of the satellite to within a foot, repeats this with at least three more satellites, and–using the fact that light travels roughly one foot per nanosecond–does a little bit of geometry and figures out where you are.
But if you are in the canyon of buildings that is Market Street, you run into signal reflections. Signal reflections create noise on the signal, so you don’t necessarily get the ultra-precise time, and they add distance to the signal’s travel path, which screws up your location. Thus in the canyon of Market Street GPS is useless: the GPS receiver does the math and thinks you’re around Pier 29, give or take fifty feet, when in reality you’re two miles away.
The error circle the GPS reports doesn’t help, either: that error is calculated from the signal precision assuming you have a clear sky. The calculation cannot take building reflections into account, because the receiver has no way of knowing whether the signal bounced around before being received.
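To make the arithmetic concrete, here's a minimal sketch (the travel times and method names are mine, purely for illustration) of how a timing error from a reflected signal turns into a distance error:

```java
public class MultipathDemo {
    // Light travels about 0.2998 meters (roughly one foot) per nanosecond.
    static final double METERS_PER_NS = 0.299792458;

    // A pseudorange is the apparent receiver-to-satellite distance,
    // computed from the signal's travel time.
    static double pseudorangeMeters(double travelTimeNs) {
        return travelTimeNs * METERS_PER_NS;
    }

    public static void main(String[] args) {
        double direct = pseudorangeMeters(70_000_000);           // ~70 ms direct path
        double bounced = pseudorangeMeters(70_000_000 + 2_000);  // plus 2 µs of reflections

        // Two microseconds of bouncing off buildings looks to the receiver
        // like ~600 meters of extra distance to that satellite.
        System.out.printf("multipath error: %.0f meters%n", bounced - direct);
    }
}
```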

The upshot of this is that I had two GPS receivers in complete agreement, which were utterly useless, and worse: left me lost.
And resorting to the crudest of navigational tools: reading street signs and walking two blocks in order to get my bearings.
A generation from now our children will forget about street signs, and our streets will become like London’s: there, street signs were put on the sides of buildings rather than made into stand-alone sign posts. Over the years, renovations caused building owners to tear down the signs but never replace them–which means in many places in London there are no street signs at all.
If everyone in the United States has GPS receivers–they’re very cheap and are starting to be included in every phone–then when will city governments decide to save a little money and stop replacing street signs? Will we get to a day and age where the only way to figure out where you are is to pull out the electronic gadget from your pocket and ask it?
And what happens if you’re like me, lost on Market Street, and there are no street signs? Do you just find a warm grate along the sidewalk while your hotel room goes unfilled?
More importantly, what does that mean for the location-based marketing software and location-based navigation tools of tomorrow, not to mention the new iPhone “Find my phone” service, when the phone lies about where it’s located?
I hate doing it.
But I had to.
We got a whole bunch of resumés and I have one job. So I filtered out half–sorry! And I plan to spend a few minutes with the other half to weed it down to two or three people to bring in.
However, here’s a hint for anyone writing a resumé. If you don’t want to find yourself on the “immediate discard” pile, don’t write that you’ve worked with “Android OS v3.0”. I didn’t bother to read past that one line.
I can’t spell worth a damn. My English skills aren’t the best in the world. So I don’t care about typos. But something like this–I don’t think so.
I ran into a weird crashing bug on our application: when we closed an Activity containing a WebView display, on occasion we’d get a SIGSEGV crash. This reliably happened about one in a half-dozen times.
The fix appears to be to call WebView’s destroy() method on the view inside the Activity during the Activity’s onDestroy:
@Override
protected void onDestroy()
{
    fWebView.destroy();     // tear down the WebView before the Activity goes away
    super.onDestroy();
}
I’m not sure exactly why this is the case. But it appears to resolve my issues.
I don’t know why I keep misplacing this document, so I’m noting it here for future reference: Java System Property Reference for Mac OS X. This contains the various magic runtime system properties in Mac OS X.
The Java Development Guide for Mac OS X is also useful for how to do certain standard Mac-like things in the Java environment.
What I found was that if you test the string returned by System.getProperty(“os.name”).toLowerCase() to see whether it contains “mac os”, then sprinkle in various minor changes to align the UI with the recommendations in the Java Development Guide above, you can build an application that looks Mac-like on the Mac platform with little effort.
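Here's a minimal sketch of that check. The two system properties are from Apple's documentation referenced above; the application name is a placeholder, and as I recall these must be set before any Swing UI is created:

```java
public class MacDetect {
    // Rough platform check: is this JVM running on Mac OS X?
    static boolean isMacOS() {
        return System.getProperty("os.name").toLowerCase().contains("mac os");
    }

    public static void main(String[] args) {
        if (isMacOS()) {
            // Move the Swing menu bar to the top of the screen, Mac-style,
            // and set the name shown in the application menu.
            System.setProperty("apple.laf.useScreenMenuBar", "true");
            System.setProperty("com.apple.mrj.application.apple.menu.about.name",
                               "MyApp");  // placeholder name
        }
        System.out.println("Running on Mac OS X: " + isMacOS());
    }
}
```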
In a typical model/view/controller implementation (setting aside, of course, the debate over what MVC really is), we implement a model (which maps to a document, file, or database), a controller (which manages the views), and a collection of views, generally contained within a window.

Now this model maps nicely into the iPhone API and the Android API: the basic controller class for the iPhone is UIViewController; for Android it is android.app.Activity. And of course the view hierarchy represents the UIViews and android.view.View objects, both built-in objects and custom objects. The controller object also makes the ideal target for iPhone delegates and data objects and for the various Android interfaces which do essentially the same thing.
On the desktop, most applications are built with a single model and a single controller. Rarely do we have multiple controllers opened onto the same model, and when that happens, more often than not the controllers are multiple instances of the same class: multiple text editors opened onto the same text file, for example.
Mobile devices are different. On a mobile device you can have a top level screen with one controller drill down to a separate screen (with a separate controller) which displays an individual item, which drills down to a third screen showing information about that individual item, which drills down into a fourth showing the setting for a single field.

In this example, we have four (increasingly fine grained) views into the same model: in the first screen we have a top level view; in the next, a view of an item, and so forth. The original method I was using in order to code for this particular model was to create multiple model-like objects and pass them down to individual view controllers:

Now this model tends to be somewhat informal: my top-level model is (for example) an ArrayList of Items. When it’s time to edit an item, we pass the Item as the sub-model object into the sub-control display. And when it’s time to edit some piece of the Item, we pull that piece out and pass that to the next sub-control display.
The whole process becomes messy for a variety of reasons. First, it creates a strict dependency in the flow of views; even if we use interfaces and protocols in order to isolate a control (and its respective views), we still have a lot of work to do if we want to rearrange views. Second, it creates the problem that we somehow need to pass information about which sub-item was updated back to the previous view controller. This becomes problematic since the events for UIViewController and Activity don’t map well to the notion of “saving” or “marking dirty”: there is no single event we can easily grab that consistently means “You’re going away and the view above you is becoming active.”
Android presents a few twists on this as well. First, when your Android Activity goes into the background it can be deleted from memory in a low-memory situation. That is, you can find yourself in the situation where your Activity goes away and has to be rebuilt from scratch. This implies that you must strictly separate the sub-model from the controller (as opposed to the iPhone, where you can informally combine the model and view controller into the same UIViewController class). Second, well-factored activities can be set up to be brought up by external applications: it is possible your Item editor activity can be brought up by a separate application if it knows how to launch you with the correct Intent.
It strikes me that the proper way to handle this is to have a single unified model, and instead of passing sub-models, we instead pass a small record indicating what part of the model we are working on:

The idea is that the Δ object represents a small record which indicates to the sub-controller which part of the model it is working on. This has the advantage that updates are communicated to each of the controllers which are working on this model as the object updates. This also works well on Android, as individual control/view components can disappear and re-register themselves; the delta record is passed in the Intent which created that component, so when the Activity is reconstructed the Intent can be used to put the view controls back into the correct state.
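As a sketch of the idea (the class and field names here are hypothetical, not from any shipping code), the delta can be a small serializable record identifying the slice of the model a sub-controller is editing:

```java
import java.io.Serializable;

// Hypothetical delta record: instead of handing a sub-controller a sub-model
// object, hand it a description of which part of the shared model to edit.
public class ModelDelta implements Serializable {
    public final int itemIndex;     // which Item in the top-level list
    public final String fieldName;  // which field of that Item; null means the whole Item

    public ModelDelta(int itemIndex, String fieldName) {
        this.itemIndex = itemIndex;
        this.fieldName = fieldName;
    }

    @Override
    public String toString() {
        return "ModelDelta(item=" + itemIndex
             + (fieldName == null ? ")" : ", field=" + fieldName + ")");
    }
}
```

On Android the record would ride along in the launching Intent (via putExtra, since it is Serializable); when an Activity is killed in a low-memory situation and rebuilt, getIntent() returns the same extras, so the controller can re-resolve its slice of the single shared model.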
I have a simple nutrition journal application I’ve been building for the iPhone which I was going to rewrite from the ground up using this model. In future posts I’ll report on whether or not this works well.
For those who don’t know, Flow Cover is my little implementation of the CoverFlow view by Apple, and emulates the left-right swipe view provided by CoverFlow with a new view that was written from scratch.
Alessandro Tagliati sent me the following modification to the FlowCoverView.m file:
In the -(void)draw routine (around line 410 of FlowCoverView.m), the code draws the tiles–but it draws all of the tiles, even the ones that are off-screen. Drawing just the visible tiles instead saves a noticeable amount of work. Replace the -(void)draw routine with the following:
- (void)draw
{
    /*
     *  Get the current aspect ratio and initialize the viewport
     */

    double aspect = ((double)backingWidth)/backingHeight;

    glViewport(0,0,backingWidth,backingHeight);
    glDisable(GL_DEPTH_TEST);               // using painters algorithm

    glClearColor(0,0,0,0);
    glVertexPointer(3,GL_FLOAT,0,GVertices);
    glEnableClientState(GL_VERTEX_ARRAY);
    glTexCoordPointer(2, GL_SHORT, 0, GTextures);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_TEXTURE_2D);

    glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);

    /*
     *  Setup for clear
     */

    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    glClear(GL_COLOR_BUFFER_BIT);

    /*
     *  Set up the basic coordinate system
     */

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glScalef(1,aspect,1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    /*
     *  Change from Alessandro Tagliati:
     *  We don't need to draw all the tiles, just the visible ones. We guess
     *  there are 6 tiles visible; that can be adjusted by altering the
     *  VISTILES constant
     */

    int i,len = [self numTiles];
    int mid = (int)floor(offset + 0.5);

    int iStartPos = mid - VISTILES;
    if (iStartPos < 0) {
        iStartPos = 0;
    }
    for (i = iStartPos; i < mid; ++i) {
        [self drawTile:i atOffset:i-offset];
    }

    int iEndPos = mid + VISTILES;
    if (iEndPos >= len) {
        iEndPos = len-1;
    }
    for (i = iEndPos; i >= mid; --i) {
        [self drawTile:i atOffset:i-offset];
    }

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
The new lines are from the comment down to the glBindRenderbufferOES call; you can simply replace the entire routine in your code.
At the top of the FlowCoverView.m file, you’ll also need to define the constant VISTILES:
/************************************************************************/
/*                                                                      */
/*  Internal Layout Constants                                           */
/*                                                                      */
/************************************************************************/

#define TEXTURESIZE   256   // width and height of texture; power of 2, 256 max
#define MAXTILES      48    // maximum allocated 256x256 tiles in cache
#define VISTILES      6     // # tiles left and right of center tile visible on screen

/*
 *  Parameters to tweak layout and animation behaviors
 */

#define SPREADIMAGE   0.1   // spread between images (screen measured from -1 to 1)
#define FLANKSPREAD   0.4   // flank spread out; this is how much an image moves way from center
#define FRICTION      10.0  // friction
#define MAXSPEED      10.0  // throttle speed to this value
Just insert the VISTILES definition roughly where I have it in the code snippet above.
Or you can download the zip file all over again.
Original iPhone owners & Push Notifications
One of the most awaited features, push notifications, requires a constant data connection.
*Rolls eyes*
It’s a shame Mr. Bohon was handed a megaphone without having any bloody knowledge.
The Wireless Application Protocol is the cell equivalent of the TCP/IP protocol: it describes a wireless protocol from the hardware data layer up to the application environment.
At the bottom of this protocol we have the Wireless Datagram Protocol, akin to UDP on the TCP/IP protocol suite: it permits the delivery of data packets to an arbitrary mobile device without having the entire WAP protocol stack turned on and a connection opened or the processors powered on. This mechanism is used with WAP Push, which essentially defines a WDP port and the contents of the packet.
The only component on the phone that needs to be powered on for a WDP datagram to be received is the cell radio, and it’s the same mechanism used for SMS messages. While it is true you cannot receive an incoming SMS or WDP datagram while on the phone if you only have an EDGE connection, and while it’s also true you cannot receive an incoming phone call while receiving a WDP datagram, the duration of a WDP datagram is sufficiently short that it shouldn’t cause more than a fraction of a second’s delay on that incoming phone call, and the datagram can be repeated–the same way SMS messages are repeated–until delivered.
Bottom line: push interferes with incoming calls in the same way SMS messages do: damned near not at all.
The best part of this exercise in cell phone stupidity passing itself off as expert advice comes in the form of the mea culpa at the start of the article:
We have received multiple reports from 3.0 firmware users on original iPhones who are NOT experiencing the problems described, and who do receive calls without difficulty with the push notification service turned on. Cory’s original post is left as-is below; however, we no longer believe the issue is widespread or will affect most original iPhone users. Our apologies for any undue anxiety or confusion.
They make it sound like their massive stupidity was actually a real bug in Apple’s software implementation which was later fixed–and so the problem “is no longer widespread”–as opposed to “speculative bullshit pulled out of our ass.”
I swear to God I’m sick to death of morons who think a cell phone is the same thing as a network computer–and when things start coming down the pike that don’t quite map onto the desktop computer realm, they start making up nonsense out of whole cloth.
(1) The number of laptops here is insane. It’s now lunch time on Friday, and I’d say 90+% of the people (including me) are quietly tapping away at the little keyboards and staring at the little screens.
(2) So few people here have product T-shirts. I realized as I glanced over and saw someone wearing a Parallels polo that product or project shirts are conspicuously lacking.
(3) The male to female ratio is even worse than my years at Caltech.
(4) It’s quite amazing, feeding 5,000 fussy geeks in a matter of minutes. The logistics are interesting: they even put the tables you walk by at a 45° angle to the incoming line, in order to reduce knots, since people can change course but cannot corner at full walking speed.
(5) Oh, yeah; there were all these amazing classes–but then if I told you about them, I’d have to shoot you.