About William Woody

I'm a software developer who has been writing code for over 30 years in everything from mobile to embedded to client/server. Now I tinker with stuff and occasionally help out someone with their startup.

Don’t reuse the same buffer to pass parameters inside a loop.

So here’s a mistake I made with the Metal API.

Suppose you have a loop where you’re constructing multiple encoders, one encoder per loop.

And you need to pass a parameter–say, an integer–into each encoder.

So you write the following:

id<MTLBuffer> buffer = [self.device newBufferWithLength:sizeof(uint16_t) options:MTLResourceOptionCPUCacheModeDefault];

// commandBuffer is the MTLCommandBuffer for this pass, created elsewhere
for (uint16_t i = 0; i < 5; ++i) {
    id<MTLComputeCommandEncoder> compute = [commandBuffer computeCommandEncoder];
    ... blah blah blah ...
    memmove(buffer.contents, &i, sizeof(i));
    [compute setBuffer:buffer offset:0 atIndex:MyKernelIndex];
    ... blah blah blah ...
    [compute dispatchThread...];
    [compute endEncoding];
}

If you run this, all five invocations of the kernel will see the two-byte value at MyKernelIndex set to 4–the last value i takes as we loop.


Because the same buffer is reused across all five invocations, and because the Metal code isn’t executed until after the entire command buffer is committed, the last value written to the buffer is the value used across all five invocations.

But if this is replaced with:

for (uint16_t i = 0; i < 5; ++i) {
    id<MTLComputeCommandEncoder> compute = [commandBuffer computeCommandEncoder];
    ... blah blah blah ...
    [compute setBytes:&i length:sizeof(i) atIndex:MyKernelIndex];
    ... blah blah blah ...
    [compute dispatchThread...];
    [compute endEncoding];
}

Each invocation gets a unique value for i, because setBytes:length:atIndex: copies the bytes into the command stream at encoding time.
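The same pitfall can be sketched outside of Metal. In the purely illustrative Python below, each “encoded” command records a reference to a single shared buffer, and nothing runs until the whole queue is “committed”–so every deferred command sees the last value written. Snapshotting the bytes at encoding time (the setBytes: approach) restores per-invocation values.

```python
# Illustrative sketch of the deferred-execution pitfall (not Metal code).
from typing import Callable, List

shared = bytearray(2)          # the one reused parameter buffer
queue: List[Callable[[], int]] = []           # commands that read the shared buffer
queue_by_value: List[Callable[[], int]] = []  # commands that copied the bytes

for i in range(5):
    shared[:] = i.to_bytes(2, "little")       # like memmove(buffer.contents, &i, ...)
    queue.append(lambda: int.from_bytes(shared, "little"))  # records the buffer itself
    snapshot = bytes(shared)                  # like setBytes: copy the bytes now
    queue_by_value.append(lambda s=snapshot: int.from_bytes(s, "little"))

# "Commit" the queue: only now do the commands execute.
print([fn() for fn in queue])           # [4, 4, 4, 4, 4] -- all see the last write
print([fn() for fn in queue_by_value])  # [0, 1, 2, 3, 4] -- each kept its own copy
```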

Just something to watch out for in a deferred execution model.

Final Metal Introduction Document

The best way to learn something is to try to explain it to someone else. So I wrote a document as a PDF file and a collection of examples using the Metal API.

Here’s the final document. Hopefully people will find it of use.

Sample code can be found on GitHub.

And of course, like all examples, this one starts with… a blank screen.


And ends with a slightly more complex demonstration:


Updated “Metal Introduction” Document.

I’ve added a section showing the process of implementing Constructive Solid Geometry on the fly in Metal, using the algorithms outlined in the paper An improved z-buffer CSG rendering algorithm, with example code uploaded to GitHub.

My goal is to eventually turn this into a CSG library for Metal.

The updated paper (with the additional section) can be downloaded from here: Metal: An Introduction. It supersedes the version from my prior post.

And when you put it all together you should get:


Learning the Metal API

So I’m in the process of learning the Metal API, which is Apple’s replacement for OpenGL ES. The principles are fairly similar, though the Metal API is much lower level.

There are several web sites devoted to Metal, but my eventual goal is to implement image-based CSG (Constructive Solid Geometry) in Metal for an update to the Kythera application.

And that requires a deeper understanding of Metal than most introductions provide; they seem to stop at drawing an object on the screen and perhaps adding a texture map to the object.

The best way to learn something is to try to explain it–so I’ve started writing a document showing how to build a Macintosh-based Metal API application in Objective C, and going from a blank screen to a deferred shading example.

In this case, the deferred shading example results in a rotating teapot with fairy lights and indirect illumination rendering at 60 frames/second:

Teapot Rendering Example

The sample code is uploaded at GitHub, with the different examples in their own branches.

And the first draft of the Metal Introduction Document (as a PDF with links to relevant documents) can be downloaded from here.

Feedback is appreciated.

To me this is a sign of just how hairy-complicated the HTML specification has become.

Report: Microsoft is scrapping Edge, switching to just another Chrome clone

Windows Central reports that Microsoft is planning to replace its Edge browser, which uses Microsoft’s own EdgeHTML rendering engine and Chakra JavaScript engine, with a new browser built on Chromium, the open source counterpart to Google’s Chrome. The new browser has the codename Anaheim.

To me, the fact that a nearly $1 trillion company has decided to no longer develop its own web browser engine is a sign that the HTML specification has become too complex to properly implement.

But rather than review the HTML specification (and perhaps provide more detailed implementation hints for creating your own HTML browser), we move to a world where the de facto specification is not the de jure specification, but whatever is implemented in Chrome–which is itself based on WebKit, derived in turn from the KDE HTML layout engine.

I’m always concerned when specifications become too complex for implementation by mere mortals.

It also means certain aspects of the HTML specification–such as the elements used by ePUB (like paged layout)–are either highly dependent on undocumented API hooks inside a massive and hard-to-understand third-party library, or simply impractical to implement.

And it suggests to me that any technology that decides to rely on the HTML specification for something like page layout automatically limits the implementation of that technology. For example, it makes creating an ePUB reader that isn’t essentially a full Linux installation with a web browser launched at startup time nearly impossible–and that means there will be a lot of really crappy and horrendously insecure ePUB readers out there.

Reusability and super-natural knowledge.

One of the promises of object-oriented programming is reusability: if you build your software the right way, it should be easy to take large elements of your application and drop them into another application, and, so long as the interfaces are honored, the code should run unchanged in that other application.

There are a number of design paradigms which help support this reusability.

For example, interfaces allow the abstraction of the API–the application programming interface–used by a class. By allowing the API to a class or set of classes to be abstracted, we can divorce the implementation from the functionality–from the promise of the API contract. So long as the calls behave the same way it doesn’t matter how the object is implemented–or how the caller calls the object.

This requires the “separation of concerns”: we separate out each component of the software into well defined discrete components which can then be plugged together without regard to what’s going on “under the hood.”

And part of this “separation of concerns” requires, to some extent, that each object take responsibility for its own behavior, rather than having other components in the software rely on knowledge of the specific objects they are dealing with. In other words, if you have a choice between having a container know what is inside it, or having a container call well-defined interfaces to determine the behavior of its contents–pick the latter.

Here’s a concrete example of this.

Suppose you’re writing a UIView which allows tap events inside, and that UIView will live inside a scroll view. Because on the iPhone the scroll view has no idea if a tap event is the user trying to scroll the contents, or trying to manipulate the contents inside a view, the scroll view class needs a mechanism to determine if the intent of a finger tap is scrolling or something else.

So how does the UIScrollView class determine this?

Because a scroll view has no scroll bars, it must know whether a touch signals an intent to scroll versus an intent to track a subview in the content. To make this determination, it temporarily intercepts a touch-down event by starting a timer and, before the timer fires, seeing if the touching finger makes any movement. If the timer fires without a significant change in position, the scroll view sends tracking events to the touched subview of the content view. If the user then drags their finger far enough before the timer elapses, the scroll view cancels any tracking in the subview and performs the scrolling itself. Subclasses can override the touchesShouldBegin:withEvent:inContentView:, pagingEnabled, and touchesShouldCancelInContentView: methods (which are called by the scroll view) to affect how the scroll view handles scrolling gestures.

In other words, Apple took door number 1: they require the UIScrollView class to have “supernatural knowledge” of the views inside of it in order to determine the behavior of the scroll view.

And notice that the default implementation of these methods relies on the type of the class inside the scroll view:

Return Value:

YES to cancel further touch messages to view, NO to have view continue to receive those messages. The default returned value is YES if view is not a UIControl object; otherwise, it returns NO.

In other words, somewhere deep in the code inside the default implementation of UIScrollView someone wrote something like this:

- (BOOL)touchesShouldCancelInContentView:(UIView *)view
{
    return ![view isKindOfClass:[UIControl class]];
}

This is bad.

Anytime you find yourself writing code like this–anytime you find yourself figuring out the type of a class in order to alter behavior of your code–consider the possibility that you are doing something wrong.

That’s because you’re creating “supernatural knowledge”: you’re making an object which relies on the type hierarchy of other objects in your system.

Consider instead that perhaps the knowledge of the object’s behavior should not come from the type of the class, but from an interface method inside the class object being tested. In this case, consider changing your code to look like:

- (BOOL)touchesShouldCancelInContentView:(UIView *)view
{
    if ([view respondsToSelector:@selector(touchShouldCancel)]) {
        return [(id)view touchShouldCancel];
    } else {
        return NO;
    }
}
By doing this you won’t force developers to override a class unrelated to the one they’re working on in order to get the functionality they desire.
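The same design choice can be sketched in Python–the class and method names here are hypothetical, not part of UIKit. Instead of testing the class of its content view (door number 1), the container asks the view itself through a well-defined hook, falling back to a safe default when the view has no opinion.

```python
# Illustrative sketch: a container that asks its contents rather than
# inspecting their types. All names here are hypothetical.
class ScrollView:
    def touches_should_cancel(self, view) -> bool:
        # Defer to the view if it implements the hook (like respondsToSelector:).
        if hasattr(view, "touch_should_cancel"):
            return view.touch_should_cancel()
        return False   # safe default: keep sending tracking events to the view

class PlainView:
    pass               # no opinion; gets the container's default behavior

class DraggableView:
    def touch_should_cancel(self) -> bool:
        return False   # this view handles its own dragging; don't cancel it

class StaticLabel:
    def touch_should_cancel(self) -> bool:
        return True    # scrolling should win over this inert view

scroller = ScrollView()
print(scroller.touches_should_cancel(PlainView()))      # False
print(scroller.touches_should_cancel(DraggableView()))  # False
print(scroller.touches_should_cancel(StaticLabel()))    # True
```

Note that adding a new view class now requires no change to the container at all: the new class simply declares its own behavior.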

Something I noticed about the new Apple Mac Mini

I’ve always had a soft spot for the Mac Mini. I’ve always liked the diminutive form factor, and in fact, if it were up to me, the updated Mac Pro would basically be a slightly larger Mac Mini with space for a couple of internal drives and perhaps a way for end-users to change processors, memory and video cards.

But here’s an interesting thing about the Mac Mini.

The base price of a 4-core Mac Mini is around $800. Throw in about $200 for a good monitor and keyboard, and for $1k you have a very nice little machine: a 3.6GHz quad-core processor with 8GB of memory and 128GB of SSD storage.

But unlike previous models of the Mac Mini, there are a number of ways you can upgrade the processor, memory and storage. Toss in a 3.2GHz 6-core i7 processor, 64GB of memory, and a 2TB SSD with an upgraded Ethernet connector, and now you’re rounding the corner of $4,200.

I understand why Apple doesn’t want to report numbers anymore: the number of Mac Minis Apple sells doesn’t really tell the story of how much money it’s making.

Further, the numbers tell us where Apple may be going when they finally upgrade the Mac Pro.

My guess: Apple will have an upgrade path for a Mac Pro that nearly crosses the $10k line, but will be quite a powerful little box.

I think the iPad Pro is its own category of laptop computer.

It has been a common accusation since time immemorial that Apple computers are just too expensive. Even when the original iPad came out much cheaper than the market predicted, it wasn’t long before people concluded the iPad was too expensive.

With the introduction of the new iPads and iPhones, we’re seeing the press double down on this assertion–one which is only confirmed (in the minds of some) by Apple’s decision to no longer give unit sales numbers. Apple has been accused of squeezing out customers, of losing profitability, of trying to lock in customers and engaging in “rent seeking” (by people who don’t seem to understand what that term means).

And Exhibit One of this assertion is the new iPad Pro:

Apple’s higher iPhone, iPad Pro prices are the new normal

Here’s the thing, though. A couple of years ago developers started to wonder if Apple would make the iPad UI framework available for writing applications on MacOS X.

(For those who don’t know: Apple’s operating systems rely on three core libraries (or frameworks) on which applications are built. The “Foundation” framework provides basic support for things like math and dynamic array storage: the underlying plumbing that any application would rely upon to work. On top of this, Apple has built the “Cocoa” framework, the framework used by MacOS applications to do things like create windows, handle menus, draw to the Mac display, etc. And on iOS, Apple has the “UI” framework, which provides similar functionality to the “Cocoa” framework, but for the iPhone, iPad and Apple TV devices.)

Now despite providing similar functions, Apple’s Cocoa and UI frameworks are very different. The objects you use are named differently: “NSView” versus “UIView”, for example. The ways they behave are subtly different. And the types of UI objects are very different: the UI framework relies on single-screen viewports which contain the familiar iOS navigation widgets (such as the back button in the upper-left of the screen), while the MacOS framework deals with multi-window applications.

Now elements of Cocoa have been bleeding into the UI framework (such as undo), and the UI framework has been bleeding into Cocoa (such as NSViewController, which manages a group of views).

So the question arises: will they ever be consolidated? Will Apple provide One True Way to write apps?

The argument against this One True Way, however, is the fundamental difference between the sorts of things you may want to do on the Macintosh (multiple-window applications that require a mouse pointer and which show everything you need in a small number of windows) and on the iPhone (single-window applications that use touch for navigation, and which may rely on lots of small windows and fast flipping between screens to navigate).

All of this makes me think that ultimately the iPad Pro is being positioned as a completely new line of portable computer, one that uses the (carefully thought through) touch-driven user interface ideas first championed on the iPhone.

Yes, a fully equipped 12 inch iPad Pro, once you throw in the smart keyboard and the Apple Pencil comes to $2,230, not including sales tax.

But look at what you get with that 12 inch iPad Pro. You get performance better than that of the 13″ MacBook Pro, which retails for around $2,600. You get 1TB of storage–the same as the $2,600 MacBook Pro. And you get a computer that is thinner, runs without a fan, and is more portable than the 13″ MacBook Pro. In fact, you get a computer faster and more powerful than the 15″ MacBook Pro, which retails for around $3,400.

I really think Apple is slowly evolving the iPad Pro as a new line of computers–and I wouldn’t be surprised if we see new APIs in iOS only available to the iPad Pro. (Already we see Apple Pencil APIs only available to the iPad Pro.)

In fact, it would not surprise me–especially given the iPad Pro’s introduction of a USB-C port, a port previously only available on Mac systems–if we start seeing development tools appear on the iPad Pro. I would not be surprised if Apple has Xcode for the iPad waiting in the wings, for example. Already we have the iOS “Files” application, which allows you to browse and manage files on your iPad.

In fact, the only thing that appears to be missing from the iOS API is access to the Unix ‘fork’ system call. (Remember: iOS is based on Unix, and in fact, provides access to most familiar Unix API entry points including pthreads.)

Provide that (even in a very limited, controlled way), and you have the potential of an operating system rivaling that of any other Unix-based or Linux-based environment.
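For readers unfamiliar with it, fork is the Unix call that duplicates the calling process–the primitive on which shells and build tools launch other programs. A minimal sketch in Python, which exposes the call as os.fork on Unix systems (this is exactly the primitive iOS does not expose):

```python
# Illustrative sketch of fork on a Unix system (Python's os.fork wrapper).
import os

pid = os.fork()                     # duplicate this process
if pid == 0:
    # Child process: a shell or dev tool would typically exec() a program here.
    os._exit(42)
else:
    # Parent process: wait for the child and collect its exit status.
    _, status = os.waitpid(pid, 0)
    code = os.waitstatus_to_exitcode(status)
    print("child exited with", code)   # prints: child exited with 42
```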

The bottom line, though, is that if you look at the iPad Pro as somehow just another iPad–and start looking at pricing considerations by looking at the top of the line–you’ve missed the story.

And if your primary use of the iPad is browsing the web, reading books, and watching movies–you may want to think about the 9.7″ iPad (which tops out at $560 fully loaded) or the iPad Mini (which tops out at $529 fully loaded).

My guess is that a new 10″ iPad and a new iPad Mini will be rolled out at some point with a slightly bumped processor and more storage. Though it wouldn’t surprise me if Apple phases out the iPad entirely. (To me that would be a bit disappointing, since there is room for a lower-cost iPad with less power for those who just want to read books or surf the web.)