The Shape of Everything
A website mostly about Mac stuff, written by Gus Mueller
December 16, 2017

Part I, Part II.

One obvious question that nobody has asked me yet, but whose answer I will go on and on about anyway (especially if you were unlucky enough to be sitting next to me last Thursday at Cyclops for our bi-monthly dev meetup; come join us!), is this: why now? Why has it taken Acorn so long to begin using IOSurfaceRefs for images?

The answer is slightly complicated, involving older codebases and moving tech and me hating OpenGL and a couple of other reasons, but it basically comes down to one thing:

I'm an idiot.

Or to put some kinder words on it, my understanding of how IOSurfaces work was incomplete.

Let's take a look at what Apple has to say. The first sentence from IOSurface's documentation is as follows:

"The IOSurface framework provides a framebuffer object suitable for sharing across process boundaries."

IOSurface is neat. A shared bitmap that can cross between programs, and it's got a relatively easy API including two super critical functions named IOSurfaceLock and IOSurfaceUnlock. I mean, if you're sharing the data across process boundaries then you'll need to lock things so that the two apps don't step on each other's toes. But of course if you're not sharing it across processes, then you can ignore those locks, right? Right?

Of course not, as I eventually found out.

The thing was, I was already mixing IOSurfaceRefs and CGBitmapContexts successfully in Acorn without any major hiccups. I could make an IOSurface, grab its base address (which is where the pixels are stored), point a CGBitmapContext at it, and go on my merry way. I could draw to it, clear it, and make CGImageRefs which would then turn into CIImages for compositing, and everything was awesome.
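
In code, that setup looks something like this (a minimal sketch; the 512 by 512 BGRA surface is just an example, not what Acorn actually allocates):

    #import <Foundation/Foundation.h>
    #import <CoreGraphics/CoreGraphics.h>
    #import <CoreVideo/CoreVideo.h>
    #import <IOSurface/IOSurface.h>

    // A hypothetical 512x512 BGRA surface, 4 bytes per pixel.
    IOSurfaceRef surface = IOSurfaceCreate((__bridge CFDictionaryRef)@{
        (id)kIOSurfaceWidth:           @512,
        (id)kIOSurfaceHeight:          @512,
        (id)kIOSurfaceBytesPerElement: @4,
        (id)kIOSurfacePixelFormat:     @(kCVPixelFormatType_32BGRA),
    });

    // Point a CGBitmapContext at the surface's pixels and go on your merry way.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
    CGContextRef ctx = CGBitmapContextCreate(IOSurfaceGetBaseAddress(surface),
        IOSurfaceGetWidth(surface), IOSurfaceGetHeight(surface), 8,
        IOSurfaceGetBytesPerRow(surface), colorSpace,
        kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);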

What I couldn't do, though, was make a CIImage directly from that IOSurface. Every time I tried, I'd end up with an image that was either 100% blue or 100% red. I had convinced myself that these were some sort of mysterious debugging messages, and that I just hadn't come across the correct documentation telling me what they meant. So once or twice a year I would mess with it, get nowhere, and go back to the way that worked.

Well, a couple of weeks ago I was trying again, and I got more frustrated than usual. I searched Google and GitHub for IOSurface and CGBitmapContext (in anger!), but I couldn't find anything relevant to what I wanted to do. More anger. This should work! Then I thought… what if I search my own computer using Spotlight? Maybe it'll turn something up…

And then a single file came back, named IOSurface2D.mm, which was some obscure sample code from Apple that I had received at one point a number of years ago.

I opened it, I looked, and I was happy and angry and relieved and sooo very mad at myself.

Yes, you can use a CGBitmapContext with an IOSurface without locking it. But eventually some other framework is going to grab that same IOSurface for drawing, it's going to lock it, and then some crazy black magic is going to swoop in and completely ruin your image. Even if you aren't using the surface across processes. So you'd better make sure to lock it, even if you're not actively drawing to it, or else things are going to go south.

And that's what I did. All I needed to do was call IOSurfaceLock and IOSurfaceUnlock around anything I did with the surface, and everything was smooth and happy. And I quickly found that if I turned off beam-synced updates in OpenGL, I could peg Quartz Debug's FrameMeter to over 90fps.
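
Concretely, the pattern boils down to something like this (a sketch, reusing the surface and context from the earlier example):

    // Bracket any access to the surface's memory with lock/unlock, even
    // when nothing else in your process is touching it.
    IOSurfaceLock(surface, 0, NULL);
    CGContextClearRect(ctx, CGRectMake(0, 0, 512, 512));
    // … brushing, compositing, whatever …
    IOSurfaceUnlock(surface, 0, NULL);

    // And now this hands back a real image instead of a solid red or blue one:
    CIImage *image = [CIImage imageWithIOSurface:surface];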

That was nice. And it was about time.

I've updated my FMMicroPaintPlus sample code to use this new technique, which you can find in FMIOSurfaceAccumulator.m.

Since that discovery I've moved Acorn off OpenGL and onto Metal 2, as well as onto the newer Core Image APIs introduced in 10.13 (on previous OS releases, it'll use the old way of drawing).

And now for a completely uninformed discussion about IOSurface

What is this black magic? Why does locking an IOSurface before wrapping a CGContext around it matter? Where, exactly, does the memory for the IOSurface live? Is it on the GPU or is it in main memory? Or is it both?

I can take a guess, and I'm probably wrong, but it's the only thing I've got right now. I think that IOSurface is mirrored across the GPU and main memory, and after you've unlocked it, something in the background shuttles the data, or subregions of it, to or from the GPU. You can address the memory as if it's local, and everything just works.

If this is true, then I think that's amazing. It would mean Apple has made a wonderful piece of tech that transparently moves bits around to wherever they're needed, and I don't even have to think about fiddling with the GPU.

Apple just needs to add a note to the documentation that locks are needed even if you aren't sharing the surface across process boundaries.

December 14, 2017

This is continuing from my previous post on moving Acorn to Metal.

Over the past week I've made some pretty good strides on having Acorn use Metal 2 and the new Core Image APIs introduced in 10.13 High Sierra. I've reworked how the pixels get from the canvas NSView subclass to the display by introducing a little shim view which rests above the canvas and does the actual rendering. (Previously the canvas was an NSOpenGLView subclass.) So if you're running Acorn on macOS 10.12 or earlier you'll get the previous OpenGL rendering, and if you're on 10.13 or later you'll get the super fast Metal 2 + IOSurface rendering.
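
The runtime gate is roughly this shape (a sketch; the canvas methods here are hypothetical stand-ins, not Acorn's actual API):

    #import <Metal/Metal.h>

    id<MTLDevice> device = nil;
    if (@available(macOS 10.13, *)) {
        device = MTLCreateSystemDefaultDevice();
    }
    if (device) {
        // 10.13 and later: Metal 2 + IOSurface, via a shim view over the canvas.
        [canvas installMetalShimViewWithDevice:device]; // hypothetical method
    } else {
        // 10.12 and earlier (or no Metal-capable GPU): the old NSOpenGLView path.
        [canvas useOpenGLRenderer]; // hypothetical method
    }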

And wow, is it buttery smooth. Opening up a 100 megapixel image and then panning, brushing, and zooming around is super fun, and has made this work 100% worth it. I've also found a couple of other optimizations that will help folks on previous OS releases, so there are wins to go all around.

Of course, this is all just testing on my dev machines for the time being. Who knows what's going to happen when it gets out onto everyone's machines; GPUs are notorious for being flaky. But with that in mind I've also added a preference to switch back to the OpenGL renderer, as well as a software renderer (which, because of the architecture I came up with, took only about a dozen lines of code to support).

All my regression tests now run against both renderers as well, and I found a couple of instances where drawing deviated between the CPU and the GPU, but it wasn't anything unexpected.

So what's left to do?

So far, I've not been able to get deep color to display correctly when using Metal and IOSurface. I can set the pixel format to MTLPixelFormatBGR10A2Unorm, and the first few frames render correctly, but then things quickly go south from there with the introduction of funky fluorescent colors and black boxes showing up. I've got more digging to do, but I think it might actually be an issue with IOSurface and not Metal. That's just a hunch at this point though.

The other show-stopping bug I'm running into is that drawing through MTKView isn't happening as frequently as I'd like when I'm taking up a lot of CPU power. I have a feeling this has to do with my ignorance of Core Animation and some sort of transaction system I'm not familiar with.

This problem mainly shows up when using the magic wand tool. With the magic wand you can click on the canvas and drag out to change the tolerance for the pixels you'd like selected. For each mouse event I get, I recalculate the tolerance, perform the operation on the pixels, and then tell the view to update the changed region.

If I've got the renderer set up to use OpenGL or software rendering, then I get a 1:1 mapping between the tool calling setNeedsDisplayInRect: to update a region and drawRect: being called to actually do the new drawing.

If I'm using Metal, then I can call setNeedsDisplayInRect: 20 times before drawRect: is ever actually called. And it only seems to happen when I stop moving the mouse and the load on the CPUs goes down. This makes it look like things are going slower than they actually are, or that the tool isn't even working.
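
For reference, MTKView does have a draw-on-demand mode, which is what I want here; a minimal sketch of that configuration (frame and changedRect are placeholders, and whether this fully explains the coalescing I'm seeing is another question):

    #import <MetalKit/MetalKit.h>

    MTKView *metalView = [[MTKView alloc] initWithFrame:frame
                                                 device:MTLCreateSystemDefaultDevice()];

    // Draw only when the view is dirtied, instead of on an internal timer.
    metalView.paused = YES;
    metalView.enableSetNeedsDisplay = YES;

    // The tool then invalidates just the changed region:
    [metalView setNeedsDisplayInRect:changedRect];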

I'm also seeing something which might be related to this when using the Window ▸ Zoom Window menu item. When Zoom is called and I'm using Metal, drawRect: is never called during the animation, and instead my image is squished or expanded, depending on which way the window is resizing. That's no good.

But everything else is good. Really good in fact.

December 14, 2017

Primate Labs has just acquired VoodooPad.

I originally wrote VoodooPad in 2003, and then sold it to Plausible Labs in 2013 so I could focus on Acorn. Besides rewriting the encryption, Plausible never really updated VoodooPad. This seemed a shame to me, and I felt my customers were let down by this lack of updates.

But now VoodooPad is in the hands of Primate Labs, and I'm hopeful something will happen with it. I've known John Poole (the founder of Primate Labs) for a number of years, and I trust him. His company has a number of apps, and most importantly has shown that they know how to ship updates.

I had no idea this was coming, but I'm super happy it did. I still use VoodooPad every day and I'd love to see an update.

December 8, 2017

I'm taking a little break from building out the New App to get started on Acorn 6.1. I don't do code names for releases anymore, but if I did this one would be called Acorn "Whatever Gus F'n Wants to Do" Version 6.1.

So what do I want to do in 6.1? A couple of things: no new user-facing features, updating the pixel plumbing, and an option for bringing color back to the UI. (The color work has actually already started in 6.0.4, behind a secret defaults pref: defaults write com.flyingmeat.Acorn6 colorPalette 1. It's obviously not finished if you turn it on, but you might like it better regardless.)

Why would I want to update the "pixel plumbing", and just what is this plumbing, really?

Acorn versions 4 through 6 store the memory for your layers in your computer's main memory, i.e., not on the GPU. Drawing is still done through the GPU via OpenGL, but in most cases I make sure that the pixel processing happens on the CPU.

There are a couple of really good reasons to have the pixel processing happen on the CPU. The main reason I've done this is fidelity. For many years GPUs have been more concerned with speed than accuracy. Well, I care about accuracy, so that's why Acorn runs its filters on the CPU.

Another reason is so Acorn has easy and fast access to the pixels for operations which can't be done on the GPU in a reasonable manner. Things like seed fill operations (which underlie flood fill, the magic wand, and instant alpha) need quick access to the pixels, and I haven't found a great way to run that on the GPU yet. It's not an inherently parallel operation.
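
For a sense of why, here's a toy seed fill over a single-channel bitmap (nothing like Acorn's actual implementation): a worklist-driven walk out from the starting pixel, hopping around the buffer in an order that depends entirely on the image contents.

    #include <stdint.h>
    #include <stdlib.h>

    // Toy seed fill: starting at (x, y), replace the connected region of
    // `from` values with `to`. Each step visits whichever neighbors happen
    // to match, so it wants cheap random access to the pixels and
    // parallelizes poorly.
    static void SeedFill(uint8_t *pixels, size_t width, size_t height,
                         size_t x, size_t y, uint8_t from, uint8_t to) {
        if (from == to || pixels[y * width + x] != from) { return; }
        size_t *worklist = malloc(width * height * sizeof(size_t));
        size_t count = 0;
        pixels[y * width + x] = to;
        worklist[count++] = y * width + x;
        while (count > 0) {
            size_t idx = worklist[--count];
            size_t px = idx % width, py = idx / width;
            size_t neighbors[4][2] = {
                {px + 1, py}, {px - 1, py}, {px, py + 1}, {px, py - 1}
            };
            for (int i = 0; i < 4; i++) {
                size_t nx = neighbors[i][0], ny = neighbors[i][1];
                // px - 1 wraps around at the left edge; nx < width catches it.
                if (nx < width && ny < height && pixels[ny * width + nx] == from) {
                    pixels[ny * width + nx] = to;
                    worklist[count++] = ny * width + nx;
                }
            }
        }
        free(worklist);
    }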

And the last major reason is just about the amount of memory Acorn can gobble up. I've had people create and edit terapixel images in Acorn (mega: 1 million pixels, giga: 1 billion pixels, tera: 1 trillion pixels). It might be slow, but it's possible. My fear is that if someone tries to do that on the GPU, it just won't be possible because of the limited amount of memory available there. (The obvious solution is to fall back to CPU rendering in these cases, but to be honest I haven't really explored that yet.)

This past summer at WWDC I got some good news about Core Image on the GPU: most of my concerns about fidelity are gone in macOS 10.13 High Sierra. So I decided then that Acorn 6.1 would try to switch to Metal when running on 10.13.

Then 10.13 was released, and I put out the usual maintenance releases to fix the little bugs that come with any major OS update. But something very odd happened as well: 10.13 customers were reporting slow brushing with images up to a certain size. I couldn't get the problem to reproduce on my test images, so I had folks send in the images they were having problems with, and all of a sudden I got the problems to reproduce. The problem on my end was that the images I had been testing with were too big.

What was going on? Core Image was taking my images created in main memory and copying them to (what I presume to be) IOSurfaceRefs. This is fine for smaller images, but when you get to bigger images those copies can take a while. There's an upper limit to the amount of memory Core Image is willing to copy before it says screw it, though, and my test images were over that limit. So instead of making copies to an internal CI-only buffer, it would just reference the original memory for these giant images. Brushing on big images was, ironically, faster.

While I've got some workarounds in Acorn 6.0.4 to keep the copying to a minimum, it isn't a solution I'm happy with. Instead, I should change how Acorn stores the memory for images to something which has less of an impedance mismatch with Core Image.

So not only is Acorn 6.1 switching to Metal on 10.13 High Sierra, it will also be switching to using IOSurfaceRefs to store the pixels in. These are two very big changes and I've made some great progress over the past week with this, so I'm 99.9% sure it'll ship this way.

So that's what the new pixel plumbing will look like: moving Acorn from local-memory-backed images (CGImageRef + CGBitmapContext) pushed through OpenGL, to IOSurfaceRef-backed images pushed through Metal 2.
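
On the rendering side, the new 10.13 Core Image APIs make that pipeline look roughly like this (a sketch, not Acorn's actual code; layerSurface and destinationSurface stand in for the IOSurfaceRefs backing a layer and the destination):

    #import <CoreImage/CoreImage.h>
    #import <Metal/Metal.h>

    // A CIContext backed by Metal instead of OpenGL.
    id<MTLDevice> device = MTLCreateSystemDefaultDevice();
    CIContext *context = [CIContext contextWithMTLDevice:device];

    // Layers live in IOSurfaces, so wrapping one up for compositing is cheap:
    CIImage *layerImage = [CIImage imageWithIOSurface:layerSurface];

    // And 10.13's CIRenderDestination can target an IOSurface directly:
    CIRenderDestination *dest = [[CIRenderDestination alloc]
        initWithIOSurface:(__bridge IOSurface *)destinationSurface];

    NSError *error = nil;
    [context startTaskToRender:layerImage toDestination:dest error:&error];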

It's been fun so far, and hopefully I'll have a test build later this month or early 2018 for people to play with.

November 29, 2017

Acorn 6.0.4 is pretty close to shipping, but I've put some interesting little changes into it for macOS 10.13 High Sierra, and I'd love it if folks could bang on it when they get a chance.

As always, it's available via the latest builds page.

After shipping 6.0.4, work will continue on New App™, and I'm also going to get started on Acorn 6.1, which I plan to put some more interesting changes in, mostly in the name of performance improvements. But I'll see what else I can sneak in there.

October 23, 2017

You can hear my lovely voice on Manton and Daniel's podcast, Extra Intuition. We talk a bit about early indie days, writing apps on the Mac, and a new app I've been working on.

Extra Intuition is the members-only edition of the Core Intuition podcast. I'll eventually post about the new app here, but for now they've got the :sparkly: exclusive :/sparkly: info on the new thing.

October 13, 2017

On Una Pizza Napoletana's Instagram feed I caught this neat video of a new pizza peel being made. But while it was playing I noticed there was something a little bit off about the router they were using. Like, hey, it has a little screen, and they put some tape down on the wood, and… what's going on here?

So I followed up on the name at the end of the video, found the website, and behold: Shaper Origin.

It's a handheld CNC machine.

Let me repeat that.

IT'S A HANDHELD CNC MACHINE.

How amazing is that?

I could try and explain how it works here, but you should really go watch the videos instead.

I need one of these. Maybe I can go halfsies with a neighbor or something.

October 12, 2017

The Suggested Donation podcast has an inspiring interview with artist Jeremy Mann.

It's easy to look at Jeremy's work, be mad at how amazing it is, and then say to yourself, "F that guy. Jeremy is super talented/gifted and that's how he does it all." But after listening to this interview, you'll see that he attacks his work with a level of passion that nobody but him will probably ever understand.

You're still mad, of course, because that's what jealousy does to you. But you'll respect his work ethic and realize he one hundred percent earned his talent.

If you're the type of person who enjoyed watching Jiro Dreams of Sushi, you're going to love this interview.

October 11, 2017

Washington State Election Security:

"Election security and integrity are critical to the foundations of a democracy. There are many opportunities for errors to occur in elections as even the best voting system can suffer from inaccuracies or tampering. Two of the easiest ways to ensure election security are to have a paper record of each vote, and to check the results by performing a post-election audit."

I'm pretty big on voting, and have donated sales of Acorn in the past to organizations that help folks get registered to vote. But of course it's not enough to just get people to vote, but we need to make sure that our votes are counted correctly.

That's where election auditing comes in. As programmers, we audit and look over our code to make sure it's doing what we think it is. We write little tests to make sure the output being produced is what we think we programmed it to be. And it isn't always, because we are humans and we make mistakes and that's just part of the process of coding.

I don't see elections as anything different, especially since these days everything is tabulated by software written by programmers just like me. Auditing the vote just makes sense, and it's a non-partisan issue. Have you ever met a programmer who hasn't written a bug? Has that type of programmer ever existed?

And auditing is cheap as well. But even if it wasn't, what kind of a price can you put on democracy, every US citizen's birthright?

Kirstin Mueller, my wife, also feels strongly about voting and has recently set up Washington State Election Security to help advocate for post-election audits. I obviously think it's a good idea, and I hope you'll contact your legislators to tell them the same thing.

October 11, 2017

The Verge on the new Kindle Oasis released today:

"Amazon has been selling Kindles for 10 years now, but “waterproof” hasn’t appear on its list of incremental technological advancements until now. The company just announced a new version of its popular e-reader that builds on last year’s Kindle design and now has an IPX8 waterproof rating."

I have a Kindle Voyage, and it's one of my favorite products right now (the AirPods being another favorite of mine). The Kindle UI is slow and not amazing, but I put up with it because I love the display so much. It's wonderful to read on in sunlight and in the dark. And it's light. And the charge lasts forever. I read a lot, and I probably wouldn't if it weren't for the Kindle.

Will I get the new Oasis? Probably, eventually. My Voyage still works perfectly fine. But the rainy season just started here in the Pacific NW, and I wonder if that's going to push me towards it now. Also, hot tubs, I guess?