One obvious question that hasn't been asked of me yet, but whose answer I will go on and on about (especially if you were unlucky enough to be sitting next to me last Thursday at Cyclops for our bi-monthly dev meetup; come join us!), is: why now? Why has it taken Acorn so long to begin using IOSurfaceRefs for images?
The answer is slightly complicated, involving older codebases and moving tech and me hating OpenGL and a couple of other reasons, but it basically comes down to one thing:
I'm an idiot.
Or to put some kinder words on it, my understanding of how IOSurfaces work was incomplete.
Let's take a look at what Apple has to say. The first sentence from IOSurface's documentation is as follows:
"The IOSurface framework provides a framebuffer object suitable for sharing across process boundaries."
IOSurface is neat. A shared bitmap that can cross between programs, and it's got a relatively easy API including two super critical functions named IOSurfaceLock and IOSurfaceUnlock. I mean, if you're sharing the data across process boundaries then you'll need to lock things so that the two apps don't step on each other's toes. But of course if you're not sharing it across processes, then you can ignore those locks, right? Right?
Of course not, as I eventually found out.
The thing was, I was already mixing IOSurfaceRefs and CGBitmapContexts successfully in Acorn without any major hiccups. I could make an IOSurface, grab its base address (which is where the pixels are stored), point a CGBitmapContext ref at it, and go on my merry way. I could draw to it, and clear it, and make CGImageRefs which would then turn into CIImageRefs for compositing, and everything was awesome.
What I couldn't do though, was make a CIImage directly from that IOSurface. Every time I tried, I'd end up with an image that was either 100% blue, or 100% red. I had convinced myself that these were some sort of mysterious debugging messages, but I just hadn't come across the correct documentation letting me know what it was. So once or twice a year I would mess with it, get nowhere, and go back to the way that worked.
Well, a couple of weeks ago I was trying again, and I got more frustrated than usual. I searched Google and GitHub for IOSurface and CGBitmapContext (in anger!), but I couldn't find anything relevant to what I wanted to do. More anger. This should work! Then I thought… what if I search my own computer using Spotlight? Maybe it'll turn something up…
And then a single file came back, named IOSurface2D.mm, which was some obscure sample code from Apple that I had received at one point a number of years ago.
I opened it, I looked, and I was happy and angry and relieved and sooo very mad at myself.
Yes, you can use a CGBitmapContext with an IOSurface without locking it. But eventually some other framework is going to grab that same IOSurface for drawing, lock it, and then some crazy black magic is going to swoop in and completely ruin your image. Even if you aren't using it across processes. So you'd better make sure to lock it too, even when you're not actively drawing to it, or things are going to go south.
And that's what I did. All I needed to do was call IOSurfaceLock and Unlock before doing anything with it, and everything was smooth and happy. And I quickly found that if I turn off beam-synced updates in OpenGL I could peg Quartz Debug's FrameMeter to over 90fps.
That was nice. And it was about time.
Since that discovery I've moved Acorn off OpenGL to Metal 2 as well as using newer Core Image APIs introduced in 10.13 (if you are on previous OS releases, it'll use the old way of drawing).
And now for a completely uninformed discussion about IOSurface
What is this black magic? Why does locking an IOSurface before wrapping a CGContext around it matter? Where, exactly, does the memory for the IOSurface live? Is it on the GPU or is it in main memory? Or is it both?
I can take a guess, and I'm probably wrong, but it's the only thing I've got right now. I think that IOSurface is mirrored across the GPU and main memory. And after you've unlocked it for drawing then something in the background will shuttle the data or subregions of it to or from the GPU. You can address the memory as if it's local, and everything just works.
If this is true, then I think that's amazing. Apple will have made a wonderful tech that transparently moves bits around to where it's needed and I don't even have to think about fiddling with the GPU.
Apple just needs to add a note to the documentation that locks are needed even if you aren't sharing the surface across process boundaries.