SpriteKit Bit Blitting By Drawing To A Texture For Performance Optimization

UPDATE: This post is a companion/comparison to my other post on drawing 1000s of sprites using SpriteKit’s native batch drawing. I was interested to see if rasterizing the scene before the GL draw pass would improve performance. In this particular instance it doesn’t seem to. SpriteKit’s native batch drawing of the full node hierarchy does a very good job of drawing all of the sprites in one pass, provided they originate from the same texture. There are certain scenarios where blitting the scene would provide a performance boost, for instance if there aren’t many repeated textures, or no textures at all (CGPaths in shape nodes etc.), but for this type of particle generation it is not necessary. Having said that, I’m leaving this post here as it is a useful conceptual example of blitting nodes to a flat texture and displaying it using the textureFromNode method.

One trick to moving lots and lots of particles around a screen without losing frame rate is to separate the positional calculations for each sprite from the drawing of each sprite. This is done by drawing the sprite node hierarchy into a single texture, which SpriteKit (OpenGL) then draws only once. Each loop we update the positions of the sprites inside a node that hasn’t been added to the scene, render that node to a texture, and assign the texture to a single sprite on the screen. This method of rasterizing the display hierarchy to a flat image is known as blitting.

The overhead with particles is the drawing phase. If we can cut this down to a single draw of one sprite’s texture, we free up precious cycles for the relatively inexpensive positional calculations of 1000s more particles. SpriteKit already does a pretty good job of batch drawing particles (see my previous article on drawing 1000s of sprites), which means that if we only use a single texture for a particle or sprite, it will draw all of these in one go. However we can optimize further, especially in situations where you have lots of different sprite textures, using the textureFromNode method.

It’s a straightforward process, and there are even more sneaky optimization tricks you can implement that I haven’t outlined here. Download the full working project from GitHub here. Below are the two main methods that draw the particles, commented with explanations.

- (void)setupLEDs
{
    self.backgroundColor = [SKColor blackColor];
    
    // create the particle texture
    SKTexture *ledTexture = [SKTexture textureWithImageNamed:@"whitePixel"];
    self.canvasNode = [SKNode node];
    
    // cycle through and throw as many sprites into the node as you want
    for (int i = 0; i < 5000; i++) {
        SKSpriteNode *sprite = [SKSpriteNode spriteNodeWithTexture:ledTexture];
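        // NOTE: 320 x 568 points matches an iPhone 5 screen; swap in self.size.width / self.size.height for other scene sizes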
        sprite.position = CGPointMake(arc4random_uniform(320), arc4random_uniform(568));
        sprite.colorBlendFactor = 1.;
        [self.canvasNode addChild:sprite];
    }
    
    // THIS IS WHERE THE MAGIC HAPPENS!!!!
    // create a texture from the node, even though it hasn't been added to the scene
    self.canvasTexture = [self.view textureFromNode:self.canvasNode];
    
    // only ever create one sprite from our node's texture
    self.canvasSprite = [SKSpriteNode spriteNodeWithTexture:self.canvasTexture size:self.frame.size];
    
    // need to update the anchor point as the texture is centered in the sprite
    self.canvasSprite.anchorPoint = CGPointMake(0, 0);
    
    // add the sprite, notice this only happens once as this has overhead
    [self addChild:self.canvasSprite];
    
}

-(void)update:(CFTimeInterval)currentTime {
    
    // cycle through the children of the node and reposition
    // this is where most of the heavy lifting now happens, as opposed to the drawing stage
    
    for (SKSpriteNode *sprite in self.canvasNode.children) {

        // animate the sprites however you want
        UIColor *randColor = (arc4random_uniform(10) >= 5) ? [UIColor redColor] : [UIColor greenColor];
        sprite.position = CGPointMake(arc4random_uniform(320), arc4random_uniform(568));
        [sprite setColor:randColor];

    }
    
    // re-draw the texture from the node hierarchy
    self.canvasTexture = [self.view textureFromNode:self.canvasNode];
    
    // swap out the old texture with the new one in our sprite << One OpenGL draw
    self.canvasSprite.texture = self.canvasTexture;
}
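
As the update above notes, where this approach should really pay off is when the canvas contains no repeated textures at all, for example SKShapeNodes built from CGPaths. The snippet below is only a rough sketch of that scenario, not part of the GitHub project: a hypothetical setupShapes method that reuses the same canvasNode / canvasTexture / canvasSprite properties as above, with the per-frame update simply repositioning the shapes (and, say, changing their fillColor) before re-rendering the canvas texture.

- (void)setupShapes
{
    self.backgroundColor = [SKColor blackColor];
    self.canvasNode = [SKNode node];
    
    // shape nodes are rendered from CGPaths rather than a shared texture,
    // so SpriteKit can't batch them the way it batches identical sprites
    for (int i = 0; i < 500; i++) {
        SKShapeNode *shape = [SKShapeNode node];
        CGPathRef circle = CGPathCreateWithEllipseInRect(CGRectMake(-3, -3, 6, 6), NULL);
        shape.path = circle;
        CGPathRelease(circle);
        shape.fillColor = [SKColor greenColor];
        shape.lineWidth = 0;
        shape.position = CGPointMake(arc4random_uniform(320), arc4random_uniform(568));
        [self.canvasNode addChild:shape];
    }
    
    // exactly the same blit as before: rasterize the offscreen node...
    self.canvasTexture = [self.view textureFromNode:self.canvasNode];
    
    // ...and display it through a single on-screen sprite
    self.canvasSprite = [SKSpriteNode spriteNodeWithTexture:self.canvasTexture size:self.frame.size];
    self.canvasSprite.anchorPoint = CGPointMake(0, 0);
    [self addChild:self.canvasSprite];
}

The win in this case comes from the draw pass: each shape node would otherwise tend to cost its own draw, whereas the flattened canvas costs one, exactly as with the sprites above.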

9 comments

  1. Thanks for the info. Using the simulator I seem to get the same frame rate with your example whether I just add the sprites to the scene or use your file untouched with Blitting. The same holds true when I bump the amount of sprites to 15000. Is there a benefit I am not noticing or perhaps something I am just missing? Thanks in advance for any advice 🙂


    1. Ben you’re absolutely correct. In this case flattening the node into a texture doesn’t add all that much optimization; SpriteKit natively handles the batch drawing of sprites with the same texture exceptionally well. I haven’t tested in the simulator, only on an iPhone 5S, where I am able to get around 20000 particles randomly positioning at 60fps. I’ve updated my intro to reflect the comparison between this method and simply placing the sprites directly in the scene. There isn’t a lot of documentation or discussion at the moment about the inner workings of SpriteKit, so I think it’s an interesting and useful conclusion to have come to, and it says a lot about how well SpriteKit has been written. I’m absolutely convinced that drawing to a texture using textureFromNode will afford performance optimizations in certain scenarios; I’m quite sure it would boost the performance of this Delaunay Triangulation example, as it contains no textures, only CGPaths in shape nodes.


  2. Thanks Sam! That is pretty interesting. I’ve never heard of Bit Blitting until your post. Sounds like a good thing to keep in mind.

    I tried applying this approach (although my knowledge is limited) to SKSpriteNodes with physics added. Instead of the update method I tried applying the texture in didSimulatePhysics but couldn’t get it to work as hoped. Any chance of a follow-up post on that? It would be interesting to see if there is a performance boost for that as well.

    Thanks again.


  3. While this seems like a very useful tutorial, how can this be applied when the scene being worked on is actually larger than the view? For instance, a scrolling platformer where the background image is far larger than the view. Trying to use this method actually takes the entire scene and squishes it into the view, which is not desirable.


    1. Hey Jon, thanks for the comment. Clearly this isn’t a tutorial on building a side-scrolling game but, as stated in the opening, an explanation of how to bit blit with SpriteKit. I’m a bit confused, as the method uses textureFromNode, which has nothing to do with the size of a scene, but the size of a node… In any case I would not recommend using this for a side-scrolling game; there is no need. Smart use of texture atlases is enough and SpriteKit takes care of the batch drawing for you. An example of where this method would be useful is dynamically drawn shape nodes. These are drawn with Core Graphics rather than flat PNGs, so drawing to a bitmap would probably optimize any updates in movement. See my post on Delaunay Triangulation for an example of SKShapeNodes.


  4. God dammit, I can say it works on iOS 11 / Xcode 9.1.
    Buuut, you can’t use almost any SKAction things.
    As I found, you can use SKShapeNodes even if you draw lines with them.
    But it’s damn awesome.
    I went from 5 fps with a very complex points/lines animation to 35 fps.
    That’s a great result!
    Thank you very much :)


  5. It’s a straightforward process, and there are even more sneaky optimization tricks you can implement that I haven’t outlined here.

    Please do outline the other methods that you have…

