Header image

Tyme iOS

June 26th, 2014 | Posted by lars in Talk | Tyme - (3 Comments)

It’s been a while since I’ve posted any news on this blog. As many of you already know, we are developing a companion iOS app for Tyme, and I just wanted to give you a brief update: we’re making great progress and the app is taking shape. The reason this is all taking longer than planned is that I’ve taken the opportunity to re-shape some of the internal database structure of Tyme and re-think the iCloud syncing mechanism. With the release of the iOS app there will be an update for the OSX app as well, and iCloud sync will be more stable, robust and future proof. The bug that was introduced with v1.1.3 a few months ago was really annoying for us and of course for you, the users. To make sure that this will never happen again, I re-built the syncing mechanism using Drew McCormack‘s great Ensembles framework. So stay tuned, it will be awesome ;)

Tyme 1.1.3 & iCloud – solving sync issues

February 24th, 2014 | Posted by lars in Tyme - (3 Comments)

Hello Tyme users,
if you upgraded to Tyme 1.1.3 and some of your data disappeared: no data is lost, but it will not be accessible until every Mac that uses iCloud sync has 1.1.3 installed.

What’s happening is that the database of Tyme 1.1.3 has been upgraded to support some new features that were requested by a lot of users. Therefore the database needs to be migrated, which is done automatically by iCloud, and each Mac converts the data that was created on it. For example, if you have Tyme installed on two Macs with iCloud sync and created a project on each of them, the “other” project will “disappear” when you upgrade your first Mac. When you upgrade the second one, everything will be back after a short while, once iCloud syncs again.

As far as we know now, this is the expected behavior when upgrading an iCloud database. If you’re a developer, this article might be of interest.

Of course this is definitely NOT the behavior you (our fellow Tyme users) expect, and we’re really sorry that this happened. Instead of temporarily “losing” data until all Macs are up to date, you would expect that nothing changes at all, and that you don’t get a heart attack after upgrading Tyme.

If you’re using just one Mac with iCloud sync or you are a new Tyme user you should not have any issues.

If you’re afraid of losing your data when upgrading Tyme to 1.1.3, you can back it up before you upgrade, or afterwards, as long as one of your Macs still has the old version installed:

- Start Tyme, open settings
- Turn off iCloud, close settings
- Click “Yes”, when asked for “Keep a copy of iCloud data”

Now your data is stored in three files on your local hard drive in:

/Users/[YOUR_USERNAME]/Library/Containers/de.nulldesign.tyme.osx/Data/Library/Application Support/Tyme.storedata
/Users/[YOUR_USERNAME]/Library/Containers/de.nulldesign.tyme.osx/Data/Library/Application Support/Tyme.storedata-shm
/Users/[YOUR_USERNAME]/Library/Containers/de.nulldesign.tyme.osx/Data/Library/Application Support/Tyme.storedata-wal

You can back these up and copy them to a secure place. If anything goes wrong, you can always copy them back (with iCloud turned off in Tyme).
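If you prefer to script the backup, the copy step can be sketched in a few lines of Python (a hypothetical helper, not part of Tyme; the function name and folder layout are my own):

```python
import os
import shutil

# The three store files listed above; -shm/-wal may be absent
# if the store was checkpointed.
STORE_FILES = ["Tyme.storedata", "Tyme.storedata-shm", "Tyme.storedata-wal"]

def backup_tyme_data(src_dir, dest_dir):
    """Copy the Tyme store files from src_dir to dest_dir, return the copied names."""
    os.makedirs(dest_dir, exist_ok=True)
    copied = []
    for name in STORE_FILES:
        src = os.path.join(src_dir, name)
        if os.path.exists(src):
            shutil.copy2(src, os.path.join(dest_dir, name))
            copied.append(name)
    return copied
```

Call it with the Application Support folder shown above as src_dir and any safe location as dest_dir.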

With the next update we will streamline this process and integrate a proper backup solution, so that things like this never happen again. Sorry again for the stress that version 1.1.3 is causing some of you. If you need any help, please contact us!



Update: If syncing stopped working

After upgrading to 1.1.3 it can happen in rare cases that syncing between your Macs stops working. Maybe one of your computers has a slightly older version of your projects, or it’s just missing a few time entries after you upgraded all Macs to 1.1.3. Even if you add new entries on one of the computers, they won’t sync to the others.

If you experience these issues you can do the following:

- Start Tyme, open settings
- Turn off iCloud, close settings
- Click “Yes”, when asked for “Keep a copy of iCloud data”
- Verify that this dataset is the one you want to have
- Close Tyme (If you want, make a backup of your files as described above)
- Open the iCloud settings in System Preferences
- Click on Manage, then Tyme, then Documents & Data
- Click on delete all data
- Close the panel

- Now you have deleted all iCloud data and Tyme is ready to seed the Cloud with data again
- Start Tyme, turn on iCloud, transfer data: Yes

Done. Now your data is freshly synced with iCloud. If you open Tyme on your other Macs, they will sync again; iCloud may need a few minutes to do so, but the sync issues are gone. Apple’s iCloud has gotten a lot better in the meantime but, as we are experiencing now, it’s not perfect and still has issues. I can only say sorry again and file another bug report with Apple :(

Update 2: App Crashes after Upgrading to 10.9.2

A few users were reporting crashes after upgrading Tyme to 1.1.3 and OSX to 10.9.2. Just be patient, iCloud seems to have a few hiccups. It will work after a while. The crash is an iCloud bug…

Tyme Upgrade 1.1.3

February 23rd, 2014 | Posted by lars in Tyme - (0 Comments)

Attention Upgrading Users: 

Tyme 1.1.3 has some issues when you’re using iCloud and upgrading from a previous version. Projects disappear and only reappear after all Macs you’re running Tyme on have the 1.1.3 upgrade installed. Data is not lost. We’re looking into the issue with Apple. We’re very sorry for this inconvenience and will do everything to solve it quickly!

If you are a new Tyme user, you won’t have any issues. It only affects the upgrade from 1.1.2 to 1.1.3.

Tyme & iCloud – finally

January 16th, 2014 | Posted by lars in Tyme - (0 Comments)


We just submitted Tyme 1.1.0 to the App Store! Finally you’ll be able to sync your projects and tracked times between your Macs. The iCloud integration was a tough one, but it works like a charm now. We decided to use the latest, improved version of iCloud syncing from Apple that was introduced with Mavericks (10.9), so Mavericks is a requirement for using iCloud with Tyme. Tyme will of course continue to run on Lion and Mountain Lion, but you won’t be able to use the iCloud feature there. We made this decision after looking into the Tyme OS usage statistics: 90% of our users are already running Tyme under Mavericks. Mountain Lion has about 7%, and that share is shrinking each week as users upgrade to Mavericks.

So keep your fingers crossed that Apple will release it fast ;)

Tyme – Mavericks motion detection

November 11th, 2013 | Posted by lars in Tyme - (0 Comments)

Mavericks (OSX 10.9) introduced a new “feature”, if you can call it that. It detects motion via the light sensor and prevents system sleep that way. So if you move in front of your laptop, it won’t sleep. Read more about it here. This “feature” prevents Tyme from correctly detecting the idle timeout in some cases.

So you can either:

- Freeze. Don’t move at all.
- Use Duct tape to fix that.
- Wait for the Tyme update, which will be in the store very soon.

Tyme Updates

October 13th, 2013 | Posted by lars in Tyme - (0 Comments)

Tyme just got featured on ifun.de and I received a lot of positive feedback about our little time tracking companion. The two most asked questions were:

Q: Is there an iOS app planned?
A: Yes! It’s planned and I’m starting to develop it at the moment.

Q: Is there a sync option planned to make it possible to sync all data between different macs?
A: Yes, definitely! iCloud sync is planned and I’m working on it right now! Syncing between any combination of Mac and iOS will be possible.

From a development perspective, this is a daunting task. There has been a lot of talk about iCloud and Core Data sync in the past:

- Why doesn’t iCloud just work
- The gathering storm: Our travails with iCloud sync
- Does Core Data sync quack?

To make it short: there are a lot of issues with iCloud Core Data sync. Most have been resolved with the latest OSX Mavericks (10.9) and iOS 7, but I want the users of Lion and Mountain Lion to be able to use syncing in Tyme as well, so going that route is not an option for me. I could implement the syncing myself, like the Clear guys did, or use another cloud solution like the new Dropbox Datastore API. But I wanted to stick with iCloud, since everyone on a Mac or an iOS device has iCloud, but not everyone (at least here in Germany) uses Dropbox. So while investigating the best possible solution, I found this:

Drew McCormack just released an open source framework called Ensembles, which does the heavy synchronization work with Core Data using diff files over iCloud. He has been dealing with iCloud and Core Data from the very beginning, and if there is one iCloud sync expert out there, he is the one. I’m currently investigating the framework and I’m really confident that this is the solution to syncing for Tyme, even if it’s still an early version of the framework. So I have to do a lot of testing, and that can be very time consuming with iCloud :).

So long, stay tuned…

Tyme – Timetracking app released

September 10th, 2013 | Posted by lars in Portfolio | Talk | Tyme - (2 Comments)

Besides my day job, I worked on a remake of my old time tracking app SimpleTimer 2. SimpleTimer was built with Adobe AIR back in the day and is quite outdated by now, so it was time for a fresh restart. The new app is a native Mac OS app, comes with a new name, “Tyme”, and is available on the App Store. The design of Tyme and the beautiful logo were made by Margit Schroeder (hello gitti). Thank you for all the nice colors :)


It has a lot of new cool features:

  • Starts automatically with system startup
  • Accessible from the menubar
  • Quick start & stop for tasks in the menubar
  • Turn back the time, if you started the timer too late
  • Displays the running time and the daily total in the menubar
  • Simple project and task management with deadlines and planned budgets
  • Option to round time entries automatically
  • Works offline, no internet connection required
  • Daily, weekly, and monthly statistics about your workload and budget
  • Exports time entries as CSV or HTML documents
  • Supports the MacBook Pro Retina Display

And more are planned. iCloud integration is possibly the next feature. If you’re missing a feature that you’d like to be integrated, don’t hesitate to drop me an email.

So long…

Cocos2Dx – faster fonts

May 30th, 2013 | Posted by lars in Cocos2D | OpenGL | Source | Talk - (4 Comments)

For my current client, I’m working on a game that needs to display a lot of text. As the rendering engine we’re using Cocos2D. If you are familiar with Cocos2D and have worked with bitmap fonts (CCLabelBMFont) in your game, you’ll have noticed that they can really slow everything down, because every single text is rendered as a separate batch and you quickly end up with a lot of draw calls. After a quick search, I couldn’t find an extension or sample code that addresses this problem by batching all fonts into one single batch. So I quickly wrote my own font batch and, hooray, the game runs a lot faster now. It’s more or less a “quick” hack and it could be extended with a few missing font features (text alignment, etc.), but for my case it does the job. Have fun with it:


#import "cocos2d.h"

using namespace cocos2d;

class LabelBMFontBatch : public cocos2d::CCSpriteBatchNode {

    const char *_fntFile;
    int _lastChildTag;

public:

    LabelBMFontBatch(const char *fntFile);
    virtual ~LabelBMFontBatch();

    static LabelBMFontBatch *create(const char *fileImage, const char *fntFile, unsigned int capacity);

    /* returns a text id. used to identify the text in the batch. use the id with removeTextByID */
    int addTextAt(const char *text, CCPoint position, float scale);
    void removeTextByID(int textID);
    void removeAllTexts();
};


#include "LabelBMFontBatch.h"

LabelBMFontBatch::LabelBMFontBatch(const char *fntFile) : CCSpriteBatchNode() {
    _fntFile = fntFile;
    _lastChildTag = 0;
}

LabelBMFontBatch::~LabelBMFontBatch() {
    _fntFile = NULL;
}

LabelBMFontBatch *LabelBMFontBatch::create(const char *fileImage, const char *fntFile, unsigned int capacity) {
    LabelBMFontBatch *batchNode = new LabelBMFontBatch(fntFile);
    batchNode->initWithFile(fileImage, capacity);
    batchNode->autorelease();
    return batchNode;
}

int LabelBMFontBatch::addTextAt(const char *text, CCPoint position, float scale) {
    // every text gets its own tag, shared by all of its glyph sprites
    _lastChildTag += 100;

    // let CCLabelBMFont do the glyph layout, then steal its children
    CCLabelBMFont *bmpFont = CCLabelBMFont::create(text, _fntFile);
    CCSize textSize = bmpFont->getContentSize();

    // center the text horizontally around the given position
    position.x -= textSize.width * 0.5f * scale;

    // re-parent the glyph sprites into this batch (always take index 0,
    // since removing a child shifts the children array)
    while (bmpFont->getChildrenCount() > 0) {
        CCSprite *pNode = (CCSprite *)bmpFont->getChildren()->objectAtIndex(0);
        CCPoint pNodePosition = pNode->getPosition();
        bmpFont->removeChild(pNode, false);
        pNode->setScale(scale);
        pNode->setPosition(ccp(position.x + pNodePosition.x * scale, position.y + pNodePosition.y * scale));
        this->addChild(pNode, 0, _lastChildTag);
    }

    return _lastChildTag;
}

void LabelBMFontBatch::removeTextByID(int textID) {
    // all glyphs of one text share the same tag, so remove them one by one
    CCNode *child = this->getChildByTag(textID);
    while (child != NULL) {
        this->removeChild(child, true);
        child = this->getChildByTag(textID);
    }
}

void LabelBMFontBatch::removeAllTexts() {
    this->removeAllChildrenWithCleanup(true);
}

For quite a while I was struggling to make a decision: whether I can continue supporting the Flash community with the ongoing development of my open source Stage3D engine ND2D.

The sad answer is: I can’t. Even though it has been so much fun building the engine, being one of the few guys who could explore and tinker with Flash Player 11 from the very early alphas (thank you Thibault!) and bringing this engine to a level where it can compete with other professional 2D engines, it’s time for me to say goodbye.

The truth is that I just don’t do Flash / ActionScript projects anymore. It wasn’t a conscious decision, more a smooth transition. Most of my client work is now native iOS; I moved completely to mobile platforms. About two years ago I started to play around with the iOS SDK and began building my own little mobile apps. What started as a bit of experimenting is now the platform I use on a daily basis, and I (mostly) earn my money building native iOS apps for clients. After so many years of Flash and ActionScript it was really refreshing to learn so many new things and switch to a platform with such different capabilities. I really feel comfortable in this world, being closer to the operating system than to a virtual machine. I think this was my main drive: try a new platform and a new language.

I continued to add features to ND2D, merge in pull requests and answer questions in the forum until now, but when you don’t use the technology in your own projects, it’s pretty hard to keep up to date with all the new stuff that has been added to the platform and really bring the engine to the next level. And it’s a bit pointless as well. So I had to draw a line here. Sad but true…

It is so cool to see what some of you have done with the engine and what nice games (#1, #2) you have made. ND2D won’t disappear. It will still be available on GitHub with all examples and docs. And I hope it won’t be abandoned now. There are some really interesting forks (hello Rolf!) you might want to have a look at. If any of you wants to actively continue developing ND2D, drop me a mail!

So! Thanks to all of you guys in the forum (except the spambots ;)) for the nice discussions, interesting questions and code improvements. Thanks to sHTiF for some really cool GPU deep dives and to Daniel for the nice email conversations about Stage3D. Thanks to everyone who submitted a patch or a pull request!

ND2D Extensions & Games

February 7th, 2012 | Posted by lars in Molehill / Stage3D | ND2D | Talk - (5 Comments)

You might have seen it already: Mike built some cool force field experiments with ND2D a while ago. Now he opened a github project called Napoleon. Napoleon is a 2D physics extension for ND2D using Nape, which looks very promising:

Second, I found a really nice looking game built with ND2D by Björn: “28 Bratwursts Later“. I really like the title ;). It’s still in development, but it looks like good fun. Check out the video here:

Another tutorial game, made by Roger, is Frogger: ND2D. He explains how to build a game like this with ND2D, with a lot of code examples. A good place to start if you are building your first game:

This one is already a few months old: Infinivaders. A stunning 8-bit retro space shooter.

I just released version 0.9.13 of my Stage3D engine. ND2D is in a very good and stable state now. All the features that I planned to integrate are implemented and working, so it’s very close to v1.0. It’s about time for a detailed »best practice and how to« post. This post is meant for the traditional Flash developer who has never touched a GPU (the processor on your graphics card) accelerated environment. There are significant differences in this GPU powered world, and you have to think about and prepare your assets in a different way than you are used to. Let’s start:

What is ND2D?

ND2D is a GPU accelerated 2D game engine that makes use of the new Stage3D features introduced in Flash Player 11 (also known as Molehill). It has nothing to do with the traditional Flash display list and runs on a different “layer”, behind all Flash content. If you want a little low level knowledge, read Thibault’s article here. Using the GPU, the Flash Player is able to render full screen HD content at 60Hz… finally a dream comes true. Of course Stage3D is mainly focused on 3D, but we can make good use of the hardware for a 2D engine as well and speed things up a lot.

A GPU Environment

First of all, let’s try to understand a little how 2D rendering on a GPU works. Actually, the GPU can only deal with 3D data; to render 2D, we just don’t use the third dimension. So you could call ND2D a “planes-in-3D-space engine” if you like.

Unfortunately, the GPU can only deal with triangles (in the 3D world a triangle is also called a polygon). To render a sprite, we need to construct a quad out of two triangles like this:

Next we have to specify which part of our bitmap is mapped to which corner of our quad. This is called UV mapping. As you see in the picture above, the top left corner has a UV coordinate of (0, 0), which is the top left pixel of our bitmap. The lower right corner, UV (1, 1), is of course the lower rightmost pixel of our image. The GPU interpolates between these coordinates and knows which pixel to choose for a UV (0.5, 0.5) coordinate (if our image is 128×128 px, it chooses the pixel at 64, 64; this is called sampling). One important thing is that the GPU can only handle texture sizes that are a power of two (32×32, 64×32, 128×128, 256×64, etc.). In the above example, a lot of space and therefore texture memory is wasted, because ND2D has to blow up the 68×68 PNG of the little bacteria and create a 128×128 texture. So keep the power of two (2^n) in mind when exporting your images. Later we’ll get to know the TextureAtlas and its tools, which take care of the unused-space problem automatically.
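To make the waste concrete, here is a small Python sketch (illustrative only, not ND2D code) that pads a bitmap size to the next power of two and reports how much of the resulting texture goes unused:

```python
def next_pow2(n):
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def texture_waste(w, h):
    """Padded texture size and the fraction of wasted pixels."""
    tw, th = next_pow2(w), next_pow2(h)
    waste = 1.0 - (w * h) / float(tw * th)
    return (tw, th), waste
```

For the 68×68 bacteria this yields a 128×128 texture with roughly 72% of the pixels wasted.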

So we need to pass all this information to the GPU: a quad/triangle definition, UV coordinates and the bitmap (on the GPU it’s called a texture). All of this is done internally in ND2D. You only have to deal with these low level details if you want to create your own objects or write your own materials and shaders.

The display hierarchy and its limitations

To mimic the display list, ND2D has a hierarchy similar to the Flash display list. It feels very familiar, although there are significant differences we’ll get to know now. Everything in ND2D is a Node2D, which can have a number of children, just like in your normal Flash display list. The drawing is done from back to front, of course. The draw loop starts with the topmost parent and continues with the children. This is no different from Flash’s display list.

One thing that’s very important to know, basically the most important thing when you’re dealing with a GPU environment, is »how« things are sent to the GPU and drawn. Keep this in mind; this is the bottleneck and the reason for low speed in your game: we have to send as little data to the GPU and make as few calls as possible! Unfortunately ND2D, or any other engine, can’t automate this process. Let me give you an example:

You’re building a game where you have hundreds or even thousands of fluffy little bunnies on the screen. If you created 1000 Sprite2Ds, ND2D would have to send 2000 triangles and 1000 textures to the GPU, and the GPU would have to draw them one by one, which would be very slow. It might even be slower than a traditional blitting approach. But don’t give up so fast: there is batching. The GPU has methods that allow ND2D to send the data for 1000 sprites as one single data package instead of 1000 little ones. The downside is that the texture of all these 1000 sprites has to be the same. That’s the limitation: batching is only possible if the texture of the batched nodes is the same! Good for us if we want to display 1000 bunnies that all look alike, but what if we have lots of different looking bunnies to display? We can’t go back to rendering them all one by one; that would be slow…

TextureAtlases / SpriteSheets

Behold! There’s always a solution, and this one is called a TextureAtlas. If the limitation is that all sprites have to share the same texture, then why not just put all the graphics we have into one bigger texture:

By changing the UV coordinates for each sprite, we can specify which part of the texture should be drawn for our sprite. There are a few good tools that help you generate a TextureAtlas (a bitmap with a size of 2^n); you don’t have to do this by hand. ND2D currently supports these tools:

- TexturePacker (cocos2d + cocos2d-0.99.4 format)
- Zwoptex App (zwoptex-default format)

This is the main difference from traditional Flash. Instead of getting your assets one by one from a library, you “bake” them all into one big PNG. And that’s the way you should go. If, for some reason, you need a dynamic approach and want to generate this atlas on the fly, you can check out the “nd2d-dynatlas” extension built by wjammal (thanks mate!).
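What the atlas XML boils down to is one pixel rectangle per frame, which the engine converts into UV coordinates. A quick Python illustration of that conversion (my own toy function, not the ND2D API):

```python
def frame_uvs(x, y, w, h, atlas_w, atlas_h):
    """Normalized (u0, v0, u1, v1) for a frame at pixel rect (x, y, w, h) in the atlas."""
    return (x / float(atlas_w), y / float(atlas_h),
            (x + w) / float(atlas_w), (y + h) / float(atlas_h))
```

A 64×64 frame sitting at (128, 0) in a 256×256 atlas maps to (0.5, 0.0, 0.75, 0.25).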

Using a batch

ND2D provides two different kinds of batches: the Sprite2DCloud and the Sprite2DBatch (I’ll explain the differences later). You just create a batch, pass it the TextureAtlas and the Texture2D and start to add children:

var atlasTex:Texture2D = Texture2D.textureFromBitmapData(new textureAtlasBitmap().bitmapData);
var atlas:TextureAtlas = new TextureAtlas(atlasTex.bitmapWidth, atlasTex.bitmapHeight, new XML(new textureAtlasXML()), TextureAtlas.XML_FORMAT_ZWOPTEX, 5, false);
var batch:Sprite2DBatch = new Sprite2DBatch(atlasTex);
var s:Sprite2D = new Sprite2D();
batch.addChild(s);

As you can see, you have to add an empty Sprite2D to the batch. After adding the child to the batch, the batch passes a copy of the TextureAtlas to the sprite. Then you’re able to set individual frames or animations on that sprite:


To stop any confusion: a TextureAtlas is sometimes called a SpriteSheet and vice versa. In ND2D, a TextureAtlas means a bitmap containing packed images like in the screenshot above, plus an XML definition that defines the UV coordinates for each sprite. The simpler version is a SpriteSheet, which just contains images of equal sizes and doesn’t need an XML. You can create SpriteSheets with tools like SWFSheet by Keith Peters.


In an ideal world, you would place all your graphics in one big TextureAtlas and work with just one batch. In reality that’s not always possible. The size of a texture is limited (2048×2048) and you sometimes can’t squeeze all your graphics and animations into it, so you might need a second batch with a second texture. You can’t nest batches, and since we live in a hierarchical world, you have to keep in mind that one batch and all of its children will be drawn before the other! So one batch could deal with all background and level assets, while the upper batch renders the characters and other foreground graphics.

I said I’d explain the difference between a Sprite2DCloud and a Sprite2DBatch, so here we go. I won’t get into technical details here, but there are basically two different methods for batching data. For those who are interested: ND2D – speeding up the engine.

The Sprite2DCloud does more computation on the CPU and delivers a complete package to the GPU, while the Sprite2DBatch receives “chunks” of data and processes them on the GPU:

Sprite2DCloud: Higher CPU load, lower GPU usage
Sprite2DBatch: Lower CPU load, higher GPU usage

On a desktop machine with a decent CPU, the cloud will be faster. On machines with a slower CPU, or on mobile systems, the batch could be faster. So I’m afraid it’s up to you to choose which batching method you’d like to use. One more important thing about the differences: due to technical limitations (and speed optimizations) the cloud can only render its own children and won’t render the children’s children, while the batch will render the full display list tree; no limitations there. I’d always vote for the batch: even though it’s a bit slower on a desktop machine, it’s still powerful enough for our fluffy bunny horde.

There are other objects in ND2D that are fully calculated on the GPU, for example the ParticleSystem2D. Get into the details here.


I mentioned the word »mobile« quite a few times, and you might ask when Stage3D for mobile will be available. I can’t say when it will be public, but as you know, Adobe is working hard on it. All I can say is that ND2D is already ready for mobile: MultiTouchEvents are integrated, as well as a new compressed texture format (ATF), which will (hopefully) be released together with Stage3D for mobile.

I hope this post was somehow useful to you and helps you get started in this new accelerated world. If you have any questions, don’t hesitate to ask. ND2D also has a forum where a lot of questions have already been answered.


In my current client project we’re developing an AIR application targeted at iOS (Android will follow), and we wanted to make use of some iOS SDK features, so I had to write my first native extension. Developing the Objective-C part is pretty straightforward (if you know C++ and Objective-C), and so is the ActionScript part. There are some good examples and tutorials on the Adobe site about all kinds of extensions.

The hard part was getting this thing to work, so I just wanted to share my settings here. This might be useful if you’re starting to develop your first ANE. I had strange crashes when I packaged the app with my ANE and couldn’t figure out what was wrong. The app just crashed every time I launched it on the device, and the crash log wasn’t very helpful. After quite a search, I found out that I hadn’t set an apparently important flag for the LLVM compiler in my Xcode project. So, be sure to set:

Enable Linking With Shared Libraries: No

And if you want to get rid of the warnings:

Warnings: Missing Function Prototypes: No

The second part was packaging the ANE correctly. The working command for my case is:

adt -package -target ane MyExtension.ane extension.xml -swc MyExtension.swc -platform iPhone-ARM library.swf libMyNativeExtensionIOS.a

The annoying thing about packaging the ANE is that after you have built your swc, you have to extract the library.swf out of it (by renaming it to .zip and extracting the swf). So you need both the swc AND the swf. I haven’t written an ANT task to automate the process yet, and I don’t know the reason for this strange step, since the ADT compiler has everything it needs within the swc. Only Adobe knows ;)
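Since a swc is just a zip archive, the rename-and-extract step is easy to script. A possible Python sketch (the function name is mine, not part of any Adobe tooling):

```python
import zipfile

def extract_library_swf(swc_path, out_path="library.swf"):
    """Pull library.swf out of a swc (a swc is a plain zip archive)."""
    with zipfile.ZipFile(swc_path) as swc:
        with open(out_path, "wb") as out:
            out.write(swc.read("library.swf"))
    return out_path
```

Run it on the swc before calling adt, so both the swc and the extracted swf are in place.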

Obviously you cannot test on the device every time, because the deployment process to iOS is more or less manual and just takes too long at the moment. I found out that I could link the ANE as a regular library (SWC) in my Flash Builder project and launch the app on my desktop machine. When the native extension tries to create the context on the desktop machine, it fails and returns null, because it was built only for the iOS platform:

context = ExtensionContext.createExtensionContext(EXTENSION_ID, null);

So I could implement a fallback for the extension when running on the desktop that mocked the behaviour in AS3. To package the application for iOS, I wrote a small ANT task. This way we can easily test on the device and have a fallback when testing on the desktop, without having to write desktop extensions as well.

So, maybe someone will find this useful…

ND2D – Blur

December 7th, 2011 | Posted by lars in Molehill / Stage3D | ND2D | Pixelshader - (8 Comments)

Good news everyone. I found a little time to implement a blur shader for ND2D and I’m trying to explain how to implement a shader like this:

First of all: how does a blur work? To blur an image, you sample the neighbouring pixels of each pixel in the image and compute the average color. For example, say you have a 3×3 image where the pixel in the middle is black and the rest are white. You sample all neighbours of the middle pixel (r: 1.0, g: 1.0, b: 1.0) * 8 plus the pixel itself (r: 0.0, g: 0.0, b: 0.0) * 1 and compute the average (divide by 9); the resulting pixel will be (r: 0.89, g: 0.89, b: 0.89). Just do that for every pixel in the image and you’ll have a blur.

To implement this in a shader we have to consider a few things. First, you want to save as many texture sampling calls as possible. For example, if you want to blur your image by 4 pixels horizontally and vertically, you would have to take 9 × 9 = 81 samples per pixel (4 to each side, up and down, plus the pixel that should be blurred itself). This is way too much, and you could never squeeze it into a fragment shader with AGAL. But there is a trick: first blur your image horizontally, take the result and blur it vertically. This way you only have to take 9 + 9 = 18 samples (see Article: Gaussian Blur Shader). Implementing it this way means we do a horizontal blur, write the output to a texture and do a vertical blur with the already horizontally blurred texture; in other words, two-pass rendering. A nice side effect of this approach is that we can blur not only in the x AND y directions, but also in x OR y individually.
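The two-pass trick is easy to verify outside a shader. Here is a plain Python sketch (CPU-side, single channel, clamped edges; illustrative only, no ND2D code) showing that horizontal-then-vertical box blurring gives the same result as the full 2D blur:

```python
def blur_1d(row, r):
    """Box-blur a 1D list with radius r, clamping at the edges."""
    n = len(row)
    out = []
    for i in range(n):
        taps = [row[min(max(i + k, 0), n - 1)] for k in range(-r, r + 1)]
        out.append(sum(taps) / float(len(taps)))
    return out

def blur_h(img, r):
    """Horizontal pass: blur every row."""
    return [blur_1d(row, r) for row in img]

def blur_v(img, r):
    """Vertical pass: transpose, blur rows, transpose back."""
    cols = blur_h([list(c) for c in zip(*img)], r)
    return [list(row) for row in zip(*cols)]

def blur_2d_naive(img, r):
    """One-pass 2D box blur: (2r+1)^2 samples per output pixel."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    s += img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
            out[y][x] = s / float((2 * r + 1) ** 2)
    return out
```

With r = 4 the naive version touches 81 pixels per output pixel, the separable one 18; for the 3×3 example above, the blurred center comes out as 8/9 ≈ 0.89 either way.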

So we’ve implemented our blur now and are happy that everything is blurry with a 4×4 blur, but how do we animate it? We could generate the shader dynamically, so that we’d have a different shader for different blur values, but space in a fragment shader is limited; a program can’t exceed a certain size. What if we want a blur of 50×50? We can’t write a shader that does this. The program would just be too big, since we don’t have loops in AGAL.

One part of the answer is good old Carl Friedrich Gauß. He invented a formula a few hundred years ago that lets us weight the sampled pixels (see Article: Gaussian Blur and an Implementation). So our shader can remain static and always sample 9 pixels, but the Gaussian function tells us how the samples are weighted. Instead of dividing all samples by 9, we have a factor for each sample. Now not only is the blur dynamic, it even looks a lot better with the Gaussian weights than with our simple “divide by 9″ approach. Neat! Now we can animate a blur from 0 to 4 pixels. That’s OK, but we wanted 50 or more, remember?
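The weights themselves are cheap to precompute on the CPU and upload as shader constants. A Python sketch of the weight table (the sigma parameter is my own choice here; tune it to control the blur's softness):

```python
import math

def gaussian_weights(taps, sigma):
    """Normalized Gaussian weights for a symmetric 1D kernel with `taps` samples."""
    r = taps // 2
    w = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-r, r + 1)]
    total = sum(w)
    return [x / total for x in w]
```

The weights sum to 1 (so overall brightness is preserved), are symmetric, and peak at the center tap; animating sigma toward 0 concentrates all weight on the center, i.e. no blur.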

The last and final part of our fully dynamic blur shader: just repeat what we’ve done already! If you want a blur of 10, blur twice by 4 pixels, followed by a 2 pixel blur. Implementing this is also straightforward: setRenderToTexture(), renderBlur(), switchTextures(), all done in a loop.
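The pass loop is just repeated application of the same small kernel. A 1D Python stand-in (illustrative only; zero padding instead of render textures, function names are mine) showing how each extra pass widens the blur’s reach:

```python
def box_blur_1d(signal, r):
    """One blur pass: box filter of radius r with zero padding at the edges."""
    n = len(signal)
    out = []
    for i in range(n):
        taps = [signal[i + k] for k in range(-r, r + 1) if 0 <= i + k < n]
        out.append(sum(taps) / float(2 * r + 1))
    return out

def blur_passes(signal, radii):
    """The ping-pong loop: one blur pass per radius in `radii`."""
    for r in radii:
        signal = box_blur_1d(signal, r)
    return signal

def support(signal):
    """How many samples the blur has spread to (non-zero entries)."""
    return sum(1 for v in signal if v > 0)
```

A single impulse blurred once with r = 4 spreads over 9 samples; after passes with radii 4, 4 and 2 it has spread 4 + 4 + 2 = 10 samples to each side.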

Enough of the tech talk, here’s the result (move your mouse to blur the sprites in x and/or y):

You’ll notice the ugly edges in the middle image. This happens if the blur is larger than the transparent space available in the texture, so the blur is “cut off”. I haven’t found a good solution for this, except: leave enough space in your textures if you want to blur them ;)

I found some time to add a little more “D” to ND2D. Besides the regular “rotation” property, which rotates around the z-axis, all nodes now have rotationX, rotationY and rotationZ properties and are displayed via a perspective projection. It works similarly to the Flash 10 2.5D API (planes in space) and could be useful for some fancy transition effects.

Second, I added a few properties to change the appearance of textures. You can stretch textures now and define how they should be sampled. The API lets you choose how the texture is filtered, whether mipmapping should be used and how the mipmaps should be filtered. I created four predefined quality settings: LOW, MED, HIGH and ULTRA. Have fun:

ND2D – Speed tests

October 23rd, 2011 | Posted by lars in Molehill / Stage3D | ND2D | Talk - (15 Comments)

When talking about accelerated 2D in Flash, everybody is always asking for performance comparisons. So I threw together a little speed test for ND2D, mainly to give you some numbers, but also to test the different implementations of ND2D‘s objects. After selecting one of the four options, the test keeps adding sprites until the framerate drops below 60Hz. While adding sprites it’s likely that the framerate drops below 60Hz for a short while, because adding and creating objects is expensive too. But what counts is the end result.

This test allows you to compare four different types of objects / rendering:

  • Sprite2D with a shared texture. Every sprite is drawn in a separate drawCall, but there’s only one texture in memory
  • Sprite2D with individual textures. A drawCall per sprite is used as well, and there are as many textures in memory as there are sprites
  • Sprite2DCloud. All sprites share a texture and are drawn in a single drawCall. All movement is calculated on the CPU and the vertexbuffer is uploaded to the GPU every frame
  • Sprite2DBatch. A shared texture as well, but most of the work is done by the GPU with batch processing.

Hit ‘F’ for fullscreen

The results on my machine, in Chrome at fullscreen resolution (1680×1050) with the Flash Player 11 release (please don’t try it in the debug player, it’s way slower), are:

  • Sprite2D shared Texture: 2157
  • Sprite2D individual Textures: 1881
  • Sprite2DCloud: 14579
  • Sprite2DBatch: 6180

There are still a lot of things that can be optimized. For example, I’m not saving and comparing state changes in the context (texture bind/unbind checks, etc.). At least the first test could be optimized a lot with this technique, I think. Even though there is still room for optimization, I’d say that ND2D is fast enough to build some stunning games! Who needs 15 thousand moving sprites in a game? That should be more than enough ;)