
For quite a while I was struggling to make a decision: whether I can continue supporting the flash community with the ongoing development of my open source Stage3D engine ND2D.

The sad answer is: I can’t. Even though it has been so much fun building the engine, being one of the few guys who could explore and tinker with Flash Player 11 from the very early alphas (thank you, Thibault!) and bringing this engine to a level where it can compete with other professional 2D engines, it’s time for me to say goodbye.

The truth is that I just don’t do Flash / Actionscript projects anymore. It wasn’t a conscious decision, more a smooth transition. Most of my client work these days is native iOS; I have moved completely to mobile platforms. About two years ago I started to play around with the iOS SDK and built my own little mobile apps. What started out as a bit of experimenting is now the platform I use on a daily basis, and I earn my money (mostly) building native iOS apps for clients. After so many years of Flash and Actionscript it was really refreshing to learn so many new things and switch to a platform with such different capabilities. I really feel comfortable in this world, being closer to the operating system than to a virtual machine. I think that was my main drive: try a new platform and a new language.

Until now I kept adding features to ND2D, merging in pull requests and answering questions in the forum, but when you don’t use the technology in your own projects, it’s pretty hard to stay up to date with all the new stuff that has been added to the platform and really bring the engine to the next level. And it’s a bit pointless as well. So I had to draw a line here. Sad but true…

It is so cool to see what some of you have done with the engine and what nice games (#1, #2) you have made. ND2D won’t disappear. It will still be available on github with all examples and docs. And I hope it won’t be abandoned now. There are some really interesting forks (Hello Rolf!) you might want to have a look at. If any of you wants to actively continue developing ND2D, drop me a mail!

So! Thanks to all of you guys in the forum (except the spambots ;)) for the nice discussions, interesting questions and code improvements. Thanks to sHTiF for some really cool GPU deep dives and to Daniel for the nice email conversations about Stage3D. Thanks to everyone who submitted a patch or a pull request!

ND2D Extensions & Games

February 7th, 2012 | Posted by lars in Molehill / Stage3D | ND2D | Talk - (5 Comments)

You might have seen it already: Mike built some cool force field experiments with ND2D a while ago. Now he has opened a github project called Napoleon. Napoleon is a 2D physics extension for ND2D using Nape, and it looks very promising:

Second, I found a really nice looking game built with ND2D by Björn: “28 Bratwursts Later“. I really like the title ;). It’s still in development, but it looks like good fun. Check out the video here:

Another tutorial game, made by Roger, is Frogger: ND2D. He explains how to build a game like this with ND2D, with a lot of code examples. A good place to start if you are building your first game:

This one is already a few months old: Infinivaders. A stunning 8-bit retro space shooter.

I just released version 0.9.13 of my Stage3D engine. ND2D is meanwhile in a very good and stable state. All features that I planned to integrate are implemented and working, so it’s very close to v1.0. It’s about time for a little detailed »best practice and how to« post. This post is meant for the traditional flash developer who has never touched a GPU (the processor on your graphics card) accelerated environment. There are significant differences in this GPU powered world, and you have to think about and prepare your assets in a different way than you are used to. Let’s start:

What is ND2D?

ND2D is a GPU accelerated 2D game engine that makes use of the new Stage3D features introduced in Flash Player 11 (also known as Molehill). It has nothing to do with the traditional flash display list and runs on a different “layer”, behind all flash content. If you want a little low level knowledge, read Thibault’s article here. Using the GPU, the flash player is able to render full screen HD content at 60hz… Finally a dream comes true. Of course Stage3D is mainly focused on 3D, but we can make good use of the hardware for a 2D engine as well and speed things up a lot.

A GPU Environment

First of all, let’s try to understand a little how 2D rendering on a GPU works. The GPU can actually only deal with 3D data. To render 2D, we just don’t use the third dimension. So you could call ND2D a “planes-in-3D-space engine” if you like.

Unfortunately, the GPU can only deal with triangles (in the 3D world, a triangle is also called a polygon). To render a sprite, we need to construct a quad out of two triangles like this:

Next we have to specify which part of our bitmap is mapped to which corner of our quad. This is called UV mapping. As you see in the picture above, the top left corner has a UV coordinate of (0, 0), which is the top left pixel of our bitmap. The lower right corner UV(1, 1) is of course the lower rightmost pixel of our image. The GPU interpolates between these coordinates and knows which pixel to choose for a UV(0.5, 0.5) coordinate (if our image is 128×128 px, it chooses the pixel 64,64; this is called sampling). One important thing is that the GPU can only handle texture sizes that are a power of 2 (32×32, 64×32, 128×128, 256×64, etc.). In the above example, a lot of space and therefore texture memory is wasted, because ND2D has to blow up the 68×68 sized PNG of the little bacteria and create a 128×128 texture. So keep the power of two (2^n) in mind when exporting your images. Later we’ll get to know the TextureAtlas and its tools, which will take care of the unused space problem automatically.
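To make the two ideas concrete, here is a minimal actionscript sketch (not ND2D internals, just the raw data you would feed a GPU): the two triangles of a quad with their UV coordinates, plus a little helper that rounds a bitmap size up to the next power of two:

// the four corners of a 128×128 quad, centered around the origin,
// each vertex paired with its UV coordinate
var vertices:Vector.<Number> = Vector.<Number>([
//    x    y     u    v
    -64, -64,  0.0, 0.0,  // top left
     64, -64,  1.0, 0.0,  // top right
     64,  64,  1.0, 1.0,  // bottom right
    -64,  64,  0.0, 1.0   // bottom left
]);

// two triangles make up the quad: (0, 1, 2) and (0, 2, 3)
var indices:Vector.<uint> = Vector.<uint>([0, 1, 2, 0, 2, 3]);

// round a size up to the next power of two: 68 -> 128, 256 -> 256
function nextPowerOfTwo(size:int):int {
    var result:int = 1;
    while (result < size) result <<= 1;
    return result;
}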

So we need to pass all this information to the GPU: a quad/triangle definition, UV coordinates and the bitmap (on the GPU it’s called a texture). All of this is done internally in ND2D. You only have to deal with these low level details if you want to create your own objects or write your own materials and shaders.

The display hierarchy and its limitations

To mimic the displaylist, ND2D has a hierarchy very similar to the flash displaylist. It feels familiar, albeit there are significant differences we’ll get to know now. Everything in ND2D is a Node2D, which can have a number of children, just like in your normal flash display list. The drawing is done from back to front of course. The draw loop starts with the topmost parent and continues with the children. This is no different from flash’s displaylist.
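Building such a tree looks just like it does in flash. A minimal sketch (the Scene2D root and the Sprite2D texture constructor follow the ND2D examples; the asset name is made up):

// an embedded bitmap asset (hypothetical name)
var playerTexture:Texture2D = Texture2D.textureFromBitmapData(new PlayerBitmap().bitmapData);

var layer:Node2D = new Node2D();             // a plain grouping node
var player:Sprite2D = new Sprite2D(playerTexture);

layer.addChild(player);   // children are drawn on top of earlier siblings
scene.addChild(layer);    // 'scene' being your Scene2D root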

One thing that’s very important to know, basically the most important thing when you’re dealing with a GPU environment, is »how« things are sent to the GPU and drawn. Keep this in mind, because this is the bottleneck and the reason for low speed in your game: we have to send as little data to the GPU and make as few calls as possible! Unfortunately ND2D, or any other engine, can’t automate this process. Let me give you an example:

You’re building a game where you have hundreds or even thousands of fluffy little bunnies on the screen. If you created 1000 Sprite2D’s, ND2D would have to send 2000 triangles and 1000 textures to the GPU, and the GPU would have to draw them one by one, which would be just very slow. It might even be slower than a traditional blitting approach. But don’t give up so fast: there is batching. The GPU has methods that allow ND2D to send the data for 1000 sprites as one single data package instead of 1000 little ones. The downside is that the texture of all these 1000 sprites has to be the same. That’s the limitation: batching is only possible if the texture of the batched nodes is the same! Good for us if we want to display 1000 bunnies that all look the same, but what if we have lots of different looking bunnies we want to display? We can’t go back to rendering them all one by one, this would be slow…

TextureAtlases / SpriteSheets

Behold! There’s always a solution, and this one is called a TextureAtlas. If the limitation is that all sprites have to share the same texture, then why not just put all the graphics we have into one bigger texture:

By changing the UV coordinates for each sprite, we can specify which part of the texture should be drawn for our sprite. There are a few good tools that help you generate a TextureAtlas (a bitmap that has a size of 2^n), so you don’t have to do this by hand. ND2D currently supports these tools:

- TexturePacker (cocos2d + cocos2d-0.99.4 format)
- Zwoptex App (zwoptex-default format)

This is the main difference from traditional flash: instead of getting your assets one by one from a library, you “bake” them all into a big PNG. And that’s the way you should go. If, for some reason, you need a dynamic approach and want to generate this atlas on the fly, check out the “nd2d-dynatlas” extension built by wjammal (thanks mate!).

Using a batch

ND2D provides two different kinds of batches: the Sprite2DCloud and the Sprite2DBatch (I’ll explain the differences later). You just create a batch, pass it the TextureAtlas and the Texture2D and start adding children:

// create a texture from the embedded atlas bitmap and build the atlas
// from the texture size and the UV definitions in the XML
var atlasTex:Texture2D = Texture2D.textureFromBitmapData(new textureAtlasBitmap().bitmapData);
var atlas:TextureAtlas = new TextureAtlas(atlasTex.bitmapWidth, atlasTex.bitmapHeight,
        new XML(new textureAtlasXML()), TextureAtlas.XML_FORMAT_ZWOPTEX, 5, false);

// the batch owns the texture; the atlas is shared with all children
var batch:Sprite2DBatch = new Sprite2DBatch(atlasTex);
batch.setSpriteSheet(atlas);

// add an empty sprite; it receives a copy of the atlas from the batch
var s:Sprite2D = new Sprite2D();
batch.addChild(s);

As you can see, you just add an empty Sprite2D to the batch. When the child is added, the batch passes a copy of the TextureAtlas to the sprite. Then you’re able to set individual frames or play animations on that sprite:

s.spriteSheet.playAnimation("walkingBunny");

To stop any confusion: a TextureAtlas is sometimes called a SpriteSheet and vice versa. In ND2D, a TextureAtlas means a bitmap containing packed images like in the screenshot above, plus an XML definition that defines the UV coordinates for each sprite. The simpler version is a SpriteSheet, which just contains images of equal sizes and doesn’t need an XML. You can create SpriteSheets with tools like SWFSheet by Keith Peters.

Performance

In an ideal world, you would place all your graphics in one big TextureAtlas and work with just one batch. In reality that’s not always possible. The size of a texture is limited (2048 × 2048), and you sometimes can’t squeeze all your graphics and animations into it, so you might need a second batch with a second texture. You can’t nest batches, and since we live in a hierarchical world, you have to keep in mind that one batch and all of its children will be drawn before the other! So one batch could deal with all background and level assets, while the upper batch renders the characters and other foreground graphics.
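In code, that layering is nothing more than the order in which the batches are added (a minimal sketch; the two atlas textures and the ‘scene’ root are assumptions):

// levelAtlasTex / charactersAtlasTex: two separate atlas textures
var backgroundBatch:Sprite2DBatch = new Sprite2DBatch(levelAtlasTex);
var foregroundBatch:Sprite2DBatch = new Sprite2DBatch(charactersAtlasTex);

scene.addChild(backgroundBatch); // drawn first: background and level assets
scene.addChild(foregroundBatch); // drawn on top: characters and foreground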

I said I’d explain the difference between a Sprite2DCloud and a Sprite2DBatch, so here we go. I won’t get into technical details here, but there are basically two different methods for batching data. For those who are interested: ND2D – speeding up the engine.

The Sprite2DCloud does more computation on the CPU and delivers a complete package to the GPU, while the Sprite2DBatch receives “chunks” of data and processes it on the GPU:

Sprite2DCloud: Higher CPU load, lower GPU usage
Sprite2DBatch: Lower CPU load, higher GPU usage

On a desktop machine with a decent CPU, the cloud will be faster. On machines with a slower CPU or on mobile systems, the batch could be faster. So I’m afraid it’s up to you to choose which batching method you’d like to use. One more important thing about the differences: due to technical limitations (and speed optimizations) the cloud can just render its own children and won’t render the children’s children, while the batch will render the full display list tree, no limitations there. I’d always vote for the batch; even though it’s a bit slower on a desktop machine, it’s still powerful enough for our fluffy bunny horde.

There are other objects in ND2D that are fully calculated on the GPU, for example the ParticleSystem2D. Get into the details here.

Outlook

I mentioned the word »mobile« quite a few times and you might ask when Stage3D for mobile will be available. I can’t say when it will be public, but as you know, Adobe is working hard on it. All I can say is that ND2D is already prepared for mobile: MultiTouchEvents are integrated, and so is a new compressed texture format (ATF), which will hopefully be released along with Stage3D for mobile.

I hope this post was somehow useful to you and helps you get started in this new accelerated world. If you have any questions, don’t hesitate to ask them. ND2D also has a forum where a lot of questions have been answered already.


ND2D – Blur

December 7th, 2011 | Posted by lars in Molehill / Stage3D | ND2D | Pixelshader - (8 Comments)

Good news everyone. I found a little time to implement a blur shader for ND2D, and I’ll try to explain how a shader like this is implemented:

First of all: how does a blur work? To blur an image, you sample the neighbouring pixels of each pixel in the image and compute the average color. For example, take a 3×3 image where the pixel in the middle is black and the rest is white. You sample all eight neighbours of the middle pixel (r: 1.0, g: 1.0, b: 1.0) plus the pixel itself (r: 0.0, g: 0.0, b: 0.0) and compute the average (divide by 9); the resulting pixel will be (r: 0.88, g: 0.88, b: 0.88). Just do that for every pixel in the image and you’ll have a blur.
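On the CPU this idea is just a few lines. A minimal actionscript sketch for a single interior pixel (purely illustrative; this is not how the shader does it):

import flash.display.BitmapData;

// average a pixel with its 8 neighbours (x/y must not lie on the image border)
function boxBlurPixel(src:BitmapData, x:int, y:int):uint {
    var r:Number = 0, g:Number = 0, b:Number = 0;
    for (var dy:int = -1; dy <= 1; dy++) {
        for (var dx:int = -1; dx <= 1; dx++) {
            var c:uint = src.getPixel(x + dx, y + dy);
            r += (c >> 16) & 0xff;
            g += (c >> 8) & 0xff;
            b += c & 0xff;
        }
    }
    // divide the sums by the 9 samples taken
    return (uint(r / 9) << 16) | (uint(g / 9) << 8) | uint(b / 9);
}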

To implement this in a shader, we have to consider a few things. First, you want to save as many texture sampling calls as possible. For example, if you want to blur your image by 4 pixels horizontally and vertically, you would have to take 9 × 9 = 81 samples per pixel (4 to the left, 4 to the right, 4 up, 4 down, plus the pixel that should be blurred itself). This is way too much, and you could never squeeze it into a fragment shader with AGAL. But there is a trick: first blur your image horizontally, then take the result and blur it vertically. This way you only have to take 9 + 9 = 18 samples (see Article: Gaussian Blur Shader). Implementing it this way means we do a horizontal blur, write the output to a texture and do a vertical blur on the already horizontally blurred texture; in other words, a two pass rendering. A nice side effect of this approach is that we can not only blur in the x AND y directions, but in x OR y individually.

So we’ve implemented our blur and are happy that everything is blurry with a 4×4 blur, but how do we animate it? We could generate the shader dynamically, so that we’d have a different shader for different blur values, but space in a fragment shader is limited: a program can’t exceed a certain size. What if we want a blur of 50 × 50? We can’t write a shader that does this. The program would just be too big, since we don’t have loops in AGAL.

One part of the answer is good old Carl Friedrich Gauß. He invented a formula a few hundred years ago that lets us weight the sampled pixels (see Article: Gaussian Blur and an Implementation). So our shader can remain static and always sample 9 pixels, but the gaussian function tells us how the samples are weighted. Instead of dividing all samples by 9, we have a factor for each sample. Now not only is the blur dynamic, it even looks a lot better with the gauss values than with our simple “divide by 9” approach. Neat! Now we can animate a blur from 0 to 4 pixels. That’s ok, but we wanted 50 or more, remember?
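Computing those factors is straightforward. A small sketch (choosing sigma per blur radius is my assumption, not the exact values ND2D uses):

// normalized weights for a gaussian kernel, e.g. 9 taps
function gaussianWeights(taps:int, sigma:Number):Vector.<Number> {
    var weights:Vector.<Number> = new Vector.<Number>(taps, true);
    var center:int = taps >> 1;
    var sum:Number = 0;
    for (var i:int = 0; i < taps; i++) {
        var x:Number = i - center;
        weights[i] = Math.exp(-(x * x) / (2.0 * sigma * sigma));
        sum += weights[i];
    }
    // normalize, so the weights add up to 1.0 and the image keeps its brightness
    for (i = 0; i < taps; i++) {
        weights[i] /= sum;
    }
    return weights;
}

var kernel:Vector.<Number> = gaussianWeights(9, 2.0); // passed to the shader as constants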

The last and final part of our fully dynamic blur shader: just repeat what we’ve done already! If you want a blur of 10, blur two times by 4 pixels, followed by a 2 pixel blur. Implementing this is also straightforward: setRenderToTexture(), renderBlur(), switchTextures(), all done in a loop.
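Roughly like this (a sketch of the loop only; renderBlurPass() and the render target textures are hypothetical placeholders, not the actual ND2D internals):

import flash.display3D.Context3D;
import flash.display3D.textures.Texture;

// repeatedly blur 'scene' in up to 4px steps, ping-ponging between texA and texB.
// renderBlurPass(source, blurX, blurY) is assumed to draw a fullscreen quad
// with the 9-tap shader into the current render target.
function blurPingPong(context:Context3D, scene:Texture, texA:Texture, texB:Texture,
                      blurX:Number, blurY:Number, renderBlurPass:Function):Texture {
    const MAX_BLUR:Number = 4.0; // the biggest blur a single shader pass can do
    var source:Texture = scene;
    while (blurX > 0 || blurY > 0) {
        context.setRenderToTexture(texA);
        renderBlurPass(source, Math.min(blurX, MAX_BLUR), 0); // horizontal pass
        context.setRenderToTexture(texB);
        renderBlurPass(texA, 0, Math.min(blurY, MAX_BLUR));   // vertical pass
        blurX = Math.max(0, blurX - MAX_BLUR);
        blurY = Math.max(0, blurY - MAX_BLUR);
        source = texB; // switchTextures(): the result feeds the next iteration
    }
    context.setRenderToBackBuffer();
    return source;
}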

Enough of the tech talk, here’s the result (move your mouse to blur the sprites in x and/or y):

You’ll notice the ugly edges in the middle image. This happens if the blur is larger than the transparent space available in the texture, so the blur is “cut off”. I haven’t found a good solution for this, except: leave enough space in your textures if you want to blur them ;)

I found some time to add a little bit more “D” to ND2D. Besides the regular “rotation” property, which rotates around the z-axis, all nodes now have rotationX, rotationY and rotationZ properties and are displayed via a perspective projection. It works similar to the Flash 10 2.5D API (planes in space) and could be useful for some fancy transition effects.
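Usage is as simple as it sounds (a minimal sketch; the texture variable is assumed to exist):

var card:Sprite2D = new Sprite2D(cardTexture);
card.rotationY = 45.0; // tilt around the vertical axis
card.rotationX = 10.0; // tilt around the horizontal axis
card.rotation  = 5.0;  // the old property, still rotating around z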

Second, I added a few properties to change the appearance of textures. You can stretch textures now and define how they should be sampled. The API lets you choose how the texture is filtered, whether mipmapping should be used and how the mipmaps are filtered. I created four predefined quality settings: LOW, MED, HIGH and ULTRA. Have fun:
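Something along these lines (a sketch; the TextureOption class name and the property are assumptions based on the presets mentioned above, so check the sources for the exact API):

var s:Sprite2D = new Sprite2D(tex);
// one of the four predefined quality presets: LOW, MED, HIGH, ULTRA (assumed naming)
s.material.textureOptions = TextureOption.QUALITY_ULTRA;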

ND2D – Speed tests

October 23rd, 2011 | Posted by lars in Molehill / Stage3D | ND2D | Talk - (15 Comments)

When talking about accelerated 2D in Flash, everybody is always asking for performance comparisons. So I threw together a little speed test for ND2D, mainly to give you some numbers, but also to test the different implementations of ND2D‘s objects. After selecting one of the four options, the test will keep adding sprites until the framerate drops below 60hz. While adding sprites, the framerate will likely drop below 60hz for a short while, because adding and creating objects is expensive too. But what counts is the end result.

This test allows you to compare four different types of objects / rendering:

  • Sprite2D with a shared texture. Every sprite is drawn in a separate drawCall, but there’s only one texture in memory
  • Sprite2D with individual textures. A drawCall for every sprite is used as well, and there are as many textures in memory as there are sprites
  • Sprite2DCloud. All sprites share a texture and are drawn in a single drawCall. All movement is calculated on the CPU and the vertexbuffer is uploaded to the GPU every frame
  • Sprite2DBatch. Shared texture as well, but most of the work is done by the GPU with batch processing.


Hit ‘F’ for fullscreen

The results on my machine in Chrome at fullscreen resolution (1680 × 1050) with the Flash Player 11 release (please don’t try it in the debug player, it’s way slower) are:

  • Sprite2D shared Texture: 2157
  • Sprite2D individual Textures: 1881
  • Sprite2DCloud: 14579
  • Sprite2DBatch: 6180

There are still a lot of things that can be optimized. For example, I’m not saving and comparing state changes in the context (texture bind / unbind checks, etc.). At least the first test could be optimized a lot with this technique, I think. But even though there is still room for optimization, I’d say that ND2D is fast enough to build some stunning games! Who needs 15 thousand moving sprites in a game? That should be more than enough ;)

A few people were wondering why they can’t control individual particles in the ND2D particlesystem. Let me explain why:

The ParticleSystem2D is built for speed. This means that everything, and really everything, for each particle is calculated on the GPU. When you create a system, the starting values for each particle are created once and uploaded to the GPU. From then on, everything is calculated in shaders based on the current time step. This way ND2D is able to render 10,000 (or even more) particles at 60hz without any CPU usage. The drawback is that you don’t have control over each particle, but you’ll have a lot of CPU time left for more important stuff. The ParticleSystem2D can be used for effects like rain, fire or water, but you won’t be able to animate a swarm of birds with it. You can play around with the system below, but be careful: depending on the size of the particles, you can display 10,000 at 60hz or nearly freeze your machine. The larger, the slower.

If you want control over individual particles, you can use one of the batch nodes provided by ND2D: the Sprite2DCloud or the Sprite2DBatch. With these batch nodes you’re able to move each child, but they are slower, because all the positional information has to be uploaded to the GPU every single frame. When I say slower, I mean that you can still display 1000 (or a lot more) particles alphablended at 60hz. This should be enough for a whole army of knights or a fancy mouse follower. Play around with it here:

And if you haven’t installed the new Flash Player 11 that was released yesterday, grab it here.

One really cool thing about textures on the GPU is the different wrap modes used when sampling pixels from them. In Molehill, there are two different types available:

  • CLAMP – if UV coordinates are lower than zero or greater than one, the coordinates are clamped to 0..1, so the edge pixels are repeated
  • REPEAT – if UV coordinates are lower than zero or greater than one, the whole texture is repeated. So for a UV of (1.2, 1.4) the pixel at (0.2, 0.4) is sampled (see the sketch below)
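Expressed in actionscript, REPEAT simply samples the fractional part of the coordinate (a one-line sketch of what the GPU does for you):

// 1.2 -> 0.2, 2.4 -> 0.4, -0.3 -> 0.7
function wrapRepeat(uv:Number):Number {
    return uv - Math.floor(uv);
}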

Simply put: if you set the wrapmode to REPEAT, animate the UV coordinates and use a self repeating texture, you’ll have the simplest endless scroller you can imagine. Don’t worry, everything is built into ND2D, so you don’t have to care about what I just told you. Just watch the example:

This example is included in the ND2D Examples on Github. The scene just consists of two sprites with a fixed position in the middle of the screen. The only thing that is done on the CPU in the step loop is this:

override protected function step(elapsed:Number):void {
    starfield1.material.uvOffsetX -= (stage.stageWidth * 0.5 - mouseX) * 0.00002;
    starfield1.material.uvOffsetY -= (stage.stageHeight * 0.5 - mouseY) * 0.00002;
    starfield2.material.uvOffsetX -= (stage.stageWidth * 0.5 - mouseX) * 0.00004;
    starfield2.material.uvOffsetY -= (stage.stageHeight * 0.5 - mouseY) * 0.00004;
}

This can come in handy if you want to animate a waterfall, waves or a space field background in your game. Have fun!

I never really introduced the TextureRenderer of ND2D and the possibilities you have when using it. The TextureRenderer does what the name suggests: it renders a display object (Sprite2D, etc.) and all of its children onto a Context3D texture. The cool thing is that you are able to draw your entire scene to a (fullscreen) texture and add post processing effects, by writing a new material / shader and displaying the texture via a standard Sprite2D.

Here’s the plain scene without post processing:

… and here with a small “dizzyness” post process shader:

I’ve added this test to the examples included in the ND2D sources. You can see the live running example here (test #18).

ND2D – Stage3D Masks

September 2nd, 2011 | Posted by lars in Actionscript | Molehill / Stage3D | ND2D | Source - (7 Comments)

Another feature I really wanted to implement in ND2D was masks, just like the setMask() method in flash. In Stage3D (OpenGL) there is no such thing as a mask. You can display textured triangles, that’s it. But you know that nearly everything is possible with a pixel shader, so let’s start:

The idea of masking in a fragment shader is to grab the pixel color of your texture, then grab the pixel color of your mask, multiply the two colors and display the result. But how do we find the correct pixel in the mask? Our task is to find the right UV coordinates for the mask texture.

If you look at the above image, the mask is rotated and overlaps the sprite we want to mask. How do we find the correct pixel (UV coordinate) of the mask that overlaps this orange pixel in the sprite? Somehow we have to map the position of the pixel in the sprite to the pixel in the mask, and we can do that by transforming it between the different coordinate systems. In a vertex shader we calculate the final pixel position from local space to world space. The idea is to map this pixel in world space back to the local coordinate system of the mask. This way it’s pretty easy to find the correct UV coordinates. Let’s do a simple actionscript test:

import flash.geom.Matrix3D;
import flash.geom.Rectangle;
import flash.geom.Vector3D;

// this is the top right corner of our sprite quad
var v:Vector3D = new Vector3D(128, -128, 0, 1);

// this is the sprite's matrix, translated a bit
var clipSpaceMatrix:Matrix3D = new Matrix3D();
clipSpaceMatrix.appendTranslation(100, 0, 0);

// this is the mask's matrix, it's in the same position as the sprite
var maskClipSpaceMatrix:Matrix3D = new Matrix3D();
maskClipSpaceMatrix.appendTranslation(100, 0, 0);

// this is the mask's size
var maskBitmap:Rectangle = new Rectangle(0, 0, 256, 256);

// invert the matrix, because we want to map back from world space to local mask space
maskClipSpaceMatrix.invert();

// transform our vertex from local sprite space to world space
v = clipSpaceMatrix.transformVector(v);
trace("moved to clipspace: " + v); // Vector3D(228, -128, 0)

// transform the world space vertex back to local mask space.
// the result is the same vector of course, because the positions of mask and sprite are equal
v = maskClipSpaceMatrix.transformVector(v);
trace("moved to local mask space: " + v); // Vector3D(128, -128, 0)

// calculate the uv coordinates from the local pixel position
v = new Vector3D((v.x + (maskBitmap.width * 0.5)) / maskBitmap.width,
                 (v.y + (maskBitmap.height * 0.5)) / maskBitmap.height,
                 0.0, 1.0);

// the result is what we expect, the top right uv coordinate
trace("local mask uv: " + v); // Vector3D(1, 0, 0)

Porting this idea to a shader is pretty straightforward. Let’s code a PB3D material shader:

void evaluateVertex()
{
     interpolatedUV = float4(uvCoord.x + uvOffset.x, uvCoord.y + uvOffset.y, 0.0, 0.0);
 
     float4 worldSpacePos = float4(vertexPos.x, vertexPos.y, 0.0, 1.0) * objectToClipSpaceTransform;
     // maskObjectToClipSpaceTransform is the inverted clipspace matrix of the mask
     float4 localMaskSpacePos = worldSpacePos * maskObjectToClipSpaceTransform;
 
     // halfMaskSize.xy is maskBitmap.width/height * 0.5 passed as a parameter
     // invertedMaskSize.xy = 1.0 / maskBitmap.width/height passed as a parameter, because divisions are not properly working in the current pb3d release
     interpolatedMaskUV = float4((localMaskSpacePos.x + halfMaskSize.x) * invertedMaskSize.x,
                                 (localMaskSpacePos.y + halfMaskSize.y) * invertedMaskSize.y,
                                  0.0, 0.0);
}
 
void evaluateFragment()
{
    float4 texel = sample(textureImage, float2(interpolatedUV.x, interpolatedUV.y), PB3D_2D | PB3D_MIPNEAREST | PB3D_CLAMP);
    float4 texel2 = sample(textureMaskImage, float2(interpolatedMaskUV.x, interpolatedMaskUV.y), PB3D_2D | PB3D_MIPNEAREST | PB3D_CLAMP);
 
    result = float4(texel.r * color.r * texel2.r,
                    texel.g * color.g * texel2.g,
                    texel.b * color.b * texel2.b,
                    texel.a * color.a * texel2.a);
}

If you don’t want to use PixelBender3D and like to ‘torture’ yourself with AGAL, you can write the same shader this way:

/*
vertex shader:
 
vc0-vc3 = clipspace matrix of sprite
vc4-vc7 = inverted clipspace matrix of mask
vc8.xy = half mask width / height
vc8.zw = mask width / height
va0 = vertex
va1 = uv
*/
 
m44 vt0, va0, vc0           // vertex * clipspace
m44 vt1, vt0, vc4           // clipspace to local pos in mask
add vt1.xy, vt1.xy, vc8.xy  // add half masksize to local pos
div vt1.xy, vt1.xy, vc8.zw  // local pos / masksize
mov v0, va1                 // copy uv
mov v1, vt1                 // copy mask uv
mov op, vt0                 // output position
 
/*
fragment shader:
*/
 
mov ft0, v0                                // get interpolated uv coords
tex ft1, ft0, fs0 <2d,clamp,linear,nomip>  // sample texture
mov ft2, v1                                // get interpolated uv coords for mask
tex ft3, ft2, fs1 <2d,clamp,linear,nomip>  // sample mask
mul ft1, ft1, ft3                          // mult mask color with tex color
mov oc, ft1                                // output color

The result is visible here: ND2D – alpha masks (move your mouse over the crates). I added one more feature: you can set the alpha of a mask, which means you can specify how much the mask affects the sprite. In the demo above, the alpha fades from 0.0 to 1.0. Since we’re using all four color components in our calculations (r, g, b, a), we can not only mask the alpha, but all color channels. I don’t know if it’s a “nice thing to have” or if it will get annoying when you use sprites as masks in your game and need to provide an extra image for that. Just let me know :) Here is the example: ND2D – disco color masks.

ND2D – Pixel Bleeding

August 30th, 2011 | Posted by lars in Molehill / Stage3D | ND2D | OpenGL | Talk - (9 Comments)

This post is more of a note to myself, but you might find it interesting.

There was a bug in ND2D that had been annoying me for a while, but I didn’t have the time to fix it: it appears when you use spritesheets in which the sprites are packed without any space between them, like this one:

It’s likely that you run into issues where the GPU draws the pixels of another sprite around your sprite. It then looks like this (the lower image is the fixed version):

If you use mip-mapping it gets even worse, but that’s another story…

This happens because OpenGL / DirectX needs the uv-coordinates to be centered on the pixel, not on the edge of the pixel. The solution is pretty simple: instead of calculating the uv-coordinates from 0 to screenwidth, you’re technically supposed to calculate from 0.5 to screenwidth - 0.5. This way the edge pixels are “cropped” out and the bleeding stops :)
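In actionscript terms, the half-texel correction for an atlas frame could look like this (a hypothetical helper to illustrate the math, not the actual ND2D code):

import flash.geom.Rectangle;

// convert a frame (in pixels) inside an atlas texture to UV space,
// inset by half a texel so the GPU samples pixel centers, not edges
function frameToUV(frame:Rectangle, texWidth:Number, texHeight:Number):Rectangle {
    return new Rectangle(
        (frame.x + 0.5) / texWidth,
        (frame.y + 0.5) / texHeight,
        (frame.width - 1.0) / texWidth,    // width shrinks by 2 * 0.5 texels
        (frame.height - 1.0) / texHeight); // same for the height
}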

Operation successful, patient alive & breathing. Nurse, I need a drink, cheers!

Hi there,

I just updated ND2D to the latest public beta of Flash Player 11. I’m totally amazed by how much faster the new player is. Without any code changes I get twice the FPS in most of my demos. Check it out:

ND2D – Demo

April 29th, 2011 | Posted by lars in Molehill / Stage3D | ND2D | Talk - (20 Comments)

I needed some more serious game scenarios to test ND2D. So I created this little sidescroller demo:

Be patient, there is no preloader… The visuals I created were heavily inspired by Glit. I hope they will release a playable version of the game soon!

It features most of the effects currently implemented in ND2D:
- 2D Sprites (floor and ceiling)
- Particles (fire and moving dust)
- 2D Grid (Distortion effect on the ‘cloud’ layer)
- 2D SpriteSheets (waving grass)

This little demo runs in full screen at 60hz on my machine! Yay! I’ll add it to the examples, along with the latest improvements I made for ND2D, in the next few days.

ND2D – Box2D Tests

April 27th, 2011 | Posted by lars in Molehill / Stage3D | ND2D | Talk - (1 Comments)

Good news everyone. Sven was so kind as to create a little demo with Box2D and ND2D. The performance is already pretty good, but there are still a lot of things I have to optimize. I’ll include the source code of the Box2D example in the sources and post some more details about the latest ND2D features in the next few days.


(Note: The demo is broken with the latest Flash Player 11 release due to API changes)

ND2D – beta released

April 12th, 2011 | Posted by lars in Molehill / Stage3D | ND2D | Talk - (7 Comments)

Yay! I just released the first beta of ND2D. You can grab the sources via my github account: nulldesign/nd2d.

There is still a lot to do, especially in terms of performance. Since I decided to use PixelBender3D and not AGAL as my shader language, I have to wait for the next release, because a lot of features that are available in the AGAL opcodes are still missing (no KIL instruction, no arrays, etc…).

Please play around with it, fork it, use it and send me feedback!!!

Update: You can try out a few live demos here.