Wednesday, January 6, 2016

LibGDX game jam - dev log #2

The next big addition to my LibGDX game jam project is a set of character dialog menus, inspired by those from the early Final Fantasy games.



To support this feature, the main additions have been:
  • a DialogSequence object, which contains an ArrayList of Strings to store the lines of text that appear, and an Iterator and some basic access methods for convenience (add, next, hasNext)
  • a DialogTrigger object, which extends the Box2DActor class and contains:
    • a DialogSequence object
    • a boolean variable called displayOnce that, as the name suggests, determines if the dialog displays only once, or will appear repeatedly
  • a set of corresponding user interface elements: the text is displayed in a Label (with a NinePatch background) on the Stage containing the user interface elements. The alignment is managed with a Table. To continue, the user needs to press the C key. Thus, in addition, there is an Image of the keyboard C key overlaid on the lower right corner of the Label, which is accomplished using a Stack object.
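As a concrete illustration, here is a minimal plain-Java sketch of the DialogSequence object described above -- the method names (add, next, hasNext) come from the list, but the other details are my own guesses rather than the project's actual code:

```java
import java.util.ArrayList;
import java.util.Iterator;

// Hypothetical sketch: an ArrayList of Strings stores the lines of text,
// and an Iterator backs the convenience methods next and hasNext.
class DialogSequence {
    private ArrayList<String> lines = new ArrayList<String>();
    private Iterator<String> iterator;

    public void add(String line) { lines.add(line); }

    public boolean hasNext() {
        if (iterator == null) iterator = lines.iterator();
        return iterator.hasNext();
    }

    public String next() {
        if (iterator == null) iterator = lines.iterator();
        return iterator.next();
    }
}
```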

When the player comes into contact with a DialogTrigger object (as determined by a Box2D ContactListener object), the text is displayed on the dialog Label and the Stack containing the objects discussed above is set to visible. While the dialog text is visible, the program automatically returns from the update method, which means that the physics simulation does not run and continuous user inputs are not polled (thus none of the game world entities move). However, Actions are still processed, and discrete user input is still processed - this enables a subtle animated "pulsing" effect on the keyboard key image to take place, and also allows the user to press the key to cycle through the text contained in the DialogSequence object.
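The pausing behavior can be modeled without any LibGDX classes; the following is just a plain-Java illustration of the early-return idea, not the project's actual update method:

```java
// Illustration only: while a dialog is visible, update() returns early, so
// the physics step and continuous input polling are skipped; a discrete
// key-press handler (e.g. for the C key) still runs and can hide the dialog.
class DialogPauseSketch {
    public boolean dialogVisible = false;
    public int physicsSteps = 0;

    public void update(float deltaTime) {
        if (dialogVisible)
            return; // game world is frozen while dialog text is shown
        physicsSteps++; // stands in for world.step(...) and input polling
    }

    // Called from a discrete key-press handler when the user presses C.
    public void advanceDialog(boolean moreTextRemains) {
        dialogVisible = moreTextRemains; // hide the dialog when text runs out
    }
}
```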

Some of the DialogTrigger data can be set up in Tiled; this is particularly useful for situations where dialog should be triggered when the player walks over a certain area that doesn't correspond to a pre-existing game world object. This is accomplished by creating a Rectangle on the Object layer, named "DialogTrigger", and then adding a set of properties whose names contain the letters "Text" (Text0, Text1, Text2, etc.) and whose values are the strings to be displayed. If a property called "Once" is included, the DialogTrigger object will be configured (a boolean value will be set) so that the dialog sequence displays only once. Other dialog sequences can be configured in the code itself, for cases where a pre-existing object should trigger a dialog sequence -- for example, when obtaining a key, you may want to display the message "You obtained a key!"
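A hypothetical sketch of reading that property convention, using a plain Map in place of Tiled's actual property objects (the class and method names here are my own):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Collect the values of properties whose names contain "Text" (Text0, Text1,
// ...), in key order, and check whether a "Once" property is present.
class DialogTriggerProperties {
    public static List<String> getTextLines(Map<String, String> props) {
        // a TreeMap sorts the keys, so Text0, Text1, ... come out in order
        // (lexicographic order is fine for up to ten lines of text)
        TreeMap<String, String> sorted = new TreeMap<String, String>(props);
        List<String> lines = new ArrayList<String>();
        for (Map.Entry<String, String> entry : sorted.entrySet())
            if (entry.getKey().contains("Text"))
                lines.add(entry.getValue());
        return lines;
    }

    public static boolean displaysOnce(Map<String, String> props) {
        return props.containsKey("Once");
    }
}
```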

Finally, if the DialogTrigger field displayOnce is set to true, then when the message is complete, the object is added to removeList, which removes the corresponding body (the one that triggered the interaction) from the physics world and also removes the object from its previously set "parentList" (the ArrayList of DialogTrigger objects contained in the room), so even if the room is reloaded later, the dialog will not display again.
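That cleanup step is simple enough to illustrate in isolation (again a plain-Java sketch; the physics-body removal via removeList is omitted here):

```java
import java.util.ArrayList;

// When a once-only dialog finishes, the trigger is dropped from the room's
// list of DialogTrigger objects, so reloading the room cannot recreate it.
class TriggerCleanup {
    public static <T> void finishDialog(boolean displayOnce, T trigger,
                                        ArrayList<T> parentList) {
        if (displayOnce)
            parentList.remove(trigger); // body removal omitted in this sketch
    }
}
```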

The code that implements these features is currently posted on GitHub.


Tuesday, December 29, 2015

LibGDX game jam - dev log #1


The LibGDX Game Jam is on!

In addition to using LibGDX for the coding, I'll be using Tiled (http://www.mapeditor.org/) for level design and Box2D (http://box2d.org/) for physics. For prototyping I'll be using some spritesheets from (http://kenney.nl/). If time permits, I'll work on some custom art assets later.

All the code will be posted on GitHub at https://github.com/stemkoski/LibGDX-Jam

The timing is great, as I've just finished writing Beginning Java Game Development with LibGDX, and this will be an excellent opportunity to build on the foundations presented by that book (source code from the book can be downloaded at http://www.apress.com/9781484215012). To get things up and running quickly, I'll be using (and improving) the following Java classes developed in the book:

  • BaseGame and BaseScreen. These are abstract classes that extend the LibGDX Game and Screen classes, respectively.
  • BaseActor. Extends the LibGDX Actor class, contains Texture data and collision Shape data (which we won't use, instead delegating collision handling to Box2D).
  • AnimatedActor. Extends the BaseActor class, contains Animation data. 
  • Box2DActor. Extends the AnimatedActor class, contains physics-related (Body) data for Box2D.
  • GameUtils. A utility class containing methods to simplify (1) creating animations from files, and (2) processing collisions from Box2D.
  • ParticleActor. Extends the Actor class; useful for particle effects created with the LibGDX particle editor.
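As a skeleton, the actor inheritance chain looks like this (the LibGDX Actor class is stubbed out here so the sketch compiles on its own; all bodies are omitted):

```java
// Stand-in for com.badlogic.gdx.scenes.scene2d.Actor
class Actor {}

class BaseActor extends Actor {}          // Texture data + (unused) Shape data
class AnimatedActor extends BaseActor {}  // adds Animation data
class Box2DActor extends AnimatedActor {} // adds Box2D Body data
class ParticleActor extends Actor {}      // for particle effects
```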


The theme for the LibGDX game jam is "Life in Space". My first thought was outer space -- planets, stars, rocketships, asteroids, etc. From a game design perspective, I've always had a great deal of respect for the classic arcade game Asteroids, for two reasons:

  • its use of immersion: the controls move the spaceship relative to its orientation, rather than relative to the screen directions
  • the shape of its universe: this game features wraparound (the left side of the screen is connected to the right; the top side of the screen is connected to the bottom), and so the Asteroids universe actually exists on the surface of a torus (the shape of a doughnut)
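Wraparound amounts to taking each coordinate modulo the world size; a minimal sketch:

```java
// Toroidal wraparound: leaving one edge re-enters from the opposite edge.
// Works for both axes; a negative coordinate wraps around to the far side.
class Wraparound {
    public static float wrap(float value, float worldSize) {
        float wrapped = value % worldSize;
        return (wrapped < 0) ? wrapped + worldSize : wrapped;
    }
}
```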

And this train of thought led me to my interpretation of the theme: a key gameplay mechanic is going to be the player figuring out the shape of the space they are in. There will be a series of rooms, but in this universe, they will not be connected as a flat grid. This game will be more puzzle/maze-based than combat-based.

The first step is going to be creating a data structure (called a Room) to hold tilemap data, physics objects, stage objects, etc. The current plan is for each map created with Tiled to have three layers: a tile image layer, a layer for interactive game objects (keys, doors, etc.), and a layer to store non-interactive physics objects (the walls). Each room will have an ID, a number of SpawnPoint objects (each with its own ID), and a number of Portal objects (which contain a destination room ID and spawn point ID). The main game will read in the Tiled TMX files, store each in a custom Room object, and store the set of Rooms in a HashMap (indexed by ID).
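Here's a hypothetical sketch of that structure in plain Java -- the class and field names follow the description above, but the details are invented for illustration:

```java
import java.util.HashMap;

// A spawn point within a room, identified by its own ID.
class SpawnPoint {
    String id;
    float x, y;
    SpawnPoint(String id, float x, float y) { this.id = id; this.x = x; this.y = y; }
}

// A portal stores the IDs of its destination room and spawn point.
class Portal {
    String destinationRoomId, destinationSpawnId;
    Portal(String roomId, String spawnId) {
        destinationRoomId = roomId;
        destinationSpawnId = spawnId;
    }
}

// A room holds its ID and its spawn points (tilemap/physics data omitted).
class Room {
    String id;
    HashMap<String, SpawnPoint> spawnPoints = new HashMap<String, SpawnPoint>();
    Room(String id) { this.id = id; }
}

// The main game stores the set of Rooms in a HashMap indexed by ID.
class GameWorld {
    HashMap<String, Room> rooms = new HashMap<String, Room>();

    // Follow a portal: look up the destination room, then its spawn point.
    SpawnPoint traverse(Portal portal) {
        Room destination = rooms.get(portal.destinationRoomId);
        return destination.spawnPoints.get(portal.destinationSpawnId);
    }
}
```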

The images below show a room in Tiled, and as rendered in LibGDX.

a room in Tiled
room rendered in LibGDX

Working code that implements these steps is currently available on GitHub.


Monday, December 28, 2015

Book Published! Beginning Java Game Development with LibGDX


I've written a book called Beginning Java Game Development with LibGDX, published by Apress, and available here: http://www.apress.com/9781484215012



Here's what the book is about (quoted from the introduction):
In this book, you’ll learn how to program games in Java using the LibGDX game development framework. The LibGDX libraries are both powerful and easy to use, and they will enable you to create a great variety of games quickly and efficiently. LibGDX is free and open-source, can be used to make 2D and 3D games, and integrates easily with third-party libraries to support additional features. Applications created in LibGDX are truly cross-platform; supported systems include Windows, Mac OS X, Linux, Android, iOS, and HTML5/WebGL. 
I have taught courses in Java programming and video game development for many years, and I’ve often struggled to find game programming books that I can recommend to my students without reservation, which led me to write this book. In particular, you will find that this book contains the following unique combination of features, chosen with the aspiring game developer in mind:
  • This book recommends and explains how to use a simple Java development environment (BlueJ) so that you can move on to programming games more quickly.
  • By using the LibGDX framework, you won’t have to “reinvent the wheel” for common programming tasks such as rendering graphics and playing audio. (An explanation of how to write such code from scratch could easily require fifty or more additional pages of reading.) LibGDX streamlines the development process and allows you to focus on game mechanics and design.
  • This book contains many examples of video games that can be developed with LibGDX. The first few example projects will introduce you to the basic features provided by the framework; these starter projects will be extended in the chapters that follow to illustrate how to add visual polish and advanced functionality. Later projects will focus on implementing game mechanics from a variety of genres: shoot-’em-ups, infinite side scrollers, drag-and-drop games, platform games, adventure games with a top-down perspective, and 2.5D games. I believe that working through many examples is fundamental in the learning process; you will observe programming patterns common to many games, you will see the benefits of writing reusable code in practice, you will have the opportunity to compare and contrast code from different projects, and you will gain experience by implementing additional features on your own.
  • At the beginning of this book, I am only assuming that you have a basic familiarity with Java programming. (Details about what background knowledge you need are discussed in the appendix.) Throughout the first few chapters of this book, advanced programming concepts will be introduced and explained as they arise naturally and are needed in the context of game programming. By the time you reach the end of this book, you will have learned about many advanced Java programming topics that are also useful for software development in general.
And here are some screenshots of some of the many games and demos you'll create in the book:

Cheese, Please!
Balloon Buster
Starfish Collector
Space Rocks - inspired by Asteroids
Plane Dodger - inspired by Flappy Bird
Rectangle Destroyer - inspired by Breakout
52 Card Pickup
Starscape - a particle effects demo
Jumping Jack - a sandbox style physics demo
Jumping Jack 2 - a platformer game
Treasure Quest - a top-down adventure/rpg style game
Pirate's Cove - an interactive 3D (2.5D) demo

Interested? Check it out at: http://www.apress.com/9781484215012!


Thursday, December 5, 2013

Interactive Earth with Three.js

I have always been amazed by the Small Arms Import/Export Chrome Experiment at http://www.chromeexperiments.com/detail/arms-globe/ -- it combines data visualization (geography and timeline) with an interactive 3D model of the Earth, using the amazing Javascript library Three.js. The code is open source, available on GitHub, and I wanted to adapt part of it to a project I've been working on.

As a first step, I wanted to implement the country selection and highlighting. I tried reading through the source code, but eventually got lost in the morass of details.  Fortunately, I stumbled upon a blog post written by Michael Chang about creating this visualization; read it at http://mflux.tumblr.com/post/28367579774/armstradeviz. There are many great ideas discussed there, in particular about encoding information in images.  Summarizing briefly: to determine which country was clicked, there is a grayscale image in the background, with each country colored a unique shade of gray. Determine the gray value (0-255) of the clicked pixel, and the corresponding country can be identified by a table lookup. (This technique is also mentioned at well-formed-data.net.) The highlighting feature is implemented just as cleverly: a second image, stored as a canvas element with dimensions 256 pixels by 1 pixel, contains the colors that will correspond to each country. The country corresponding to the value X in the previously mentioned table lookup will be rendered with the color of the pixel on the canvas at position (X,0). A shader is used to make this substitution and color each country appropriately. When a mouse click is detected, the canvas image is recolored as necessary: if the country corresponding to value X is clicked, the canvas pixel at (X,0) is changed to the highlight color, and all other pixels are changed to the default color (black).
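The lookup technique itself is language-agnostic; here is an illustration in Java (the demo itself is JavaScript, and the table entries below are placeholders, not the visualization's actual data):

```java
// ID-buffer picking: the clicked pixel's gray value (0-255) indexes into a
// table of countries. A real table would have one entry per country shade.
class CountryPicker {
    static final String[] COUNTRY_TABLE = new String[256];
    static {
        COUNTRY_TABLE[0] = "ocean";        // placeholder entries
        COUNTRY_TABLE[1] = "Afghanistan";
        COUNTRY_TABLE[2] = "Albania";
    }

    public static String pick(int grayValue) {
        return COUNTRY_TABLE[grayValue];
    }
}
```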

For aesthetic reasons, I also decided to blend in a satellite image of Earth. Doing this with shaders is straightforward: one includes an additional texture and then mixes the color (vec4) values in the desired ratio in the fragment shader. However, the grayscale image that Chang uses is slightly offset and somehow differently projected from standard Earth images, which typically align the longitude of 180 degrees with the left border of the image.  So I did some image editing: using the satellite image as the base layer and Chang's grayscale image as a secondary translucent layer, I attempted to cut, paste, move, and redraw parts of the grayscale image so that they lined up. The results are far from pixel perfect, but adequate for my purposes. (If anyone reading this knows of a source for a more exact/professionally made image of this kind, could you drop me a line? Either stemkoski@gmail or @ProfStemkoski on Twitter.) Then I used an edge detection filter to produce an image consisting of outlined countries, and blended that into the final result in the fragment shader code as well. The result:


The code discussed above is available online at http://stemkoski.github.io/Three.js/Country-Select-Highlight.html.

More posts to follow on this project...

Happy coding!

Friday, July 5, 2013

Shaders in Three.js: glow and halo effects revisited

While updating my collection of Three.js examples to v58 of Three.js, I decided to clean up some old code.  In particular, I hoped that the selective glow and atmosphere examples could be simplified - removing the need for blending the results of a secondary scene - by using shaders more cleverly.  I posted a question at StackOverflow, and as often happens, the process of writing the question as clearly as possible for an audience other than myself helped to clarify the issue.  The biggest hint came from a post at John Chapman's Graphics blog:
The edges ... need to be softened somehow, and that's where the vertex normals come in. We can use the dot product of the view space normal ... with the view vector ... as a metric describing how near to the edge ... the current fragment is.
Linear algebra is amazingly useful.  Typically, for those faces that appear (with respect to the viewer) to be near the center of the mesh, the normal of the face will point in nearly the same direction as the "view vector" -- the vector from the camera to the face -- and then the dot product of the normalized vectors will be approximately equal to 1.  For those faces that appear to be on the edge as viewed from the camera, their normal vectors will be perpendicular to the view vector, resulting in a dot product approximately equal to 0.  Then we just need to adjust the intensity/color/opacity of the material based on the value of this dot product, and voila! we have a material whose appearance looks the same (from the point of view of the camera) no matter where the object or camera is, which is the basis of the glow effect. (The glow shouldn't look different as we rotate or pan around the object.)  By adjusting some parameters, we can produce an "inner glow" effect, a "halo" or "atmospheric" effect, or a "shell" or "surrounding force field" effect.  To see the demo in action, check out http://stemkoski.github.io/Three.js/Shader-Glow.html.
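The falloff can be sketched numerically (Java here for concreteness; in the demo this logic runs in a GLSL shader, and the exponent parameter is my own addition for shaping the effect):

```java
// Glow falloff from the dot product of the (normalized) surface normal and
// view vector: intensity is near 0 where the face points at the camera, and
// near 1 at the silhouette edge, where the vectors are perpendicular.
class GlowFalloff {
    public static double intensity(double nx, double ny, double nz,
                                   double vx, double vy, double vz,
                                   double exponent) {
        double dot = Math.abs(nx * vx + ny * vy + nz * vz);
        return Math.pow(1.0 - dot, exponent); // exponent shapes the transition
    }
}
```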



In particular, click on the information button in the top left and adjust the values as recommended (or however you like) to see some of the possible effects.

Happy coding!


Sunday, June 23, 2013

Creating a particle effects engine in Javascript and Three.js

A particle engine (or particle system) is a technique from computer graphics that uses a large number of small images to render special effects such as fire, smoke, stars, snow, or fireworks.

Image of Particle Engine Example
Video of Particle Engine Examples
http://www.youtube.com/watch?v=gtr5QLu9RaI

The engine we will describe consists of two main components: the particles themselves and the emitter that controls the creation of the particles.

Particles typically have the following properties, each of which may change during the existence of the particle:

  • position, velocity, acceleration (each a Vector3)
  • angle, angleVelocity, angleAcceleration (each a floating-point number). These control the angle of rotation of the image used for the particle (since particles are always drawn facing the camera, only a single number is necessary for each)
  • size of the image used for the particles
  • color of the particle (stored as a Vector3 in HSL format) 
  • opacity - a number between 0 (completely transparent) and 1 (completely opaque)
  • age - a value that stores the number of seconds the particle has been alive
  • alive - true/false - controls whether the particle's values require updating and whether the particle requires rendering

Particles also often have an associated function that updates these properties based on the amount of time that has passed since the last update.

A simple Tween class can be used to change the values of the size, color, and opacity of the particles.  A Tween contains a sorted array of times [T0, T1, ..., Tn] and an array of corresponding values [V0, V1, ..., Vn], and a function that, at time t, with Ta < t < Tb, returns the value linearly interpolated by the corresponding amount between Va and Vb.  (If t < T0, then V0 is returned, and if t > Tn, then Vn is returned.)
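A sketch of such a Tween class (in plain Java rather than the engine's JavaScript; the names are illustrative):

```java
// Keyframe times must be sorted ascending; values[] has the same length.
// lerp(t) returns the value linearly interpolated between the two keyframes
// surrounding t, clamping to the first/last value outside the time range.
class Tween {
    private float[] times;
    private float[] values;

    Tween(float[] times, float[] values) {
        this.times = times;
        this.values = values;
    }

    public float lerp(float t) {
        if (t <= times[0]) return values[0];
        int last = times.length - 1;
        if (t >= times[last]) return values[last];
        int i = 1;
        while (times[i] < t) i++; // first keyframe time at or past t
        float alpha = (t - times[i - 1]) / (times[i] - times[i - 1]);
        return values[i - 1] + alpha * (values[i] - values[i - 1]);
    }
}
```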

Typically, we never interact with individual particle data directly; rather, we do it through the emitter.

Our emitter contains the following data:

  • base and spread values for all the particle properties (except age and alive).
    When particles are created, their properties are set to random values in the interval (base - spread / 2, base + spread / 2); particle age is set to 0 and alive status is set to false.
  • enum-type values that determine the shape of the emitter (either rectangular or spherical) and the way in which initial velocities will be specified (also rectangular or spherical)
  • base image to be used for each of the particles, the appearance of which is customized by the particle properties
  • optional Tween values for changing the size, color, and opacity values
  • blending style to be used in rendering the particles, typically either "normal" or additive (which is useful for fire or light-based effects)
  • an array of particles
  • number of particles that should be created per second
  • number of seconds each particle will exist
  • number of seconds the emitter has been active
  • number of seconds until the emitter stops -- for a "burst"-style effect, this value should be close to zero; for a "continuous"-style effect, this value should be large.
  • the maximum number of particles that can exist at any point in time, which is calculated from the number of particles created per second, and the duration of the particles and the emitter  
  • Three.js objects that control the rendering of the particles: a THREE.Geometry, a THREE.ShaderMaterial, and a THREE.ParticleSystem.

The emitter also contains the following functions:

  • setValues - sets/changes values within the emitter 
  • randomValue and randomVector3 - take a base value and a spread value and calculate a (uniformly distributed) random value in the range (base - spread / 2, base + spread / 2). These are helper functions for the createParticle function
  • createParticle - initializes a single particle according to parameters set in emitter
  • initializeEngine - initializes all emitter data
  • update - spawns new particles if necessary; updates the particle data for any particles currently existing; removes particles whose age is greater than the maximum particle age and reuses/recycles them if new particles need to be created
  • destroy - stops the emitter and removes all rendered objects
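The base/spread convention is easy to sketch; this is an illustration in plain Java rather than the engine's actual JavaScript:

```java
import java.util.Random;

// randomValue draws uniformly from (base - spread / 2, base + spread / 2);
// randomVector3 would apply the same formula to each of three components.
class EmitterRandom {
    private static final Random random = new Random();

    public static float randomValue(float base, float spread) {
        return base + spread * (random.nextFloat() - 0.5f);
    }
}
```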
Fortunately, all the graphics processing is handled by Three.js.  There is an object in Three.js called "THREE.ParticleSystem" which allows you to assign an image to individual vertices of a geometry. The appearance of the particles is controlled by a "THREE.ShaderMaterial", which requires us to write a custom vertex shader and a custom fragment shader with attributes that take all of our particle properties into account (image, angle, size, color, opacity, visibility) when rendering each particle.

To try out a live demo, go to: http://stemkoski.github.io/Three.js/Particle-Engine.html

Also, feel free to check out the Particle Engine source code, the parameters for the examples in the demo, and how to include the particle engine in a Three.js page.

Happy coding!



Friday, May 31, 2013

Motion Detection and Three.js

This post was inspired by and based upon the work of Romuald Quantin: he has written an excellent demo (play a xylophone using your webcam!) at http://www.soundstep.com/blog/2012/03/22/javascript-motion-detection/ and an in-depth article at http://www.adobe.com/devnet/html5/articles/javascript-motion-detection.html. Basically, I re-implemented his code with a few minor tweaks:

  • The jQuery code was removed.
  • There are two canvases: one contains the video image (flipped horizontally for easier interaction by the viewer); the second contains the superimposed graphics
  • For code readability, there is an array (called "buttons") that contains x/y/width/height information for all the superimposed graphics, used for both the drawImage calls and the getImageData calls when checking regions for sufficient motion to trigger an action.


The first demo is a minimal example which just displays a message when motion is detected in certain regions; no Three.js code appears here.

The second demo incorporates motion detection in a Three.js scene.  When motion is detected over one of three displayed textures, the texture of the spinning cube in the Three.js scene changes to match.

Here is a YouTube video of these demos in action:


Happy coding!