Andy Fenwick

Andy Fenwick is a graphics programmer, computer gamer, board gamer, science geek and reluctant DIY-er. He is often found in his natural habitat of Ashby-de-la-Zouch listening to gothic music and complaining about government incompetence to his long-suffering wife.

Nov 15 2015

I’m pretty good at quite a lot of things, but I’ve always been a bit embarrassed about not being able to speak a second language. A couple of GCSEs in French and German was as far as I got, and that’s now mostly faded. This famously isn’t rare for English speakers, but after years of concentrating on science and maths I quite fancied branching out and giving a language a go.

At least I had the luxury of not needing to learn something ‘useful’, given that most international workplaces speak English, so there was the freedom to pick anything that appealed (and wanting to learn is vital if you want to make progress in any field). I knew the most French, but I’m just not that big on the Latin languages. I love Iceland and would love to speak Icelandic, but it’s quite difficult and just a bit too niche. So then I considered the other Nordic languages, and decided on Norwegian.

Apparently Norwegian is one of the easiest languages for an English speaker to learn. It’s relatively simple, there aren’t many strange sounds, and the grammar is familiar. It’s also considered the ‘middle’ language between Swedish and Danish – Danish has very similar vocabulary but different pronunciation (hence with Norwegian you get reading in Danish ‘for free’), and Swedish has similar pronunciation but more variation in vocabulary. The result being that Norwegian speakers have a reasonable chance of being able to communicate with both Swedes and Danes. That sounded promising, and the sound and look of the language appealed, so I thought I’d give it a go.

Naturally I assumed that technology had revolutionised language learning as it has everything else, and it appears I wasn’t wrong. There are loads of sites and apps available, and the most popular were Babbel.com and Duolingo.com. At the time (October 2014) Duolingo didn’t offer a Norwegian course, so I dived in with Babbel. But recently I’ve been using Duolingo as well, so I’ll start with that.


Duolingo is a free service offering 15 languages at last count, with more on the way. It’s a modern approach to language teaching, with all content crowd-sourced from volunteers and a healthy dose of gamification thrown in.

Gamification is all about using the compulsive reward structure of games to keep you interested. Finishing a lesson in Duolingo gets you XP, and gaining XP levels you up (I don’t know what levels do). You choose an XP target per day and build up streaks, and completing courses and streaks earns you Lingots, which you can spend in the store on… stuff to help you earn more Lingots (OK, I don’t really know what they’re for either). It’s an interesting approach, and probably useful if you’re not that fussed about learning, but I suspect that if you need gimmicks to progress then maybe you’ve not got the motivation required to get very far.

Each lesson introduces a few new words, and you can click on a word at any time to get a translation. Lessons consist of a random series of translation or listening exercises, and last until you get enough answers correct. There’s a robot voice for reading out words and sentences, and for the most part it’s decent but does sound very synthetic. There are no formal lessons on grammar, but the website (though not the app) often has a page of explanation with each topic covering a few intricacies.

Because each lesson only uses words you already know, plus a few new ones, the content tends to be repetitive and bears little resemblance to actual speech. The repetition is a good thing for learning, but it all feels rather contrived and likely to give the illusion that you know more than you do. I’ve come across a few questionable sentence choices as well, my favourite being “The wolf is eating me”. I suppose that could come in useful one day.

Overall though Duolingo does a good job. As it’s free you’ve got nothing to lose, so it’s worth giving it a go.


Babbel is different in that it’s a paid subscription service, again offering lessons both on the site and in an app. What you’re paying for with Babbel is proper structured lessons and real voice actors, both of which I find hugely valuable.

There are two main types of content for each language – structured courses of lessons and vocabulary lists. The structured lessons remind me of traditional language courses, starting with basic phrases (hello, goodbye, how are you, I don’t understand, etc), then introducing more grammar rules and tenses alongside topics such as food and drink, families, hobbies and the like.

There’s the usual variety of activities in each lesson: repeating words with speech recognition, matching words to their translations, anagrams, and filling in gaps in sentences (from both text and audio cues). The speech recognition is a bit hit and miss, so I tend to turn it off, but that could just be my pronunciation…

Unlike Duolingo, sentences feel a lot more natural. You’re only learning a few new words each lesson, but they’re used in context with a lot of unfamiliar words (with the full English translation provided). You pick up quite a lot of random extra words and phrases this way, as you’re more exposed to the full language. Finding yourself understanding more and more of this ‘padding’ is a rewarding sign of progress.

My favourite bits early on were the dialogues at the end of each lesson. Some of them actually made me laugh out loud (not sure if the comedy was intentional or not), from someone complaining about the price of vegetables, to an interview for a plumbing job with a candidate who’d previously worked as a doctor, in telemarketing and as a gardener in a cemetery. They remind me of the GCSE language writing exams, where as long as it was grammatically correct you could write all manner of rubbish. I remember giggling away to myself while writing about a school skiing trip where three people broke their legs, the teachers were in the bar all day and everyone had to go home early with diarrhoea (“Durchfall”, the word that every student of German knows).

Finally there’s your vocabulary list. Each new word and phrase goes into your list, and different words come up for revision every day. Getting it right moves it up a level so it comes up for revision less often, while getting it wrong shows it again sooner. It’s intuitive, helpful, and gives you something quick to do each day – five or ten minutes here and there will usually get you through your list (Duolingo has something similar, but the logic is a bit fuzzier).

Overall I’m very impressed with both sites. I’ve learnt more in the last year than I did with years of lessons at school. Whether that’s down to the approach or just wanting to learn I don’t know, but I suspect a bit of both. Personally I prefer Babbel, and I think it’s very good value at around £40/year (they often send special offers after signing up). If you feel like giving it a go, here’s an invite with a free week. But I’ll continue to use Duolingo alongside it, as they complement each other well.

Oct 30 2015

It’s been a slow year on the blog. This is mainly due to taking up a few new hobbies (learning Norwegian, and getting into astronomy and astrophotography) and preparing for a new child. I may write about some of these in future. But in the meantime I’ve done a little bit of work on my previously-untitled “spaceship game”, and figured I should put it up for download on the off-chance that anyone wants to play it.

I’ve mainly improved the frontend and UI, as it used to use the terrible looking built-in UI system in Unity pre-4.6. However, as my day job is making nice UIs for mobile apps, it’s not really the sort of thing I want to spend all my spare time on as well, so it’s pretty minimal (but not as embarrassing as before). The graphics are also terrible, because I’m not an artist. You may have heard the term “programmer art” before. If not, you have now.

At least the game now has a title. Using the time-honoured method of picking pairs of vaguely relevant words at random and putting them in Google until no existing game comes up, it is called Galaxy Arena. Download it here:

PC: GalaxyArena_PC.zip (13.1MB)

Mac: GalaxyArena_Mac.zip (15.1MB)


The Game

Galaxy Arena is a multiplayer-only game where you competitively build a spaceship out of square parts against the clock, and then fight everyone else with your creation. It’s pretty obviously heavily inspired by the board game Galaxy Trucker if you’ve played that (and if not, you should).

You can play free-for-all or with teams, and it supports 2-12 players. Team support is flexible but rudimentary – simply type a team number into the Team box by your name in the lobby, and the game will form teams accordingly.

I’ve only actually tested it on a LAN with up to eight players, but I presume it works over the internet. It uses the Unity development server for posting open games, and this is occasionally down for maintenance. If it is, you’ll just have to try again later (sorry), or use the Connect to IP option (port 33104).

Build Controls

The first phase of the game is where you build your ship. Your Core module sits in the middle of the screen. This has tiny engines and a puny laser built in, and your match is over when it’s destroyed. Around the edge of the screen are eight stacks of tiles, shared between all players. You have 60 seconds to grab tiles from the edge and attach them to your ship.

Left-click: grab a tile / place a tile if valid (when green)

Right-click: rotate the grabbed tile 90 degrees

Esc key: discard the current tile

Building Rules

Ships must be built following a few rules:

  • Tiles must connect to an existing part of the spaceship
  • Edges of the new tile must match all existing neighbouring tiles – either connector-to-connector, or blank-to-blank
  • No tile can be placed in the square directly in front of a weapon (shown with a red cross)
  • No tile can be placed in the square directly behind an engine (shown with a red cross)
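The rules above amount to a simple validation function. Here’s a rough sketch of how the check might look in Unity C# (the types and helper functions here are illustrative, not the actual game code):

```csharp
// Hypothetical sketch of the placement rules. Each tile face is either a
// connector or blank; a tile can only go where it touches the ship and
// every shared edge matches.
bool CanPlaceTile(Tile tile, int x, int y)
{
    if (IsBlockedByWeaponOrEngine(x, y))  // the red-cross squares
        return false;

    bool touchesShip = false;
    foreach (Direction dir in Direction.All)  // up, right, down, left
    {
        Tile neighbour = GetTileAt(x + dir.dx, y + dir.dy);
        if (neighbour == null)
            continue;

        touchesShip = true;
        // Edges must match: connector-to-connector or blank-to-blank.
        if (tile.FaceTowards(dir) != neighbour.FaceTowards(dir.Opposite()))
            return false;
    }
    return touchesShip;
}
```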

After the time is up, you’ll be taken to the arena to fight.

Tiles

Core. Lose this and you die.

Engine. Provides thrust and increases turning rate.

Laser. Rapid-firing, low-damage weapon. Faster projectile than the missile.

Missile. Slow-firing, high-damage weapon. Slightly higher DPS than the laser.

Gyroscope. Increases turning rate.

Crew. Increases rate of fire and engine power.

Battery. Increases total energy reserves and recharge rate.

Armour. More hit points than other modules.

Fight Controls

The arena stage is last man (or team) standing. There is no time limit and no walls around the arena, so just play nice and don’t run away forever… (this remains a prototype, not a polished full-featured game).

W / S / Q / E : thrust forwards / backwards / right / left. Fires whichever engines you have pointed in that direction

A / D : steer left / right. Steering is faster the more engines and gyros you have, and slower the more mass of ship you have

Space : hold down to boost your engines, but this drains your energy bar quickly

Arrow keys : fire all of your weapons that face in that direction. Firing uses energy, and weapons won’t fire if there isn’t enough.

Tab : show the scores (just for fun, they don’t mean anything). Score is purely the amount of damage you’ve inflicted.

Your team mates have green names, and have green arrows pointing to them. Your enemies have red names and red arrows. You can damage your team mates, so be careful.

And that’s about it. Here’s a video of the previous version of the game. Enjoy!

Jan 28 2015

Just before Christmas I had a couple of days off work and planned to give Elite a proper play, but then something happened: a friend very kindly bought me a copy of Kerbal Space Program. I knew of the game and I’d always been curious, but I assumed I would quickly get bored of a predominantly sandbox experience. Apparently I was wrong – I booted it up one morning and spent the next three days solid engrossed in orbital mechanics, staging, lander design and “delta-v”.

The game involves building rockets and space planes from a huge variety of components and launching them into space to explore the solar system (different to but based on our solar system, which I guess stops people nitpicking about accuracy), while using reasonably realistic physics. As such, it was humbling as a self-confessed science/space nerd that it took an hour of failures and a tutorial before I managed to get a rocket into orbit. It’s refreshing to play a game that pays so little attention to traditional ‘gaming’ experience and relies so much on a knowledge of physics.


A brave Kerbal on his way to the moon

There are three different game modes – sandbox mode where everything is available, “Science sandbox” where new parts are unlocked by doing new things and using scientific instruments, and a full career mode which also adds money earned through contracts. I was warned that the career mode was really hard, so I went with Science mode. This lowers the learning curve of Sandbox mode by introducing new components slowly, while removing the stress of money and crew management. The game is still in beta, but when it’s done I’ll have a stab at Career mode now that I vaguely know what I’m doing.


This little guy can’t get home. I’ve reclassified him as Moon Base 1

I was struck by the familiarity of my progression during the first few hours compared to the real space programmes, and the real sense of accomplishment that I’d not felt in a game for years:

  1. Fire a rocket up into the air. Heh, this is cool.
  2. Fire a rocket up into the air and parachute the capsule back home. First successful mission.
  3. Blast well into space before splashing down in the ocean. Sub-orbital space flight!
  4. Making it into orbit (eventually) and safely de-orbiting. Ooh, reentry looks cool.
  5. Getting into orbit, firing off towards the moon, slingshotting round the back and getting back home again. Landing there looks tricky!
  6. Getting to the moon, going into orbit and landing on the surface in one piece. Having no fuel left for the return journey (real life thankfully skipped this stage).
  7. Landing on the moon, grabbing surface samples, getting back into space and making it back in one piece for the first time. A real accomplishment!

About to ditch the last stage and deploy the parachutes for splashdown

Each stage presents new challenges and physics concepts to master. Getting into space requires staging (jettisoning empty fuel tanks and engines to make the rocket lighter). Obtaining a circular orbit needs an understanding of the apoapsis and periapsis orbital nodes (farthest and nearest points) and how burning at one side of the orbit only affects the other side. And then you can worry about orbital transfers to get to other moons and planets.
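That “burn on one side, change the other side” behaviour isn’t something the game spells out, but it follows from the standard vis-viva relation between orbital speed and orbit size:

```latex
% v: orbital speed at distance r from the body's centre
% a: semi-major axis of the orbit, mu: gravitational parameter
v^2 = \mu \left( \frac{2}{r} - \frac{1}{a} \right)
```

Burning prograde at periapsis increases v, so the semi-major axis a grows, while r at the burn point is unchanged – all the extra orbit size appears on the far side, raising the apoapsis.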


Get used to the view, son, you’re stuck on Eve for the foreseeable future…

Thankfully the game provides a great maneuver planning tool. The current orbits of all bodies and ships are shown, and you can add planned maneuver nodes along the path. Drag the node in each of the six directions (prograde and retrograde, normal and anti-normal, radial and anti-radial) and the predicted path updates, including closest approaches to other bodies and ‘encounters’ (getting within their gravitational sphere of influence). While I’m sure NASA use complicated maths to plan their paths, in this game dragging nodes around until you hit on something useful also works while you’re getting the hang of what each direction does to your orbit. Then you just follow the directions on your control panel to carry out the planned burn.


The aerodynamic model is very simple, otherwise this rover and sky crane would never get off the ground

After that the sky is, indeed, the limit. I’ve got “small colonies” (read: failed return missions) on Eve (Venus) and Duna (Mars), and used a sky crane to drop a small rover on the moon. The other planets require even more powerful rockets to reach, and as we speak there are probes scouting the outer planets for my future (probably suicide) missions…

And finally, for the truly brave, there is the small matter of orbital rendezvous and docking. Why get just one spacecraft where you want it to go when you can get two in the same place, at the same speed, at the same time? Building substantial space stations requires launching them in multiple sections, and this is my next project. My first and only successful docking took an hour, but it must get easier. I mean, it’s complicated, but it’s not rocket science.

Jan 21 2015

The wait is over, Elite: Dangerous has been out for a month now and I’ve been playing it quite a lot. I was going to wait until it was actually released before diving in, but I had the chance to borrow an Oculus Rift DK2 for a weekend right at the end of the beta, so I dived in early. And both are awesome!

I’ve heard a few predictions that the Rift won’t turn out to be any good. I don’t know how it’ll fare as a consumer device in the market, but from a technical perspective I can safely say it works great.


Prepare to be randomly attacked by many of the psychotic NPCs

I had a quick go on the original devkit about 18 months ago at the Rezzed show, playing a completely unsuitable game (it used FPS controls on a joypad) and it was… quite good. You could see the giant square pixels, and everything was a bit blurry and smeary, although that was probably a combination of steamed-up lenses and the fact I wear glasses. But still, you could look around and it was pretty immersive – until the point when I moved my head both in real life and with the stick at the same time and felt instantly sick. Moral of the story – the Rift and FPS controls don’t mix.

The DK2 is a massive improvement. The resolution is higher and it added full head positional tracking. The positional tracking is the biggest improvement to immersion – the best games and demos involve sitting in some kind of chair, and the first time you lean over and look behind or underneath you, you almost forget where you really are.


On my way to the Pleiades

Elite is an unashamedly slow-paced space simulation, which really benefits from the immersion offered by the Rift. Despite the complicated controls it’s completely possible to play on an Xbox 360 pad (assuming you don’t have a flight stick) without touching the keyboard, which is vital for VR (use the Advanced Pad preset in the options – lots of controls are mapped to “hold a button and press the D-pad”, which takes a bit of learning but works well). In particular, the dogfighting is much better with the Rift as you can look up out of the roof to keep track of your target. The only downside is the text is a bit small, so you have to lean in to read it easily. But the fact that you can do just that is awesome in itself!


Ringed planets can be very pretty

The first few hours are a bit of a grind as you carry out delivery missions for a few thousand credits each, but pretty soon you can get into a dedicated trading or fighting ship and make some serious money. My early goal was to get into the classic Cobra Mk3 and kit it out, and then go on a jaunt somewhere interesting. Looking around the map, the closest landmark I could find was the Pleiades star cluster and the thin nebula around them, at a mere 300Ly away. The journey there took an hour or two, scooping fuel from stars on the way, and watching the nebula grow bigger after each jump thanks to the dynamic sky box.


A young planet around one of the massive, bright Pleiades

The attention to realism is what makes the game for me. Everything is to scale, from the layout of the stars to the planets in a system, and planets and stations move and orbit correctly within systems. That’s such a surprise after being used to the static universes of Eve and the X games – for example when you find your trade route has got a bit slower because the station is now on the wrong side of the planet compared to an hour ago.

Arriving at the Pleiades, it’s all massive, bright, young stars with new lava-riddled planets around them as you’d expect. Slightly less expected was the random NPC cruiser that started shooting at me shortly after I took the screenshot above, hundreds of light years from civilisation. But I suppose the game needs to attempt to give the illusion of a populated universe.


Back to space trucking between outposts

One criticism I do have is the multiplayer aspect. I admit that when I’m trading I play in Solo mode, with no other human players. There are no ‘positive’ interactions you can have with other players – all you can do is randomly blow them up or crash into them. Given that losing a ship full of expensive cargo can set you back hours of game time, I don’t have any desire to risk it. I appreciate that some like the thrill and the adrenaline, but being on the wrong end of a pirate encounter isn’t for me. Hopefully Frontier will add some other incentive to play with others, but in the meantime I’m happy enough in my own little world.

It’s also not a game you necessarily want to play for hours at a time, as it can get fairly repetitive grinding money for the next ship. But by mixing up a bit of trading, fighting, mining, exploring and keeping up with the evolving story, as well as playing in small doses, it’s still a game I feel excited to get back to and play. A worthy new (old) entry into the sparse space sim genre!

Oct 01 2014

This is part 2 of how my spaceship building/fighting game is structured. Find part 1 here.

Space network synchronisation

Each player’s ship consists of a parent object with a PlayerShip script and a Rigidbody 2D, and a bunch of children (one for each attached ship module). I very much like the fact that you can just add a selection of children with box colliders (i.e. the modules) to an object with a Rigidbody and the physics interactions Just Work (certainly well enough for my purposes).

With that in mind, the only objects created with a Network.Instantiate() are the parent ship objects, one for each player. The server owns all network-instantiated objects, and nothing is network-instantiated on a client. The server keeps track of which object belongs to which player.

The clients have already been told which modules make up all the ships, so they create them all locally and attach them as children of the ships. The parent PlayerShips are the only things in the game that use the NetworkView state synchronisation (which automatically updates position and rotation 15 times/second). This is very efficient as there is only one synchronised object per player.

Prediction and interpolation

The ships use some simple prediction to minimise the effects of lag. I’ve seen a few people asking about how this works, so here’s the serialisation code:

void OnSerializeNetworkView(BitStream stream, NetworkMessageInfo info)
{
  if (stream.isWriting)
  {
    Vector3 position = rigidbody2D.position;
    Vector3 velocity = rigidbody2D.velocity;
    float rotation = rigidbody2D.rotation;
    float rotSpeed = rigidbody2D.angularVelocity;

    stream.Serialize(ref position);
    stream.Serialize(ref velocity);
    stream.Serialize(ref rotation);
    stream.Serialize(ref rotSpeed);
  }
  else
  {
    stream.Serialize(ref syncPosition);
    stream.Serialize(ref syncVelocity);
    stream.Serialize(ref syncRotation);
    stream.Serialize(ref syncRotSpeed);

    syncPositionFrom = transform.position;
    syncRotationFrom = transform.rotation.eulerAngles.z;
    syncBlendTime = 0.0f;
  }
}

And here’s the update code to calculate the transform every frame:

void Update()
{
  if (!Network.isServer)
  {
    syncBlendTime += Time.deltaTime;
    float blend = Mathf.Min(1.0f, syncBlendTime / blendTimeMax);
    transform.position = Vector3.Lerp(syncPositionFrom, syncPosition, blend);

    float newRot = Mathf.LerpAngle(syncRotationFrom, syncRotation, blend);
    transform.rotation = Quaternion.Euler(0.0f, 0.0f, newRot);

    // Update the from and to values by velocity.
    syncPositionFrom += syncVelocity * Time.deltaTime;
    syncPosition += syncVelocity * Time.deltaTime;
    syncRotationFrom += syncRotSpeed * Time.deltaTime;
    syncRotation += syncRotSpeed * Time.deltaTime;
  }
}

This will predict the position/rotation in the frames following an update, and blend out previous prediction errors over blendTimeMax (set it the same as your time between updates). This will fix all positional discontinuities (nothing will pop to a new position) but there will still be first-order discontinuities (velocity will pop).

That’s not a problem at all for the other ships, as it’s not noticeable in a game like this with slow controls. The only issue is if the camera is fixed relative to your ship (which is currently the case), because a tiny change in the ship rotation leads to a large movement of the background at the edge of the screen. It’s still barely noticeable, but ideally the camera position/rotation needs to be slightly elastic.

Controlling the ship

The Space scene contains a PlayerControls script which takes input from the keyboard and sends it to the server. You have controls for applying thrust in four directions (forwards, backwards, left and right), firing in each of the four directions, and steering left and right. The PlayerControls script sends an RPC to the server whenever any of the inputs change (e.g. started or stopped steering) to minimise server calls. On the server, the inputs are passed to the PlayerShip owned by that player.
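As a rough illustration (using the legacy Unity networking of the time; the names here are made up, not the game’s actual code), the “send only on change” logic looks something like:

```csharp
// Sketch: poll inputs every frame, but only RPC the server on a change.
public class PlayerControls : MonoBehaviour
{
    int lastThrust = 0;  // four thrust directions packed into bits
    int lastSteer = 0;   // -1, 0 or +1

    void Update()
    {
        int thrust = ReadThrustKeys();  // W/S/Q/E
        int steer = ReadSteerKeys();    // A/D

        if (thrust != lastThrust || steer != lastSteer)
        {
            networkView.RPC("SetInputs", RPCMode.Server, thrust, steer);
            lastThrust = thrust;
            lastSteer = steer;
        }
    }
}
```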

Ships are controlled by applying physics forces to the Rigidbody. Every FixedUpdate(), the PlayerShip uses GetComponentsInChildren() to find all the Engine components (scripts attached to the engine module prefabs) and sends them the net horizontal and vertical thrust. If the engine is facing the right way it applies a force to the parent Rigidbody with AddForceAtPosition().

Applying the force at the actual engine location results in wild spinning for even slightly unbalanced ships, so I blend the position nearly all the way back towards the centre of mass to make it more controllable (97% of the way in this case, and even then it’s hard to drive with off-centre engines).

Steering simply uses AddTorque() to rotate the ship.
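Put together, the server-side movement code is roughly this shape (the constants and helper names are illustrative, not the real source):

```csharp
// Sketch of the per-engine thrust plus torque steering described above.
void FixedUpdate()
{
    foreach (Engine engine in GetComponentsInChildren<Engine>())
    {
        if (!engine.Matches(inputThrust))  // facing a requested direction?
            continue;

        // Blend the application point ~97% of the way towards the centre
        // of mass so unbalanced ships don't spin wildly.
        Vector2 centre = rigidbody2D.worldCenterOfMass;
        Vector2 point = Vector2.Lerp(engine.WorldPosition(), centre, 0.97f);
        rigidbody2D.AddForceAtPosition(engine.ThrustVector(), point);
    }
    rigidbody2D.AddTorque(inputSteer * turnPower);
}
```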


A ship with unbalanced engines

Weapons

Weapons are fired in a slightly different way to engines. Because there are a variety of weapons systems, I use BroadcastMessage() to call a function on every child script that responds to it (scripts are added to each weapon module). Each weapon script keeps track of its own cooldown and fires if it can.
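The broadcast pattern might look like this (the method and field names are my own guesses at the shape, not the actual code):

```csharp
// On the PlayerShip: one broadcast reaches every weapon script.
void OnFireInput(int direction)
{
    BroadcastMessage("TryFire", direction,
                     SendMessageOptions.DontRequireReceiver);
}

// On each weapon module: track a cooldown and fire when possible.
public class Laser : MonoBehaviour
{
    public float cooldownTime = 0.5f;
    float cooldownLeft = 0.0f;

    void Update() { cooldownLeft -= Time.deltaTime; }

    void TryFire(int direction)
    {
        if (cooldownLeft > 0.0f || !FacesDirection(direction))
            return;
        cooldownLeft = cooldownTime;
        SpawnProjectile();  // Network.Instantiate on the server
    }
}
```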

Firing weapons creates Projectile objects. Each weapon module prefab has a Projectile prefab referenced in it, and each Projectile can have different graphics, speed, lifetime, damage and particle effects. The projectile is created on all clients by doing a Network.Instantiate().

Because projectiles go in straight lines there is no need to synchronise the transforms over the network. The initial position and velocity are set precisely the same as on the server, and the parent ship velocity is added immediately after creation with an RPC (use the new object’s networkView.viewID to identify it on the client). The projectile can then move itself locally and be in exactly the right place.

Impacts and damage are all calculated on the server. OnTriggerEnter2D() is used to detect when a ShipModule has been hit. Network.Destroy() is called on the projectile, a particle effect is instantiated on all clients, and damage is applied to the ShipModule.

If the ShipModule has been destroyed it informs the parent PlayerShip, which checks whether any other components are no longer connected. RPCs are then used to tell clients to destroy the relevant modules. If the red central component is destroyed then you’re out of the game.

No physics calculations or collisions are processed on the clients at all. The game is entirely server-authoritative to minimise cheating possibilities – the clients simply send off inputs and receive updates. Overall I’m pretty happy with how it all works, and very happy that the entire game comes in at under 3000 lines of code (total, including comments and blank space)!

Next it needs a load of polish – better graphics, some sounds, a building tutorial and so on – but it’s a decent start.

Sep 28 2014

A little while ago I posted about my spaceship building/fighting game, made in Unity. Because Unity is quite different from how I’m used to writing games (pure C++), it took a bit of getting used to how to structure things. This post will give a high-level overview of how it all works and fits together. Again, as a Unity novice I’m not saying this is exactly right, but I’m happy with how easily everything came together so it can’t be too far off!

I plan to do a few more improvements and usability tweaks to the game and then I’ll probably put it up for download. No guarantees on when though.

Scenes

The game consists of just three scenes: Lobby, Garage and Space.

The Lobby contains the UI for creating, joining and leaving servers, a Start button and a chat panel. There is a singleton NetworkManager that stores state about which server you’ve created or joined, and the other players in your game. I talked about that here.

On starting the game, every player loads the Garage scene. This scene contains the base ship and the components available to build it. After 60 seconds of building, the server calls time and tells every client to load the Space scene.

The Space scene contains very little except for a textured quad for the Background, a camera and a couple of manager scripts. All players are added in code, and at the end of the game the Lobby is reloaded.

A quick note on the networking side of things: the server player also plays the game as a client, so as much as possible the code is split into Server and Client parts. The client-running-on-the-server goes through all the same processes as any other player for simplicity (but the NetworkManager shortcuts the message passing, as I spoke about before).

Garage structure

The Garage scene contains a bunch of GarageSlots which are where the available components appear. There’s also a GarageManager object which has references to all the GarageSlots (set up in the inspector). Finally there’s a PlayerShip which is mainly a container for ShipModules (the square tiles you add to your ship).

Each individual ShipModule is defined as a prefab which contains a sprite, a box collider, a ShipModule script (with info about the connectors on each face, hit points etc), and any module-specific scripts. There are currently eight types of module and around 30 different module variants, so there are 30 different prefabs.
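A ShipModule script along these lines would carry the per-tile data (the field names are a guess at the shape, not the actual source):

```csharp
// Data shared by every module prefab: one component per square tile.
public class ShipModule : MonoBehaviour
{
    public int moduleId;       // index into the ModuleLibrary
    public int hitPoints;
    // One entry per face (up, right, down, left): connector or blank.
    public bool[] connectors = new bool[4];
}
```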


All very straightforward. One problem is then how to create a global accessor for these modules, so that both the Garage and Space scenes can get references to the prefabs. Looks like we need another singleton, which we’ll call the ModuleLibrary.

Singletons with configurable parameters

The ModuleLibrary script contains references to every module prefab, set up in the inspector. This is all fine if the script only exists in one scene because you can just drag one into the scene and set it up. However, singletons like the NetworkManager work by newing up a NetworkManager script and adding it to an object. Instead I want a singleton that I can configure in the editor.

To do this we can set up an object containing a ModuleLibrary script, configure it by adding all the Module prefabs to it, and save that as a prefab. Then you can use this singleton get() function to instantiate it from the prefab:

static ModuleLibrary m_instance;
public static ModuleLibrary Instance
{
  get
  {
    if (m_instance == null)
    {
      // Load the prefab from a Resources folder and instantiate it
      GameObject obj = Instantiate(Resources.Load("ModuleLibrary")) as GameObject;
      m_instance = obj.GetComponent<ModuleLibrary>();
      DontDestroyOnLoad(obj); // keep the singleton alive across scene loads
    }
    return m_instance;
  }
}

One thing to note is that Resources.Load() takes a path relative to the Resources folder in your Assets list. This folder doesn’t exist by default so you’ll have to create it.

unityResFolder

Now we can get the prefab for any tile from this singleton, given a module ID number.

Garage security

For a small hobby game I’m not at all worried about cheating, but it’s good practice to design as robust and hack-proof a system as possible anyway. To that end, the server keeps track of and verifies every step in the ship-building process.

The server generates the IDs of the modules that will be available in the slots, and tells all clients. When a player clicks a module to pick it up, their client sends the chosen slot ID back to the server. The server stores which type of module that client has selected, and generates a new one to fill the gap.

When a player then clicks to attach a module to their ship, the client only sends the grid coordinates and the rotation (one of four). The server already knows which component is selected and verifies that it’s valid. Therefore it’s not possible to send new modules back, or create invalid ships, by sending fake packets to the server.
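That bookkeeping can be sketched roughly like this. The real game does this in C# inside Unity; this is Java for illustration only, and every name here is made up:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Sketch of the server-side verification described above. The server owns all
// the state; clients only ever send slot IDs, coordinates and rotations.
public class GarageServer {
    private final int moduleTypes;
    private final Map<Integer, Integer> slots = new HashMap<>();    // slotId -> moduleId
    private final Map<Integer, Integer> selected = new HashMap<>(); // playerId -> moduleId
    private final Random rng;

    public GarageServer(int slotCount, int moduleTypes, long seed) {
        this.moduleTypes = moduleTypes;
        this.rng = new Random(seed);
        for (int i = 0; i < slotCount; i++) slots.put(i, rng.nextInt(moduleTypes));
    }

    // A client clicked a slot: record which module that player now holds,
    // then generate a new module to fill the gap.
    public boolean pickUp(int playerId, int slotId) {
        Integer moduleId = slots.get(slotId);
        if (moduleId == null || selected.containsKey(playerId)) return false;
        selected.put(playerId, moduleId);
        slots.put(slotId, rng.nextInt(moduleTypes));
        return true;
    }

    // A client clicked to attach: it only sends coordinates and rotation.
    // The server already knows the module, so a faked packet can't invent one.
    public boolean attach(int playerId, int x, int y, int rotation) {
        Integer moduleId = selected.remove(playerId);
        return moduleId != null && rotation >= 0 && rotation < 4; // plus the real placement rules
    }
}
```

The key design point is that the client never names a module in its messages – the server only accepts references to state it generated itself.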

From the Garage to Space

The details of everyone’s ships are stored in a ShipStore, which is another configurable singleton. The ShipStore on the server is what keeps track of the ships that each player is building. When the Space scene has loaded, the server ShipStore uses RPCs to tell every other player the details of all the ships.

Unfortunately the built-in Unity networking doesn’t support sending arbitrary byte arrays, so the transmission is a bit cumbersome – an RPC call is made for every single component on every ship and contains the player, coordinates, module ID and rotation. It’s not ideal but it works, and there are at most a couple of hundred messages.

At this stage there is a little bit of message passing to ensure that every client has finished loading and receiving ship data. Everyone has now built a ship and made it into space, so it’s time for some action – but that can wait until part 2.

Sep 04 2014
 

I’ll try to keep this brief. I don’t normally like to get too serious on here, and everything has been said before more eloquently than I could put it.

I 100% support Anita Sarkeesian and her work on Feminist Frequency, as well as everyone else who has received abuse as a result of working on or expressing a view on video games.

If you don’t know what I’m talking about, count yourself lucky that you’ve been spared this particular display of vulgarity from a small segment of the “gamer” community in response to the Feminist Frequency videos. And if you have any interest in video games and how they can be improved, you could do far worse than watch them yourselves.

femfreq

My main emotion while writing this is embarrassment. I’m embarrassed to have to state the blindingly obvious: that women and minorities have the right to criticise a medium of entertainment without being subjected to abuse and death threats. I’m embarrassed that the gaming community has allowed misogynists and sociopaths to be such a prominent part of how the public (and even gamers themselves) perceives it. In 27 years of being a gamer I’ve never once been ashamed of my hobby, but the events of the last few weeks have pushed me perilously close.

This is me stating that those people do not represent me. Along with many many many many others, I believe it’s important to make a public stand – to do my little part in counteracting this toxic culture.

Games have been fighting to be recognised as art for years. Now they’ve made it – and they’re starting to get the same level of critical examination as any other medium. Some of it is valid, some less so, but any high-quality reasoned commentary is welcome, especially in a medium that has been mostly devoid of it for so long. As an (ex-professional, now spare-time) game developer, I wholeheartedly welcome any criticism that makes me think harder about what I do, and what can be improved.

To end on a positive note: in the long run this sorry episode may actually achieve something. The developer community is small, close-knit and predominantly comprised of rational, educated, well-meaning professionals. I believe the issue of equality in games has very much been brought to the forefront for most of those developers. And I believe that gaming will continue to accelerate along the sometimes-rocky road to being a fully-fledged respected artform, one that we can all be proud of.

Thank you. Normal service will be resumed shortly.

Aug 23 2014
 

Today’s post is a rather mixed bag of vaguely-related topics. Let’s kick off (ahem) with Kickstarter…

Kickstarter

Last weekend I was supposed to be at Alt-Fest, a crowd-funded goth/metal/industrial festival that was shaping up to be awesome. Then three weeks ago it imploded spectacularly – cancellation rumours circulated on Twitter, before finally a statement was released saying they’d not sold enough tickets and had run out of money. I suspect the UK alternative community is too small to support such an ambitious festival, but it was worth a punt. It may well prove to be the first and last large-scale crowdfunded festival.

I love the concept of Kickstarter, but I’m done for the time being. I went in with some friends on the Deadzone game, but by the time we received it we’d all rather lost interest in miniatures wargaming, so I’ve got a small pile of plastic I may never use.

chaosreborn

Chaos Reborn concept art

It’s not all been bad. The other project I’ve backed recently is Julian Gollop’s Chaos Reborn, a remake of the best game ever made: the original Chaos on the ZX Spectrum. To be honest I don’t know if I’ll actually get around to playing the finished product, but as a ten year old I spent hundreds of hours playing dodgy copies of Chaos and Rebelstar, so I figured some back-payment was due. It’s looking pretty good though.

Space sims

On a similar note I’m done with pre-ordering games as well, and I can pin that down to one game: X Rebirth. I’d been a fan of the X games since my university housemate made me play X-Tension years ago. It was a modern-day Elite with trading, shooting, pirates, space stations and fleets of ships. It was quite slow and the interface was awkward but I spent months building my space empire. The sequels were more of the same but bigger, better and prettier.

xrebirth1

World’s most annoying co-pilot with character design straight out of the bad old days

So I was aware that X Rebirth was going off in a slightly different direction, mainly by making it friendlier and more accessible, but obviously it was going to be good, right? There was a small pre-order discount and access to the soundtrack (which I loved in the earlier games and spent hours listening to) so I succumbed to a rare impulse purchase. A few weeks later and I was the proud owner of a (literally) unplayable mess. Frequent soft-locks, glaring bugs and things just not working made progress through the game impossible.

The worst bit though wasn’t the bugs, but leaving your ship to walk around a station. There were all of two different station interior layouts, the NPC character models were straight out of System Shock 2, every character was incredibly rude when speaking to you (even the ones you’d employed), and the gameplay involved walking around looking for crates full of junk to sell back to people stood three feet away. Some sage advice: “if you can’t do something well, cut it from your game”. There have been a huge load of updates since, but I can’t bring myself to go back and play it again.

xrebirth2

It’s a shame because the actual space bit can be very pretty

All is not lost in the space sim genre though. The most successful crowd-funded project of all time, Star Citizen, continues to attract cash at an alarming rate. I’ve not been following it too closely because I can’t see any way that it’ll ever be ready for release. I suspect it’s far too ambitious to ever lock down on a shippable feature set, but prove me wrong and release a decent game and I’ll buy it.

But! The real Elite is back and looking good! With a more sensible level of funding, Elite: Dangerous is already in beta for an expected release later this year. In a classic case of doing more with less it’s promising to be a worthy sequel, and I fully expect to while away many more hours in the dark depths of space. Finally this most venerable of genres is getting the attention it deserves, on the modern hardware that can do it justice.

elitedangerous

Elite Dangerous

Music

Anyway, to make up for Alt-Fest here’s a few more bands I’ve been listening to lately.

Bad Pollyanna

I was looking forward to Bad Pollyanna last week, but instead I’ll be seeing them play the Whitby Goth Weekend later in the year. It’s catchy guitar tunes paired with a slightly retro horror-goth theme. Also try Monstrous Child.

Leaves’ Eyes

I saw Leaves’ Eyes back in January, and my main memory is how genuinely happy and polite they all were! From Norway and Germany, they play some great Viking-themed symphonic metal. If you like the less heavy stuff, give singer Liv Kristine’s solo work a go.

Amaranthe

Amaranthe is one of those bands I keep going back to without getting bored. Their three vocalists – female singer, male singer and male growler – keep things varied, with a nice mixture of metal styles and pop influences. I saw them in Nottingham earlier in the year and while their complicated sound meant the mix wasn’t perfect, I’ll certainly be going to see them again. Check out Razorblade and Electroheart.

Miss FD

Time for something a bit different with Miss FD. A mix of upbeat American-electro with the more recent albums featuring a lot of moody quieter tracks. Also try Moment Of Fade and Enter The Void.

Aug 14 2014
 

It’s become a bit of a tradition now that when I host a LAN I show up with a new game prototype (although as it’s usually six months between them that’s not saying a lot). This time was my first attempt in Unity, and it got its first outing a couple of weeks back. Conveniently I had a second LAN the next week with a different group of friends, so I had time to make some tweaks and improvements. Here’s an eight-player team game:

If you know the Galaxy Trucker board game (which I raved about ages ago) then the ship building mechanics may look slightly familiar. It’s pretty simple – you start with a central tile and add more tiles which build outwards, making sure the connectors match up on all sides. Nothing can go directly in front of a weapon, or directly behind an engine, but that’s about it for building rules. Everyone has the same tiles available at any time, and once a tile is picked a new one spawns for everyone. After 60 seconds it’s all done and you get to use your creation.

shipbuild1

At this point you’re all dumped into a deathmatch arena, where you fly your ship around and destroy everyone not on your team. The combat is somewhat inspired by Mechwarrior, with large, slow ships that take a lot of punishment, and each part of the ship can be blown off individually. Firing weapons and boosting uses up energy, so there’s some management involved here as well. Destroy the central component to destroy the whole ship.

shipbuild2

Considering I hadn’t actually played it properly during development, it worked really well. After the first session I made the ships a lot less sluggish and added team games, which made the fighting a bit more tactical and less random. I had a few people saying it was the most fun game of the LAN, so there’s definitely potential. It still needs sound, and some non-programmer art, and I’ve got a bunch of things I want to add, but I’m really happy with it so far. I should really think of a name…

Aug 11 2014
 

It’s been a while since the last post, mainly because I’ve had two LAN parties in the last two weeks and getting my first Unity game prototype into a playable state has taken priority. But more about that one later. In the meantime, here is a prototype I made at the end of last year (complete with terrible programmer art).

Each player has an end zone area in their colour, and the idea is to place mirrors to direct as many of the beams into your zone as possible. You get points whenever the beams are going into your zone, and you can place bombs on other players’ mirrors to blow them up a few seconds later. First to 100 points wins. Here’s a three-player game:

Turns out that after playing it a few times, it’s not that fun. There is too much going on, with mirrors popping up and disappearing all over the place, so beams change path unpredictably, making it hard to plan anything. Originally there was no limit to how often you could place mirrors or bombs, and the game became a manic click-fest. I then added a slight cooldown between clicks, but that was a bit frustrating.

The prototype was written completely from scratch in C++, apart from using ENet to make the networking side a bit easier. The reason for starting from scratch was purely that I wanted to have made some kind of playable game in this way, which I managed (but Unity is definitely the way forward from now on).

mirrorgame

For the networking side of things I went pure server-authoritative to ensure there were no sync issues, and as a result it’s been almost entirely bug free. The client didn’t even know which player it was until near the end of development – it simply sends mouse clicks to the server, and the server sends back all the tile state updates. This is even the case when playing single player – the client and server run in the same process, and all communication goes through the network. This is a model I’m sticking with for all prototyping, because it means that if it works in single player it’s almost guaranteed to work multiplayer, which makes debugging and testing much easier.

Jun 13 2014
 

Cheltenham Science Festival is quickly becoming a must-attend event in my calendar. This year I went down for the Friday and Saturday with my wife and we attended half a dozen events. From my limited experience I think you could pick any event at random and be almost certain to hear something fascinating. Here are a few of this year’s talks.

chelt_venue

Rebuilding Our World From Scratch

If 99.99% of the population was wiped out tomorrow, how would we survive as a species? What could we do to restore our way of life as quickly as possible, hopefully skipping that whole Dark Ages thing?

Lewis Dartnell opened by showing a standard pencil, one of the simplest products you can imagine, and saying that no single person in the world can make that pencil. You need to grow the trees, cut the wood, mine and shape the graphite, mine and smelt the metal for the eraser holder, and acquire the rubber. The knowledge is spread over many specialists all over the world.

He has written a book (as he kept reminding us) that tries to explain how to rebuild as much as possible of our way of life, starting from first principles (and the decaying remains of our current world). This includes starting a fire with a 9V battery and some wire wool, repurposing alternators to generate electricity, and building a gasifier from tin cans to extract gas fuel from wood.

An interesting idea is that the printing press is one of the most vital technologies to rediscover early. Without it, information can’t be easily distributed and remains in the hands of the powerful, hampering progress. Similarly, photography is great for passing on information, and can be achieved with fairly simple technology.

Other topics of the book deal with agriculture, time of day and basic chemistry. I thought there was a bit too much time spent trailing the book at the expense of actual content in the talk, but it was quite interesting. Find out more here.

The Science Of Cake

With the lure of baking, science, explosions and free cake samples, how could we resist? Henry Herbert of the Fabulous Baker Brothers was joined by chemist Andrea Sella and material scientist Mark Miodownik for some live baking action, interspersed with plenty of explanations about exactly what’s going on in your mixing bowl.

Why does egg white go white after whipping? It’s all to do with refraction from the tiny air bubbles, bending the light in all directions until it’s opaque – in fact this is the same principle as how sun cream works.

Does egg whip up quicker in a copper bowl? Yes it does, because the copper ions bind to the proteins, stabilising the foam (and making it taste foul in the process).

Why are some things sweet? Nobody really knows, because many completely different shaped molecules taste sweet, including some that are quite dangerous. Ever wondered why children used to pick and eat lead-based paint? Lead acetate is horribly bad for you, but tastes rather nice.

chelt_cake

No presentation including Professor Sella would be complete without something exploding, and he didn’t disappoint by demonstrating the destructive power of flour dust, blowing the top off a can. He also answered the question, “Would baking in a vacuum make a cake rise more?” It turns out this is something he’s actually tried. The vacuum oven was written off as the splattered cake mix couldn’t be removed.

And finally a member of the audience turned out to be a food scientist who shared some videos of CAT scans of things baking, which turns out to be really useful for learning about how to control the air pockets that form as the cake rises. The cake samples weren’t half bad either.

Making The Body Invisible

This was the last presentation we attended, and I wasn’t sure how it was going to be, as it sounded quite technical. I needn’t have worried though, as Mark Lythgoe and team took us on a fascinating (and slightly shambolic) tour through a couple of cutting-edge medical imaging techniques, in what was probably the most interesting talk we went to.

The first technology was about literally making tissue completely transparent (although only dead tissue). This requires two processes – removing the pigments, and injecting the tissue with a solution of the same refractive index so that light passes straight through. They demonstrated this live on stage with a tiny heart, lung and brain, and by the end of the talk it was finished and you could read writing placed underneath the organ, which was quite strange to see! (Obviously there was still some distortion, but that would have been caused by the irregular outer shape of the tissue).

The point of this is it gives an easier way to detect diseased tissue after a biopsy. Human tissue emits light, just like those glowy jellyfish, but very dimly. Different types of tissue give out different types of light, but because the body is opaque you can’t see it. By making the tissue invisible you can sense this light and spot diseased cells, which glow a different colour.

The second technology was photoacoustic imaging. When you fire a short pulse of bright light at something it heats up and expands. This expansion creates an ultrasonic sound wave, which is different depending on the material that is heated. The heating amount is very small (less than 1 degree), so it is safe to fire a laser pulse into the body to generate ultrasound from some distance underneath the skin. You can then use a detector and some clever maths to reconstruct a 3D image of the tissue structure that produced the sound.

Photoacoustic imaging is especially good at picking out melanin from surrounding cells, so by genetically engineering cells to produce melanin as well when another gene is active you can directly image gene expression. This has exciting implications for cancer research.

And More…

We also saw a hilarious set from the very talented Robin Ince, learned about the language of chimps with Liz Bonnin and friends, and heard some heated discussion about the heritability of intelligence between Robert Winston and Robert Plomin. But that’s enough words for one day. If you can, I highly recommend popping along to some talks next year!

May 30 2014
 

Everyone I know (for “everyone” = “programmers”, anyway) seems to be trying out Unity and I’d heard much praise about how easy it is for rapid prototyping. Enough to make me take a look.

My very first thought, going through one of the tutorials, was “this feels like cheating”. One of the basic tutorials is a vertical scrolling shooter, and within an hour or two you have 3D models, sounds, particles and some basic enemy waves. Creating it mainly involves dragging a few things around in the interface, and writing a few lines of code.

However, my next thought: this is so far removed from normal programming, how do I actually, you know, “make a game”? This is the bit I need to get my head around next, learning the proper ways to translate my normal C++ skills into the Unity paradigm.

Starting out with networking

My current game-making interest is for multiplayer LAN games (if you can’t find the perfect game, make your own!), so my first project was investigating the networking support and making a basic lobby. It didn’t take long to run into the first problem with the built-in networking solution, but more about that later.

There are two methods in Unity of networking your game. The first is to mark variables on objects as being synchronised, so that they’re automatically kept up to date across clients (e.g. position and rotation of characters). The second is to use Remote Procedure Calls (RPCs) which allow one instance of the game to call a function on another instance (to signal events, e.g. player has fired a weapon).

My preferred setup uses an authoritative server (the clients send inputs, and the server processes them and sends back the results) but the server is also a client so can play the game. This is as opposed to a dedicated server where you have a separate server process that all player clients connect to. Requiring a dedicated server for LAN play just seems messy and requires building and running two executables, which is a pain. Conceptually we have a separate client and server running in the same process, and all communication is through the network, even if it’s a single player game or your client is the server. The major advantage of this is that all clients run the exact same code and if the game functions correctly in single player then you’re confident it will work in multiplayer, which makes testing a lot easier.

Building a lobby

So the first thing you need is a lobby that allows you to create a server or join an existing one. Luckily Unity lets you do this in about three lines of code. The server then listens to connect and disconnect requests and assigns a player number to each connected player. Finally there’s a basic chat box so clients can all write in the chat window.

When a client connects to the server there’s a bit of back and forth with RPCs. The server sends an allow or deny response (in case the game is full or has been started), and on receiving an ‘allow’ response, the client sends back the player name. Once the server receives the player name that player is inserted into the lobby. To keep the code simple, the player creating the server goes through the same process as any other client – simply call the OnConnectedToServer event (client-side), and the OnPlayerConnected event (server-side) and all the message passing and player initialisation will happen exactly as if it was a remote client.

unitylobby

A very basic lobby

An RPC bug

This is where the problem comes in. An RPC has a parameter that determines who it is sent to: All, Server or a specific client. If your process is the server, and you send an RPC to Server, you would expect it to just call the local function. Instead it does… nothing! Which is really helpful, and means the client-running-on-the-server case doesn’t work.

This is a known bug and has been around for years. Reading forum discussions (such as this one) provides a good insight into the Unity community, where there are a bunch of people who will argue vociferously that this is in fact correct behaviour, and that all your code should look like:

if (Network.isServer)
  MyRPCFunction();                                  // we *are* the server: call the function locally
else
  networkView.RPC("MyRPCFunction", RPCMode.Server); // a normal client: send the RPC as usual

I would respectfully guess that these people are unlikely to be professional programmers. Luckily it’s not too hard to fix. That same thread has a method of using reflection to write an RPC wrapper that does the correct thing: call the RPC as normal if you’re not the server, or look up the function by name and call it directly if you are.
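The core of that reflection trick looks something like this – sketched in Java for brevity (the Unity version would be C# using System.Reflection, and these names are mine, not from that thread): if this process is the server, look the handler up by name and invoke it directly instead of sending anything.

```java
import java.lang.reflect.Method;

// Find a public method by name and argument count on the target object and
// invoke it directly, mimicking what the RPC delivery would have done.
public class RpcDispatcher {
    public static Object callLocal(Object target, String name, Object... args) {
        try {
            for (Method m : target.getClass().getMethods()) {
                if (m.getName().equals(name) && m.getParameterCount() == args.length) {
                    return m.invoke(target, args); // call the handler locally
                }
            }
            throw new NoSuchMethodException(name);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

A wrapper built on this dispatches by string name exactly like the RPC system does, so the calling code doesn’t care whether the message went over the network or not.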

There is a similar issue with instantiating objects: depending on whether you’re in a networked game or not, you need to use two different Instantiate methods. Adding a wrapper for this as well simplifies the code elsewhere. I guess the ‘standard’ networking case that Unity was built for these days is dedicated servers, and I’m just old-fashioned trying to build an old-style LAN game. But it can be done.

Some code

Here is my NetworkManager and Lobby code so far, which might be a useful starting point if you’re building anything similar:

UnityNetwork.zip

I’m not at all confident that this is the best way to go about coding in Unity, but it’s pretty tidy and works. You can create a named game, search for and join/disconnect/rejoin games registered on their test server or connect by IP, set your player name and chat in a box. The NetworkManager is a singleton class and the Lobby should be added to an object in the scene, and you should be good to go.

The only slight complication above using the standard RPCs is calling functions on a different script on the same GameObject. This “just works” using the standard calls, but when using the wrapper you need to pass in the target script. I guess the wrapper could search all the scripts, but you should know what script it’s in anyway so it’s probably not a problem:

NetworkManager.Instance.RPC(GetComponent<MyScriptName>(), "MyFunction", RPCMode.Server);

With all that out of the way it’s on to the next step – start making an actual game.

Apr 18 2014
 

I recently dug up my university dissertation and related code, and was surprised that it still runs and isn’t as bad as I feared. It was pretty interesting at the time so I thought I’d share. It was a good call to choose a project in the games/puzzle area, as I’m pretty sure that helped me get my job in the games industry a few months later!

Full report, and running the code

RollingBlock.pdf – this is the full dissertation I submitted, in case you’re interested in reading about this in more depth.

RollingBlock.zip – this is the full source and compiled code, written in Java. I’ve not recompiled it since 2002 and some of the formatting seems a bit messed up, but it still runs (please don’t judge the code quality, this was my first reasonable sized coding project!). There is the full editor program, plus a variant on the player for puzzles with multiple moveable blocks.

To run the Editor program you need the Java JRE installed. If java.exe is in your path then editor.bat should launch the program, otherwise you’ll need to put the full path in.

Rolling Block Puzzles

The idea of the ‘rolling block’ puzzle is to roll a non-cube block (2×1 in the simplest case) through a grid-based maze so it ends up exactly covering the goal. Because the block isn’t square it takes up more room in some orientations than others, and the puzzle involves shuffling the block into the right orientation to get to where you want to go.

Here is an example puzzle, where you need to roll the blue block so that it stands upright on the yellow goal square. So the only possible first move is up-left, because there isn’t enough room to roll in the other directions:

Example of a rolling block puzzle

This is a type of multi-state, or concealed path, maze. Underneath you’re navigating a state graph of moves from the start point to the end point, but the shape of the block, and the moves it allows, means that the actual path to the solution can’t easily be seen. There are only 53 empty squares on the grid but there are 125 reachable maze states, and the shortest solution requires 32 moves (solution at the bottom).

Automatic maze generation

These types of puzzles are very hard to design unassisted, because each block added or removed impacts the state graph in many different places. Some clever people design these things by hand, but us mere mortals can use a computer.

The first thing we need is a solver. This does an exhaustive breadth-first search of the maze and builds a graph of all states and the transitions between them. In each state there are at most four legal moves, so the solver checks whether each destination state is a duplicate, adds a new node if it isn’t, and adds an edge between the two nodes. Here is a very small 5×5 maze:

rb5x5

And this is the state graph a solver will come up with:

rb5x5graph

The numbers are how many moves are required to reach each state, so zero is the start position and 18 is the goal position (the colour coding is based on branches and dead ends). So you can see that even a tiny 5×5 maze with three obstacle blocks has quite a complex state graph.
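The solver boils down to a generic breadth-first graph builder. Here’s a sketch in Java (the language of the original project), with the state type simplified to a string – in the real solver a state encodes the block’s grid position and orientation:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Exhaustive breadth-first exploration of a state graph, recording the
// transitions between states and the move count needed to reach each one.
public class StateGraph {
    public final Map<String, Integer> depth = new HashMap<>();      // moves needed to reach each state
    public final Map<String, List<String>> edges = new HashMap<>(); // transitions between states

    public StateGraph(String start, Function<String, List<String>> successors) {
        Deque<String> queue = new ArrayDeque<>();
        depth.put(start, 0);
        queue.add(start);
        while (!queue.isEmpty()) {
            String s = queue.remove();
            for (String next : successors.apply(s)) {      // at most four legal moves per state
                edges.computeIfAbsent(s, k -> new ArrayList<>()).add(next);
                if (!depth.containsKey(next)) {            // duplicate check
                    depth.put(next, depth.get(s) + 1);     // new node, one move deeper
                    queue.add(next);
                }
            }
        }
    }
}
```

Once built, the depth map gives the shortest solution length, and the edge lists are what the quality metrics below are computed from.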

Measuring puzzle quality

To generate good new puzzles, we have to be able to tell the difference between a good puzzle and a bad one. What makes a puzzle ‘good’ is hard to pin down, but we can pull out a few ideas, such as having a long solution, many dead ends, or looking deceptively simple. For our purposes though we need to come up with a single score for the quality of a given puzzle, such that a higher score means the layout is ‘better’ in some sense.

One approach is to analyse the graph that the solver produced, and score it using a weighted set of criteria. The criteria I used are:

  • Number of nodes – more states means a more complex puzzle
  • Solution length – longer puzzles tend to be harder
  • Number of branches – choices and dead ends are what make the puzzle interesting
  • Number of blocks – give this a negative weight to prefer simpler-looking puzzles with fewer obstacles
  • Number of turns – requiring more direction changes can be a more interesting solution
  • Average loop length – large loops in the graph make it less obvious when you’re on the right path

Different weights can be assigned here depending on what characteristics you want in your generated puzzle.
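As a sketch, the score is just a weighted sum over those criteria. The weights below are purely illustrative – the real ones were tunable and differed from these:

```java
// Hypothetical weighted score for a solved puzzle's state graph. The field
// names mirror the criteria listed above; the weights are illustrative only.
public class PuzzleScore {
    public int nodes;          // number of states in the graph
    public int solutionLength; // moves in the shortest solution
    public int branches;       // choice points and dead ends
    public int blocks;         // obstacles on the board
    public int turns;          // direction changes in the solution
    public double avgLoopLength;

    public double score() {
        return 1.0 * nodes
             + 3.0 * solutionLength
             + 2.0 * branches
             - 1.5 * blocks        // negative weight: prefer simpler-looking boards
             + 0.5 * turns
             + 0.5 * avgLoopLength;
    }
}
```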

rb_8x8C

A more interesting block – 60 move solution

Genetic algorithms

So now we can give a score to a specific puzzle, but we need to use this to come up with the best puzzle possible. For this we can use a type of genetic algorithm.

Genetic algorithms are based on the same principles as evolution in the real world: using randomness to slowly evolve towards an optimal solution. The idea is to start with a solution (any solution), and then randomly modify it a bit. If the new solution is better than the old one, keep it, otherwise throw it away. Repeat this process many times, each time starting with the best solution you’ve found so far. To tell if a solution is better than the previous one we apply a fitness function to see if it survives to pass on its ‘genes’ (or instead gets eaten by bears). After a while, you can stop the process and take the best solution found so far as your result. Despite what creationists may tell you, order can result from randomness (plus a fitness function).

In the case of a rolling block puzzle we start with either an empty board, or one with a few random blocks, and a mutation rate (maybe 5%). Each iteration we check every grid square to see if it ‘mutates’ (toggles between obstacle and empty). The fitness function is the puzzle score described earlier. If the score is higher or equal we keep it. Then just keep going as long as you want.

And that’s pretty much all you need to generate new puzzles. The key is the fitness function – the better it fits our subjective opinion of “this is a good puzzle”, the better the generated results will be. I’m sure that there are much better fitness functions out there but this one works well enough.

Here’s a video of it in action.

References

Robert Abbott’s Logic Mazes page

My multi-level puzzle applet hosted on Logic Mazes (probably won’t run with today’s security settings)

Erich Friedman’s designs

A bunch of mobile games using a similar puzzle

 

Example solution (L is up-left): LURDRDRDDDLLLULDRRRRULDLLLUUULUU

Apr 012014
 

Time for an epic game roundup post.

LAN parties, for me, are the best way to play games. Getting a load of friends together in the same room for some quality gaming time is great. There is just one recurring problem: finding games that are enjoyable for everyone to play. Ideally we all play the same thing, and the preference is for something co-operative, or at least team-based, to smooth out skill differences. However I’ve found that multiplayer games fall into one of a few categories:

  1. AAA, easy to play, polished and enjoyable experiences. They generally cost £20-40, which is a lot of money for something that will likely only get played for a couple of hours.
  2. Free to play (in the sense of actually being a reasonably fun game to play without having to pay anything). Except they usually require 5-10 hours of playing/grinding to get going.
  3. Complex strategy games. Which have a massive learning curve and require tens of hours of play to get good at.
  4. Cheap, fun, indie games. Where tutorials were last on the ever growing list of features, making them inaccessible without substantial background reading.

I’ve tried a lot of games at LANs over the years, so here’s a summary of a few old favourites along with some recent attempts:

Dawn of War

Dawn of War is a Warhammer 40K RTS game, released in 2004. It’s the go-to game for one of my groups, who have been playing it (and the various expansions) since release. It’s a Category 3 (along with almost every RTS), but luckily we all had loads of free time back then and were fans of the licence, and it’s given us years of gaming in return.

Pros: Flexible player numbers, 2-4 co-op play versus AI (stretching to 5 vs 3 AI on the hardest difficulty). Runs on any system.

Cons: It’s showing its age. And frankly it’s getting a bit repetitive after nine years.

dow

Wolfenstein: Enemy Territory

I wrote about Enemy Territory a while back. It’s a team objective-based FPS from 2003, and is the other old favourite of the Dawn of War brigade. The game has always been available for free, and is great fun if you can get at least six players. I would put this in Category 4 as, being an unfinished game, there is no help for new players (each map requires learning by heart because there is no on-screen information about where to go and what to do).

Pros: Supports large numbers of players (6-20+). Free. Runs on any system.

Cons: Steep learning curve. And again, we’ve been playing it forever.

wolf_et

Left 4 Dead 1 & 2

I’m sure you all know this one, but the Left 4 Dead games are zombie survival shooters. They’re great co-op games if you have exactly four players and even better in versus mode if you have exactly eight players. This actually avoids all of the problem categories, so gets played most times (or at least is the backup game).

Pros: Frequently on sale for a couple of quid.

Cons: Best with exactly 4 or 8 players.

l4d2

Diablo 3

Diablo 3 is an isometric dungeon crawler where a bunch of you run around killing hordes of monsters and grabbing loot. This sounded like the perfect LAN game – highly polished, four-player co-op, easily accessible and a challenge requiring some interesting teamwork. It’s a Category 1 game, but four of us bought it at release and sat down one evening to play through. Unfortunately it wasn’t to be. The game was just too easy, unless you put in 50+ hours of repeated playthroughs to grind up through the difficulty levels. Even the bosses seemed to fall within seconds. I know it was meant to be accessible, and it certainly is, but some kind of configurable difficulty option could have saved this game.

I believe that the recently released Reaper of Souls expansion fixes most of these issues so it may be worth another look, but it’s not cheap.

Pros: Requires no thought or skill. Very accessible.

Cons: Requires no thought or skill. Far too easy when starting from scratch. Expensive.

diablo3

MechWarrior Online

Everyone loves giant hulking robots, and when we came across MechWarrior Online recently it sounded great: a team-based, slower paced, tactical game with lots of room for strategy and less reliance on twitch skills. Unfortunately everything about the game itself conspired to make this as difficult as possible. The frontend UI is possibly the worst in any game ever (may be a slight exaggeration, but it’s really terrible), it’s a definite Category 2 in that you need to put a few hours in before you can buy and customise your own Mech (which is when the game gets fun), and the community was pretty hostile to new players.

The worst part though was the team options – each match is a fixed 12v12, but the allowable team sizes are 2, 3, 4 and 12. We got around this with our group of seven by forming two groups and hitting Start at the same time, which mostly got us into the same game, but that may just be an indicator of the number of active players. I understand that the point of the team size restriction is to allow the matchmaker to make fairer games, but I’ll stick my neck out and say that actually being able to play with your friends is much more important than slightly fairer matches. Having said that I’ve played the game a fair bit solo, and I quite like it.

Pros: Free. Fun for small groups of experienced players.

Cons: Restrictive group sizes. Not at all friendly to completely new players.

mwo

Terraria

After failing with MechWarrior we found out that most of us had picked up Terraria in the Steam sales and never played it. Terraria is basically 2D Minecraft, where you dig, build things and fight monsters. It’s a Category 4 with no explanation of how to play, but luckily we had a couple of experienced players to show us. Exploring caves in a group is pretty fun for an hour or so, but it got samey quite quickly. It’s not a bad way to break up an evening though.

Pros: Freeform drop-in/drop-out gameplay. Cheap.

Cons: No real point to the game. Can quickly run out of things to do.

terraria

Various Call of Duty/Battlefield games

I’ll lump all the more recent Call of Duty and Battlefield games together, as they’re all the same from the point of view of a LAN. You can’t fault the technical quality of the multiplayer modes in any of these games – very polished, good fun, progression systems to keep you playing, definite Category 1 games. But it’s no secret that we’re getting older, and our 30+ year old reflexes just can’t keep up with the teenagers. Since the move to online-only multiplayer, these types of games have become more frustrating. The inclusion of the grinding aspects of free-to-play to unlock decent equipment further puts me off (I gave up on Battlefield 3 when it came out because I rarely got more than a couple of kills per match, although I hear the equipment is supposed to be more balanced in Battlefield 4).

Pros: High quality experiences. Flexible teams. They’re shooters so minimal learning required.

Cons: Frustrating if you don’t have pro-gamer skills. Even the old games still command high prices.

Battlefield 2

Taking a step backwards, Battlefield 2 was a great success. It’s old, but still looks acceptable on modern PCs. The main bonus is that it’s from an era of local LAN play and bots, which caters perfectly to our preference for cooperative experiences for a range of skill levels. I think we settled on 8 humans vs 14 bots, tweaking the difficulty level until we got close matches. It’s got guns, tanks, comedy helicopters (“Who wants to be my gunner?” – “HELL NO I value my life”), and introduced Commander mode where wannabe-Eisenhowers can set objectives and airdrop vehicles and supplies.

Maybe I’m just not that into competitive gaming but local multiplayer against AI frequently gives the best experiences, where everyone succeeds or fails together. And when you fail you work out what went wrong and have another go. Very satisfying when you finally succeed.

Pros: Local multiplayer. Flexible numbers. Highly configurable difficulty level.

Cons: Starting to show its age.

battlefield2

Mar 122014
 

Now that I don’t make games for a living I find that I’m more inclined to do so in my free time. Nothing even approaching a releasable standard, but it keeps me amused, and is an excuse to mess around with network programming. Occasionally I’ll inflict a messy prototype on my friends at a LAN party, and I’ll probably write more about them here at some point.

For one prototype I needed some kind of random ‘house’ layout, with rooms and corridors and the like. Procedural maze generators seem to be one of those things that most programmers end up writing at some point, and as mine works reasonably well for what it is I thought I’d share it in case it’s useful for anyone. The code isn’t particularly efficient for large, deep layouts, but it’s quick enough for what I need it for.

The code

First of all, here’s the full code. I don’t think there are any external dependencies except for a RandBool() function, so it should pretty much compile (although I haven’t tested it stand-alone):

housegenerator.cpp , housegenerator.h

Here is a 50×40 tile layout that it generated:

houselayout

Each tile is brown for wall, orange for a room, grey for hallway, and the light crosses are the doors (I’m a programmer, not an artist…). Let’s see how it works.

The algorithm

I started from this pseudocode, referenced from this Stack Exchange question about house generation, and finished it off and made a few tweaks.

The idea is to generate a series of hallways and rooms, and add doors so that the layout is fully connected (every room is accessible from every other room). This is accomplished by repeatedly subdividing the space, first to create hallways, and then to create rooms. There are a number of stages:

  1. Start by filling in the entire map with wall. We want to subdivide the entire map, except for the one-tile outer wall boundary, so initialise the queue of Areas with this.
  2. While there are still Areas in the queue, take the front one and subdivide it. If we don’t have enough hallway yet (defined as a fraction of the final map being hall tiles) and the area is big enough, carve out a straight line for the hall and add the blocks on either side back into the queue. Otherwise, if the Area is big enough, cut it in two along a random line and add both bits to the queue. Otherwise, carve out the final room.
  3. Now we have all the hallways and rooms, but they’re not joined. The next thing to do is connect the hallways to each other. Check the ends of each hall, and if there is more hallway the other side of the end wall, carve out the wall. All hallways are now guaranteed to be connected.
  4. Now we need to put doors in. Add all the rooms to another queue of unconnected rooms. While the queue isn’t empty, take the first room, pick a random start point and work your way around the edge, checking if a door here would open onto a hallway. If so, place a door. If not, do the same again, except this time checking for possible doors into rooms that are already connected (i.e. aren’t in the queue). If no doors were placed, re-add the room to the back of the queue.
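To make step 2 concrete, here's a stripped-down C++ sketch of just the subdivision queue – no hallway carving or doors, and the names and details are my own rather than taken from housegenerator.cpp:

```cpp
#include <cstdlib>
#include <queue>
#include <vector>

struct Area { int x, y, w, h; };

// Split each Area along its longer axis, leaving a one-tile wall
// between the two halves, until everything is below maxRoomSize
// (which must be >= 2 so any axis we cut is at least 3 tiles long).
std::vector<Area> Subdivide(Area whole, int maxRoomSize)
{
    std::vector<Area> rooms;
    std::queue<Area> areas;
    areas.push(whole);
    while (!areas.empty())
    {
        Area a = areas.front();
        areas.pop();
        if (a.w <= maxRoomSize && a.h <= maxRoomSize)
        {
            rooms.push_back(a); // small enough - this is a final room
            continue;
        }
        if (a.w >= a.h)
        {
            // Pick a wall column strictly inside the area.
            int cut = 1 + rand() % (a.w - 2);
            Area left  = { a.x, a.y, cut, a.h };
            Area right = { a.x + cut + 1, a.y, a.w - cut - 1, a.h };
            areas.push(left);
            areas.push(right);
        }
        else
        {
            int cut = 1 + rand() % (a.h - 2);
            Area top    = { a.x, a.y, a.w, cut };
            Area bottom = { a.x, a.y + cut + 1, a.w, a.h - cut - 1 };
            areas.push(top);
            areas.push(bottom);
        }
    }
    return rooms;
}
```

The full algorithm additionally carves hallway strips out of early splits and remembers where the dividing walls went so that doors can be punched through later.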

And that’s about it. Eventually all rooms will have doors to either hallways or other connected rooms. Here are a couple more layouts:

houselayout2

houselayout3

Configurable options

There are a few variables at the top of the file that can be tweaked.

  • MaxHallRate. This is the fraction of the map that is allowed to be hallway. Set to 1.0 to keep subdividing until the individual rooms are too small to subdivide further. This will result in a very flat connection graph, where nearly all rooms are connected directly to a hallway. Set it to 0.0001 (not zero, or it will loop forever) to create a single piece of hallway. The connection graph here will be very deep, creating more of a maze-like layout.
  • HallDoorProb. When a door is placed between a room and a hallway, this is the probability that more walls will be tested for creating additional doors. Set to 1.0 to always create doors in walls with hallway alongside, and 0.0 to only create one door.
  • RoomDoorProb. Similarly for doors to other rooms, this controls the probability of extra doors being created. Set both of these probabilities to zero to create a connection tree (with no loops) – there will only be one way of traversing doors to get from any room to any other room.
  • MaxRoomSize. This is the size below which rooms will no longer be subdivided. A bigger number means bigger rooms.

From a house to a maze

Houses are generally well connected – it’s usually quick to get from any room to any other room, so there are often lots of doors or loops (except for my house, which seems designed to maximise journey times…). Mazes are supposed to be hard to navigate, with many dead ends and only one route through. Luckily enough, this algorithm supports creating both.

Hallways are the main backbone of the connectivity. Having many hallways greatly reduces the average number of doors that must be passed on any journey. Reducing hallways will make your layout more maze-like. You could remove all hallways entirely by modifying the start room to be of Hall type, but I haven’t implemented this.

Setting both door probabilities to zero will ensure the connected graph is a tree. This maximises the average number of doors traversed per journey.

Reducing the room size will also increase the number of doors (although the layouts are less interesting with very small rooms).

Here is an example of a large maze-like layout I generated:

maze_full

Enjoy solving that one!

Downloadable demo

If you want to play around with the generator but don’t have your own code to plug it into, here’s a Windows demo executable you can download:

http://www.polygonpi.com/files/HouseGenerator.zip

Disclaimer: This was written in my Mirror game engine, and is a quick hack-job to have something runnable. As such it starts and connects to a local multiplayer server, and requests network access. Feel free to deny access as it doesn’t do anything, but taking all that stuff out would have been a fair amount of work. Just so you’re aware.

There is also a bug with using hardware acceleration on some systems that I never got to the bottom of. So if HouseGen.exe doesn’t work, try HouseGen_warp.exe which uses a software renderer. The joys of PC development when you only have one machine…

Usage: Use the RunMe.bat file to edit the generation parameters (and change the .exe if required). The four generation parameters plus width and height are specified, and can be happily changed.

Feb 192014
 

The last year has seen much discussion about the price of energy. Nuclear plants, wind turbines, fracking, price caps and green subsidies are a staple part of the news and political vocabularies. And as usual with politics, most of the arguments make no sense when you examine the economics.

Externalities

But before I get into that I need to cover an important economic concept, the idea of externalities.

In a perfect free market, competition ensures that the price of an item or service falls to the ‘correct’ price, which is the price of producing it plus the smallest profit margin necessary to persuade the producer to bother. The monetary cost of production is the cost of paying people for their time and skills in building things, digging stuff out of the ground, writing code or whatever. However, there are other potential costs – air pollution from a factory; noise from a mechanic’s workshop; increased road traffic from delivery lorries. The producer often doesn’t have to pay these costs. If I open a curry house in my living room I don’t have to pay my neighbours compensation for the smell. These costs are called externalities (specifically negative externalities in this case, as it’s also possible to produce positive side-effects you aren’t paid for).

The effect of negative externalities is that goods and services are cheaper than they ‘should’ be. There is an amount of money I could pay my neighbours to appropriately compensate them for the curry smell, thus internalising the externality. My costs are now higher so I have to charge more, and my neighbours are happy. If I can’t make a profit with this extra charge then I should shut down and re-open the business in a location with less picky neighbours.

Schemes exist to account for externalities in many areas. Congestion charging in cities during the day encourages people to drive at night, or pay for the inconvenience they cause. High taxes on cigarettes and alcohol pay the costs of hospitals and police cleaning up the mess (although I assume this isn’t the primary reason for these taxes).

The point is that if all externalities are ‘internalised’, businesses can do anything they want to make a profit, safe in the knowledge that they are paying for any damage they are causing.

Carbon trading

So what does this have to do with the price of energy?

Most of our energy is produced by burning fossil fuels. Burning things produces carbon dioxide, which contributes to climate change. Climate change is likely to cause humanity major costs in the future, due to increased extreme weather events, rising sea levels and the like. This is a prime example of a negative externality, unless the future costs of releasing carbon dioxide are included in the price of the energy.

There is an attempt to do just that with the EU Emission Trading Scheme. A limit is set on the amount of carbon that the regulated industries can produce, and the limit is shared between those industries in the form of tradeable emission permits. Businesses that reduce their emissions can sell their permits to high polluters. Highly polluting businesses aren’t a problem in this system – it’s more efficient to pay someone else to reduce emissions than to reduce their own, so goods remain cheap while overall emissions go down. Compare this to the less efficient system of flat caps – cleaner industries get no benefit, and goods from dirty industries get more expensive as they take expensive measures to clean up.

That’s the plan, but unfortunately due to political issues there are too many permits available. In 2008 permits sold for €20/tonne of CO2, but in 2013 they could be picked up for €2.81/tonne. One of the major problems of externality charging is working out the correct price that should be paid. The true future cost of climate change is not known, but it seems likely that €2.81/tonne of CO2 is too low, and provides very little incentive to clean up.

Renewables and nuclear

Renewable (and nuclear) energy is more expensive than energy from coal and gas. One of the big arguments the Government has made in favour of fracking for shale gas is that it could reduce energy bills (although they’re playing this down now). This is short-sighted because it doesn’t take into account the externality of climate change.

Cheap fossil fuel energy now is like taking a loan to be paid back with considerable interest in 50+ years. It defers the cost of clean up by many decades. Renewables have little cleanup cost beyond dismantling them at end of life, so almost all costs are included up front. Nuclear has significant cleanup costs for decommissioning and waste storage, but these costs are known and again factored into the up front cost.

Taking loans isn’t always a bad idea. Governments and businesses frequently run up debt. As long as spending the loan produces greater returns than the interest on it then you can still come out ahead. With energy we have two options – take the loan for cheap energy now, or pay everything up front. If the increased economic growth from cheaper energy means it’ll be comparatively cheaper to fix the damage from climate change in the future then it could make economic sense. I’m no expert but personally I doubt that’s the case.

Government energy schemes

All of this doesn’t change the fact that energy is becoming a real strain on less well off households (and likely to get even more so). In fact I’m making it worse by advocating more expensive bills. Politicians have come up with a few schemes to try to help:

1. Ed Miliband’s price cap. Energy companies may be forced to keep their prices fixed for 20 months. This is plainly a terrible idea. It makes very little difference to people’s wallets and inhibits investment.

2. George Osborne’s lifting of green levies on energy. This does exactly the opposite of what I’m talking about with addressing externalities. Applying additional levies on fossil fuels is a way of ensuring that energy costs the ‘correct’ amount. Removing these levies makes the situation worse.

3. Energy company profits. Ministers have been making noises about increasing competition and reducing profit margins. While more competition is good, this is a bit of a red herring. Average long term profit margins are around 5%, so even if profits were reduced to zero this would have very little effect on total bills.

4. Winter Fuel Payments. The elderly are entitled to an annual payment of £100-£300 to help with fuel bills. This is actually a good policy for a couple of reasons I’ll come to.

So what else would an economist do to try to help?

The ‘correct’ fix

The problem with attempting to reduce energy bills for all is that most people can afford to pay the bills already. It may mean cutting back on some luxuries as bills continue to rise, but as we’re steadily using up all the cheap fuels (North Sea gas, easily accessible oil wells etc) we have to accept that energy is just expensive, and adapt. A better approach is to target those in need specifically, and this is where the Winter Fuel Payment is a good model.

Firstly, it targets a group particularly in need of help – the elderly with low incomes. Targeting help at small groups is cheaper than attempting to help everyone, as the help goes to those who need it.

Secondly, it may actually reduce energy consumption. You could give ‘energy vouchers’ that can be redeemed to pay bills, undoubtedly helping that person. Or you could just give cash (which is what actually happens). The recipient can then make more efficient use of the money by spending it on whatever they want. They may just spend it on the bill, and that’s fine. Or they might instead spend it on upgrading their boiler or insulating their home, reducing their usage permanently. It doesn’t matter that it’s not spent directly on energy – the cost of the scheme is the same, but the money can be used more efficiently than the equivalent in vouchers.

Using tax revenue to subsidise energy prices and keep bills low for all (which is what vote-winning energy policies tend to boil down to) is inefficient. Far better to let energy be priced at its correct level and redistribute the savings as cash to those who need it. Then as prices rise, everyone has both the incentive and the opportunity to reduce their own usage.

Unfortunately this kind of approach is a much harder sell, and wins fewer votes, than making vague promises to bring down bills. But I’m pretty sure more long term good would come from rational economics-based energy policy than any of the vote-grabbing gimmicks that are currently being thrown around.

Jan 122014
 

Massive Open Online Courses (MOOCs) have really taken off in the last couple of years. I was aware they existed but it was only a few months ago that I took the plunge and signed up for one. I wasn’t really sure what to expect in terms of course content, time requirements and difficulty, so I thought I’d talk about the two I’ve taken so far.

A couple of friends had taken courses on Coursera so that’s where I signed up (although there are many other options). Coursera hosts courses run by many different universities around the world, on all manner of subjects. There are loads of courses starting throughout January so now is a good time to see if anything takes your fancy. Most run for between four and 12 weeks, and generally ask for 4-8 hours per week to watch the lectures and complete the homework (although this is quite variable depending on how easy you find the subject).

The courses aren’t ‘worth’ anything in the sense of traditional qualifications. An interesting development is that some of the more rigorous Coursera courses have a ‘Signature Track’ option where you can pay some money to have your identity verified and get some real university credit for your work. I’ve not looked too much into this though.

So why did I want to do a course in the first place? I’ve read quite a few popular science books on physics, quantum mechanics, string theory and the like but they always shy away from the actual maths (understandable if you want to sell any copies). Without the maths though it’s impossible to understand the subject beyond some vague hand-wavy concepts. I was looking for some way to delve a little deeper into the subject without doing a full physics degree, which is rather impractical when you have a job.

From the Big Bang to Dark Energy

The first course that caught my eye was From the Big Bang to Dark Energy. It’s a short four week course from the University of Tokyo giving an overview of the history of the universe and basic particle physics. The only recommended background knowledge was some simple high school maths, so I wasn’t expecting anything too difficult.

There were a couple of hours of lectures per week which were engaging and easy to understand. They concentrated on general concepts rather than equations (although there were a few equations scattered around), in particular focusing on why we know what we know from a range of recent experiments.

This course was aimed at the more casual learner. It was very light on the maths in the lecture videos, but used the homework questions to introduce a few calculations (mostly just cancelling units and multiplying a few numbers). You could quite happily ignore the maths and still get a pass, and you were allowed nearly unlimited attempts at the questions (hence my final score of 100%).

While a lot of the content was familiar to me I still learnt a few things, and I would recommend this course to anyone looking for a light introduction to the history and evolution of the universe (assuming the course runs again).

Analysis of a Complex Kind

The second course I took was Analysis of a Complex Kind on complex numbers and complex analysis, from Wesleyan University. This was a completely different experience. It was much more formal and rigorous, and felt a lot like a traditional university-level class. I spent probably 6-9 hours per week, which was sometimes a struggle. Even though the course was only six weeks long it felt like quite a commitment (although that may just be a comment on my general level of busyness).

I wasn’t overly familiar with complex analysis before the course outside of a few bits at school and university, but elements of it keep cropping up in my reading so I decided it would be interesting to learn more. You definitely need a strong interest and ability in maths before considering this course, and if you’re going into complex numbers cold then it’ll be a steep learning curve.

It made a change to go back to working with pen and paper, and I got through reams of the stuff by the end. Picking up a pen is something I rarely find myself needing to do these days. The assessments were mainly multiple choice questions, but there’s a deceptively large amount of work needed to find the answers.

One new feature for me in this course was peer-assessed assignments. These were questions that involved drawing graphs, or long-form answers that couldn’t be multiple choice. You can either scan in your work on paper or submit PDFs created directly on computer, and then you’re provided marking guidelines and have a week to assess four other people’s work. The process isn’t perfect (I saw one or two marking errors) but that’s why everyone is marked four times and averaged. Doing a decent job of marking others’ work actually took a fair chunk of time, longer than I was expecting.

I was pleased with my final score of 94.8% (fractionally missing out on a distinction). It was a good workout for the brain, and even though I’m unlikely to use anything I learnt in this course day to day I suspect it’ll come in handy should I pursue any further maths or physics-based education.

Overall

These types of courses work really well if you can reliably dedicate a few hours per week. I won’t be taking any more for at least a couple of months as they monopolised my free time somewhat (and I have other projects I want to work on), but I’m sure I’ll be back for more.

A lot of the courses seem to be being run for the first time by people who haven’t done this kind of thing before, but don’t let that put you off. These two were both well run and offered great learning potential. MOOCs are likely to only improve in the future as people get more used to what works and what doesn’t. Find something you’re interested in and give one a go!

Jan 062014
 
Dec 142013
 

The final part I’m going to cover for high dynamic range rendering is an implementation of lens flare. Lens flare is an effect that raises a lot of hate from some people, and it can certainly be overdone. However, when used subtly I think it can add a lot to an image.

Theory of lens flare

Lens flare is another artifact of camera lenses. Not all of the light that hits a lens passes through – some small amount is always reflected back. Most camera systems also use many lenses in series rather than just the one. The result is that light can bounce around inside the system between any combination of the lenses and end up landing elsewhere on the sensor.

If you’re not a graphics professional you could take a look at this paper which aims to simulate actual lenses to give very realistic lens flares. If there’s any possibility that you might find it at all useful then stay away because it’s patent pending (but this is not the place to discuss the broken state of software patents).

We don’t need to simulate lens flare accurately, we can get a good approximation with some simple tricks.

Ghosts

The main component of lens flare is the ‘ghost’ images – the light from the bright bits of the scene bouncing around between the lenses and landing where it shouldn’t. With spherical lenses the light will always land somewhere along the line from the original point through the centre of the image.

The lens flare effect is applied by drawing an additive pass over the whole screen. To simulate ghosts, for every pixel we need to check at various points along the line through the centre of the screen to see if any bright parts of the image will cause ghosts here. The HLSL code looks something like this:

float distances[8] = {0.5f, 0.7f, 1.03f, 1.35f, 1.55f, 1.62f, 2.2f, 3.9f};
float rgbScalesGhost[3] = {1.01f, 1.00f, 0.99f};

// Vector to the centre of the screen.
float2 dir = 0.5f - input.uv;

float4 ret = 0; // accumulated flare colour

for (int rgb = 0; rgb < 3; rgb++)
{
    for (int i = 0; i < 8; i++)
    {
        // Project along the line through the screen centre, with a
        // slightly different scale per channel for chromatic aberration.
        float2 uv = input.uv + dir*distances[i]*rgbScalesGhost[rgb];
        float colour = texture.Sample(sampler, uv)[rgb];
        // Threshold so only bright (HDR) parts of the scene cause ghosts.
        ret[rgb] += saturate(colour - 0.5f) * 1.5f;
    }
}

The eight distance values control where the ghosts will appear along the line. A value of 1.0 will always sample from the centre of the screen, values greater than one will cause ghosts on the opposite side of the screen and values less than one will create ghosts on the same side as the original bright spot. Just pick a selection of values that give you the distribution you like. Real lens systems will give a certain pattern of ghosts (and lots more of them), but we’re not worrying about being accurate.
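To make the distance values concrete, here's the sampling arithmetic from the shader reproduced in plain C++ (with a stand-in float2 struct of my own, and ignoring the per-channel chromatic aberration scale):

```cpp
struct float2 { float x, y; };

// Where a ghost pass samples from, for a given pixel and distance
// value - this mirrors the 'uv = input.uv + dir*distance' line in
// the shader above.
float2 GhostSampleUV(float2 pixelUV, float distance)
{
    // Vector from this pixel towards the centre of the screen.
    float2 dir = { 0.5f - pixelUV.x, 0.5f - pixelUV.y };
    float2 uv = { pixelUV.x + dir.x * distance,
                  pixelUV.y + dir.y * distance };
    return uv;
}
```

With distance 1.0 every pixel samples the exact centre (0.5, 0.5); distance 2.0 reflects the pixel through the centre to the opposite side; distance 0.5 lands halfway towards the centre, keeping the ghost on the same side as the bright spot.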

This is a simpler set of four ghosts from the sun, showing how they always lie along the line through the centre:

lensflare_ratios

Four ghosts from the sun, projected from the centre of the screen

The ghost at the bottom right had a distance value of 1.62. You can see this by measuring the ratio of distance to the centre of the screen in the image above.

This next image uses eight ghosts with the code above. You can’t see the ghost for value 1.03 as it is currently off-screen (values very near 1.0 will produce very large ghosts that cover the entire screen when looking directly at a bright light, and are very useful for enhancing the ‘glare’ effect).

You can see the non-circular ghosts as well in this image, as some of the sun is occluded:

lensflare_occluded

Full set of ghosts from an occluded sun

Chromatic aberration

Another property of lenses is that they don’t bend all wavelengths of light by the same amount. Chromatic aberration is the term used to describe this effect, and it leads to coloured “fringes” around bright parts of the image.

One reason that real camera systems have multiple lenses at all is to compensate for this, and refocus all the colours back onto the same point. The internal reflections that cause the ghosts will be affected by these fringes. To simulate this we can instead create a separate ghost for each of the red, green and blue channels, using a slight offset to the distance value for each channel. You’ll then end up with something like this:

lensflare_chromatic

Chromatic aberration on ghosts

Halos

Another type of lens flare is the ‘halo’ effect you get when pointing directly into a bright light. This code will sample a fixed distance towards the centre of the screen, which gives nice full and partial halos, including chromatic aberration again:

float rgbScalesHalo[3] = {0.98f, 1.00f, 1.02f};
float aspect = screenHeight/screenWidth;

// Vector to the centre of the screen.
float2 dir = 0.5f - input.uv;

float3 halo = 0;
for (int rgb = 0; rgb < 3; rgb++)
{
    // Correct for the aspect ratio so the halo is circular on screen.
    float2 fixedDir = dir;
    fixedDir.y *= aspect;
    float2 normDir = normalize(fixedDir);
    normDir *= 0.4f * (rgbScalesHalo[rgb]);
    normDir.y /= aspect; // Compensate back again to texture coordinates.

    float colour = sceneTexture.Sample(linearSampler, input.uv + normDir)[rgb];
    halo[rgb] = saturate(colour - 0.5f) * 1.5f;
}

lensflare_halo

Full halo from a central sun

lensflare_halo2

Partial halo from an offset sun

Put the ghosts and halos together and you get something like this (it looks a mess now, but it will look good later):

lensflare_ghosthalo

Eight ghosts plus halo

Blurring

The lens flares we have so far don’t look very realistic – they are far too defined and hard-edged. Luckily this is easily fixed. Instead of sampling from the original image we can instead use one of the blurred versions that were used to draw the bloom. If we use the 1/16th resolution Gaussian blurred version we instead get something which is starting to look passable:

lensflare_blurred

Ghosts and halo sampling from a blurred texture
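
In code this is a tiny change: the ghost and halo loops stay the same, and only the texture being sampled changes. A sketch, where `blurredSceneTexture` is my name for the 1/16th resolution Gaussian-blurred texture produced for the bloom:

```hlsl
// Inside the ghost loop: sample the pre-blurred bloom texture instead of
// the full-resolution scene.  The Gaussian blur softens the ghosts for free.
float2 uv = input.uv + dir*distances[i]*rgbScalesGhost[rgb];
float colour = blurredSceneTexture.Sample(linearSampler, uv)[rgb];
```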

Lens dirt

It’s looking better, but it still looks very CG and too “perfect”. There is one more trick we can do to make it look more natural, and that is to simulate lens dirt.

Dirt and smears on the lens will reflect stray light, for example from flares, and become visible. Instead of adding the ghosts and halos directly onto the image, we can modulate them with a lens dirt texture first. This is the texture I’m currently using, which was part of the original article I read about this technique and which I can unfortunately no longer find. If this is yours, please let me know!

lensflare_dirt

Lens dirt overlay

This texture is mostly black with some brighter features. This means that most of the flares will be removed, and just the brighter dirt patches will remain. You may recognise this effect from Battlefield 3, where it’s used all the time.
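
As a sketch, the final combine might look something like the following, where `dirtTexture`, `haloBoost` and the `ghosts`/`halo` values accumulated earlier are names of my choosing rather than anything canonical:

```hlsl
// Modulate the flare by the dirt texture: the mostly-black texture removes
// most of the flare, leaving it visible only on the bright dirt patches.
float3 dirt = dirtTexture.Sample(linearSampler, input.uv).rgb;
float3 flare = (ghosts + halo) * dirt;

// A little unmodulated halo can be added back on top so it stays visible.
flare += halo * haloBoost;
```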

You can’t really see the halo when modulating with this lens dirt texture, so we can add a bit more halo on top. This is the final result, as used in my demos:

lensflare_final

Final result

And that’s it for High Dynamic Range rendering, which I think is one of the most important new(-ish) techniques in game rendering in the last few years.

Nov 24 2013
 

Last time I covered HDR render targets, tone mapping and automatic exposure control. Now it’s time to simulate some camera imperfections that give the illusion that something in an image is brighter than it actually is on the screen.

Bloom

The first effect to simulate is bloom. This is when the light from a really bright object appears to “bleed” over the rest of the image. This is an image with no bloom – the sun is just a white circle, and doesn’t look particularly bright:

nobloomexample

No bloom

With a bloom effect the sun looks a lot brighter, even though the central pixels are actually the same white colour:

bloomexample

With bloom

Theory of bloom

Why does this happen? There is a good explanation on Wikipedia, but this is the basic idea.

Camera lenses can never perfectly focus light from a point onto another point. My previous diagrams had straight lines through the lens showing the path of the light. What actually happens is that light (being a wave) diffracts through the aperture, creating diffraction patterns. This means the light from a single point lands on the sensor as a bright central spot surrounded by much fainter concentric rings, called the Airy pattern (the rings have been brightened in this picture so you can see them more easily):

airy_pattern

Airy disk

Usually this isn’t a problem – at normal light levels the central peak is the only thing bright enough to be picked up by the sensor, and it fits within one pixel. However, with very bright lights, the diffraction pattern is bright enough to be detected. For anything other than a really tiny light source the individual rings won’t be visible because they’ll all overlap and blur together, and what you get is the appearance of light leaking from bright areas to dark areas.

This effect is pretty useful for us. Because people are used to seeing bright objects bloom, drawing the bloom ourselves makes an object appear brighter than it really is on screen.

Implementation

The idea of rendering bloom is the same as for Bokeh depth of field. Recall from the depth of field article that each pixel is actually the shape of the aperture, drawn at varying sizes depending on how in focus it is. So to draw Bokeh ‘properly’ each pixel should be drawn as a larger texture. To draw bloom ‘properly’ you would instead draw each pixel with a texture of the Airy pattern. For dim pixels you would only see the bright centre spot, and for very bright pixels you would see the rings as well.

That’s not very practical though so we can take shortcuts which make it much quicker to draw at the expense of physical accuracy. The main optimisation is to do away with the Airy pattern completely and use a Gaussian blur instead. When you draw many Airy patterns in neighbouring pixels the rings average out and you are left with something very similar to a Gaussian blur:

gaussian

Gaussian blur

The effect we are trying to simulate is bright pixels bleeding over darker neighbours, so what we’ll do is find the bright pixels in the image, blur them and then add them back onto the original image.

To find the bright pixels in the image we take the frame buffer, subtract a threshold value based on the exposure and copy the result into a new texture:

bloom_extract1

The extracted bloom – the original image with a threshold value subtracted
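
A minimal bright-pass shader for this step might look like the following, assuming `bloomThreshold` is supplied from a constant buffer based on the current exposure:

```hlsl
// Extract the bloom: anything below the threshold contributes nothing.
float4 PSBloomExtract(PSInput input) : SV_Target
{
    float3 colour = sceneTexture.Sample(linearSampler, input.uv).rgb;
    return float4(max(colour - bloomThreshold, 0.0f), 1.0f);
}
```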

Then create more textures, each half the size of the previous one, scaling down the brightness slightly with each one (depending on how far you want the bloom to spread). Here are two of the downsized textures from a total of eight:

bloom_extract2

The 1/8th size downsized extracted bloom

bloom_extract3

The 1/64th size downsized extracted bloom
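
Each level can be produced by rendering into a half-sized target while sampling the previous level with bilinear filtering. A sketch, with `darkenFactor` an assumed tweakable constant (something a little below 1):

```hlsl
// Downsample pass: the render target is half the size of sourceTexture, so
// a single bilinear sample averages neighbouring texels.  darkenFactor
// scales the brightness down slightly at each level.
float4 PSDownsample(PSInput input) : SV_Target
{
    float3 colour = sourceTexture.Sample(linearSampler, input.uv).rgb;
    return float4(colour * darkenFactor, 1.0f);
}
```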

Because we’re not simulating bloom completely accurately, there are a few magic numbers we can tweak (like the threshold and downscaling darkening) to control the overall size and brightness of the bloom effect. Ideally we would work it all out automatically from the Airy disk and camera properties, but this method looks good enough and is more controllable to give the type of image you want.

Now we have all the downsized textures we need to blur them all. I’m using an 11×11 Gaussian blur, which is soft enough to give an almost completely smooth image when the textures are all added up again. A larger blur would give smoother results but would take longer to draw. The reason for downscaling into multiple textures is that it is much quicker to perform small blurs on several small textures than it is to perform one massive blur on the original-sized image.
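
An 11×11 Gaussian is normally implemented as two separable 1-D passes of 11 taps each, rather than 121 taps in one go. A sketch of the horizontal pass, with illustrative weights (they should sum to 1) and `texelWidth` assumed to be one over the texture width:

```hlsl
// Horizontal half of a separable 11-tap Gaussian blur; a second pass does
// the same thing vertically.  The weights here are illustrative, not exact.
static const float weights[11] =
    {0.02f, 0.03f, 0.06f, 0.11f, 0.17f, 0.22f, 0.17f, 0.11f, 0.06f, 0.03f, 0.02f};

float4 PSBlurHorizontal(PSInput input) : SV_Target
{
    float3 ret = 0;
    for (int i = 0; i < 11; i++)
    {
        float2 uv = input.uv + float2((i - 5) * texelWidth, 0.0f);
        ret += weights[i] * sourceTexture.Sample(linearSampler, uv).rgb;
    }
    return float4(ret, 1.0f);
}
```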

After blurring, the two textures above look like this (and similarly for all the others):

bloom_blur2

Blurred 1/8th size bloom

bloom_blur3

Blurred 1/64th size bloom

Then to get the final image we simply add up all of the blurred textures (simple bilinear filtering is enough to get rid of the blockiness), scale it by some overall brightness value and add it back on top of the tonemapped image from last time. The end result will then be something like this, with obvious bloom around the sun but also some subtle bleeding around other bright areas like around the bright floor:

bloom_final
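
A sketch of that composite pass, assuming the eight blurred levels are bound as `bloomLevels` and `bloomStrength` is the overall brightness value:

```hlsl
// Add all the blurred bloom levels (bilinear filtering hides the blockiness
// of the small textures) onto the tonemapped image.
Texture2D bloomLevels[8];

float4 PSComposite(PSInput input) : SV_Target
{
    float3 bloom = 0;
    for (int i = 0; i < 8; i++)
        bloom += bloomLevels[i].Sample(linearSampler, input.uv).rgb;

    float3 colour = tonemappedTexture.Sample(linearSampler, input.uv).rgb;
    return float4(colour + bloom * bloomStrength, 1.0f);
}
```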

The great thing about this is that you don’t need to do anything special to make the sun or other bright lights bloom – it’s all handled automatically, even for ‘accidental’ bright pixels like intense specular highlights.

That’s not quite everything that you can do when rendering bright things. Next time I’ll describe that scourge of late-90s games – lens flare. (It looks better these days…)