Dev Blog 28 - Get Coin

Once again! It’s the Rolled Out! Development Blog Update! Your host, BitesDev, is here on the scene. Are you excited? You should be. You will be.

Today, we’re going to be talking about *”Coin”* and *”Hole”*. But before we move on to that, we have to talk about the HUGE news that has been taking the gaming scene by storm. The news everybody is talking about. It was revealed so recently, haven’t you heard about it? Who knew a game that began with its roots in the indie game scene could have made it so far…

That’s right. Rivals of Aether Definitive Edition has released. Please, go check it out!!! This game is awesome, and it deserves so much love for being an amazingly original and standout title within the platform fighter genre. It’s even on Switch! There’s no excuse not to play it!

Huh? You thought I was referring to that new Smash Bros. character? I don’t even know what the hell you’re talking about.

Anyways, let’s jump right into the update.

Coins are back in, and stronger than ever. We still haven’t finished the models for each individual world, so until we get to that point, they really will just look like coins everywhere you go. But they look pretty awesome, anyways!

We also now have holes.

They work pretty much perfectly. I hope you like our holes.

Aside from those two things, we’ve just been focusing on lots of little polish-related things. There’s now a particle effect around the ball whenever you’re being actively affected by a gravity floor.

Naturally, following this, we’re going to have a writeup by our very own CraftedCart. Take it away!

Musicality Infinality

I was hoping “Infinality” was an actual word when coming up with the title for this section, but alas, it isn’t. Anyways, recently I’ve been messing around with getting music playing in-game - seems simple, right…? right??

So, the way music works in our game is that there may be an intro section of music, followed by a looping section that repeats forever. These two sections are stored as different files for each song.

Now… the tricky part is, how do we play the intro section, followed by the looping section, without any gap between the two files?

Exploring solutions

Using a sound cue

So, my first thought after doing a bit of searching online was perhaps I could use the “concatenator” node in a sound cue (where a sound cue lets you manipulate and combine audio playback in Unreal Engine). The concatenator node is simple: it plays its first input, then its second, then its third, etc. in order. Soo, I simply wire up the intro section and looping section into the concatenator, aaand…

Hmm. That doesn’t quite sound right, does it… Turns out while the concatenator does play sounds sequentially, it cannot do it seamlessly - moving on…

Not quite a cutscene

Next idea: how about using a “level sequencer” - that thing in Unreal most often used for cutscenes and other cinematic elements. What if I tried just using it to sequence 2 bits of audio together - I can figure out how to make the second section loop later…

…maybe not, then. Even doing something ludicrous like setting the sequencer’s framerate to the same as the audio sample rate (44100 Hz) doesn’t help.

Days are too short - I wish I could synthesize time

After a fair bit more digging online, I eventually came across a plugin known as “TimeSynth” - it comes bundled with Unreal, disabled by default, and can do sample-accurate audio stitching. Well, that sounds exactly like what we want, doesn’t it! Let’s give it a listen…

…oh heck yeah! That sounds pretty seamless to me.

So, the way TimeSynth works is you define a BPM for the audio you play, and queue up audio. Here, in BeginPlay, I set the haunted grounds intro sound playing, and also register a “quantization event delegate” (which basically means every beat or however long, I want to run some other code). After 4 beats, we want to play the looping part, so every beat we increment the BeatsPassed counter. Once that hits 3, we queue up the looping sound to start playing on the next beat.
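To make the beat-counting a bit more concrete, here's a minimal sketch of that logic in plain C++. This is not the actual Rolled Out! code or the TimeSynth API — names like `BeatScheduler` and `OnQuantizedBeat` are made up for illustration, and "queueing the loop" is reduced to setting a flag where the real code would ask TimeSynth to play on the next beat boundary.

```cpp
#include <cassert>

// Hypothetical sketch of the quantization-delegate counting described above.
struct BeatScheduler {
    int BeatsPassed = 0;
    bool LoopQueued = false;

    // Called once per beat by the quantization event delegate.
    void OnQuantizedBeat() {
        if (LoopQueued) return;
        ++BeatsPassed;
        // The intro is 4 beats long: once 3 beats have passed, queue the
        // looping section so it begins on the next (4th) beat boundary.
        if (BeatsPassed >= 3) {
            LoopQueued = true;  // stands in for "queue sound on next beat"
        }
    }
};
```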

So… that works, right? All sounds good, just gotta punch in a bunch of numbers into the engine and we should be good to go, right? CraftedCart…. why does your article go on for several more paragraphs? CRAFTEDCART???

Hahaaa, I’m a programmer - it’s my nature to see the silliness in this and engineer my own solution. ;P After all, doesn’t it seem a bit unnecessary to specify the BPM and length of each song when I… just want to play one audio file immediately after another ends? Besides, it starts getting really finicky when we have to deal with intro sections that aren’t an exact multiple of a beat long.

So, I dove into the TimeSynth plugin’s source code and started digging around to see how TimeSynth managed to play audio seamlessly, then set about making my own plugin to play audio one-after-another without the need to specify durations or timings or other nonsense. But first, a detour…

How is audio represented digitally anyway?

You probably know how sound works: you have a sound wave that goes uppy-downy, that goes into your ear-holes, and that lets you hear the screeches of a crying baby on the other side of the plane while you’re just trying to relax.

When working with audio digitally, what we do is we pick a sample rate - in our case, the music we have has a sample rate of 44100 Hz. This means 44100 times per second, we have a sample of how high or low our sound wave is at that point in time. We can send these samples off to the hardware and that’ll make your speakers move back and forth depending on the value of each sample. For obvious reasons, a higher sample rate sounds better since you’ll be able to better capture the fluctuations in the sound wave - 44100 Hz was the sample rate most commonly used on CDs, and that’ll do for us too!
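If you want to see sampling in action, here's a tiny (hypothetical, not from the game) helper that captures a sine wave at a given sample rate — sample `i` is just the wave's height at time `i / SampleRate` seconds:

```cpp
#include <cmath>
#include <vector>

// Sample a sine wave of the given frequency: one float per sample,
// NumSamples samples at SampleRate samples per second.
std::vector<float> SampleSine(float FrequencyHz, int SampleRate, int NumSamples) {
    std::vector<float> Samples(static_cast<size_t>(NumSamples));
    for (int i = 0; i < NumSamples; ++i) {
        float Time = static_cast<float>(i) / static_cast<float>(SampleRate);
        Samples[static_cast<size_t>(i)] = std::sin(2.0f * 3.14159265f * FrequencyHz * Time);
    }
    return Samples;
}
```

At 44100 Hz, one second of audio is simply 44100 of these floats in a row.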

Now, there are several ways of storing audio samples. Using a signed 16 bit number is common (that is, an integer between -32768 and 32767). Sometimes, a floating-point (decimal) number between -1 and 1 is used instead. Unreal generally seems to work with the latter.
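Converting between those two formats is just a scale (and a clamp, going back to integers). A quick sketch — these helper names are my own, not anything from Unreal:

```cpp
#include <cstdint>

// Signed 16-bit sample -> float in roughly [-1, 1).
float Int16ToFloat(int16_t Sample) {
    return static_cast<float>(Sample) / 32768.0f;
}

// Float in [-1, 1] -> signed 16-bit sample, clamping out-of-range input
// so it can't overflow the integer.
int16_t FloatToInt16(float Sample) {
    if (Sample > 1.0f) Sample = 1.0f;
    if (Sample < -1.0f) Sample = -1.0f;
    return static_cast<int16_t>(Sample * 32767.0f);
}
```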

So how does music in the game work then?

So, we finally get to how music in Rolled Out works. We have a subclass of USynthComponent, which I call USequencedAudioComponent. This component can be given a “sequenced audio asset”, which is just a list of audio files to play in order. When you’ve done that, the sequenced audio component will start decoding both the first and second sound files to play, known as the “Now” file and the “Next” file (the now file being the file that should be currently playing, the next file being the one queued up to play immediately after the now file has finished).

Periodically, the engine will ask to be fed some audio samples such that it can play them. Most of the time, this is a simple affair - I take the samples I get from the decoder decoding the “now” file, and feed this straight into the audio buffer. When we near the end of a file, this gets a little more complicated, but not much.

Let’s say the engine has asked for 200 samples, but we’re near the end of the “now” file and there’s only 50 samples left. At this point, the “next” file becomes the “now” file (as that’s the file that we should start playing from), we fetch the third sound file from the sequenced audio asset, and if that exists, it becomes the new “next” file and starts decoding. We then fill out the remaining 150 samples in the buffer with the beginning of the new “now” file.

(The game… doesn’t actually have any audio split into 3 or more files, so in practice, that part is never used)
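Here's a simplified sketch of that “now”/“next” hand-off in plain C++. This is not the actual USequencedAudioComponent code — real decoding happens incrementally as the engine asks for samples, whereas here each “file” is just a pre-decoded vector of floats, and looping the final section is omitted:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical illustration of filling an audio buffer across file
// boundaries without a gap.
struct SequencedSource {
    std::vector<std::vector<float>> Files;  // decoded audio, in play order
    size_t FileIndex = 0;                   // which file is currently "now"
    size_t Cursor = 0;                      // read position within "now"

    // Fill OutBuffer with NumSamples samples. When the "now" file runs out
    // mid-buffer, we carry straight on into the next file, so the join is
    // seamless. Pads with silence once all files are exhausted.
    void Fill(float* OutBuffer, size_t NumSamples) {
        for (size_t i = 0; i < NumSamples; ++i) {
            // Advance to the next file when the current one is exhausted.
            while (FileIndex < Files.size() &&
                   Cursor >= Files[FileIndex].size()) {
                ++FileIndex;
                Cursor = 0;
            }
            if (FileIndex < Files.size()) {
                OutBuffer[i] = Files[FileIndex][Cursor++];
            } else {
                OutBuffer[i] = 0.0f;  // silence after everything has played
            }
        }
    }
};
```

The key point mirrors the description above: a single requested buffer can be filled partly from the end of one file and partly from the start of the next, with no gap in between.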

tl;dr: We decode 2 audio files at once such that we can seamlessly stitch the first file with the second one.

So… let’s give that a listen, shall we?


It’s my birthday in 5 days! Yay!


Thanks for reading, and see you on the 15th.