Ragdoll physics is fun to see in action.
Ragdoll physics is NOT fun to implement in a full game.
Of course, I wanted to have ragdoll physics in Artillery Royale.
It’s fun and goes well with the explosions and destructible map.
As always I checked online for tutorials and I found many of them.
It seemed easy with Unity: add a bunch of rigidbodies and capsule colliders linked with hinge joints and… voilà!
But what if you already have some animations going on?
Using Anima2D inverse kinematics or something similar?
Those tutorials fall short.
So here is how I did it; it's not that hard, but there is a lot going on.
But first, please enjoy the result
We have two states: Playing (using the idle/run/jump animations) and Receiving Damage (the ragdoll part).
Playing: the animation (via animator) is active and your anima2D bones move. Great.
Receiving damage (e.g. from an explosion):
– You stop the animation and disable anima2D.
– Save all bones' local positions and rotations.
– Activate all your ragdoll related rigid bodies (that will activate the associated colliders etc).
– Add some force to the main bone’s rigid body and let the physics engine do the magic.
– When the main rigidbody stops moving, deactivate all the ragdoll rigidbodies.
– Interpolate the current bone positions/rotations back to their saved state.
– Finally, restart the animation and you're back in the Playing state.
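The steps above boil down to a small state machine around a saved pose. The real code is Unity C#, but here is a language-agnostic sketch in Python; `Bone`, `RagdollController` and everything else here are invented names, not the actual project code.

```python
# Hypothetical sketch of the Playing <-> Ragdoll transition described above.
from dataclasses import dataclass

@dataclass
class Bone:
    name: str
    local_position: tuple  # (x, y)
    local_rotation: float  # 2D rotation: a single angle

class RagdollController:
    def __init__(self, bones):
        self.bones = bones
        self.saved_pose = {}
        self.ragdoll_active = False

    def enter_ragdoll(self):
        """Stop animation/IK (omitted), save the pose, enable the rigidbodies."""
        self.saved_pose = {
            b.name: (b.local_position, b.local_rotation) for b in self.bones
        }
        self.ragdoll_active = True  # in Unity: enable rigidbodies, add the force

    def exit_ragdoll(self):
        """Disable the rigidbodies and restore the saved pose before animating."""
        for b in self.bones:
            b.local_position, b.local_rotation = self.saved_pose[b.name]
        self.ragdoll_active = False  # in Unity: restart the animator here
```

In practice the restore step would be interpolated over a few frames rather than snapped in one go, so the character does not visibly teleport back into its pose.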
OMG this is some intense coding but it works quite well!
If you are like me, you probably have impostor syndrome.
And let's be honest, I'm not good at math, and math is often a hard problem in game dev.
Today I’m going to talk about A* also known as A star pathfinding.
At first, I thought I would not be able to do it myself, so I spent a lot of time looking for the right A* pre-made assets and I tried many.
Most were hard to understand and use. Plus, Artillery Royale does not use anything close to a grid, and basic A* assets often assume you have some kind of grid, so it was like trying to fit circles into squares.
Anyway, at some point I decided to learn about the A* algorithm to better understand those assets.
I thought it would only serve my general developer culture, because implementing it from scratch would be too hard for me, but I realized it was actually quite easy.
I read a bunch of blog posts and I decided to give it a go, using my own graph data.
And guess what? It worked like a charm.
These are the resources I learned from:
This one explains the basics of A* with the simple formula behind it.
This one is referenced in the previous post; really great too, and more detailed.
This one is the cherry on top: it gives you A* implementations in multiple programming languages.
To be honest, if you read these you will be able to get somewhere.
For me the hard part was not the A* algorithm but building a graph that represents the game. A* (in the examples I found) is often used for grid-based games, but it can be used for any graph.
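To illustrate that last point, here is a minimal A* over an arbitrary graph (a dict of adjacency lists), with no grid in sight. This is a generic sketch of the algorithm, not the actual Artillery Royale code; the graph and heuristic are whatever your game provides.

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """Shortest path on any graph: dict node -> list of (neighbor, cost).

    heuristic(node) must never overestimate the remaining cost to goal.
    Returns the path as a list of nodes, or None if goal is unreachable.
    """
    # Each heap entry: (f = g + h, g = cost so far, node, path to node)
    open_heap = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if g > best_g.get(node, float("inf")):
            continue  # stale entry: we already found a cheaper way here
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(
                    open_heap,
                    (new_g + heuristic(neighbor), new_g, neighbor, path + [neighbor]),
                )
    return None
```

With `heuristic=lambda n: 0` this degrades gracefully into Dijkstra's algorithm, which is handy for testing the graph before tuning a real heuristic.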
Now I have working A* pathfinding for my AI and I'm really proud (except that I lost several days trying to figure out other people's assets instead of trying to understand A* itself).
Lesson learned: it's quite often better to dive a little deeper than the asset store and look at what's behind the scenes.
Note: it's the second time I've fallen into this trap; when building the destructible map I started with an asset and in the end switched to my own code.
I’m proud to announce that Artillery Royale has its own page on the Steam Store!
You can go there and add the game to your wishlist!
First, let me show you the official Artillery Royale logo!
If you ask me, it’s beautiful.
Now let’s talk about the Artillery Royale community. If you follow the game, you know that we have a Discord server where people can get news and give feedback about the ongoing alpha: https://discord.com/invite/fq78teW
Growing a community is hard. Starting like me from nothing and nowhere — I don’t have any pre-existing network — I often wonder how I can get people to look at the game and, more than that, join the discussion.
On the game side, the technical base is working; now I’m going to iterate and add content: more weapons, more features, etc. But what I’d love is to have players engaged in the process, so I’d know that this game will be enjoyed as much as I enjoy making it.
You will probably see me all around the Internet, posting on Twitter, YouTube and other game-related platforms. I don’t know how to do that yet, but I’ll report back on this blog in a few months, hopefully with some results!
Right now I have some pleasant news: Artillery Royale is coming soon on Steam (as a “Coming Soon” page), and because of — or thanks to — the hard work I had to put into that store listing (Steam asks for a bunch of assets and answers to other questions), I now have a bunch of nice screenshots, a banner, an icon and… an official trailer!
Trailers are not one of my specialties, and since the game is in alpha, so is the trailer, but I’m still proud of it.
You can find me on YouTube too. I’m not yet sure which channel I’ll choose, but I’ll let you know! Follow and subscribe to everything, and we will see which platform wins.
Meanwhile, Discord is where all the news is aggregated, and where future players can take part in the development process!
Spoiler alert, I won’t use Unity machine learning (also called mlagents) to implement artificial intelligence for my bots. If you want to know more about why, read on.
At first it was hard to use (see my previous post), but then Unity helped me by giving me access to their alpha mlagents-cloud. That fixed my previous problem, which was mostly a hardware problem.
From hard to use, it became easy to iterate, and that’s exactly what I needed to find out whether it was a good approach for my idea of having bots use “real” AI.
When you try to train a model you have to give it three main data points:
1 – Observations: what it (it’s called an agent) can see from its environment
2 – Actions: what the agent can do
3 – Rewards: information on how it performs
So you have to think quite hard about it, but since you know your environment, in the end you can find some good inputs for each of those (or so you think).
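To make those three data points concrete, here is a hypothetical spec for the "move" agent. The real ML-Agents API is C# and these names, observations and coefficients are all invented for illustration, not taken from the actual project.

```python
# Hypothetical observations/actions/rewards for a "move to target" agent.
def collect_observations(agent, target):
    """Observations: what the agent can see from its environment."""
    return [
        agent.x, agent.y,       # its own position
        target.x - agent.x,     # horizontal offset to the target
        target.y - agent.y,     # vertical offset to the target
        float(agent.grounded),  # can it jump right now?
    ]

# Actions: what the agent can do each step.
ACTIONS = ["noop", "left", "right", "jump", "double_jump"]

def step_reward(agent, target, prev_distance):
    """Reward: positive when the agent closed the distance this step."""
    distance = abs(target.x - agent.x) + abs(target.y - agent.y)
    return prev_distance - distance  # > 0 means progress toward the target
```

The whole tuning loop described in this post is really just iterating on these three functions: adding or removing observations, restricting actions, and reweighting the reward.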
In the very beginning I tried to train a single model that would both move and shoot at the target.
Let’s dive into some details
I had 12 observations, 10 actions and plenty of reward points here and there. But I found out that no matter what, my model could not understand how to fire; it was moving quite all right but never firing.
I decided to split the model in two: one for moving, and one for aiming and firing. I found out online that most people do it this way when the problem is too hard for the agent. It’s a first trade-off, but I thought it was acceptable.
Now I have two experiments, one to learn to move and the other to aim and fire.
The agent has to go to the target, so the reward is calculated from how close it gets to the target. It can go left/right, jump and double jump. The map can be pretty hard to navigate, sometimes even impossible (something that machine learning does not like).
After 7 iterations, in which I changed the reward values, added/removed some observations, made the map easier to navigate, etc., this is what I got:
The agent mostly succeeds, but sometimes it goes in the wrong direction; it jumps like crazy all the time; it does not use the double jump when needed; it does not look natural at all.
After 8 more iterations, trying things like a negative reward on jumping so it stops doing it so much, I did not get anything better.
Note that even if Unity mlagents-cloud allows me to iterate quickly, it still takes a couple of hours between each model change.
The agent has to hit the target with the bazooka, so the reward is calculated from how much damage it deals and also how close it gets (when it misses). It can aim up/down, hold to charge and release to shoot. This time the map was made easy from the beginning.
But after 5 iterations I found out that this was already too complex for the model. It did not manage to hit the target, only itself. The charge-and-release firing action is too complex, from what I understood.
The problem is that machine learning is hard, and I’m not an expert in it.
It took me a full week, working like crazy, to conclude that I’m not expert enough to know what the limits of this approach are and how to get around them. Of course I could spend more time on it, but it seems that no matter what, the outcome will not be as good as I first imagined.
By working on machine learning in this scope (training an agent to be a bot in a game), I also realized that as a developer you lose all control over your bot. I’m quite sure that when the AI is well trained the result is nice for the player, but as a game designer you cannot force how your bot behaves (except by training a new model each time).
This adds up to my final conclusion: machine learning is not what I need, so I’ll have to build a manual AI for Artillery Royale, and this will be hard.
When working with machine learning you can come across some funny (but logical) behaviors. For example, in my first “fire” experiment the agent learned that not firing at all was the best way to go, because if it fired, failed and hit itself, it was punished. So I had to give it a small positive reward for firing and a lower negative reward for hitting itself (this is an example of what you have to do between experiments to get a better outcome).
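That fix amounts to reshaping the reward function. Here is a hypothetical sketch of what "positive reward for firing, lower penalty for self-damage" could look like; all coefficients are invented, not the values used in the actual experiments.

```python
# Hypothetical reward shaping for the "fire" agent; coefficients are invented.
def fire_reward(fired, damage_to_target, damage_to_self):
    reward = 0.0
    if fired:
        reward += 0.05                # small bonus just for daring to fire
    reward += 1.0 * damage_to_target  # the behavior we actually want
    reward -= 0.1 * damage_to_self    # penalty kept low so firing stays worth trying
    return reward
```

With a harsh self-damage penalty and no firing bonus, "never fire" maximizes the reward, which is exactly the degenerate behavior the agent learned.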
My idea was to implement the network part from day one because that way I could play with beta testers right away.
First, I started with my own implementation using Firebase and found out that it was pretty hard to get everything working. Then I benchmarked a bunch of solutions and settled on Photon Unity Networking (PUN).
It was great: the code was not that hard and it seemed to work. Until I had the occasion to test an early version with a friend in real conditions (meaning over the internet, not on a local machine). The result was too laggy for me. I’m pretty sure I could have improved some details, but I didn’t want to fight against the code.
I decided to stop developing the network part right away, but thanks to this first step I’m very aware of how to structure the code.
Later on, I made some prototypes with a new solution of my own, tailored for this particular game. Since it is a turn-based game, I will go with a “turn replay” mechanism: the idea is to record the player’s turn and broadcast it to the other player in near real time. This will also allow keeping a record of any game for later replays.
You can now see how important it is to have deterministic physics, so I don’t need to record every movement in the replay stream.
Let’s dive into some details
The “Stream Play” code (that’s what I call it internally) is split into two main components: the Recorder, which is in charge of — hum — recording events, and the Player, which replays those events. Of course, in between there is a websocket connection to transfer recorded events from player A to player B (it goes through a server for extra control).
The recorder does not save everything that happens; it only saves important information called snapshots. Those are the positions of the characters, the state of the map (holes and other changes like that), and the positions of the bonus boxes and mines. That way, at the end of the turn we are sure that both players are in sync.
The recorder also sends the active player’s inputs; these are sent in real time and played right away on the other side. But because the output could slightly diverge, the source of truth at the end of the turn is the snapshots.
The player, on the other end, buffers a few seconds of data (because Artillery Royale is turn-based and not real time, that does not matter much), then runs the inputs and applies the snapshots. Both are timestamped, so the player can follow the right timeline.
In the middle there is a NodeJS server. It does not do much: mostly it forwards data from player A to player B, using a game id shared by both clients. This server — I mean this whole network thing — is still an early prototype. But so far I have some good results!
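The Recorder/Player split can be sketched roughly like this, with the websocket and the NodeJS server left out. The class and field names here are my invention for illustration, not the actual Stream Play code (which lives in Unity C#).

```python
# Hypothetical sketch of the "Stream Play" recorder/player; names are invented.
import json

class Recorder:
    def __init__(self):
        self.events = []

    def record_input(self, t, action):
        """Real-time input from the active player (e.g. 'move_left')."""
        self.events.append({"t": t, "type": "input", "action": action})

    def record_snapshot(self, t, state):
        """End-of-turn source of truth: character positions, map state, boxes..."""
        self.events.append({"t": t, "type": "snapshot", "state": state})

    def serialize(self):
        return json.dumps(self.events)

class Player:
    """Replays a recorded stream in timestamp order on the other client."""
    def __init__(self, stream):
        self.events = sorted(json.loads(stream), key=lambda e: e["t"])

    def replay(self, apply_input, apply_snapshot):
        for e in self.events:
            if e["type"] == "input":
                apply_input(e["action"])    # may slightly diverge...
            else:
                apply_snapshot(e["state"])  # ...so the snapshot wins
```

Since snapshots carry a later timestamp than the turn's inputs, replaying in timestamp order naturally makes the snapshot the last word on the game state.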
@koalefant asked on the discord server (click to join): “I am curious why did you end up using both snapshots and input simulation for networking? Would not snapshots be sufficient?”
The answer is: basically, I use custom physics for movement (characters and ammo) but I still use Unity colliders, and I’m worried that collisions would drift apart at some point (I mean, not just worried: they will at some point). That’s why I’m using both inputs and snapshots.
You can see that I chose a deterministic approach, sending inputs and letting the physics play out on both sides; because the physics in Artillery Royale is — mostly — deterministic, it works. But I’m extra careful and send snapshots just in case!
The data that flows between both players is very light; even real-time inputs do not represent that much information. This approach will also allow saving replays in a very optimized format.