Project Epsilon Dev log pt.3: SDLC Case study

On Wednesday we met up again to review our progress so far. It was just me and Arthur; we looked at what had been added to the game, specifically the additional visual elements like the rising volume cloud bars (implemented by Hani). I had also implemented a floating dragon prefab, a series of sprites that snakes into the background (Hani produced the sprites for this too). Arthur also began to implement a moving camera; it will not track the player but will instead move forward independently.

At the Wednesday meeting we gave ourselves more tasks. Arthur's were to finish off the camera as well as a health system for the player. We discussed whether to add health points or an instant death system, that is, whether the player should lose health upon colliding with enemies/dangerous objects or die outright. I put forward the argument that because the brief requires the experience to be under a minute, that one minute should be quite intense, which suits an instant death system. However, this should be easy to change in the future; hopefully it is a small-scale design choice that can be revisited later if needed and shouldn't affect other parts of the game. Right now I can only imagine needing to provide feedback upon taking damage, as well as implementing possible health pickups if we choose a health system.
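
To keep that option open, here is a rough sketch of what a configurable health component could look like. This is only my own illustration; the class and field names are hypothetical, not code from our project:

    using UnityEngine;

    // Hypothetical sketch: a player health component where instant death is
    // just a configuration choice, so switching models later stays cheap.
    public class PlayerHealth : MonoBehaviour
    {
        [SerializeField] private bool instantDeath = true; // our current preference
        [SerializeField] private int maxHealth = 3;        // only used if instantDeath is false

        private int currentHealth;

        private void Awake()
        {
            currentHealth = maxHealth;
        }

        public void TakeDamage(int amount)
        {
            // Instant death: any hit kills, which suits a short, intense run.
            currentHealth = instantDeath ? 0 : currentHealth - amount;

            // Feedback on damage (flash, sound, screen shake) would hook in here.
            if (currentHealth <= 0)
            {
                Die();
            }
        }

        public void Heal(int amount) // for possible health pickups later
        {
            currentHealth = Mathf.Min(currentHealth + amount, maxHealth);
        }

        private void Die()
        {
            // A real version might play a death animation before restarting.
            Destroy(gameObject);
        }
    }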

Arthur also requested that these tasks be listed on our Trello board. This makes sense, rather than just recording the minutes of that meeting. It made me reconsider our approach; I believe we need to adopt a Software Development Life Cycle (SDLC).

  • Waterfall: A linear approach in five stages. First is Requirements, where the needs of the client/project are analysed. Then comes Design, where we consider how best to approach the project, in our case how the game would be designed and how we would work together. Next is Coding, where we develop the project, which is then Tested and debugged. Finally the game is released and enters Maintenance, where it is checked for additional issues. This model would not work for us, as the nature of the project is quite flexible: we do not have a defined set of requirements, because we are piecing together the requirements and the project at the same time. Shown here: [https://www.tutorialspoint.com/sdlc/sdlc_waterfall_model.htm]
  • Iterative: This approach is more suitable; it accommodates projects with unknown requirements and emphasises developing and then testing, with the requirements/goals changing in response to the testing. This would give us a prototype earlier, so we can identify what we like and don't like. It is essentially the approach we have been using already. One disadvantage is that repeating the process could be time consuming; however, our team is small and focused enough to implement this SDLC. Shown here: [https://airbrake.io/blog/sdlc/iterative-model]
  • Spiral: This is composed of four stages arranged in a spiral. Similar to the iterative model, it incorporates planning, risk analysis, engineering and evaluation; one loop of the spiral covers all four and is repeated as necessary. While it is safer because it involves risk analysis, I feel it is less beneficial for us: again, we are a small group, each of our roles is clearly defined, and our changes may not need extensive consideration after every cycle. Shown here: [http://tryqa.com/what-is-spiral-model-advantages-disadvantages-and-when-to-use-it/]
  • AGILE: This is a lightweight approach. Planning is quick, just like our meetings. It is followed by development, again nothing far from what we have been doing already, and then by testing and demonstrating the parts we have developed. This model is essentially what we have been doing so far. The only real change is that we should put together a complete prototype to demonstrate all the parts together; as of right now the features all live in separate scenes. Incorporating them all into one scene should highlight issues we have overlooked and would show whether the different parts work together or not. Shown here: [https://www.tutorialspoint.com/sdlc/sdlc_agile_model.htm#:~:text=Agile%20SDLC%20model%20is%20a,builds%20are%20provided%20in%20iterations.]

As such, we should approach the project with the AGILE model.

Project Epsilon Dev log pt.2: Aims and Audio Detection

A prototype has been developed, accomplished in two stages. Hani had already begun working on a prototype level, produced using assets downloaded from the Unity Asset Store; he wanted to see if the audio detection algorithm could be used for our purposes, and he showed us that it was indeed usable. Because we were using Unity Collaborate, we were able to share the project and update the necessary files. I then began to develop the game and strip out the downloaded assets. The only asset that needed to be replaced was a script for character movement. We kept the audio detection scripts, as we were not sure how best to create our own, and doing so would also be outside the scope of this project.

Once this was accomplished, Hani and Kevin were notified and shown this version of the game. The movement could be refined, but it functioned as needed. For now our biggest focus was implementing the audio detection algorithm.

This version uses our own scripts and sprites.

Aims for the project

Hani’s initial proposal interested me as it focused on using audio in a unique way, a part of games design in which I have only a small amount of experience. The main goal of the project is to use music, that is, the song playing in the background, to affect other parts of the game. As such, one of my aims for this project is to understand the different ways music/audio can be used in game design.

Another aim is to explore the best approaches to collaborative work with members who are not in games design. Hani and Kevin are both studying Sound Arts, so they do not have experience in games design, and Arthur and I do not have experience in sound engineering/production. This is a chance for us to learn skills in working and communicating with people outside our disciplines. It reminds me of the workshop we did with animation students: I had worked with an animator on a short pitch, and it was great discussing ideas and decisions with someone outside my own field.

We are designing a game, specifically a 2D platformer, so another aim of mine is to take this as a chance to expand my skills in 2D game design, particularly platformer design. I imagine this will involve focusing on how best to implement movement mechanics and good level design, especially concerning platforms.

Audio detection

Both Hani and Kevin have far more experience with audio-related technology and production, so they are better suited to understanding and engaging with the audio detection algorithm. Hani linked a YouTube video explaining audio detection and the necessary scripts to use.

Link to code: https://github.com/coderDarren/RenaissanceCoders_UnityScripting

The code we used consists of two scripts. The first, AudioSpectrum, acquires the spectrum data from the audio playing on that object.

Surprisingly, this was simply a case of getting the required data and making it available for use.
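
In spirit it looks something like the sketch below. This is a minimal reconstruction of the pattern rather than the repository's exact code, and the field names are my own:

    using UnityEngine;

    // Reconstruction of an AudioSpectrum-style script: it samples the spectrum
    // of the attached AudioSource every frame and exposes one value for other
    // scripts to read.
    [RequireComponent(typeof(AudioSource))]
    public class AudioSpectrum : MonoBehaviour
    {
        public static float spectrumValue { get; private set; }

        private readonly float[] samples = new float[128]; // must be a power of two
        private AudioSource audioSource;

        private void Awake()
        {
            audioSource = GetComponent<AudioSource>();
        }

        private void Update()
        {
            // Fill `samples` with the current frequency spectrum of the playing song.
            audioSource.GetSpectrumData(samples, 0, FFTWindow.Hamming);

            // Scale the lowest bin up into a range that is easier to threshold.
            spectrumValue = samples[0] * 100f;
        }
    }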

The second is the family of AudioSyncer scripts, which build on the first. These are used to enact changes on objects: a script of this type takes the AudioSpectrum values and, based on a chosen bias (a threshold), detects a beat or note to respond to.

These AudioSyncer scripts are split into two parts. One is an IEnumerator (coroutine) function that enacts the change; this may alter the colour or size of an object, or anything else we wish. The one we used changes the scale of an object.

The other part of the script listens for a beat and calls the above function. Because of this split, we were able to implement our own scripts that accomplish different responses, such as changing the colour of sprites, and we will continue implementing further scripts of this type to achieve different outcomes.
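
Put together, a script of this shape might look like the sketch below. This is again my own reconstruction of the pattern, not the repository's exact code; a colour syncer would keep the same beat-listening part and only swap the coroutine body:

    using System.Collections;
    using UnityEngine;

    // Sketch of an AudioSyncer-style script: one part watches the spectrum
    // value for a beat, the other part (the coroutine) enacts the change,
    // here a quick punch in scale that eases back to rest.
    public class AudioSyncScale : MonoBehaviour
    {
        [SerializeField] private float bias = 5f;        // spectrum value that counts as a beat
        [SerializeField] private float timeStep = 0.15f; // minimum time between beats
        [SerializeField] private Vector3 beatScale = new Vector3(1.5f, 1.5f, 1f);
        [SerializeField] private Vector3 restScale = Vector3.one;
        [SerializeField] private float restoreSpeed = 5f;

        private float previousValue;
        private float timer;
        private bool isBeat;

        private void Update()
        {
            float value = AudioSpectrum.spectrumValue; // exposed by the first script

            // The listening part: a sharp rise past the bias counts as a beat,
            // but only if enough time has passed since the last one.
            if (value > bias && previousValue <= bias && timer > timeStep)
            {
                timer = 0f;
                StopAllCoroutines();
                StartCoroutine(OnBeat());
            }

            // Ease back towards the resting scale between beats.
            if (!isBeat)
            {
                transform.localScale = Vector3.Lerp(
                    transform.localScale, restScale, restoreSpeed * Time.deltaTime);
            }

            previousValue = value;
            timer += Time.deltaTime;
        }

        // The coroutine that enacts the change.
        private IEnumerator OnBeat()
        {
            isBeat = true;
            transform.localScale = beatScale;
            yield return new WaitForSeconds(timeStep);
            isBeat = false;
        }
    }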

Meeting on Wednesday

Arthur Audren de Kerdrel expressed that he wished to join the group. He joined Wednesday's meeting, and I updated him on the project, our intentions and the progress we have made so far. We also began to discuss what could be added to the game; one main addition Arthur posited was the idea of a wall that chases the player.

Now the main focus will be to continue adding features/mechanics to the game. We will certainly make changes and communicate them in meetings and over messaging. We have proved that we are able to work together.

Project Epsilon Dev log pt.1: Initial meeting and ideas

Project Epsilon and team

After examining the Padlet of potential projects, I discovered Hani Malcolm Ibrahim's Project Epsilon; the concept involves using audio detection to affect the game in some way. The team we currently have is myself, Hani Malcolm Ibrahim and Kevin Halomoan.

This warranted further discussion; we primarily used WhatsApp to communicate and develop ideas for the theme, gameplay and what roles we would each take. We later used Microsoft Teams to conduct our first meeting, as well as a Trello board to list tasks and questions we had for the project.

Link to Trello: https://trello.com/invite/b/1A078TwQ/7cca1a43d41ac88cfd624202077dc40a/project-epsilon

On the Trello board we set rough expectations as a timeline.

First meeting

Our first meeting had the purpose of discussing the following:

  • Theme: After discussing, we decided upon an 80s VHS visual theme, which should inform design decisions on the music and visual assets. Kevin cited Hotline Miami as an inspiration for the music, which would consist of songs in the style of vaporwave/synthwave.
  • Mechanics: We discussed the possibilities of how the audio detection could be used in the game. These included certain objects/enemies jumping or moving in time with particular elements of the song, as well as pre-rendered geometric patterns that react to the beat. The audio detection would analyse the song for drumbeats, particular melodies and other identifiable sounds; these sounds would then call a function to affect the chosen game objects (see the sketch after this list).
  • Type of game: Initially we decided on making a 2D platformer, because we felt it would be simple enough to carry out without impacting the main feature of audio detection. It would involve a single character jumping onto different platforms to collect items.
  • Roles: We designated roles based on strengths as well as urgency. I am responsible for developing the game, mainly focusing on building the mechanics and arranging assets to design levels. Kevin will focus on developing the songs to be used, as well as other visualisations derived from the songs. Hani will work on applying the beat detection in Unity, as well as on art assets (sprite sheets, character design). There may also be shared responsibilities between us.
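
As a sketch of what "calling a function" on a detected sound could mean in practice: a central detector raises an event when it identifies a sound of interest, and any game object subscribes and reacts in its own way. All names here are hypothetical; none of this code existed at the time of the meeting:

    using System;
    using UnityEngine;

    // Hypothetical sketch of the event pattern from the meeting discussion.
    public class BeatEvents : MonoBehaviour
    {
        public static event Action OnDrumBeat; // fired when a drumbeat is detected

        // The real detection would live here; the sketch only shows the hook.
        public static void RaiseDrumBeat() => OnDrumBeat?.Invoke();
    }

    // Example subscriber: an enemy that jumps in time with the drums.
    public class JumpingEnemy : MonoBehaviour
    {
        [SerializeField] private float jumpForce = 5f;

        private void OnEnable() => BeatEvents.OnDrumBeat += Jump;
        private void OnDisable() => BeatEvents.OnDrumBeat -= Jump;

        private void Jump()
        {
            GetComponent<Rigidbody2D>().AddForce(Vector2.up * jumpForce, ForceMode2D.Impulse);
        }
    }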

Objectives for the next week:

Prepare a single level implementing player movement, including some platforms. Hani will prepare some early character designs and work with me to implement the audio detection feature. This level will then be used to test audio detection on some items to see if it is viable.

Audio detection in other games

Below I will identify uses of audio detection in other games. Specifically, I want to see how soundtracks affect the game in either a visual or a mechanical way.

Crypt of the NecroDancer: The soundtrack governs player input by ensuring that the player can only perform an action (attack, move or use items) on a beat of the currently playing song. The UI at the bottom shows a heart pulsing in time with the beat, with lines approaching it from both sides to indicate when a beat occurs. A beat here is the regular unit of time in the song, and different songs have different tempos.

Enemies are bound by the same beat-dependent rule, that is, they can only move or attack on a beat, so players can use this to their advantage. Upon defeating enemies a coin multiplier begins, so long as the player keeps performing actions with the correct timing. The environment also changes: the floor lights up like a checkerboard, with every other tile in a bright colour.
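
The input gating can be pictured as a timing-window check, something like the sketch below. This is entirely my own illustration, not NecroDancer's actual code:

    using UnityEngine;

    // Illustrative sketch: an action is only accepted if it lands within a
    // small window around the nearest beat of the current song.
    public class BeatGate : MonoBehaviour
    {
        [SerializeField] private float bpm = 120f;           // tempo of the current song
        [SerializeField] private float windowSeconds = 0.1f; // tolerance around each beat

        public bool IsOnBeat(float songTime)
        {
            float beatLength = 60f / bpm;         // seconds per beat
            float offset = songTime % beatLength; // distance into the current beat

            // On-beat if we are just after one beat or just before the next.
            return offset < windowSeconds || beatLength - offset < windowSeconds;
        }
    }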

Beat Hazard: Audio detection here seems to respond to the intensity/volume of the soundtrack. Particular beats trigger a flashing effect in the background of the game.

“Each element of the game is tied to a number of frequencies in the song. As these frequencies change they cause each element to build up pressure, so to speak,” -Steve Hunt, sole developer of Beat Hazard

[Source: https://kotaku.com/beat-hazard-one-mans-quest-to-make-your-music-hurt-you-5520256#:~:text=%22Each%20element%20of%20the%20game,different%20parts%20of%20the%20game.]

In the above interview he states that weapons fire in time with the music, enemy numbers and flight patterns are dictated by the song, and the boss of the level is generated by the music. Beat Hazard also allows players to use their own songs to inform the game. As such, each song generates its own level and is consistent in its behaviour, that is, it will use the same enemy pattern and boss each time.
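
My reading of the "pressure" idea, sketched as code: energy in an assigned frequency band accumulates until it crosses a threshold and is released as an in-game effect. This is my own interpretation of the interview, not Beat Hazard's implementation:

    using UnityEngine;

    // Illustrative sketch: a game element watches its assigned slice of the
    // spectrum, accumulates "pressure" from that band's energy, and fires an
    // effect once a threshold is reached.
    [RequireComponent(typeof(AudioSource))]
    public class FrequencyPressure : MonoBehaviour
    {
        [SerializeField] private int bandStart = 8;    // first spectrum bin to watch
        [SerializeField] private int bandEnd = 16;     // last spectrum bin to watch
        [SerializeField] private float threshold = 2f; // pressure needed to fire
        [SerializeField] private float decay = 0.5f;   // pressure lost per second

        private readonly float[] samples = new float[256];
        private AudioSource audioSource;
        private float pressure;

        private void Awake()
        {
            audioSource = GetComponent<AudioSource>();
        }

        private void Update()
        {
            audioSource.GetSpectrumData(samples, 0, FFTWindow.Hamming);

            // Add this band's current energy, and let old pressure leak away.
            for (int i = bandStart; i <= bandEnd; i++) pressure += samples[i];
            pressure = Mathf.Max(0f, pressure - decay * Time.deltaTime);

            if (pressure >= threshold)
            {
                pressure = 0f;
                Fire(); // e.g. spawn an enemy wave or flash the background
            }
        }

        private void Fire()
        {
            Debug.Log(name + ": pressure released on this frequency band");
        }
    }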

osu!: While osu! is a rhythm game, its use of soundtracks is still relevant. Players can download songs that come with their own levels; these make use of every beat to generate prompts for the player to act upon, either single-press buttons or click-and-drag sliders, and the speed of these prompts follows the tempo of the song. osu! can also be played with a mouse, although a pen and graphics tablet is recommended.

Source: https://osu.ppy.sh/wiki/hu/Game_mode

Conclusion

The above games utilise their songs in different ways; however, the soundtrack always informs the game, meaning the elements of the game react to it. It is evident that the song is analysed for particular patterns and the content is then generated for that particular song. Games also take advantage of the predictable nature of soundtracks: once players have listened to a song for the first time, they become more accustomed to it, increasing their familiarity with the pace or structure of the level that song generates (particularly true of Beat Hazard and osu!).

Our initial idea is to develop a 2D platformer as the prototype. I believe our first important decision will be how the game responds to the playing song. Both Hani and Kevin wish to implement visual elements, as they have already animated geometric shapes in Processing; hopefully we will be able to implement these too. This will be discussed at our next meeting.