Software Design Spring 2017 Final Project: SCREAMgame
The player: It’s a nice day outside so you decide to take a walk for the first time in like thirty weeks (because let’s face it, you don’t get out much do you?). Unfortunately all of your suspicions are right: the outside world is trying to kill you. Get out alive so you can live, and more importantly, play more games on your laptop. Also you’re a cute lil’ hamster.
The environment: Some nerd thought they could just take a walk? False. Try to kill them for lols. Even though they are a cute hamster. But since you aren’t a physical body, you’ve got to scream.
The gameplay: The player: Use the arrow keys to make the hamster jump, move forward, and move backward until you reach the goal. Make sure to avoid items that will cause certain death.
The environment: Scream at the laptop to activate the death traps.
Camille Xue: Camille is a first-year engineering student at Franklin W. Olin College of Engineering, studying Engineering with Computing. Her goals for this project included working with new libraries and integrating voice input with the game environment. Camille worked on the voice integration and the environment set-up.
Minju Kang: Minju is a first-year engineering student at Franklin W. Olin College of Engineering, studying Electrical Engineering. Her goals for this project were to learn how to use pygame and to work in a group on a semi-large-scale project. Minju’s contributions to this project were the general game structure design, the environment set-up, and the player set-up.
Nathaniel Tan: Nathaniel is a first-year engineering student at Franklin W. Olin College of Engineering, also studying Electrical Engineering. His goals for this project included learning how to integrate physics concepts into game design and learning more about game design in general. Nathaniel worked on the physics and collision detection engine and the level editor.
Prava Dhulipalla: Prava, like all of her project partners, is a first-year engineering student at Franklin W. Olin College of Engineering, studying Electrical and Computer Engineering. Her goals were to learn more about machine learning and about integrating voice input into a project. As a result, she worked on the neural network and the voice integration aspects of the code.
Our personal motivations, combined with the group motivation of creating a final product of which we were proud and about which we were excited, resulted in the idea for this project.
Ultimately, all of us not only engaged with our personal goals but also learned a lot about fields we wanted to explore. The project was a success not only because the final product worked and satisfied the requirements of our lovely Software Design professors, but because we fulfilled our personal motivations.
This visual shows the many components of the game, each of which helped us engage with our personal goals.
Please reference the README.md on our original repository in order to play our SCREAM game! Have fun!
Run this! Combines the player and the level for an A+ playing experience.
Level Editor: All individual levels are implemented in separate files, all in one folder. The code specifies which level to load, then finds the corresponding file in the level folder.
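A minimal sketch of how such a per-file level scheme might look. The folder name, file naming pattern, tile characters, and function names here are all illustrative assumptions, not the project’s actual identifiers:

```python
# Hypothetical level-loading sketch: one text file per level, looked up by
# number. LEVEL_DIR, the filename pattern, and the tile characters
# ('#' box, '!' trap, 'P' player start) are assumptions for illustration.
import os

LEVEL_DIR = "levels"  # folder holding one file per level

def parse_level(text):
    """Turn a text grid into (boxes, traps, start) tile coordinates."""
    boxes, traps, start = [], [], None
    for row, line in enumerate(text.splitlines()):
        for col, ch in enumerate(line):
            if ch == "#":
                boxes.append((col, row))
            elif ch == "!":
                traps.append((col, row))
            elif ch == "P":
                start = (col, row)
    return boxes, traps, start

def load_level(number):
    """Find the requested level file in the level folder and parse it."""
    path = os.path.join(LEVEL_DIR, "level_%d.txt" % number)
    with open(path) as f:
        return parse_level(f.read())
```

Keeping the parsing separate from file lookup makes individual level layouts easy to test without touching the filesystem.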
Player: The player set-up included generating the character and integrating keyboard commands with character movement.
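One way to sketch the keyboard-to-movement mapping, with the input logic kept as a pure function so it is easy to test. In the real game the booleans would come from pygame’s key state (e.g. `pygame.key.get_pressed()`); the speed constants are illustrative guesses, not the project’s tuned values:

```python
# Illustrative arrow-key movement logic, separated from pygame for testing.
WALK_SPEED = 4    # pixels per frame (illustrative value)
JUMP_SPEED = -12  # negative y is up in screen coordinates

def step_input(left, right, up, on_ground, vx, vy):
    """Return updated (vx, vy) for one frame of keyboard input."""
    vx = 0
    if left:
        vx -= WALK_SPEED
    if right:
        vx += WALK_SPEED
    if up and on_ground:  # only allow jumping while standing on something
        vy = JUMP_SPEED
    return vx, vy
```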
Environment: The environment set-up included generating the sky, the ground, the boxes, and the death traps.
Voice Processing: The game records audio for a short interval, then processes it to determine whether it was loud enough to make the death traps fall.
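The loudness step could be sketched as follows: reduce a recorded chunk of samples to a single RMS amplitude and compare it against a threshold. The recording itself (done with a library such as PyAudio in the real game) is omitted; the function names are assumptions:

```python
# Hedged sketch of the loudness check on a chunk of signed audio samples.
import math

def rms_amplitude(samples):
    """Root-mean-square loudness of a chunk of signed audio samples."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def scream_detected(samples, threshold):
    """True when the chunk is loud enough to drop the death traps."""
    return rms_amplitude(samples) >= threshold
```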
Neural Network: The network trains on the ‘a’, ‘e’, and ‘o’ vowel sounds and differentiates between them using machine learning methods, specifically a neural network algorithm.
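The shape of such a classifier might look like the sketch below: a single-hidden-layer network whose forward pass maps an audio feature vector to one score per vowel. The layer sizes, weights, and function names are illustrative; the real network would be trained on recorded vowel sounds:

```python
# Minimal illustrative vowel classifier: one hidden layer of sigmoids,
# then linear output scores, with the largest score picking the vowel.
import math
import random

VOWELS = ["a", "e", "o"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w1, b1, w2, b2):
    """Forward pass: hidden sigmoid layer, then linear output scores."""
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return [sum(wi * hi for wi, hi in zip(row, hidden)) + b
            for row, b in zip(w2, b2)]

def classify(x, params):
    """Return the vowel with the highest output score."""
    scores = forward(x, *params)
    return VOWELS[scores.index(max(scores))]

def random_params(n_in, n_hidden, n_out, seed=0):
    """Untrained starting weights; training would adjust these."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    w2 = [[rng.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
    b2 = [0.0] * n_out
    return w1, b1, w2, b2
```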
Physics and Collision Detection Engine: The physics work off of real-life kinematic equations, with the exception of the friction and gravity coefficients, which were determined by trial and error. The collision detection is fairly basic: it is two-dimensional and focuses on rectangular boxes.
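The kinematic update and rectangular overlap test described above might be sketched like this. The gravity and friction values are placeholders standing in for the coefficients the project tuned by trial and error:

```python
# Sketch of a per-frame kinematic step plus an axis-aligned rectangle
# collision test. GRAVITY and FRICTION are illustrative placeholders.
GRAVITY = 0.8    # added to vy each frame (tuned by trial and error)
FRICTION = 0.85  # horizontal velocity multiplier each frame

def step_physics(x, y, vx, vy):
    """Advance position one frame using simple kinematics."""
    vy += GRAVITY
    vx *= FRICTION
    return x + vx, y + vy, vx, vy

def rects_collide(a, b):
    """Axis-aligned overlap test for rects given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
```

In the actual game, pygame’s `Rect.colliderect` provides this same axis-aligned test directly.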
1 - Got voice input from the laptop and quantified it. Created a basic level with the environment and a player that could move back and forth via the keyboard.
2 - The voice input was reduced to a single amplitude. Collision detection and physics were added to the game environment, and the character can jump and interact with the environment.
3 - A threshold was set and a calibration period was introduced. Death traps were added.
4 - Death traps fall based on voice input. The player can die when coming in contact with interactive traps or land traps.
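The calibration period from milestone 3 could be sketched as below: sample the room’s ambient loudness for a moment, then set the scream threshold some margin above that baseline. The margin factor and function name are illustrative guesses, not the project’s actual values:

```python
# Hypothetical calibration sketch: derive the trap-trigger threshold from
# ambient amplitudes measured during a short calibration period.
def calibrate_threshold(ambient_amplitudes, margin=3.0):
    """Set the threshold to a multiple of the average background loudness."""
    baseline = sum(ambient_amplitudes) / len(ambient_amplitudes)
    return baseline * margin
```

Calibrating per session keeps the game playable in both quiet and noisy rooms without hand-tuning a fixed constant.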
1 - A three-dimensional game with volume and pitch controlling elevation of the landscape. Volume and pitch also control the path that the player has. The game would be played on separate laptops.
2 - A three-dimensional game, played on one laptop, with volume and pitch controlling elevation of the landscape but the arrow keys controlling the path of the player.
3 - A two-dimensional game, played on one laptop, with volume controlling elevation of the landscape and the arrow keys moving the player.
4 - A two-dimensional game, played on one laptop, with volume controlling death traps that kill a character. The player is controlled by the keyboard.
5 - A two-dimensional game, played on one laptop, with vowel sounds controlling the type of death trap that kills a character. The player is controlled by the keyboard.
Voice Processing: Initially, these links were merely used as reference points for getting audio from the microphone. However, debugging and troubleshooting issues made the code end up very similar to snippets of code located within these two links.
Neural Network: These links were used as references and information sources for creating the neural network algorithm. No code was directly copied from snippets located within these links.
The neural network is trained on the vowel sounds ‘a’, ‘e’, and ‘o’. However, more options could be included if it were trained on all the vowel sounds. This would probably require more work on the neural network to make it more accurate (though possibly the existing one would be fine). More recognizable vowel sounds mean more options in the game. Also, although the neural network theoretically works, Prava fell into the classic training-data hubris. This was fine in the context of the project and its learning goals, but it was still slightly demoralizing.
Voice Control: One idea that was discussed was having both elements of the play experience (controlling the player and controlling the environment) be voice-controlled. To do this, the game would have to be hosted on two different laptops. (Playing across two laptops wasn’t a significant next step because we felt that having two people scream and use keyboard commands on one laptop added to the gaming experience.)
Environment Build: Although the environment is voice-controlled, another option we considered (and this could go with implementing more vowel sounds) is being able to ‘build’ the environment: when pitch or volume rises, the environment builds up, and when pitch or volume falls, the environment goes deeper.
Hosted Online: Using something like Flask, it would be interesting and more convenient to host the game on the web instead of making players download a git repository and play it through the terminal window.