Controlling Agents in the Unity Game Engine

This repository contains the work for Project 2.1 (AI and Machine Learning) in Year 2 of the Bachelor Computer Science program at Maastricht University. The central topic is evaluating and analyzing the performance of Machine Learning (ML) solutions for controlling agents in real-time 3D video game environments.

Project Overview

The objective of this project is to:

  1. Apply and analyze state-of-the-art Deep Reinforcement Learning (DRL) algorithms for training agents in Unity game engine environments.
  2. Develop new simulated sensors for agents and experiment with training scenarios.
  3. Document the findings and code in a public GitHub repository for showcasing to future employers.

This project spans Periods 2.1, 2.2, and 2.3 of the academic year 2024-2025.


Contents

  • Getting Started
  • Phases and Deliverables
  • Technologies Used
  • Setup
  • Documentation
  • Contact
  • About

Getting Started

Prerequisites

  1. Install Unity (a Personal or Student license is recommended).
  2. Install Python (a version compatible with ML-Agents; a quick version check follows this list).
  3. Ensure Git is installed and configured.
  4. Clone this repository:
    git clone https://github.com/K33w3/Project2.1.git
    cd Project2.1
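
Before continuing, you can confirm the tools are on your PATH. This is just a sanity check; consult the ML-Agents documentation for the exact Python range supported by the release you install:

    # Confirm the toolchain is installed and discoverable
    python --version   # the release-21-era ML-Agents expects Python 3.10.x; check the docs for the exact range
    git --version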

Installing Dependencies

Follow the steps below to set up the required tools and dependencies:

  1. Unity ML-Agents Toolkit: Use the patched fork of ML-Agents (the release 21 branch with a NumPy compatibility fix):

    git clone https://github.com/DennisSoemers/ml-agents.git --branch fix-numpy-release-21-branch

    Follow the installation guide in that repository; a sketch of the pip install steps follows this list.

  2. Python Virtual Environment: Set up a virtual environment for Python dependencies:

    python -m venv venv
    source venv/bin/activate   # On Windows, use `venv\Scripts\activate`
    pip install -r requirements.txt
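
With the virtual environment active, install the ML-Agents Python packages from the cloned fork. This is a minimal sketch that assumes the fork keeps the standard ML-Agents repository layout (the ml-agents-envs and ml-agents packages at the repository root):

    # Run from the directory where the ml-agents fork was cloned, with the venv active
    cd ml-agents
    python -m pip install ./ml-agents-envs
    python -m pip install ./ml-agents
    # Confirm the trainer CLI is available
    mlagents-learn --help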

Phases and Deliverables

Phase 1: Project Setup

  • Deliverables:
    • Written project plan outlining steps, timelines, and risks.
    • Public GitHub repository with clear documentation.
  • Key Learning Goals:
    • Familiarity with Unity, ML-Agents, and Python virtual environments.
    • Initial code modifications and testing.

Phase 2: Agent Training and Sensor Development

  • Deliverables:
    • Train agents using DRL algorithms in ML-Agents (an example training command follows this list).
    • Develop a new sensor type for the "Soccer Twos" environment.
    • Present work during the Midway Evaluation.
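
Training runs are launched with the mlagents-learn CLI. The configuration path and run ID below are illustrative placeholders, not files shipped with this repository:

    # Launch training for the SoccerTwos behavior (config path is hypothetical)
    mlagents-learn config/SoccerTwos.yaml --run-id=soccer_ppo_01
    # Press Play in the Unity Editor when prompted, or pass --env=<path-to-build> for a standalone build
    # Monitor learning curves in TensorBoard
    tensorboard --logdir results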

Phase 3: Performance Analysis

  • Deliverables:
    • Analyze RL algorithm performance based on parameters, sensors, and environment complexity.
    • Written report and live demonstration.
  • Key Focus Areas:
    • Experiment reproducibility.
    • Use of Unity Profiler for performance insights.

Technologies Used

  • Unity Game Engine: For building and running real-time 3D simulations.
  • Unity ML-Agents Toolkit: For implementing DRL algorithms in Unity environments.
  • Python: For scripting and managing ML-Agents.
  • PPO (Proximal Policy Optimization): For stable policy training in discrete and continuous action spaces.
  • SAC (Soft Actor-Critic): For robust, sample-efficient performance in continuous action spaces (a sample trainer configuration follows this list).
  • Git/GitHub: For version control and collaboration.
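
ML-Agents selects the algorithm through the trainer_type key of its YAML trainer configuration. The sketch below writes a minimal PPO configuration for a behavior named SoccerTwos; the path, behavior name, and hyperparameter values are illustrative rather than taken from this repository, and unspecified keys fall back to ML-Agents defaults:

    # Write a minimal trainer configuration (path and values are illustrative)
    mkdir -p config
    cat > config/SoccerTwos.yaml <<'EOF'
    behaviors:
      SoccerTwos:
        trainer_type: ppo        # change to "sac" for Soft Actor-Critic
        hyperparameters:
          learning_rate: 3.0e-4
          batch_size: 2048
          buffer_size: 20480
        network_settings:
          hidden_units: 512
          num_layers: 2
        reward_signals:
          extrinsic:
            gamma: 0.99
            strength: 1.0
        max_steps: 5.0e7
    EOF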

Setup

  1. Clone the repository and install dependencies as described in Getting Started.
  2. Open the Unity project in the Unity Editor.
  3. Run the example ML-Agents environments to confirm the setup works; a quick smoke test is sketched below.
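
As a smoke test, you can start a short training run against one of the example environments bundled with the ml-agents checkout. The 3DBall config path below matches the upstream ML-Agents repository layout; the run ID is arbitrary:

    # Run from the ml-agents checkout; stop with Ctrl+C once training starts
    mlagents-learn config/ppo/3DBall.yaml --run-id=setup_check
    # When prompted, open the 3DBall scene in the Unity Editor and press Play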

Documentation

Documentation for this project is available in the repository:

  • Setup Guide: Step-by-step installation and setup instructions.
  • Experiments Guide: Details on training algorithms, sensor modifications, and performance analysis.

Contact

For questions or feedback, contact the project contributors through the repository's GitHub page.

About

This project aims to demonstrate how combining visual and auditory inputs can improve AI agent decision-making. Specifically, in Soccer Twos, auditory sensors enable agents to track the ball when it's out of sight, leading to smarter responses.
