4/21/2023

Super Hexagon game play

The reinforcement learning agent receives a reward of -1 if it dies and a reward of 0 otherwise.

The network has two convolutional streams. For the first stream, the frame is cropped to a square and resized to 60 by 60 pixels. Since the small triangle controlled by the player is barely visible in the first stream, a zoomed-in input is given to the second stream. For the fully connected part of the network, the feature maps of both streams are flattened and concatenated. See utils.Network for more implementation details.

Additionally, a threshold function is applied to the frame such that the walls and the player belong to the foreground and everything else belongs to the background. See superhexagon.SuperHexagonInterface._preprocess_frame for more implementation details.

The hyperparameters used can be found at the bottom of trainer.py, below if __name__ == '__main__':.

All six Rainbow extensions have been evaluated. Prioritized experience replay performs better at first; n-step returns significantly decreased performance; Double Q-Learning and Dueling Networks did not improve performance.
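The thresholding step described above can be sketched as follows. This is not the project's actual _preprocess_frame, just an illustration of binarizing a grayscale frame so that the bright walls and player become foreground and everything else becomes background; the threshold value 128 is an assumption for illustration.

```python
import numpy as np

def threshold_frame(frame, thresh=128):
    """Return a binary foreground/background mask for a grayscale frame.

    Pixels brighter than `thresh` (walls, player) map to 1 (foreground),
    everything else maps to 0 (background).
    """
    return (frame > thresh).astype(np.uint8)

# Example: a 2x2 frame with two bright pixels.
mask = threshold_frame(np.array([[0, 200], [130, 50]]))
# mask is [[0, 1], [1, 0]]
```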
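The flatten-and-concatenate step of the two-stream network, and the dueling aggregation evaluated as one of the Rainbow extensions, can be sketched as below. This is not the actual utils.Network, and the feature-map shapes are illustrative assumptions; it only shows how the two streams could be merged and how a dueling head combines value and advantages.

```python
import numpy as np

def combine_streams(feat_global, feat_zoomed):
    """Flatten the feature maps of both convolutional streams and
    concatenate them into one vector for the fully connected part."""
    return np.concatenate([feat_global.ravel(), feat_zoomed.ravel()])

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    return value + advantages - advantages.mean()

# Example: two streams with 4 feature maps of 5x5 each yield a
# 2 * 4 * 5 * 5 = 200-dimensional vector.
x = combine_streams(np.zeros((4, 5, 5)), np.ones((4, 5, 5)))
# x.shape is (200,)
```

Subtracting the mean advantage makes value and advantage identifiable, which is the standard motivation for the dueling architecture.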
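For the n-step extension mentioned in the results, a minimal sketch of the discounted n-step return under the reward scheme above (0 per step, -1 on death) looks like this; gamma = 0.99 is an assumed example value, not a hyperparameter taken from trainer.py.

```python
def n_step_return(rewards, gamma, bootstrap=0.0):
    """Discounted n-step return:
    r_t + gamma * r_{t+1} + ... + gamma^(n-1) * r_{t+n-1} + gamma^n * bootstrap
    """
    g = bootstrap
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# A death three steps ahead (rewards 0, 0, -1) propagates back as
# -gamma^2, so sparse -1 rewards reach earlier states only heavily
# discounted:
g = n_step_return([0, 0, -1], gamma=0.99)
# g == -0.99**2 == -0.9801
```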