Overview
This issue describes the plan to get to a minimum viable simulation. It's like a software requirements document, but it allows for more technical detail.
Step 0: Make A Simple Physics-based 2D Environment 🛠
This step partially satisfies my need for cognitive closure by starting with something to check off, but it also specifies the environment where the organisms live.
Requirements For The Simulation Environment
The simulation has basic physics/collisions/etc
Pymunk takes care of physics
The simulation can draw shapes (at least circles, squares, triangles, line-segments)
Pymunk makes shapes and Pyglet displays shapes
The simulation has outer borders, or at least some way to keep organisms in a finite space (yes, limited energy per organism plus a food source clustered in a small area is a reasonable solution)
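To make these environment requirements concrete, here is a stdlib-only sketch of the simulation loop: circles integrated forward in time and reflected off the outer borders so everything stays in a finite space. Pymunk would replace this hand-rolled physics; `World` and `Ball` are illustrative names, not library classes.

```python
from dataclasses import dataclass

@dataclass
class Ball:
    x: float
    y: float
    vx: float
    vy: float
    r: float

class World:
    """Toy stand-in for a Pymunk space: circles bouncing inside outer borders."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.balls = []

    def step(self, dt):
        for b in self.balls:
            b.x += b.vx * dt
            b.y += b.vy * dt
            # Reflect off the borders to keep every organism in a finite space.
            if b.x - b.r < 0 or b.x + b.r > self.width:
                b.vx = -b.vx
                b.x = min(max(b.x, b.r), self.width - b.r)
            if b.y - b.r < 0 or b.y + b.r > self.height:
                b.vy = -b.vy
                b.y = min(max(b.y, b.r), self.height - b.r)

world = World(640, 480)
world.balls.append(Ball(10, 240, -50, 0, 5))
for _ in range(120):              # two simulated seconds at 60 steps/second
    world.step(1 / 60)
assert all(b.r <= b.x <= 640 - b.r for b in world.balls)
```

With Pymunk, the body of `step` collapses to `space.step(dt)` and the borders become static `Segment` shapes; Pyglet then just draws each shape at its current position every frame.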
Edit: September 8, 2019
Marking as a low priority task for now because I want to minimize the complexity of the basic simulation to begin with.
Step 1: Implement A "Save State" Mechanism
Requirements
The simulation's "state" can be saved, including every object and relevant data attached to each object (e.g., location history?) to:
1. be able to restart from that state
Edit: September 8, 2019
Marking as low priority because I am not sure whether restarting from a given state is worthwhile. A restart from a given state is possible because the replay feature in Step 1: Implement A "Save State" Mechanism #2 works by saving the binary representation of the entire "space" (a Pymunk object), which includes all shapes inside the simulation. Perhaps this feature will be more useful as the simulation gets more complex, but even then the code should be trivial to implement. The non-trivial part is considering how that initial state might interact with any randomly generated numbers, and whether those consequences are acceptable. ¯\_(ツ)_/¯
2. be able to replay a timelapse of the simulation
The "save state" mechanism will be fast, perhaps saving to a file in batches after building a cache?
The "save state" mechanism will use pickle because this example and @ryanprior suggest it.
Edit: Sep 13, 2019
While Step 1: Implement A "Save State" Mechanism #2 got this done, the same code will also be re-implemented with anything but pickle, because pickle is slow and results in giant files. (See Step 1: Implement A "Save State" Mechanism #2 for more details.)
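A minimal sketch of that batch-and-cache idea, assuming plain dict snapshots stand in for whatever binary state the simulation actually produces; `StateRecorder` and `load_history` are illustrative names, not an existing API:

```python
import os
import pickle
import tempfile

class StateRecorder:
    """Caches snapshots in memory and pickles them to disk in batches,
    so the simulation loop isn't stalled by a file write on every step."""
    def __init__(self, path, batch_size=100):
        self.path, self.batch_size = path, batch_size
        self.cache = []

    def record(self, snapshot):
        self.cache.append(snapshot)
        if len(self.cache) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.cache:
            with open(self.path, "ab") as f:   # append one pickled batch
                pickle.dump(self.cache, f)
            self.cache = []

def load_history(path):
    """Replay helper: read every batch back and flatten into one list."""
    history = []
    with open(path, "rb") as f:
        while True:
            try:
                history.extend(pickle.load(f))
            except EOFError:
                return history

# Record five fake per-step snapshots in batches of two, then replay them.
path = os.path.join(tempfile.mkdtemp(), "states.pkl")
rec = StateRecorder(path, batch_size=2)
for step in range(5):
    rec.record({"step": step, "positions": [(step, step)]})
rec.flush()                        # don't forget the partial final batch
assert [s["step"] for s in load_history(path)] == [0, 1, 2, 3, 4]
```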
Step 2: Extend The 2D Environment With Important Stuff For Evolution
Requirements
The simulation can arbitrarily spawn food
The simulation will allow the user to increase/decrease the food spawn rate
The organisms can move in any direction (up, down, left, right, etc.)
The organisms can sense another object inside their "field of view" (FOV)
The organism has a limited angle and limited range for FOV.
here's a crude drawing of ray casting to calculate "field of view."

        /|
       / |
      /  |
     /   |
O----o--.|
     \   |
      \  |
       \ |
        \|
There exists some way for an organism to know that a food object in its FOV is genuine food
The organism can choose to eat food
The organism can physically interact (collision) with food
The organism can "grab" food and "un-grab" food (throw?)
Can the organism see color? (Yes, via a multi-channel FOV? But is that necessary? Why not a hex code, to use fewer data points?)
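A stdlib sketch of the limited-angle, limited-range FOV check (occlusion by other objects is ignored here; Pymunk's segment queries could handle that part). The function name and parameters are assumptions for illustration:

```python
import math

def in_fov(org_xy, org_heading, target_xy, half_angle, fov_range):
    """True if the target lies within the organism's field of view:
    within `fov_range` units and `half_angle` radians of the heading."""
    dx = target_xy[0] - org_xy[0]
    dy = target_xy[1] - org_xy[1]
    dist = math.hypot(dx, dy)
    if dist > fov_range:
        return False
    if dist == 0:
        return True
    # Smallest signed angle between the heading and the target direction.
    diff = math.atan2(dy, dx) - org_heading
    diff = (diff + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_angle

# Organism at the origin, facing right, 45-degree half-angle, range 100.
assert in_fov((0, 0), 0.0, (50, 10), math.pi / 4, 100)      # ahead and close
assert not in_fov((0, 0), 0.0, (-50, 0), math.pi / 4, 100)  # behind
assert not in_fov((0, 0), 0.0, (500, 0), math.pi / 4, 100)  # too far away
```

Casting one such ray per angular step across the FOV arc gives the array-of-projections picture drawn above.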
Step 3: Extend The Organisms With Simple Brains
After reading Up and Down the Ladder of Abstraction (UDLA), I think the simulation would benefit significantly from incremental development with lots of visual representations of each step in the development process. So before adding fancy neural network brains, the organisms in the simulation should be able to follow a simple handwritten ruleset.
Requirements
The organisms can follow a simple rule like, "if food is in range of sensors, then move to it and eat."
The simulation's state can be recorded while running a simple ruleset.
The simulation will display an interactive visualization of all states, at all times
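The "if food is in range of sensors, then move to it and eat" rule can be sketched as a pure update function over positions; every name and constant here is illustrative:

```python
import math

SENSOR_RANGE = 40.0   # how far the organism can sense food
EAT_RANGE = 1.0       # close enough to eat
SPEED = 5.0           # distance moved per tick

def step_organism(org, food):
    """One tick of the handwritten ruleset: if any food is in sensor range,
    move toward the nearest piece; eat it once close enough."""
    in_range = [f for f in food if math.dist(org, f) <= SENSOR_RANGE]
    if not in_range:
        return org, food, "idle"
    target = min(in_range, key=lambda f: math.dist(org, f))
    d = math.dist(org, target)
    if d <= EAT_RANGE:
        food.remove(target)
        return org, food, "eat"
    # Move SPEED units along the direction to the target (or onto it).
    t = min(SPEED / d, 1.0)
    org = (org[0] + (target[0] - org[0]) * t,
           org[1] + (target[1] - org[1]) * t)
    return org, food, "move"

org, food = (0.0, 0.0), [(20.0, 0.0), (300.0, 300.0)]
actions = []
for _ in range(10):
    org, food, act = step_organism(org, food)
    actions.append(act)
assert "eat" in actions          # the nearby food gets found and eaten
assert (300.0, 300.0) in food    # the far one stays outside sensor range
```

Logging `(org, food, act)` each tick is exactly the per-step state the recorder from Step 1 would capture for replay.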
Step 4: Extend The Organisms With Automated Brains
This step is the fuzziest in my mind. Should a convolutional neural network be used? How about a recurrent neural network? Reinforcement learning? I have no clue what's best.
Requirements
The organisms have neural network brains
The brains can output
move_up
move_down
move_left
move_right
eat
grab
ungrab
The brains take as input
An array of ray-cast-projections for FOV in Red
An array of ray-cast-projections for FOV in Blue
An array of ray-cast-projections for FOV in Green
the previous K actions, where K is some arbitrary number
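As a sketch of that input/output interface only, here is a tiny stdlib forward pass: three ray-channel arrays plus a one-hot history of the previous K actions go in, and one of the seven actions comes out. The layer sizes and the random (untrained) weights are assumptions; a real implementation would use a proper NN library plus some training or evolution scheme.

```python
import math
import random

ACTIONS = ["move_up", "move_down", "move_left", "move_right",
           "eat", "grab", "ungrab"]
N_RAYS = 16   # ray-cast projections per color channel (assumed size)
K = 4         # how many previous actions are fed back as input

random.seed(0)
IN = 3 * N_RAYS + K * len(ACTIONS)   # R, G, B rays + one-hot action history
HID = 12

# Random weights; evolution or learning would eventually set these.
w1 = [[random.uniform(-1, 1) for _ in range(IN)] for _ in range(HID)]
w2 = [[random.uniform(-1, 1) for _ in range(HID)] for _ in range(len(ACTIONS))]

def brain(red, green, blue, prev_actions):
    """One forward pass: concatenate the inputs, apply two dense layers,
    and pick the highest-scoring action."""
    x = list(red) + list(green) + list(blue)
    for a in prev_actions:               # one-hot encode the action history
        x += [1.0 if a == name else 0.0 for name in ACTIONS]
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    scores = [sum(w * h for w, h in zip(row, hidden)) for row in w2]
    return ACTIONS[scores.index(max(scores))]

rays = [0.5] * N_RAYS
action = brain(rays, rays, rays, ["eat"] * K)
assert action in ACTIONS
```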