1 July 2019 at 19 h 53 min · #13881 · Aboveallinsanity (Participant)
Dear SOFA community,
I recently discovered this amazing project and I am trying to use it to train a soft robot controller via RL. (Apologies beforehand for the naivety of the questions.)
I am currently using SOFA with the Soft Robotics plugin. I am running the executable as a subprocess in a Python script, something like subprocess.call(["runSofa", "myscene.py", "-g", "batch"]). I am using batch because I don't want to visualize the simulation. My goal is to wrap the simulator in a Python reinforcement learning environment like Gym. In the process, some things I couldn't figure out are:
1) What is a good way to advance SOFA one step at a time programmatically, without using the GUI? Currently the program quits after finishing the specified number of steps; I would like it to pause after each step until I advance it manually.
2) Is there a way to modify the PythonScriptController while the simulation is running? Currently my policy is hard-coded in myscene.py, but I would like a neural network to do the job, and the network might be updated after each step. Say I already have the module that computes the network's output; how can I pass it to the Python controller at runtime?
Sorry if these questions are somewhat trivial. I am just starting to use SOFA, and I'd be hugely grateful for any insight you can provide. Thank you very much!
Thomas
2 July 2019 at 17 h 04 min · #13892 · Bruno Marques (Participant)
Hi and welcome to SOFA!
Sadly, the runSofa GUI is not meant to allow stepping programmatically.
Luckily, there are some workarounds:
1. SofaPython3 is a plugin in development. Its purpose is to provide Python packages for SOFA that let you define your own simulation loop in Python by calling SOFA's animate/step methods manually. However, I would not encourage you to use it for now, as it is still in active development and the API is likely to change a lot.
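For reference, a manual stepping loop with SofaPython3 would look roughly like the sketch below. Treat it as pseudo-code: the plugin's API is still moving, so the exact module and method names may differ in your version.

```python
# pseudo-code sketch of a SofaPython3-style manual loop; names may differ
import Sofa
import SofaRuntime

root = Sofa.Core.Node("root")
# ... build your scene under `root` here ...
Sofa.Simulation.init(root)

for step in range(1000):
    Sofa.Simulation.animate(root, root.dt.value)  # advance exactly one step
    # inspect state / query your policy between steps here
```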
2. A (less elegant) alternative is to keep using SofaPython, place a PythonScriptController in your SOFA scene, and hook a socket on its onBeginAnimationStep() method.
This method is called at each step of the simulation. While you can't truly pause the simulation, you could send a packet through a socket (over localhost) to a separate process (your learning algorithm) with all the useful state from the current simulation step, then wait for that process to send an acknowledgement packet before returning from the method.
# something like this (pseudo-code)
def onBeginAnimationStep(self, dt):
    sock.send(some_data)
    ack = sock.recv()
    return
Concerning your second question, I'm not 100% sure what you mean by "modifying the PythonScriptController", but I assume your learning algorithm sends back updated data that you need to insert into your simulation (say, the Young's modulus of an FEM ForceField, for instance).
Using the approach I suggested, your ACK package could contain the updated values you’d like to apply to your simulation, and your PythonScriptController could take those new values and set them in your scene’s components.
# something like this (pseudo-code)
def onBeginAnimationStep(self, dt):
    sock.send(some_data)
    ack = sock.recv()
    y = parse(ack)
    self.rootNode.myForceField.youngModulus = y
    # calling reinit() on the component may be necessary
    # to update its internal values
    self.rootNode.myForceField.reinit()
    return
I hope my answer helps. Good luck with SOFA, the first time is the worst 😉
5 July 2019 at 9 h 51 min · #13908 · faichele (Participant)
- Zykl.io UG
As an alternative, you might want to consider using ROS to couple your RL environment with SOFA, using the SOFA ROS connector plugin. I have worked on the Neurorobotics Platform (https://neurorobotics.net/) in the past, where ROS served as middleware to integrate Spiking Neural Network simulators with robotics simulations. The ROS connector for SOFA offers a similar possibility, in that you have a convenient way to exchange data between a SOFA simulation and an external (Python-based) framework.
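As a rough illustration of that coupling (this is not the SOFA ROS connector's actual API; the topic names and message types below are invented, and `my_policy` is a hypothetical function), the Python side would be standard rospy publish/subscribe:

```python
# pseudo-code sketch; topic names and message types are invented
import rospy
from std_msgs.msg import Float64, Float64MultiArray

def on_state(msg):
    # msg.data would carry whatever the SOFA side publishes each step
    action = my_policy(msg.data)          # hypothetical policy function
    action_pub.publish(Float64(action))

rospy.init_node("rl_agent")
action_pub = rospy.Publisher("/sofa/action", Float64, queue_size=1)
rospy.Subscriber("/sofa/state", Float64MultiArray, on_state)
rospy.spin()
```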
With best regards,
Fabian
15 July 2019 at 16 h 31 min · #13956 · PierreShg (Participant)
- Defrost Team - Inria Lille
I am working on controlling robots with RL and Sofa.
Concerning launching SOFA simulations with subprocess: there is a nice launcher in sofa/tools/sofa-launcher, which can be useful if you want to start several simulations, possibly with different parameters (think A2C, for instance).
As Bruno said, there is currently no nice way to advance the simulation step by step. We are working on it in the SofaPython3 plugin. I am using the hack he described: by placing a .recv() call in your controller's onBeginAnimationStep(), the controller blocks there and waits for your command to reach it.
For your second question: my neural net lives in another program, in which I do the learning, and I only pass the commands via socket to the PythonScriptController in SOFA. That way your net and your simulation stay somewhat independent.
I have actually wrapped my simulation in gym and it works nicely.
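A minimal sketch of such a wrapper, written without the gym dependency so it stays self-contained (the class name, the byte protocol, and the placeholder reward are all assumptions of mine; a real version would subclass gym.Env and derive the reward from the observation):

```python
class SofaEnv:
    """Gym-style interface over a running SOFA process, reached through
    `transport` (any object with send()/recv(), e.g. a connected socket)."""

    def __init__(self, transport):
        self.transport = transport

    def reset(self):
        self.transport.send(b"reset")    # ask SOFA to reload the scene
        return self.transport.recv()     # first observation

    def step(self, action):
        self.transport.send(action)      # unblocks onBeginAnimationStep()
        obs = self.transport.recv()      # state after one simulation step
        reward, done = self._score(obs)
        return obs, reward, done, {}

    def _score(self, obs):
        # placeholder: a real env derives reward/termination from `obs`
        return 0.0, False
```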
I don't know how much that helps; if you tell us in more detail what you are trying to do, I may be able to help you some more.