27 August 2019 at 0 h 52 min #14161
I would like to compare the behaviour of a simulated phantom and a real one. As such, I captured RGBD images of interactions with the real camera and was wondering if I can put a virtual depth camera in the simulation. It seems I can set the viewpoint using the OpenGL viewer, and SOFA’s GUI allows recording screenshots. Is it possible to capture screenshots from a Python script? Is there an existing method to estimate the depth map of the scene from a particular viewpoint?
Jie Ying

2 September 2019 at 16 h 55 min #14175
Thank you very much for this interesting question!
Coupling the fields of simulation and computer vision has been investigated and is still an active topic! I will ask the developers in this field to get you an answer and maybe access to their code. With their work, it should be possible to include in a SOFA simulation the point cloud coming from the RGBD camera and then make the comparison. Would this be what you need?
I have never captured a screenshot from Python, but if you mimic the keyboard action ALT+C, it should work.
I’ll get back to you.
Hugo

3 September 2019 at 4 h 19 min #14180
Thanks for the info! Yes, that sounds exactly like what I need. I’m currently using the MeshExporter as a workaround, but it would be interesting if SOFA can simulate the point cloud from a particular camera position.
Jie Ying

5 September 2019 at 16 h 57 min #14215
PS: for people simply looking to get screenshots/videos of the simulation (regarding the title of your forum topic), you can use the VideoRecorder (Edit -> Video Recorder Manager). To activate it, press “V” during the simulation.

6 September 2019 at 9 h 27 min #14219
I currently work on interfacing realsense cameras with SOFA simulation, especially in the context of liver deformation tracking.
I’m not sure I understand your question. Do you want to reconstruct a point cloud from RGBD frames offline inside SOFA?
That should be fairly easy to do. Mainly, one should save the camera’s intrinsics and the RGB and depth streams to a video file. Then, depending on the camera model you’re using, 2D-to-3D reprojection implementations tend to differ slightly.
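For reference, the 2D-to-3D reprojection step can be sketched with a simple pinhole camera model. This is a minimal illustration, not Realsense-specific code: the function name is mine, the intrinsics `fx, fy, cx, cy` and a metric depth image are assumed, and real camera SDKs typically ship their own deprojection helpers that also handle lens distortion.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth image into an (H*W, 3) point cloud.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth,
    where (u, v) are pixel coordinates and depth is along the optical axis.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

The resulting points live in the camera frame; comparing them against simulation geometry still requires applying the camera extrinsics.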
Working exclusively with Intel’s Realsense cameras, I can only vouch for their C++ framework, which I find top notch. I can’t speak for other RGBD camera manufacturers.

6 September 2019 at 20 h 41 min #14221
Thanks, Hugo, for the introduction.
I’m also working with an Intel Realsense camera (SR300). I’m trying to check the validity of my simulation by comparing the output of the depth camera with what the simulation shows. Currently, I’m reading out the positions of my mesh and comparing them to the point cloud. This is fine, but since SOFA has all the information, I was wondering whether it’s possible to stream out RGBD frames from a particular camera viewpoint (given the camera’s intrinsic and extrinsic properties).
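For the mesh-positions-versus-point-cloud comparison described above, one simple metric is the mean distance from each mesh vertex to its nearest captured point. A minimal sketch (the function name is mine; brute force, so a KD-tree would scale better for large clouds):

```python
import numpy as np

def mean_nearest_distance(mesh_pts, cloud_pts):
    """Mean distance from each (N, 3) mesh vertex to its nearest
    point in the (M, 3) cloud. Brute force O(N*M): fine for small
    meshes, use a spatial index for large ones."""
    diff = mesh_pts[:, None, :] - cloud_pts[None, :, :]
    d = np.linalg.norm(diff, axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean()
```

Both sets must of course be expressed in the same frame first, i.e. after applying the camera extrinsics.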
Jie Ying

10 September 2019 at 10 h 28 min #14222
I will let @omar guide you, he’ll have more accurate and up-to-date replies regarding his current work.
Hugo

10 September 2019 at 11 h 00 min #14224
@JieYing, Sofa doesn’t have a component that does exactly that (yet?)
Concerning the intrinsic parameters, they are camera specific, so I guess they’ll be the same as the Realsense’s (in your case, the SR300).
Extrinsics can be handled with SOFA’s TransformEngine. It binds to a mechanical object and applies scaling/rotation/translation to it on demand.
Or you can also just put them in a rotation/translation matrix and compute your point cloud’s new positions when reprojecting from 2D to 3D.
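The rotation/translation mapping mentioned above amounts to a rigid transform of the reprojected points. A minimal sketch (`apply_extrinsics` is a hypothetical helper name; `R` and `t` are assumed to map camera coordinates to world coordinates):

```python
import numpy as np

def apply_extrinsics(points, R, t):
    """Rigidly transform (N, 3) points: p' = R @ p + t.

    points @ R.T computes R @ p for every row p at once.
    """
    return np.asarray(points, dtype=float) @ np.asarray(R).T + np.asarray(t)
```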
I’ll make sure to share a code snippet later if you want.
Hope this helps
Omar

18 September 2019 at 20 h 37 min #14267
Thanks for the info. A code snippet would be useful if it’s not too much trouble.
Jie Ying

25 September 2019 at 11 h 20 min #14280
Do not hesitate to mention Omar as @omar so that he receives email notifications.
Hugo

2 October 2019 at 9 h 48 min #14336
Hi @jieying, sorry for the late answer.
I’m on vacation at the moment, with little to no access to my code.
I’ll be sure to send a snippet once I’m back, by next week.
Omar

30 January 2020 at 17 h 37 min #15173
Rishabh (Participant)
- Graduate researcher
Good to see this thread. I am working on a similar robotics problem where I need the RGBD information from a virtual camera inside the simulation (possibly frame-by-frame while the simulation is running?)
Is this possible? And can I change the viewpoint as well?
Rishabh

4 February 2020 at 18 h 45 min #15176
I am very happy to have found this thread. I am trying to train machine learning models to assist doctors during surgery, and we are using SOFA to simulate the environment (i.e. the patient’s body and the organs inside it).
To do this, I would like to access the rendered simulation view from within Python3. According to my understanding, this is not currently possible with the existing bindings. Given this, I want to add my own pybind11 bindings to expose the scene in Python, but I am unsure what object/method in the SOFA framework contains this information (I have little experience in C++ but I am willing to get my hands dirty). Any advice is much appreciated.
Balazs

7 February 2020 at 19 h 13 min #15192
Welcome to the SOFA forum!
Machine learning trained with simulations, very trendy! Interesting topic!
A plugin already exists that allows you to use SOFA and interact with the simulation from a Python3 environment: the SofaPython3 plugin. In a Python script, you can design, run and interact with a simulation and its parameters (and much more!).
Is this what you are looking for?
What do you mean exactly by “access the rendered simulation view from within Python3”?
Hugo

5 March 2020 at 10 h 59 min #15287
Damien Marchal (Participant)
- CNRS/Defrost Team
At DEFROST, for grabbing the screen we use SofaPython3 and pygame. The rendering of SOFA is done in an OpenGL canvas using pygame. As we have full control of the simulation and rendering loop, we can grab individual frames and plug them into a machine learning algorithm.
I don’t have a lot of time to provide code for that, but I think Pierre Sheggs may have some, so I’ll poke him.
Damien.

9 March 2020 at 18 h 45 min #15381
@hugo, sorry for the late reply. We have indeed found the SofaPython3 plugin, but it does not quite meet our needs (it seems to be in development). Essentially, I would like to have a Python variable that contains an image of what I would see if the SOFA GUI were open. As far as I can tell, this functionality does not exist, although I can take a screenshot of the simulation and save it to a file. This would work but would be extremely slow.
@damien-marchaluniv-lille1-fr, thanks for your reply as well. We have made some progress since my last message, but we are not C++ experts and our current solution is not very performant. We have copied and repurposed the HeadlessGUI to allow access to the rendered simulation view, and have defined a pybind buffer to allow reading the image from Python. I would be very glad for any code from Pierre!
Balazs

17 March 2020 at 17 h 54 min #15421
SofaPython3 is no longer in a transitional phase; it is stable. Obviously, development on the project continues, but that is only a sign of good health! Not all plugins are compatible with SofaPython3 yet, but that will come soon.
You would like to compare, at each time step, an image and the resulting point of view in the simulation. Is this correct?
Hugo

1 April 2020 at 11 h 39 min #15619
Yes, that is correct. Do you know which objects would be the most logical way to access these values? Right now we are using the OpenGL function glReadPixels (https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glReadPixels.xhtml), but maybe there is a better way. Feel free to send me any resources that might help us along our way.
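One detail worth noting with glReadPixels: it returns pixel rows bottom-up, so the raw buffer has to be flipped to get a conventional top-down image. A minimal sketch of that post-processing step (`buffer_to_image` is a hypothetical helper name; it assumes the buffer was read with GL_RGB / GL_UNSIGNED_BYTE):

```python
import numpy as np

def buffer_to_image(buff, width, height):
    """Convert a raw RGB byte buffer, as returned bottom-up by
    glReadPixels, into a top-down (height, width, 3) uint8 array."""
    img = np.frombuffer(buff, dtype=np.uint8).reshape(height, width, 3)
    return np.flipud(img)  # first buffer row is the bottom of the frame
```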
Balazs

9 April 2020 at 16 h 01 min #15682
Damien Marchal (Participant)
- CNRS/Defrost Team
A few months ago I was doing screenshots using Python3. For that, I used pygame to open a window with an OpenGL context, then the SofaPython3 functions to build and control the simulation and trigger the SOFA rendering into this OpenGL context.
It looked more or less like this:
```python
import pygame
from OpenGL.GL import *
from OpenGL.GLU import *
import Sofa

# NB: camera, scene and makeAScreenShot are defined elsewhere in the original script

def doRendering(display, t):
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    gluPerspective(45, display[0] / display[1], 0.1, 50.0)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    cameraMVM = camera.getOpenGLModelViewMatrix()
    glMultMatrixd(cameraMVM)
    Sofa.Simulation.draw(scene)

pygame.display.init()
display = (800, 600)
pygame.display.set_mode(display, pygame.DOUBLEBUF | pygame.OPENGL)
time = 0.0
while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            quit()
    Sofa.Simulation.glewInit()
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glEnable(GL_LIGHTING)
    glEnable(GL_DEPTH_TEST)
    doRendering(display, time)
    # here you can use pygame to grab the frame:
    # https://stackoverflow.com/questions/17267395/how-to-take-screenshot-of-certain-part-of-screen-in-pygame
    makeAScreenShot()
    time += 0.01
    pygame.display.flip()
```
I have no time right now to test the code, so please don’t consider it a working example, more a source of inspiration.

20 April 2020 at 13 h 01 min #15852
PierreShg (Participant)
- Defrost Team - Inria Lille
Sorry for the late reply, my code base was a mess and we had to fix an issue between OpenGL and SofaPython3.
I can provide you with a simple rendering function I use.
The code below will create a pygame display, fetch the opengl context and print it on the display. You can use the commented code and the PIL Image library to export the image to a file for example.
I use this code while running with python3 directly, not with runSofa, but I don’t see why you couldn’t put it in a Sofa Controller.
I don’t know if that is what you were looking for. If not, please reply to this thread.
Also, if you enhance the code, please share it; OpenGL code is very tricky and time-consuming to build and debug.
```python
import pygame
import numpy as np
from OpenGL.GL import *
from OpenGL.GLU import *
import Sofa

def simple_render(rootNode):
    """
    Get the OpenGL context to render an image (snapshot) of the simulation state.
    """
    pygame.display.init()
    display_size = (800, 600)
    pygame.display.set_mode(display_size, pygame.DOUBLEBUF | pygame.OPENGL)

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glEnable(GL_LIGHTING)
    glEnable(GL_DEPTH_TEST)
    Sofa.Simulation.glewInit()

    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    gluPerspective(45, display_size[0] / display_size[1], 0.1, 50.0)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    cameraMVM = rootNode.camera.getOpenGLModelViewMatrix()
    glMultMatrixd(cameraMVM)

    Sofa.Simulation.draw(rootNode)

    _, _, width, height = glGetIntegerv(GL_VIEWPORT)
    buff = glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
    image_array = np.frombuffer(buff, np.uint8)
    if image_array.size != 0:
        image = image_array.reshape(height, width, 3)
    else:
        image = np.zeros((height, width, 3), dtype=np.uint8)
    image = np.flipud(image)

    ### Debug
    # from PIL import Image
    # img = Image.fromarray(image, 'RGB')
    # img.show()
    # pygame.display.flip()
    return image
```

20 April 2020 at 14 h 14 min #15865

21 October 2020 at 3 h 56 min #17431
Arthur (Moderator)
- Toyota Technological Institute at Chicago
How would one do this in python, say within an animate function:
“I never captured a screenshot from Python, but if you mimic keyboard action : ALT+C it should work. ”
I would like to record video using v20.06 (with python2 scripts). And if I can mimic keyboard input it should be as easy as mimicking a “v” so this sounds ideal.
Working with Sofa v20.12 and SofaPython3 I found a solution. (Currently the pygame example code isn’t compatible, but this worked for me).
```python
# encoding: utf-8
# !/usr/bin/python3
import Sofa
import SofaRuntime
import Sofa.Gui


class scene_interface:
    """Scene_interface provides step and reset methods"""

    def __init__(self, dt=0.01, max_steps=300):
        self.dt = dt
        # max_steps: how long the simulator should run. Total length: dt*max_steps
        self.max_steps = max_steps
        # root node in the simulator
        self.root = None
        # the current step in the simulation
        self.current_step = 0

        # Register all the common components in the factory.
        SofaRuntime.importPlugin('SofaOpenglVisual')
        SofaRuntime.importPlugin("SofaComponentAll")

        self.root = Sofa.Core.Node("myroot")

        ### create some objects to observe
        self.place_objects_in_scene(self.root)

        # place a light and a camera
        self.root.addObject("LightManager")
        self.root.addObject("SpotLight", position=[0, 10, 0], direction=[0, -1, 0])
        self.root.addObject("InteractiveCamera", name="camera",
                            position=[0, 10, 0], lookAt=[0, 0, 0],
                            distance=37, fieldOfView=45,
                            zNear=0.63, zFar=55.69)

        # start the simulator
        Sofa.Simulation.init(self.root)
        # start the gui
        Sofa.Gui.GUIManager.Init("Recorded_Episode", "qt")
        Sofa.Gui.GUIManager.createGUI(self.root, __file__)

    def place_objects_in_scene(self, root):
        ### these are just some things that stay still and move around
        # so you know the animation is actually happening
        root.gravity = [0, -1., 0]
        root.addObject("VisualStyle", displayFlags="showWireframe showBehaviorModels showAll")
        root.addObject("MeshGmshLoader", name="meshLoaderCoarse", filename="mesh/liver.msh")
        root.addObject("MeshObjLoader", name="meshLoaderFine", filename="mesh/liver-smooth.obj")
        root.addObject("EulerImplicitSolver")
        root.addObject("CGLinearSolver", iterations="200", tolerance="1e-09", threshold="1e-09")

        liver = root.addChild("liver")
        liver.addObject("TetrahedronSetTopologyContainer", name="topo", src="@../meshLoaderCoarse")
        liver.addObject("TetrahedronSetGeometryAlgorithms", template="Vec3d", name="GeomAlgo")
        liver.addObject("MechanicalObject", template="Vec3d", name="MechanicalModel",
                        showObject="1", showObjectScale="3")
        liver.addObject("TetrahedronFEMForceField", name="fem",
                        youngModulus="1000", poissonRatio="0.4", method="large")
        liver.addObject("MeshMatrixMass", massDensity="1")
        liver.addObject("FixedConstraint", indices="2 3 50")

    def step(self):
        # step through time: this steps the simulation
        Sofa.Simulation.animate(self.root, self.dt)
        # just to keep track of where we are
        self.current_step += 1
        ### A better example would also show how to read and edit values
        # through scripts, which would likely be useful if you are running
        # without a normal gui.
        # return true if done
        return self.current_step >= self.max_steps

    # save a screenshot from the position of where we set the camera above
    def record_frame(self, filename):
        Sofa.Gui.GUIManager.SaveScreenshot(filename)


def main():
    a = scene_interface()
    done = False
    while not done:
        factor = a.current_step
        done = a.step()
        a.record_frame(str(factor) + ".png")


if __name__ == '__main__':
    main()
```
For reference I built with
SofaPython3 commit: 184206f126acf0c5d45416fc23cb37baf1971fa5
and Sofa commit: 184206f126acf0c5d45416fc23cb37baf1971fa5

30 October 2020 at 14 h 47 min #17493
Would you like to take images at each time step?
Or would you like your Python script to save an image only at very specific moments?
I doubt that with Python 2.7 you will be able to trigger keyboard events; I never tried it myself. Could you give it a try?
Pressing the usual “v” key does not suit your need, I guess?
Hugo

18 November 2020 at 0 h 40 min #17709
trannguyenle (Participant)
- Aalto University
I just started with SOFA 2 days ago, so I am totally new to this. I followed some tutorials to get to know SOFA; so far so good. My research is going to incorporate both vision and haptics for a robotics application with deformable objects.
I had a look at the chat above and noted that @omar is working on “interfacing realsense cameras with SOFA simulation, especially in the context of liver deformation tracking”. I want to do the same thing, but tracking the deformation of other objects. Do you have some code or a tutorial to get on board with this? It would be really nice for me to have a starting point regarding this matter.

18 November 2020 at 8 h 22 min #17713