27 August 2019 at 0 h 52 min #14161
I would like to compare the behaviour of a simulated phantom and a real one. To that end, I captured RGBD images of interactions with the real camera and was wondering if I can put a virtual depth camera in the simulation. It seems I can set the viewpoint using the OpenGL viewer, and SOFA's GUI allows recording screenshots. Is it possible to capture screenshots from a Python script? Is there an existing method to estimate the depth map of the scene from a particular viewpoint?
Jie Ying
2 September 2019 at 16 h 55 min #14175
Thank you very much for this interesting question!
Coupling the fields of simulation and computer vision has been investigated and is still an active topic! I will ask the developers working in this area to get you an answer and maybe access to their code. With their work, it should be possible to include in a SOFA simulation the point cloud coming from the RGBD camera and then make the comparison. Would this be what you need?
I have never captured a screenshot from Python, but if you mimic the keyboard shortcut ALT+C, it should work.
I’ll get back to you.
Hugo
3 September 2019 at 4 h 19 min #14180
Thanks for the info! Yes, that sounds exactly like what I need. I'm currently using the MeshExporter as a workaround, but it would be interesting if SOFA could simulate the point cloud from a particular camera position.
Jie Ying
5 September 2019 at 16 h 57 min #14215
PS: for people simply looking to get screenshots/videos of the simulation (regarding the title of this forum topic), you can use the VideoRecorder (Edit->Video Recorder Manager). To activate it, press "V" during the simulation.
6 September 2019 at 9 h 27 min #14219
I currently work on interfacing RealSense cameras with SOFA simulations, especially in the context of liver deformation tracking.
I'm not sure I understand your question. Do you want to reconstruct a point cloud from RGBD frames offline inside SOFA?
That should be fairly easy to do. Mainly, one should save the camera's intrinsics and the RGB and depth streams to a video file. Then, depending on the camera model you're using, 2D-to-3D reprojection implementations tend to differ slightly.
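For a standard pinhole camera model, the back-projection step looks roughly like the minimal sketch below (an illustration, not code from this thread; fx, fy, cx, cy stand for whatever intrinsics your camera reports, and depth is assumed to be in metres):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an H x W depth image (metres) into an N x 3 point cloud
    with a pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # discard pixels with no depth reading
```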
Working exclusively with Intel's RealSense cameras, I can only vouch for their C++ framework, which I find top-notch. I can't speak for other RGBD camera manufacturers.
6 September 2019 at 20 h 41 min #14221
Thanks, Hugo, for the introduction.
I'm also working with an Intel RealSense camera (SR300). I'm trying to check the validity of my simulation by comparing the output of the depth camera with what the simulation shows. Currently, I'm reading out the positions of my mesh and comparing them to the point cloud. This works, but since SOFA has all the information, I was wondering whether it's possible to stream out RGBD frames from a particular camera viewpoint (given the camera's intrinsic and extrinsic parameters).
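(For reference, that mesh-versus-point-cloud comparison can be sketched as below; this is only an illustrative example assuming both sets already live in the same coordinate frame, not the actual comparison code used here.)

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_error(mesh_vertices, point_cloud):
    """For each simulated mesh vertex (N x 3), find the nearest captured
    point (M x 3) and return the mean nearest-neighbour distance."""
    tree = cKDTree(point_cloud)
    distances, _ = tree.query(mesh_vertices)
    return float(distances.mean())
```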
Jie Ying
10 September 2019 at 10 h 28 min #14222
I will let @omar guide you; he'll have more accurate and up-to-date replies regarding his current work.
Hugo
10 September 2019 at 11 h 00 min #14224
@JieYing, SOFA doesn't have a component that does exactly that (yet?).
Concerning the intrinsic parameters, they are camera-specific, so I guess they'll be the same as your RealSense's (in your case, the SR300).
Extrinsics can be handled with SOFA's TransformEngine: it binds to a mechanical object and applies scaling/rotation/translation to it on demand.
Alternatively, you can map them into a rotation/translation matrix and compute your point cloud's new position when reprojecting from 2D to 3D.
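(A minimal sketch of that second option, with placeholder values for the rotation matrix R and translation vector t that you would replace with your actual extrinsics:)

```python
import numpy as np

def apply_extrinsics(points, R, t):
    """Transform an N x 3 point cloud by camera extrinsics:
    p' = R @ p + t for every point p."""
    return points @ R.T + t

# Placeholder extrinsics: 90-degree rotation about Z, small translation.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.1, 0.0, 0.0])
# cloud_in_sim_frame = apply_extrinsics(cloud_in_camera_frame, R, t)
```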
I’ll make sure to share a code snippet later if you want.
Hope this helps
Omar
18 September 2019 at 20 h 37 min #14267
Thanks for the info. A code snippet would be useful if it’s not too much trouble.
Jie Ying
25 September 2019 at 11 h 20 min #14280
Do not hesitate to mention Omar as @omar so that he receives email notifications.
Hugo
2 October 2019 at 9 h 48 min #14336
Hi @jieying, sorry for the late answer.
I'm on vacation at the moment, with little to no access to my code.
I'll be sure to send a snippet once I'm back next week.
Omar
30 January 2020 at 17 h 37 min #15173
Good to see this thread. I am working on a similar robotics problem where I need the RGBD information from a virtual camera inside the simulation (possibly frame by frame while the simulation is running?).
Is this possible? And can I change the viewpoint as well?
Rishabh
4 February 2020 at 18 h 45 min #15176
I am very happy to have found this thread. I am trying to train machine learning models to assist doctors during surgery, and we are using SOFA to simulate the environment (i.e. the patient’s body and the organs inside it).
To do this, I would like to access the rendered simulation view from within Python 3. As far as I understand, this is not currently possible with the existing bindings. Given this, I want to add my own pybind11 bindings to expose the scene to Python, but I am unsure which object/method in the SOFA framework holds this information (I have little experience with C++, but I am willing to get my hands dirty). Any advice is much appreciated.
Balazs
7 February 2020 at 19 h 13 min #15192
Welcome to the SOFA forum!
Machine learning trained with simulations, how trendy! An interesting topic!
A plugin already exists that allows you to use SOFA and interact with the simulation within a Python 3 environment: the SofaPython3 plugin. In a Python script, you can design, run and interact with a simulation and its parameters (and much more!).
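For instance, a minimal headless sketch (assuming the SofaPython3 plugin is installed; exact component availability can vary between SOFA versions):

```python
import Sofa
import Sofa.Simulation

# Build a (here empty) scene graph without launching any GUI.
root = Sofa.Core.Node("root")
root.dt.value = 0.01

# Initialise and step the simulation from Python.
Sofa.Simulation.init(root)
for step in range(100):
    Sofa.Simulation.animate(root, root.dt.value)
    # After each step, scene data (e.g. a MechanicalObject's
    # .position.value) can be read here, for instance to feed
    # a machine learning training loop.
```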
Is this what you are looking for?
What do you mean exactly by “access the rendered simulation view from within Python3”?