Hardcore "Rock n Roll" DepthKit Studio Field Test
I decided to do a kind of documentary field test first!
So, I decided to set up in a regular room with unshaded windows to double-check how the sensors react to the potential interference from the IR component of natural sunlight.
(Since this test was mainly about getting to know the new functions and handling of the software, getting the cleanest possible mesh and texture result was not the priority; I already know from earlier shoots that if you fully control the light and the shooting location, the results are very good.)
First, I rebuilt the calibration board setup from Cory Allen's ten-minute calibration clip in the DepthKit Studio documentation.
As a result, I can say that calibration, and taking your time with it, is crucial ... positioning the sensors is a significant element in getting the best results, and more sensors are better than fewer ... for a client who wants hyper "indie" results, I would say between 8 and 12 is a good amount ...
Then, I followed the rules of pairing adjacent sensors to get the best results.
I played a lot with the new, more straightforward cleaning function, and it is easy to use, but I loved working with a good greenscreen and creating an extra alpha mask in After Effects. Since the older version of DepthKit Studio had this function, I believe they should reactivate it and at least give users the option to go the greenscreen alpha-mask route if they want, as that result was even better than the already good result of the new, simpler function. For people who don't have a greenscreen room, or who don't know how to light a greenscreen or handle the spill, the new cleaning functions are better and faster, especially with the newly introduced cleaning-box option; but for people who know how to deal with green, the result could be even higher-class.
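To illustrate the greenscreen alpha-mask idea in the simplest possible terms, here is a minimal chroma-key sketch in Python with NumPy. This is not DepthKit's or After Effects' keyer; the `threshold` and `softness` parameters are illustrative assumptions, and a real key would also handle spill suppression and edge softening.

```python
import numpy as np

def green_screen_alpha(rgb, threshold=1.15, softness=0.25):
    """Very rough chroma-key sketch: pixels whose green channel
    dominates red and blue are pushed toward alpha 0.
    `threshold` and `softness` are illustrative values, not taken
    from any real keyer."""
    rgb = rgb.astype(np.float32) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # How strongly green dominates the other two channels per pixel
    dominance = g / (np.maximum(r, b) + 1e-6)
    # Alpha stays 1 below the threshold, then fades to 0 as
    # green dominance grows beyond it
    alpha = np.clip(1.0 - (dominance - threshold) / softness, 0.0, 1.0)
    return alpha
```

A pure-green pixel keys out to alpha near 0, while a neutral gray pixel keeps alpha 1; in a real pipeline, this mask would then be refined and exported as the alpha channel alongside the color plate.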
I experimented with export formats and speeds:
- OBJ Sequence -> for use in any post software
- PNG Sequence -> for post-processing inside Unity
- combined MP4 clips -> for direct use through the DepthKit Unity plugins
Of course, I also tried all the different export sizes.
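When exporting long OBJ or PNG sequences, I find it useful to sanity-check the result before handing it off to post. A minimal Python sketch, assuming frames are named with a trailing frame number (e.g. `frame_00012.obj`; DepthKit's actual naming may differ, so adjust the pattern to your exporter):

```python
import os
import re

def check_frame_sequence(folder, ext=".obj"):
    """Scan an export folder for numbered frame files, then report
    gaps in the frame numbering and any zero-byte files.
    The trailing-number naming pattern is an assumption."""
    pattern = re.compile(r"(\d+)" + re.escape(ext) + r"$")
    frames = {}
    for name in os.listdir(folder):
        m = pattern.search(name)
        if m:
            frames[int(m.group(1))] = os.path.join(folder, name)
    if not frames:
        return [], []
    lo, hi = min(frames), max(frames)
    missing = [n for n in range(lo, hi + 1) if n not in frames]
    empty = [p for p in frames.values() if os.path.getsize(p) == 0]
    return missing, empty
```

Running this over an export folder flags dropped frames and truncated writes early, which is much cheaper than discovering them inside Unity or HoloEdit.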
I tested the Unity Plugin workflows ...
I even tested importing an old 6-sensor recording and exporting that old project to see how fast and clean the new cleaning and export results are.
Speed-wise, my main volumetric recording machine is faster, but the second one can do all the functions as well; it only takes a bit longer.
I also tested the old high-end mesh export route to Arcturus HoloEdit again.
And I gained some insights that will be valuable for future DepthKit Studio shoots.
As a RESULT, the NEW DepthKit Studio 070 is another fantastic step for the Volumetric Community.
As always in this field, the software still has some minor things to improve, but the personal, blazing-fast, and professional responses of Cory Allen and James George in their DepthKit Forum are outstanding!
A dedicated volumetric creator can achieve decent meshed and textured assets with DepthKit Studio, and you can either continue with the DepthKit Unity plugins or take the asset into HoloEdit and enter the next level of volumetric post-production.
But it is a serious journey in which you have to keep control of everything: light, shiny materials, the calibration process, and the custom positioning of the sensors according to the movement and performance of the recorded person.