Martin
Demmer
Optimization test thanks to Arcturus Evaluation
After resolving the question of whether I should re-mesh first and then run face detection, or skip re-meshing entirely (and especially never re-mesh after face detection, as that discards the results), I now also have a texture to share in which you can see the focus on the face ...
1024 x 1024 texture, face-aware & low-poly reduction
When I encode a 1K face-aware texture after having done a 2K face-aware texture, the result feels extremely similar to the 2K one ... which means it saves loading time while looking the same -> "NICE" ... As a next step, I have to test how the results look when I reduce the face-aware setting from "1" to "0.7".
0.7 Setting for face-aware textures:
A quick Unity test in between ... other HoloEdit tests ... (I also already tested retargeting blend weights.)
First, I tested Unity 2022.3.19f1, but I had problems with the new "Meta All-in-One SDK" ...
I didn't get the locomotion running ... I could move around the (???), but that was simply a user error on my part.
Nothing went wrong with the Arcturus Unity plugin itself; it runs super smoothly.
I switched build platforms and later tested 2020.3.48f1 with the Oculus XR Plugin 1.13.1.
I put the player controller into the hierarchy, and XR locomotion worked right away ...
And the Arcturus plugin ran stably and smoothly as well ...
I still have to figure out why "HoloSuitePlayerURP+HDRPUnlit" is the only setting I get for my asset ...
I guess it is another simple Unity user error :) ...
My specialty is clearly volumetric recording and how to use it properly for a new way of storytelling.
In Unity, I am still at a basic level. But as you can see, I got it running ;)
For documentation, I made an OBS screen recording of my very first VR walk around the asset :)
The OMS asset shown here was created from a PLY sequence (30,000 triangles) plus a 2K texture sequence; I kept the texture size at 2K but made the textures face-aware and optimized the meshes. I exported it to OMS + MP4 (including skeleton information and head-retargeting blend weights); the asset went from 5 GB down to 1 GB for one minute of footage.
Next test: retargeting the head for a "look at" function. There is another sample scene for this in the Arcturus Unity package, and I have already swapped the sample scene's asset for my own. For the moment, I remove the Unity prefab "HeadRetargeting Skeleton" and put my generated prefab in its place. Somehow, the head will not follow my movements, although with the sample scene's asset it works effortlessly ...
Before going into retargeting, I double-checked the SSDR compression in HoloEdit. As I mentioned, the input PLY + PNG stream was 5 GB; after running the SSDR stage, the exported OMS + MP4 together are only 200 MB ... I still have to check the visual difference in Unity, but I guess it will look quite similar.
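To put those numbers in perspective, here is a rough back-of-the-envelope calculation (a minimal Python sketch; the sizes are the ones reported above, decimal units are assumed, i.e. 1 GB = 1000 MB, and the clip length is one minute):

```python
# Rough size/bitrate arithmetic for the reported compression results.
# Assumes decimal units (1 GB = 1000 MB) and a 1-minute clip.

input_mb = 5000      # PLY + PNG input stream: ~5 GB
optimized_mb = 1000  # after face-aware textures + mesh optimization: ~1 GB
ssdr_mb = 200        # after the SSDR stage: OMS + MP4 together

# Reduction factors relative to the raw input stream.
ratio_optimized = input_mb / optimized_mb  # 5x smaller
ratio_ssdr = input_mb / ssdr_mb            # 25x smaller

# Average bitrate of the 1-minute SSDR export, in megabits per second.
bitrate_mbps = ssdr_mb * 8 / 60

print(f"mesh/texture optimization: {ratio_optimized:.0f}x reduction")
print(f"after SSDR stage:          {ratio_ssdr:.0f}x reduction")
print(f"average playback bitrate:  {bitrate_mbps:.1f} Mbit/s")
```

So the SSDR export streams at roughly 27 Mbit/s on average, which is in the range of ordinary high-bitrate video, hence my expectation that it should load and play comfortably.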
Background on SSDR in the HoloEdit documentation: LINK
I will report back here ...