One of the possibilities of 3D applications that intrigues me the most is Augmented Reality (AR). From camera properties such as the focal length, we can determine the projection matrix of an iOS device's built-in camera.
Given that projection matrix and pattern recognition on frames of the video stream, it is possible to calculate the camera's position in real-world coordinates.
This in turn gives us the opportunity to calculate where an object would appear to the camera if it were placed in world space.
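The two steps above (intrinsics to projection matrix, then projecting a world-space point) can be sketched roughly as follows. This is a minimal illustration under my own assumptions, not the actual implementation: the function names are hypothetical, the numeric values are made up, and the exact sign conventions depend on the rendering API in use (OpenGL-style conventions are assumed here).

```python
# Sketch: derive an OpenGL-style projection matrix from pinhole-camera
# intrinsics (focal lengths fx/fy and principal point cx/cy in pixels),
# then project a camera-space point with it.

def projection_from_intrinsics(fx, fy, cx, cy, width, height, near, far):
    """Return a 4x4 row-major projection matrix using OpenGL clip-space
    conventions (camera looks down -Z); sign conventions vary per API."""
    return [
        [2.0 * fx / width, 0.0, 1.0 - 2.0 * cx / width, 0.0],
        [0.0, 2.0 * fy / height, 2.0 * cy / height - 1.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near),
         -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project_point(P, point):
    """Project a 3D camera-space point to normalized device coordinates."""
    x, y, z = point
    v = [x, y, z, 1.0]
    clip = [sum(P[r][c] * v[c] for c in range(4)) for r in range(4)]
    w = clip[3]
    return (clip[0] / w, clip[1] / w, clip[2] / w)

# Hypothetical intrinsics for a 1920x1080 stream (not measured values).
P = projection_from_intrinsics(fx=1500.0, fy=1500.0, cx=960.0, cy=540.0,
                               width=1920, height=1080, near=0.1, far=100.0)

# A point 2 m straight ahead of the camera should land at the image center.
ndc = project_point(P, (0.0, 0.0, -2.0))
```

A point on the optical axis projects to the center of normalized device coordinates, which is a quick sanity check that the matrix is wired up consistently. On iOS, the real intrinsics can be obtained from the camera calibration data rather than guessed.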

I use a marker image displayed on my PC monitor to track the position of the object.
Below you'll see an impression of what is possible so far:
Model credits: 3DGenerator (tf3dm.com)

I'm currently testing this feature and fine-tuning the parameters.

Fun Facts: 
One of these models does not load at all in the otherwise excellent Blender application.
This project was in fact the first time in roughly 25 years of software development that I actually used knowledge acquired during my graduation project:
"Triangulation range-finding based on a position-sensitive device array", carried out at the Faculty of Applied Physics (Department of Imaging Physics) at Delft University of Technology (TU Delft).

Back to the roots
Back then, that work produced two publications:
"Range-finding camera based on a position-sensitive device array" (Elsevier: Sensors and Actuators)
"Video-speed triangulation range imaging" (Traditional and Non-Traditional Robotic Sensors, Springer International Publishing AG)

Nice to go back to the roots :-)