The newly announced Google Pixel 3 and 3 XL phones do impressive things with machine learning in their camera app. The ability to deliver 2x zoom with resolution comparable to an optical lens by merging multiple exposures is genius. Their portrait mode is also the best yet on a phone, with remarkable separation of foreground and background depth of field driven by learning algorithms that can handle hair transitions and other tricky edges, all with only a single lens.
When it comes to video, however, the Pixel is not as good as either Samsung's or Apple's latest flagship phones. It tops out at 4K 30p, and while its 240fps slow motion matches the iPhone XS in frame rate, it records at only 720p instead of 1080p. Google seems to have beefed up the phone for still images and selfies and left the video features as a secondary priority. The slow motion modes are essentially identical to last year's Pixel 2 and 2 XL: 1080p at 120fps and 720p at 240fps.
NVIDIA has been hard at work on the problem of interpolating high-frame-rate video from footage shot at lower frame rates. We have had this technology since the late 1990s with the advent of Twixtor, and it has been refined over the decades in tools like Twixtor Pro and Adobe's Optical Flow in After Effects. You are still not getting real temporal detail, since the new frames are created by estimating motion vectors and blending pixel values between the original frames.
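To see why interpolated frames carry no new temporal detail, here is a minimal sketch of the crudest possible approach: doubling a clip's frame rate by blending each pair of neighboring frames 50/50. This toy example uses NumPy and is an assumption for illustration only; real tools like Twixtor and Optical Flow warp pixels along estimated motion vectors before blending, which avoids the ghosting a plain blend produces on fast motion.

```python
import numpy as np

def interpolate_midframes(frames):
    """Double the frame rate by inserting a synthetic frame
    between each pair of originals.

    Each new frame is a straight 50/50 blend of its neighbors,
    so no real temporal information is recovered -- the in-between
    content is entirely fabricated from the frames on either side.
    """
    frames = [np.asarray(f, dtype=np.float32) for f in frames]
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append(0.5 * a + 0.5 * b)  # synthetic in-between frame
    out.append(frames[-1])
    return [f.astype(np.uint8) for f in out]

# A 3-frame clip becomes 5 frames, e.g. turning 120fps into 240fps.
clip = [np.full((4, 4, 3), v, dtype=np.uint8) for v in (0, 100, 200)]
doubled = interpolate_midframes(clip)
print(len(doubled))           # 5
print(doubled[1][0, 0, 0])    # 50, the blend of 0 and 100
```

Motion-vector methods replace the fixed 50/50 blend with per-pixel displacement estimates, and NVIDIA's approach goes further by learning those estimates from thousands of real videos.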
We explored this technique in our post on interpolation here and why it is no substitute for a real slow motion camera. NVIDIA's new method uses machine learning trained on 11,000 videos to arrive at a more convincing result. Considering the relatively small sample size, we can imagine a future where hundreds of thousands or millions of sample clips are used to generate near-flawless interpolation. The technique demands serious computation and large data sets, so it is not yet ready for the mass market, but cloud processing could change that very soon.
The Latest on Hi Speed Affordable Imaging!