Tag Archives: NVIDIA

Rife-App Creates Higher Frame Rates 25x Faster!

A few months back we took a look at Dain app and how it was able to use AI and machine learning to create in-between frames from almost any source footage, producing something that looked and felt like real footage shot with higher-fps cameras.  The algorithm was so revolutionary that it took the world by storm, making older re-timing software from Adobe and others look antiquated and underpowered.  The Dain-App was great, and it was a pay-what-you-want app, but it had an Achilles' heel: the software required a powerful Nvidia GPU with as much VRAM as you could muster to convert and re-time footage.

The new Rife-App, the direct successor to Dain App by the same creator, GRisk, is up to 25x faster than the original, improves the algorithm, and in many examples betters it by creating more seamless transitions. The flow of frames is frankly jaw-droppingly beautiful, especially on low-frame-rate animation.   We expect 2D animation studios will eat this app up immediately, and even 3D animation studios could reduce their render times by calculating fewer frames and using Rife-App to increase them to 24p, 30p, or 60p from a lower source like 20fps or 12fps.   → Continue Reading Full Post ←
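Interpolators in the RIFE family typically double the frame rate with each pass, so hitting a target like 60p from a low-fps render is a quick power-of-two calculation. Here is a back-of-the-envelope sketch (our own illustration, not Rife-App's actual interface; the rife_passes helper is hypothetical):

import math

def rife_passes(source_fps: float, target_fps: float) -> int:
    # Each doubling pass multiplies the frame rate by 2, so the number
    # of passes needed is ceil(log2(target / source)).
    return max(0, math.ceil(math.log2(target_fps / source_fps)))

for src, dst in [(12, 24), (12, 60), (20, 60)]:
    n = rife_passes(src, dst)
    print(f"{src}fps -> {dst}fps needs {n} doubling pass(es), "
          f"giving {src * 2 ** n}fps before retiming down to {dst}fps")

A 12fps render reaches 24p in a single pass, while a 60p target overshoots (96fps or 80fps) and would then be retimed or decimated down to the delivery rate.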

Dain app GPU and why you should wait for 2021!

Dain app GPU

Probably no computer-related technology has received more attention on the PC side than GPUs.  Nvidia launched the 3000 series of cards with aggressive pricing, completely obliterating the previous 2000 series GPUs for much less money. Things are great for PC gamers, machine learning coders, and 3D animators in 2020 with these levels of performance, except for the fact that there is a complete scarcity of GPU cards in most lines, including but not limited to the 2000 and 3000 series, for a variety of reasons.

Dain app, the machine learning frame rate interpolation software, lists CUDA compute capability 5.0 as the minimum requirement, or a GeForce GTX 750 as the minimum card to run it. But that does not mean it will be fast or even able to finish interpolating your high-res footage. In fact, interpolating the frame rate of a 720p clip needs about 10-11GB of VRAM on the card, as it runs the entirety of the calculation in video memory for the predictive algorithms to work.
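Before committing to hours of rendering, it is worth confirming what your card actually offers. A quick sanity check with PyTorch (our own snippet, not part of Dain app; the 10-11GB figure is the estimate above):

import torch  # assumes a CUDA-enabled PyTorch build

if not torch.cuda.is_available():
    raise SystemExit("No CUDA-capable GPU detected; Dain app will not run.")

props = torch.cuda.get_device_properties(0)
vram_gb = props.total_memory / 1024 ** 3
print(f"GPU: {props.name}")
print(f"Compute capability: {props.major}.{props.minor} (needs 5.0 or higher)")
print(f"VRAM: {vram_gb:.1f} GB")

if (props.major, props.minor) < (5, 0):
    print("Below CUDA compute capability 5.0 -- unsupported.")
elif vram_gb < 10:
    print("Expect to lean on the 'Split frames into sections' workaround.")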

Dain app and the GPU Shortage:

Dain App will run on any Nvidia card that supports CUDA compute capability 5.0 or higher, including the following:

  • GeForce GTX 750 Ti, GeForce GTX 750, GeForce GTX 960M, GeForce GTX 950M, GeForce 940M, GeForce 930M, GeForce GTX 860M, GeForce GTX 850M, GeForce 845M, GeForce 840M, GeForce 830M, GeForce GTX 870M
  • GeForce GTX Titan X, GeForce GTX 980 Ti, GeForce GTX 980, GeForce GTX 970, GeForce GTX 960, GeForce GTX 950, GeForce GTX 750 SE,
    GeForce GTX 980M, GeForce GTX 970M, GeForce GTX 965M
  • Nvidia TITAN Xp, Titan X,
    GeForce GTX 1080 Ti, GTX 1080, GTX 1070 Ti, GTX 1070, GTX 1060,
    GTX 1050 Ti, GTX 1050, GT 1030,
    MX350, MX330, MX250, MX230, MX150, MX130, MX110
  • NVIDIA TITAN RTX,
    GeForce RTX 2080 Ti, RTX 2080 Super, RTX 2080, RTX 2070 Super, RTX 2070, RTX 2060 Super, RTX 2060,
    GeForce GTX 1660 Ti, GTX 1660 Super, GTX 1660, GTX 1650 Super, GTX 1650, MX450
  • GeForce RTX 3090, RTX 3080, RTX 3070, RTX 3060 Ti

However, since the app runs in video memory (VRAM), you need a card with a minimum of 4GB of it to have any sort of success at interpolating frames, and even then the card will have too little VRAM to do anything over VGA resolution. There is a workaround, however: the "Split frames into sections" feature renders small pixel buckets of each frame and then merges the parts back into an aligned final frame when done. This lets you render footage at up to 4K at higher frame rates without buying more video memory.
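Conceptually the feature works like tiled rendering. A minimal sketch of the idea (our own illustration; interp_fn stands in for whatever model does the actual interpolation, and the tile and padding sizes are assumptions):

import numpy as np

def interpolate_in_sections(frame_a, frame_b, interp_fn, tiles=(2, 2), pad=16):
    # Run the interpolator on overlapping tiles so each call fits in
    # VRAM, then stitch the tile cores back into one full frame.
    h, w = frame_a.shape[:2]
    th, tw = -(-h // tiles[0]), -(-w // tiles[1])  # ceiling division
    out = np.zeros_like(frame_a)
    for i in range(tiles[0]):
        for j in range(tiles[1]):
            # Padded tile bounds so seams have image context around them.
            y0, y1 = max(0, i * th - pad), min(h, (i + 1) * th + pad)
            x0, x1 = max(0, j * tw - pad), min(w, (j + 1) * tw + pad)
            result = interp_fn(frame_a[y0:y1, x0:x1], frame_b[y0:y1, x0:x1])
            # Keep only the unpadded core of each tile in the output.
            cy0, cy1 = i * th, min(h, (i + 1) * th)
            cx0, cx1 = j * tw, min(w, (j + 1) * tw)
            out[cy0:cy1, cx0:cx1] = result[cy0 - y0:cy1 - y0, cx0 - x0:cx1 - x0]
    return out

The overlap padding is what keeps the interpolator from being completely blind at tile borders, but as noted further down, artifacts from joining segments can still appear.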

How to Create Slow Motion Videos with DAIN APP | AI Frame Interpolation by GreenBox:

This workaround is very slow; a few-second 4K clip can take days to render on a mid-range PC.  Your best bet is to get a faster GPU with plenty of VRAM. In our view, a minimum of 10GB of VRAM, or better yet 12GB, is preferable for the best performance.  In Dain app your card's CUDA cores are the primary speed accelerator, but without enough VRAM the process becomes slow as molasses.

The good news is that new Nvidia GPUs have more VRAM than ever before.  The just-launched 3000 series ranges from 24GB at the high end down to 8GB on the 3060 Ti at the low end.  However, none of these cards are available at this time at their suggested retail prices. Scalpers bought up virtually the entire free supply of cards and are selling them on eBay and Amazon at ridiculously high prices, with markups anywhere from 40% to 150%.

You would think the 3060 Ti FE, starting at $399 for an 8GB card, would be ideal for the Dain app: cheap, great performance, and close to the ideal 10GB of VRAM. But therein lies the problem.  8GB will force you into segmented rendering for higher frame rates, which limits your speed and frame sizes. Render times balloon, and artifacts from joining the segments can show up in some instances.

What to do?

If money is no object, then we suggest you get an RTX 3090 card with 24GB of VRAM. This is the ideal setup, but we are talking a hefty premium, as MSRP cards are nowhere to be found and relying on scalpers will cost you dearly.  You could still get a 3080 with 10GB of VRAM, or a 3070 or 3060 Ti with 8GB, for about 40% more money.

There is a better option in our view: wait for next year. Nvidia is expected to ramp up production of 3000 series cards by Q1 2021, offering better-priced options for current cards and also a new 3060 card launching with fewer CUDA cores and ray tracing/Tensor cores but with a whopping 12GB VRAM option.

With 12GB of VRAM, the 3060 will be ideal for DAIN app on a budget, able to render footage directly without segmenting the frames. You do take a big cut in CUDA cores compared to the 3060 Ti: rumors say the 4,864 CUDA cores of the 3060 Ti will drop to 3,840 on the regular 3060, a cut of just over 1,000 cores.  It will still have more CUDA cores than most of the 2000 series, so it will remain a very capable card; a standard 2080, for example, features only 2,944 CUDA cores.  You will really be getting better technology for a smaller price tag for gaming, graphics, and machine learning applications like Dain app.

Card Options Today?

1. Nvidia RTX 3000 Series

We start with the RTX 3000 series. You can get them today at high prices, but pricing should fall to much lower levels in time as supply catches up with demand.



2. Nvidia RTX 2000 Series:

Previous-generation 2000 series cards are another option, though as we noted above, the shortage has hit most lines, the 2000 series included, and pricing reflects it.



3. Get in the EVGA Queue!

EVGA has set up a product queue at https://www.evga.com/ that lets you put your name on a waiting list; when your turn comes, the company emails you the right to buy a card within 8 hours. If you do not buy it, the slot goes to the next person in line and you will have to register for a new one.

We feel this is great for two reasons: you get the product at MSRP, and you refrain from supporting the scalping market.  The downside is that you may have to wait weeks or even months for a slot to become available.

4. Wait for next year!

If you wait until 2021, Nvidia will hold a January announcement event for RTX featuring the rumored RTX 3060 card, said to be a renamed 3050 Ti.  There will be an option with 12GB of VRAM, which should be the best in price-to-performance and come in under $350 USD once it ships in quantity.

Final note: 

We feel you should only buy a card now, at inflated prices, if you absolutely need it for mission-critical work.  If you can wait, please do so to combat price gouging and the scalpers who destroy the legitimate market for technology parts.

We have rarely seen such a blatant attack on the consumer. RTX cards have sold out in mere seconds from online sources due to bots that continuously scan for new stock and snatch it up automatically.   By not buying from scalpers you support the community at large and save money in the process.

You can read our article on Dain App and the interpolation of footage, which lets you create very convincing super slow motion from almost any frame rate. The app shines even more with high-speed footage: you can create a 4,000fps video from a 1,000fps source that looks almost as good as the real thing. Of course, the new frames and data are created out of thin air, so for mission-critical work and lab studies Dain app will not be an option.

If you get a new card and run it through Dain app, please share your results and footage below.  Merry Christmas -HSC


Multiply Your Video Frame Rate with Dain-App!

Multiply Your Video Frame Rate

We got over 20 messages with essentially the same video sample in our inbox this week. They all touted the new interpolation from the experimental DAIN app (Depth-Aware Video Frame Interpolation), which analyzes footage with a neural network algorithm that crunches motion vectors and even handles what seemed impossible before, object occlusion, to generate higher frame rates from lower-fps sources.  The technology is fascinating and should improve further with more training and samples over the coming years.

For stop motion animators this is a complete game-changer, as you can now animate at as little as 8fps and then interpolate to 30fps or 60fps with very little tearing or artifacting, as long as the footage is well lit and objects are clearly defined.  To make matters more interesting, it also handles footage with shallow depth of field, yielding impressive results.

Multiply Your Video Frame Rate with Interpolation or the “I” Word for Slow Motion Enthusiasts:

We visited the interpolation topic in the past in our Fake Slow Motion article and concluded that, at the time, the quality of interpolation, while good, was far from usable: you really could not substitute interpolated versions for real high-fps footage except in very simple cases.
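The failure mode of the simplest re-timing is easy to demonstrate. A minimal sketch (our own illustration) of the crude frame blending that gave "fake slow motion" its reputation:

import numpy as np

def naive_midframe(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    # A plain 50/50 cross-dissolve between neighboring frames: anything
    # that moved shows up twice as a ghost, which is why blended slow
    # motion is so easy to spot next to real high-fps footage.
    mid = (frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2
    return mid.astype(frame_a.dtype)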

Now, with DAIN technology, we have no choice but to revisit those cases and analyze what it is capable of.  We looked at a few dozen examples, and it is clear the technology has progressed so much that stop motion animation, 2D cel-based cartoon animation, and even 3D animated sequences rendered at 30p can easily be turned into higher-fps versions with impressive, and in some cases miraculous, results.

We would like you to first watch the video below to understand what a depth map is and how the software in DAIN can create frames from nothing that look just like real ones.  A depth map generates an approximated view of the world, a LiDAR-like representation, that lets the AI estimate where objects in a scene sit relative to the camera, near or far.

Depth-Aware Video Frame Interpolation by Wenbo Bao:

Even at 48fps from a 12fps source, it is clear the technology in DAIN can yield impressive results, even with heavy organic detail in the background, including foliage. The software does an admirable job of estimating the relative position of objects in the video scenes.
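If you want to see what a depth map of your own footage looks like, monocular depth models are easy to try these days. A hedged sketch using Intel ISL's MiDaS small model via torch.hub (our choice for illustration only; DAIN trains its own depth network, so this merely mirrors the idea):

import cv2
import torch

# Load the small MiDaS monocular depth model and its matching transform.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    depth = midas(transform(img)).squeeze().cpu().numpy()

# MiDaS outputs relative inverse depth at the model's working
# resolution: larger values are closer. Normalize for viewing.
span = depth.max() - depth.min()
depth_vis = (255 * (depth - depth.min()) / (span + 1e-6)).astype("uint8")
cv2.imwrite("depth.png", depth_vis)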

Frame Interpolation App Using AI: DAIN APP by TALBO's Laboratory Ch. (video in Japanese): → Continue Reading Full Post ←

Top 10 Slow Motion Video Editing Software List by Filmora!

Top 10 Slow Motion Video Editing Software

The video editing software maker Filmora has compiled a list of the top ten slow motion video editing software packages for personal computers. While this is clearly, in some ways, an article meant to attract an audience to their product, you can also learn about products you didn't know existed. Even Filmora itself was not really on our radar, but it is now, especially at the affordable price of $59.99 for a lifetime license with no subscription. We are not advertising any product in particular; you can see the Top Ten Slow Mo Software List here!

Our software of choice has always been Adobe After Effects with the Timewarp feature, but we can understand why the subscription model for Adobe CC products can be a little too much to ask of many. The list also includes completely free products like AviSynth, which is very powerful but not easy to use, and other optical flow software that can slow regular and higher-frame-rate video to a crawl. However, we consider interpolation fake slow motion, as we noted in our article here!  Nvidia's new machine learning algorithm is very impressive and shows the way forward for converting regular video to higher frame rates with surprisingly good results. Tell us what you think - HSC

NVIDIA Slow Motion Interpolation With AI Deep Learning Tech!

NVIDIA Slow Motion Interpolation

NVIDIA has been hard at work on the problem of interpolating video shot at lower frame rates up to higher ones.  We have had this tech since the late 1990s with the advent of Twixtor, and it has been refined over the decades in tools like Twixtor Pro and Adobe's Optical Flow in After Effects. You are still not getting real temporal detail, since the new frames are created by extrapolating velocity and direction vectors plus pixel values between frames.
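For reference, the classical approach those older tools refined can be sketched in a few lines: estimate dense motion vectors, warp both neighboring frames halfway along them, and blend. A simplified illustration using OpenCV's Farneback optical flow (our own sketch, not Twixtor's or Adobe's actual implementation):

import cv2
import numpy as np

def flow_midframe(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    # Estimate per-pixel motion vectors from frame_a to frame_b.
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_a.shape
    grid = np.dstack(np.meshgrid(np.arange(w), np.arange(h))).astype(np.float32)
    # A pixel at position p in the midframe comes from roughly
    # p - flow/2 in frame_a and p + flow/2 in frame_b.
    warp_a = cv2.remap(frame_a, grid - 0.5 * flow, None, cv2.INTER_LINEAR)
    warp_b = cv2.remap(frame_b, grid + 0.5 * flow, None, cv2.INTER_LINEAR)
    return cv2.addWeighted(warp_a, 0.5, warp_b, 0.5, 0)

Occlusions are exactly where this breaks down: a pixel hidden in one frame has no valid source to pull from, which is where the smearing in fluids and fabric comes from.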

We explored this technique in our post on interpolation here, and why it is no substitute for a real slow motion camera.  NVIDIA's new method uses machine learning trained on 11,000 videos to arrive at a more convincing result. Considering the relatively small sample size, we can imagine a future where hundreds of thousands or millions of footage samples are used to generate near-flawless interpolation. The technique takes serious computation and large data sets, so as of now it is not really ready for the mass market, but that could change with the cloud very soon.

NVIDIA Slow Motion Is New But Still Flawed:

As you can see in the sample video below, the artifacts produced by interpolation are very evident, and more so when fluid or fabric motion is introduced. The human eye can hide some of these in real-time playback due to persistence of vision and the brain's image processing, but they are still quite apparent if you look with a critical eye.

Transforming Standard Video Into Slow Motion with AI by NVIDIA:

There is no question this might be the best-looking interpolation method we have seen to date, but it is still not generating new information with any scientific value. In other words, you can't create something from nothing, "nothing" here being the estimated values between two frames separated in time.  It is a marvel of computation and could really help in getting many more frames where detail is rich and artifacts are suppressed, but there is no real image captured from a live event.  If you record an explosion or a fluid with this technique, you will get what the computer estimates should be there, not what actually happened; any rogue debris or physically distinct motion phenomena will simply not be there.  That makes the technique useless for education and scientific research.

That said, the technique can make slow-mo videos shot on your phone a little more interesting, even when shot at 30 or 60fps.  As with any interpolation technique, you get better results the more frames you give the system.  If you shoot at 1,000fps with a shutter of 1/4000, for example, you will be able to interpolate to an effective 3,000-4,000fps without much artifacting.  And if you shoot at 4,000fps, as an edgertronic SC2+ can, you could interpolate to 16,000fps with few artifacts.
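The back-of-the-envelope math for those examples (our own worked numbers; 30p playback is an assumption):

def interp_summary(capture_fps: float, factor: int, playback_fps: float = 30.0):
    # Effective capture rate after interpolation, and the resulting
    # slowdown versus real time when played back at playback_fps.
    effective = capture_fps * factor
    slowdown = effective / playback_fps
    print(f"{capture_fps:.0f}fps x{factor} -> ~{effective:.0f}fps effective, "
          f"{slowdown:.0f}x slower than real time at {playback_fps:.0f}p")

interp_summary(1000, 4)  # phone/high-speed example from the text
interp_summary(4000, 4)  # edgertronic SC2+ example from the text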

We can certainly see a future in which you upload your lower-frame-rate footage to the cloud and choose the frame rate you want within a reasonable range.   Cloud AI with machine learning algorithms will get better as more and more videos are added to the collection: it is possible to train with millions of samples instead of the 11,000 videos the NVIDIA researchers used in the lab, and the interpolation should get better and better as the system learns from the added content.

It may also become possible to create footage from scratch out of pieces of video, much like what Google did with machine learning on images to create new art.  What an interesting future it will be.

We are all for better interpolation, but do not believe the hype when you are told you may never need a slow-motion camera again. Temporal detail in nature and lab recording cannot be interpolated to generate real information. So you had better keep using your slow motion camera, and expect to get a more capable one as technology improves and prices continue to fall. -HSC

Nvidia Slow Motion Interpolation Press Release on the Technology Below:

Link to the article here: https://news.developer.nvidia.com/transforming-standard-video-into-slow-motion-with-ai/?ncid=–43539

Transforming Standard Video Into Slow Motion with AI

June 18, 2018

Researchers from NVIDIA developed a deep learning-based system that can produce high-quality slow-motion videos from a 30-frame-per-second video, outperforming various state-of-the-art methods that aim to do the same.  The researchers will present their work at the annual Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, Utah this week.

“There are many memorable moments in your life that you might want to record with a camera in slow-motion because they are hard to see clearly with your eyes: the first time a baby walks, a difficult skateboard trick, a dog catching a ball,” the researchers wrote in the research paper.  “While it is possible to take 240-frame-per-second videos with a cell phone, recording everything at high frame rates is impractical, as it requires large memories and is power-intensive for mobile devices,” the team explained.

With this new research, users can slow down their recordings after taking them.

Using NVIDIA Tesla V100 GPUs and the cuDNN-accelerated PyTorch deep learning framework, the team trained their system on over 11,000 videos of everyday and sports activities shot at 240 frames per second. Once trained, the convolutional neural network predicted the extra frames.

The team used a separate dataset to validate the accuracy of their system.

The result can make videos shot at a lower frame rate look more fluid and less blurry.

“Our method can generate multiple intermediate frames that are spatially and temporally coherent,” the researchers said. “Our multi-frame approach consistently outperforms state-of-the-art single frame methods.”

To help demonstrate the research, the team took a series of clips from The Slow Mo Guys, a popular slow-motion based science and technology entertainment YouTube series created by Gavin Free, starring himself and his friend Daniel Gruchy, and made their videos even slower.

The method can take everyday videos of life’s most precious moments and slow them down to look like your favorite cinematic slow-motion scenes, adding suspense, emphasis, and anticipation.

The researchers, who include Huaizu Jiang, Deqing Sun, Varun Jampani, Ming-Hsuan Yang, Erik Learned-Miller, and Jan Kautz, will present on Thursday, June 21 from 2:50 to 4:30 PM at CVPR.