2 / 60 |
How to Get From 30 to 60 Frames Per Second in Video Games for "Free". Presented as part of the "Split Second Screen Space" session on Monday, 26 July | 2:00 PM - 3:30 PM | ROOM 408 AB. The talk is available on the DVD set as well as online at http://siggraphencore.myshopify.com/products/2010-tl037 Additional material is available at http://and.intercon.ru/releases/talks/rtfrucvg
|
5 / 60 |
"How do we come up with new ideas?" is a very interesting topic. It is very different in every case but it probably is the most interesting part of it, which is usually not discussed. It started in 2008 on our way back home from SIGGRAPH 2008. Cory, Cedrick and I, somehow (I would really like to remember how), got into this discussion about software video players on PC. And how they can do the same thing that those 120Hz HDTV do, making regular movies look smoother, reconstructing natural motion of objects. I thought that was great, because you don't actually need a TV to play with it. I knew that there were WinDVD with its Trimension and AVISynth with MSU FRC filter that could be applied in FFDShow codec. But it took that discussion to start thinking about it from a different perspective. http://compression.ru/video/frame_rate_conversion/index_en_msu.html So as soon as I got back home, I started to play with it and soon after that realized that there are a lot of issues. Mostly the artifacts of a different kind, that appear in more or less complex scenes, as well as performance issues (it is really slow when done properly). And to better understand the problem, I made a very quick and simple prototype to play with. |
7 / 60 |
The second step is the interpolation itself. At this point the inner frames (A1, A2, A3, etc.) are constructed from the outer ones (A, B) based on the motion vector field, the result of motion estimation. One example of how this can be done for video sequences is the MSU Frame Rate Conversion Method by Dr. Dmitry Vatolin and Sergey Grishin. http://compression.ru/video/frame_rate_conversion/index_en.html Obviously, there are many different ways of doing it, but the point is that it gets really complicated once you have to adjust for scaling, rotation, transparency and dynamic lighting effects of any kind. In fact, most of the complexity of interpolation comes from reconstructing the original data and conditions. |
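As a rough illustration of the idea (a simplified sketch, not the MSU method itself), an inner frame at time t between two outer frames can be built by sampling both frames along the motion vectors and blending. The function name `interpolate_frame` and the (dy, dx) vector convention are assumptions for this example:

```python
import numpy as np

def interpolate_frame(frame_a, frame_b, motion, t):
    """Construct an inner frame at time t in (0, 1) between outer frames
    A and B, given a per-pixel motion vector field (in pixels, A -> B).

    frame_a, frame_b : (H, W) or (H, W, C) float arrays
    motion           : (H, W, 2) array of (dy, dx) vectors
    """
    h, w = frame_a.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]

    # Follow each motion vector backward into A and forward into B
    # (nearest-neighbor sampling, clamped at the image border).
    ya = np.clip(np.round(ys - t * motion[..., 0]).astype(int), 0, h - 1)
    xa = np.clip(np.round(xs - t * motion[..., 1]).astype(int), 0, w - 1)
    yb = np.clip(np.round(ys + (1 - t) * motion[..., 0]).astype(int), 0, h - 1)
    xb = np.clip(np.round(xs + (1 - t) * motion[..., 1]).astype(int), 0, w - 1)

    # Blend the two motion-compensated samples, weighted by distance in time.
    return (1 - t) * frame_a[ya, xa] + t * frame_b[yb, xb]
```

Even this toy version hints at the problems mentioned above: it assumes pure translation per pixel, so scaling, rotation, transparency and lighting changes all break the reconstruction.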
12 / 60 |
Perception-wise, it has to do with eye tracking of moving objects. Our visual system has a certain temporal response; in fact, it is continuous. Numerous experiments have shown no signs of any kind of frame-based processing, even though we can observe effects such as the "wagon-wheel effect" in real life. But those are mostly due to either the stroboscopic effect or the "motion during-effect", when a motion after-effect becomes superimposed on the real motion. Display devices, on the other hand, have very different kinds of temporal functions. The one on the slide shows an idealized LCD display. http://en.wikipedia.org/wiki/Comparison_of_display_technology Now, suppose we have a moving object that we track with our eyes. At some point the display goes on, projects the object onto our retina, and then goes off. Meanwhile, the eye keeps moving continuously toward the next, expected position of the object. http://www.poynton.com/PDFs/Motion_portrayal.pdf |
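A back-of-the-envelope consequence of this sample-and-hold behavior: while the display holds a fixed image, the tracking eye keeps moving, so the image is smeared across the retina by roughly the tracking speed times the hold time. The function name and the numbers below are illustrative assumptions, not figures from the talk:

```python
def retinal_smear_px(speed_px_per_s, hold_time_s):
    """Approximate width (in pixels) of the blur smeared across the
    retina when the eye tracks an object on an idealized sample-and-hold
    display: the image stays fixed for the hold time while the eye moves."""
    return speed_px_per_s * hold_time_s

# Illustrative numbers: a pan at 600 px/s held for a full 1/30 s frame
# smears about 20 px across the retina; halving the hold time (60 fps)
# halves the smear, which is one reason higher frame rates look sharper
# in motion.
```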
16 / 60 |
So how do we get both the quality of 30 fps rendering and the fluid, natural motion of games running at 60 fps? |
17 / 60 |
18 / 60 |
26 / 60 |
27 / 60 |
35 / 60 |
36 / 60 |
40 / 60 |
Once the mask is generated, we leak small neighboring image patches into the masked area by duplicating the original layer and shifting the copies up, down, left and right. |
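A sketch of this step, assuming grayscale input and a boolean mask of the pixels to fill; the function name `leak_patches` and the edge-wrapping behavior of `np.roll` are simplifications for illustration:

```python
import numpy as np

def leak_patches(image, mask):
    """Fill masked pixels by leaking in their immediate neighbors:
    duplicate the layer, shift it one pixel up, down, left and right,
    and accumulate the shifted values conditionally inside the mask.

    image : (H, W) float array
    mask  : (H, W) bool array, True where pixels must be filled
    """
    accum = np.zeros_like(image)
    count = np.zeros(mask.shape, dtype=np.int32)

    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        shifted = np.roll(image, (dy, dx), axis=(0, 1))
        # Only take a shifted pixel if it came from outside the hole.
        valid = np.roll(~mask, (dy, dx), axis=(0, 1))
        take = mask & valid
        accum[take] += shifted[take]
        count[take] += 1

    out = image.copy()
    filled = count > 0
    out[filled] = accum[filled] / count[filled]
    return out
```

Running this repeatedly grows the filled region by one pixel per pass, which is the "leaking" behavior described above.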
41 / 60 |
42 / 60 |
43 / 60 |
44 / 60 |
In the case of the Photoshop implementation, those four layers should all be blended together with some additional transparency, as there is no way to do conditional accumulation. |
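The unconditional Photoshop-style variant amounts to plain alpha blending of the four shifted copies over the original. The `opacity` value and function name below are assumptions for illustration, not numbers from the talk:

```python
import numpy as np

def blend_shifted_layers(image, opacity=0.5):
    """Stack four one-pixel-shifted copies of the layer over the original
    with a fixed per-layer opacity (normal blend mode); unlike the
    conditional version, every pixel is affected, not just masked ones."""
    out = image.astype(float).copy()
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        shifted = np.roll(image, (dy, dx), axis=(0, 1))
        out = (1.0 - opacity) * out + opacity * shifted  # normal blend
    return out
```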
48 / 60 |
49 / 60 |
50 / 60 |
57 / 60 |
References
Simonyan, K., Grishin, S., and Vatolin, D. AviSynth MSU Frame Rate Conversion Filter.
Rosado, G. 2007. Motion Blur as a Post-Processing Effect. In GPU Gems 3, H. Nguyen, Ed., 575-582.
Castagno, R., Haavisto, P., and Ramponi, G. 1996. A Method for Motion Adaptive Frame Rate Up-Conversion. IEEE Transactions on Circuits and Systems for Video Technology 6, 5.
Pelagotti, A., and de Haan, G. 1999. High Quality Picture Rate Up-Conversion for Video on TV and PC. In Proc. Philips Conf. on Digital Signal Processing, paper 4.1, Veldhoven (NL).
Chen, Y.-K., Vetro, A., Sun, H., and Kung, S.-Y. 1998. Frame-Rate Up-Conversion Using Transmitted True Motion Vectors. In Proc. IEEE Second Workshop on Multimedia Signal Processing, 622-627.
Poynton, C. 1996. Motion Portrayal, Eye Tracking, and Emerging Display Technology. In Proc. Advanced Motion Imaging Conference, 192-202.
Mather, G. 2006. Introduction to Motion Perception. |
58 / 60 |
A few special words for SIGGRAPH 2010 attendees. |
60 / 60 |
Alternatively, you can email me at andcoder@gmail.com |