
ImFace++: A Sophisticated Nonlinear 3D Morphable Face Model with Implicit Neural Representations (2312.04028v3)

Published 7 Dec 2023 in cs.CV

Abstract: Accurate representations of 3D faces are of paramount importance in various computer vision and graphics applications. However, the challenges persist due to the limitations imposed by data discretization and model linearity, which hinder the precise capture of identity and expression clues in current studies. This paper presents a novel 3D morphable face model, named ImFace++, to learn a sophisticated and continuous space with implicit neural representations. ImFace++ first constructs two explicitly disentangled deformation fields to model complex shapes associated with identities and expressions, respectively, which simultaneously facilitate automatic learning of point-to-point correspondences across diverse facial shapes. To capture more sophisticated facial details, a refinement displacement field within the template space is further incorporated, enabling fine-grained learning of individual-specific facial details. Furthermore, a Neural Blend-Field is designed to reinforce the representation capabilities through adaptive blending of an array of local fields. In addition, we devise an improved learning strategy for ImFace++ that extends expression embeddings, allowing for a broader range of expression variations. Comprehensive qualitative and quantitative evaluation demonstrates that ImFace++ significantly advances the state-of-the-art in terms of both face reconstruction fidelity and correspondence accuracy.
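The abstract describes a layered decomposition: observed points are warped by an expression deformation field, then by an identity deformation field, into a shared template space where a canonical SDF plus a residual displacement field produces the final signed distance. The following is a minimal PyTorch sketch of that idea only, not the authors' implementation: all module names (ImFaceLikeModel, MLP), latent dimensions, and network sizes are illustrative assumptions, and the Neural Blend-Field (which adaptively blends an array of local fields) and the improved expression-embedding strategy are omitted for brevity.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small coordinate MLP used by every field below (hypothetical sizes)."""
    def __init__(self, in_dim, out_dim, hidden=128, layers=4):
        super().__init__()
        net, d = [], in_dim
        for _ in range(layers):
            net += [nn.Linear(d, hidden), nn.Softplus(beta=100)]
            d = hidden
        net += [nn.Linear(d, out_dim)]
        self.net = nn.Sequential(*net)

    def forward(self, x):
        return self.net(x)

class ImFaceLikeModel(nn.Module):
    """Sketch of the ImFace++-style decomposition: an expression deformation
    field, an identity deformation field, a shared template SDF, and a
    residual displacement field applied in template space."""
    def __init__(self, id_dim=64, exp_dim=64):
        super().__init__()
        self.exp_deform = MLP(3 + exp_dim, 3)   # expression-dependent warp
        self.id_deform = MLP(3 + id_dim, 3)     # identity-dependent warp
        self.template_sdf = MLP(3, 1)           # canonical face SDF
        self.displacement = MLP(3 + id_dim, 1)  # fine-grained detail residual

    def forward(self, x, z_id, z_exp):
        # Warp observed points to an expression-neutral pose, then to the template.
        x_neutral = x + self.exp_deform(torch.cat([x, z_exp], dim=-1))
        x_template = x_neutral + self.id_deform(torch.cat([x_neutral, z_id], dim=-1))
        # Base SDF plus an identity-specific displacement refined in template space.
        sdf = self.template_sdf(x_template)
        sdf = sdf + self.displacement(torch.cat([x_template, z_id], dim=-1))
        return sdf

# Example query: 2048 sample points sharing one identity/expression code pair.
pts = torch.randn(2048, 3)
z_id = torch.randn(64).expand(2048, 64)
z_exp = torch.randn(64).expand(2048, 64)
sdf_vals = ImFaceLikeModel()(pts, z_id, z_exp)  # -> shape (2048, 1)
```

Because both warps are separate fields conditioned on separate codes, identity and expression stay disentangled, and points that land at the same template location are implicitly put in correspondence across subjects, which is the mechanism the abstract credits for automatic point-to-point correspondence.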
