
Semi-Implicit Stochastic Recurrent Neural Networks (1910.12819v2)

Published 28 Oct 2019 in cs.LG and stat.ML

Abstract: Stochastic recurrent neural networks with latent random variables of complex dependency structures have been shown to be more successful at modeling sequential data than deterministic deep models. However, the majority of existing methods have limited expressive power due to the Gaussian assumption on latent variables. In this paper, we advocate learning implicit latent representations using semi-implicit variational inference to further increase model flexibility. The semi-implicit stochastic recurrent neural network (SIS-RNN) is developed to enrich inferred model posteriors that may have no analytic density function, as long as independent random samples can be generated via reparameterization. Extensive experiments on different tasks over real-world datasets show that SIS-RNN outperforms existing methods.
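The core construction the abstract refers to can be illustrated with a minimal sketch of semi-implicit sampling: a mixing variable is pushed through a neural network (so its density is implicit), and the latent variable is then drawn from a reparameterized Gaussian conditioned on it. The network weights, dimensions, and function names below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration; not taken from the paper.
noise_dim, mix_dim, latent_dim = 4, 8, 2

# Implicit mixing network: a random one-layer MLP mapping noise to the
# mixing variable psi. Its output distribution has no analytic density,
# but drawing independent samples from it is trivial.
W1 = rng.standard_normal((noise_dim, mix_dim))
W_mu = rng.standard_normal((mix_dim, latent_dim))
W_logsig = rng.standard_normal((mix_dim, latent_dim)) * 0.1

def sample_semi_implicit(n):
    """Draw n latent samples via the semi-implicit construction:
    psi ~ implicit (pushforward of noise through an MLP),
    z | psi ~ N(mu(psi), sigma(psi)^2), sampled by reparameterization."""
    eps0 = rng.standard_normal((n, noise_dim))
    psi = np.tanh(eps0 @ W1)           # implicit mixing variable
    mu = psi @ W_mu                    # conditional mean
    sigma = np.exp(psi @ W_logsig)     # conditional std (kept positive)
    eps = rng.standard_normal((n, latent_dim))
    return mu + sigma * eps            # reparameterization trick

z = sample_semi_implicit(1000)
print(z.shape)  # (1000, 2)
```

Because every step is a differentiable transform of standard noise, gradients flow through the sampler, which is what makes such implicit posteriors usable inside variational training of a recurrent model.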

Authors (6)
  1. Ehsan Hajiramezanali (27 papers)
  2. Arman Hasanzadeh (13 papers)
  3. Nick Duffield (32 papers)
  4. Krishna Narayanan (25 papers)
  5. Mingyuan Zhou (161 papers)
  6. Xiaoning Qian (69 papers)
Citations (5)
