An essay on the history of DSGE models (2409.00812v2)
Abstract: Dynamic Stochastic General Equilibrium (DSGE) models are nowadays a crucial quantitative tool for policy-makers. However, they did not emerge spontaneously. They are built upon previously established ideas in Economics and relatively recent advancements in Mathematics. This essay provides comprehensive coverage of their history, starting from the pioneering Neoclassical general equilibrium theories and eventually reaching the New Neoclassical Synthesis (NNS). In addition, the mathematical tools involved in formulating a DSGE model are thoroughly presented. I argue that this history has a mixed nature rather than an absolutist or relativist one, that the NNS may have emerged due to the complementary nature of New Classical and New Keynesian theories, and that the recent adoption and development of DSGE models by central banks from different countries has entailed a departure from the goal of building a universally valid theory that Economics has always had. The latter means that DSGE modeling has landed not without loss of generality.
Explain it Like I'm 14
What this paper is about
This paper tells the story of how modern “DSGE models” became a central tool in economics, especially for central banks. DSGE stands for Dynamic Stochastic General Equilibrium. In simpler words:
- Dynamic: things change over time.
- Stochastic: there is randomness and surprises.
- General Equilibrium: all markets in the economy are linked and balance together.
The paper walks through older ideas (like Classical, Neoclassical, Keynesian, and Monetarist economics), shows the math that made modern models possible, and explains two big schools—New Classical and New Keynesian—that eventually blended into what economists use today. It also argues that this history is “mixed”: no single camp fully won; instead, ideas were combined. Finally, it points out that central banks’ practical use of DSGE models means economics moved away from trying to have one universal theory for all places and times.
The paper’s main goals and questions
The author aims to:
- Explain where DSGE models came from—both the economic ideas and the newer math that made them possible.
- Show why the history is “mixed,” not all-or-nothing: different schools (New Classical and New Keynesian) each contributed, and their ideas were combined.
- Explore why the “New Neoclassical Synthesis” (NNS) emerged—a blending of microfoundations and rational expectations (New Classical) with sticky prices/wages and imperfect competition (New Keynesian).
- Discuss how adopting DSGE models in real central banks led economics to focus less on one-size-fits-all theory and more on models tailored to specific countries, which means losing some generality.
How the paper approaches the topic
This is an essay that combines history with clear introductions to key mathematical tools. The author:
- Reviews major economic theories:
- Neoclassical general equilibrium (all markets connected and in balance).
- Keynesian models (like IS-LM and the Phillips curve).
- Monetarism (money is central; long-run neutrality of money).
- Introduces math that lets economists model decisions over time and prove that equilibria exist:
- Fixed point theorems: Think of stirring a cup of coffee—no matter how you stir, there’s always at least one point that ends up where it started. These theorems guarantee “a point that maps to itself,” which helps prove an equilibrium exists even if we can’t write it down explicitly.
- Dynamic programming (Bellman equation): Like beating a video game level-by-level—solve today’s best move by thinking “best now + best later.” It breaks a big problem into smaller steps.
- Pontryagin’s Maximum Principle: A way to find the best path over time using “shadow prices” (costate variables) that value resources and constraints, similar to having an internal price tag that guides optimal choices.
- Shows how economists use Pareto optimality (no one can be made better off without making someone worse off) to characterize equilibria when closed-form solutions are too hard.
- Uses simple example models:
- Lucas’s “islands” model: People can’t perfectly tell real vs. nominal changes; only unexpected money changes move output temporarily.
- Fischer/Taylor sticky wage/price models: Even with rational expectations, if wages/prices change slowly, policy can stabilize the economy.
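The math tools listed above fit together in practice: iterating the Bellman equation is itself a fixed-point computation, and Banach's theorem (via Blackwell's conditions) guarantees it converges. A minimal sketch with illustrative numbers — not a model from the essay — using the textbook growth model with log utility and full depreciation, whose closed-form policy k' = αβ·k^α lets us check the answer:

```python
import numpy as np

# Bellman equation: V(k) = max_{k'} [ log(k^alpha - k') + beta * V(k') ].
# The Bellman operator is a contraction (Blackwell's sufficient conditions),
# so iterating it from any initial guess converges to the unique fixed point.
alpha, beta = 0.3, 0.9                         # illustrative parameter values
k_grid = np.linspace(0.01, 0.5, 150)           # capital grid

# Consumption for every (current k, chosen k') pair; infeasible pairs get -inf.
c = k_grid[:, None] ** alpha - k_grid[None, :]
payoff = np.where(c > 0, np.log(np.clip(c, 1e-12, None)), -np.inf)

V = np.zeros_like(k_grid)                      # arbitrary initial guess
for _ in range(1000):
    V_new = np.max(payoff + beta * V[None, :], axis=1)   # apply Bellman operator
    if np.max(np.abs(V_new - V)) < 1e-8:       # sup-norm convergence
        V = V_new
        break
    V = V_new

# Optimal next-period capital implied by the converged value function.
policy = k_grid[np.argmax(payoff + beta * V[None, :], axis=1)]
# Closed form for comparison: k' = alpha * beta * k^alpha.
```

The grid-based policy should sit within a grid step or two of the closed-form rule, which is the usual sanity check before moving to models without analytical solutions.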
The main ideas and why they matter
Here are the key points the paper highlights, explained in everyday language:
- General equilibrium thinking: Early economists like Walras and Pareto showed that you should look at all markets together, because changes in one market affect others. This set the stage for macro models that consider the whole economy, not just bits.
- Keynesians and Monetarists:
- Keynesians said wages and prices can be “sticky,” so policy can help fix downturns.
- Monetarists said money is the main driver of inflation and believed in a “natural rate” of unemployment that policy can’t beat in the long run.
- New math = new models:
- Fixed point theorems helped economists prove equilibrium exists.
- Dynamic programming and maximum principles let them model smart, forward-looking decisions—how people balance “now” vs. “later.”
- Euler equations (the rules that come out of these methods) tell you the right trade-off between today’s and tomorrow’s choices.
- New Classical revolution:
- Rational expectations: People use information well and adjust quickly; you can’t repeatedly trick them.
- Lucas critique: If policy changes, people’s behavior changes too. So models must be based on actual decision-making (microfoundations), not just past averages.
- Time inconsistency: Policymakers may promise one thing but later have a reason to change plans; rules can be more credible than discretion.
- New Keynesian response:
- Even with rational expectations, sticky wages and prices mean policy can stabilize output and inflation.
- This helped build models where both microfoundations and frictions coexist.
- The New Neoclassical Synthesis (NNS):
- A blend of New Classical and New Keynesian ideas.
- It’s the backbone of modern DSGE models used by central banks.
- Example: real business cycle (RBC) ideas about technology shocks plus sticky prices to explain both long-term trends and short-term fluctuations.
- Practical shift in economics:
- Central banks now build DSGE models tailored to their country’s data and institutions.
- That helps real-world decisions but moves away from the old goal of a universal theory that fits everywhere.
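The "now vs. later" trade-off that the Euler equation encodes can be verified numerically in a toy two-period consumption problem. All numbers here are illustrative, not taken from the paper:

```python
import numpy as np

# Two-period problem: max log(c1) + beta*log(c2), subject to c2 = R*(y - c1).
beta, R, y = 0.95, 1.04, 100.0     # illustrative discount factor, gross return, income

# Brute-force the optimum over a fine grid of first-period consumption.
c1_grid = np.linspace(0.01, y - 0.01, 200_000)
lifetime_utility = np.log(c1_grid) + beta * np.log(R * (y - c1_grid))
c1_star = c1_grid[np.argmax(lifetime_utility)]
c2_star = R * (y - c1_star)

# Euler equation for log utility: u'(c1) = beta * R * u'(c2),
# i.e. 1/c1 = beta*R/c2. Closed form here: c1* = y / (1 + beta).
```

At the numerical optimum the marginal utility of consuming today equals the discounted, return-adjusted marginal utility of consuming tomorrow — exactly the trade-off the bullet above describes.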
What the paper finds and why it’s important
The paper’s main conclusions:
- The history is “mixed”: Progress didn’t mean replacing one school entirely; instead, economists combined the best parts—solid microfoundations and expectations (New Classical) with realistic frictions (New Keynesian).
- The NNS likely emerged because the two schools complemented each other:
- Microfoundations and rational expectations make models credible and consistent.
- Sticky prices/wages and imperfect competition make models match real-world behavior and policy effects.
- Central banks adopting DSGE models changed the focus of economics:
- Models are now often country-specific, practical, and data-informed.
- This is useful for policy but reduces the discipline’s ambition to have a single, universally valid theory.
These points matter because they explain why today’s macroeconomics looks the way it does: a blend of theory and practicality, designed to help real institutions manage inflation, employment, and growth.
What this means going forward
- For policymaking: DSGE models give structured, transparent ways to forecast and test policies. They help central banks think about “what if” scenarios and make decisions consistently.
- For research: There’s a trade-off between generality and usefulness. Tailored models can better guide policy in specific places, but may not apply everywhere.
- For students and the public: Understanding that economics models are built from a mix of ideas—and that the math matters—can help you see why economists disagree and how they still make progress. Future improvements will likely come from adding richer features (like finance and inequality) while keeping the solid foundations of expectations and optimization.
In short, the paper shows how modern macro models grew from a century of ideas and math, why they look the way they do, and how their real-world use has shaped the goals of economics today.
Knowledge Gaps
Below is a concise list of the paper’s unresolved knowledge gaps, limitations, and open questions that future research could address:
- The claim that DSGE modeling has “landed not without loss of generality” is not operationalized; develop measurable criteria for “generality” (e.g., parameter invariance across institutional contexts, policy-robustness) and test them across central bank DSGE implementations.
- The asserted “complementary nature” of New Classical and New Keynesian ideas leading to the NNS is not empirically demonstrated; perform historical-network analyses, citation mapping, and textual content analysis to substantiate this convergence.
- The paper presents fixed-point and optimization tools but does not link them to actual DSGE existence/uniqueness results; derive explicit conditions for existence, uniqueness, and stability of rational expectations equilibria in canonical NK models (e.g., with Calvo pricing and Taylor rules).
- Indeterminacy and sunspot equilibria under nominal rigidities and policy rules are not discussed; characterize regions of determinacy/indeterminacy and the role of policy feedback coefficients in NK DSGE models.
- Estimation and identification are largely absent; compare calibration vs Bayesian estimation, assess prior sensitivity, and address identification of structural shocks (TFP, monetary policy, preference shocks) in practice.
- Empirical validation of rational expectations is not assessed; use survey expectations (e.g., SPF, Michigan, ECB SPF) and learning models to test whether agents’ expectations align with RE or exhibit adaptive/behavioral features.
- Representative-agent assumptions are taken for granted; integrate heterogeneity (TANK/HANK) to study distributional effects of policy and evaluate their quantitative relevance relative to RA models.
- Financial frictions and macro-finance linkages are not covered; incorporate mechanisms such as financial accelerator (BGG), collateral constraints (Kiyotaki–Moore), intermediary frictions, and test their necessity for crisis-era dynamics.
- Nonlinearity and occasionally binding constraints (e.g., ZLB, borrowing limits) are not addressed; evaluate solution methods (piecewise-linear, global projection, perturbation around risky steady states) and quantify approximation errors.
- Open-economy DSGE developments are missing; analyze international RBC/NK models, exchange-rate determination, terms-of-trade shocks, and spillovers, including identification of external shocks.
- The paper claims rapid idea evolution during crises without evidence; conduct event-study analyses linking macroeconomic crises to bursts in theoretical/model innovation and adoption.
- No systematic evaluation of central bank DSGE models’ real-time performance; compile a cross-country inventory, document design choices, and assess forecast accuracy, policy robustness, and model portability (to test “loss of generality”).
- Mapping between model variables and data is not discussed; formalize measurement choices (e.g., output vs GDP, model inflation vs CPI/PCE), detrending, stationarity, and measurement error implications for estimation.
- Solution techniques are only briefly referenced; provide a comparative assessment of perturbation vs projection vs value-function iteration, including error bounds, computational cost, and suitability for models with frictions.
- Model uncertainty and robustness are omitted; investigate misspecification diagnostics, model averaging/ensembles, and robust control applications in policy design under uncertainty.
- Labor market frictions are not integrated; embed search-and-matching, wage bargaining, and unemployment dynamics, and compare their empirical performance to frictionless labor supply models.
- Phillips curve microfoundations and empirical status are not evaluated; use micro price-setting evidence (e.g., Bils–Klenow, scanner data) to calibrate stickiness/indexation parameters and reassess the slope and stability of the NKPC.
- Fiscal DSGE modeling is limited; analyze debt dynamics with distortionary taxation, government spending multipliers across regimes, and OLG vs RA implications for Ricardian equivalence and fiscal transmission.
- Expectations formation beyond RE is not covered; test information frictions (rational inattention, sticky information), adaptive learning, and bounded rationality, and study policy implications under alternative expectations processes.
- The 2008 crisis critique is announced but not developed in the provided text; perform a post-mortem of DSGE failures and successes, identify missing mechanisms, and quantify improvements from added frictions.
- Mathematical derivations contain notation/parentheses errors in several equations (e.g., Pontryagin and dynamic programming sections); provide corrected, reproducible derivations and link them to canonical sources or appendices.
- A taxonomy of DSGE families is missing; systematize the evolution from RBC to NK, SW-type models, TANK/HANK, macro-finance DSGE, and identify core trade-offs in tractability vs realism.
- Monetary policy instrumentation is outdated in parts (money supply vs interest-rate rules); reconcile classical money-supply frameworks with modern interest-rate rule environments and shock identification in NK DSGE.
- Welfare and optimal policy analysis under commitment vs discretion is not pursued; integrate time inconsistency with nominal rigidities, study credible commitment devices, and evaluate welfare under realistic constraints (ZLB, learning).
Practical Applications
Immediate Applications
The following applications can be deployed now by leveraging the paper’s synthesis of DSGE history, its mathematical toolkit (fixed-point theorems, dynamic programming, Pontryagin), and its discussion of New Classical and New Keynesian (NNS) policy frameworks.
- Central bank policy analysis and design using NNS-style DSGE models (Policy; Finance)
- Use microfounded NK models with rational expectations and nominal rigidities to evaluate interest-rate rules, inflation-targeting, exchange-rate regimes, and fiscal-monetary coordination.
- Tools/products/workflows: “Policy Design Loop” (model specification → calibration/estimation → scenario simulation → rule comparison → policy communication); adoption of standard toolkits (Dynare, Dynare++, Julia/QuantEcon, Matlab/Python NK templates).
- Assumptions/dependencies: Rational expectations, representative agents, calibrated or estimated structural parameters, reliable macro data, linearization accuracy; institutional capacity for model maintenance.
- Rule-versus-discretion diagnostics and commitment devices (Policy)
- Apply time-inconsistency insights (Kydland–Prescott) to assess credibility of discretionary policies; compare simple rules (e.g., Taylor-type) vs optimal commitment; inform central bank mandate design and communication strategy.
- Tools/products/workflows: “Commitment Dashboard” reporting rule adherence, policy surprises, forward-guidance credibility metrics.
- Assumptions/dependencies: Expectations are sensitive to institutional design; legal/operational feasibility of commitment; clear loss function and constraints.
- Expectations-consistent forecasting and nowcasting (Policy; Finance)
- Combine rational-expectations structure with survey measures and market-implied expectations to produce forecasts consistent with the Lucas critique (avoid purely reduced-form parameters).
- Tools/products/workflows: State-space implementations of DSGE for real-time filtering; expectation-augmented Phillips curve modules.
- Assumptions/dependencies: Identification of shocks; data revisions; model misspecification risk.
- Macro stress testing and scenario analysis for banks and insurers (Finance)
- Use small/medium-scale DSGE (or NK with financial “wedges”) to translate macro shocks into paths for inflation, GDP, rates, and credit spreads; inform ICAAP/ORSA, ALM, and capital planning.
- Tools/products/workflows: DSGE scenario generator feeding balance-sheet models; “shock decomposition” reports.
- Assumptions/dependencies: Mapping from aggregate shocks to institution-specific exposures; treatment of financial frictions (often ad hoc in baseline DSGE).
- Wage and price contract design with staggered indexation (Labor; Industry)
- Apply Fischer/Taylor staggered contracts to set indexation clauses, contract horizons, and review frequencies that minimize real wage misalignment and output volatility.
- Tools/products/workflows: “Contract Staggering Simulator” to evaluate firm-level outcomes under alternative indexation rules.
- Assumptions/dependencies: Partial menu costs/stickiness are material; inflation expectations are measurable and contractible.
- Model governance and transparency in policy institutions (Policy; Governance)
- Operationalize the paper’s “loss of generality” insight by tailoring country-specific DSGE models and documenting assumptions, parameter priors, and validation tests.
- Tools/products/workflows: Model registry, reproducible pipelines, version control; model risk assessments acknowledging Lucas critique.
- Assumptions/dependencies: Staff training; reproducible infrastructure; openness to model plurality.
- Rapid prototyping of general equilibrium models via fixed-point and welfare theorems (Academia; Policy)
- Use fixed-point theorems to ensure equilibrium existence in new models; use Pareto-optimality characterizations to solve planner problems as a shortcut to competitive equilibria.
- Tools/products/workflows: “Planner-first” solution workflow; existence/uniqueness checklists.
- Assumptions/dependencies: Convexity/compactness conditions; accurate mapping between planner and decentralized allocations.
- Teaching and training modules linking history to modern methods (Education)
- Build curricula moving from Walrasian GE to Sidrauski, Lucas islands, and NK staggered-price models; integrate Bellman equations and Euler conditions.
- Tools/products/workflows: Reproducible Jupyter/Matlab notebooks, problem sets on dynamic programming and rational expectations, micro-to-macro derivations.
- Assumptions/dependencies: Access to software; mathematical prerequisites; curated datasets.
- Open-source starter kits for RBC/NK models (Software; Academia; Policy)
- Provide calibrated baseline models with estimation scripts (Bayesian or ML-based filtering), shock decomposition, and policy-rule toggles.
- Tools/products/workflows: “DSGE-in-a-Box” template repos; CI-tested solvers; documentation for central bank adoption.
- Assumptions/dependencies: License and data policies; community maintenance; cross-language bindings.
- Communication strategies to manage expectations (Policy; Public communication)
- Align policy guidance with model-implied dynamics: emphasize that surprise (unanticipated) components drive short-run real effects; explain the role of rules in anchoring expectations.
- Tools/products/workflows: Plain-language inflation reports, forward guidance trackers, surprise indices.
- Assumptions/dependencies: Public trust; clarity of targets; stable policy framework.
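As a concrete sketch of the kind of model behind the policy-rule workflows above, here is the textbook three-equation New Keynesian model (IS curve, NKPC, Taylor rule) solved by the method of undetermined coefficients. The model and all parameter values are standard textbook assumptions, not the paper's own specification:

```python
import numpy as np

# Textbook three-equation NK model with an AR(1) monetary policy shock v_t:
#   IS:     x_t  = E[x_{t+1}] - (1/sigma) * (i_t - E[pi_{t+1}])
#   NKPC:   pi_t = beta * E[pi_{t+1}] + kappa * x_t
#   Taylor: i_t  = phi_pi * pi_t + phi_x * x_t + v_t,   v_{t+1} = rho * v_t
# Guess x_t = a*v_t and pi_t = b*v_t, substitute, and solve the 2x2 system.
sigma, beta, kappa = 1.0, 0.99, 0.1            # illustrative calibration
phi_pi, phi_x, rho = 1.5, 0.125, 0.5           # phi_pi > 1: Taylor principle

A = np.array([
    [1 - rho + phi_x / sigma, (phi_pi - rho) / sigma],   # IS equation
    [-kappa,                  1 - beta * rho],            # NKPC
])
rhs = np.array([-1 / sigma, 0.0])
a, b = np.linalg.solve(A, rhs)

# Impulse responses to a one-unit contractionary policy shock:
horizon = np.arange(12)
x_irf = a * rho ** horizon     # output gap: falls on impact, decays back
pi_irf = b * rho ** horizon    # inflation: falls on impact, decays back
```

With an active rule (phi_pi > 1) the guessed solution is the unique stable one; a contractionary shock lowers both the output gap and inflation on impact, which is the qualitative pattern rule-comparison exercises check first.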
Long-Term Applications
These applications require further research, scaling, or development to address known limitations (e.g., post-2008 critiques, heterogeneity, financial frictions) and to extend the toolbox.
- Heterogeneous-agent DSGE (HANK) for distributional policy analysis (Policy; Academia)
- Incorporate household/firm heterogeneity, incomplete markets, and liquidity constraints to study inequality, redistribution, and heterogeneous responses to policy.
- Tools/products/workflows: Parallelized global solution methods; micro–macro data integration; distributional dashboards.
- Assumptions/dependencies: High computational costs; detailed microdata; robust calibration/estimation.
- Integrated macro–finance and macroprudential modules (Policy; Finance)
- Endogenize credit, leverage, default, and liquidity to evaluate capital buffers, borrower-based tools, and lender-of-last-resort policies.
- Tools/products/workflows: DSGE with financial intermediaries; stress testing that closes the GE feedback loop; macroprudential policy optimizers.
- Assumptions/dependencies: Identification of financial shocks; nonlinear crisis dynamics; data on balance-sheet networks.
- Nonlinear, global solution and verification pipelines (Software; Academia; Policy)
- Move beyond local linearization to solve models under large shocks and occasionally binding constraints (ZLB, credit limits), with formal verification of existence/uniqueness.
- Tools/products/workflows: HPC-enabled value function iteration; policy function iteration with monotone operators; code verification based on fixed-point theory.
- Assumptions/dependencies: Computational infrastructure; numerical stability; rigorous testing frameworks.
- Climate–macro DSGE and energy transition planning (Energy; Policy; Finance)
- Embed carbon taxes, abatement, and transition risks to study optimal climate policy and green investment.
- Tools/products/workflows: Climate-augmented NK/HANK modules; carbon price path optimizers; transition risk scenarios for financial stability.
- Assumptions/dependencies: Climate damage/abatement function uncertainty; sectoral detail; long-horizon discounting choices.
- Robust control and model-ensemble policy under uncertainty (Policy)
- Operationalize the paper’s pluralism insight by combining multiple structural models and applying robust control to account for misspecification and structural breaks.
- Tools/products/workflows: Model averaging, Bayesian model combination, worst-case policy evaluation.
- Assumptions/dependencies: Comparable priors across models; performance metrics; governance for model selection.
- Expectation formation beyond rational expectations (Policy; Academia)
- Incorporate learning, adaptive expectations, or information frictions to reconcile survey expectations and micro evidence with macro dynamics.
- Tools/products/workflows: DSGE-with-learning estimation; experimental/field evidence integration.
- Assumptions/dependencies: Stability under learning; data on information sets; identification.
- DSGE-as-a-service for SMEs and local governments (Software; Industry; Policy)
- Cloud-based, customizable macro scenario tools for budgeting, pricing, and investment planning, calibrated to local economies.
- Tools/products/workflows: API-accessible scenario engines; templates for small open economies; user-friendly interfaces.
- Assumptions/dependencies: Data pipelines; usability; trust in structural assumptions.
- Agent-based and DSGE hybrids (Academia; Policy)
- Combine micro-interaction realism with equilibrium discipline to capture network effects and emergent phenomena alongside policy counterfactuals.
- Tools/products/workflows: Co-simulation frameworks; networked financial frictions modules.
- Assumptions/dependencies: Calibration complexity; reconciliation of equilibrium concepts with emergent dynamics.
- Welfare analysis with broader criteria (Policy; Academia)
- Extend beyond Pareto efficiency to incorporate inequality aversion, volatility penalties, and multi-objective mandates (growth, stability, distribution).
- Tools/products/workflows: Social welfare function design; multi-criteria policy optimization; distribution-sensitive evaluation.
- Assumptions/dependencies: Normative choices; political legitimacy; transparent trade-offs.
- Dynamic contracts and algorithmic pricing with policy oversight (Industry; Competition policy)
- Use NK insights on price/wage stickiness to guide algorithmic pricing and digital market rules; evaluate dynamic contract designs under uncertainty.
- Tools/products/workflows: Simulators of dynamic pricing with menu costs; compliance toolkits for competition authorities.
- Assumptions/dependencies: Data on platform markets; regulation of pricing algorithms; enforceability of contract features.
- Capacity building and interdisciplinary labs (Education; Policy)
- Institutionalize training that connects economic history, methodology, and modern computation for sustainable model stewardship in public agencies.
- Tools/products/workflows: DSGE clinics, sandboxes, collaborative model repositories.
- Assumptions/dependencies: Funding; talent pipelines; open science culture.
Glossary
- Adaptive expectations: A rule-of-thumb method where agents update expected future values based on past discrepancies between actual and expected outcomes. "Now, consider adaptative expectations of the form , where ."
- AS-AD model: A macroeconomic framework combining aggregate supply and aggregate demand to analyze output and price levels. "the AS-AD model (Brownlee, 1950)"
- Banach's fixed point theorem: A result guaranteeing a unique fixed point for a contraction mapping on a complete metric space. "Banach's fixed point theorem: Let (X, d) be a complete metric space. For every contraction mapping T : X → X, there exists a unique x* ∈ X such that T(x*) = x*."
- Bellman equation: The fundamental recursive functional equation in dynamic programming representing the value of optimal decisions. "Replacing with results in the Bellman equation:"
- Blackwell's sufficient conditions: Conditions (monotonicity and discounting) that ensure an operator is a contraction mapping. "As the Bellman operator satisfies monotonicity and discounting, it meets Blackwell's (1965) sufficient conditions to be a contraction mapping."
- Brouwer's fixed point theorem: A theorem asserting that any continuous function from a compact convex set to itself has a fixed point. "Brouwer's fixed point theorem: Let S be a nonempty, compact, and convex subset of a finite-dimensional normed linear space. For every continuous function f : S → S, there exists x ∈ S such that f(x) = x."
- Classical Dichotomy: The separation between nominal and real variables, where money affects only nominal variables. "An important conclusion that can be found in Walras' work is that the Classical Dichotomy holds; i.e., money only affecting nominal variables and not real variables."
- Competitive equilibrium: An allocation and a price system where agents optimize given prices and all markets clear. "A competitive equilibrium is an allocation together with a price system such that is feasible..."
- Contraction mapping: A function that brings points closer together and guarantees a unique fixed point in a complete metric space. "For every contraction mapping "
- Correspondence: A set-valued mapping that assigns a set of outputs to each input. "For every closed, convex-valued correspondence φ : S ⇉ S, there exists x ∈ S such that x ∈ φ(x)."
- Dynamic programming: A method for solving sequential decision problems by breaking them into subproblems and using recursion. "Now, I will briefly present stochastic dynamic programming."
- DSGE models: Dynamic Stochastic General Equilibrium models used for quantitative policy analysis with microfoundations and random shocks. "Dynamic Stochastic General Equilibrium (DSGE) models are nowadays a crucial quantitative tool for policy-makers."
- Euler equation: A necessary optimality condition linking intertemporal choices of controls and states in dynamic optimization. "Combining these new expressions results in the Euler equation:"
- Hahn-Banach theorem: A fundamental result in functional analysis used to separate convex sets; applied to prove equilibrium properties. "requires an application of the Hahn-Banach theorem (another relatively new mathematical tool)."
- Hamiltonian: A function combining the objective and dynamics in optimal control, used to derive necessary conditions. "where is the Hamiltonian."
- Heckscher-Ohlin (H-O) model: A general equilibrium trade model relating factor endowments to patterns of production and trade. "the H-O model (Heckscher, 1919; Ohlin, 1924, 1933)"
- IS-LM model: A macro model where the interaction of investment-savings and liquidity-money markets determines income and interest. "He developed the first version of the IS-LM model, integrating (mostly) Keynesian and Neoclassical ideas..."
- Kakutani's fixed point theorem: A fixed-point result for set-valued functions (correspondences) on compact convex sets. "Kakutani's fixed point theorem: Let S be a nonempty, compact, and convex subset of a finite-dimensional normed linear space. For every closed, convex-valued correspondence φ : S ⇉ S, there exists x ∈ S such that x ∈ φ(x)."
- Karush-Kuhn-Tucker optimization: Optimality conditions for constrained optimization problems, especially nonlinear programming. "I will not show Karush-Kuhn-Tucker optimization (Karush, 1939; Kuhn & Tucker, 1951), as it is mostly used in finite time horizon models that will not be formally presented in this essay."
- Lagrange multiplier: A parameter associated with constraints in optimization, representing shadow prices. "Denote as the subjective rate of time preference of the economic unit, as the Lagrange multiplier attached to the flow constraint, and as the Lagrange multiplier attached to the stock constraint."
- Lucas critique: The argument that models without microfoundations give unreliable policy predictions because parameters change with policy. "The first one is Lucas' (1976) critique, which posits that non-microfounded models cannot accurately evaluate the outcomes of any policy..."
- Mundell-Fleming model: An open-economy macro model extending IS-LM to include exchange rates and capital mobility. "and the Mundell-Fleming model (Mundell, 1963; Fleming, 1962)."
- Natural interest rate: The equilibrium real rate of interest consistent with stable prices, introduced in Wicksellian analysis. "he partially relied on this theory to formulate the notion of the natural interest rate"
- New Neoclassical Synthesis (NNS): A framework combining New Classical microfoundations with New Keynesian nominal rigidities. "eventually reaching the New Neoclassical Synthesis (NNS)."
- Non-Accelerating Inflation (or natural) Rate of Unemployment: The unemployment rate consistent with stable inflation. "the Non-Accelerating Inflation (or natural) Rate of Unemployment (Friedman, 1968)."
- Ordinal utility functions: Utility representations where only preference ordering (not cardinal intensity) matters. "He developed and incorporated ordinal utility functions"
- Pareto optimal: An allocation where no individual can be made better off without making someone else worse off. "We say that an allocation is Pareto optimal if it is feasible and if there is no other feasible allocation such that and for some ."
- Paretian optimality: Pareto’s formulation of welfare optimality in general equilibrium contexts. "and formulated the notion of Paretian optimality so as to provide mathematical proofs of the welfare-maximizing properties that a competitive general equilibrium has."
- Phillips curve: A relation suggesting a trade-off between inflation and unemployment. "Another well-known (post) Keynesian formulation is the Phillips curve, which posits a long-term trade-off between inflation and unemployment."
- Pontryagin's Maximum Principle: Necessary conditions for optimal control in continuous-time dynamic systems. "Pontryagin's Maximum Principle (Pontryagin et al., 1962)"
- Ramsey-Cass-Koopmans model: A foundational intertemporal growth model of optimal saving and capital accumulation. "built upon the Ramsey-Cass-Koopmans model (Ramsey, 1928; Koopmans, 1965; Cass, 1965)"
- Rational expectations: The assumption that agents’ forecasts are consistent with the model and available information. "Rational expectations, albeit not their original development, has been their hallmark."
- Ricardian Equivalence: The proposition that debt-financed fiscal policy does not affect real economic activity under certain assumptions. "formalized the concept of Ricardian Equivalence under the assumption of ultrarationality"
- Schauder's fixed point theorem: A generalization of Brouwer’s theorem for mappings on convex subsets of normed spaces. "Schauder's fixed point theorem: Let S be a nonempty, compact, and convex subset of a normed linear space. For every continuous function f : S → S, there exists x ∈ S such that f(x) = x."
- Time inconsistency: The problem where optimal policy plans are not credible because incentives change after private agents act. "the concept of time inconsistency introduced by Kydland and Prescott (1977), which suggests that discretionary policy might not achieve its purposes..."
- Transversality condition: A boundary condition ensuring optimality in infinite-horizon optimization problems. "Regarding the last of the necessary conditions, it can be written way more elegantly in what is known as the transversality condition:"
- Ultrarationality: The assumption that households fully internalize the government’s intertemporal budget constraint. "under the assumption of ultrarationality"
- Value function: A function mapping states to the maximal attainable value in dynamic programming. "In order to find , it is necessary to define a value function "
- Walras' Law: The principle that if all but one market clear, the remaining market must also clear. "Another important result is Walras' Law, which states that, if n − 1 of the n markets are in equilibrium, the remaining one must also be."
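As a quick numerical companion to the fixed-point entries above, the classic illustration of Banach's theorem is that T(x) = cos(x) is a contraction on [0, 1] (its derivative is bounded by sin(1) < 1), so iterating it from any starting point converges to the unique fixed point (the choice of T and the tolerance are illustrative):

```python
import math

# Banach fixed-point iteration: repeatedly apply T(x) = cos(x).
# On [0, 1], |T'(x)| = |sin(x)| <= sin(1) < 1, so T is a contraction
# and the iterates converge to the unique x* with cos(x*) = x*
# (the "Dottie number", approximately 0.739085).
x = 0.0
for _ in range(200):
    x_next = math.cos(x)
    if abs(x_next - x) < 1e-12:    # successive iterates have converged
        x = x_next
        break
    x = x_next
```

The same logic underpins value function iteration: the Bellman operator plays the role of T, and Blackwell's conditions certify that it is a contraction.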