In many dynamic programming problems, the state variables are a mix: some exhibit stochastic cycles while others follow deterministic cycles. We derive a formula for the value function in infinite-horizon, stationary, Markovian decision problems by exploiting a special partitioned-circulant structure of the transition matrix $\Pi$. Our strategy for computing the left inverse of the matrix $I - \beta\Pi$, which is central to implementing Howard's policy iteration algorithm, yields significant improvements in computation time and major reductions in memory requirements. When the deterministic cycle is of order $n$, our cyclic inversion algorithm yields an $O(n^2)$ speed-up relative to the usual policy iteration algorithm.
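As a rough illustration of why circulant structure helps in the policy evaluation step, the sketch below contrasts a dense solve of $(I - \beta\Pi)v = u$ with an FFT-based solve that applies when $\Pi$ is a plain circulant matrix (a pure deterministic cycle). This is not the paper's partitioned-circulant algorithm; the function names, the toy cycle, and the use of NumPy/SciPy are illustrative assumptions.

```python
# Illustrative sketch only (assumed NumPy/SciPy toolchain); it treats the pure
# deterministic-cycle case, not the partitioned-circulant case of the paper.
import numpy as np
from scipy.linalg import circulant

def evaluate_policy_dense(Pi, u, beta):
    """Standard policy evaluation: solve (I - beta*Pi) v = u directly."""
    n = Pi.shape[0]
    return np.linalg.solve(np.eye(n) - beta * Pi, u)

def evaluate_policy_circulant(c, u, beta):
    """Policy evaluation when Pi = circulant(c), with first column c.

    (I - beta*Pi) is then also circulant, so the DFT diagonalises it and the
    linear system separates into n scalar equations, solvable in O(n log n).
    """
    c_hat = np.fft.fft(c)                     # eigenvalues of Pi
    v_hat = np.fft.fft(u) / (1.0 - beta * c_hat)
    return np.fft.ifft(v_hat).real

# Toy deterministic cycle of order n: state i moves to state (i+1) mod n.
n, beta = 8, 0.95
c = np.zeros(n)
c[-1] = 1.0                                   # first column of the forward-shift matrix
Pi = circulant(c)
u = np.random.default_rng(0).random(n)        # per-period payoffs under the current policy

v_dense = evaluate_policy_dense(Pi, u, beta)
v_fft = evaluate_policy_circulant(c, u, beta)
assert np.allclose(v_dense, v_fft)
```

In this toy case the dense solve costs $O(n^3)$ per evaluation while the FFT route costs $O(n \log n)$ and never forms $\Pi$ or its inverse explicitly, which is one way such structure can cut both computation time and memory.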