Algorithmic and Human Collusion
As self-learning pricing algorithms become popular, there are growing concerns among academics and regulators that algorithms could learn to collude tacitly on non-competitive prices and thereby harm competition. I study popular reinforcement learning algorithms and show that they develop collusive behavior in a simulated market environment. To derive a counterfactual that resembles traditional tacit collusion, I conduct market experiments with human participants in the same environment. Across different treatments, I vary the market size and the number of firms that use a self-learning pricing algorithm. I provide evidence that oligopoly markets can become more collusive if algorithms make pricing decisions instead of humans. In two-firm markets, market prices are weakly increasing in the number of algorithms in the market. In three-firm markets, algorithms weaken competition if most firms use an algorithm and human sellers are inexperienced.
Algorithmic Price Recommendations and Collusion (with Matthias Hunold)
This paper investigates the collusive and competitive effects of algorithmic price recommendations on market outcomes. We develop a theoretical framework and derive two algorithms that recommend collusive pricing strategies. Using a laboratory experiment, we find that sellers condition their prices on the recommendations of the algorithms. The algorithm with a soft punishment strategy lowers market prices and has a pro-competitive effect. The algorithm that recommends a subgame perfect equilibrium strategy increases the range of market outcomes, including more collusive ones. Variations in economic preferences lead to heterogeneous treatment effects and explain the results.
[Reach out for an early draft]
What Drives Demand for Loot Boxes? An Experimental Study (with Simon Cordes and Markus Dertwinkel-Kalt)
The market for video games is booming, with in-game purchases accounting for a substantial share of developers' revenues. Policymakers and the general public alike are concerned that so-called loot boxes (lotteries that offer random rewards to be used in-game) induce consumers to overspend on video games. We provide experimental evidence suggesting that common design features of loot boxes, such as opaque odds and positively selected feedback, indeed induce overspending by inflating beliefs about winning a prize. In combination, these features double the average willingness to pay for lotteries. Based on our findings, we argue for the need to regulate the design of loot boxes to protect consumers from overspending.
Volunteering at the Workplace under Incomplete Information: Team Size Does Not Matter (with Adrian Hillenbrand and Fabian Winter)
Volunteering is a widespread allocation mechanism in the workplace. It emerges naturally in settings such as software development and online knowledge platforms. Using a field experiment with more than 2,000 workers, we study the effect of team size on volunteering in an online labor market. In contrast to our theoretical predictions and previous research, we find no effect of team size on volunteering, although workers react to free-riding incentives. We replicate the results and provide further robustness checks. Eliciting workers' beliefs about their co-workers' volunteering reveals conditional volunteering as the primary driver of our results.
Memory Length in Algorithmic Collusion (with Bernhard Kasberger, Simon Martin and Hans-Theo Normann)
Work in progress