Current research

Algorithmic and Human Collusion

Revise and Resubmit at The Economic Journal

Abstract:

I study self-learning pricing algorithms and show that they are collusive in market simulations. To derive a counterfactual that resembles traditional tacit collusion, I conduct market experiments with humans in the same environment. Across different treatments, I vary the market size and the number of firms that use a pricing algorithm. I demonstrate that oligopoly markets can become more collusive if algorithms make pricing decisions instead of humans. In two-firm markets, prices are weakly increasing in the number of algorithms in the market. In three-firm markets, algorithms weaken competition if most firms use an algorithm and human sellers are inexperienced.
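
For readers unfamiliar with this class of algorithms, the sketch below illustrates the general idea of a self-learning (Q-learning) pricing agent in a simulated duopoly. The price grid, demand system, and learning parameters are illustrative assumptions, not the paper's exact specification.

    # Illustrative sketch only: two Q-learning sellers on a discrete price grid
    # in an assumed logit-demand duopoly (not the paper's exact environment).
    import numpy as np

    rng = np.random.default_rng(0)
    prices = np.linspace(1.0, 2.0, 15)            # discrete price grid
    n = len(prices)
    alpha, gamma, eps = 0.1, 0.95, 0.05           # learning rate, discount, exploration

    def profit(p_own, p_rival, cost=1.0, mu=0.25):
        # Simple logit demand shares as a stand-in market environment.
        u = np.exp((2.0 - np.array([p_own, p_rival])) / mu)
        share = u[0] / (1.0 + u.sum())
        return (p_own - cost) * share

    # State = both sellers' last prices; one Q-table per seller.
    Q = [np.zeros((n, n, n)) for _ in range(2)]
    state = (0, 0)

    for t in range(200_000):
        actions = []
        for i in range(2):
            if rng.random() < eps:
                actions.append(int(rng.integers(n)))
            else:
                actions.append(int(np.argmax(Q[i][state])))
        next_state = tuple(actions)
        for i in range(2):
            r = profit(prices[actions[i]], prices[actions[1 - i]])
            best_next = Q[i][next_state].max()
            Q[i][state][actions[i]] += alpha * (r + gamma * best_next - Q[i][state][actions[i]])
        state = next_state

    print("Long-run prices:", prices[state[0]], prices[state[1]])

In simulations of this kind, independently learning agents can settle on supracompetitive prices without any explicit communication, which is the sense in which the algorithms are called collusive.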

[PDF] [Video]

Algorithmic Price Recommendations and Collusion (with Matthias Hunold)
Revise and Resubmit at Experimental Economics

Abstract:

This paper investigates the collusive and competitive effects of algorithmic price recommendations on market outcomes. These recommendations are often non-binding and common in many markets, especially on online platforms. We develop a theoretical framework and derive two algorithms that recommend collusive pricing strategies. In a laboratory experiment, we find that sellers condition their prices on the algorithms' recommendations. The algorithm with a soft punishment strategy lowers market prices and has a pro-competitive effect. The algorithm that recommends a subgame-perfect equilibrium strategy increases the range of market outcomes, including more collusive ones. Variations in economic preferences lead to heterogeneous treatment effects and explain the results.

[PDF]

Volunteering at the Workplace under Incomplete Information: Team Size Does Not Matter (with Adrian Hillenbrand and Fabian Winter)
Revise and Resubmit at Experimental Economics

Abstract:

Volunteering is a widespread allocation mechanism in the workplace. It emerges naturally in software development or the generation of online knowledge platforms. Using a field experiment with more than 2000 workers, we study the effect of team size on volunteering in an online labor market. In contrast to our theoretical predictions and previous research, we find no effect of team size on volunteering, although workers react to free-riding incentives. We replicate the results and provide further robustness checks. Eliciting workers’ beliefs about their co-workers’ volunteering reveals conditional volunteering as the primary driver of our results. 

[PDF]

What Drives Demand for Loot Boxes? An Experimental Study (with Simon Cordes and Markus Dertwinkel-Kalt)
Revise and Resubmit at Journal of Economic Behavior & Organization

Abstract:

The market for video games is booming, with in-game purchases accounting for a substantial share of developers' revenues. Policymakers and the general public alike are concerned that so-called loot boxes - lotteries that offer random rewards to be used in-game - induce consumers to overspend on video games. We provide experimental evidence suggesting that common design features of loot boxes (such as opaque odds and positively selected feedback) indeed induce overspending by inflating beliefs about winning a prize. In combination, these features double the average willingness to pay for lotteries. Based on our findings, we argue for the need to regulate the design of loot boxes to protect consumers from overspending.

[PDF]

Human-Machine Interactions in Pricing: Evidence from Two Large-Scale Field Experiments (with Tobias Huelden, Vitalijs Jascisens, and Lars Roemheld)

Abstract:

While many companies use algorithms to optimize their pricing, additional human oversight and price interventions are widespread. Human intervention can correct algorithmic flaws and introduce private information into the pricing process, but it may also be based on less sophisticated pricing strategies or suffer from behavioral biases. Using fine-grained data from one of Europe's largest e-commerce companies, we examine the impact of human intervention on the company's commercial performance in two field experiments with around 700,000 products. We document sizeable heterogeneity, with evidence of interventions that harmed commercial performance as well as interventions that improved firm outcomes. We further show that the quality of human interventions can be predicted with algorithmic tools, which allows us to exploit expert knowledge while blocking inefficient interventions.
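
As a purely illustrative sketch of that last step, the example below scores proposed price interventions with a generic off-the-shelf classifier and blocks those predicted to be harmful. The file, feature names, label, and model choice are hypothetical and not taken from the paper.

    # Hypothetical sketch: predict whether a manual price intervention helps,
    # using a generic scikit-learn classifier on made-up features.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("interventions.csv")    # hypothetical dataset of manual interventions
    X = pd.get_dummies(df[["algo_price", "manual_price", "category", "stock_level"]],
                       columns=["category"])
    y = df["improved_performance"]           # hypothetical label: did the intervention help?

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier().fit(X_train, y_train)

    # Keep interventions the model scores as likely helpful, block the rest.
    keep = model.predict_proba(X_test)[:, 1] > 0.5
    print(f"Share of interventions kept: {keep.mean():.2f}")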

[PDF]

Algorithmic Cooperation (with Bernhard Kasberger, Simon Martin, and Hans-Theo Normann)

Abstract:

Algorithms play an increasingly important role in economic situations. These situations are often strategic, and the artificial intelligence involved may or may not be cooperative. We study the determinants and forms of algorithmic cooperation in the infinitely repeated prisoner's dilemma. We run a sequence of computational experiments, accompanied by additional repeated prisoner's dilemma games played by humans in the lab. We find that the same factors that increase human cooperation largely also determine the cooperation rates of algorithms. However, algorithms tend to play different strategies than humans. Algorithms cooperate less than humans when cooperation is very risky or not incentive-compatible.
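
For context, the incentive-compatibility condition referenced in the last sentence has a standard textbook form in the infinitely repeated prisoner's dilemma: with stage-game payoffs T > R > P > S and discount factor delta, grim-trigger strategies sustain mutual cooperation exactly when delta >= (T - R) / (T - P). A small check with made-up payoff numbers:

    # Standard repeated prisoner's dilemma benchmark: grim trigger sustains
    # mutual cooperation if and only if delta >= (T - R) / (T - P),
    # where T = temptation, R = reward, P = punishment payoff.
    def cooperation_sustainable(T, R, P, delta):
        return delta >= (T - R) / (T - P)

    # Hypothetical payoffs T=5, R=3, P=1 give a threshold of (5-3)/(5-1) = 0.5.
    for delta in (0.3, 0.5, 0.8):
        print(f"delta={delta}: cooperation sustainable -> {cooperation_sustainable(5, 3, 1, delta)}")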

[PDF]

Human-Machine Social Systems (with Milena Tsvetkova, Taha Yasseri, and Niccolo Pescetelli)

Abstract:

From fake accounts on social media and generative-AI bots such as ChatGPT to high-frequency trading algorithms on financial markets and self-driving vehicles on the streets, robots, bots, and algorithms are proliferating and permeating our communication channels, social interactions, economic transactions, and transportation arteries. Networks of multiple interdependent and interacting humans and autonomous machines constitute complex adaptive social systems where the collective outcomes cannot be simply deduced from either human or machine behavior alone. Under this paradigm, we review recent experimental, theoretical, and observational research from across a range of disciplines - robotics, human-computer interaction, web science, complexity science, computational social science, finance, economics, political science, social psychology, and sociology. We identify general dynamics and patterns in situations of competition, coordination, cooperation, contagion, and collective decision-making, and contextualize them in four prominent existing human-machine communities: high-frequency trading markets, the social media platform formerly known as Twitter, the open-collaboration encyclopedia Wikipedia, and the news aggregation and discussion community Reddit. We conclude with suggestions for the research, design, and governance of human-machine social systems, which are necessary to reduce misinformation, prevent financial crashes, improve road safety, overcome labor market disruptions, and enable a better human future.

[PDF]