With the advent of artificial agents, awareness is growing of the importance of trust for their successful and well-calibrated adoption. Prior theoretical work legitimates extending interpersonal trust relationships to technology. While anecdotal evidence on trust in artificial agents suggests aversive behavior, the scarce empirical research yields mixed results. This experimental study investigates how professional investors' trust differs depending on the source of investment advice. Results show that professionals trust artificial agents more than human counterparts, yet also reveal that this trust is not well calibrated toward either the computational system or the human advisor. Underscoring the study's normative contrast, trust in artificial agents is less wisely calibrated than trust in human advisors: investors should trust artificial agents over their human counterparts even more than they do. However, this pattern of trusting artificial agents to make decisions is inverted in professionals' confidence in their own decisions: confidence increases less after advice from artificial agents than after advice from human agents. By advancing the understanding of professionals' trust in artificial agents, the findings pave the way for future scholarly work on human–technology interaction and provide an important impetus for the practice and regulation of artificial agents in professional contexts.