Tobias Fahse
Email: tobias.fahse@unisg.ch
Publications (9)
- Overcoming the pitfalls and perils of algorithms: A classification of machine learning biases and mitigation methods (2022)
  Over the last decade, the importance of machine learning has increased dramatically in business and marketing. However, when machine learning is used for decision-making, bias rooted in unrepresentative datasets, inadequate models, weak algorithm designs, or human stereotypes can lead to low performance and unfair decisions, resulting in financial, social, and reputational losses. This paper offers a systematic, interdisciplinary literature review of machine learning biases as well as methods to avoid and mitigate them. We identified eight distinct machine learning biases, summarized these biases along the cross-industry standard process for data mining (CRISP-DM) to account for all phases of machine learning projects, and outlined twenty-four mitigation methods. We further contextualize these biases in a real-world case study and illustrate adequate mitigation strategies. These insights synthesize the literature on machine learning biases in a concise manner and point to the importance of human judgment for machine learning algorithms.
  Type: journal article. Journal: Journal of Business Research, Vol. 144.
- Exploring the Synergies in Human-AI Hybrids: A Longitudinal Analysis in Sales Forecasting (2023-08-10)
  Despite the promised potential of artificial intelligence (AI), insights into real-life human-AI hybrids and their dynamics remain obscure. Based on digital trace data of over 1.4 million forecasting decisions over a 69-month period, we study the implications of introducing an AI sales forecasting system in a bakery enterprise on decision-makers' overriding of the AI system and the resulting hybrid performance. Decision-makers quickly started to rely on AI forecasts, leading to lower forecast errors. Overall, human intervention deteriorated forecasting performance, as overriding resulted in greater forecast error. The results confirm the notion that AI systems outperform humans in forecasting tasks. However, they also indicate previously neglected, domain-specific implications: as the AI system aimed to reduce forecast error and thus overproduction, forecast numbers decreased over time, and thereby so did sales. We conclude that minimal forecast errors do not inevitably yield optimal business outcomes when detrimental human factors in decision-making are ignored.
  Type: conference paper
- Modern Centaurs: How Humans and AI Systems Interact in Sales Forecasting (2023-06-14)
  Recent achievements of artificial intelligence (AI) have caused organizations to increasingly bring AI capabilities into their core business processes. Such AI-supported business processes often result in human-AI hybrid systems, consisting of an AI system that performs most of the execution and humans who monitor this execution and occasionally provide additional inputs and overrides. Using sales data from Walmart, we conduct an online study to investigate whether human supervision can improve upon state-of-the-art AI forecasts. Furthermore, we analyze the perceptions and behavioral intentions of the human participants over time. We find that human interventions consistently lead to less accurate forecasts and that participants initially underestimate the AI system's accuracy and overestimate their own potential to improve upon AI forecasts. However, perceptions quickly shift over the course of the study, causing the participants to perceive the AI system increasingly favorably, which also leads to behavioral changes and better hybrid system performance.
  Type: conference paper. Journal: Proceedings of the European Conference on Information Systems.
- Explanation Interfaces for Sales Forecasting (Association for Information Systems, 2022-06-18), with Hruby, Richard
  Algorithmic forecasts outperform human forecasts in many tasks. State-of-the-art machine learning (ML) algorithms have even widened that gap. Since sales forecasting plays a key role in business profitability, ML-based sales forecasting can have significant advantages. However, individuals are reluctant to use algorithmic forecasts. To overcome this algorithm aversion, explainable AI (XAI), where an explanation interface (XI) provides model predictions and explanations to the user, can help. However, current XAI techniques are incomprehensible for laymen. Despite the economic relevance of sales forecasting, there is no significant research effort towards helping non-expert users make better decisions with ML forecasting systems by designing appropriate XIs. We address this research gap by designing a model-agnostic XI for laymen. We propose a design theory for XIs, instantiate our theory, and report initial formative evaluation results. The evaluation uses a real-world context: a medium-sized Swiss bakery chain provides past sales data and human forecasts.
  Type: conference paper
- Effectiveness of Example-Based Explanations to Improve Human Decision Quality in Machine Learning Forecasting Systems (2022-12-09)
  Algorithmic forecasts outperform human forecasts by 10% on average. State-of-the-art machine learning (ML) algorithms have further widened this gap. Sales forecasting is critical to a company's profitability because a variety of other activities rely on it. However, individuals are hesitant to use ML forecasts. To overcome this algorithm aversion, explainable artificial intelligence (XAI) can be a solution, making ML systems more comprehensible by providing explanations. However, current XAI techniques are incomprehensible for laymen, as they impose too much cognitive load. We contribute to this research gap by investigating the effectiveness, in terms of forecast accuracy, of two example-based explanation approaches. We conduct an online experiment based on a two-by-two between-subjects design with factual and counterfactual examples as experimental factors. A control group has access to ML predictions but not to explanations. While factual explanations significantly improved participants' decision quality, counterfactual explanations did not.
  Type: conference paper
- Managing Bias in Machine Learning Projects (2021-03-09), with Huber, Viktoria
  This paper introduces a framework for managing bias in machine learning (ML) projects. When ML capabilities are used for decision-making, they frequently affect the lives of many people. However, bias can lead to low model performance and misguided business decisions, resulting in fatal financial, social, and reputational impacts. The framework provides an overview of potential biases and corresponding mitigation methods for each phase of the well-established process model CRISP-DM. Eight distinct types of biases and 25 mitigation methods were identified through a literature review and allocated to the six phases of the reference model in a synthesized way. Furthermore, some biases are mitigated in phases other than those in which they occur. Our framework helps to create clarity in these multiple relationships, thus assisting project managers in avoiding biased ML outcomes.
  Type: conference paper
- Künstliche Intelligenz in Bäckereien: Weniger Foodwaste und auch weniger Kosten [Artificial intelligence in bakeries: less food waste and also lower costs] (2021-07-29), with Haake, Klaus, and Saxer, Stefan
  Type: discussion paper
- Digitalisierung und der Lockdown: Eine Situationsanalyse im Juni 2020 [Digitalization and the lockdown: a situation analysis in June 2020] (Institut für Wirtschaftsinformatik, Universität St.Gallen, 2020-06-25)
  Type: discussion paper