Benjamin van Giffen
Title: Prof. Dr.
Last name: van Giffen
First name: Benjamin
Email: benjamin.vangiffen@unisg.ch
Phone: +41 71 224 3635
Publications (now showing 1-10 of 14)
Digitale Plattformen in der Praxis – Einsatz- und Entwicklungsmodelle (Springer Fachmedien, 2022-08)
Authors: Holler, Manuel; Dremel, Christian; Galeno, Gianluca
Type: journal article
Journal: HMD Praxis der Wirtschaftsinformatik
Overcoming the pitfalls and perils of algorithms: A classification of machine learning biases and mitigation methods (2022)
Type: journal article
Journal: Journal of Business Research, Vol. 144
Abstract: Over the last decade, the importance of machine learning has increased dramatically in business and marketing. However, when machine learning is used for decision-making, bias rooted in unrepresentative datasets, inadequate models, weak algorithm designs, or human stereotypes can lead to low performance and unfair decisions, resulting in financial, social, and reputational losses. This paper offers a systematic, interdisciplinary literature review of machine learning biases as well as methods to avoid and mitigate these biases. We identified eight distinct machine learning biases, summarized them along the cross-industry standard process for data mining (CRISP-DM) to account for all phases of machine learning projects, and outlined twenty-four mitigation methods. We further contextualize these biases in a real-world case study and illustrate adequate mitigation strategies. These insights synthesize the literature on machine learning biases in a concise manner and point to the importance of human judgment for machine learning algorithms.
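The abstract describes allocating biases to the phases of the CRISP-DM process model. As a minimal sketch of what such a phase-to-bias mapping could look like as a lookup structure (the phase names are CRISP-DM's own; the specific bias/mitigation pairings below are illustrative assumptions, not the paper's actual taxonomy of eight biases and twenty-four methods):

```python
# Illustrative sketch only: the pairings below are assumptions for
# demonstration, not the paper's classification.
CRISP_DM_BIAS_MAP = {
    "business_understanding": {"bias": "social bias", "mitigation": "diverse stakeholder review"},
    "data_understanding": {"bias": "representation bias", "mitigation": "audit sample coverage"},
    "data_preparation": {"bias": "measurement bias", "mitigation": "validate proxy variables"},
    "modeling": {"bias": "algorithmic bias", "mitigation": "fairness-aware training"},
    "evaluation": {"bias": "evaluation bias", "mitigation": "disaggregated test metrics"},
    "deployment": {"bias": "feedback-loop bias", "mitigation": "monitor drift post-release"},
}

def checklist(phase: str) -> str:
    """Return a one-line bias checklist entry for a CRISP-DM phase."""
    entry = CRISP_DM_BIAS_MAP[phase]
    return f"{phase}: watch for {entry['bias']}; consider {entry['mitigation']}"
```

Such a structure makes the paper's core idea operational: every project phase gets an explicit bias to watch for and a candidate mitigation.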
Management von Künstlicher Intelligenz in Unternehmen
Type: journal article
Journal: HMD Praxis der Wirtschaftsinformatik, Vol. 57, Issue 1
Was Unternehmen von der Videospieleindustrie für die Gestaltung der Digital Customer Experience lernen können
Type: journal article
Journal: HMD Praxis der Wirtschaftsinformatik, Vol. 54, Issue 5
Explanation Interfaces for Sales Forecasting (Association for Information Systems, 2022-06-18)
Authors: Hruby, Richard
Type: conference paper
Abstract: Algorithmic forecasts outperform human forecasts in many tasks, and state-of-the-art machine learning (ML) algorithms have widened that gap even further. Since sales forecasting plays a key role in business profitability, ML-based sales forecasting can have significant advantages. However, individuals are resistant to using algorithmic forecasts. Explainable AI (XAI), where an explanation interface (XI) provides model predictions and explanations to the user, can help overcome this algorithm aversion. However, current XAI techniques are incomprehensible for laymen. Despite the economic relevance of sales forecasting, there is no significant research effort towards helping non-expert users make better decisions with ML forecasting systems by designing appropriate XIs. We contribute to this research gap by designing a model-agnostic XI for laymen. We propose a design theory for XIs, instantiate our theory, and report initial formative evaluation results. A real-world evaluation context is used: a medium-sized Swiss bakery chain provides past sales data and human forecasts.
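The interface described above is model-agnostic, i.e. it treats the forecasting model as a black box. One common model-agnostic technique (an illustrative choice here, not necessarily the one used in the paper) is permutation importance: shuffle one input feature at a time and measure how much the predictions move. The toy bakery sales model and its two features below are hypothetical:

```python
import random

def predict(features):
    # Hypothetical stand-in "black box" forecast: weekend uplift plus a
    # temperature effect on bakery sales. Not the paper's model.
    day_of_week, temperature = features
    weekend_bonus = 40.0 if day_of_week >= 5 else 0.0
    return 100.0 + weekend_bonus + 1.5 * temperature

def permutation_importance(model, rows, n_repeats=10, seed=0):
    """Model-agnostic importance: mean absolute prediction shift when one
    feature column is shuffled across the dataset."""
    rng = random.Random(seed)
    baseline = [model(r) for r in rows]
    importances = []
    for col in range(len(rows[0])):
        total = 0.0
        for _ in range(n_repeats):
            shuffled = [r[col] for r in rows]
            rng.shuffle(shuffled)
            permuted = [list(r) for r in rows]
            for i, v in enumerate(shuffled):
                permuted[i][col] = v
            total += sum(abs(model(p) - b) for p, b in zip(permuted, baseline)) / len(rows)
        importances.append(total / n_repeats)
    return importances
```

Because the technique only calls `model(...)`, the same explanation code works for any forecasting model, which is the property "model-agnostic" refers to.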
Effectiveness of Example-Based Explanations to Improve Human Decision Quality in Machine Learning Forecasting Systems (2022-12-09)
Type: conference paper
Abstract: Algorithmic forecasts outperform human forecasts by 10% on average, and state-of-the-art machine learning (ML) algorithms have further widened this gap. Sales forecasting is critical to a company's profitability because a variety of other activities rely on it. However, individuals are hesitant to use ML forecasts. To overcome this algorithm aversion, explainable artificial intelligence (XAI) can be a solution by making ML systems more comprehensible through explanations. However, current XAI techniques are incomprehensible for laymen, as they impose too much cognitive load. We contribute to this research gap by investigating the effectiveness, in terms of forecast accuracy, of two example-based explanation approaches. We conduct an online experiment based on a two-by-two between-subjects design with factual and counterfactual examples as experimental factors; a control group has access to ML predictions, but not to explanations. While factual explanations significantly improved participants' decision quality, counterfactual explanations did not.
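Factual and counterfactual example-based explanations can be sketched as nearest-neighbour lookups over historical cases: a factual explanation shows the most similar past case with the same predicted outcome, while a counterfactual one shows the most similar case with a different outcome. A minimal sketch under that assumption (the retrieval rule and Euclidean distance are illustrative, not the study's materials):

```python
def euclidean(a, b):
    # Illustrative distance metric over numeric feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def example_based_explanations(query_features, predicted_label, history):
    """history: list of (features, label) past cases.
    Returns (factual, counterfactual): the closest past case with the
    same label as the prediction, and the closest with a different one."""
    factual = min((h for h in history if h[1] == predicted_label),
                  key=lambda h: euclidean(h[0], query_features), default=None)
    counterfactual = min((h for h in history if h[1] != predicted_label),
                         key=lambda h: euclidean(h[0], query_features), default=None)
    return factual, counterfactual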
Empirically Exploring the Cause-Effect Relationships of AI Characteristics, Project Management Challenges, and Organizational Change (2021-02)
Type: conference paper
Abstract: Artificial Intelligence (AI) provides organizations with vast opportunities to deploy AI for competitive advantage, such as improving processes and creating new or enriched products and services. However, the failure rate of projects implementing AI in organizations is still high, which prevents organizations from fully seizing the potential that AI exhibits. To contribute to closing this gap, we seize the unique opportunity to gain insights from five organizational cases. In particular, we empirically investigate how the unique characteristics of AI – i.e., its experimental character, context sensitivity, black-box character, and learning requirements – induce challenges into project management, and how these challenges are addressed in organizational (socio-technical) contexts. This provides researchers with an empirical and conceptual foundation for investigating the cause-effect relationships between the characteristics of AI, project management, and organizational change. Practitioners can benchmark their own practices against these insights to increase the success rates of future AI implementations.
Managing Bias in Machine Learning Projects (2021-03-09)
Authors: Huber, Viktoria
Type: conference paper
Abstract: This paper introduces a framework for managing bias in machine learning (ML) projects. When ML capabilities are used for decision making, they frequently affect the lives of many people. However, bias can lead to low model performance and misguided business decisions, resulting in fatal financial, social, and reputational impacts. The framework provides an overview of potential biases and corresponding mitigation methods for each phase of the well-established process model CRISP-DM. Eight distinct types of biases and 25 mitigation methods were identified through a literature review and allocated to the six phases of the reference model in a synthesized way. Furthermore, some biases are mitigated in phases other than those in which they occur. Our framework helps to create clarity in these multiple relationships, thus assisting project managers in avoiding biased ML outcomes.
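Mitigation methods of the kind the framework allocates to CRISP-DM's data preparation phase often target representation bias, i.e. groups appearing in the training sample at different rates than in the population. One common remedy is reweighting (the concrete function below is an illustrative sketch, not taken from the paper): each record receives a weight so that the weighted group shares match known population shares.

```python
from collections import Counter

def reweight(samples, group_key, target_shares):
    """Representation-bias mitigation sketch: weight each record so that
    weighted group shares match target population shares.

    samples: list of records; group_key: record -> group label;
    target_shares: group label -> desired population share (sums to 1)."""
    counts = Counter(group_key(s) for s in samples)
    n = len(samples)
    # weight = target share / observed share of the record's group
    return [target_shares[group_key(s)] / (counts[group_key(s)] / n)
            for s in samples]
```

For example, if group "A" makes up 80% of the sample but 50% of the population, its records get weight 0.5/0.8 = 0.625, while the underrepresented group is weighted up, so downstream training sees balanced shares.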
Managing Artificial Intelligence, Workshop Paper Series (Institut für Wirtschaftsinformatik, Universität St.Gallen, 2021-09-27)
Authors: Koehler, Jana; Albayrak, Can Adam
Type: conference paper
Abstract: The algorithms of artificial intelligence are constantly being further developed and are used in more and more products and applications in business and society. Numerous prototypes are being developed to open up the use of artificial intelligence in a wide variety of application areas. Nevertheless, only a few prototypes succeed in making the leap into productive applications that create sustainable business benefits. This paper series shows that processes and structures are needed for the management of artificial intelligence to ensure the sustainable success of AI systems.
Towards Closing the Affordances Gap of Artificial Intelligence in Financial Service Organizations (2020-03)
Type: conference paper
Abstract: Artificial Intelligence (AI) is considered a disruptive force for existing companies and a promising avenue towards competitive advantage. A myriad of companies have started investing in AI initiatives. However, a significant number of AI projects are not successfully deployed. Taking a closer look at financial service organizations, we aim to contribute to closing the gap between understanding the potential of AI and proactively leveraging it. We draw on affordance theory and socio-technical systems (STS) theory to identify the socio-technical changes required to actualize affordances of AI in financial service organizations. We present preliminary findings from a multiple case study with five financial service organizations, based on rigorous interview coding, which yield first insights into AI affordances. Building on this, we will prioritize and structure future in-depth case studies to investigate how to orchestrate AI-induced changes in STS for actualizing AI affordances.