Benjamin van Giffen
Title: Prof. Dr.
Last name: van Giffen
First name: Benjamin
Email: benjamin.vangiffen@unisg.ch
Phone: +41 71 224 3635
Publications (showing 1 - 10 of 17)
- Publication: How Audi Scales Artificial Intelligence in Manufacturing (2023)
  Authors: Johannes Schniertshauer; Klemens Niehues; Jan vom Brocke
  For organizations to realize maximum value from artificial intelligence (AI), they need the capability to scale it and must consider scaling throughout all stages of an AI innovation project. But AI scaling presents significant challenges, especially for manufacturing companies. We describe how Audi, a leading automotive manufacturer, scaled its crack detection AI solution and unlocked long-term business value in manufacturing. Based on lessons learned at Audi, we provide recommendations and actions for CIOs and senior leaders who seek to capture value through scaling AI solutions.
  Type: journal article. Journal: MIS Quarterly Executive, Volume 23, Issue 2
- Publication: How Boards of Directors Govern Artificial Intelligence (2023)
  Authors: Helmuth Ludwig
  Artificial intelligence is top of mind, even for nontechnical business executives and board members. However, the majority of boards struggle to understand the implications of AI for their businesses and their role in governing it. We describe how some boards are addressing AI and identify four groups of board-level AI governance issues. We provide examples of effective board-level AI governance practices for each group of issues and make recommendations for establishing board-level AI governance.
  Type: journal article. Journal: MIS Quarterly Executive, Volume 22, Issue 4
- Publication: How Siemens Democratized Artificial Intelligence (2023)
  Authors: Helmuth Ludwig
  Many firms aspire to generate business value with artificial intelligence (AI) but struggle to move beyond pilots and prototypes. Based on an in-depth case study, we describe how Siemens has leveraged AI democratization to identify, realize and scale AI use cases by integrating the unique skills of domain experts, data scientists and IT professionals. From the lessons learned at Siemens, we provide recommendations for building this organizational capability and effectively addressing the challenges of adopting the latest AI technologies.
  Type: journal article. Journal: MIS Quarterly Executive, Volume 22, Issue 1
- Publication: Digitale Plattformen in der Praxis – Einsatz- und Entwicklungsmodelle (Digital Platforms in Practice – Deployment and Development Models) (Springer Fachmedien, 2022-08)
  Authors: Manuel Holler; Christian Dremel; Gianluca Galeno
  Type: journal article. Journal: HMD Praxis der Wirtschaftsinformatik
- Publication: Overcoming the pitfalls and perils of algorithms: A classification of machine learning biases and mitigation methods (2022)
  Over the last decade, the importance of machine learning has increased dramatically in business and marketing. However, when machine learning is used for decision-making, bias rooted in unrepresentative datasets, inadequate models, weak algorithm designs, or human stereotypes can lead to low performance and unfair decisions, resulting in financial, social, and reputational losses. This paper offers a systematic, interdisciplinary literature review of machine learning biases as well as methods to avoid and mitigate these biases. We identified eight distinct machine learning biases, summarized these biases along the cross-industry standard process for data mining to account for all phases of machine learning projects, and outlined twenty-four mitigation methods. We further contextualize these biases in a real-world case study and illustrate adequate mitigation strategies. These insights synthesize the literature on machine learning biases in a concise manner and point to the importance of human judgment for machine learning algorithms.
  Type: journal article. Journal: Journal of Business Research, Volume 144
- Publication. Type: journal article. Journal: HMD Praxis der Wirtschaftsinformatik, Volume 57, Issue 1
- Publication. Type: journal article. Journal: HMD Praxis der Wirtschaftsinformatik, Volume 54, Issue 5
- Publication: Explanation Interfaces for Sales Forecasting (Association for Information Systems, 2022-06-18)
  Authors: Richard Hruby
  Algorithmic forecasts outperform human forecasts in many tasks, and state-of-the-art machine learning (ML) algorithms have widened that gap even further. Since sales forecasting plays a key role in business profitability, ML-based sales forecasting can have significant advantages. However, individuals are resistant to using algorithmic forecasts. To overcome this algorithm aversion, explainable AI (XAI), where an explanation interface (XI) provides model predictions and explanations to the user, can help. However, current XAI techniques are incomprehensible for laymen. Despite the economic relevance of sales forecasting, there is no significant research effort towards helping non-expert users make better decisions with ML forecasting systems by designing appropriate XIs. We contribute to this research gap by designing a model-agnostic XI for laymen. We propose a design theory for XIs, instantiate our theory, and report initial formative evaluation results. A real-world evaluation context is used: a medium-sized Swiss bakery chain provides past sales data and human forecasts.
  Type: conference paper
- Publication: Effectiveness of Example-Based Explanations to Improve Human Decision Quality in Machine Learning Forecasting Systems (2022-12-09)
  Algorithmic forecasts outperform human forecasts by 10% on average, and state-of-the-art machine learning (ML) algorithms have further widened this discrepancy. Because a variety of other business activities rely on sales forecasts, sales forecasting is critical to a company's profitability. However, individuals are hesitant to use ML forecasts. To overcome this algorithm aversion, explainable artificial intelligence (XAI) can be a solution by making ML systems more comprehensible through explanations. However, current XAI techniques are incomprehensible for laymen, as they impose too much cognitive load. We contribute to this research gap by investigating the effectiveness, in terms of forecast accuracy, of two example-based explanation approaches. We conduct an online experiment based on a two-by-two between-subjects design with factual and counterfactual examples as experimental factors. A control group has access to ML predictions, but not to explanations. We report the results of this study: while factual explanations significantly improved participants' decision quality, counterfactual explanations did not.
  Type: conference paper
- Publication: Empirically Exploring the Cause-Effect Relationships of AI Characteristics, Project Management Challenges, and Organizational Change (2021-02)
  Artificial Intelligence (AI) provides organizations with vast opportunities to deploy AI for competitive advantage, such as improving processes and creating new or enriched products and services. However, the failure rate of projects implementing AI in organizations is still high, which prevents organizations from fully seizing the potential that AI exhibits. To contribute to closing this gap, we seize the unique opportunity to gain insights from five organizational cases. In particular, we empirically investigate how the unique characteristics of AI – i.e., its experimental character, context sensitivity, black-box character, and learning requirements – induce challenges into project management, and how these challenges are addressed in organizational (socio-technical) contexts. This provides researchers with an empirical and conceptual foundation for investigating the cause-effect relationships between the characteristics of AI, project management, and organizational change. Practitioners can benchmark their own practices against these insights to increase the success rates of future AI implementations.
  Type: conference paper