Niklas Leicht
Title: Dr.
Last Name: Leicht
First Name: Niklas
Email: niklas.leicht@unisg.ch
Publications (showing 1-10 of 15)
- Publication: Leveraging the Power of the Crowd for Software Testing
  The rapid development of new IT-enabled business models, a fast-growing hardware market, and that market's segmentation are making software testing more complex, so manual testing is becoming less viable, both economically and practically. One approach to overcoming these issues is crowdtesting: using crowdsourcing to perform testing. To profit from crowdtesting, companies can take three approaches: engage an external crowd of Internet users, engage their own employees, or engage their customers. Three case studies illustrate these approaches' differences, benefits, and challenges, as well as potential solutions to those challenges. The researchers' experiences with these approaches have led to guidelines that can help software development executives establish crowdtesting in their organizations.
  Scopus Citations: 34
- Publication: How to Design Intelligent Decision Support Systems for Crowdsourcing (2020)
  Co-author: Rhyn, Marcel
  Type: conference paper
- Publication: The Imprint of Design Science in Information Systems Research: An Empirical Analysis of the AIS Senior Scholars' Basket (2019-12)
  Design Science (DS) has become an established research paradigm in Information Systems (IS) research. However, publishing DS contributions in top IS journals is still considered a challenge, due to the rather strict guidelines that DS publications are expected to follow. Against this backdrop, we emphasize the myriad of possible configurations and empirically describe the status quo of DS publications in IS. Based on a Systematic Literature Review (SLR) and a conceptually derived analysis frame, we empirically analyze DS papers published in the AIS Senior Scholars' Basket. We thereby contribute conceptually and descriptively to the knowledge base of DS by providing empirically grounded insights that aid and guide the discussion on advancing the field. Overall, this lays the descriptive foundation for creating prescriptive knowledge on DS in IS by proposing and opening future research avenues.
  Type: conference paper
- Publication: The Imprint of Design Science in Information Systems Research: An Empirical Analysis of the AIS Senior Scholars' Basket (2019)
  Co-author: Engel, Christian
  Type: conference paper
- Publication: Given Enough Eyeballs, all Bugs are Shallow – A Literature Review for the Use of Crowdsourcing in Software Testing
  In recent years, the use of crowdsourcing has gained much attention in the domain of software engineering. One key aspect of software development is the testing of software. The literature suggests that crowdsourced software testing (CST) is a reliable and feasible tool for many kinds of testing. Research in CST has made great strides; however, it is mostly unstructured and not linked to traditional software testing practice and terminology. By conducting a literature review of traditional and crowdsourced software testing literature, this paper delivers two major contributions. First, it synthesizes the fields of crowdsourcing research and traditional software testing. Second, it gives a comprehensive overview of findings in CST research and classifies them into different software testing types.
  Type: conference paper
  Journal: Hawaii International Conference on System Sciences (HICSS 2018)
- Publication: How to Systematically Conduct Crowdsourced Software Testing? Insights from an Action Research Project
  Traditional testing approaches are becoming less feasible, both economically and practically, for several reasons: an increasingly dynamic environment, shorter product lifecycles, cost pressure, and a fast-growing, increasingly segmented hardware market. With the surge towards new modes of value creation, crowdsourced software testing (CST) seems to be a promising solution to these problems and has already been applied in various software testing contexts. However, the literature has so far mostly neglected the perspective of an organization intending to crowdsource tasks. In this study, we present an ongoing action research project with a consortium of six companies and a preliminary model for crowdsourced software testing in organizations. The model unfolds the necessary activities, process changes, and accompanying roles for crowdsourced software testing, enabling organizations to conduct such initiatives systematically, and it illustrates how test departments can use crowdsourcing as a new tool.
  Type: conference paper
- Publication: An Empirical Taxonomy of Crowdsourcing Intermediaries (Academy of Management, 2016)
  Co-authors: Durward, David; Zogaj, Shkodran
  Crowdsourcing has drawn much attention from researchers in the past, and there have already been attempts to conceptualize and classify the phenomenon. This existing work has its merits; however, it lacks an overarching perspective or meta-characteristic, is conceptual in nature, lacks theoretical grounding, and, most importantly, is not empirically validated. Hence, we develop an empirical taxonomy of crowdsourcing intermediaries embedded in the theory of two-sided markets. Collecting data from 100 intermediaries and performing cluster analysis, we identify five archetypes of crowdsourcing intermediaries: micro-tasking, knowledge work, design competition, testing and validation, and innovation. The taxonomy establishes a systematic and comprehensive overview of crowdsourcing intermediaries and thereby provides a better understanding of the basic types of crowdsourcing and its core functions. For practice, we provide crowdsourcers and crowdsourcees with decision support on which platforms to be active.
  Type: conference paper
- Publication: When is Crowdsourcing Advantageous? The Case of Crowdsourced Software Testing (Boğaziçi University, 2016)
  Co-authors: Knop, Nicolas; Müller-Bloch, Christoph
  Crowdsourcing describes a novel mode of value creation in which organizations broadcast tasks previously performed in-house to a large number of Internet users who perform them. Although the concept has matured and has proven to be an alternative way of problem-solving, an organizational cost-benefit perspective has largely been neglected by existing research. More specifically, it remains unclear when crowdsourcing is advantageous compared to alternative governance structures such as in-house production. Drawing on the crowdsourcing literature and transaction cost theory, we present two case studies from the domain of crowdsourced software testing. We systematically analyze two organizations that applied crowdtesting to test a mobile application. As both organizations tested the application via crowdtesting and via their traditional in-house testing, we are able to relate the effectiveness and costs of crowdtesting to those of in-house testing. We find that crowdtesting is comparable in terms of testing quality and costs, but offers large advantages in terms of speed, heterogeneity of testers, and user feedback as added value. We contribute to the crowdsourcing literature by providing first empirical evidence on the instances in which crowdsourcing is an advantageous way of problem-solving.
  Type: conference paper
- Publication: Can Laymen Outperform Experts? The Effects of User Expertise and Task Design in Crowdsourced Software Testing
  In recent years, crowdsourcing has increasingly gained attention as a powerful sourcing mechanism for problem-solving in organizations. Depending on the type of activity addressed by crowdsourcing, the complexity of the tasks and the role of the crowdworkers may differ substantially. It is crucial that tasks are designed and allocated according to the capabilities of the targeted crowds. In this paper, we outline our research in progress on the effects of task complexity and user expertise on performance in crowdsourced software testing. We conduct an experiment and gather empirical data from expert and novice crowds that perform software testing tasks of varying degrees of complexity. Our expected contribution is twofold. First, for crowdsourcing in general, we aim to provide valuable insights into framing and allocating tasks to crowds in ways that increase crowdworkers' performance. Second, we intend to improve the configuration of crowdsourced software testing initiatives; more precisely, the results are expected to show practitioners which types of testing tasks should be assigned to which group of dedicated crowdworkers. In this vein, we deliver valuable decision support for both crowdsourcers and intermediaries to enhance the performance of their crowdsourcing initiatives.
  Type: conference paper