
them before they are visible to the team. Two stakeholders wish to integrate the tool into crash tracking systems such as Crashlytics [89], which report the stack traces of crashes that occurred. Being able to observe crashing and non-crashing bugs would consequently provide a complete view of the current problems in one place. Five stakeholders additionally stated that export functionality is essential to back up the analysis results and to use them in other tools.
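To illustrate this wish, the following sketch shows how crashing bugs from a crash tracker and non-crashing bugs from written feedback could be kept in one unified view and exported as JSON. The Problem type and its fields are our own illustration, not part of Crashlytics or any tool from the studies.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class Problem:
    source: str                        # e.g., "crashlytics" or "app_review"
    title: str
    affected_version: str
    stack_trace: Optional[str] = None  # present for crashes, absent for user reports

def export_problems(problems: list, path: str) -> None:
    """Back up analysis results as JSON so other tools can import them."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump([asdict(p) for p in problems], f, indent=2)

problems = [
    Problem("crashlytics", "NullPointerException on login", "2.3.1",
            "at LoginActivity.onCreate(...)"),
    Problem("app_review", "Login button does nothing", "2.3.1"),
]
export_problems(problems, "problems_backup.json")
```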

Stakeholders see the opportunity to utilize automated approaches to reduce manual effort, remove irrelevant feedback, and bring multiple feedback channels together in a single place. A second challenge in analyzing user feedback is that it comes in different forms from different platforms. Depending on the company, stakeholders may have to handle more than three feedback channels, ranging from traditional channels like phone and email to modern channels such as Twitter and the app distribution platforms. This situation makes it cumbersome to aggregate the information, as there is no single place in which stakeholders can collect and visualize feedback. Although an automated analysis of explicit user feedback brings opportunities, stakeholders also see challenges with the suggested approaches.
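A minimal sketch of such an aggregation, assuming hypothetical raw formats for two channels: feedback from each channel is normalized into one shared schema so it can be collected and visualized in a single place.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UnifiedFeedback:
    channel: str        # e.g., "twitter", "app_store", "email"
    author: str
    text: str
    created_at: datetime

def from_tweet(tweet: dict) -> UnifiedFeedback:
    # Field names are assumptions about the channel's raw format.
    return UnifiedFeedback("twitter", tweet["user"], tweet["text"],
                           datetime.fromisoformat(tweet["created_at"]))

def from_app_review(review: dict) -> UnifiedFeedback:
    return UnifiedFeedback("app_store", review["reviewer"], review["body"],
                           datetime.fromisoformat(review["date"]))

inbox = [
    from_tweet({"user": "@jo", "text": "App crashes after update",
                "created_at": "2021-03-01T10:00:00"}),
    from_app_review({"reviewer": "Sam", "body": "Please add dark mode",
                     "date": "2021-03-02T09:30:00"}),
]
```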

In particular, we have to convince stakeholders that the results of the automated analysis are truthful and, to some extent, complete. As automated approaches are not perfect, they may miss valuable feedback that a manual analysis would discover. When considering the SURF approach by Di Sorbo et al. [59], which summarizes similar user feedback, most stakeholders state that they do not feel they miss out on information. Nevertheless, one stakeholder states that these summaries miss crucial information, which suggests that the possibility of seeing the original user feedback might be a good step towards increasing trust in the developed approaches. Our interview study supports that finding, as stakeholders state that they want to see all classification results and want a mechanism to correct them.
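The following sketch illustrates one way such a correction mechanism could look: the original text stays attached to every classification result, and a stakeholder override takes precedence over the predicted label. The type and field names are illustrative assumptions, not part of SURF or any study's tool.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClassifiedFeedback:
    original_text: str                      # always kept and shown on demand
    predicted_label: str                    # e.g., "bug_report", "feature_request"
    confidence: float
    corrected_label: Optional[str] = None   # set when a stakeholder overrides

    @property
    def label(self) -> str:
        return self.corrected_label or self.predicted_label

item = ClassifiedFeedback("App crashes on login", "feature_request", 0.55)
item.corrected_label = "bug_report"         # manual correction by a stakeholder
assert item.label == "bug_report"
```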

Stakeholders want to know the features users discuss and the features similar apps provide. Our interview study shows that stakeholders want to identify new features to foster innovation. In a tool, they want a summarization highlighting the features users address. Two publications from our related work review agree with this finding. They further state that stakeholders want to understand the users of particular features and that app store analyses should suggest a set of features from similar apps, either for improving an existing app or for developing a new one.
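As a simple illustration of such a summarization, the sketch below counts how often a fixed list of feature terms appears in feedback texts. The approaches in the literature use far more robust feature extraction; the hard-coded feature list here is an assumption made for brevity.

```python
from collections import Counter

# Illustrative feature list; real approaches extract features automatically.
FEATURES = ["login", "dark mode", "offline sync", "notifications"]

def feature_mentions(texts):
    counts = Counter()
    for text in texts:
        lowered = text.lower()
        for feature in FEATURES:
            if feature in lowered:
                counts[feature] += 1
    return counts

print(feature_mentions(["Please add dark mode",
                        "Login keeps failing",
                        "Dark mode would be great"]))
# Counter({'dark mode': 2, 'login': 1})
```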

Implicit user feedback

Only a few approaches analyze implicit user feedback for requirements engineering and have been evaluated with stakeholders. In this chapter, we introduced two papers that evaluated approaches for implicit feedback. Additionally, in our interview study, two stakeholders state that they wish to have access to that kind of data.

Stakeholders suggest adding implicit user feedback to explicit user feedback to help developers understand, e.g., which hardware and software versions are affected and which interactions in which context led to the reported issue. Our interview study, which initially targeted explicit user feedback, reveals the need for implicit user feedback. Though most stakeholders state that explicit user feedback is useful, some stakeholders declare the need to have context and usage data attached to the written feedback. In particular, problem reports rarely include technical information such as context data, as previous studies also show [157, 187]. However, without context data, bugs are challenging to reproduce. The study of Johanssen et al. [119] further highlights the need for combining both feedback types, as eight stakeholders stressed this fact. Therefore, they agree with the stakeholders from our interview study but further add more use cases, such as monitoring implicit user feedback to evaluate A/B testing and to collect general usage statistics. Another opportunity stakeholders state is that this kind of user feedback can help to improve specific parts of their software. For example, monitoring interaction data can help to learn how users use the app features, in particular, to identify the most frequently used features and the features users address most often in problem reports. Further, stakeholders can monitor interaction data to identify unintended navigation flows and optimize them accordingly. Finally, the workshop of Oriol et al. [184] shows that stakeholders can use implicit feedback to elicit new requirements or to refine existing ones.
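A sketch of what such a combined report could look like, with the implicit context and usage data attached to the written feedback. All field names are assumptions about what context data might contain, not a schema from any of the cited studies.

```python
from dataclasses import dataclass, field

@dataclass
class ContextData:
    os_version: str
    app_version: str
    device_model: str
    recent_interactions: list = field(default_factory=list)  # e.g., visited screens

@dataclass
class ProblemReport:
    text: str             # the explicit, written user feedback
    context: ContextData  # implicit data attached automatically on submission

report = ProblemReport(
    text="The app freezes when I open my profile",
    context=ContextData("Android 13", "2.3.1", "Pixel 6",
                        ["home", "settings", "profile"]),
)
```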

As even a few thousand users generate a million implicit user feedback events, stakeholders need automated aggregation and filtering of the feedback before they can analyze it. In the analyzed studies, no stakeholder states that they rely solely on implicit user feedback. Usually, stakeholders combine it with explicit user feedback, as they require an explanation from the user's perspective of, e.g., what happened during a non-crashing bug. Oriol et al. [184] highlight that one challenge of collecting and analyzing implicit user feedback is the amount of data generated. This amount depends highly on the software, on what stakeholders monitor, and on how many users use the tool over what time frame. Nonetheless, their study shows that even 5,000 logged-in users generated about a million click events in four months.

Though these numbers do not say much without knowing how long the users stayed on the website, they already suggest that getting meaningful information from implicit user feedback is only feasible if stakeholders can aggregate it.
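The following sketch illustrates the kind of aggregation meant here: rolling raw click events up into per-feature daily counts, so that a million events shrink to a small, reviewable table. The event format is an assumption.

```python
from collections import Counter

def aggregate_clicks(events):
    """Roll raw click events up into (feature, day) counts."""
    return Counter((e["feature"], e["day"]) for e in events)

events = [
    {"feature": "search", "day": "2021-03-01"},
    {"feature": "search", "day": "2021-03-01"},
    {"feature": "export", "day": "2021-03-01"},
]
print(aggregate_clicks(events))
# Counter({('search', '2021-03-01'): 2, ('export', '2021-03-01'): 1})
```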

Automated tool support

Five studies from the literature and our interview study included the evaluation of a tool with stakeholders. As the developed approaches differ, the tools differ, too. In the following, we compare the feedback from the stakeholders.

Most of the 90 studied stakeholders do not have dedicated tool support for feedback analytics yet, though most desire it. The majority of them state that “any” tool support would already be helpful. There are tools that stakeholders use, but they focus on collecting user feedback or documenting requirements in, e.g., issue trackers. Automatically analyzing user feedback is already a good step toward improving the current state in the industry, but scripts and raw data are often not sufficient to analyze user feedback effectively. Stakeholders, therefore, need automated approaches that visualize the results. Another opportunity is that with a web-based tool, stakeholders can stay informed about their product at any given time and place, which gives them the feeling of being informed and in control.

Stakeholders see the opportunity for tools to give a quick overview of the current situation in the user feedback, such as how many bugs users reported since the last release. This opportunity is particularly important for project managers, who are more interested in the overall performance of the app than in detailed feedback. These stakeholders further highlight that they need to feel in control and informed at all times. Therefore, they suggest having a dashboard available on the web, which they can access from different devices.
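As a sketch of such an overview metric, the snippet below counts the bug reports received since the last release. The report format and the release date are assumptions for illustration.

```python
from datetime import date

LAST_RELEASE = date(2021, 3, 1)   # assumed release date

def bugs_since_release(reports):
    return sum(1 for r in reports
               if r["label"] == "bug_report" and r["date"] >= LAST_RELEASE)

reports = [
    {"label": "bug_report", "date": date(2021, 3, 2)},
    {"label": "feature_request", "date": date(2021, 3, 3)},
    {"label": "bug_report", "date": date(2021, 2, 20)},
]
print(bugs_since_release(reports))  # 1
```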

To gain trust in the results of the developed approaches, stakeholders also want the possibility to see the original, unprocessed user feedback. Another advantage of exploring the original user feedback, especially for developers, is getting a complete overview of the particular issues or requests of their users. However, as the challenges of explicit and implicit user feedback suggest, user feedback is often noisy and uninformative. Therefore, stakeholders see the opportunity to reduce the effort of analyzing user feedback if the tool is capable of hiding irrelevant user feedback. In the study of Di Sorbo et al. [59], eight stakeholders state that using their tool to summarize user feedback saves them more than 50% of their time.
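The sketch below illustrates the idea of hiding irrelevant feedback while keeping it retrievable: a naive heuristic marks very short or purely emotional texts as uninformative, standing in for the classifiers discussed above.

```python
# Stand-in noise heuristic; real tools would use a trained classifier.
NOISE_WORDS = {"great", "bad", "love", "hate", "cool"}

def is_informative(text: str) -> bool:
    words = text.lower().split()
    return len(words) >= 4 and not set(words) <= NOISE_WORDS

feedback = ["love it", "The export button crashes the app on iOS 15", "cool"]
visible = [f for f in feedback if is_informative(f)]
hidden = [f for f in feedback if not is_informative(f)]  # still retrievable on demand
```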

Tools have to give stakeholders control over the analysis results. Generally speaking, stakeholders see any tool support as an improvement over the current situation. Apart from usability aspects specific to the tools developed in the summarized studies, most challenges lie in the underlying analysis of user feedback, which the tools need to mitigate. For instance, the developed approaches are not perfect and, as such, do make mistakes. The tool should have the capability to handle these mistakes by, e.g., giving the stakeholders the chance to correct machine learning-based results. Integrating human feedback in the tool increases trust and allows the underlying algorithms to improve over time.
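A sketch of such a correction loop, assuming a scikit-learn-style classifier interface: stakeholder overrides are stored and later added to the training data so the model can improve over time.

```python
# Stored stakeholder corrections: (feedback_text, corrected_label) pairs.
corrections = []

def record_correction(text, corrected_label):
    corrections.append((text, corrected_label))

def retrain(model, base_texts, base_labels):
    """Periodically retrain on the original data plus the corrections."""
    texts = base_texts + [t for t, _ in corrections]
    labels = base_labels + [l for _, l in corrections]
    model.fit(texts, labels)   # assumed scikit-learn-style fit(X, y)
    return model
```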

Another challenge is the diversity of the infrastructure landscape, as each company has different tools for, e.g., tracking issues and documenting requirements. Therefore, stakeholders see diverse ways of integrating requirements intelligence into their current workflows. While some wish for a standalone tool, others would like to see it integrated into their existing tools.

We conclude that there is no single solution to stakeholders’ needs; such a tool will therefore look different, use different approaches, and be integrated differently in every company that adopts it.