
4. Results

4.2. Action Patterns for Efficient and Effective Exploratory Information Seeking

4.2.1. Fighting the Masses of Information

More than 50% of the participants reported feeling information overload at least occasionally during exploratory online information seeking; 20% of the participants feel overloaded always or often. This first coding category of action patterns for efficient and effective exploratory information seeking focuses on actions taken by the sample of ongoing experts to fight the masses of information online and thus to reduce (the frequency of) perceived information overload.

Figure 28: Perceived Information Overload

When asked about situations in which they had felt information overload during exploratory online information seeking in the past, the participants most frequently mentioned the availability of too many results after typing in a search term (11/25 participants) and an insufficient overview of the respective topic, i.e. missing knowledge of relevant sub-topics and keywords for pre-filtering (6/25 participants).

Figure 29: When Does Perceived Information Overload Occur

Further, when asked how they proceed to find the right/relevant search terms, the most frequently used strategy besides an intuitive trial-and-error search (applied by 56% of the participants) was to consciously familiarize themselves with the respective topic beforehand.

Figure 30: Finding the Right/Relevant Search Terms

Hence, in the following, the strategies identified for three actions will be examined: (1) familiarizing with a relatively new topic, (2) pre-filtering the mass of potential results and (3) filtering the mass of results once given, i.e. evaluating the quality and relevance of suggested content in the course of the search.

(1) Familiarizing with a relatively new topic

The table below shows all approaches suggested for consciously familiarizing with a relatively unknown topic. More than half of the interviewees search for and read books for this purpose. 30% of the participants draw on the expertise of colleagues or their supervisor in this context; they thus use their network for a first orientation. Watching introductory videos for the purpose of familiarizing with a relatively new topic was mentioned by 28% of the participants (7/25). Other actions mentioned are reading through comments (PhD student, law), reading literature reviews, reading blog/forum entries, reading through journals which target a relatively broad public, searching for lecture notes, watching online courses, delegating the familiarization task to another person and reading through diverse, randomly found material. 1

1 Note: The suggested actions of familiarization do not necessarily happen online (e.g. reading books or using one's network). Nevertheless, as they are taken as preparative steps for online information seeking, they are considered anyway.

Figure 31: Familiarizing with a Relatively New Topic

(2) Pre-Filtering the Mass of Results

As illustrated in Figure 27, 44% of the participants consciously use the filtering functions provided by search engines/databases to pre-filter potential results. Further, about half of the participants (13/25) narrow their search and thus pre-filter potential results by consciously integrating Boolean operators into the search query and/or by using other command symbols serving this purpose. These 13 participants mention six operators and symbols in total: the Boolean operators AND, OR and NOT, quotation marks for getting results containing exactly the combination of words typed in, the operator "filetype:pdf" for only getting documents in PDF format, and the star sign (*) as a placeholder for missing/unfinished words. Most frequently mentioned was the Boolean operator AND, by about 77% (10/13) of the participants who stated to consciously use operators or other command symbols. Overall, the usage of Boolean operators was mentioned more often than the usage of the other command symbols; each Boolean operator was mentioned by at least one third of the 13 participants consciously using operators or other command symbols.

Figure 32: Usage of Boolean Operators / Other Command-Symbols (in percentage of all 13 participants using Boolean operators or command-symbols)
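For illustration, the following hypothetical queries combine the operators and symbols mentioned above; the exact syntax and the set of supported operators (e.g. whether "filetype:" or the * wildcard is available) vary between search engines and databases:

    "information overload" AND ("exploratory search" OR browsing)
    information seeking NOT medical
    "exploratory information seeking" filetype:pdf
    informat* overload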

(3) Evaluating quality and relevance of found material in the course of the search for filtering the results

Overall, nine different approaches for evaluating the quality and relevance of results in the course of the search itself are suggested. More than 60% of the participants lean on their scientific experience for this evaluation, deciding subjectively while reading or skimming through the material (15/25 participants) and/or deciding on the basis of meta-information on the material (16/25 participants), e.g. author, year of publication or publisher.

44% of the participants (11/25) consider journal rankings (VHS, EBS, Scimago) when evaluating the quality or relevance of given results. 28% (7/25 participants) consider by whom or how often the material is cited. Other criteria mentioned for the quality/relevance evaluation are whether the material is peer-reviewed or not, an evaluation on the basis of self-defined individual screening criteria for the respective search task, and considering the scientific fundament, which means verifying the references of the respective material. 12% (3/25 participants) stated to generally trust every published material to be of sufficient quality.

Figure 33: Evaluating Quality / Relevance

72% of the participants do not base their evaluation on only one of those evaluation approaches but combine them instead. On average, the participants combine two to three evaluation approaches; the maximum is five (by one participant). The table below provides an overview of the mix of evaluation approaches per participant.

Figure 34: Evaluation of Quality / Relevance per Participant

Criterion                          Nr. of Participants
Scientific Fundament               2
Self-Defined Screening Criteria    2
Trust Database                     3
Peer-Reviewed                      3
How Often / from Whom Cited        7
Rankings                           11
Subjective Impression              15
Meta-Information                   16

Sum of criteria considered per participant (1-25):
1 4 3 5 2 3 1 4 3 1 2 2 2 2 1 1 1 5 1 2 2 3 2 3 3

Five participants criticized the usage of rankings for an evaluation of quality or scientific relevance; two of those use rankings themselves, but carefully. The excerpts below serve as examples of the most frequently mentioned criticisms: that the rankings of scientific journals were perceived as more of a commercial than a scientific phenomenon, that even highly-ranked journals were not always objective in their choice of scientific articles to publish because they had an interest in publishing articles of well-known scientists for image reasons, that the competition among scientific journals was quite high and the quality difference thus not necessarily large, and that second-tier journals could and did publish good articles as well.


“Well, quite honestly, this whole rankings business and this, ehm, all these things you see with scientific papers nowadays, like for example this index factor or things like that, or impact factor, sorry, ehm, well, they seem to me more commercial than scientific, really. […] That is why I, I try not to assign a value to a paper, eh, on the basis of an impact factor, yes. That is what I try, yes. That is why I simply do not look at who the publisher is, yes.” (Interview Nr. 16, § 226)

“…and often there is also a lot of literature in the second-tier journals that is actually very good; in part they simply had bad luck, or a bad relationship with one of the editors, that can happen too. That is of course not the standard, but it can certainly happen, yes. And the competition is currently simply very high. That is why the difference between the top articles, eh, between some articles in the top journals and articles in the simply good journals is not, not dramatic. It is not that big, yes. There is, admittedly, already a big difference between the so-called C-journals and A-journals, although one really has to be careful, yes. Because these C-journals are often not the bad journals, but the journals that are simply, eh, narrowly specialized, yes. For example, there are journals in economic history, and they are simply not read that often by, by a broad audience, yes. And there you can, eh, find many fine articles which simply cannot find any other way into the other, eh, journals, because they are simply based on, yes, on historical knowledge, yes. And that is, so to speak, somehow tied to a certain region, and that is why the editors of the bigger journals, or of the more renowned journals, think ‘no, that is far too narrow for us’, yes.” (Interview Nr. 18, § 6)

“Because, for example, if a professor has already published somewhere, ehm, very often, let us say Nature, one of the biggest, then his chance of publishing there again is high. These journals also want to have rank and name. They want people with rank and name writing for them. But the quality just is not always equally good.” (Interview Nr. 23, § 64)

“I mean, you can at least, if you know the ranking of a journal, yes, then you know, okay, there is less probability that some bullshit gets printed, well, presented in this journal, yes. Then it is an indicator. But the fact that the article itself appears in a, in a highly-cited journal does not yet mean that it is a good journal, that it is a good article, yes.” (Interview Nr. 18, § 6)

One participant uses the SCImago Journal Ranking as an alternative to traditional rankings.

As shown in the excerpt below, this participant chooses this alternative because of its mix of evaluation criteria: it applies the h-index, it includes journal-external citations, and it considers the fit between the topics a journal claims to cover and the topics actually covered by its published articles.

“There I have this SCImago Journal Ranking. I do not know exactly what the site is called. There you can enter a subject area, and then it gives you the best journals ranked from 1 to Z, so to speak. Eh, and it sorts them by, well, that system is a bit more filtered, that is this h-index. But not, well, there is the, ehm, everyone knows the impact factor. But it is actually said to be a bit outdated, because it just helps the old journals, and the new ones somehow go completely under. Because until you once have a certain impact factor that is interesting to anyone, well. And they have a different sorting, well, there is this. I do not know what it is called, I would have to send that afterwards. I just forgot it. SCI it is called, I think. (this refers to: http://www.scimagojr.com/journalrank.php) SCI something, there the citations are also counted, the journal-external ones. Because, for example, the journals tend to, for example, whoever cites their own journals a lot gets into the journal more easily, because that increases their citations. And they just do it this way: they also look at how many non-own references were in the journal in that year, and so on. And how well the topics fit the topics they announce. How well the topics that are then covered in the journal fit. Because a journal often has a name, eh, for example Biomass & Bioenergy, and then it is actually almost always only about biomass, and the bioenergy topic has hardly been covered at all in the last 5 years. Then I know, as someone who is now writing about bioenergy, okay, that journal will, I will not get in there, because they actually treat that topic quite neglectfully.” (Interview Nr. 4, § 64)
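The h-index mentioned in this excerpt can be made concrete: a journal (or author) has index h if h of its articles have each been cited at least h times. The following is a minimal sketch in Python; the function name and the example citation counts are illustrative assumptions, and this is not SCImago's actual computation (which, as the participant notes, additionally discounts journal self-citations and considers topic fit):

    def h_index(citations):
        # Sort per-article citation counts in descending order
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            # The h-index is the largest rank at which an article
            # still has at least `rank` citations
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Example: seven articles cited [10, 8, 5, 4, 3, 2, 1] times -> h-index 4
    print(h_index([10, 8, 5, 4, 3, 2, 1]))  # prints 4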

The evaluation of quality and relevance is further investigated from a more process-oriented point of view by focusing on the filtering actions the participants take when facing a result list after typing in a search term (combination). Basically, most participants were found to engage in a first and a second screening action, the former targeting the whole list of results, the latter targeting one specific piece of material.

When asked which factors decide whether material given in the result list is considered further or not, the participants mentioned eight different criteria overall, hereafter called the “First Screening Criteria”. 96% (24/25 participants) consider the title when deciding whether the given material should be considered further. Only one participant stated to not look at the title but at the source/type of information instead. Overall, the source or type of information is considered by 36% (9/25 participants); hereby, the source is the website offering the respective content.

About a third of the participants (8/25) consider the author when deciding whether to open or download a content. Other criteria considered, if given, are the preview of the context in which parts of or the whole search term appear, the abstract, the full-text online availability, the year, and the number of editions of the respective source.

Figure 35: First Screening Criteria

The table below shows the first screening criteria per participant. On average, each participant considers two criteria at the same time; one participant considers six criteria for the first screening.

Figure 36: First Screening Criteria per Participant

Criterion                                                        Sum
Author                                                           8
Title                                                            24
Preview of which parts of the search term appear in the source   6
Abstract                                                         4
Source / File Type / Type of Information                         9
Year                                                             2
Online Availability                                              3
Nr. of Editions                                                  1

Total Nr. of Criteria per participant (1-25):
2 1 2 3 1 2 1 6 1 3 4 2 1 2 2 3 1 3 1 2 3 2 3 3 3

After the first screening of the given results, 88% of the participants (22/25) regularly engage in a second screening by opening the respective material before deciding to save it, print it or leave it. One participant (1/25) does so only in exceptional cases, and two participants (2/25) always save the material after the first screening; if they perform a second screening at all, they thus do it offline.

The second screening varies widely in depth, from only reading the abstract to reading the whole document. The most common second-screening actions among the participants are reading the abstract and/or reading the conclusion: of those participants who engage in a second screening regularly or at least in exceptional cases, about 70% (16/23 participants) focus on the abstract and about 39% (9/23 participants) on the conclusion.

Figure 37: Second Screening Criteria (given in % of those engaging in a second screening regularly or exceptionally)

All in all, the actions taken for a second screening of the results can be grouped into three categories: (1) a content-wise screening, (2) a structure-wise screening and (3) a document-identity check. A content-wise screening is done by all 23 participants who engage in a second screening regularly or at least in exceptional cases. A structure-wise screening is done by 4/23 participants, and a document-identity check by 2/23 participants. The table below gives an overview of the combinations of actions taken by each participant during a second screening. On average, the participants performed two to three actions in combination for a second screening.

Figure 38: Second Screening of the Material
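Read as a process, the reported behavior amounts to a two-stage filter over a result list: a first screening of metadata visible in the list, then a second screening of the opened material. The sketch below is a purely illustrative formalization in Python; all field names, criteria and keywords are hypothetical assumptions, and no participant described an automated procedure:

    from dataclasses import dataclass

    @dataclass
    class Result:
        # Metadata visible in the result list (first screening)
        title: str
        author: str
        source: str
        # Content visible only after opening the material (second screening)
        abstract: str
        conclusion: str

    def first_screening(results, keywords):
        # Stage 1 targets the whole result list; the title was the
        # criterion used by 24/25 participants
        return [r for r in results
                if any(k.lower() in r.title.lower() for k in keywords)]

    def second_screening(result, keywords):
        # Stage 2 targets one specific material; reading the abstract
        # and/or the conclusion were the most common reported actions
        text = (result.abstract + " " + result.conclusion).lower()
        return any(k.lower() in text for k in keywords)

    # Usage: screen the list first, then open and check each remaining item
    results = [Result("Coping with information overload", "A. Author",
                      "Journal of Information Science",
                      "We study perceived overload ...",
                      "Overload can be reduced by pre-filtering ...")]
    shortlist = first_screening(results, ["information overload"])
    kept = [r for r in shortlist if second_screening(r, ["pre-filtering"])]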