
Submitted by

DI Daniel Queteschiner

Submitted at

Department of Particulate Flow Modelling

Supervisor and First Examiner

Priv.-Doz. Dr. Simon Schneiderbauer

Second Examiner

Prof. Dr. ir. Niels Deen

Co-Supervisors

Assoz. Univ.-Prof. Dr. Stefan Pirker
DI Dr. Thomas Lichtenegger

February 2020

JOHANNES KEPLER UNIVERSITY LINZ
Altenbergerstraße 69
4040 Linz, Austria

Discrete Element Method:

Adaptive Resolution

Simulations of Granular Flows for

Large-Scale Systems

Doctoral Thesis

to obtain the academic degree of

Doktor der technischen Wissenschaften

in the Doctoral Program


Statutory Declaration

I hereby declare that the thesis submitted is my own unaided work, that I have not used other than the sources indicated, and that all direct and indirect sources are acknowledged as references.

This printed thesis is identical with the electronic version submitted.


Abstract

The ability to model granular systems at the level of individual particles has contributed greatly to the success of the discrete element method (DEM). At the same time, this fundamental concept hinders the use of the DEM for industrial-scale simulations, as the computational cost of the method increases with the size of the system.

The DEM coarse-grain (CG) model provides one means of counteracting this effect by replacing a group of original particles by a larger (pseudo) particle. The major shortcoming of this approach is that it fails to capture effects that intrinsically depend on particle size.

To overcome this deficiency we have devised a novel model to efficiently combine multiple levels of coarse-graining. While a coarse realization is used where it sufficiently represents the granular flow, the level of resolution is increased recursively in spatially confined regions of interest. Thus, the method is able to benefit from the speedup of the coarse-grain approach and retain the details of the granular system in crucial regions. Two-way coupling between different levels of resolution is established by passing volume-averaged flow properties.

We present validation data based on the comparison between the computed statistical properties of our multi-level coarse-grain (MLCG) model and the corresponding properties of the fully resolved reference system.

Furthermore, it is demonstrated that the developed MLCG model can be applied to unresolved CFD-DEM simulations, where the DEM part typically takes up a large fraction of the computational resources. On the CFD side, each CG level of the granular model is treated separately using appropriately scaled drag laws and a suitable mesh resolution. The DEM component links up the different CG levels via the coupling procedure described above.


Kurzfassung

The ability to model granular systems at the level of individual particles has contributed considerably to the success of the discrete element method (DEM). At the same time, this very underlying concept hinders the application of the DEM to simulations at industrial scale, since the computational effort of the method grows with the size of the system.

The coarse-grain (CG) model of the DEM offers one way to counteract this effect by replacing a group of original particles with a larger (pseudo) particle. The major shortcoming of this approach is that it cannot correctly capture effects that depend on the particle size itself.

To remedy this deficiency, a new model was devised that efficiently combines multiple coarse-graining levels. A coarsely resolved representation of the particles is used in regions where it reproduces the particle flow sufficiently accurately. In spatially confined regions in which a more detailed representation is desired or required for an accurate simulation, the resolution is increased recursively. In this way, the simulation can profit from the shorter computation times of the CG model while retaining the required level of detail in critical regions of the simulation. Two-way communication between different levels of resolution is established through the exchange of volume-averaged flow properties.

The developed technique is validated by comparing statistical properties of the multi-level coarse-grain (MLCG) model with the corresponding properties of a fully resolved reference simulation.

Furthermore, it is demonstrated that the developed MLCG model can also be applied to (unresolved) CFD-DEM simulations, in which the DEM computations typically occupy a large share of the computational resources. On the CFD side, each CG level of the granular model is treated separately, using appropriately scaled interaction models and adequately resolved computational grids. The DEM component links the different CG levels as described above.


Acknowledgments

First of all, I would like to thank Prof. Stefan Pirker for providing the opportunity to immerse into the fascinating field of particulate flow modeling, constantly showing his interest in the research activities and coming up with new and spirited suggestions.

I would like to express my deepest gratitude and respect to Dr. Simon Schneiderbauer who unstintingly shared his profound knowledge on computational fluid dynamics, always bringing up new impulses and intriguing aspects.

I am particularly indebted to Dr. Thomas Lichtenegger, who, with his involved support and valuable advice, made a major contribution to the successful outcome of this thesis.

I wish to express my sincere appreciation to Dr. Christoph Kloss for his guidance during my preliminary studies inducting me into the ins and outs of LIGGGHTS and all the enlightening discussions on particle simulation techniques.

A big thank you goes to Dr. Richard Berger, Dr. Gerhard Holzinger and Dr. Philippe Seil for passionate conversations during the daily programming work and their altruistic willingness to go bug hunting whenever one or two needed to be squashed.

I would like to thank all my colleagues for the motivating and at the same time relaxed atmosphere in the office.

Last but not least I want to thank my family, especially my parents, for their love and support throughout all those years, never asking anything in return.


Contents

1 Introduction 1

2 The Discrete Element Method 5

2.1 Contact models . . . 7

2.1.1 Linear spring dashpot model . . . 9

2.1.2 Non-linear spring dashpot model . . . 11

2.1.3 Rolling friction models . . . 13

2.2 Critical time step . . . 16

2.3 Coarse-grain model . . . 17

2.4 Multi-level coarse-grain model . . . 19

2.4.1 Reconstruction step . . . 19

2.4.2 Compression step . . . 23

2.5 Fragmentation model . . . 25

2.5.1 Breakage criteria and breakage probability . . . 25

2.5.2 Breakage function and breakage index . . . 28

2.5.3 Momentum and energy conservation . . . 29

2.5.4 Coarse-grain model . . . 30

2.6 Unresolved CFD-DEM . . . 31

2.6.1 Governing equations . . . 31

2.6.2 Drag models . . . 35

3 Implementation Details 39

3.1 Discrete element method . . . 39

3.1.1 Time integration scheme . . . 39

3.1.2 Neighbor lists . . . 43

3.1.3 LIGGGHTS time step . . . 46

3.1.4 Parallelization . . . 47

3.1.5 Tangential contact force model . . . 48

3.2 Multi-level coarse-grain model . . . 50

3.2.1 Definition of mesh geometry . . . 50

3.2.2 Particle properties at the boundary surface . . . 52

3.2.3 Particle insertion . . . 55

3.2.4 Particle binning . . . 55

3.2.5 Volume-averaged particle properties . . . 58

3.2.6 PI controller . . . 59

3.2.7 CG level separation . . . 60

3.2.8 Extension to unresolved CFD-DEM . . . 66


3.3.1 Fragment size distribution . . . 70

3.3.2 Sphere packing . . . 71

3.3.3 Model extension . . . 77

4 Verification and Validation 79

4.1 Coarse-grain model . . . 81

4.1.1 Granular jet . . . 81

4.2 Multi-level coarse-grain model . . . 84

4.2.1 Verification of the reconstruction step . . . 84

4.2.2 Verification of the compression step . . . 90

4.2.3 Bin discharge . . . 100

4.2.4 Extension to unresolved CFD-DEM . . . 113

4.3 Fragmentation model . . . 117

4.3.1 Verification of breakage criteria . . . 117

4.3.2 Crushing strength of iron ore pellets . . . 120

5 Industrial Application 125

5.1 Direct reduction of iron . . . 125

5.1.1 MIDREX process . . . 126

5.1.2 Shaft furnace simulation setup . . . 129

5.1.3 Shaft furnace filling process . . . 130

5.1.4 Shaft furnace discharge process . . . 134


“In the beginning there was nothing, which exploded.”

Terry Pratchett

1

Introduction

Since its introduction by Cundall and Strack in 1979 [1], the discrete element method (DEM) has proven to be a valuable tool for the analysis of granular flows. Fostered by the steadily growing computational power, the DEM was able to leave the field of pure academic research and find its way into numerous branches of industry, where it is used to optimize processes and equipment in terms of efficiency, throughput and quality. Major reviews of substantive applications of the DEM are published periodically [2–5]. The published literature demonstrates that simulations are an indispensable tool in process analysis, especially in cases where experimental data is difficult to obtain. The field of application includes such diverse areas as the food [6, 7], pharmaceutical [8, 9] and mining [10] industries, as well as the iron and steel making industry [11, 12].

The latter encompasses a wide range of processes, which makes it an attractive field of study from a multitude of perspectives. The history of iron smelting alone could fill books, starting from the oldest known smelted iron objects of the Iron Age (e.g. the earliest artifacts in Central Europe from the Hallstatt C culture date back to the 8th century BCE), to the propagation of blast furnaces in China around the beginning of the Common Era and the replacement of charcoal by metallurgical coke in the 18th century, to the modern iron and steel plants of today’s world [13]. Iron and steel remain fundamental to modern civilization and will not be replaced anytime soon.

Figure 1.1: Ore-based routes for iron and steel making [14, 15] including blast furnace, smelting reduction (Corex, Finex), and direct reduction (Finmet, rotary kiln, shaft furnace).

The multiphysics systems in the iron and steel making processes, including fluid dynamics, heat transfer, granular flow, fragmentation, etc., by themselves engage scientific interest. Also, in the context of the European Green Deal and carbon neutrality, it is inevitable to devote attention to the energy efficiency and environmental aspects of the iron and steel making industry. Last but not least, the industrial applications involved, i.e. blast furnace (BF), smelting reduction (SR) and direct reduction (DR) as illustrated in Fig. 1.1, prove to be a challenging area from a computer simulation point of view. The intent of the present study is to accept this challenge and initiate the development of a technique that enables simulations of a complete plant furnace module, with the shaft furnace of the direct reduction process serving as an example, in which the DEM is used to describe the solid phase.

Despite the advances in computer hardware and software parallelization techniques such as MPI and OpenMP [16–18], simulations of industrial-scale problems involving billions of particles easily exceed the limits of feasibility. The demand for ever more detailed physical models in combination with complex industrial geometries aggravates the problem even further, limiting the accessible simulation times and length scales. To be able to handle the immense number of particles in engineering-scale problems, it is necessary to cut back on the level of detail and accuracy. A conventional approach is the usage of continuum mechanics in the form of the finite element method (FEM) [19, 20] or the finite volume method (FVM) [21–23]. A major downside of these methods, though, is their inapplicability to processes that are dominated by the discontinuous behavior exhibited by granular material.


One way to link discrete particle dynamics with macroscopic continuum dynamics is the derivation of Eulerian fields from the discrete system [24–26]. This coarse-graining formulation is designed to yield fields that satisfy the equations of continuum mechanics and can be used to calibrate and validate Eulerian models.

As a different strategy to recover the granular characteristics and increase the depth of physics, spatial multi-scaling techniques have become a practicable approach. The combination of coarse- and fine-scale representations – in confined regions of interest – tries to balance the accuracy of the simulation with the affordable computational expense. For instance, this strategy has been successfully applied in the form of FEM-DEM coupling [27, 28] and CFD-DEM simulations [29– 33]. The underlying idea of combining multiple scales is indeed a more fundamental concept and probably best known from the field of chemistry and molecular dynamics [34–37].

In this spirit, we strive for a spatial multi-scale model that retains the Lagrangian character of the granular flow on all scales. To this end, we draw on the coarse-grain model of the DEM as described in [30, 38–42] (not to be confused with the aforementioned coarse-graining formulation). This technique can ease the severe computational demands of DEM simulations by grouping a number of equal particles together into a representative CG particle, thus effectively reducing the system size. A dimensional analysis of the governing equations yields a set of scaling rules for this approach. However, for effects that inherently depend on particle size, these scaling rules become invalid and the CG model fails to correctly predict the system behavior. In this situation, the model developed in the present work combines the conventional coarse-scale representation with a more detailed realization that is able to capture the decisive effect properly. The embedded fine-scale simulation is spatially restricted to the critical region, while in the remainder of the system a CG representation is maintained. The different levels of resolution are coupled through the exchange of characteristic, volume-averaged flow properties. The concurrent use of coarse- and fine-scale representations makes it possible to speed up the simulation while resolving essential details of the granular system.


“Nothing happens until something moves.”

Albert Einstein

2

The Discrete Element Method

The discrete element method (DEM) is closely related to molecular dynamics (MD) simulations [43, 44]. The two methods share many common features and conceptual similarities, most notably the tracking and updating of each individual particle within the simulation. Sometimes the terms DEM and MD are even used synonymously [45, 46]. Occasionally, the inclusion of rotational degrees-of-freedom due to the assumption of extended bodies instead of point particles as well as the presence of dissipative forces during a particle contact in the DEM are mentioned as distinguishing features [44, 47].

The DEM can be divided into two different classes, viz. the event-driven (ED) and the time-driven (TD) approach, also referred to as the hard-sphere (HS) and soft-sphere (SS) model. The ED method [47–50] assumes perfectly rigid particles, implying instantaneous collisions, i.e. an instantaneous exchange of momentum, at a single point of contact on the particles’ surfaces. Within the framework of ED simulations, the sequence of collision events is discretized with a variable time step where time is advanced to the next event using a collision matrix. This leads to a strictly serial processing of collisions. The most obvious limitation of this approach is the restriction to at most one collision event at any instant of time. Also, the analytic solution approach of the ED method can only handle collisions between exactly two objects as there is no general closed-form solution of the n-body problem. In further consequence this means that the ED scheme cannot be applied to static packings of particles where many-body collisions are the determining factor. In systems where the number of many-body collisions is negligible compared to the number of pairwise collisions, ED simulations are a highly efficient method. On the other hand, a performance-related drawback of the ED method is the complexity of its parallelization [51].

The soft-sphere method [1, 45–47, 52–55] advances time in fixed intervals where trajectories are obtained by numerically integrating Newton’s equations of motion, i.e. solving the differential form of the linear and angular momentum balances. In this approach interparticle contacts have a finite duration. The particles are assumed to deform slightly during a collision, establishing contact over a finite area. The deformations – which are assumed to be much smaller than the particle size – determine the elastic, plastic and frictional forces between the particles. Although it is generally possible to describe the deformation of a single particle within the framework of contact mechanics [56], such a detailed description would be extremely costly from a computational point of view and thus impracticable for a large system of particles. Furthermore, the mechanical behavior of a granular assembly is principally determined by the movements of the particles as rigid bodies, which allows a simplified modeling of the particle deformation. Consequently, instead of actually resolving and applying deformations, a small overlap of (multiple) particles is allowed in the soft-sphere approach and the particles remain geometrically rigid. The contact force between the collision partners is then based on the degree of this overlap as well as the relative velocities of the particles. It follows that the formulation of the contact force model is a fundamental ingredient of the soft-sphere DEM. Several different force schemes have been developed to model the particle contact, including continuous potential models [57] that use repulsive pair potentials in combination with velocity-dependent dissipation forces, linear and non-linear viscoelastic models [52, 58–60] realized via spring-dashpot systems, and hysteretic models [61–64] which avoid velocity-dependent damping and dissipate energy plastically, making use of different springs with different stiffnesses for the loading and unloading period.

For the rest of this treatise we focus on viscoelastic force models for the soft-sphere method as implemented in the simulation software LIGGGHTS [65] and limit ourselves to the case of non-cohesive, spherical grains. The interested reader is referred to the literature [52, 62, 66–69] for an in-depth discussion and comprehensive review of various other force schemes.


Figure 2.1: (a) Spring-dashpot system with frictional slider used in viscoelastic contact force models. (b) Contacting particles in their inertial frame of reference.

2.1 Contact models

More formally, in the soft-sphere approach of the DEM, the trajectory of each particle i = 1, . . . , N in the system is obtained by solving Newton’s equations of motion

\[ m_i \ddot{\mathbf{x}}_i = \mathbf{f}_i \tag{2.1} \]
\[ I_i \dot{\boldsymbol{\omega}}_i = \mathbf{M}_i \tag{2.2} \]

where each element is characterized by its mass $m_i$, position $\mathbf{x}_i$, moment of inertia $I_i$, angular velocity $\boldsymbol{\omega}_i$, total force $\mathbf{f}_i$, and torque $\mathbf{M}_i$. The forces $\mathbf{f}_i$ include external forces such as drag forces or the gravitational force $m_i \mathbf{g}$, as well as contact forces during a binary collision. The latter are typically split into normal and tangential forces and in the case of viscoelastic contact models assume the form:

\[ \mathbf{f}_{n,ij} = -k_n \boldsymbol{\delta}_{n,ij} - \gamma_n \dot{\mathbf{x}}_{n,ij} \tag{2.3} \]
\[ \mathbf{f}_{t,ij} = -k_t \boldsymbol{\delta}_{t,ij} - \gamma_t \dot{\mathbf{x}}_{t,ij} \tag{2.4} \]

with normal and tangential stiffness coefficients $k_{n,t}$ in the elastic terms, damping coefficients $\gamma_{n,t}$ in the dissipative terms, and overlaps $\boldsymbol{\delta}_{n,t}$. $\dot{\mathbf{x}}_{n,ij}$ and $\dot{\mathbf{x}}_{t,ij}$ are the normal and tangential components of the relative particle velocity $\dot{\mathbf{x}}_{ij}$.


The unit vector along the line of centers is defined as

\[ \mathbf{n}_{ij} \equiv \frac{\mathbf{x}_j - \mathbf{x}_i}{|\mathbf{x}_j - \mathbf{x}_i|} \tag{2.5} \]

pointing from particle $i$ to particle $j$. The overlap of two particles with radii $R_{i,j}$ in normal direction is given by

\[ \boldsymbol{\delta}_{n,ij} = \left( R_i + R_j - |\mathbf{x}_i - \mathbf{x}_j| \right) \mathbf{n}_{ij}. \tag{2.6} \]

The relative velocity between the particles at the point of contact is calculated as

\[ \dot{\mathbf{x}}_{ij} = \dot{\mathbf{x}}_i - \dot{\mathbf{x}}_j + \left( R_{c,i}\boldsymbol{\omega}_i + R_{c,j}\boldsymbol{\omega}_j \right) \times \mathbf{n}_{ij} \tag{2.7} \]

with the contact radius of a particle $R_{c,i}$ defined as

\[ R_{c,i} \equiv R_i - \delta_{n,ij}/2 \tag{2.8} \]

such that $R_{c,i} + R_{c,j} = |\mathbf{x}_j - \mathbf{x}_i|$. From these definitions we can derive the relative particle velocities in normal and tangential direction

\[ \dot{\mathbf{x}}_{n,ij} = \left( \dot{\mathbf{x}}_{ij} \cdot \mathbf{n}_{ij} \right) \mathbf{n}_{ij} \tag{2.9} \]
\[ \dot{\mathbf{x}}_{t,ij} = \dot{\mathbf{x}}_{ij} - \dot{\mathbf{x}}_{n,ij} \tag{2.10} \]

The latter also lets us define the tangential unit vector $\mathbf{t}_{ij} \equiv \dot{\mathbf{x}}_{t,ij}/|\dot{\mathbf{x}}_{t,ij}|$ if $|\dot{\mathbf{x}}_{t,ij}| > 0$.

The tangential spring length is computed as the integral of the tangential velocity over the contact time:

\[ \boldsymbol{\delta}_{t,ij} = \int_{t_{c0}}^{t} \dot{\mathbf{x}}_{t,ij} \, \mathrm{d}t'. \tag{2.11} \]

In the discretized form this shall be taken to mean that at each time step during the collision event lasting from $t_{c0}$ to $t$, the tangential spring length $\boldsymbol{\delta}_{t,ij}$ is updated by first projecting the overlap $\boldsymbol{\delta}'_{t,ij}$ accumulated so far into the current tangential plane, followed by adding the current contribution $\Delta\boldsymbol{\delta}_{t,ij} = \dot{\mathbf{x}}_{t,ij}\Delta t$:

\[ \boldsymbol{\delta}_{t,ij} = \boldsymbol{\delta}'_{t,ij} - \left( \boldsymbol{\delta}'_{t,ij} \cdot \mathbf{n}_{ij} \right) \mathbf{n}_{ij} + \Delta\boldsymbol{\delta}_{t,ij} \tag{2.12} \]

The updated $\boldsymbol{\delta}_{t,ij}$ times $k_t$ gives the updated elastic component of the tangential force $\mathbf{f}_{t,el,ij}$. However, as noted by Thornton et al. [68], this integral form only works as long as $k_t$ is constant. A more general method that also works for a non-constant $k_t$ (as in the Hertzian model discussed below) is an incremental approach, i.e.

\[ \mathbf{f}_{t,el,ij} = \mathbf{f}'_{t,el,ij} - \left( \mathbf{f}'_{t,el,ij} \cdot \mathbf{n}_{ij} \right) \mathbf{n}_{ij} - k_t \Delta\boldsymbol{\delta}_{t,ij} \tag{2.13} \]

where $\mathbf{f}'_{t,el,ij}$ is the elastic part of the tangential force accumulated so far. In any case, as a final step $\mathbf{f}_{t,ij}$ is truncated such that

\[ |\mathbf{f}_{t,ij}| \leq \mu |\mathbf{f}_{n,ij}| \tag{2.14} \]

where $\mu$ is a Coulomb-like friction coefficient.
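To make the assembled contact force concrete, the following minimal Python sketch evaluates Eqs. (2.5)–(2.14) for a single pair of contacting spheres, including the incremental tangential update of Eq. (2.13) and the Coulomb truncation of Eq. (2.14). It is an illustration only, not the LIGGGHTS implementation; the function name and all parameter values in the usage example are made up, and the handling of the tangential history upon sliding is a simplification.

```python
import numpy as np

def pair_contact_force(xi, xj, vi, vj, wi, wj, Ri, Rj,
                       kn, kt, gamma_n, gamma_t, mu,
                       ft_el_prev, dt):
    """Viscoelastic normal/tangential force for one contacting sphere pair
    (Eqs. 2.3-2.14); returns (fn, ft, updated elastic tangential force)."""
    rij = xj - xi
    dist = np.linalg.norm(rij)
    nij = rij / dist                                     # Eq. (2.5)
    overlap = Ri + Rj - dist                             # scalar normal overlap
    if overlap <= 0.0:
        return np.zeros(3), np.zeros(3), np.zeros(3)     # no contact
    delta_n = overlap * nij                              # Eq. (2.6)
    Rci, Rcj = Ri - overlap / 2, Rj - overlap / 2        # Eq. (2.8)
    vrel = vi - vj + np.cross(Rci * wi + Rcj * wj, nij)  # Eq. (2.7)
    vn = np.dot(vrel, nij) * nij                         # Eq. (2.9)
    vt = vrel - vn                                       # Eq. (2.10)

    fn = -kn * delta_n - gamma_n * vn                    # Eq. (2.3)

    # incremental tangential force, Eq. (2.13): rotate the history term into
    # the current tangential plane, then add the new elastic increment
    ft_el = ft_el_prev - np.dot(ft_el_prev, nij) * nij - kt * vt * dt
    ft = ft_el - gamma_t * vt                            # Eq. (2.4)

    # Coulomb truncation, Eq. (2.14); how the spring history is rescaled on
    # sliding differs between implementations and is a simplification here
    ft_norm, ft_max = np.linalg.norm(ft), mu * np.linalg.norm(fn)
    if ft_norm > ft_max and ft_norm > 0.0:
        ft *= ft_max / ft_norm
        ft_el *= ft_max / ft_norm
    return fn, ft, ft_el

# Example: two 5 mm spheres approaching head-on (hypothetical parameters)
fn, ft, ft_el = pair_contact_force(
    xi=np.zeros(3), xj=np.array([0.0099, 0.0, 0.0]),
    vi=np.array([0.1, 0.0, 0.0]), vj=np.zeros(3),
    wi=np.zeros(3), wj=np.zeros(3), Ri=0.005, Rj=0.005,
    kn=1e4, kt=2.0 / 7.0 * 1e4, gamma_n=0.5, gamma_t=0.5, mu=0.3,
    ft_el_prev=np.zeros(3), dt=1e-6)
```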

2.1.1 Linear spring dashpot model

One of the simplest and commonly used force models is a linear spring dashpot (LSDP) model which was already proposed in the work of Cundall and Strack [1]. This model combines a linear repulsive spring and a linear dissipative dashpot element, both in normal and tangential direction [46, 68, 70]. This implies that the stiffness and damping coefficients in Eqs. (2.3) and (2.4) are constant. In addition, to use the same metaphor of mechanical elements, Eq. (2.14) may be thought of as a frictional slider as illustrated in Fig. 2.1.

In the case of the LSDP model, Eq. (2.3) takes the form of the well-known differential equation of the damped harmonic oscillator.

\[ \ddot{\delta}_n + \frac{\gamma_n}{m^*}\dot{\delta}_n + \frac{k_n}{m^*}\delta_n = 0. \tag{2.15} \]

Assuming $\delta_n(0) = 0$ and $\dot{\delta}_n(0) = v_0$, we obtain for the under-damped case

\[ \delta_n(t) = \frac{v_0}{\Omega} \exp\left(-\Psi t\right) \sin\left(\Omega t\right) \tag{2.16} \]
\[ \dot{\delta}_n(t) = \frac{v_0}{\Omega} \exp\left(-\Psi t\right) \left( -\Psi \sin\left(\Omega t\right) + \Omega \cos\left(\Omega t\right) \right) \tag{2.17} \]

using the definitions

\[ \Psi \equiv \frac{\gamma_n}{2m^*} \tag{2.18} \]
\[ \Omega_0 \equiv \sqrt{k_n/m^*} \tag{2.19} \]
\[ \Omega \equiv \sqrt{\Omega_0^2 - \Psi^2} \tag{2.20} \]


where $\Omega_0$ and $\Omega$ denote the eigenfrequencies of the undamped and the damped oscillator, respectively. Finding the roots of Eq. (2.16) gives the contact duration

\[ t_{c,n} = \pi/\Omega. \tag{2.21} \]

Note that the collision time is independent of the impact velocity $v_0$. Based on the definition of the (normal) coefficient of restitution, i.e. the ratio of the velocity before and after the collision event,

\[ e = -\frac{\dot{\delta}_n(t_{c,n})}{\dot{\delta}_n(0)} = \exp\left(-\pi\Psi/\Omega\right) \tag{2.22} \]

a relation between $\gamma_n$ and $k_n$ can be derived. It is noteworthy that, like the contact duration, $e$ does not depend on the impact velocity $v_0$ in the LSDP model. Generally, however, $e$ is not independent of the impact velocity [63, 67].

The existence of a simple analytical solution of this problem makes it an interesting candidate for studies despite its deficiencies. Most notably, the LSDP model disregards the fact that particles consisting of linear-elastic material do not exhibit a linear repulsive force. Another problem is related to the damping term, which may cause attractive forces at the end of the collision and even lead to dissipative capture in the case of high damping [71]. Hence, care has to be taken that the collision is terminated using the correct condition, i.e. as soon as $f_{n,ij} = 0$, which may happen prior to $\delta_n = 0$.

In LIGGGHTS, the stiffness coefficient is chosen such that in the case of an undamped, perfectly elastic collision (i.e. $e = 1$) the maximum overlap

\[ \delta_{n,max} = v_0 \sqrt{\frac{m^*}{k_n}} \tag{2.23} \]

is equal to the maximum overlap in the non-linear model described in the section below. Alternatively, one could relate the models by equating the collision time. In the LSDP (hooke) model as implemented in LIGGGHTS the stiffness and damping coefficients as a function of material parameters thus read

\[ k_n = \left( \tfrac{16}{15} E^* \sqrt{R^*} \right)^{4/5} \left( m^* v_0^2 \right)^{1/5} \tag{2.24a} \]
\[ \gamma_n = -\beta \sqrt{4 m^* k_n} \tag{2.24b} \]
\[ k_t = \kappa k_n \tag{2.24c} \]
\[ \gamma_t = \gamma_n \tag{2.24d} \]


where $E^*$, $R^*$, and $m^*$ denote the effective Young's modulus, particle radius, and mass, respectively. The derivation of the effective parameters can be found, for instance, in [56].

\[ E^* = \left( \frac{1-\nu_i^2}{E_i} + \frac{1-\nu_j^2}{E_j} \right)^{-1} \tag{2.25} \]
\[ R^* = \left( \frac{1}{R_i} + \frac{1}{R_j} \right)^{-1} = \frac{R_i R_j}{R_i + R_j} \tag{2.26} \]
\[ m^* = \left( \frac{1}{m_i} + \frac{1}{m_j} \right)^{-1} = \frac{m_i m_j}{m_i + m_j} \tag{2.27} \]

Here, $\nu_{i,j}$ are the Poisson ratios of the particles. Furthermore, $v_0$ is a characteristic impact velocity (i.e. the initial relative particle velocity at the start of the contact) and $\beta$ is defined as a function of the coefficient of restitution $e$:

\[ \beta = \frac{\ln(e)}{\sqrt{\ln^2(e) + \pi^2}} \tag{2.28} \]

The stiffness ratio $\kappa$ is defined as [56]:

\[ \kappa \equiv \frac{\dfrac{1-\nu_i}{G_i} + \dfrac{1-\nu_j}{G_j}}{\dfrac{1-\nu_i/2}{G_i} + \dfrac{1-\nu_j/2}{G_j}} \tag{2.29} \]

with the shear modulus $G = E/\left[2\left(1+\nu\right)\right]$. For similar mechanical materials this expression reduces to

\[ \kappa = \frac{2\left(1-\nu\right)}{2-\nu} \tag{2.30} \]

From Eq. (2.30) one can observe that $\kappa$ varies from 1 to 2/3 as Poisson's ratio varies from 0 to 0.5. Nevertheless, the value of $\kappa$ is often set arbitrarily, even outside the range of realistic values. A special case is $\kappa = 2/7$ which leads to equal oscillation periods of the normal and tangential force.
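As an illustration of Eqs. (2.24)–(2.30), the short Python sketch below assembles the hooke-style coefficients for two identical spheres from material parameters. The function name and the numerical values are assumptions for demonstration; the actual LIGGGHTS implementation is in C++ and is not reproduced here.

```python
import numpy as np

def hooke_coefficients(E, nu, rho, R, e, v0):
    """Stiffness/damping coefficients of the linear (hooke) model,
    Eqs. (2.24)-(2.30), for two identical spheres; a sketch, not LIGGGHTS code."""
    E_eff = 1.0 / (2.0 * (1.0 - nu**2) / E)                # Eq. (2.25), identical materials
    R_eff = R / 2.0                                        # Eq. (2.26)
    m = 4.0 / 3.0 * np.pi * rho * R**3                     # sphere mass
    m_eff = m / 2.0                                        # Eq. (2.27)

    beta = np.log(e) / np.sqrt(np.log(e)**2 + np.pi**2)    # Eq. (2.28)
    kappa = 2.0 * (1.0 - nu) / (2.0 - nu)                  # Eq. (2.30)

    kn = (16.0 / 15.0 * E_eff * np.sqrt(R_eff))**0.8 * (m_eff * v0**2)**0.2  # Eq. (2.24a)
    gamma_n = -beta * np.sqrt(4.0 * m_eff * kn)            # Eq. (2.24b)
    kt = kappa * kn                                        # Eq. (2.24c)
    gamma_t = gamma_n                                      # Eq. (2.24d)
    return kn, gamma_n, kt, gamma_t

# hypothetical glass-bead-like parameters
print(hooke_coefficients(E=5e6, nu=0.25, rho=2500.0, R=0.005, e=0.8, v0=1.0))
```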

2.1.2 Non-linear spring dashpot model

To model the interaction between two elastic spheres more accurately, the theory of Hertz [56, 72] is widely used for the description of the normal contact forces. This normal model is frequently complemented by the work of Mindlin [73] and Mindlin and Deresiewicz [74] who derived a formulation for the tangential interaction force. However, since the latter model is rather involved, especially becoming computationally demanding if slip is occurring on the contact surface, typically a simplified version is applied.

Such a combination has been described in detail, for instance, by Tsuji et al. [59] as well as Di Renzo and Di Maio [70]. To account for inelastic collisions, Tsuji et al. [59] extended the Hertz model by a heuristic normal damping force. They adopted a normal damping coefficient proportional to $\delta_{n,ij}^{1/4}$ and an empirical constant related to the coefficient of restitution. Antypov and Elliott [75] later gave a complete analytical solution for this constant. Following the suggested procedure the coefficient of restitution becomes a constant input parameter in the contact model – similar to the LSDP model described above.

In contrast to the linear model, however, the collision time $t_c$ in the Hertzian model depends on the impact velocity $v_0$ and in the elastic case can be derived as

\[ t_{c,n} = 2.9432 \left( \frac{15}{16} \frac{m^*}{E^* \sqrt{R^*}} \right)^{2/5} \frac{1}{v_0^{1/5}}. \tag{2.31} \]

The maximum compression for this situation is given by

\[ \delta_{n,max} = \left( \frac{15}{16} \frac{m^* v_0^2}{E^* \sqrt{R^*}} \right)^{2/5}. \tag{2.32} \]

According to Mindlin and Deresiewicz, the tangential force component depends on the normal displacement $\delta_{n,ij}$. Tsuji et al. assumed Mindlin's (less complicated) no-slip solution, in which case the elastic component of the tangential force is proportional to $\delta_{n,ij}^{1/2}$ and linear with respect to $\delta_{t,ij}$. Furthermore they assumed $\gamma_t$ to be equal to $\gamma_n$.

The hertz normal contact model in LIGGGHTS follows the model described in [59, 70]. In a nutshell, the stiffness and damping coefficients of the Hertzian spring dashpot model in LIGGGHTS thus read:

\[ k_n = \tfrac{4}{3} E^* \sqrt{R^* \delta_{n,ij}} \tag{2.33a} \]
\[ \gamma_n = -\beta \sqrt{5 m^* k_n} \tag{2.33b} \]
\[ k_t = 8 G^* \sqrt{R^* \delta_{n,ij}} \tag{2.33c} \]
\[ \gamma_t = -\beta \sqrt{\tfrac{10}{3} m^* k_t} \tag{2.33d} \]


with the effective shear modulus

\[ G^* = \left( \frac{2-\nu_i}{G_i} + \frac{2-\nu_j}{G_j} \right)^{-1} \tag{2.34} \]

which in terms of $E = 2G\left(1+\nu\right)$ may be written as

\[ G^* = \left( \frac{2\left(2-\nu_i\right)\left(1+\nu_i\right)}{E_i} + \frac{2\left(2-\nu_j\right)\left(1+\nu_j\right)}{E_j} \right)^{-1}. \tag{2.35} \]
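Analogously, the following sketch evaluates the Hertzian coefficients of Eqs. (2.33)–(2.35) at a given overlap, together with the elastic collision time and maximum compression of Eqs. (2.31)–(2.32). Again the parameter values (and the assumed impact velocity) are illustrative only.

```python
import numpy as np

def hertz_coefficients(E, nu, rho, R, e, delta_n):
    """Overlap-dependent Hertzian coefficients, Eqs. (2.33)-(2.35),
    for two identical spheres (sketch with assumed parameters)."""
    E_eff = 1.0 / (2.0 * (1.0 - nu**2) / E)               # Eq. (2.25)
    G = E / (2.0 * (1.0 + nu))
    G_eff = 1.0 / (2.0 * (2.0 - nu) / G)                  # Eq. (2.34)
    R_eff = R / 2.0
    m_eff = 0.5 * (4.0 / 3.0) * np.pi * rho * R**3
    beta = np.log(e) / np.sqrt(np.log(e)**2 + np.pi**2)   # Eq. (2.28)

    kn = 4.0 / 3.0 * E_eff * np.sqrt(R_eff * delta_n)     # Eq. (2.33a)
    gamma_n = -beta * np.sqrt(5.0 * m_eff * kn)           # Eq. (2.33b)
    kt = 8.0 * G_eff * np.sqrt(R_eff * delta_n)           # Eq. (2.33c)
    gamma_t = -beta * np.sqrt(10.0 / 3.0 * m_eff * kt)    # Eq. (2.33d)

    # elastic collision time and maximum compression, Eqs. (2.31)-(2.32)
    v0 = 1.0                                              # assumed impact velocity
    tc = 2.9432 * (15.0 / 16.0 * m_eff / (E_eff * np.sqrt(R_eff)))**0.4 / v0**0.2
    delta_max = (15.0 / 16.0 * m_eff * v0**2 / (E_eff * np.sqrt(R_eff)))**0.4
    return kn, gamma_n, kt, gamma_t, tc, delta_max

print(hertz_coefficients(E=5e6, nu=0.25, rho=2500.0, R=0.005, e=0.8, delta_n=1e-5))
```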

2.1.3 Rolling friction models

Rolling friction models are commonly added to the contact model to account for micro-slip and friction on the contact surface, surface adhesion, shape effects or energy dissipation mechanisms such as plastic deformation.

According to Ai et al. [76] the multitude of rolling friction models may be categorized into four main types, namely constant directional torque (CDT) models, viscous models, elastic-plastic spring dashpot (EPSD) models and contact-independent models.

In compliance with the definition of the coefficient of sliding friction (COF), Ai et al. use a dimensionless coefficient of rolling friction (CORF) in their formulation of the various rolling friction models. In their work the CORF is defined as

µr = tan(β) (2.36)

with β denoting the angle of rolling resistance, i.e. the maximum angle of a slope at which the rolling resistance torque and the torque caused by gravity counterbalance each other. In the following, we give a brief overview of the CDT and EPSD models as those are readily usable in LIGGGHTS.

Constant directional torque model

For this type of rolling friction model a constant torque is applied to the particles. The orientation of the torque is chosen such that it is always in the opposite direction of the relative rotational velocity $\boldsymbol{\omega}_r$ of two particles in contact. Zhou et al. [77] suggested the following model formulation:

\[ \mathbf{M}_r = -\mu_r R^* k_n \delta_{n,ij} \frac{\boldsymbol{\omega}_r^{shear}}{|\boldsymbol{\omega}_r^{shear}|} \tag{2.37} \]

where $\boldsymbol{\omega}_r^{shear}$ is the projection of $\boldsymbol{\omega}_r$ onto the current shear plane.

Elastic-plastic spring-dashpot model

In this family of models the rolling friction torque $\mathbf{M}_r$ is composed of a mechanical spring torque $\mathbf{M}_r^k$ and a viscous damping torque $\mathbf{M}_r^d$. The model presented here has been proposed by Ai et al. [76]. It is based on the work by Iwashita and Oda [78] and Jiang et al. [79] and extends the applicability of the underlying models to cases involving rolling back and cyclic rolling. The total rolling friction torque $\mathbf{M}_r$ is defined as

\[ \mathbf{M}_r = \mathbf{M}_r^k + \mathbf{M}_r^d. \tag{2.38} \]

The spring torque $\mathbf{M}_r^k$ depends on the rolling stiffness $k_r$ and the relative rotation $\boldsymbol{\theta}_r$ between two contacting particles:

\[ \mathbf{M}_r^k = -k_r \boldsymbol{\theta}_r \tag{2.39} \]

The definition of the rolling stiffness $k_r$ in [76] is given for 2-dimensional discs as

\[ k_r = 3 \left(\mu_r R^*\right)^2 k_n \tag{2.40} \]

which in the case of 3-dimensional spheres becomes [80]

\[ k_r = \tfrac{9}{4} \left(\mu_r R^*\right)^2 k_n. \tag{2.41} \]

$\mathbf{M}_r^k$ can be calculated in an incremental manner, i.e.

\[ \mathbf{M}_r^k(t + \Delta t) = \mathbf{M}_r^k(t) - k_r \Delta\boldsymbol{\theta}_r \tag{2.42} \]

and is limited such that

\[ |\mathbf{M}_r^k| \leq M_r^m. \tag{2.43} \]

Here,

\[ M_r^m = \mu_r R^* k_n \delta_{n,ij} \tag{2.44} \]

denotes the torque at a full mobilization rolling angle $\theta_r^m$ (Eq. (2.45)).


We note the similarity between Eq. (2.44) and Eq. (2.14). The damping torque $\mathbf{M}_r^d$ depends on the damping coefficient $\gamma_r$ and the relative rolling angular velocity $\boldsymbol{\omega}_r = \dot{\boldsymbol{\theta}}_r$ between the two particles in contact:

\[ \mathbf{M}_r^d(t + \Delta t) = \begin{cases} -\gamma_r \boldsymbol{\omega}_r & \text{if } |\mathbf{M}_r^k(t + \Delta t)| < M_r^m \\ -f \gamma_r \boldsymbol{\omega}_r & \text{if } |\mathbf{M}_r^k(t + \Delta t)| = M_r^m \end{cases} \tag{2.46} \]

The value of the term $f$ depends on whether the damping is supposed to dissipate a significant amount of energy or is intended to just stabilize the particle motion. In the latter case, which is implemented in LIGGGHTS, $f$ is set to 0.

The rolling damping coefficient $\gamma_r$ is split into the rolling viscous damping ratio $\eta_r$ and the critical rolling damping constant $\gamma_r^{crit}$:

\[ \gamma_r = \eta_r \gamma_r^{crit} \tag{2.47} \]

with $\gamma_r^{crit}$ being a function of the effective moment of inertia $I_r$ and the rolling stiffness $k_r$

\[ \gamma_r^{crit} = 2\sqrt{I_r k_r} \tag{2.48} \]
\[ I_r = \left( \frac{1}{I_i + m_i R_i^2} + \frac{1}{I_j + m_j R_j^2} \right)^{-1} \tag{2.49} \]
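A compact way to summarize the EPSD update is the following single-axis (scalar) Python sketch of Eqs. (2.38)–(2.48) for the LIGGGHTS-like choice f = 0; the variable names and the reduction of the rotation increment to ω_r·Δt are assumptions of this illustration, not the thesis implementation.

```python
import numpy as np

def epsd_rolling_torque(Mk_prev, omega_r, dt, mu_r, R_eff, kn, delta_n,
                        I_eff, eta_r, f=0.0):
    """One elastic-plastic spring-dashpot rolling-torque update,
    Eqs. (2.38)-(2.48), in scalar (single rolling axis) form."""
    kr = 9.0 / 4.0 * (mu_r * R_eff)**2 * kn            # Eq. (2.41)
    Mr_m = mu_r * R_eff * kn * delta_n                 # Eq. (2.44), full mobilization

    Mk = Mk_prev - kr * omega_r * dt                   # Eq. (2.42), dtheta = omega*dt
    saturated = abs(Mk) >= Mr_m
    Mk = float(np.clip(Mk, -Mr_m, Mr_m))               # Eq. (2.43)

    gamma_r = eta_r * 2.0 * np.sqrt(I_eff * kr)        # Eqs. (2.47)-(2.48)
    Md = -(f if saturated else 1.0) * gamma_r * omega_r  # Eq. (2.46)
    return Mk + Md, Mk                                 # Eq. (2.38) and spring history

# hypothetical single contact
print(epsd_rolling_torque(Mk_prev=0.0, omega_r=2.0, dt=1e-5, mu_r=0.1,
                          R_eff=0.0025, kn=1e4, delta_n=1e-5,
                          I_eff=1e-8, eta_r=0.3))
```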


2.2 Critical time step

The DEM, like any other numerical method, is prone to accuracy and stability issues. In particular, the time integration schemes typically used with the DEM (cf. chapter 3) are only conditionally stable [81]. Thus, the selection of a suitable time step is briefly discussed here, since it has a severe impact on simulation runtime and validity.

Generally, it is desirable to find a critical time step $\Delta t_{crit}$ that is as large as possible to reduce computational costs and at the same time small enough to guarantee numerically stable results.

In the literature, essentially two different ways for the determination of a maximum stable time step can be found. One is based on the frequency of a single degree-of-freedom (SDOF) spring-mass system [1, 82] leading to

\[ \Delta t_{crit} = \xi \sqrt{m/k} \tag{2.50} \]

where values of $\xi$ between 2 [1] and 0.14 [83] are used by different authors (cf. Eq. (2.21)). Since in the contact models applied in this study two different spring constants are used for the normal and tangential forces, the stiffer spring (typically $k_n$) determines the maximum time step. A time step larger than (2.50) may lead to undetected contacts or unphysically large particle overlaps, potentially resulting in instabilities of the simulation.

Since in the non-linear contact model the collision time $t_{c,n}$ (cf. Eq. (2.31)) depends on the impact velocity $v_0$, its value cannot be calculated a priori. It is thus sometimes argued that Eq. (2.50) is not applicable in this case. Hence, another approach is taken based on the Rayleigh wave speed

\[ \Delta t_{crit} = t_R = \frac{\pi R \sqrt{\rho/G}}{0.1631\,\nu + 0.8766} \tag{2.51} \]

as the majority of the radiated energy is transferred by Rayleigh and shear waves which have a similar speed [84–86]. (With respect to the coarse-grain model discussed below, we note that both approaches depend on particle size.)

Various studies [87–89] suggest that an actually conservative time step ∆t not only depends on density, size and elastic modulus as given, but is also influenced by a number of other factors such as particle shape, damping, coordination number, stresses or the degree of polydispersity. It is thus advisable to conduct some test simulations with different values for ∆tcrit prior to executing the targeted case.
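The two estimates can be compared directly; the sketch below evaluates Eq. (2.50) and Eq. (2.51) for a hypothetical 5 mm particle with an assumed safety factor ξ and an assumed linear stiffness.

```python
import numpy as np

def dt_spring(m, k, xi=0.2):
    """SDOF spring-mass estimate, Eq. (2.50); the safety factor xi is an assumption."""
    return xi * np.sqrt(m / k)

def dt_rayleigh(R, rho, G, nu):
    """Rayleigh-wave estimate, Eq. (2.51)."""
    return np.pi * R * np.sqrt(rho / G) / (0.1631 * nu + 0.8766)

# hypothetical 5 mm glass-like particle
rho, E, nu, R = 2500.0, 5e6, 0.25, 0.005
G = E / (2.0 * (1.0 + nu))
m = 4.0 / 3.0 * np.pi * rho * R**3
kn = 1.0e4                      # assumed linear normal stiffness
print(dt_spring(m, kn), dt_rayleigh(R, rho, G, nu))
```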


2.3 Coarse-grain model

We follow the coarse-graining approach outlined in [38–40, 90] whereby a number of equal particles is represented by an upscaled (pseudo) particle. To retain the physical behavior of the original system, the interaction forces have to be adjusted accordingly. This scaling is based on consistent energy density and energy density evolution in the original and coarse-scale system. Clearly, for a uniform effect of gravity, particle density ρ must not change and to conserve kinetic energy, particle velocities must be preserved. Under these assumptions, a dimensional analysis of Eqs. (2.3) and (2.4) yields the following invariant parameters:

\[ \Pi_1 = \frac{R_j}{R_i}, \qquad \Pi_2 = \frac{k_{n,t}}{R^* E^*}, \qquad \Pi_3 = \frac{\gamma_{n,t}}{R^{*2}\sqrt{\rho E^*}} \tag{2.52} \]

$\Pi_1$ is related to the geometric similarity and means that all particles need to be scaled with the same constant coarse-grain ratio $\alpha = R_{i,CG}/R_i$. A straightforward way to arrive at this conclusion is by inspecting the effective particle radius

\[ R^* = \frac{R_i R_j}{R_i + R_j} = \frac{R_i \frac{R_j}{R_i}}{1 + \frac{R_j}{R_i}} = \frac{R_i \Pi_1}{1 + \Pi_1} \tag{2.53} \]

which needs to scale with $\alpha$ just as $R_i$. Similarly, when looking at the effective mass

\[ m^* = \frac{m_i m_j}{m_i + m_j} = \frac{4\pi}{3}\rho_i R_i^3 \, \frac{\frac{\rho_j R_j^3}{\rho_i R_i^3}}{1 + \frac{\rho_j R_j^3}{\rho_i R_i^3}} = \frac{4\pi}{3}\rho_i R_i^3 \, \frac{\frac{\rho_j}{\rho_i}\Pi_1^3}{1 + \frac{\rho_j}{\rho_i}\Pi_1^3} \tag{2.54} \]

we can note that $m^* \propto \alpha^3$ since $\rho$ is assumed to remain constant. To obtain $\Pi_2$ and $\Pi_3$, we substitute the dimensionless mass $m^{*\prime} = m^*/\rho R_i^3$, overlap $\delta_n^{\prime} = \delta_n/R_i$, velocity $\dot{\delta}_n^{\prime} = \dot{\delta}_n/v_0$ and acceleration $\ddot{\delta}_n^{\prime} = \ddot{\delta}_n R_i/v_0^2$ with the reference velocity $v_0 = \sqrt{E^*/\rho}$ into Eq. (2.3) and arrive at

\[ m^{*\prime} \ddot{\delta}_n^{\prime} = \frac{k_n \delta_n^{\prime}}{R_i E^*} - \frac{\gamma_n \dot{\delta}_n^{\prime}}{R_i^2 \sqrt{\rho E^*}}. \tag{2.55} \]

The same procedure can be applied to Eq. (2.4) to obtain the dimensionless parameters related to the tangential stiffness and damping coefficients.

According to $\Pi_2$ and $\Pi_3$, the stiffness coefficients $k_{n,t}$ scale with $\alpha$ and the damping coefficients $\gamma_{n,t}$ scale with $\alpha^2$. While $\Pi_2$ is linked to the mechanical stiffness of the contact, $\Pi_3$ ensures that the dissipative behavior is unaffected by coarse-graining. The conditions in Eq. (2.52) are fulfilled by the coefficients given in Eqs. (2.24) and (2.33). Furthermore, from condition $\Pi_3$ it follows that the coefficient of restitution is invariant. Likewise, the Young's modulus and coefficient of friction are kept constant.

To preserve the rotational energy of the system, the angular velocity $\omega$ needs to be scaled with $\alpha^{-1}$. We find that the angular momentum – unlike the linear momentum – is not conserved with this scaling. The rolling friction model is retained unchanged in the present work. Furthermore, we note that as a side effect of increased particle radii in the coarse-grain model, the critical time step $\Delta t_{crit}$ becomes larger as well.

One aspect that is neglected by the coarse-graining procedure is that the collision frequency f is also affected. According to the kinetic theory of granular flows [91, 92] f is proportional to the number density of the particles squared and the particle diameter squared. Hence, f scales with α. Looking at the energy dissipation during a collision (cf. Eq. (2.22))

\[ \Delta E = \tfrac{1}{4} m \left( e^2 - 1 \right) \dot{\mathbf{x}}_{n,ij}^2 \tag{2.56} \]

one may derive a modified coefficient of restitution [93]

\[ e' = \sqrt{1 + \left( e^2 - 1 \right) \alpha}. \tag{2.57} \]

We note that Eq. (2.57) is only valid for a limited range of $e$ and $\alpha$, though. Alternatively, as a crude ad hoc correction, one might assume that when coarse-grained particles collide there should actually occur $\alpha^3$ collisions such that

\[ \left( \ln(e'') \right)^2 / \left( \ln(e) \right)^2 = \alpha^3 \tag{2.58} \]

and the modified coefficient of restitution reads

\[ e'' = e^{\alpha^{3/2}}. \tag{2.59} \]

This is similar to the work of Benyahia et al. [94]; however, their line of argument is based on the collisions between original particles that make up a coarse-grained particle, which is already considered in the above scaling of $k_{n,t}$ and $\gamma_{n,t}$.

Generally, when discussing the upscaling of the DEM it is necessary to keep in mind that the scaling of particles without simultaneously scaling the domain violates the principle of geometric similarity and introduces modeling errors [95].
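The scaling rules derived above can be collected in a small helper; the following Python sketch applies them for a given coarse-grain ratio α (with α³ restricted to integers, as later required by Eq. (2.60)) and optionally evaluates the modified coefficient of restitution of Eq. (2.57). Parameter values are illustrative.

```python
import numpy as np

def coarse_grain_scaling(alpha, R, kn, kt, gamma_n, gamma_t, omega, e):
    """Scale particle/contact parameters to a coarse-grain level with ratio alpha
    (sketch of the rules following Pi_1-Pi_3; density, velocity, E and mu unchanged)."""
    assert abs(round(alpha**3) - alpha**3) < 1e-9, "alpha^3 should be an integer, Eq. (2.60)"
    return {
        "R": alpha * R,                          # geometric similarity (Pi_1)
        "kn": alpha * kn, "kt": alpha * kt,      # stiffness ~ alpha (Pi_2)
        "gamma_n": alpha**2 * gamma_n,           # damping ~ alpha^2 (Pi_3)
        "gamma_t": alpha**2 * gamma_t,
        "omega": omega / alpha,                  # preserves rotational energy
        # optional dissipation correction, Eq. (2.57); only valid for a
        # limited range of e and alpha
        "e_mod": np.sqrt(1.0 + (e**2 - 1.0) * alpha)
                 if (e**2 - 1.0) * alpha > -1.0 else 0.0,
    }

print(coarse_grain_scaling(alpha=2.0, R=0.005, kn=1e4, kt=2857.0,
                           gamma_n=0.5, gamma_t=0.5, omega=10.0, e=0.9))
```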


2.4 Multi-level coarse-grain model

Concurrently coupling multiple coarse-grain levels makes it possible to combine the benefits of coarse- and fine-scale simulations and achieve an optimal balance between speedup and accuracy, especially when modeling errors due to the violation of geometric similarity become significant.

In our method, all coarse-grain levels are subject to equivalent external forces and make use of the same geometries where relevant to a level's domain. Establishing a linkage between different coarse-grain levels via an instantaneous one-to-one replacement of individual particles is not a viable option. It may, for instance, introduce artificial clusters of equal particles (as illustrated in Fig. 2.2) or excess energy through false overlaps due to geometric restrictions in the fine-scale simulation. Instead, we introduce a coupling scheme based on cell-averaged properties of the granular system. The passing of information between different coarse-grain levels is accomplished through a reconstruction step (coarse-scale to fine-scale) and a compression step (fine-scale to coarse-scale).

2.4.1 Reconstruction step

To facilitate the conservation of mass and particle size distribution we restrict the possible values of α to the set

\[ A = \left\{ \alpha \in \mathbb{R}_{>0} \;\middle|\; \alpha^3 \in \mathbb{N}_1 \right\} \tag{2.60} \]

such that each coarse-grained particle represents an integer number ($\alpha^3$) of fine-scale particles. At the boundary between the coarse- and fine-scale system, the coarse-scale conditions need to be transformed to consistent fine-scale realizations. To this end, we measure the mass flow rate, the particle velocities and size distribution of the coarse-grained particles at the boundary surface of the embedded fine-scale subdomain. For this purpose, we divide the surface into a grid of quadrilateral cells that are about three to ten coarse-grained diameters in size to form a mesh of representative area elements (RAE).

This 2-dimensional grid is extruded, i.e. its dimensionality is increased via translation, inwards along the surface normals to create a 3-dimensional region for insertion (cf. Fig. 2.3). The extrusion width depends on the particle size, characteristic velocity and insertion frequency.


Figure 2.2: In polydisperse systems, a one-to-one replacement of coarse grains (a) may lead to artificial clusters of equal fine-scale particles (b). This effect becomes more pronounced with increasing coarse-grain ratio. Instead, we assume a random spatial distribution inside an RVE (c).

Figure 2.3: Extrusion of a 2d surface-grid along the surface normals.

Each cell stores information about an ensemble of particles passing the surface-cell during a coupling interval. The particle velocity is mass-averaged in each surface-cell over a coupling period. Likewise, for the angular velocity, a moment of inertia weighted average is computed. For every coupling step, the data thus obtained is used to insert corresponding fine-scale particles through each extruded grid element by means of a simple sequential inhibition (SSI) algorithm [97]. The measured mass flow and size distribution determine the number and size of the fine-scale particles to insert. The initial velocities are set to the cell-averaged ensemble properties. Thereby, velocity fluctuations are neglected; however, a velocity distribution may be imposed if required.
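The insertion into one extruded grid cell can be illustrated with a minimal simple sequential inhibition (SSI) sketch: candidate positions are drawn uniformly in the cell and accepted only if they do not overlap any previously accepted particle. The cell dimensions, particle radius and counts below are made-up values.

```python
import numpy as np

def ssi_insert(n_particles, radius, box_lo, box_hi, max_tries=10000, seed=0):
    """Simple sequential inhibition: place up to n_particles non-overlapping
    spheres of equal radius in an axis-aligned box (one extruded RAE cell)."""
    rng = np.random.default_rng(seed)
    lo = np.asarray(box_lo, float) + radius      # keep spheres fully inside the cell
    hi = np.asarray(box_hi, float) - radius
    placed = []
    for _ in range(max_tries):
        if len(placed) == n_particles:
            break
        x = rng.uniform(lo, hi)
        if all(np.linalg.norm(x - p) >= 2.0 * radius for p in placed):
            placed.append(x)                     # accept: no overlap with earlier spheres
    return np.array(placed)

# e.g. insert fine-scale particles for one coupling step into a 3x3x1 cm cell
print(len(ssi_insert(20, 0.0015, [0, 0, 0], [0.03, 0.03, 0.01])))
```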

While for dilute flows the particle properties assigned at insertion are sufficient to create an appropriate fine-scale representation, additional matching information is needed in the dense regime. To this end, we introduce a 3-dimensional transition layer inside the fine-scale domain for both coupling partners. Similar to the boundary surface, this region is divided into a single layer of hexahedral cells, which can be used as representative volume elements (RVE) [98] to obtain further characteristic Eulerian properties of the granular system. Concretely, we have identified the macroscopic granular stress [99] composed of 'contact' stress and 'kinetic' stress contributions

\[ \boldsymbol{\sigma} = \frac{1}{V} \sum_i^{N_V} \left( \sum_{j \neq i} \frac{1}{2} \mathbf{r}_{ij} \mathbf{f}_{ij} + m_i \mathbf{v}_i' \mathbf{v}_i' \right) \tag{2.61} \]

and the resulting granular pressure $p = (\sigma_{xx} + \sigma_{yy} + \sigma_{zz})/3$ as an essential component for establishing proper boundary conditions for dense granular flows. Here, $V$ is the sample volume (the RVE in our case), $N_V$ is the number of particles in the volume, $\mathbf{r}_{ij}$ denotes the vector joining the centers of the particles, and $\mathbf{v}_i'$ is the deviation from the ensemble-averaged velocity.

As a simplification, we assume that the stress component stemming from the velocity fluctuations is negligible in the dense regime and an average stress tensor for a grain of average radius Rave and the volume Vave ≡ V /NV can be given by

\[ \boldsymbol{\sigma}^{ave} = \frac{R_{ave}}{V_{ave}} \sum_{j \neq i} \mathbf{n}_{ij} \mathbf{f}_{ij} \tag{2.62} \]

where $\mathbf{n}_{ij}$ denotes the unit vector pointing from the center of $i$ to the center of $j$. We concentrate on the normal stress components $\sigma_{kk}$ of each cell, from which we estimate the average normal stress per particle $\sigma_{kk}^{ave}$ as

\[ \sigma_{kk}^{ave} \approx \sigma_{kk}. \tag{2.63} \]

Considering Eq. (2.62) we may express $\sigma_{kk}^{ave}$ by introducing an equivalent surface force $\mathbf{f}_{n,i}$ acting on a single particle $i$ along the coordinate axes such that

\[ \sigma_{kk}^{ave} = \frac{R_{ave}}{V_{ave}} \hat{\mathbf{e}}_k \cdot \mathbf{f}_{n,i} \tag{2.64} \]

with $\hat{\mathbf{e}}_k$ denoting the unit vectors in x-, y- and z-direction. Rearranging Eq. (2.64) and taking the approximation (2.63) into account, this force may be calculated from the stress tensor as follows:

\[ \mathbf{f}_{n,i} = \frac{V_{ave}}{R_{ave}} \sum_{k=1}^{3} \sigma_{kk} \hat{\mathbf{e}}_k \tag{2.65} \]
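The following sketch shows how Eqs. (2.61)–(2.65) could be evaluated for the particles of one RVE cell; the input arrays (masses, velocities, branch vectors and contact forces) are hypothetical, and each physical contact is assumed to appear twice in the contact lists, mirroring the double sum in Eq. (2.61).

```python
import numpy as np

def rve_stress_and_force(V, m, v, pair_r, pair_f, R_ave):
    """Granular stress tensor (Eq. 2.61), pressure, and equivalent per-particle
    surface force (Eq. 2.65) for one RVE cell. Assumed inputs:
    m, v           - particle masses and velocities in the cell,
    pair_r, pair_f - branch vectors r_ij and contact forces f_ij, one entry per
                     (i, j) pair, i.e. each contact listed from both sides."""
    NV = len(m)
    v_mean = np.average(v, axis=0, weights=m)        # ensemble-averaged velocity
    vp = v - v_mean                                  # fluctuating part v_i'
    sigma = np.zeros((3, 3))
    for rij, fij in zip(pair_r, pair_f):             # 'contact' contribution
        sigma += 0.5 * np.outer(rij, fij)
    for mi, vpi in zip(m, vp):                       # 'kinetic' contribution
        sigma += mi * np.outer(vpi, vpi)
    sigma /= V                                       # Eq. (2.61)
    p = np.trace(sigma) / 3.0                        # granular pressure

    V_ave = V / NV
    f_n = V_ave / R_ave * np.diag(sigma)             # Eq. (2.65), per coordinate axis
    return sigma, p, f_n

# toy example: 4 particles, 2 contacts listed from both sides (hypothetical data)
m = np.array([1e-3] * 4)
v = np.array([[0.10, 0, 0], [0.12, 0, 0], [0.09, 0, 0], [0.11, 0, 0]])
pair_r = [np.array([0.01, 0, 0]), np.array([-0.01, 0, 0]),
          np.array([0, 0.01, 0]), np.array([0, -0.01, 0])]
pair_f = [np.array([-1.0, 0, 0]), np.array([1.0, 0, 0]),
          np.array([0, -0.8, 0]), np.array([0, 0.8, 0])]
print(rve_stress_and_force(V=1e-6, m=m, v=v, pair_r=pair_r,
                           pair_f=pair_f, R_ave=0.005))
```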

For each cell in the transition layer, we compute at each coupling step $t$ the force $\mathbf{f}_{n,i}(t)$ in the coarse-grain and the fine-scale particle system and thus arrive at the error that needs to be corrected in the fine-scale representation (Eq. (2.66)).

Figure 2.4: (a) Control force for pressure correction, $f_{n,corr,x}/u_x$ as a function of $\delta_x/R_{max}$. (b) Control force to create shear stress, $f_{t,yz}/u_{yz}$ as a function of $\delta_x/cell_x$.

In terms of control theory, the error value e(t) is defined as the difference between a desired set point (SP) and a measured process variable (PV)

\[ e(t) = SP - PV. \tag{2.67} \]

Feeding the error value into a discrete proportional-integral (PI) controller one obtains the control variable

\[ u(t_n) = K_c e(t_n) + \frac{K_c}{\tau_I} \sum_{i=1}^{n} e(t_i)\,\Delta t \tag{2.68} \]

with the controller gain $K_c$ and the integral time constant $\tau_I$. A higher value of $K_c$ or a lower value of $\tau_I$ makes the controller more aggressive. $\Delta t$ is the time between sampling instances and $n$ is the number of sampling instances. The first term on the right-hand side of Eq. (2.68) denotes the proportional contribution while the second term describes the integral part which continually sums up the error and can be thought of as a 'moving bias'.

In our case, we use at most three PI controllers per cell depending on the direction of the exposed cell faces. The set points and process values of these controllers correspond to the three appropriately scaled components of the average per-atom force $\mathbf{f}_{n,i,CG}$ and $\mathbf{f}_{n,i}$ in a cell as shown in Eq. (2.66). Then $\pm u(t)/2$ can be viewed as the correction forces $f_{n,corr,k}$ along the coordinate axes that need to be applied to the fine-scale particles to establish the desired pressure. The control forces point into the embedded subdomain if the pressure needs to be increased and out of the subdomain if the pressure is too high. In the latter case it is also possible to discard the corrections completely, since without any confinement the pressure will be relaxed automatically by an expansion of the particle configuration. For each particle, the control forces are limited such that the particle velocity will not exceed the maximum velocity in the target cell in any direction by a predefined percentage. Taking advantage of Newton's third law, it is sufficient to apply the force to the outermost layer of fine-scale particles only. For convenience, in our implementation the control forces act on grains within 3/2 of a fine-scale particle diameter from the domain boundary. To avoid jittering of particles at this point, the force is faded out smoothly as indicated in Fig. 2.4(a).
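A per-cell discrete PI controller in the sense of Eqs. (2.67)–(2.68) can be sketched as follows; the class name, gains, sampling interval and force values are assumptions, and one such controller would be kept per cell and coordinate direction.

```python
class CellPIController:
    """Discrete PI controller, Eq. (2.68): u = Kc*e + (Kc/tau_I)*sum(e)*dt.
    One instance per cell and coordinate direction (sketch with assumed gains)."""

    def __init__(self, Kc, tau_I, dt):
        self.Kc, self.tau_I, self.dt = Kc, tau_I, dt
        self.err_sum = 0.0                      # integral ('moving bias') term

    def update(self, setpoint, process_value):
        e = setpoint - process_value            # Eq. (2.67)
        self.err_sum += e * self.dt
        return self.Kc * e + self.Kc / self.tau_I * self.err_sum

# correction force along x for one cell: SP = scaled coarse-grain force component,
# PV = measured fine-scale force component (hypothetical numbers)
ctrl_x = CellPIController(Kc=0.5, tau_I=0.01, dt=1e-3)
u = ctrl_x.update(setpoint=2.0e-3, process_value=1.4e-3)
f_corr_x = 0.5 * u                              # +/- u/2 applied as correction force
print(f_corr_x)
```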

The transfer of the shear components of the stress tensor is neglected in the present work. However, shear stresses might be introduced in a similar way as the normal components by applying correcting forces in the transition region which act parallel to the shear direction and are gradually decreased along the normal of the boundary surface (cf. Fig. 2.4(b)).

2.4.2 Compression step

As illustrated in Fig. 2.5(a), by applying the same principles for the compression step, a symmetric coupling scheme could be established. This means removal of coarse-grained particles inside the fine-scale domain along with reinsertion according to the mass flow of the fine-scale particles. This procedure obviously introduces some overhead, especially when considering the necessity for computationally expensive neighbor-list rebuilds triggered by particle insertion in the DEM. What is more, as opposed to the insertion of fine-scale particles, determining appropriate insertion locations and times may become non-trivial for coarse-grained particles. It can easily happen that for a single coupling interval the measured mass flow and size distribution through a boundary surface element cannot be represented accurately by coarse-grained particles, e.g. only half of the fine-scale particles represented by a coarse-grain particle have crossed the boundary. A complex gathering step across cells and extensive book-keeping of fine-scale particles may be required to guarantee strict conservation of mass.

Figure 2.5: (a) Symmetric and (b) asymmetric coupling scheme. The solid and dashed rectangles mark the boundaries of the coarse-grained and the embedded simulation, respectively. The filled cells indicate the cell layers used as transition region (light gray: reconstruction; dark gray: compression). The empty cells in (b) are used for coarse-grain corrections.

To avoid these potential difficulties, we propose an asymmetric coupling method as depicted in Fig. 2.5(b). This approach preserves the coarse-grained particles even in the fine-scale subdomain and applies corrections based on the fine-scale data to ensure an appropriate overall behavior of the coarse-grain system. Analogous to the transition region of the reconstruction step, we introduce a hexahedral grid inside the fine-scale domain to obtain volume-averaged particle properties. Again, this data is passed to the coarse-scale simulation using a controller mechanism. Which property of the coarse-grain system requires corrections depends on the decisive characteristics of the system under investigation. Typically, either mass flow rate or particle velocity needs to be adjusted. A simple proportional controller is sufficient in this case. For the mass flow rate, the particle velocity of the coarse-grain representation needs to be increased by a factor corresponding to the volume fraction mismatch $\varepsilon/\varepsilon_{CG}$ between coarse-grain and fine-scale simulation.
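For the mass flow rate correction, a minimal proportional-controller sketch might look as follows; the gain and the volume fractions are placeholder values.

```python
def corrected_cg_velocity(v_cg, eps_fine, eps_cg, gain=1.0):
    """Rescale the coarse-grain particle velocity in a correction cell so that the
    CG mass flow rate approaches the fine-scale one (proportional controller sketch).
    eps_fine / eps_cg is the volume-fraction mismatch between the two levels."""
    target = v_cg * eps_fine / eps_cg        # velocity reproducing the fine-scale mass flow
    return v_cg + gain * (target - v_cg)     # proportional step towards the target

print(corrected_cg_velocity(v_cg=0.20, eps_fine=0.55, eps_cg=0.50))
```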

In situations where coarse-grained particles are comparable to or even larger than the spatial constriction, reducing the coarse-grained particle size locally while retaining the mass serves as an alternative correction mechanism. Due to the errors caused by the violation of geometric similarity in the CG model and the interdependence of certain flow characteristics, corrections of different flow properties may be mutually exclusive. Hence, it may only be possible to shift the error to a different property instead of fully resolving the defect.

Since the corrections applied to the coarse-grain simulation are tailored to an accurate overall behavior, any properties that are concomitantly affected by these adjustments may become erroneous locally. Thus, at any given location, the analysis of data must keep the applied corrections in mind and should be performed using the highest resolution available.

Furthermore, we note that corrections in the asymmetric coupling approach may become inadequate if the volume-averaged fine-scale properties fail to describe the system behavior appropriately. For instance, this situation may occur if multiple velocity directions within a single cell become a decisive factor or coarse-grained particles cannot follow their fine-scale counterparts properly due to dispersion effects. In such situations, the limits of the CG model in representing the original system are reached and the symmetric coupling scheme outlined above may prove the better alternative.


2.5 Fragmentation model

Particle fragmentation models applicable to spherical particles can be basically divided into two different groups: the bonded particle model (BPM) [100] and the particle replacement model (PRM) [101].

In the BP approach a breakable body is represented by an assembly of (spherical) particles which are interconnected via restraining bonds. Broken bonds that may evolve into macroscopic cracks and fractures reflect the actual damage of the body. The PR methods replace the original particle with fragments only at the moment of breakage. Any stress distribution inside a grain is disregarded in this model. Instead, the magnitude of contact forces stemming from surrounding bodies is used as breakage criterion or breakage probability [102–106]. The PRM is computationally less demanding since no bonds need to be updated and the total number of spheres is generally lower.

Hence, we chose to apply the latter model in this study as we strive for computationally cheap methods that can be applied to large particle systems. Essentially, our implementation follows the description of Bruchmüller et al. [105] who use the impact energy as breakage condition. We extended the model to also work with a von Mises stress and a maximum force breakage criterion much like it has been reported by, for instance, Esnault et al. [106].

2.5.1 Breakage criteria and breakage probability

Depending on the scenario, different processes may trigger the actual grain fracture. This includes high energy impacts where the impact energy exceeds the breakage energy as well as stress events exceeding a specific compressive stress threshold [101].

For any of these cases, a breakage probability (or selection function) may be formulated in a similar fashion using the approach outlined by Vogel and Peukert [102, 103]. On the one hand, Vogel and Peukert build upon the work of Rumpf [107] who applied a dimensional analysis to arrive at a similarity law of fracture mechanics. Rumpf comes to the conclusion that the breakage for particles of different material, size and variable flaw distribution can be described with the same probability function if it is of the form:

\[ S = f\left\{ \frac{W_v}{E}, \frac{W_v d}{\beta_{max}}, \frac{v_{fract}}{v_{el}}, \frac{v_d}{v_{el}}, \frac{l_i}{d}, \nu \right\} \tag{2.69} \]

with $W_v$ denoting the energy absorbed per unit of volume, the Young's modulus $E$, the original particle size $d$, the maximum fracture energy $\beta_{max}$, the fracture velocity (crack propagation) $v_{fract}$, the displacement velocity $v_d$, the maximum speed of elastic waves $v_{el} = \sqrt{E/\rho}$, the length of initial cracks $l_i$ and the Poisson ratio $\nu$. Vogel and Peukert argue that $v_{fract}/v_{el}$ and $v_d/v_{el}$ may be neglected since these ratios are usually small in typical industrial applications, hence Eq. (2.69) may be reduced to

\[ S = f\left\{ \frac{W_v}{E}, \frac{W_v d}{\beta_{max}}, \frac{l_i}{d}, \nu \right\}. \tag{2.70} \]

On the other hand, Vogel and Peukert draw upon the fracture mechanical considerations of Weichert [108] who used Weibull statistics [109] to describe the failure probability in comminution processes. Weibull’s ‘worst flaw’ (or ‘weakest link’) theory describes the probability S that a chain made up of z links of strength σS fractures under a certain load σ:

\[ S = 1 - \exp\left\{ -z \left( \frac{\sigma}{\sigma_S} \right)^m \right\} \tag{2.71} \]

Here, $m$ denotes the Weibull parameter. Taking the load $\sigma$ as stress it is possible to express the ratio $\sigma/\sigma_S$ in terms of velocities:

\[ \frac{\sigma}{\sigma_S} = \left( \frac{v^2}{v_S^2} \right)^{1/5} \tag{2.72} \]

The number of links $z$ may be related to the diameter $a$ of the contact area which can be derived from contact mechanics and in the case of Hertzian contacts is of the form

\[ a = \mathrm{const.} \; d \left[ \frac{\rho v^2}{E^*} \right]^{1/5}. \tag{2.73} \]

Thus, one can write

\[ S = 1 - \exp\left\{ -\mathrm{const.} \; d \left[ \frac{\rho v^2}{E^*} \right]^{1/5} \left[ \frac{v^2}{v_S^2} \right]^{m/5} \right\} \tag{2.74} \]

and by introducing the mass specific impact energy $W_{m,kin} = v^2/2$ and using the Weibull exponent $m = 4$ obtained from experiments, Vogel and Peukert arrive at

\[ S = 1 - \exp\left\{ -\mathrm{const.} \; d \left[ \frac{\rho v_S^2}{2E^*} \right]^{1/5} \left[ \frac{W_{m,kin}}{v_S^2/2} \right] \right\}. \tag{2.75} \]


All material-dependent constants are combined into a material parameter $f_{Mat}$, and a minimum energy threshold $W_{m,min}$ is introduced below which no breakage occurs:

\[ S = 1 - \exp\left\{ -f_{Mat} \, d \left( W_{m,kin} - W_{m,min} \right) \right\} \tag{2.76} \]

For multiple successive impacts $k$ Vogel and Peukert propose the probability

\[ S = 1 - \exp\left\{ -f_{Mat} \, d \, k \left( W_{m,kin} - W_{m,min} \right) \right\}. \tag{2.77} \]

The material parameter $f_{Mat}$ may be determined from drop weight tests (DWT) similar to those carried out at the Julius Kruttschnitt Mineral Research Centre (JKMRC) [110].

Bruchmüller et al. [105] picked up on Eq. (2.77) and modified it to take into account the accumulated damage after a number of impacts of different specific impact energies $E_i$ with an energy threshold $E_0$:

\[ S = 1 - \exp\left\{ -f_{Mat} \, d \sum_i \left( E_i - E_0 \right) \right\} \tag{2.78} \]

Esnault et al. [106] followed a similar approach in their work applying a maximum contact force criterion for the breakage probability

\[ S = 1 - \exp\left\{ -\left( \frac{d}{d_0} \right)^3 \left( \frac{d_0^2 F}{d^2 F_0} \right)^m \right\}. \tag{2.79} \]

Alternatively, also a von Mises stress criterion is proposed in [106], defined as

\[ S = 1 - \exp\left\{ -\left( \frac{d}{d_0} \right)^3 \left( \frac{\sigma_v}{\sigma_{v,0}} \right)^m \right\} \tag{2.80} \]

with the von Mises (or equivalent tensile) stress

\[ \sigma_v = \sqrt{ \tfrac{1}{2} \left[ \left(\sigma_{11} - \sigma_{22}\right)^2 + \left(\sigma_{22} - \sigma_{33}\right)^2 + \left(\sigma_{33} - \sigma_{11}\right)^2 + 6\left(\sigma_{23}^2 + \sigma_{31}^2 + \sigma_{12}^2\right) \right] } \tag{2.81} \]

where the grain stress tensor $\boldsymbol{\sigma}$ is given as

\[ \boldsymbol{\sigma} = \frac{R_i}{V_i} \sum_{j \neq i} \mathbf{n}_{ij} \mathbf{f}_{ij}. \tag{2.82} \]


In these criteria, the factors $(d/d_0)^3$ and $d^2$ account for the scaling with the stressed volume and the surface area the force is applied to, respectively. For this work, Eqs. (2.78) to (2.80) have been implemented in the simulation framework, providing the possibility to check for any combination of these three breakage criteria.
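A sketch of the three implemented criteria, Eqs. (2.78)–(2.80), is given below; the material constants are placeholders, and the clipping of sub-threshold impact energies is an assumption of this illustration.

```python
import numpy as np

def breakage_probability_energy(d, impact_energies, f_mat, E0):
    """Accumulated-damage impact-energy criterion, Eq. (2.78)."""
    acc = sum(max(Ei - E0, 0.0) for Ei in impact_energies)  # ignore sub-threshold hits (assumption)
    return 1.0 - np.exp(-f_mat * d * acc)

def breakage_probability_force(d, F, d0, F0, m):
    """Maximum contact force criterion, Eq. (2.79)."""
    return 1.0 - np.exp(-(d / d0)**3 * (d0**2 * F / (d**2 * F0))**m)

def breakage_probability_von_mises(d, sigma_v, d0, sigma_v0, m):
    """Von Mises stress criterion, Eq. (2.80)."""
    return 1.0 - np.exp(-(d / d0)**3 * (sigma_v / sigma_v0)**m)

# hypothetical 12 mm pellet hit twice with 0.8 and 1.5 J/kg specific impact energy
print(breakage_probability_energy(0.012, [0.8, 1.5], f_mat=2.0, E0=0.5))
```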

2.5.2 Breakage function and breakage index

With the breakage probability at hand, it would be possible at this point to replace the original particle with a predefined configuration of fragments that are packed into the volume of the original sphere as, for example, demonstrated by Esnault et al. [106] and Ciantia et al. [111, 112]. However, this simplification violates mass conservation and typically does not represent the actual size distribution of the progeny very well (or is tuned for just a single breakage scenario).

A generally valid breakage function which denotes the size distribution of the fragments is difficult to derive. However, to get a more realistic particle size distribution (PSD) which also adapts to the severity of the breakage it can be argued that – analogous to the above discussion of the breakage probability – the fragment size distribution is similar if the breakage conditions are similar.

Hence, in the case of the impact energy criterion one can introduce the breakage index as [110, 113]
$$t_{10} = A \left[1 - \exp\left\{-f_{Mat}\, d \sum_i \left(E_i - E_0\right)\right\}\right] \qquad (2.83)$$
and analogously for the other breakage criteria. The breakage index $t_{10}$ denotes the cumulative mass fraction of the progeny that is smaller than 1/10 of the parent particle size. $A$ is the maximum achievable $t_{10}$ in a single breakage event and typically lies in the range of 0 (cleavage into a small number of larger fragments) to 0.5 (attrition-like fragmentation).

We note that Eq. (2.83) is quite similar to the standard equation applied at the JKMRC [110], which is widely used to predict the severity of breakage for the specific comminution energy $E_{cs}$:
$$t_{10} = A \left[1 - \exp\left\{-b\, E_{cs}\right\}\right] \qquad (2.84)$$
That is, $b$ can be related to $f_{Mat}\, d$, making the existing database of material parameters usable for the present model. Narayanan and Whiten [113] found that the breakage index $t_{10}$ is uniquely related to other points on a family of size distribution curves, $t_n$.


Figure 2.6: Determination of size distribution parameter $t_n$ from breakage index $t_{10}$.

Analogous to $t_{10}$, the $t_n$ curves define the cumulative mass percentage of particles with a size smaller than a certain fraction of the parent particle size, $d/n$. The relation between $t_{10}$ and $t_n$ is defined as [114]
$$t_n = 1 - \left(1 - t_{10}\right)^{\frac{10-1}{n-1}}. \qquad (2.85)$$

Figure 2.6 depicts a selection of tn-family curves which have been confirmed for a

large variety of fracture energies.
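Equation (2.85) is straightforward to evaluate; the short sketch below reproduces the $t_n$ values for the curves shown in Figure 2.6 for an assumed breakage index of $t_{10} = 0.3$ (an illustrative value, not a result from this work).

```python
def t_n(t10, n):
    """Cumulative mass fraction of fragments below d/n, Eq. (2.85)."""
    return 1.0 - (1.0 - t10) ** ((10.0 - 1.0) / (n - 1.0))

t10 = 0.3  # illustrative breakage index
for n in (2, 4, 10, 25, 50, 75):
    print(f"t{n} = {t_n(t10, n):.3f}")
```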

Once the PSD is known, fragments of different size can be generated. To this end we look at the mass reservoir for a certain size range, e.g. $t_2$–$t_4$, start with the maximum size of that range ($d/2$) and reduce the mass pool accordingly. We repeat this as long as the mass pool is large enough. Then we reduce the fragment size by 0.1% until we can generate new fragments from the mass left in the reservoir or we reach the maximum fragment size of the next size range ($d/4$). We do this for each size range until we get to a (predefined) minimum fragment size. At the end of this procedure we add one more fragment that violates the size distribution to fulfill mass conservation.
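The procedure described above can be summarized in the following sketch; the reservoir bookkeeping follows the verbal description, while the function names and the handling of the closing fragment are simplified assumptions rather than the framework's actual implementation.

```python
import math

def sphere_mass(d, rho):
    """Mass of a spherical fragment of diameter d and density rho."""
    return rho * math.pi * d**3 / 6.0

def generate_fragments(d_parent, rho, t, n_values, d_min, shrink=0.999):
    """Generate fragment diameters from the t_n size distribution.

    t        -- dict mapping n -> t_n, the cumulative mass fraction below d_parent/n
    n_values -- ascending n values bounding the size ranges, e.g. [2, 4, 10, 25, 50, 75]
    d_min    -- minimum fragment diameter considered
    """
    m_parent = sphere_mass(d_parent, rho)
    fragments, m_used = [], 0.0
    for n_top, n_bot in zip(n_values, n_values[1:]):
        # mass reservoir of the size range (d_parent/n_bot, d_parent/n_top]
        pool = (t[n_top] - t[n_bot]) * m_parent
        d_frag = d_parent / n_top          # start with the largest size of this range
        d_next = d_parent / n_bot          # maximum fragment size of the next range
        while d_frag > d_next and d_frag > d_min:
            m_frag = sphere_mass(d_frag, rho)
            if m_frag <= pool:             # pool still large enough for this size
                fragments.append(d_frag)
                pool -= m_frag
                m_used += m_frag
            else:
                d_frag *= shrink           # reduce the fragment size by 0.1 %
    # close the mass balance with one final fragment (it may violate the PSD)
    m_rest = m_parent - m_used
    if m_rest > 0.0:
        fragments.append((6.0 * m_rest / (math.pi * rho)) ** (1.0 / 3.0))
    return fragments
```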

The fragments can be subject to fragmentation themselves, but to avoid the increasing computational efforts caused by the introduction of small fragments, it is reasonable to define a minimum size below which no further breakage occurs.

2.5.3 Momentum and energy conservation

The fracture of a particle involves a transformation of energy and momentum. Part of the momentum of the particle goes into the formation of cracks, hence the momentum of the fragments after breakage is supposed to be smaller than the initial momentum of the parent particle. In this respect, Bruchmüller et al. [105] propose a momentum factor $e_{MF}$ defined as
$$e_{MF} = -\frac{v_{cp,ab}}{v_{pp,bi}} \qquad (2.86)$$
where $v_{cp,ab}$ denotes the velocity of the child particles after breakage and $v_{pp,bi}$ is the velocity of the parent particle before impact.

Furthermore, the packing of the fragment spheres into the volume of the parent particle results in unphysical overlaps (cf. chapter 3) thus introducing artificial elastic energy. This excess energy needs to be compensated by the contact model. To this end, the elastic energy associated with the geometrical overlaps of the packing is compared to the physically correct elastic energy which can be determined from the energy balance of the parent particle and the progeny. From this comparison a collision factor CF can be derived to scale the binary overlaps between the siblings within the progeny such that the fragments exhibit a physically correct behavior [105]. Once the initial contact between two siblings has ended, they interact like any other particles in the system.
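A minimal sketch of how the momentum factor of Eq. (2.86) might be applied to the progeny is given below; assigning the same post-breakage velocity to every fragment is an assumption made for this example and not a statement about the model of Bruchmüller et al. [105].

```python
import numpy as np

def child_velocities(v_parent_before_impact, n_children, e_mf):
    """Post-breakage velocities according to Eq. (2.86), v_cp,ab = -e_MF * v_pp,bi;
    in this sketch the same velocity is assigned to every fragment."""
    v_child = -e_mf * np.asarray(v_parent_before_impact, dtype=float)
    return np.tile(v_child, (n_children, 1))

# Example: four fragments of a parent that impacted with 2.5 m/s in -z direction.
print(child_velocities([0.0, 0.0, -2.5], n_children=4, e_mf=0.3))
```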

2.5.4 Coarse-grain model

In Eqs. (2.78) to (2.80) we can clearly observe the size dependence of the breakage probability, i.e. larger particles typically exhibit a higher breakage frequency. Hence, upscaling particles in the CG model leads to a different fragmentation behavior than in the original system. In further consequence, it is necessary to adapt Eqs. (2.78) to (2.80) such that the scaled and unscaled systems exhibit the same breakage probability:
$$S = 1 - \exp\left\{-f_{Mat}\, \frac{d}{\alpha} \sum_i \left(E_i - E_0\right)\right\} \qquad (2.87)$$
$$S = 1 - \exp\left\{-\left(\frac{d}{\alpha d_0}\right)^3 \left(\frac{d_0^2 F}{d^2 F_0}\right)^m\right\} \qquad (2.88)$$
$$S = 1 - \exp\left\{-\left(\frac{d}{\alpha d_0}\right)^3 \left(\frac{\sigma_v}{\sigma_{v,0}}\right)^m\right\} \qquad (2.89)$$
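To illustrate the effect of coarse-graining on the energy-based criterion, Eq. (2.87) can be evaluated as sketched below; reading $\alpha$ as the coarse-grain factor and $d$ as the diameter of the CG particle is an interpretation of the notation made for this example.

```python
import math

def breakage_probability_cg(f_mat, d_cg, alpha, impact_energies, e0):
    """Energy-based breakage probability of a coarse-grained particle, Eq. (2.87).
    Dividing the CG diameter d_cg by the coarse-grain factor alpha recovers the
    original particle size, so the scaled and unscaled systems break alike."""
    acc = sum(max(e - e0, 0.0) for e in impact_energies)
    return 1.0 - math.exp(-f_mat * d_cg / alpha * acc)
```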


2.6 Unresolved CFD-DEM

Thus far, the discussion on particulate flow in this treatise mainly revolved around interparticle contact forces. However, the flow behavior of granular matter is also affected by the surrounding medium. While this influence may be negligible in certain cases, the surrounding fluid (gas or liquid) plays a decisive role for applications such as fluidized/packed bed reactors or pneumatic conveying. Hence, more generally, granular flow may be regarded as a two-phase system consisting of a disperse solid phase and a continuous fluid phase.

Although computationally less expensive two-fluid models (TFM) [92, 115, 116] may be used to investigate such systems by treating the particles as a continuous solid phase, they require assumptions to be made about the solids rheology [47]. These closure relations are not necessary when using the DEM, where interaction forces for each particle can be calculated directly. Furthermore, it is difficult to model particle size distributions or different particle types that should be treated as different phases within the framework of the TFM. Hence, our preferred approach is the combination of the DEM with Computational Fluid Dynamics (CFD).

In CFD-DEM simulations one may differentiate between resolved and unresolved CFD-DEM simulations. In the former approach, the computational cells are substantially smaller than a particle such that the fluid flow around the particle is resolved in detail. This also means that the fluid-solid interaction forces obtained are specific to each particle. In the unresolved CFD-DEM method the size ratio is just the opposite, with particles being significantly smaller than a CFD cell. In this case, the forces exerted on each particle by the surrounding fluid are calculated first. Then the forces acting at the cell scale are computed as the sum of the individual particle forces in a cell. Due to the lower number of cells involved, the unresolved CFD-DEM method is computationally less demanding than the resolved version and thus better suited for large-scale simulations.
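The cell-wise summation of the particle forces mentioned above can be sketched as follows; the data layout and the assumption that the cell index of each particle is already known are simplifications made for this example.

```python
import numpy as np
from collections import defaultdict

def cell_force_sums(particle_cells, particle_forces):
    """Sum the per-particle fluid-solid interaction forces over each CFD cell.

    particle_cells  -- iterable of cell indices, one per particle
    particle_forces -- (N, 3) array of forces exerted by the fluid on the particles
    Returns a dict mapping cell index -> total interaction force in that cell,
    whose reaction enters the fluid momentum balance as a source term.
    """
    sums = defaultdict(lambda: np.zeros(3))
    for cell, force in zip(particle_cells, np.asarray(particle_forces, dtype=float)):
        sums[cell] += force
    return dict(sums)

# Example with three particles in two cells (illustrative numbers):
print(cell_force_sums([0, 0, 1], [[0.0, 0.0, -1e-4], [0.0, 0.0, -2e-4], [1e-5, 0.0, 0.0]]))
```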

2.6.1 Governing equations

The fluid phase is described via the continuity equation and the Navier-Stokes equation, which for single-phase systems read
$$\frac{\partial \rho_f}{\partial t} + \nabla \cdot (\rho_f \mathbf{u}) = 0 \qquad (2.90)$$
$$\frac{\partial}{\partial t}(\rho_f \mathbf{u}) + \nabla \cdot (\rho_f \mathbf{u} \otimes \mathbf{u}) = -\nabla p + \mu_f \nabla^2 \mathbf{u} + \frac{1}{3}\mu_f \nabla(\nabla \cdot \mathbf{u}) + \rho_f \mathbf{g} \qquad (2.91)$$
