
3.5 Enhanced Probability of Improvement Method

3.5.4 Constrained Problems

Carpio, Giordano, and Secchi, 2018 extended the original formulations to also cover inequality constraints of the form g_j(u, x) ≤ 0 in a fully probabilistic fashion. A Gaussian Process Regression (GPR) is again applied to generate a surrogate model ĝ_j(u) of the inequality constraints. The probability of fulfilling a constraint is thus computed by Equation 3.11. Finally, the probability of improvement for a constrained problem, to be maximized in the optimization sub-problem, is obtained by multiplying the probability of improvement of the unconstrained problem by the probability of fulfilling each constraint, as given by Equation 3.12.

PC_j(u) = \phi\left( \frac{0 - \hat{g}_j(u)}{s_{c,j}(u)} \right)    (3.11)

PI_c(u) = PI(u) \cdot \prod_{j=1}^{n} PC_j(u)    (3.12)

In Equations 3.11 and 3.12:

PC_j(u) is the probability of fulfilling constraint j
ĝ_j(u) is the surrogate model of constraint j
s_{c,j}(u) is the standard error of constraint j
φ is the standard normal cumulative distribution function
PI_c(u) is the probability of improvement for the constrained problem

3.5.5 Handling Failed Simulations

One question that arises when applying this method to optimize black-box simulations is what to do if simulations fail or do not converge. Simulations may fail due to physical issues, e.g., flash calculations failing to converge; numerical issues, e.g., an insufficient number of iterations; and/or software-related issues, e.g., crashes due to lack of Random-Access Memory (RAM). It is extremely difficult to formulate analytical constraint expressions for this behavior at the optimization layer.

The problem is mentioned by Carpio, Giordano, and Secchi, 2018, but no clear solution is provided. In personal communication with the authors, it was clarified that a large objective function value is returned to the optimization algorithm if a simulation fails. This creates peaks of low PI in areas of non-convergence, and thus the algorithm avoids these areas in the following iterations.

This is a frequent solution in black-box optimization problems because it is straightforward to implement, but it has downsides.

1. It is hard to determine, a priori, how large the objective value should be when a simulation fails. This has a direct impact on the construction of the surrogate model and, hence, on the function and standard error predictions between the converged and non-converged samples.

2. For the optimization layer, it is impossible to determine whether the simulation failed because the sample point is indeed infeasible or simply because of bad convergence behavior. Process simulators also typically rely on previous results for initialization, so that a non-converging area may become convergent after more solutions are placed around it.

Assigning arbitrarily high values to the objective function on non-convergence is detrimental, because it may lead the algorithm to overlook points in these locations and miss an optimum at the intersection of these unformulated convergence constraints. A practical example is an optimum located near or at the maximum feasible stream recycle ratio, wherein convergence is usually challenging. There is thus the need to develop a more suitable yet practical methodology to deal with failed simulations in black-box optimization.

In this work, a data-driven modeling approach is proposed and applied to map convergence behavior. A classification method is selected for this purpose, due to its simplicity and suitability.

The k-Nearest Neighbor Classication Method

The k-Nearest Neighbor (kNN) algorithm is a simple method frequently utilized in pattern recognition and classification problems. It computes the probability of an unknown sample belonging to a certain class simply by looking at the classes to which its k nearest neighbors belong. The kNN classifier from scikit-learn has been used (Pedregosa et al., 2011).

This simple two-class rule can be applied to classify the known simulations into converged and non-converged and then to compute the probability of a new simulation belonging to the converged class at a new sample point. The probability is the distance-weighted average of the classes of its k nearest neighbors, as given in Equation 3.13. For example, if k = 2 and the two nearest points are equidistant and belong to the classes converged (c = 1) and non-converged (c = 0) respectively, then the probability that the new sample belongs to the converged class is 50 %.

P_\mathrm{conv} = P(c(u) = 1) = \frac{1}{\sum_{j=1}^{k} d_j} \cdot \sum_{j=1}^{k} d_j \cdot c_j    (3.13)

In Equation 3.13:

P_conv or P(c(u, x) = 1) is the probability that the sample at (u, x) belongs to the converged class (c = 1)
d_j is the distance between the sample (u, x) and sample j
c_j is the classification of sample j
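The distance-weighted class probability in Equation 3.13 can be sketched in a few lines of Python. This is a minimal hand-rolled illustration with variable names of my own choosing; the thesis itself uses the scikit-learn kNN classifier rather than this version. Note that, following Equation 3.13 as printed, the labels are weighted by the distances d_j themselves (a conventional distance-weighted vote would use inverse distances).

```python
import math

def prob_converged(u, samples, k=3):
    """Probability that a simulation at point u converges (Eq. 3.13).

    samples: list of (point, c) pairs, where c = 1 marks a converged
    simulation and c = 0 a failed one.
    """
    # Euclidean distance from u to every known sample point
    dist = [(math.dist(u, p), c) for p, c in samples]
    dist.sort(key=lambda t: t[0])
    nearest = dist[:k]
    # Weighted average of the class labels c_j, weights d_j as in Eq. 3.13
    total = sum(d for d, _ in nearest)
    return sum(d * c for d, c in nearest) / total

# The worked example from the text: k = 2, two equidistant neighbors,
# one converged (c = 1) and one failed (c = 0) -> probability 50 %
samples = [((0.0, 1.0), 1), ((0.0, -1.0), 0)]
print(prob_converged((0.0, 0.0), samples, k=2))  # -> 0.5
```
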

The probability of convergence can be integrated into Equation 3.12, hence obtaining a new objective function for the PI maximization sub-problem, as given in Equation 3.14. This approach leaves the original construction of the surrogate model and constraints intact, avoiding the introduction of artificial and arbitrarily high peaks. It dampens the PI auxiliary function close to known failed simulations. Nonetheless, if more simulations converge around this area, the probability of convergence will increase and hence, if PI and PC_j are also high, the solver may re-visit these areas by placing a sample there.

PI_c(u) = P_\mathrm{conv}(u) \cdot PI(u) \cdot \prod_{j=1}^{n} PC_j(u)    (3.14)
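Equation 3.14 is a plain product of scalar probabilities, each evaluated at a candidate point u. A minimal sketch under stated assumptions: the surrogate predictions and standard errors are passed in as plain numbers (in the actual framework they come from the GPR models), and `normal_cdf` implements the φ of Equation 3.11.

```python
import math

def normal_cdf(x):
    # Standard normal cumulative distribution function, phi in Eq. 3.11
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def constrained_pi(pi_u, p_conv, g_hat, s_c):
    """Eq. 3.14: PI_c(u) = P_conv(u) * PI(u) * prod_j PC_j(u).

    pi_u   : unconstrained probability of improvement PI(u)
    p_conv : probability of convergence from the kNN classifier
    g_hat  : surrogate predictions g^_j(u) of each constraint
    s_c    : standard errors s_c,j(u) of each constraint prediction
    """
    pi_c = p_conv * pi_u
    for g, s in zip(g_hat, s_c):
        # Eq. 3.11: probability that constraint g_j(u) <= 0 is fulfilled
        pi_c *= normal_cdf((0.0 - g) / s)
    return pi_c

# A constraint predicted right at its boundary (g^ = 0) contributes a
# factor of 0.5, halving the constrained PI:
print(constrained_pi(pi_u=0.8, p_conv=1.0, g_hat=[0.0], s_c=[0.1]))  # -> 0.4
```

A failed-simulation neighborhood (p_conv near 0) drives PI_c toward zero regardless of how promising the surrogate prediction is, which is exactly the damping effect described above.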

The choice of k, i.e., how many points should be considered in Equation 3.13, is the only decision that must be taken. While reasonable results have already been obtained with a default value of k = 3, Kung, Lin, and Kao, 2012 suggest that the optimal value of k is given by Equation 3.15.

k_\mathrm{optimal} = 2 \cdot n^{1/d}    (3.15)

In Equation 3.15:

k_optimal is the optimal value for k
n is the number of sample points
d is the number of input/design variables, i.e., the dimension of u

3.5.6 Process Optimization Case

To test and illustrate the functionality of the framework with a simpler Aspen Plus model, an optimization problem using the Hydrodealkylation (HDA) process is solved using the different algorithms described. The simulation flowsheet and reference design conditions for the HDA process modeled in Aspen Plus using the PENG-ROB property package are given in Figure 3.6. Hydrogen is fed together with toluene and two recycle streams into a regeneration heat-exchanger (HEATX) and a furnace upstream of the reactor. The reactor is modeled as a kinetic isothermal Plug-Flow Reactor (PFR) including the main reaction and a side reaction forming an undesired heavy product biphenyl (diphenyl), as seen in Equations 3.16 and 3.17. The reaction product is cooled and flashed, with the lights cut containing H2 and CH4 being partially recycled and the heavies cut continuing to the distillation train. A stabilizer column (STABILIZ) removes the remaining lights at the top; the product column (COLUMN1) recovers the main product benzene at the top; and the diphenyl column (COLUMN2) removes the by-product biphenyl at the top and recovers unreacted toluene at the bottom, which is completely recycled. Design-Specs are formulated as hidden constraints within the simulation in order to maintain a toluene conversion of 75 % by manipulating the reactor's length, and also for product purities and recoveries within the distillation train by manipulating the reflux and boil-up ratios in the columns.


Figure 3.6: Simulation Flowsheet of the HDA Process. Main process streams (black), light gases streams (blue), heavies streams (purple).

\mathrm{C_6H_5CH_3 + H_2 \longrightarrow C_6H_6 + CH_4}    (3.16)

2\,\mathrm{C_6H_6} \rightleftharpoons \mathrm{C_{12}H_{10} + H_2}    (3.17)

A two-dimensional bounded unconstrained optimization problem is formulated in order to minimize benzene's production cost (USD t−1), which is given by the sum of utility and hydrogen feed cost rates (USD h−1), as given in Eq.

3.18. The decision variables are the hydrogen-to-toluene feed ratio, which is set by manipulating the hydrogen molar feed flow rate while keeping the toluene feed constant, and the purge ratio set in the block SPLITTER.

c_\mathrm{Benzene} = \frac{\sum_{u} \dot{F}_u \cdot c_u + \dot{F}_{\mathrm{H_2}} \cdot c_{\mathrm{H_2}}}{\dot{M}_\mathrm{Benzene}}    (3.18)
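Reading Equation 3.18 as the total cost rate (utilities plus hydrogen feed, in USD/h) divided by the benzene production rate (in t/h), the objective evaluates as below. All numbers and names here are hypothetical, purely for illustration; the real rates come from the converged Aspen Plus simulation.

```python
def benzene_cost(utility_rates_costs, h2_feed_rate, h2_cost, benzene_rate):
    """Eq. 3.18: specific benzene production cost in USD/t.

    utility_rates_costs : list of (consumption rate, unit cost) pairs,
                          each product giving a cost rate in USD/h
    h2_feed_rate        : hydrogen feed rate
    h2_cost             : hydrogen unit cost (product in USD/h)
    benzene_rate        : benzene production rate in t/h
    """
    total_cost_rate = sum(rate * cost for rate, cost in utility_rates_costs)
    total_cost_rate += h2_feed_rate * h2_cost
    return total_cost_rate / benzene_rate

# Hypothetical numbers: 200 USD/h of utilities plus 300 USD/h of
# hydrogen feed cost for 5.8 t/h of benzene
print(benzene_cost([(100.0, 2.0)], 30.0, 10.0, 5.8))
```
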

A high cost is attributed to the hydrogen feed, so that the optimum lies at low hydrogen-to-toluene and purge ratios, wherein convergence is also challenging. This is done in order to test the new method for handling failed simulations described in the PI method (Chapter 3.5.5). A purge stream is always necessary because low amounts of methane are fed as an impurity with the toluene feed and more is formed in the reaction; hence, setting the purge ratio too low leads to a failure in converging the tear streams. Setting the H2:Toluene ratio too low leads to a Design-Spec diverging, since there is not enough H2 to achieve the specified toluene conversion. The problem is also solved using Sequential Least-Squares Programming (SLSQP), DE, and SHGO for comparative purposes. The SQP algorithm from Aspen Plus could not solve this problem due to convergence issues.

Figure 3.7 shows the solution of the HDA process optimization case using the PI algorithm. The initial design of simulations has been performed using a Hammersley set with 20 samples. The effect of the H2:Toluene ratio on the objective function is much more pronounced than that of the purge ratio.

The algorithm quickly identifies the area of interest (low H2:Toluene and purge ratios) and most samples are placed therein. Nevertheless, the global characteristic of the PI algorithm also led to samples near the upper bounds, because the predicted standard error there was still high during the iterations. Once PI_crit is reached, the surrogate model is minimized and the solution is used as a starting value for the direct minimization of the rigorous model using SLSQP.

Figure 3.8 shows the convergence map of the HDA process optimization case using the PI algorithm. Converged simulations are marked with dots and failed ones with crosses, while the color map indicates the probability of a simulation converging in that area (P_conv in Eq. 3.14). It is clear that the algorithm is able to sample near the region of interest even though it is close to the non-converging area. In fact, the optimum lies very close to two failed simulations. It can also be seen that the solver repeatedly attempted to visit areas of low purge ratio at H2:Toluene ratios of around 3.5, 7.4, and 9.7, because the predicted standard error there was, due to the lack of samples, high. Nevertheless, after a few of these attempts failed, P_conv tends to zero and the algorithm is shifted away because the objective predictions are also not low enough.

Table 3.1 shows the results obtained for this problem for the different tested solvers. This is not intended as a rigorous solver benchmark, but rather as a guideline to indicate whether the performance of the newly proposed PI algorithm is comparable to other readily available ones. Differential Evolution (DE) is able to locate the best solution, but the modified PI algorithm developed in this thesis performs quite well and is able to locate similarly low objective function values with far fewer function evaluations (rigorous simulations). The proposed PI method is used to optimize the BG-OCM reaction section as described in Chapter 4.2.

Table 3.1: Solution of the HDA process optimization case by different algorithms (Penteado et al., 2020)

Method            Objective Value   Rigorous      Purge Ratio    H2:Toluene
                  (USD t−1)         Simulations   in SPLITTER    Feed Ratio
DE (1)            86.3              831           0.049          0.913
PI (this work)    86.7              149           0.060          0.896
SHGO (2)          86.8              1091          0.064          0.899
SLSQP             88.3              73            0.076          0.902

(1) Population size: 15 individuals per decision variable (30 individuals)
(2) Sampling method: Sobol; number of samples: 100; iterations: 3


Figure 3.7: HDA process optimization case solved by the enhanced PI method. Rigorous function evaluations (black dots), optimum (magenta dot), surrogate model (surface with color scale) (Penteado et al., 2020).


Figure 3.8: Convergence map for the HDA process optimization case. Probability of a simulation belonging to the class converged (color map) computed with kNN classification and k = 3, converged simulations (dots), failed simulations (crosses) (Penteado et al., 2020).

4 Case Studies

This chapter describes different studies performed with the models from Chapter 2 and the methods from Chapter 3. The goal is to carry out the optimal design of a commercial-scale Biogas-based Oxidative Coupling of Methane (BG-OCM) process and use it as a basis for a techno-economic evaluation.

In Chapter 4.1, previous studies that have already been published are summarized and put into the context of this thesis. This includes partial developments and evaluations for the CO2 removal section of the Natural Gas-based Oxidative Coupling of Methane (NG-OCM) process, the development of the first version of the Bbop optimization framework, and the preliminary design and cost estimations for the BG-OCM process.

The following Chapters 4.2 to 4.6 deal with the optimal design and techno-economic evaluation of an industrial-scale BG-OCM plant located in Brazil to process 15,000 Nm3 h−1 of biogas derived from Anaerobic Digestion (AD) of vinasse and produce polymer-grade bio-ethylene as the main product. The biogas is assumed to be treated and to contain 60 mol% CH4 and 40 mol% CO2. The proposed BG-OCM process structure in Figure 1.3 is adopted. Each section of the BG-OCM process is then detailed and optimized individually, i.e., the reaction section (Chapter 4.2), the CO2 removal section (Chapter 4.3), and the distillation section (Chapter 4.4), while Chapter 4.5 presents the main conclusions from a process design perspective. In Chapter 4.6, the bio-ethylene production cost is estimated, an economic analysis using a Monte Carlo simulation is performed, and the results are compared to the market value of fossil ethylene.

4.1 Previous Studies

4.1.1 Design and Assessment of a Hybrid Membrane-Absorption CO2 Removal Process for OCM

As previously discussed in Chapter 2.2.1, a hybrid CO2 removal section for the NG-OCM had already been developed and tested at mini-plant scale, accompanied by the creation of process models for its simulation and optimization at TUB (Esche, 2015; Song et al., 2013; Stünkel, 2013). However, a clear picture of its techno-economic feasibility at industrial-scale production was still

Table 4.1: Different reactor gas outlet compositions considered as feed for the design of the hybrid CO2 removal process in the previous studies

Mole Fractions    I        II       III      IV
CO2               0.110    0.090    0.245    0.380
C2H4              0.060    0.090    0.045    0.050
N2                0.330    0.167    0.080    0.100
CH4               0.500    0.653    0.630    0.470
Total             1        1        1        1

missing. In (Penteado et al., 2016c) and in greater detail in (Penteado et al., 2016b), a simulation-based design and economic assessment of the system for a NG-OCM plant producing 100 kt C2H4 year−1 is performed.

The hybrid CO2 removal system has been designed considering four different feed compositions listed in Table 4.1. They represent the NG-OCM reactor product gas under four different operating scenarios, i.e., nitrogen dilution (feed composition I), high methane-to-oxygen ratio (feed composition II), and two different levels of CO2 dilution (feed compositions III and IV). The standalone absorption/desorption process with a 30 wt% aqueous Monoethanolamine (IUPAC: 2-aminoethan-1-ol) (MEA) solution is taken as the benchmark for evaluating the hybrid process. The two considered membrane materials are Polyimide Membrane (PIM) and Poly-(ethylene oxide) Membrane (PEOM). The Gas-Separation Membranes (GSMs) can be employed in a single module, as previously depicted in the process flow diagram in Figure 1.5, or in different cascade configurations, as illustrated in Figure 4.1. A simulation-based design is performed for each system ensuring a fixed CO2 removal ratio (Eq. 4.1) of 97 %, and the Total Annualized Cost (TAC) of each configuration is computed and compared.

\eta_{\mathrm{CO_2}} = \frac{\dot{N}^{\mathrm{in}}_{\mathrm{CO_2}} - \dot{N}^{\mathrm{out}}_{\mathrm{CO_2}}}{\dot{N}^{\mathrm{in}}_{\mathrm{CO_2}}}    (4.1)

The operation pressure range of 10 bar to 32 bar has been adopted based on (Salerno-Paredes, 2012; Stünkel, 2013) in order to reduce the amine regeneration energy required, i.e., reduce steam consumption as heating utility.
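The removal ratio in Equation 4.1 is a simple molar-flow balance over the separation section; as a one-function sketch (flow values hypothetical):

```python
def co2_removal_ratio(n_in, n_out):
    """Eq. 4.1: fraction of the incoming CO2 molar flow that is removed.

    n_in  : CO2 molar flow entering the removal section
    n_out : CO2 molar flow leaving with the treated gas
    """
    return (n_in - n_out) / n_in

# A 97 % removal target means only 3 % of the inlet CO2 may leave
# with the treated gas:
print(co2_removal_ratio(100.0, 3.0))  # -> 0.97
```
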

However, it has been shown that higher pressures lead to higher electricity (compression) cost and product loss due to the higher ethylene solubility in the absorption solution (Penteado et al., 2016b). Based on ethylene's market


Figure 4.1: Membrane cascade configurations considered in the previous studies: single membrane with permeate recycle, two-stage stripping cascade with permeate recycle, and two-stage rectification cascade with retentate recycle. Adapted from (Penteado et al., 2016b)

Figure 4.2: Sensitivity study for the influence of the absorption pressure on the operating costs (total, compression, steam, and ethylene loss) for feed composition II and standalone absorption. Reproduced from (Penteado et al., 2016b)

value and Aspen Plus' default steam and electricity cost, the parametric study in Figure 4.2 is performed to show that absorption pressures lower than 10 bar should also be considered.

In terms of OPEX, the hybrid process only outperforms the standalone absorption for the CO2 dilution scenarios. For the other scenarios, the CO2 partial pressure in the gas is too low to justify a permeation-based separation. No configuration utilizing the PEOM material outperformed the standalone absorption. The main advantages of the PEOM are its high CO2 permeance, which leads to low membrane areas, and its high selectivity towards H2 and N2 (8.42 and 45.93 at 303 K, respectively), but its selectivities towards hydrocarbons are lower than in polyimide or cellulose acetate membranes and only 3.14 for C2H4 at 303 K (Brinkmann et al., 2015). This translates into high ethylene loss, which is critical to the OCM process. Therefore, the use of PEOM is not considered further.

The hybrid process configurations using a single membrane module or a two-stage rectification cascade (see Figure 4.1) with PIM material had a performance comparable to standalone absorption in terms of OPEX. The equipment cost is then estimated and the TAC of each configuration is compared. For feed composition III, the TAC of the hybrid process employing a single PIM module is only slightly lower than that of the standalone absorption, so that a clear advantage is not attainable. For feed composition IV, the hybrid process employing a two-stage rectification cascade yields the lowest TAC despite its higher capital investment cost, due to its higher ethylene recovery.

Another study in a similar fashion also considered the utilization of a 37 wt% aqueous N-Methyl Diethanolamine (MDEA) solution with 3 wt% content of activated Piperazine (PZ) as absorption fluid to remove the CO2 from reactor outlet gas composition III (Penteado et al., 2016a). The standalone MDEA+PZ absorption performed only slightly better than the standalone MEA absorption, providing a 4.6 % reduction in the TAC. On the other hand, the hybrid PIM module and MDEA+PZ absorption process provided a 20.5 % reduction in the TAC. This mixed amine solution has not been considered further, because the model has not been fitted nor validated to the same extent as the MEA model described in Chapter 2.2.2.

These three studies concluded that the addition of a GSM upstream of the absorption unit can only bring economic advantages if a high amount of CO2, i.e., above 20 %, is present in the feed gas. This is the case if CO2 is added as dilution gas in a natural gas-fed OCM reactor or if biogas is used. Also, absorption pressures lower than 10 bar must be considered. There is the need to further develop and validate models for the equilibrium of the OCM component system with other amine solutions, such as MDEA+PZ, for a more rigorous comparison.

Limited process optimization was applied within these preliminary studies, mainly due to the difficulties related to the size, complexity, and bad convergence behavior of the model. This has been achieved in a more recent study by applying the Bbop framework described in Chapter 3 (Penteado et al., 2018a).

The TAC for the CO2 removal section of a BG-OCM process consisting of standalone absorption with MEA is minimized. The resulting process configuration is depicted in Figure 4.3, wherein bypassed equipment and lines are shown in blank and dashed respectively. The most important cost savings have been achieved by reducing the absorption pressure from 10 bar to 3.7 bar and by eliminating one compression stage (Penteado et al., 2018a). Since a large amount of CO2 is present if biogas is used as a feed, it is sensible to reduce the absorption pressure and shift most of the compression duty to the downstream compressors, i.e., after CO2 is removed and the total gas flow is smaller. In Chapter 4.3, this study is extended to include Gas-Separation Membranes (GSM), which typically require a higher operating pressure to increase the separation driving force.


Figure 4.3: Superstructure and optimal process configuration for the gas quenching, first compression, amine-based CO2 removal, and second compression steps of the BG-OCM process. Blank equipment and dashed lines are bypassed in the optimal configuration.