
The produced parts were then digitized in the photobox, which can be seen in figure 3.2. To obtain the best possible result and to be able to compare the parts, constant conditions had to be maintained during the photographing sessions. First, the white stand on which the specimens were placed was always at the same position and height. The two lights illuminating the part were placed outside the tent, next to its two closer corners, and were likewise kept at the same position and height. They were set up before the session, with the primary aim of achieving the same lighting over the whole length of the part. The tripod and the camera settings were also identical each time a photo was taken.

Inside the tent, some modifications had to be made. As mentioned before, the specimens had four different surface grains, one of which was highly polished. This caused a problem: the white tent and stand were reflected too strongly in the black polished surface, making the grayscale levels uneven, so that it was no longer possible to assess defects on this side of the part, especially in the bottom area. The solution was to place black sheets on the tent and on the stand directly above and below the polished side of the part. This caused a slight decrease in the grayscale levels, but at the same time the contrast improved, so this solution was preferred.

Next, after every part had been digitized, the photos had to be cropped. This was necessary so that the four surfaces could be separated from each other and assessed individually with respect to how the different surface grains react to different process conditions. The cropping process was mostly automated with the software GIMP: using the plugin "David's Batch Processor", the program could be given the image regions to be cropped. Afterwards, the pictures were inspected manually, and those where the automated cropping was not satisfactory were adjusted. Although care was taken to place every specimen in the same position, the stand was sometimes not at the exact position or angle desired, because every specimen had to be taken in and out together with the stand each time a new photo was taken.
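Although the cropping itself was performed in GIMP, the batch step could in principle look like the following Matlab sketch; the folder names and the crop rectangle are hypothetical placeholders, not the settings actually used in David's Batch Processor:

    % Hypothetical batch crop with a fixed rectangle, assuming the stand keeps
    % every specimen at roughly the same position in the frame.
    inputDir  = 'photos_raw';        % placeholder folder names
    outputDir = 'photos_cropped';
    if ~exist(outputDir, 'dir'); mkdir(outputDir); end
    files = dir(fullfile(inputDir, '*.jpg'));
    cropRect = [850 300 400 2200];   % [xmin ymin width height] in pixels (placeholder)
    for k = 1:numel(files)
        img     = imread(fullfile(inputDir, files(k).name));
        cropped = imcrop(img, cropRect);   % requires the Image Processing Toolbox
        imwrite(cropped, fullfile(outputDir, files(k).name));
    end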

When the pictures were ready, the grayscale levels had to be extracted from them. This was done in Matlab, where it is possible not only to obtain the grayscale level of every pixel, but also to create 3D graphs from them. These graphs were occasionally used to qualitatively assess the defects that appeared on the part. The scripts used are available in Attachment B.
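The actual scripts are given in Attachment B; the following is only a minimal sketch of what such an extraction could look like in Matlab, with a placeholder file name:

    % Read one cropped surface picture and extract the grayscale matrix
    img  = imread('run39_S1.png');   % placeholder file name
    gray = rgb2gray(img);            % pixel-by-pixel grayscale levels, 0-255
    G    = double(gray);             % matrix of grayscale values

    % Optional 3D plot of the grayscale levels for a qualitative look at defects
    surf(G, 'EdgeColor', 'none');
    view(3); colorbar;
    xlabel('Width [px]'); ylabel('Length [px]'); zlabel('Grayscale level');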

Figure 3.3: The evaluation process: a) part digitization, b) cropping of the part, c) extraction of the grayscale values

As previously mentioned in the literature review, many evaluation methods focus on certain points or areas of the surface, which can lead to questionable results. The goal set for this master thesis, however, was to analyze the whole surface both qualitatively and quantitatively.

As such, two novel methods to analyze the surfaces were developed. First, an overall average of the grayscale values was extracted from each picture. This makes it possible to compare one part with another, because only one value is obtained from the whole picture. Grayscale values range from 0 to 255, where 0 corresponds to pure black and 255 to pure white. A low value indicates a surface with fewer defects, because the discoloration caused by silver streaks and other surface defects is then not prominent on the black surface, which in turn means that the set process parameters can be considered better than those of a part with a higher grayscale level.
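As a minimal sketch of this step (again with a placeholder file name), the overall average could be computed in Matlab as follows:

    % Overall average grayscale value of one surface picture
    G = double(rgb2gray(imread('run39_S1.png')));   % placeholder file name
    overallAvg = mean(G(:));   % single value between 0 (black) and 255 (white)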

Although calculating the average value has the advantage that the data becomes easy to analyze, as only one value is produced per surface, it is not sufficient for the evaluation on its own, because local information gets lost. It could happen that the average grayscale level is acceptable, but a periodic, wave-like defect affects the surface. From one value it is also not possible to assess which part of the surface is better, and therefore the homogeneity of the surface should be measured as well. As none of the reviewed articles dealt with this, a novel method had to be developed for this master thesis.

As the grayscale values are produced in matrix form, local data can be extracted from the pictures by manipulating this matrix. It was decided that the middle 5 pixel columns, corresponding to roughly 1 mm on the part, should be averaged along the length of the part to evaluate homogeneity.

Plotting these values over the length then gives a graph of how the grayscale values vary in the middle section of the specimen, giving an idea of how homogeneous the surface is. As the parts had to be compared with each other somehow, obtaining quantitative data for homogeneity was the next goal. Homogeneity is defined as the quality or state of being all the same or all of the same kind. Applied to the generated graph, this means that the surface is homogeneous when the averaged grayscale levels lie on a straight line parallel to the x-axis.
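Continuing the sketch above, such a profile of the middle 5 pixel columns could be obtained roughly as follows (assuming the part length runs along the rows of the grayscale matrix G):

    % Homogeneity profile: average of the middle 5 pixel columns (about 1 mm)
    G = double(rgb2gray(imread('run39_S1.png')));     % placeholder file name
    midCol  = round(size(G, 2) / 2);
    profile = mean(G(:, midCol-2 : midCol+2), 2);     % one value per length position

    plot(profile);
    xlabel('Position along part length [px]');
    ylabel('Averaged grayscale level');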

To obtain quantitative data for homogeneity, the overall mean of the grayscale levels was first calculated and added to the homogeneity plot as a straight line parallel to the x-axis. Then, the absolute area below and above this line was calculated. This value, which has the dimension "pixels × grayscale level", was chosen as the indicator for homogeneity. This works because, by calculating an absolute area, it does not matter how high or low the grayscale values themselves are; only the difference between the mean line and the actual grayscale value matters. For example, a part with an overall average grayscale value of 70, which would be regarded as a bad surface in terms of brightness, might still be fully homogeneous and would then be regarded as a perfectly fine surface: not purely black, but still acceptable. On the other hand, if the overall values are low but vary constantly, the surface should be regarded as inhomogeneous. As with the overall grayscale values, a low value indicates a more homogeneous surface.
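A minimal sketch of this absolute-area calculation, continuing from the profile sketch above, could be:

    % Absolute-area homogeneity indicator: area between the profile and its
    % mean line, in "pixels x grayscale level" (low value = more homogeneous)
    meanLine     = mean(profile);
    absoluteArea = sum(abs(profile - meanLine));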

Theoretically, the lowest value for homogeneity is 0, while the most inhomogeneous surface has a value of 2200 × 256 = 563200. This would happen if the surface were pure black or pure white up to the midpoint and then suddenly changed to the opposite. The figure below showcases how the comparison of two surface homogeneities works.

Figure 3.4: The part on the left has less variance in grayscale levels than the specimen on the right, which means it is more homogeneous, as indicated by the calculated area

It should be pointed out that the overall grayscale value and the homogeneity value are not absolute indicators and should only be used to compare different parts with each other. This is because grayscale is heavily influenced by the lighting conditions: if the lights were brought closer, the grayscale values would rise to a much higher level. Nevertheless, because the lighting conditions were always the same, comparing different runs is possible, whereas simply declaring a surface good because it stays below a certain grayscale level is not sensible. The evaluation of surface homogeneity was also varied slightly compared to the calculation of the overall grayscale levels. The bottom third of the part was cut off, because, as previously mentioned, a black sheet had to be placed right under the parts, which might have influenced the local grayscale levels. This alone would not have been a huge issue, since a pure comparison means that all specimens would have a lower grayscale value in exactly this area; however, as some parts had to be cropped manually, not every picture had the same number of data points, which makes a length-based analysis harder.

Lastly, there might have been some issues with the mold heating in this bottom area, which might have had an unwanted influence on the surface; thus, it was decided to focus only on the top two-thirds of the pictures for the homogeneity evaluation. For the overall average grayscale levels this was not necessary, as that analysis does not rely on position-based data and is sufficient for comparison purposes.
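As a sketch, and assuming the top of the part corresponds to the first rows of the grayscale matrix, this restriction could be implemented as:

    % Use only the top two-thirds of the profile for the homogeneity evaluation
    profileTop     = profile(1 : round(2/3 * numel(profile)));
    homogeneityTop = sum(abs(profileTop - mean(profileTop)));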

The results were produced by averaging the grayscale and homogeneity values of the five specimens of each run, and a standard deviation was also calculated to determine how reliable the outcome is. The runs with different process parameters were numbered in the order of production, which can be found in Attachment C. The results were divided into four segments, represented by the four different surfaces. These are also available in the appendix, in Attachment D for the overall grayscale averages and in Attachment E for the homogeneity results. The four surfaces are labeled "S1", "S2", "S3" and "S4", in accordance with figure 3.1. For the sake of convenience, a run is referred to using the following format: "Run (identification number of run) [Injection volume flow rate/Melt temperature/Mold temperature/SCF content]". For example, "Run 39 [400/220/100/0.7]" means that the specimens were identified as the 39th run out of the total 59, and the set parameters were an injection volume flow rate of 400 cm³/s, a melt temperature of 220 °C, a variotherm heating temperature of 100 °C, and an SCF content of 0.7 %.
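As an illustration of these per-run statistics (the values below are hypothetical placeholders, not measured results):

    % Mean and standard deviation over the five specimens of one run
    grayscalePerSpecimen = [41.2 39.8 40.5 42.1 40.9];   % placeholder values
    runMean = mean(grayscalePerSpecimen);
    runStd  = std(grayscalePerSpecimen);                 % indicates reliability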

All in all, the results were presented in three different ways. First, an overall influence graph was made for each parameter, to see whether changing only one parameter has an effect on the surface or not. This method was then extended to the different surface textures, to check how they react to a parameter change. Lastly, a closer look was taken at individual runs to assess any underlying notable differences between them.

4 Results and discussion

In the hope of obtaining a "perfect" specimen, some runs contained no blowing agent. Despite this, nearly all of the parts suffered from some kind of surface defect. For this reason, as mentioned before, only comparisons were made between the different parameters. However, some runs show promise, and it is believed that with small adjustments a plaque with good surface quality could be produced. In the following, each of the parameters is examined to see which had the largest influence on the surface.

4.1 Verification of measurements

As previously explained, no standards currently exist for analyzing the surfaces of foam injection molded parts. The evaluation methods employed in this thesis are novel ways to quantitatively and qualitatively categorize the exterior of the specimens. However, in order to verify that these methods indeed work, a separate test was needed to confirm the validity of the results of this thesis. Therefore, the homogeneity of fifteen specimens that were not part of the original DoE was evaluated. These parts were supplied by Borealis and were made from a PP trial material with different process settings. Their common characteristic is that they all show tiger stripes, which made them good candidates for testing the evaluation methodology. Alongside the parts, Borealis also provided its own evaluation results, which were then compared with the conclusions drawn using the evaluation method proposed in this thesis.

As with the original 56 runs, a photobox was set up to digitize the parts. The size of these specimens was different, so the position of the lights also had to be changed, as can be seen in figure 4.1. In this setup, the lights were placed outside the tent, pointed in two different directions, and were farther away than in the original setup. This had to be done for two main reasons. First, if the light hit the specimens from above and below, the tiger stripes were not visible. Secondly, by concentrating the two light sources on the two ends of the specimen, it was ensured that the middle of the plaque was not too bright. With this, the grayscale levels behaved the same way in the middle as on the two sides of the part. After defining the proper setup, the homogeneity results were produced by calculating the absolute area in the same way as described in chapter 3.3.

Figure 4.1: The lighting setup for the tiger stripe specimens

At Borealis, a separate test was conducted to determine the homogeneity of the specimens with a different measurement tool and evaluation method, as can be seen in figure 4.2. The plaques were placed in a black box, where an LED array was employed to create a diffuse reflection of the sample surface. Via two mirrors, a CCD camera captured the image. This picture was then converted to grayscale, and a graph was produced that showed the grayscale values along a central axis. A curve was then fitted to these values, and finally a mean square error (MSE) was calculated, which indicates how homogeneous the surface is. A higher value means that the grayscale values deviate strongly from the fitted curve, i.e. the surface is more inhomogeneous. Conversely, a low MSE means less deviation from the regression line. From this it can be concluded that the lower this value is, the more homogeneous the surface is.
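The exact fitting procedure used by Borealis is not reproduced here; as a rough sketch, and assuming a low-order polynomial fit, the MSE of a grayscale profile could be computed as follows:

    % MSE of a central grayscale profile against a fitted curve (the 3rd-order
    % polynomial is an assumption; the Borealis tool's actual fit is not known here)
    x        = (1:numel(profile))';
    p        = polyfit(x, profile, 3);
    fitted   = polyval(p, x);
    mseValue = mean((profile - fitted).^2);   % low MSE = more homogeneous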

Figure 4.2: The measurement principle used by Borealis. Top: the arrangement of the test device. Bottom: the evaluation method used to produce the MSE.

The methodology used in this thesis and the one used by Borealis are based on the same principle. Both methods plot the grayscale values over the length, produce a central line, and assume that a low deviation from this line corresponds to a homogeneous surface. The difference between the two lies in the quantitative value associated with the surface: the Borealis method uses a statistical expression, while the method used in this thesis calculates the absolute area. The idea is the same, but a comparison can only be made by ranking the specimens according to their homogeneity, because the MSE and the absolute area are calculated differently. If the rankings match, both methods are sufficient to categorize the surface, as the result then does not depend on which way the homogeneity is calculated. In table 4.1, the results of the two methods and the rankings of the specimens can be seen.

Name of specimen    Ranking Borealis    Ranking Homogeneity    TS,MSE [-]    Absolute area [-]
N12582                     1                    1                   3               8483
N12508 - 6 sek             2                    2                   5               9649
N12508 - 3 sek             3                    3                   6              10685
N26298                     4                    5                  26              13820
N26624-3                   5                    4                  32              13813
N26624-1                   6                    6                  51              16013

Table 4.1: The results and rankings of the two different ways to evaluate homogeneity

As shown, the two results correlate with each other. N12582, which only had tiger stripes on one half of the surface, had the lowest MSE and absolute area, which means that both methodologies agree that this specimen is the best in terms of homogeneity. N26624-1 had the most and strongest tiger stripes among the plaques, and thus both outcomes agree that this is the worst surface. The rest of the ranking correlates similarly, with the only exception being between N26298 and N26624-3. This could have been caused by differences in the quality of the pictures taken for the absolute-area method. Nevertheless, Borealis verified the validity of the absolute-area results, and it can thus be concluded that the absolute-area method is sufficient to characterize the surfaces of the parts originally produced for this master thesis.
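As a minimal illustration of the ranking-based comparison described above, using the TS,MSE and absolute-area values from table 4.1:

    % Compare the two methods by ranking only, since MSE and absolute area are
    % on different scales (values copied from table 4.1; low = more homogeneous)
    specimens = {'N12582','N12508 - 6 sek','N12508 - 3 sek','N26298','N26624-3','N26624-1'};
    mseValues = [3 5 6 26 32 51];
    absArea   = [8483 9649 10685 13820 13813 16013];
    [~, orderMSE]  = sort(mseValues);    % specimen indices ordered best to worst
    [~, orderArea] = sort(absArea);
    disp(specimens(orderMSE));           % ranking according to the Borealis MSE
    disp(specimens(orderArea));          % ranking according to the absolute area

With these values, the two orders differ only in the positions of N26298 and N26624-3, which corresponds to the single exception noted above.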

4.2 Injection volume flow rate