Engineering Asked on March 20, 2021
In a polymer production process there are many manipulated variables (MVs), such as temperature, pressure, or reactor level, that affect the quality of my polymer product.
However, such a process usually has “dead-time” problems, especially when the scale of the system is quite large.
I’ve already collected empirical data from the process over 6 months. Now I want to find the “dead time” of each MV that has a significant influence on the final quality.
For instance, if I increase the temperature by 1 °C, then after 1 hour the viscosity of the polymer also increases by 100 cP. That 1-hour delay is the answer I’d like to figure out.
How should I proceed if I can only analyze the empirical data on hand, without running experiments on the production lines?
Consider categorizing your data by parametric variations. Suppose you have three MVs: $p$, $T$, $z$. You might separate the data into sets where only one MV changes ($p$ alone, $T$ alone, $z$ alone), sets where a pair changes ($p,T$; $p,z$; $T,z$), and a set where all three change, as sketched below.
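A minimal sketch of one way to do this partitioning, assuming the six months of data sit in a pandas DataFrame with one row per sample and columns "p", "T", and "z" (the column names and the deadband threshold are illustrative, not part of the original answer):

```python
import pandas as pd

def label_variation_sets(df: pd.DataFrame, mvs=("p", "T", "z")) -> pd.Series:
    """Tag each row with the subset of MVs that moved since the previous sample."""
    changed = {}
    for mv in mvs:
        deadband = 0.01 * df[mv].std()           # illustrative "did it move?" threshold
        changed[mv] = df[mv].diff().abs() > deadband
    # e.g. "T" where only T moved, "p,T" where both moved, "" where none did
    return pd.DataFrame(changed).apply(
        lambda row: ",".join(mv for mv in mvs if row[mv]), axis=1)

# df["set"] = label_variation_sets(df)
# df.groupby("set") then yields the single-MV and combined-MV subsets.
```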
Analyze the first three sets. Does anything happen when only $p$ changes? When only $T$ changes? When only $z$ changes? Make notes, and check for consistency across each MV. The results are the influences of the fundamental, or first-order, control factors. Derive whatever metrics you want from this analysis (e.g., the “dead time” you are after).
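If the samples are regularly spaced, one way to derive the dead time from a single-MV set is to scan candidate lags and pick the one at which the MV correlates most strongly with the quality variable. This is a sketch under assumed names and a 1-minute sampling interval; the synthetic data mimic the 1 °C → 100 cP example from the question:

```python
import numpy as np
import pandas as pd

def estimate_dead_time(df: pd.DataFrame, mv: str, quality: str,
                       max_lag: int = 120) -> int:
    """Return the lag (in samples) that maximizes |corr(mv(t), quality(t+lag))|."""
    best_lag, best_corr = 0, 0.0
    for lag in range(1, max_lag + 1):
        corr = df[mv].corr(df[quality].shift(-lag))   # quality trails the MV
        if abs(corr) > abs(best_corr):
            best_lag, best_corr = lag, corr
    return best_lag

# Synthetic check: viscosity responds to T with a 60-sample (1-hour) delay.
rng = np.random.default_rng(0)
T = rng.normal(180.0, 1.0, 2000)                          # temperature, deg C
visc = np.roll(T, 60) * 100.0 + rng.normal(0, 20, 2000)   # ~100 cP per deg C
df = pd.DataFrame({"T": T, "viscosity": visc})
print(estimate_dead_time(df, "T", "viscosity"))           # ~60 samples
```

With one sample per minute, a best lag of about 60 samples recovers the 1-hour dead time from the question's example.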
Continue the analysis with the remaining sets. The results are the cross-correlation influences (e.g., how the combined set $p,T$ affects the output).
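One simple way to quantify such a cross influence is to shift each MV by the dead time estimated in the single-MV step and fit a linear model with an interaction term. This is only a sketch under those assumptions (linear response, known per-MV lags, illustrative column names), not a prescription from the original answer:

```python
import numpy as np
import pandas as pd

def fit_combined_effect(df: pd.DataFrame, lag_p: int, lag_T: int) -> dict:
    """Regress viscosity(t) on p(t - lag_p), T(t - lag_T), and their product."""
    X = pd.DataFrame({
        "p": df["p"].shift(lag_p),    # MV value one dead time before the output
        "T": df["T"].shift(lag_T),
    })
    X["pT"] = X["p"] * X["T"]         # interaction (cross) term
    X["const"] = 1.0
    data = pd.concat([X, df["viscosity"]], axis=1).dropna()
    coef, *_ = np.linalg.lstsq(
        data[["const", "p", "T", "pT"]].to_numpy(),
        data["viscosity"].to_numpy(), rcond=None)
    return dict(zip(["const", "p", "T", "pT"], coef))

# effects = fit_combined_effect(df, lag_p=45, lag_T=60)  # lags from the previous step
```

The coefficient on the interaction column then indicates whether $p$ and $T$ acting together do more than the sum of their individual effects.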
The analysis may be done manually, by visual inspection of the data sets you have. Alternatively, statistical software such as R can help you cull through the noise.
Answered by Jeffrey J Weimer on March 20, 2021