Engineering Asked by Diogo on June 15, 2021
Say you are trying to measure the liquid flow rate (mass or volume) through a pipe, you want the best measurement possible, and you have two flow meters. In the first scenario, you install one flow meter and keep the other as a backup. In the second scenario, you split the flow into two pipes, put a flow meter on each, and join them back up downstream. In the final scenario, you hook both meters up in series on a single pipe.
My instinct is that using two measuring devices instead of one is always better, and I’m fairly sure that the series case is the best scenario. However, what happens if you can’t put them in series? Is it better to just have one, or to split the flow up if you can?
For the purposes of these scenarios, I’m always calibrating the whole unit end to end, so I would always calibrate both meters as one.
Putting the meters in series is clearly better than the other two options. With two (perfectly calibrated) flow meters in series, you get two independent readings of the same flow, so you can make inferences about the actual value based on the calibration uncertainty, the reading uncertainty, and the sampling statistics.
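As a minimal sketch of why two independent readings help: averaging two readings with independent, equal random errors reduces the standard uncertainty of the estimate by a factor of √2. The numbers below (readings and per-meter uncertainty) are assumptions for illustration, not a full calibration model:

```python
import statistics

# Hypothetical readings (kg/s) from two perfectly calibrated flow
# meters in series; each has independent random error.
sigma_single = 0.05                 # assumed standard uncertainty of one meter
reading_a, reading_b = 10.03, 9.98  # assumed individual readings

best_estimate = statistics.mean([reading_a, reading_b])

# For independent, equal-variance errors, the uncertainty of the
# mean of two readings shrinks by a factor of sqrt(2):
sigma_mean = sigma_single / 2 ** 0.5

print(f"estimate = {best_estimate:.3f} +/- {sigma_mean:.3f} kg/s")
```

With more meters in series the same logic gives a 1/√n reduction, which is why the series arrangement genuinely adds information while the other two arrangements do not.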
I'll start with the problems with the other two options:
With one meter and a backup, you have only one measurement value; the idle flow meter adds no information.
With the parallel split, there is no way to make certain that the flow divides equally, so you still effectively get only one value. And if the flow meters are calibrated to measure the entire flow, each branch reading carries additional relative error from the meter's finite resolution, much like using a ruler with millimetre ticks: measuring something about a millimetre long gives a far larger relative error than measuring something about a centimetre long.
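The ruler analogy can be made concrete with a toy model: if a meter resolves readings to a fixed step, its worst-case quantization error is half a step, so the relative error doubles when each meter sees only half the flow. The resolution and flow values below are assumptions for illustration:

```python
resolution = 0.1              # kg/s, assumed smallest step the meter resolves
full_flow = 10.0              # kg/s through a single pipe
branch_flow = full_flow / 2   # ideal 50/50 split (rarely exact in practice)

# Worst-case quantization error is half a resolution step, so the
# *relative* error doubles when each meter only sees half the flow:
rel_err_single = (resolution / 2) / full_flow
rel_err_branch = (resolution / 2) / branch_flow

print(rel_err_single, rel_err_branch)
```

This ignores the (usually worse) problem that the split itself is uneven, which adds an unknown systematic error on top of the resolution penalty.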
If you really can't put the two in series, then you are probably (marginally) better off putting them in parallel. However, this has nothing to do with the uncertainty the question asks about.
The benefit is that you can see if one of the flow meters develops a problem: if it breaks down or a cable comes loose, you'd see a significant difference between the two readings. Additionally, you can set it up so that a flow meter can be replaced without stopping the process, since each branch essentially acts as a bypass for the other.
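That cross-check can be as simple as flagging whenever the two branch readings disagree by more than an expected tolerance. The function name and the 5% threshold below are illustrative assumptions; a real threshold would come from the meters' stated accuracy and the expected split imbalance:

```python
def meters_disagree(reading_a, reading_b, rel_tol=0.05):
    """Flag a possible fault (broken meter, loose cable, blocked branch)
    when the two branch readings differ by more than rel_tol of their mean.

    rel_tol=0.05 is an assumed tolerance, not a recommended value."""
    mean = (reading_a + reading_b) / 2
    if mean == 0:
        return reading_a != reading_b
    return abs(reading_a - reading_b) / mean > rel_tol

print(meters_disagree(5.0, 5.1))  # small mismatch: no alarm
print(meters_disagree(5.0, 3.2))  # large mismatch: investigate
```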
On the other hand, if you have only two flow meters (no backup), then you might be better off using only one. Again, this has nothing to do with uncertainty.
Correct answer by NMech on June 15, 2021
In series would usually be best, with caveats.
One particular issue I've seen is with thermal mass flow meters at low flow rates: the upstream sensor heats the flow a little, throwing off the downstream sensor. I suspect Coriolis and ultrasonic technologies can also be affected.
To get the best measurement, make sure the flow at every sensor is fully developed, free from vorticity, and free from temperature gradients (both axial and radial), and that everything is at thermal equilibrium. These are probably the most important factors.
For flows that rise and fall in time, different sensor technologies produce different distortions in the frequency domain. A combination of technologies (e.g. differential pressure and thermal) can improve overall performance in such dynamic situations (e.g. dispensing), but again it needs careful setup. The idea is that one technology provides low-frequency accuracy while the other inherently captures the mid-band dynamics with less distortion.
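The combination described above is essentially a complementary filter: trust the fast sensor for changes, and let the slow-but-accurate sensor anchor the low-frequency level. This is a first-order sketch under assumed roles (the sensor names and the blend factor `alpha` are illustrative, not a tuned design):

```python
def complementary_filter(slow, fast, alpha=0.9):
    """Fuse two sampled flow signals.

    slow : list of readings accurate at low frequency (e.g. thermal meter)
    fast : list of readings that capture dynamics but may drift
           (e.g. differential pressure); same sample times as `slow`
    alpha close to 1 trusts the fast sensor's increments for dynamics
    while the slow sensor pulls the estimate back to the true level.
    alpha=0.9 is an assumed value, not a recommendation."""
    fused = []
    estimate = slow[0]
    for s, f_now, f_prev in zip(slow[1:], fast[1:], fast[:-1]):
        # High-pass the fast sensor via its increments,
        # low-pass the slow sensor via the (1 - alpha) blend:
        estimate = alpha * (estimate + (f_now - f_prev)) + (1 - alpha) * s
        fused.append(estimate)
    return fused

# Example: the fast sensor sees a step the slow sensor misses entirely;
# the fused estimate jumps with the step, then relaxes toward the slow level.
print(complementary_filter([0.0] * 5, [0.0, 1.0, 1.0, 1.0, 1.0]))
```

The same structure underlies sensor fusion in other domains (e.g. attitude estimation); the careful-setup caveat from the answer applies, since a poorly chosen crossover just blends each sensor's worst band.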
For calibration, I would prefer a mass measurement.
Answered by Pete W on June 15, 2021