What we've found so far is that comparing microplastic levels in salt across studies is difficult because researchers use different methods to measure them. Our analysis of the available research shows that this lack of consistent methods makes it hard to know if differences in results come from actual contamination levels or just from how the studies were done [1].
The evidence we've reviewed leans toward the idea that without a standard way to detect and report microplastics, the numbers from one study may not match another, even when both examine the same type of salt. Some studies count particles down to smaller sizes, while others detect only larger ones; some use different instruments or identification techniques, which can change the results [1]. Because of this, what we see in the data reflects not just the salt itself, but also the techniques used in each lab.
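The size-threshold point can be illustrated with a minimal sketch (all numbers here are hypothetical, invented purely for illustration): two labs analyzing the identical set of particles report different microplastic counts simply because their detection limits differ.

```python
# Hypothetical particle sizes (in micrometers) from one imaginary salt sample.
particle_sizes_um = [12, 45, 80, 150, 150, 300, 500, 900]

def count_detected(sizes, min_size_um):
    """Count particles at or above a method's minimum detectable size."""
    return sum(1 for size in sizes if size >= min_size_um)

# Lab A's method detects particles down to 10 µm; Lab B's only down to 100 µm.
lab_a_count = count_detected(particle_sizes_um, 10)
lab_b_count = count_detected(particle_sizes_um, 100)
print(lab_a_count, lab_b_count)  # prints: 8 5
```

The same sample yields 8 particles in one report and 5 in the other, with no difference in actual contamination; this is the kind of method-driven gap the reviewed studies describe.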
Based on what we've reviewed so far, there is no disagreement in the evidence — all 39 supporting assertions point to this methodological challenge, and none refute it. Still, we don’t have enough information to say how big the differences are or which method works best. Our current analysis can’t tell us what the “true” microplastic level in any given salt is, only that comparing across studies is limited by how the measurements are taken.
This doesn’t mean the data is useless — it tells us that better standards are needed. As more studies adopt consistent methods, we’ll be in a better position to make real comparisons.
Practical takeaway: If you're trying to compare microplastic levels in salt, keep in mind that the numbers might not mean the same thing from one study to the next — the way they were measured matters just as much as the results.