When applying statistical hypothesis testing, a type I error (false positive) can occur. Often we will never know whether one has occurred. But are there cases where we can learn the truth later, after the test has been applied?
For example, suppose I want to know whether women live longer than men. I set up a hypothesis test on age at death for the two genders: H0 is that the means are equal, and H1 is that women's mean age at death is larger. Assume the result is significant, so we reject the null. Also assume that later scientific research shows women do not live longer than men, and new data show no significant difference. The original rejection would then be a type I error, and it becomes known only after the hypothesis test.
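To make the scenario concrete, here is a minimal simulation sketch (all numbers are hypothetical) in which both groups are drawn from the same distribution, so H0 is true by construction and every rejection is a known type I error. It uses a one-sided two-sample z-test with the standard 5% level:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05
z_crit = 1.6449  # one-sided 5% critical value of the standard normal
n, n_trials = 100, 2000
rejections = 0

for _ in range(n_trials):
    # Hypothetical ages at death; same mean for both groups, so H0 is true
    women = rng.normal(80, 10, n)
    men = rng.normal(80, 10, n)
    # One-sided test: H1 says women's mean age at death is larger
    se = np.sqrt(women.var(ddof=1) / n + men.var(ddof=1) / n)
    z = (women.mean() - men.mean()) / se
    if z > z_crit:
        rejections += 1  # a rejection here is necessarily a type I error

print(rejections / n_trials)  # should land near alpha = 0.05
```

Because the simulator controls the truth, the empirical rejection rate directly estimates the type I error probability; in real data that truth is exactly what we lack.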
Where could I find cases like this, where a type I error is later revealed by other measurements?
One example could be Covid testing, where the null hypothesis is that the individual does not have Covid, and the alternative hypothesis is that the individual has Covid.
When developing Covid test schemes in the lab, we usually know beforehand whether the individuals have Covid (through X-rays or other methods), and we can estimate the test's Type I error rate by comparing the known statuses with the test results.
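As a sketch of that comparison (the labels and results below are made-up illustrative data, not from any real study), the estimated type I error rate is just the fraction of known negatives that the test flags as positive:

```python
# Hypothetical labeled validation data: actual status known beforehand
actual = [0, 0, 0, 0, 1, 1, 0, 1, 0, 0]   # 1 = has Covid
test   = [0, 1, 0, 0, 1, 1, 0, 0, 0, 1]   # 1 = test came back positive

false_pos = sum(1 for a, t in zip(actual, test) if a == 0 and t == 1)
true_neg  = sum(1 for a, t in zip(actual, test) if a == 0 and t == 0)

# Estimated type I error rate = P(test positive | no Covid)
fpr = false_pos / (false_pos + true_neg)
print(fpr)  # 2 false positives out of 7 known negatives
```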
When applying a developed Covid test scheme in practice, we can also detect false positives by repeatedly sampling/testing the individuals concerned and checking whether the results are consistent. Here is an example (https://medicine.missouri.edu/news/researchers-identify-technique-detect-false-positive-covid-19-results), where individuals who tested positive went through a quality-control protocol of repeat testing to reduce false positives.
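A simple Bayes'-rule calculation shows why repeat testing helps. The numbers below (prevalence, sensitivity, false positive rate) are illustrative assumptions, not figures from the linked study, and the repeated tests are assumed independent, which real repeat tests often are not:

```python
# Illustrative assumptions, not values from the linked study:
prevalence = 0.01        # P(has Covid)
sensitivity = 0.95       # P(test positive | has Covid)
false_pos_rate = 0.02    # P(test positive | no Covid) = type I error rate

def ppv(n_positives):
    """P(has Covid | n independent positive results), by Bayes' rule."""
    p_pos_given_sick = sensitivity ** n_positives
    p_pos_given_healthy = false_pos_rate ** n_positives
    num = p_pos_given_sick * prevalence
    return num / (num + p_pos_given_healthy * (1 - prevalence))

print(f"PPV after 1 positive:  {ppv(1):.3f}")   # roughly 0.32
print(f"PPV after 2 positives: {ppv(2):.3f}")   # roughly 0.96
```

With these numbers, a single positive is more likely to be false than true (low prevalence), while a second independent positive makes a true infection far more probable, which is the logic behind the repeat-testing protocol.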