New WebObs submission: observations outside expected range

I have still been submitting photometry observations using the old WebObs. Today, I thought I’d try the new version to see if all the bugs had been ironed out.

When I submitted, I got the following warning:

It says one or more of my observations are outside the expected range. However, if you look at the data, my values are almost exactly on the median. I went ahead and submitted them anyway, since the warning seems to be incorrect.

I do like the new graph. Hopefully it will prevent people from submitting bad data.


Just my personal view here… but if “normal” white-noise process statistics are being used, and that appears to be what these plots use, then any decisions made about outliers at upload time are basically not valid. I would not delete any observations based on that plot. The plots and the warning might, might, be useful for an observer to catch blunders: weather issues like a cloud passing over the field, data-entry errors when typing in observations manually, and so on. Any other “outlier” filtering should be done post facto by the future users of the data, using all the available information.

The age-old problem: is it just observing-system noise, or is it real signal? What if T CrB went into outburst? It would rapidly exceed the 5-sigma limit, the median would lag significantly, and the warning might keep you from uploading your real-signal data!
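To illustrate that point, here is a minimal sketch, not the actual WebObs 2.0 logic: the baseline length, the use of the plain median and standard deviation, and the 5-sigma band are all my assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Quiescent light curve: magnitude ~10.2 with 0.05 mag scatter (made-up numbers)
baseline = 10.2 + 0.05 * rng.normal(size=200)

# Hypothetical range check: median +/- 5 sigma of the recent history
median = np.median(baseline)
sigma = np.std(baseline)
lo, hi = median - 5 * sigma, median + 5 * sigma

# A genuine outburst: the star brightens by roughly 3 magnitudes
new_obs = 7.2

flagged = not (lo <= new_obs <= hi)
print(f"expected range: {lo:.2f} to {hi:.2f} mag")
print(f"new observation {new_obs} mag flagged as outlier: {flagged}")
# The outburst point is flagged even though it is real signal, because the
# median of the quiescent history lags far behind the brightening.
```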

Jim (DEY)


WebObs 2.0 does not filter out any data - it provides warnings. The purpose is to help people avoid uploading obviously broken photometry such as I detailed in my data quality report. My personal view is that we have a serious problem with observers taking a “fire and forget” attitude, wherein they perform no quality control on their data. We do researchers no favors by accumulating systematic errors.

Tom


Is there an explanation of the time period over which the median and 5-sigma limits are computed in WebObs 2.0? Are the median and symmetric 5-sigma limits valid and appropriate statistics, especially for rapidly varying objects? Why not 6 sigma, which is something of an industry standard in process control? :grinning:

I really don’t think those plots bring anything to the table in getting observers to fix their errors, extremes, or biases at upload time. Also, on objects without a lot of observations, there is no control to evaluate against.
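For what it’s worth, here is one way such a windowed check could be computed robustly. This is purely illustrative; the 90-day window, the MAD-based sigma, and the minimum-count guard are my assumptions, not what WebObs 2.0 actually does.

```python
import numpy as np

def expected_range(jd, mag, new_jd, window_days=90.0, n_sigma=5.0, min_obs=20):
    """Median +/- n_sigma band from observations within window_days of new_jd.

    Returns None when there are too few observations to form a meaningful
    control, which addresses the "no control to evaluate against" case.
    """
    jd = np.asarray(jd, dtype=float)
    mag = np.asarray(mag, dtype=float)
    recent = mag[np.abs(jd - new_jd) <= window_days]
    if recent.size < min_obs:
        return None  # not enough history: skip the warning entirely
    median = np.median(recent)
    # Robust sigma estimate from the median absolute deviation (MAD);
    # 1.4826 scales the MAD to the standard deviation of a normal distribution.
    sigma = 1.4826 * np.median(np.abs(recent - median))
    return median - n_sigma * sigma, median + n_sigma * sigma
```

Even with a robust sigma, of course, the objection above still stands: a genuine brightening event would fall outside the band.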

Thanks for the pdf. I’ll take a look at it.

Jim (DEY)