Observed anomalies in the data lead to discoveries, but you have to be absolutely sure the anomalies are genuine and reproducible. I had a very interesting result this morning (in the form of a plot) that turned out to be due to a bug in the code.
“Bugs” crop up not just in software development, but in theoretical and experimental work as well. I do think that these latter two disciplines have a deeper tradition of verification. Simulation work too often suffers from a lack of even basic “sanity-check” validation.
The process I’ve been following is this: before I start coding, I come up with a few “interesting” test cases, and I use them throughout development to make sure the code behaves as I expect. The moment it doesn’t, even in minor ways, I expand the set of test cases until I’m confident the anomaly is genuine rather than an artifact of the code.
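As a minimal sketch of what such sanity-check test cases can look like (in Python, purely for illustration; the `trapezoid` function here is a hypothetical stand-in for whatever routine is under development), the idea is to check the code against inputs with known closed-form answers before trusting it on anything interesting:

```python
import math

def trapezoid(f, a, b, n=1000):
    """Numerically integrate f over [a, b] with the trapezoid rule."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Sanity checks: cases where the exact answer is known in advance.
assert abs(trapezoid(lambda x: x, 0.0, 1.0) - 0.5) < 1e-9            # linear (exact for this rule)
assert abs(trapezoid(lambda x: 1.0, -3.0, 3.0) - 6.0) < 1e-9          # constant
assert abs(trapezoid(math.sin, 0.0, math.pi) - 2.0) < 1e-4            # known definite integral
```

If a new feature makes any of these fail, the failure pinpoints a bug; if they all pass while an “interesting” result appears, that result has survived at least one round of scrutiny, and the natural next step is to add a test case that probes the regime where the anomaly showed up.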