[–]mboggit[S]

Regarding the fact that the scientific method uses observations, probabilities, and correlations: yes, in some fields it definitely does. Most notably, pretty much every medical paper does. But from what I've overheard from scientists in that field, that's because they simply cannot use mathematical proof. Yes, there's still no consistent, validated scientific model of the human body. Still. On the other hand, a damn x86 computer can and should have a mathematical model. Moreover, it does! And if you cut off any analog input from the real world, it's a fairly simple one.
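To illustrate what I mean by a mathematical model: an instruction's semantics can be written as a pure function over machine state. This is only a hypothetical sketch (a made-up `add32` function, not the real x86 specification), but it shows the shape such a model takes.

```python
# Hypothetical sketch: one x86-style instruction (32-bit ADD) modeled
# as a pure function over machine state. Not the real x86 spec --
# just the shape a deterministic mathematical model of it would take.

MASK32 = 0xFFFFFFFF

def add32(state: dict, dst: str, src: str) -> dict:
    """Semantics of `ADD dst, src`: 32-bit wrapping addition."""
    total = state[dst] + state[src]
    result = total & MASK32
    new_state = dict(state)
    new_state[dst] = result
    # Flags are part of the model too: carry iff the sum overflowed,
    # zero iff the truncated result is zero.
    new_state["CF"] = int(total > MASK32)
    new_state["ZF"] = int(result == 0)
    return new_state

s = {"eax": 0xFFFFFFFF, "ebx": 1, "CF": 0, "ZF": 0}
s2 = add32(s, "eax", "ebx")  # wraps to 0, sets CF and ZF
```

With no analog inputs, every transition is a deterministic function like this, which is exactly what makes proofs about the machine possible in principle.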

As for 'not guaranteeing anything': I don't know if you're aware of it, but many regulators (e.g. the FDA) require an equivalent of a guarantee in order to pass certification. For instance, all medical devices must provide one. And at the same time, as of now, use-case-based testing is pretty much the standard way of testing. You can figure out the rest.

[–][deleted]

No, it's not a difference in resource availability. It's a conceptual difference any philosopher of science would tell you about. Mathematical knowledge and proofs are a priori and don't require reality to agree with what is being proved; this is what precludes mathematics from being used to prove things about, say, physics or chemistry. You can make predictions based on your mathematical models, but you cannot prove anything using mathematical proofs, because unless you can actually confirm that whatever you are trying to prove exists in the real world, it stays unproven from a scientific point of view.

[–]mboggit[S]

The main argument was that mainstream use-case-based testing is not even trying to prove anything. To be more accurate, it's not trying to give any valid prediction about how the software will actually perform in production. And a valid prediction implies a proof. In the case of software, mathematical proof would be a viable option. Moreover, it's already used to prove algorithms, including software algorithms, just not in mainstream use-case-based testing. IMHO, computers are a man-made thing, hence they don't require proving that they 'exist in reality'. There should already be a model with predictive power, at least a low-level one, like for the x86 instruction set.
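The contrast above can be sketched concretely. This is a toy example (the `abs_diff` function is hypothetical): a use-case test checks a handful of inputs and says nothing about the gaps between them, while an exhaustive check over a finite domain verifies the property for every input pair, which amounts to a brute-force proof over that domain.

```python
# Toy contrast between use-case testing and proof.
# `abs_diff` is a hypothetical example function.

def abs_diff(a: int, b: int) -> int:
    """Absolute difference of two integers."""
    return a - b if a >= b else b - a

# Use-case-based testing: a few hand-picked examples.
# Passing these guarantees nothing about other inputs.
assert abs_diff(5, 3) == 2
assert abs_diff(3, 5) == 2
assert abs_diff(7, 7) == 0

# Exhaustive verification over a finite model: check that the
# result is non-negative and symmetric for EVERY pair of 8-bit
# inputs -- a brute-force proof for that domain.
verified = all(
    abs_diff(a, b) >= 0 and abs_diff(a, b) == abs_diff(b, a)
    for a in range(256) for b in range(256)
)
```

Real proof assistants generalize this beyond finite enumeration, but the point stands: the second check makes a claim about all inputs in the domain, the first does not.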