#7
12th January 2017
ocicat
Administrator
 
Join Date: Apr 2008
Posts: 3,319

Quote:
Originally Posted by psypro
"It would be very interesting if somebody could write formally verified small, trusted part of these projects. It would be interesting if somebody could write formally verified pledge kernel's subsystem. "
There is hardware simple enough that all of its aspects can be thoroughly tested. Software is exponentially more complicated -- complicated enough that testing every code path is impossible.

Yes, there have been people of late who claim that software is now mathematically provable, but I have yet to see a complete argument which proves this beyond doubt. I remain of the opinion that the problem is so complicated that it is not worth much discussion -- even for small portions of a kernel.
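
To make concrete what such a claim even means, here is roughly what source-level verification looks like for a trivial function -- an illustrative sketch using ACSL-style annotations of the kind checked by tools such as Frama-C, not code from any real project:

Code:
/*@ ensures \result >= a && \result >= b;
  @ ensures \result == a || \result == b;
  @*/
int max2(int a, int b)
{
        return a > b ? a : b;
}

A prover can discharge a contract this small automatically. Scaling the same exercise up to a kernel subsystem is where the trouble starts.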
Quote:
I think if the OpenBSD dev community managed to write some small part of OpenBSD in a mathematically proven way, they would learn something while doing it that would, in the long term, improve the code quality of the rest of the project as well.
Your argument implies that all details are known -- every bug in the compilers or interpreters used, as well as in the processor itself. I can tell you from experience that hardware manufacturers don't make all bugs public. Trying to "prove" correctness cannot get around the politics and impracticality of exposing details that manufacturers are not willing to air to the world.
Quote:
If so, is C the language of the future for OpenBSD, or are more complex compilers/languages like Rust needed to improve code correctness to a level where it can be, at least somewhat, mathematically proven?
Again, the argument does not take into account the realities of implementation. Some compilers are better at translating higher-level logic into processor instructions than others. So when people claim they have mathematically proven the correctness of a particular chunk of software, do they state on which processor, and with which specific set of tools? If they don't, what is it that they are proving?
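
A concrete illustration of why the toolchain matters -- a deliberately contrived sketch, not OpenBSD code:

Code:
#include <limits.h>
#include <stdio.h>

/* "x + 1 > x" reads like a theorem, but signed overflow is
 * undefined behavior in C.  One compiler may fold the comparison
 * to 1 at -O2; another may emit a real add that wraps, so the
 * function returns 0 when x == INT_MAX.  A proof over the source
 * alone says nothing certain about either binary. */
int always_greater(int x)
{
        return x + 1 > x;
}

int main(void)
{
        printf("%d\n", always_greater(INT_MAX));
        return 0;
}

The same source, fed to two different compilers, can yield two programs with different observable behavior. Whatever was "proven" about the source does not pin down either of them.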

And when the tools change, are "they" going back & re-verifying that all translations used in the previous verification are still exactly the same with a newer toolchain? Proving correctness then becomes a never-ending process, and who is going to do this? And how fast can they do it? And are they testing on an ecosystem of hardware & software that resembles my hardware & software of choice?

I know there are those who claim that the abstractions implemented by languages such as Rust are "significantly better than C", but show me a perfect Rust compiler. There isn't one, just as there isn't a perfect C compiler. Claims of "my memory model is better than your memory model" don't hold a lot of weight either, because it all comes down to what the implementation is doing on a specific piece of hardware. If you dig into the history of OpenBSD's tools, you will find a number of compiler fixes implemented by the project's developers for the compilers used on the various platforms OpenBSD supports. No other project implements everything used here, and there are likely other projects carrying compiler fixes of their own which are not universally shared either.
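
For the record, here is the class of bug those "better memory model" claims are about -- again a contrived sketch, not code from any real project:

Code:
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
        int *p = malloc(sizeof *p);

        if (p == NULL)
                return 1;
        *p = 42;
        free(p);

        /* Use after free: the C compiler accepts this silently.
         * Whether it prints 42, prints garbage, or crashes depends
         * entirely on the allocator and the hardware underneath. */
        printf("%d\n", *p);
        return 0;
}

Rust's borrow checker rejects the equivalent program at compile time, which is the substance of the claim -- but that check is only as trustworthy as the compiler performing it, which brings us back to the same problem.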

So how does this square with your argument about "mathematically proving" the OpenBSD system? Who is doing the proving, on what platform, using what toolchain? I never see this level of detail stated in the underlying provability claims, and if these details are not being aired openly, I question what it is that is being proven. Again, the fundamental problem is so complicated that many will say their time is better spent doing other things. I tend to agree with them.

Last edited by ocicat; 12th January 2017 at 06:27 AM. Reason: add clarity