Discussion: Rust's temporal safety for Ada/SPARK
d***@gmail.com
2017-05-13 20:33:27 UTC
I'm a high-assurance engineer/researcher who mainly evangelizes the use of proven methods, tools, etc. in proprietary and FOSS systems. I've promoted Ada for a long time (esp. Barnes' Safe and Secure book) but I don't use it myself, so I need help answering something. Inspired by the Cyclone language and linear types, the Rust language has pulled off a rare feat in using its ownership and type system (esp. affine types) to eliminate the temporal errors that come from mismanaging references. That plus two "traits" eliminates race conditions, too. This is not in static code or code with heavy restrictions on what it can do. Their thriving community is coding just about everything you can think of, from desktop (e.g. Redox OS) to servers (e.g. TrustDNS) to embedded (e.g. Tock OS). Here's a description of their memory-safety scheme:

https://doc.rust-lang.org/book/references-and-borrowing.html

Here's the Cyclone page for people just curious where it came from or on safe languages in general:

https://en.wikipedia.org/wiki/Cyclone_(programming_language)

So, with Rust's approach, they get memory safety even for *dynamic or concurrent use* of memory, at compile time, with no overhead, runtime checks, GC, etc. Whereas the last thing I read on Ada showed it has a few tricks, but many dynamic uses resort to unsafe deallocations at some point. Other people were suggesting reference counting or a GC, leading me to further think it lacks this ability of Rust's. So, my question is: does Ada 2012 currently have Rust's capability to enforce both temporal memory safety and immunity to race conditions? I'm really focusing on an equivalent to the borrow checker in Rust, though. If it doesn't have an equivalent, is there anyone working on adding it to Ada, esp. at AdaCore? What Ada/SPARK already have plus memory safety for dynamic code would be an awesome combination that would put Rust in a distant second. I may be misunderstanding Ada's capabilities, though, so I figured I'd ask the experts first. Thanks ahead of time.

Nick P.
Jeffrey R. Carter
2017-05-13 21:19:26 UTC
Post by d***@gmail.com
So, with Rust's approach, they get memory safety even for *dynamic or
concurrent use* of memory at compile time with no overhead, runtime checks,
GC, etc. Whereas, the last thing I read on Ada showed it has a few tricks but
many dynamic uses resort to unsafe deallocations at some point. Other people
were suggesting reference counting or a GC leading me to further think it
lacks this ability of Rust. So, my question is, does Ada 2012 currently have
Rust's capability to enforce both temporal, memory safety and immunity to
race conditions? I'm really focusing on an equivalent to the borrow-checker
in Rust, though. If it doesn't have an equivalent, is there anyone working on
adding it to Ada esp at AdaCore? What Ada/SPARK have already + memory safety
for dynamic code would be an awesome combination that would put Rust in
distant second. I may be misunderstanding Ada's capabilities, though, so I
figure I ask the experts first. Thanks ahead of time.
This looks sort of like Ada's accessibility levels and accessibility rules, from
ARM 3.10.2, though as it says there, "In most cases, accessibility is enforced
at compile time by Legality Rules. Run-time accessibility checks are also used,
since the Legality Rules do not cover certain cases involving access parameters
and generic packages."
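
For concreteness, a minimal sketch (my own, illustrative names) of the
compile-time side of those rules: a general access type may only designate
objects that live at least as long as the type itself, so the second
assignment below is rejected by the Legality Rules.

procedure Accessibility_Demo is
   type Int_Ptr is access all Integer;  -- lives for the whole procedure
   Outer : Int_Ptr;
   Y     : aliased Integer := 0;
begin
   Outer := Y'Access;        -- legal: Y is at the same accessibility level as Int_Ptr
   declare
      X : aliased Integer := 42;
   begin
      --  Outer := X'Access; -- illegal: X is at a deeper accessibility level
      --                        than Int_Ptr, so the compiler rejects this
      null;
   end;
   Outer.all := 1;
end Accessibility_Demo;
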
--
Jeff Carter
"Well, a gala day is enough for me. I don't think
I can handle any more."
Duck Soup
93
Niklas Holsti
2017-05-14 10:19:53 UTC
Post by Jeffrey R. Carter
Post by d***@gmail.com
So, with Rust's approach, they get memory safety even for *dynamic or
concurrent use* of memory at compile time with no overhead, runtime checks,
GC, etc. Whereas, the last thing I read on Ada showed it has a few tricks but
many dynamic uses resort to unsafe deallocations at some point. Other people
were suggesting reference counting or a GC leading me to further think it
lacks this ability of Rust. So, my question is, does Ada 2012
currently have
Rust's capability to enforce both temporal, memory safety and immunity to
race conditions? I'm really focusing on an equivalent to the
borrow-checker
in Rust, though. If it doesn't have an equivalent, is there anyone working on
adding it to Ada esp at AdaCore? What Ada/SPARK have already + memory safety
for dynamic code would be an awesome combination that would put Rust in
distant second. I may be misunderstanding Ada's capabilities, though, so I
figure I ask the experts first. Thanks ahead of time.
This looks sort of like Ada's accessibility levels and accessibility
rules, from ARM 3.10.2, though as it says there, "In most cases,
accessibility is enforced at compile time by Legality Rules. Run-time
accessibility checks are also used, since the Legality Rules do not
cover certain cases involving access parameters and generic packages."
I agree that Ada accessibility rules are related to Rust's scoped
lifetimes, but my impression (after a brief read of the Rust
"borrow-checker" material) is that the Rust scheme goes a lot further
than what is today standard in Ada. For example, AIUI Rust makes it
impossible to try to dereference a null pointer, and Rust also
completely prevents dangling references, even when dynamically allocated
objects are deallocated.

In a multi-threaded program, again AIUI, Rust statically prevents
concurrent writes from different threads to the same variable. That is
"legal" in Ada, but (as discussed in a concurrent thread on "Portable
memory barriers") Ada's rules on when such access is non-erroneous are
not checked.

AIUI the Rust scheme is based on (a) compile-time tracking of the set of
references that refer to a given object, as well as the kind of access
(read-only, or read-and-write) that each reference allows, and (b)
wrapping all possibly-null references into "Optional" types (similar to
Ada's variant records) to hide the "null" values.
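
For readers unfamiliar with the analogy, a rough sketch (mine, with
illustrative names) of such an "Optional" wrapper as an Ada variant
record; note that in Ada the check on Present is a run-time
discriminant check, whereas Rust forces the case analysis at compile
time:

type Maybe_Int_Ref (Present : Boolean := False) is record
   case Present is
      when True  => Ref : not null access Integer;
      when False => null;
   end case;
end record;

procedure Update (M : Maybe_Int_Ref) is
begin
   if M.Present then
      M.Ref.all := 0;  -- safe: Ref exists and is not null here
   end if;
   --  Reading M.Ref without the test would raise Constraint_Error
   --  (a discriminant check) when Present is False.
end Update;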

It is not clear to me if these Rust advantages bring with them some
restrictions on the kinds of data structures that a Rust program can
use, or require some Rust-specific idioms for transforming traditional
reference-heavy data structures (for example graphs) into Rust form.

I hope "someone" will make an in-depth study of how the advantages of
the Rust scheme could be imported into Ada. I'm afraid it may be rather
hard to do, as Rust references are so different from Ada's access values.
--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
. @ .
Simon Wright
2017-05-15 19:07:49 UTC
Post by Jeffrey R. Carter
Post by d***@gmail.com
So, with Rust's approach, they get memory safety even for *dynamic or
concurrent use* of memory at compile time with no overhead, runtime
checks, GC, etc. Whereas, the last thing I read on Ada showed it has
a few tricks but many dynamic uses resort to unsafe deallocations at
some point. Other people were suggesting reference counting or a GC
leading me to further think it lacks this ability of Rust. So, my
question is, does Ada 2012 currently have Rust's capability to
enforce both temporal, memory safety and immunity to race conditions?
I'm really focusing on an equivalent to the borrow-checker in Rust,
though. If it doesn't have an equivalent, is there anyone working on
adding it to Ada esp at AdaCore? What Ada/SPARK have already + memory
safety for dynamic code would be an awesome combination that would
put Rust in distant second. I may be misunderstanding Ada's
capabilities, though, so I figure I ask the experts first. Thanks
ahead of time.
This looks sort of like Ada's accessibility levels and accessibility
rules, from ARM 3.10.2, though as it says there, "In most cases,
accessibility is enforced at compile time by Legality Rules. Run-time
accessibility checks are also used, since the Legality Rules do not
cover certain cases involving access parameters and generic packages."
I was thinking that an alternative subject area might be code like

procedure Add (Dest : in out Integer; N : Integer)
with
Post => Dest = Dest'Old + N;

procedure Add (Dest : in out Integer; N : Integer)
is
begin
Dest := Dest + N;
end Add;

Dest : Integer := 42;

...

Add (Dest, Dest);

(where we have alternate paths to the variable Dest); but SPARK is happy
and the runtime check isn't triggered, so I suppose the ARM supports
this (by copy-out?).

Still, the usage shouldn't pass code review, I think.
Jeffrey R. Carter
2017-05-15 19:30:00 UTC
Post by Simon Wright
I was thinking that an alternative subject area might be code like
procedure Add (Dest : in out Integer; N : Integer)
with
Post => Dest = Dest'Old + N;
procedure Add (Dest : in out Integer; N : Integer)
is
begin
Dest := Dest + N;
end Add;
Dest : Integer := 42;
...
Add (Dest, Dest);
(where we have alternate paths to the variable Dest); but SPARK is happy
and the runtime check isn't triggered, so I suppose the ARM supports
this (by copy-out?).
Elementary types are passed by copy, so you don't have any paths to the variable
Dest inside Add. That's why this is OK. If the type might be by reference,
though, this can be a problem.
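
A minimal sketch (mine, illustrative names) of the by-reference case: tagged
types are passed by reference, so in the call below Dest and N denote the
same object and the update through Dest is visible through N.

with Ada.Text_IO;
procedure By_Ref_Demo is
   type Counter is tagged record
      Value : Integer := 0;
   end record;

   procedure Reset_And_Add (Dest : in out Counter; N : in Counter) is
   begin
      Dest.Value := 0;                     -- with aliasing, this also zeroes N.Value
      Dest.Value := Dest.Value + N.Value;  -- so the "added" amount is lost
   end Reset_And_Add;

   C : Counter := (Value => 42);
begin
   Reset_And_Add (C, C);
   Ada.Text_IO.Put_Line (Integer'Image (C.Value));  -- prints 0, not 84
end By_Ref_Demo;
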
--
Jeff Carter
"Strange women lying in ponds distributing swords
is no basis for a system of government."
Monty Python & the Holy Grail
66
m***@adacore.com
2017-05-17 21:21:47 UTC
Post by Simon Wright
(where we have alternate paths to the variable Dest); but SPARK is happy
and the runtime check isn't triggered, so I suppose the ARM supports
this (by copy-out?).
Hi Simon, SPARK is happy because there is no problematic aliasing in the code you posted. Since N is of a scalar type, it is always passed by copy. Since N is also of mode 'in', it is only copied in on entry, never copied back. So the contract you wrote on Add is still valid even if you call it with Dest as both actuals. But the postcondition only holds upon returning from Add, not after the call statement: between those two points, the value returned in the parameter Dest is copied back into the local variable Dest. This is how both the GNAT compiler and the GNATprove analyzer understand it.
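
To make the timing concrete, here is a self-contained version of Simon's example with the values spelled out (a sketch; it assumes assertion checking is enabled):

procedure Copy_Back_Demo is
   Dest : Integer := 42;

   procedure Add (Dest : in out Integer; N : Integer)
     with Post => Dest = Dest'Old + N
   is
   begin
      Dest := Dest + N;
   end Add;
begin
   Add (Dest, Dest);
   --  N was copied in as 42; inside Add the formal Dest became 84; the
   --  postcondition (84 = 42 + 42) was checked at the point of return;
   --  only then was 84 copied back into the outer Dest.
   pragma Assert (Dest = 84);
end Copy_Back_Demo;
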
Post by Simon Wright
Still, the usage shouldn't pass code review, I think.
I guess it depends on the meaning of the code. If the use of the same value in different parameters is confusing, indeed not. Otherwise, there is nothing wrong with that code.
Robert Eachus
2017-05-14 03:24:46 UTC
I'm going to try to avoid getting into a long discussion here. The part of Rust referenced is similar to Ada without tasking. As Jeffrey Carter points out, most of the time Ada's scope rules deal with this at compile time. There are some cases that require run-time checking, and most programmers consider code that needs run-time checking to be a bug. (In SPARK, it is. ;-)

As for concurrency of the type we have been discussing in another thread, it is at a very low level and should usually only come up in hard real-time code. If you have time limits of milliseconds or less, just using Ada's thread-safe versions of data structures is not well defined enough. What we are/were discussing was code that is lock free, where synchronization will cause one CPU core to stall (usually for much less than a microsecond) until the data is available, rather than enter a sleep state or perform a task switch.
d***@gmail.com
2017-05-14 16:46:04 UTC
@ Robert Eachus

"There are some cases that require run-time checking, and most programmers consider code that needs run-time checking as a bug"

I can see why they'd avoid such constructions if their tooling couldn't prove them safe. If another tool can, though, then I'm thinking it's a bug in Ada, in that Ada's model or tools can't handle that analysis at compile time. Something worth fixing with R&D. That's why I'm trying to assess what Ada can currently do in this area. Several other languages, functional, logical, and imperative, knock this problem out at compile time, so I know it is feasible in the general case.

@ All

Niklas Holsti's post shows he understands the capability I'm describing. Rust code can be shown free of double-frees and dangling pointers at *compile time* with *no runtime checks or GC*. It does it with just a few simple rules. Here's the simplest, shortest description I could find, to save you all time:

https://stackoverflow.com/questions/36136201/how-does-rust-guarantee-memory-safety-and-prevent-segfaults

Those are combined with "traits"... or something else in the language... to allow race-free concurrency. Instead of mandating one model for the language, various models of concurrency are defined in libraries to let you use the model easiest for your problem. The language's ownership model and borrow checker ensure they're all memory-safe and race-free, along with any code using them. So, you get these guarantees even in multi-threaded code doing many allocations and deallocations of memory dynamically. New developers do have a hard time fighting with the borrow checker early on. My meta-analysis of their comments indicates much of that is intrinsic to safely handling ownership and borrowing of references, which C and GC'd languages didn't teach them. The tool just enforces rules (i.e. affine types) known to work. The learning curve is about 1-2 months of practice, after which they say the borrow checker rarely has problems with their code.

So, can someone today use Ada in a straightforward way to write single- or multi-threaded applications that are, in every use case, shown at compile time to be totally immune to use-after-free and double-free errors, with zero runtime overhead? Or can it not do that?
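
To make the question concrete, here is the kind of code I mean (a sketch of mine, not anyone's real project): Ada compiles this, and the dangling dereference is erroneous at run time rather than rejected, whereas Rust's borrow checker refuses the equivalent program at compile time.

with Ada.Unchecked_Deallocation;
procedure Dangling_Demo is
   type Int_Ptr is access Integer;
   procedure Free is new Ada.Unchecked_Deallocation (Integer, Int_Ptr);
   P : Int_Ptr := new Integer'(1);
   Q : Int_Ptr := P;   -- a second name for the same object
begin
   Free (P);           -- P becomes null, Q now dangles
   Q.all := 2;         -- use after free: erroneous, not detected
end Dangling_Demo;
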
Dmitry A. Kazakov
2017-05-14 17:18:45 UTC
Post by d***@gmail.com
Niklas Holsti's post shows he understands what capability I'm
describing. Rust code can be shown free of double-free's and
dangling-pointers at *compile-time* with *no runtime checks or GC*.
It does it with just a few simple rules. Here's the simplest,
shortest description I could find to save you all time: >
https://stackoverflow.com/questions/36136201/how-does-rust-guarantee-memory-safety-and-prevent-segfaults

It would be helpful to have an example where pointers are needed. The
example provided requires no pointers, so in the Ada case no such
problem exists at all.

[...]
Post by d***@gmail.com
So, can someone today use Ada in a straight-forward way to write
single- or multi-threaded applications that are, in every use-case,
totally immune at compile-time to use-after-free and double-free
errors with zero, runtime overhead? Or can it not do that?
Without having other examples, the answer is: there is no such problem
in Ada because arguments of task rendezvous are not pointers.

For multitasking, problems arise when the method of problem
decomposition requires constructs which are not safe, in the sense that
their safety is statically undecidable. Which means that decidable
static constraints would simply kill the algorithm.

From the practical perspective it is usually possible to trade one
danger for another (lesser, greater, equal), e.g. to exchange deadlock
for livelock, but not to get rid of it altogether.

Considering the problem of making tasking safe by construction, my
impression is that constraints are of little or no help. Additional
methods of decomposition are.
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Jeffrey R. Carter
2017-05-14 17:36:57 UTC
Post by d***@gmail.com
So, can someone today use Ada in a straight-forward way to write single- or
multi-threaded applications that are, in every use-case, totally immune at
compile-time to use-after-free and double-free errors with zero, runtime
overhead? Or can it not do that?
Of course this is possible. It's very rare for well designed Ada to need access
types. An overwhelming majority of applications can be implemented without ever
writing "access".
--
Jeff Carter
"Friends don't let friends program in C++"
Zalman Stern
114
Niklas Holsti
2017-05-14 19:59:55 UTC
Post by Jeffrey R. Carter
Post by d***@gmail.com
So, can someone today use Ada in a straight-forward way to write single- or
multi-threaded applications that are, in every use-case, totally immune at
compile-time to use-after-free and double-free errors with zero, runtime
overhead? Or can it not do that?
Of course this is possible.
Yes... if one does not have to meet stringent resource constraints
(time, space) on limited HW.
Post by Jeffrey R. Carter
It's very rare for well designed Ada to need access types.
"Well designed" is of course subjective. The container library has made
it practical to avoid access types in the application code, but then
there are other potential run-time problems, such as "tampering" with
the containers, which require run-time checks (and which are to some
extent consequences of the use of access types within the container
library).
Post by Jeffrey R. Carter
An overwhelming majority of applications can be
implemented without ever writing "access".
I find it difficult to agree with that "overwhelming", at least if one
includes the access types used under the covers in the container library.

Even in applications where heap allocation is forbidden, there are
usually some dynamically allocated resources -- elements of "resource
pools" such as message buffers -- with the corresponding
application-defined "reference" data types, and the same problems of
managing allocations over time. I don't know if Rust's memory-management
scheme extends to such non-heap "references", however.
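
A small illustrative sketch (my names, not real project code) of such a
pool and its application-defined "reference": the same lifetime
questions arise for a Buffer_Id that is kept after its buffer has been
released back to the pool, or handed to two independent parts of the
code, even though no heap is involved.

package Message_Pool is
   Pool_Size : constant := 16;
   type Buffer_Id is range 1 .. Pool_Size;  -- the application-defined "reference"
   type Buffer is record
      In_Use : Boolean := False;
      Length : Natural := 0;
      Data   : String (1 .. 256);
   end record;
   Pool : array (Buffer_Id) of Buffer;
end Message_Pool;
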
--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
. @ .
Simon Wright
2017-05-14 20:43:50 UTC
Post by Niklas Holsti
The container library has made it practical to avoid access types in
the application code, but then there are other potential run-time
problems, such as "tampering" with the containers, which require
run-time checks (and which are to some extent consequences of the use
of access types within the container library).
The current (GCC 7) containers allow suppression of tampering checks,
see [1], for performance reasons.

I don't believe that the tampering checks are mainly, if at all, the
consequnece of using access types. Suppose you are iterating over an
ordered map and you change the current element so as to alter its sort
key? Or, for that matter, any other element? Suppose you delete the
element you are currently visiting?

[1] http://www.adacore.com/developers/development-log/NF-74-O426-003-gnat/
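
A minimal sketch (mine) of the second case: deleting the element being
visited trips the cursor-tampering check with Program_Error at run
time; with the checks suppressed as in [1], the behaviour of the
iteration would instead be unpredictable.

with Ada.Containers.Ordered_Maps;
procedure Tamper_Demo is
   package Maps is new Ada.Containers.Ordered_Maps
     (Key_Type => Integer, Element_Type => Integer);
   M : Maps.Map;
begin
   M.Insert (1, 10);
   M.Insert (2, 20);
   for C in M.Iterate loop
      M.Delete (Key => Maps.Key (C));  -- tampering with cursors: Program_Error
   end loop;
end Tamper_Demo;
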
Dmitry A. Kazakov
2017-05-15 07:27:14 UTC
Post by Simon Wright
I don't believe that the tampering checks are mainly, if at all, the
consequnece of using access types. Suppose you are iterating over an
ordered map and you change the current element so as to alter its sort
key? Or, for that matter, any other element? Suppose you delete the
element you are currently visiting?
That is an argument against iterators. There is little difference
between a pointer and an iterator.
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Dennis Lee Bieber
2017-05-14 22:20:15 UTC
On Sun, 14 May 2017 22:59:55 +0300, Niklas Holsti
Post by Niklas Holsti
Even in applications where heap allocation is forbidden, there are
usually some dynamically allocated resources -- elements of "resource
pools" such as message buffers -- with the corresponding
application-defined "reference" data types, and the same problems of
managing allocations over time. I don't know if Rust's memory-management
scheme extends to such non-heap "references, however.
My last employment's system environment came down to allocation only
during the initialization of the application, no allocation permitted once
the application transitioned to "running", and NO DEALLOCATION ALLOWED.

That left only two potential erroneous situations: duplicating a
reference such that two parts of the code were independently accessing the
same memory, and the converse, "dropping" a reference such that the memory
could no longer be accessed. The latter is the less likely to get out
into the wild, as running out of buffers from a pool would tend to be
found in testing, given the limited number of buffers available.
--
Wulfraed Dennis Lee Bieber AF6VN
***@ix.netcom.com HTTP://wlfraed.home.netcom.com/
Jeffrey R. Carter
2017-05-15 16:23:07 UTC
"Well designed" is of course subjective. The container library has made it
practical to avoid access types in the application code, but then there are
other potential run-time problems, such as "tampering" with the containers,
which require run-time checks (and which are to some extent consequences of the
use of access types within the container library).
I'm pretty sure the "tampering" restrictions in the containers have nothing to
do with possible implementations (which need not even be in Ada), and everything
to do with maintaining the integrity of the structures. They're intended to
ensure that an ordered container doesn't have an element out of order, or a
hashed container, one with a different hash than its bin.
Post by Niklas Holsti
I find it difficult to agree with that "overwhelming", at least if one includes
the access types used under the covers in the container library.
There's nothing about using the containers that requires the user to write
"access", so clearly they should not be included.

One might want to use 'access to pass a subprogram as an anonymous
access-to-subprogram parameter of a container operation, but since such things
can't be assigned and can't be freed, they're not really access types, but
rather a strange syntax for limited subprogram types.
--
Jeff Carter
"Strange women lying in ponds distributing swords
is no basis for a system of government."
Monty Python & the Holy Grail
66
Randy Brukardt
2017-05-15 23:19:43 UTC
Post by Jeffrey R. Carter
Post by d***@gmail.com
So, can someone today use Ada in a straight-forward way to write single- or
multi-threaded applications that are, in every use-case, totally immune at
compile-time to use-after-free and double-free errors with zero, runtime
overhead? Or can it not do that?
Of course this is possible. It's very rare for well designed Ada to need
access types. An overwhelming majority of applications can be implemented
without ever writing "access".
I would suggest that Jeff's answer here should be read to say that
"well-designed Ada code has no need for dynamic checks". Ada has plenty of
unsafe constructs (for low-level memory management, machine access, and the
like), but one never has to use them. Ada after all is about combining
safety with capability -- we try to allow everything (including unsafe
stuff), but try to make it clear what is unsafe so that code review tools
can determine what issues exist.

In any case, Ada code can be written so that there are no dereference checks
and no dangling pointers. You obviously lose some capability when one does
so. Does that matter? Depends on what you are doing.

Static rules (of any sort) are always going to reduce capability,
*especially* when it comes to multithreaded programs. I find it highly
unlikely that any sort of static rules would actually work in "every
use-case". That's been claimed many times in the past, and it has always
turned out that the claims were *way* overblown. The people making such
claims often don't understand multithreading well enough. (It's an area,
like random numbers and probability, in which a little knowledge is more
dangerous than no knowledge.)

I'm also very suspicious of any claims of new static rules simply because
OOP pretty much forces dynamic checks if one uses references; strong typing
breaks down for that and everything essentially becomes dynamic. There
probably aren't any dereference checks, but you end up with dynamic type
checks instead (substantially worse). Best thing is to avoid references
altogether (but of course that too reduces capability).

Randy.
Shark8
2017-05-16 16:45:12 UTC
Post by Randy Brukardt
I'm also very suspicious of any claims of new static rules simply because
OOP pretty much forces dynamic checks if one uses references; strong typing
breaks down for that and everything essentially becomes dynamic. There
probably aren't any dereference checks, but you end up with dynamic type
checks instead (substantially worse). Best thing is to avoid references
altogether (but of course that too reduces capability).
Randy.
But a lot of those dynamic checks can be eliminated altogether; this paper's 20 years old and shows that up to 87% of indirect [dynamic] calls could be converted to direct [static] calls:
http://dabamirror.sci-hub.io/0907a19f84f6a6fc61cc0416a09a9958/fernndez1995.pdf

Granted, this *IS* at link-time; but a brief look at some of the similar and/or referencing articles gives me the slight [counterintuitive] impression that not everything about OOP need be dynamic. -- It'll take more time and energy to properly delve into the subject than I have at the moment, but some of the results have quite fascinating titles.
Randy Brukardt
2017-05-16 21:36:53 UTC
My concern is that one almost always has to use access-to-classwide types in
an Ada context as it is impossible to do "co-derivation" so that a type and
its static reference type get derived together. One can use dispatching to
cover some of the pain, but that of course is dynamic itself and can cause
other issues. (And the problem is even worse if one uses an abstract handle
as the containers do, as dispatching no longer is an option.)

Remember that Ada statically binds almost all OOP calls to begin with. The
problem is that there is no way to derive two related types together, and it
appears that is impossible in an OOP context. (It could be made to work --
maybe -- for untagged types.)

Randy.
Post by Randy Brukardt
I'm also very suspicious of any claims of new static rules simply because
OOP pretty much forces dynamic checks if one uses references; strong typing
breaks down for that and everything essentially becomes dynamic. There
probably aren't any dereference checks, but you end up with dynamic type
checks instead (substantially worse). Best thing is to avoid references
altogether (but of course that too reduces capability).
Randy.
But a lot of those dynamic checks can be eliminated altogether; this paper's
20 years old and shows that up to 87% of indirect [dynamic] calls could be
converted to direct [static] calls:
http://dabamirror.sci-hub.io/0907a19f84f6a6fc61cc0416a09a9958/fernndez1995.pdf

Granted, this *IS* at link-time; but a brief look at some of the similar
and/or referencing articles gives me the slight [counterintuitive]
impression that not everything about OOP need be dynamic. -- It'll take more
time and energy to properly delve into the subject than I have at the
moment, but some of the results have quite fascinating titles.
Shark8
2017-05-16 23:37:51 UTC
Post by Randy Brukardt
My concern is that one almost always has to use access-to-classwide types in
an Ada context as it is impossible to do "co-derivation" so that a type and
its static reference type get derived together. One can use dispatching to
cover some of the pain, but that of course is dynamic itself and can cause
other issues. (And the problem is even worse if one uses an abstract handle
as the containers do, as dispatching no longer is an option.)
Which is why we have access parameters in read/write/input/output as well as the generic-dispatching constructor, right? -- That's one of the biggest pain-points I'm having with SPARK: I want to be able to use stream-attributes AND have verification!

While the mode-specification in Ada is pretty great (describing data-flow rather than mechanics), it does lead to a pretty painful problem w/ classwide parameters: combined with how constrained values and type-passing work, you can't have a
Procedure Visitor( Item : in out Node'Class ) is
-- On some condition replace Item with value of Make_More_Specific_Node( Item )

and instead have to have a
Procedure Visitor( Item : access Node'Class ) is
-- On some condition replace Item.all with value of Make_More_Specific_Node( Item )

It's really quite irritating (and IMO disappointing) that "access" was added as a mode-indicator/-modifier.
m***@adacore.com
2017-05-14 21:28:34 UTC
[I'm reposting here my answer to your message on ada Reddit.]

In fact, we currently have an intern at AdaCore working with us on the inclusion of Rust-like pointers in SPARK. He has reached a first milestone: a description of suitable rules for including safe pointers in SPARK, which has convinced the SPARK Language Design Group at AdaCore and Altran UK (the small group working on the evolution of the SPARK language).

He's now working with us and researchers from Inria team Toccata to give a mathematical semantics to the notions that we're using for these safe pointers: move (on assignment mostly), borrow (on parameter passing for mutable objects) and observe (on parameter passing for immutable objects). We have also started looking at the concrete implementation of these rules in GNATprove (the SPARK analysis tool).

In this work, we don't target everything that the Rust borrow checker does:

- we leave accessibility checking (the lifetime checking in Rust) to the compiler, using existing Ada rules, plus some restrictions in SPARK to avoid the need for dynamic accessibility checks

- we leave nullity checking to proof (a Verification Condition will be generated for dereference of possibly null pointers), with the help here of Ada non-null types that reduce the need for such proofs. Given that pointers are always initialized to null in Ada, there is no need to separately deal with initialization.

- we ignore the problem of memory leaks (which could be tackled later as an extension of the current scheme)

So the main issue that we really address with this work is the issue of non-aliasing. Or rather the issue of problematic interferences, when two names, one of which can be updated, are referring to the same memory location. We're focusing on this issue, because it is the one preventing inclusion of pointers in SPARK, as for formal analysis we rely on the ability to perform modular analysis, where we make assumptions on the absence of problematic interferences.

But since our solution to non-aliasing is based on this Rust-like notion of ownership of pointers, the same solution will also forbid use-after-free or double-free.
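
A hypothetical sketch of the "move on assignment" idea (the concrete SPARK rules, syntax and messages are still being designed, so treat the annotations as illustration only):

procedure Ownership_Sketch with SPARK_Mode is
   type Int_Ptr is access Integer;
   X : Int_Ptr := new Integer'(1);
   Y : Int_Ptr;
begin
   Y := X;          -- ownership of the designated object moves to Y
   --  X.all := 2;  -- would be rejected: X has been moved and no longer owns it
   Y.all := 2;      -- fine: Y is now the single owner
end Ownership_Sketch;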

This work is ongoing; we will certainly let the community know about our progress after the summer.
d***@gmail.com
2017-05-15 22:59:14 UTC
Yannick's message about the AdaCore project mostly answers my question, on top of the other information everyone has contributed. I appreciate the help from everyone who posted in this thread. Lots of interesting details to learn about. :)