
Vale Programming Language

Posted on: 2023-02-06

Conclusion

Vale has been around for a while. It still retains a somewhat experimental feel, as the developers come up with new ideas to try out. There are some quite interesting and novel ideas here.

Good

Not sure

Bad

Ugly

Notes

Has a front end written in Scala, and a backend in C++, presumably because it uses LLVM for the final code generation.

Uses = to create a variable, and set to assign. Not really liking that.

exported func main() {
  a str = "world!";
  println("Hello " + a);
}

Can put the type directly after the variable name.
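
As for set, a rough sketch of reassignment (my own example; I'm assuming a plain local can be reassigned, though some versions may require marking the local as varying):

exported func main() {
  a str = "world!";
  println("Hello " + a);
  set a = "Vale!";       // assumption: set reassigns the existing local
  println("Hello " + a);
}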

Runtime arrays can't have their capacity changed. List seems to be the dynamic array type.

Uses prefix type syntax, so []int(n), where n is the capacity, which can't change.
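
A hedged sketch of what that might look like (the push/add calls and the import path are my assumptions, not checked against the current stdlib):

import stdlib.collections.*;   // assumption: where List lives

exported func main() {
  n = 3;
  arr = []int(n);        // runtime-sized array; the capacity n is fixed
  arr.push(10);
  arr.push(20);
  arr.push(30);
  // arr.push(40);       // would exceed the fixed capacity

  l = List<int>();       // List grows as needed, i.e. the dynamic array
  l.add(10);
  l.add(20);
}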

Has Universal Function Call Syntax (UFCS).
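
For example (my own sketch), a free function can be called with method syntax:

struct Ship {
  name str;
}

func launch(ship &Ship) {
  println("Launching " + ship.name);
}

exported func main() {
  ship = Ship("Serenity");
  launch(&ship);      // normal call
  ship.launch();      // same function, called via UFCS
}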

The Higher RAII idea is interesting.

In fact, in Vale, the whole "destructor" side of the language is built from one small rule: "If an owning reference goes out of scope, call .drop() on it. If no public .drop() exists, give a compile error."

So this is a linear typing idea. If there is a drop, it's like C++ or Rust (affine-type-like), where the drop is called automatically. Otherwise it's left to the user. How does the compiler know which functions count as user-defined drops? I guess that's the owning-reference part: any function that consumes ownership will do.
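
A hedged sketch of the idea (the struct and functions are hypothetical, and I'm assuming destructuring is how an owned value is actually consumed):

struct Request {
  id int;
}
// No public drop() is defined for Request, so letting one go out of
// scope would be a compile error; some function must consume it.

func respond(request Request) {
  // Taking Request by value consumes ownership; destructuring it
  // (assumption) is what actually destroys it.
  [id] = request;
  println("responded to a request");
}

exported func main() {
  r = Request(42);
  respond(r);     // forgetting this call would fail to compile
}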

Has tuples constructed with parens, and accessed via .0 etc.
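
For example (sketch):

exported func main() {
  pair = (42, "towel");
  println(pair.1);      // prints "towel"
}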

Has weak references; they require marking the struct. Using one requires calling lock, which returns an optional reference to the item if it still exists.
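
Roughly like this, I think (the weakable keyword, the && weak reference type, and the Opt methods are all assumptions on my part):

weakable struct Ship {      // assumption: structs must be marked weakable
  name str;
}

struct Missile {
  target &&Ship;            // assumption: && denotes a weak reference
}

func fire(missile &Missile) {
  maybeShip = missile.target.lock();   // lock returns an optional reference
  if not maybeShip.isEmpty() {
    println("Firing at " + maybeShip.get().name);
  } else {
    println("Target is already gone");
  }
}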

Has interfaces. Instead of having to write the functions as part of an impl block, it just declares that the struct implements one with impl Bipedal for DarkElf;. Is that better than defining what is implemented as part of the type declaration? I guess an implementation can be made in a different module. In the future directions section, it goes for a more conventional approach.
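
A sketch of how that looks (the impl line is from the guide; the virtual annotation and the way the override is matched up are my approximations):

interface Bipedal {
  func walk(virtual b &Bipedal);   // assumption: virtual marks the dispatched parameter
}

struct DarkElf {}
impl Bipedal for DarkElf;          // declares the implementation, outside the struct

func walk(elf &DarkElf) {
  println("DarkElf walking");
}

exported func main() {
  elf = DarkElf();
  elf.walk();
}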

Generics seem similar to C++ templates.
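
A minimal sketch of what a generic function might look like (angle-bracket syntax assumed):

func identity<T>(x T) T {
  return x;      // assumption: return is the keyword (older samples used ret)
}

exported func main() {
  println(identity("hello"));
}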

Has destructuring via [a, b, c] = ... syntax.
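
For example (sketch):

exported func main() {
  triple = (1, 2, 3);
  [a, b, c] = triple;        // destructuring consumes the tuple
  println(a + b + c);        // assumption: println accepts an int
}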

Regions seem somewhat inspired by Rust's borrowing markup. In the mutex example, perhaps the idea is to convey more information to the compiler about ownership; it seems like you could do something pretty similar in C++ without the region idea. Succeeding does a bunch of stuff, severing references. It seems akin to moving between threads by removing any way for the current thread to reach the object.

The structs section on FFI shows that imm types are allocated on the heap and have to be freed as they are consumed. Perhaps that extra overhead is by design, as [Fearless FFI] describes how mut types are scrambled and can only be accessed via functions, while imm types are allocated and copied (i.e. the callee cannot corrupt the original, as it gets a copy). You could do such a copy on the stack, which wouldn't need the heap allocation; nothing is really lost that way, because the FFI receiver can corrupt the stack anyway. The idea of handing over a copy does add some extra protection.
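
A hedged sketch of the Vale side (the exported/extern/imm spelling is from my memory of the FFI guide; the C side would receive a heap-allocated copy of the Vec3 and be responsible for freeing it):

exported struct Vec3 imm {     // imm: immutable, copied across the FFI boundary
  x int;
  y int;
  z int;
}

extern func sum(v Vec3) int;   // implemented in C; it gets (and must free) a heap copy

exported func main() {
  v = Vec3(3, 4, 5);
  println(sum(v));             // assumption: println accepts an int
}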

Generational references add a generation value to non-owning references. When such a reference is used, it can be checked against the target's current generation. There is also a scope tethering concept that adds an extra safeguard by marking the object before using it; the tether delays the object's destruction.
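
The mechanism is invisible in the source; a sketch (my own) of where a check would happen:

struct Ship {
  fuel int;
}

exported func main() {
  ship = Ship(42);          // owning reference; the allocation carries a generation
  shipRef = &ship;          // non-owning reference; remembers that generation
  println(shipRef.fuel);    // the access compares the remembered generation with the
                            // allocation's current one, and halts on a mismatch
}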

In Inko we have a somewhat similar mindset. There are owning and non-owning references. In debug builds, non-owning references increment and decrement a reference count; when the owning reference goes out of scope, if the count is not 0, it can panic. In a release build the check can be removed, and some of the checks can be removed by the compiler anyway. Seems simpler, and potentially faster (in release, if less safe).

Links