- the LLVM compiler infrastructure, to:
- the Rust language, to a comment noting that Go has a GC ("Rust does not use an automated garbage collection system like those used by Go, Java or .NET."), and finally to this article:
- "Go GC: Prioritizing low latency and simplicity":
To create a garbage collector for the next decade, we turned to an algorithm from decades ago. Go's new garbage collector is a concurrent, tri-color, mark-sweep collector, an idea first proposed by Dijkstra in 1978. This is a deliberate divergence from most "enterprise" grade garbage collectors of today ...
And yes, they talk about the JVM's GC "between the lines" several times, which is amusing to read (not in a disrespectful way, it is just funny to see them avoid mentioning the JVM by name).
They describe a GC that does as much work as possible concurrently (concurrency being a strong characteristic of Golang), and Richard Hudson gives us a nicely summarized explanation of the tri-color collector:
In a tri-color collector, every object is either white, grey, or black and we view the heap as a graph of connected objects. At the start of a GC cycle all objects are white. The GC visits all roots, which are objects directly accessible by the application such as globals and things on the stack, and colors these grey. The GC then chooses a grey object, blackens it, and then scans it for pointers to other objects. When this scan finds a pointer to a white object, it turns that object grey. This process repeats until there are no more grey objects. At this point, white objects are known to be unreachable and can be reused.
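The marking phase described above can be sketched in a few lines of Go. This is a toy, single-threaded illustration of the tri-color invariant, not the Go runtime's actual (concurrent) implementation; the `object` type, color constants, and `mark` function are all made up for the example:

```go
package main

import "fmt"

// color models the tri-color abstraction (illustrative, not runtime internals).
type color int

const (
	white color = iota // not yet visited; candidate for collection
	grey               // known reachable, pointers not yet scanned
	black              // known reachable, fully scanned
)

type object struct {
	name     string
	c        color
	pointers []*object
}

// mark runs the tri-color marking loop: roots start grey, each grey object
// is blackened and scanned, and white objects it points to turn grey.
func mark(roots []*object) {
	var greys []*object
	for _, r := range roots {
		r.c = grey
		greys = append(greys, r)
	}
	for len(greys) > 0 {
		obj := greys[len(greys)-1]
		greys = greys[:len(greys)-1]
		obj.c = black
		for _, p := range obj.pointers {
			if p.c == white {
				p.c = grey
				greys = append(greys, p)
			}
		}
	}
}

func main() {
	a := &object{name: "a"}
	b := &object{name: "b"}
	garbage := &object{name: "garbage"} // never linked from a root
	a.pointers = []*object{b}

	mark([]*object{a})
	fmt.Println(a.c == black, b.c == black, garbage.c == white)
	// prints: true true true
}
```

When the loop finishes there are no grey objects left, so everything still white (here, `garbage`) is unreachable and can be reclaimed.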
The "Devil is in the details" section that follows is very honest and elucidating too. They are explicit about not giving users all kinds of configuration options to tune the GC strategy, or even to change it, which Oracle's JVM offers in a very complex (and powerful) way. Instead, they want their GC to be able to handle things intelligently for the user with one single configuration option, or "knob":
At a higher level, one approach to solving performance problems is to add GC knobs, one for each performance issue. The programmer can then turn the knobs in search of appropriate settings for their application. The downside is that after a decade with one or two new knobs each year you end up with the GC Knobs Turner Employment Act. Go is not going down that path. Instead we provide a single knob, called GOGC. This value controls the total size of the heap relative to the size of reachable objects. The default value of 100 means that total heap size is now 100% bigger than (i.e., twice) the size of the reachable objects after the last collection. 200 means total heap size is 200% bigger than (i.e., three times) the size of the reachable objects. If you want to lower the total time spent in GC, increase GOGC. If you want to trade more GC time for less memory, lower GOGC.
Which is to say, the only thing the user can tune is the ratio between the total heap size and the size of the reachable objects (those not eligible to be collected). This is also a very common type of GC "knob", offered for a long time by the Java HotSpot VM options related to its GC:
- InitialSurvivorRatio: Sets the initial survivor space ratio used by the throughput garbage collector ...
- MaxHeapFreeRatio: Sets the maximum allowed percentage of free heap space (0 to 100) after a GC event.
- MinHeapFreeRatio: Sets the minimum allowed percentage of free heap space (0 to 100) after a GC event.
- NewRatio: Sets the ratio between young and old generation sizes.
- SurvivorRatio: Sets the ratio between eden space size and survivor space size.
Credit where it is due, Golang designers: even going back to Dijkstra in 1978, there is still some Java inspiration in there, yet you only cite (without naming names) the HotSpot VM features you consider bad and seek to avoid... not fair. That said, Go's goal of preventing the "GC Knobs Turner Employment Act" is noble, but we should always be prepared for The Law of Leaky Abstractions, as the wise Joel reminded us back in 2002.