A lot of his silent givens also go into Eliezer's coherent extrapolated volition approach to FAI (Friendly AI). I don't, however, think there is much need to discuss them unless someone specifically brings them up, since materialism doesn't seem that controversial.
What he is saying is pretty non-PC: objective comparisons between moral systems, judged on their own claims, are possible. This puts absolute cultural relativists in a hard spot. However, I don't see any reason for this to become a great issue; people can keep older moral systems even if those systems are shown to fail at their stated objectives, so long as they declare such systems terminal values.
And he only makes a case for proximate values, and perhaps for values we mistake for terminal ones but which are really just promoted proximate values.