Don't ignore omissions!
Most semantic evaluation of LLMs focuses on accuracy and hallucination. These matter, but completeness and omission deserve just as much attention: does the generated text include all of the key information the user needs to know? Omissions are a huge problem in medical NLG, and in many other NLG tasks as well.
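As a minimal sketch of what an omission check might look like, here is a naive surface-matching version: given a list of key facts that should appear in the output, it reports the fraction that are missing. The function name `omission_rate` and the example facts are hypothetical illustrations, and a real evaluation would need semantic matching (e.g. NLI or embedding similarity) rather than substring lookup.

```python
from typing import List

def omission_rate(key_facts: List[str], generated_text: str) -> float:
    """Fraction of key facts with no mention in the generated text.

    Naive surface check: a fact counts as "covered" only if its
    (lowercased) wording appears verbatim in the output. Real
    evaluations need semantic matching, not substring lookup.
    """
    text = generated_text.lower()
    missing = [f for f in key_facts if f.lower() not in text]
    return len(missing) / len(key_facts) if key_facts else 0.0

# Illustrative example: a discharge-summary check (made-up facts)
facts = ["penicillin allergy", "follow-up in 2 weeks"]
summary = "Patient discharged; follow-up in 2 weeks."
print(omission_rate(facts, summary))  # 0.5 -- the allergy was omitted
```

Even this crude version highlights the asymmetry with hallucination checks: here the evaluator starts from what *should* be said, not from what *was* said.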