Could NLG systems injure or even kill people?
I recently attended a workshop on Safety for Conversational AI, which discussed how such systems could potentially harm people. Could NLG systems actually harm their users, perhaps even contributing to someone's death in the worst case?