Seeing as Grok has been in the news lately for its lack of guardrails (i.e. giving detailed instructions on how to build chemical weapons along with links to where one can source the materials, planning assassinations, and giving advice on how to do very unethical things involving minors, all without even jailbreaking the model), I wanted to ask how important everyone thinks guardrails on AI models are.
Many companies spend a good deal of time and money on red teaming and on making it difficult to use their models for societal harm. xAI doesn't seem to care at all.
Where would you draw the line?