AI just does not seem to be ready for prime time. It's not that humans don't make mistakes, but the apparent inability of AI to stay between guardrails (or of humans to figure out how to build those guardrails properly) should be very concerning. James referenced the Anthropic/DOW dust-up, and for me this kind of behavior really calls into question why Anthropic thought their internal code would have been better at avoiding improper use.
Right after the vampire clip: https://www.youtube.com/shorts/xEEvpTeIbhs