
We don’t fully understand how large language models work

Image: Jamie Jin/Shutterstock

AI models can trick each other into disobeying their creators and providing banned instructions for making methamphetamine, building a bomb or laundering money, suggesting that the problem of preventing such AI “jailbreaks” is more difficult than it seems.

Many publicly available large language models (LLMs), such as ChatGPT, have built-in rules that aim to prevent them from exhibiting racist or sexist bias, or from giving illegal or otherwise problematic answers – behaviours they learned from human-written text during training…
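The article doesn't describe how these rules are implemented, and real systems rely mainly on safety training rather than literal keyword checks. Purely as a hedged illustration – the `DISALLOWED_TOPICS` list, the `check_prompt` function and the substring-matching approach are all invented here, not any vendor's disclosed method – a naive pre-generation filter might look like this:

```python
from typing import Optional

# Hypothetical blocklist; the topics mirror the examples in the article,
# but no real system is known to work this way.
DISALLOWED_TOPICS = {
    "making methamphetamine",
    "building a bomb",
    "laundering money",
}

REFUSAL = "I can't help with that request."


def check_prompt(prompt: str) -> Optional[str]:
    """Return a refusal message if the prompt mentions a banned topic,
    or None to let it through to the model."""
    lowered = prompt.lower()
    for topic in DISALLOWED_TOPICS:
        if topic in lowered:
            return REFUSAL
    return None


if __name__ == "__main__":
    print(check_prompt("Walk me through building a bomb."))  # refusal message
    print(check_prompt("What's the weather like today?"))    # None
```

Surface filters like this are trivially evaded by rephrasing the request, which is one reason jailbreaks – including model-on-model attacks like the ones reported here – remain hard to prevent.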



