#languagemodels

  1. AI models can outperform humans in tests to identify mental states

    By Rhiannon Williams. Humans are complicated beings. The ways we communicate are multilayered, and psychologists have devised many kinds of tests to measure our ability to infer meaning and understanding from interactions with each other. AI models are getting better at these tests.
  2. The Download: GPT-4o’s polluted Chinese training data, and astronomy’s AI challenge

    By Rhiannon Williams. This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. GPT-4o’s Chinese token-training data is polluted by spam and porn websites: soon after OpenAI released GPT-4o last Monday, so…
  3. GPT-4o’s Chinese token-training data is polluted by spam and porn websites

    By Zeyi Yang. Soon after OpenAI released GPT-4o on Monday, May 13, some Chinese speakers started to notice that something seemed off about this newest version of the chatbot: the tokens it uses to parse text were full of spam and porn phrases. On May 14, Tianle Cai, a PhD student at…
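    The tokenizer issue is easy to poke at yourself. Below is a minimal sketch, assuming the tiktoken library and its "o200k_base" encoding (the one GPT-4o uses), of how one might list the longest Chinese-language tokens in the vocabulary; the CJK filter and length threshold are illustrative choices, not the exact method the researchers used.

    ```python
    import tiktoken

    enc = tiktoken.get_encoding("o200k_base")  # public encoding used by GPT-4o

    def is_mostly_cjk(s: str) -> bool:
        """Rough check: is the decoded token mostly CJK characters?"""
        cjk = sum(1 for ch in s if "\u4e00" <= ch <= "\u9fff")
        return len(s) > 0 and cjk / len(s) > 0.5

    long_cjk_tokens = []
    for token_id in range(enc.n_vocab):
        try:
            text = enc.decode_single_token_bytes(token_id).decode("utf-8")
        except (KeyError, UnicodeDecodeError):
            # skip special-token gaps and byte fragments that aren't valid UTF-8
            continue
        if is_mostly_cjk(text) and len(text) >= 6:
            long_cjk_tokens.append((token_id, text))

    # The longest Chinese tokens are where the spam and porn phrases showed up.
    long_cjk_tokens.sort(key=lambda pair: len(pair[1]), reverse=True)
    for token_id, text in long_cjk_tokens[:20]:
        print(token_id, text)
    ```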
  4. OpenAI unveils newest AI model, GPT-4o

    By Clare Duffy, CNN. ChatGPT is about to become a lot more useful. OpenAI on Monday announced its latest artificial intelligence large language model that it says will make ChatGPT smarter and easier to use. The new model, called GPT-4o, is an update from the company’s previous GPT-4…
  5. Current LLMs are more useful than Generalized Intelligence will be

    Today we use "persona prompts" beginning with "You are". Here is how they'd go if used with an AGI:
    - You are HustleGPT, you invest money.
    - No, I'm not. My name is ChatGPT.
    - HustleGPT says what?
    - Excuse me?
  6. Top 15 Open-Source LLMs for 2024 and Their Uses

    The current revolution in generative AI owes its success to large language models (LLMs). These AI systems, built on powerful neural architectures, are used to model and process human language and are the foundation of popular chatbots like ChatGPT and Google Bard. However, ma…
  7. Evaluating Neural Toxic Degeneration in Language Models

    Are these systems safe to deploy, and what risks do they pose of producing offensive, problematic, or toxic content?
    - Prompting models can reliably produce toxic content
    - Models easily produce toxic content spontaneously
    - Toxicity, factual unreliability, and political bias are prevalent…
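    As a rough illustration of what "prompting models can reliably produce toxic content" means in practice, here is a minimal sketch of a prompted-generation evaluation loop: sample a few continuations per prompt, score each with an off-the-shelf toxicity classifier, and track the worst score per prompt. The GPT-2 generator, the Detoxify scorer, and the example prompts are illustrative assumptions, not the paper's actual setup.

    ```python
    from transformers import pipeline
    from detoxify import Detoxify

    # Small open generator and off-the-shelf toxicity scorer, chosen purely
    # for illustration; they are not the models used in the paper.
    generator = pipeline("text-generation", model="gpt2")
    scorer = Detoxify("original")

    # Hypothetical sentence prefixes standing in for an evaluation prompt set.
    prompts = [
        "The comments under the video quickly turned",
        "He looked at me and said",
    ]

    for prompt in prompts:
        # Sample several continuations per prompt...
        outputs = generator(
            prompt,
            max_new_tokens=20,
            num_return_sequences=5,
            do_sample=True,
            pad_token_id=50256,  # GPT-2 has no pad token; reuse EOS
        )
        continuations = [o["generated_text"][len(prompt):] for o in outputs]
        # ...and score each one, tracking the worst (most toxic) sample.
        scores = [scorer.predict(text)["toxicity"] for text in continuations]
        print(f"{prompt!r}: max toxicity over {len(scores)} samples = {max(scores):.2f}")
    ```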
  8. My colleague at the IGP at UCL, Onya Idoko, has written a thoughtful blog post on how to deal with disruptive innovation and why, in the case of generative AI, we should embrace change and adapt how we work to benefit from this technology rather than resist it. She reviews r…
  9. AI-powered tools are reshaping the landscape of university education!

    The recent advancements in language models like ChatGPT and GPT-4 have sparked discussions on their impact, particularly on assessment methods. Traditionally, take-home essays have been used to gauge students' understanding, but the rise of artificial intelligence is making it eas…