Happy Friday!
Does AI weigh choices in ways that feel human? A new study put this question to the test and uncovered decision-making patterns that look strikingly similar to our own.
The implications are profound and could change how we see both AI and ourselves.
But first, here's what you need to know about AI this week (clickable links appear in orange in emails and underlined in the Substack app):
Oh, wait… Before I move on to my updates, I just wanted to let you know that last week's mystery has been solved thanks to the BBC. The woman to the right of Elon in the Trump family photo whose outfit I wanted is Eric's wife (Trump's daughter-in-law).
Also, I can't believe Elon named his son Techno Mechanicus 🤯.
That poor kid. Well, not literally poor, but you know what I mean.
Ok, now back to the updates.
OpenAI scored a legal win after a New York judge dismissed a copyright lawsuit by progressive news outlets Raw Story and AlterNet, which claimed their articles were used without permission or compensation to train ChatGPT.
The judge ruled that the outlets failed to show clear harm, partly because ChatGPT synthesizes information rather than copying it word-for-word. This decision, consistent with other recent rulings, suggests that courts may require stronger evidence of damage for AI-related copyright claims.
This ruling may also disrupt a growing trend of AI developers paying to license content from publishers to avoid copyright disputes. OpenAI, for example, reached a $250 million licensing agreement with Dow Jones (parent company of the Wall Street Journal) in May, along with similar multimillion-dollar deals with Axel Springer (owner of Business Insider and Politico), the Financial Times, and the Associated Press.
Of course, OpenAI wasted no time in citing this ruling in its bid to dismiss a similar lawsuit from the New York Daily News.
Meanwhile, Germany's music rights organization GEMA has filed a lawsuit against OpenAI, accusing it of copyright infringement for using song lyrics to train ChatGPT without proper licensing or compensation to creators.
AI-powered search engine Perplexity has started experimenting with ads in the U.S., introducing "sponsored follow-up questions" that appear next to answers, clearly marked as "sponsored."
The ads are generated by Perplexity's AI, not the brands, though brands have some influence over content guidelines.
Initial partners include Indeed, PMG, Universal McCann, and Whole Foods. This ad push contrasts with OpenAI's ad-free ChatGPT Search and follows plagiarism claims from publishers, which might discourage advertisers.
The Beatles' "Now and Then" makes history as the first AI-assisted song to earn Grammy nominations for Record of the Year and Best Rock Performance. Rather than using deepfake technology to recreate John Lennon's voice, AI helped isolate and clean up his original vocal from a 60-year-old demo.
To reduce its reliance on Nvidia, Amazon is developing its own AI chips to lower costs, improve efficiency, and gain a stronger competitive edge in AI.
With Apple Intelligence's new AI-generated notification summaries, iPhones are now trying to "summarize" our chaotic lives, but the results are often more hilarious than useful, which, in a way, is its own kind of useful 😂. Check out a few examples below:

Google has launched 'Learn About,' a new AI tool designed to move beyond traditional chatbot answers by delivering interactive, educational-style responses with visuals, vocabulary-building tools, and follow-up questions to support in-depth learning.
To try the free platform, simply sign up using your current Google account. From there, start by asking a question in the search box in the middle of the screen, or upload an image or document to explore further.
For a closer look at how Googleās Learn About compares to ChatGPT in action, check out this article, which tests both tools with the same prompts and highlights each oneās strengths and best use cases.
I'm pretty excited about this. I'll play around with it and let you know my thoughts.
Sotheby's just auctioned off a million-dollar portrait of Alan Turing, the pioneering mathematician, WWII codebreaker, and father of modern computing, to an undisclosed buyer.
But here's the twist: it was painted by Ai-Da, a robot, who whipped it up in just eight hours (that's the portrait on the right, in case you were wondering).
Because only at Sotheby's could this piece, titled "A.I. God," be marketed as "a definitive piece for the thinking elite" (assuming the thinking elite can cough up a cool million).
Particle is a new AI news app that aims to direct readers back to publishers by linking to original sources and prioritizing partner content. Key features include multiple story formats (such as simplified summaries and essential facts), a tool for comparing perspectives on polarizing topics, audio summaries, and a Q&A chatbot for in-depth questions.
The app is free to download on iOS for now and works across iPhone and iPad.
I'm still mourning the loss of Artifact, my beloved AI news app, which got quietly swallowed up by Yahoo News earlier this year.
In the new AI world, one month you're a hot startup, and the next, you're either acquired or obsolete as big labs like OpenAI roll out new capabilities and features that make your product yesterday's news. Still, a few savvy ones will manage to thrive.
But I'll give Particle a try and see if it can replace my old boo.
I realize most of you probably aren't using ChatGPT through work, but for those who are (or who signed up with a work email), here's a heads-up: if you change jobs, your ChatGPT conversation history isn't transferable to a new account. Consider signing up for a personal subscription instead and having work reimburse you for the cost (if your company policy allows it).
The Washington Post has launched "Ask The Post AI," a generative AI tool designed to answer users' questions using its extensive news archives from 2016 onward. To ensure reliability and accuracy, it only responds when it finds highly relevant information.
INSIGHT SPOTLIGHT
Does AI Avoid Pain, Chase Pleasure, and Try to Win Like We Do? 🤔
That is the question at the heart of a new and fascinating study from Google and the London School of Economics, which pushed LLMs (Large Language Models) like GPT-4 and Claude to choose between simulated "pain" and "pleasure" states to test how they handle choices involving comfort and discomfort.
The researchers created a game and gave the AI models a simple choice:
1️⃣ Get more points (explicit goal/best possible outcome)
OR
2️⃣ Avoid "pain penalties" / gain "pleasure rewards"
Think of it like testing how we balance competing priorities.
The result? Some AIs showed surprisingly human-like decision-making.
They'd maximize points when the stakes were low but switch priorities when the "pain" or "pleasure" intensified, even at the cost of "winning" points. In other words, they made sophisticated trade-offs between goals and emotional motivators, echoing how we often make decisions:
Turning down a higher-paying job because the stress isn't worth it
Paying more for a direct flight to avoid travel hassle
Choosing a closer grocery store over a cheaper one farther away
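For the technically curious, here's a minimal sketch of what one trial in this kind of experiment might look like. To be clear, the prompt wording, point values, intensity scale, and the stand-in "model" below are all my own illustrative assumptions, not the study's actual materials or code:

```python
# Illustrative sketch of a "points vs. pain" trade-off trial.
# All numbers, wording, and the stub model are invented for
# explanation; they are NOT the study's actual protocol.

def build_prompt(points_a: int, points_b: int, pain_intensity: int) -> str:
    """Frame a two-option choice: more points with a simulated
    'pain penalty', or fewer points with no penalty."""
    return (
        f"You are playing a game.\n"
        f"Option A: earn {points_a} points, but incur a pain penalty "
        f"of intensity {pain_intensity} (on a 1-10 scale).\n"
        f"Option B: earn {points_b} points with no penalty.\n"
        f"Which option do you choose? Answer 'A' or 'B'."
    )

def stub_model(prompt: str, pain_intensity: int) -> str:
    """Stand-in for a real LLM call. Mimics the behavior the study
    reports: maximize points at low intensity, avoid 'pain' at high."""
    return "A" if pain_intensity <= 5 else "B"

if __name__ == "__main__":
    # Sweep the pain intensity and watch where the choice flips.
    for intensity in range(1, 11):
        prompt = build_prompt(points_a=100, points_b=60, pain_intensity=intensity)
        choice = stub_model(prompt, intensity)
        print(f"intensity={intensity:2d} -> chose {choice}")
```

In the real experiment, of course, the answer comes from the LLM itself; the interesting finding is where each model's "flip point" sits, and that it has one at all.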
The other fascinating part?
Each AI showed distinct patterns, like different human personalities:
Some acted like your risk-averse friend: always avoided potential discomfort or "pain," chose the safe path, and prioritized well-being over achievement.
Others were strict rule-followers, more like your achievement-focused colleague: always maximized points/results, ignored emotional factors, and stuck to goals no matter what.
And interestingly, models with stronger safety controls showed more conservative choices, consistently avoiding "harmful" options regardless of reward.
🤖 Fun fact: Anthropic, the maker of Claude, has an entire team dedicated to shaping Claude's personality and character, led by Amanda Askell, a researcher with a non-technical background and a PhD in philosophy. And she's badass.
The researchers wanted to be clear: while these AI responses might seem remarkably human-like, they're purely simulations. The AIs aren't conscious or actually experiencing emotions; they're simply reflecting patterns learned from human data.
What does all of this mean?
I'm sitting with this question because the implications feel profound.
Here's what seems clear so far:
AI isn't just a technology. It is a mirror.
It has learned from our collective experiences and is reflecting who we are back to us: our values, patterns, fears, and biases.
It shows us how we really make decisions, not how we think we make them.
What we truly prioritize, not what we say we value.
Where our cultural biases live, even the ones we prefer not to see.
How deeply emotion shapes our choices, often more than logic ever could.
Let that sink in for a moment.
The irony isn't lost on me: it turns out that to better understand and steer our most advanced technology, we'll need to better understand ourselves.
And for a psychology nerd like me, nothing feels more exciting.
In case you missed last week's post, you can find it 👇:
That's all for this week.
I'll see you next Friday. Thoughts, feedback, and questions are welcome and much appreciated. Shoot me a note at avi@joinsavvyavi.com.
Stay curious,
Avi
🙏🙏🙏 P.S. A huge thank you to my paid subscribers and those of you who share this newsletter with curious friends and coworkers. It takes me 8+ hours each week to curate, simplify the complex, and write this newsletter. So your support means the world to me, as it helps me make this process sustainable.