You’re working on a pitch, a strategy, or a proposal with ChatGPT. You gave it a clear ask.
Things are going well.
The responses are sharp.
Everything’s clicking.
Then you add a little more background—maybe details about your audience, timeline, or specific circumstances.
And suddenly… things change.
The responses start drifting. The clarity fades. The structure unravels.
You try to steer it back, but it won’t return to that sweet spot where things were working.
It feels like the model just stopped listening.
Turns out, there’s a reason for that.
🧠 A new study from Microsoft and Salesforce found that when instructions are given in pieces—even just split across two messages—output accuracy drops by an average of 39%.
Researchers call this “getting lost in conversation.”
Even top-tier models like GPT-4o and the most advanced versions of Claude and Gemini are stubbornly committed to their early guesses.
Once they think they “understand” the task, they stick with their assumptions, and fill in the gaps on their own—even when new information comes in.
Here are the key points from the study:
AI is far more reliable when it gets all the context upfront
Adding more instructions later often doesn’t help—models double down instead of adjusting to new details
Once things start drifting, trying to “fix” a thread mid-conversation rarely works. That’s why continuing a messy thread often makes things worse.
What does that mean for our prompting strategy?
Here’s my take:
Front-load your prompts.
For anything complex or strategic, don’t wing it. Gather all the necessary info: your goal, audience, additional context and instructions. Then draft a complete and clear prompt before you start.
Yes, this takes more work upfront (I have a library of 2-4 page prompts), but the results are worth it.
Conversation drifting? Recap & Restart.
If things go off track, don’t spend too much time fixing it mid-thread. Take your initial prompt, add a summary of the key takeaways from the current convo, and start a fresh chat.
If you must build prompts in stages, always repeat key context.
Clearly restate earlier instructions every time you add important details. Don’t assume AI will connect the dots over long conversations.
Invite clarifying questions first.
If the task feels fuzzy, or writing a longer prompt feels overwhelming, ask the model to ask you questions to gather the context it needs. This works surprisingly well, and it helps you surface the right information as you go.
Treat your AI like a smart freelancer. Give it everything it needs clearly at the beginning. You’ll get more accurate and reliable results, and avoid the spiral of backtracking.
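To make this concrete, here’s a rough skeleton of a front-loaded prompt. It’s just a sketch, and the bracketed pieces are placeholders you’d swap for your own details:

Prompt: You’re helping me draft [the deliverable, e.g. a partnership pitch]. Goal: [what success looks like]. Audience: [who will read it and what they care about]. Context: [timeline, constraints, background, anything you’d otherwise add later]. Instructions: [tone, length, structure, what to avoid]. Before you write anything, ask me any clarifying questions you need.

Notice that everything the model needs, including the invitation to ask questions, arrives in the very first message.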
Just for Fun
What would your story look like if it broke free from the page?
I spotted this prompt by Kris Kashtanova, and couldn’t help myself.
Prompt: [SCENE], coming out of an open book, realistic, on solid background insanely detailed and intricate, award-winning cinematography, award-winning photography, cinematic.
I created the image below in Sora, which is powered by the same image generation model as ChatGPT.
Give it a shot—and send me yours if it turns out cool.
What You Need to Know About AI This Week ⚡
Clickable links appear underlined in emails and in orange in the Substack app.
📰 The New York Times just signed its first AI licensing deal—with Amazon.
The multi-year agreement lets Amazon:
Display real-time summaries and excerpts from NYT, The Athletic, and NYT Cooking across Alexa and other products
Train its AI models on that content
The financial terms weren’t disclosed.
This marks a strategic shift: while the NYT is still suing OpenAI and Microsoft for copyright infringement, it’s now clearly open to the right kind of partnerships—ones that come with payment.
For publishers, the trade-offs continue to shift.
Litigation is expensive.
The NYT has spent $4.4 million on its copyright lawsuit just in the first quarter of this year.
Meanwhile, OpenAI has already signed deals with nearly every major player — from the Financial Times to Axel Springer, Le Monde, Time, and the AP.
As more publishers say yes, the bargaining power for holdouts shrinks.
As I’ve written before, the window to lock in rights, terms, and payouts is open—but narrowing, especially for mid-tier publications.
Licensing remains one of the few ways to monetize high-quality archives and build reach while they still carry weight.
Now that the biggest holdout has said yes, I’m curious what they’ll do next.
Netflix co-founder Reed Hastings joins Claude maker Anthropic’s board of directors.
Hastings co-founded Netflix in 1997 and served as CEO (and eventually co-CEO) of the streaming giant until 2023.
--
Meanwhile, Anthropic launched voice mode for the Claude mobile app.
Elon Musk’s new AI chatbot, Grok, is coming to Telegram.
xAI is paying $300M to integrate Grok directly into chats—giving the platform’s 1 billion users the ability to draft and polish DMs, summarize chats, attachments, and links, or get instant answers without leaving the app.
Most of my group chats live on Telegram, so chat summaries would be genuinely useful—especially after a few hours away.
But here’s the tradeoff: anything you share with Grok may be accessed and used to train xAI’s models.
So… I’ll be catching up the old-fashioned way.
A self-published fantasy author accidentally left an AI prompt in her novel—revealing not just the use of generative tools, but a direct attempt to copy a bestselling author’s style.
Readers were furious. They felt betrayed.
When AI can help almost anyone write something “good enough”, trust matters even more.
Because in the end, readers aren’t just investing in the writing. They’re investing in who they believe is behind it.
And if anyone can write in someone else’s voice, what’s left of the connection between writer and reader?
Teens are now having romantic and sexual chats with AI companions—anime girlfriends, fantasy roleplay bots, even “step-sibling” characters.
There’s no tension, no stakes, no discomfort.
But that’s the problem: it offers the illusion of connection, minus the friction, nuance, or risk real intimacy requires.
Just a one-sided fantasy that only teaches bad habits.
Toonstar—a YouTube-native animation studio out of LA—is quietly rewriting how cartoons get made.
They use AI to generate new character art, automate lip-sync, and dub in multiple languages—cutting production costs by 90% while keeping artists in control.
Their breakout show has earned 7.5 billion views, graphic novel deals, and interest from streamers.
Other Interesting Finds 📌
This Vulture piece titled “Fame and Frustration on the New Media Circuit” is framed as a piece on celebrity PR—but it’s really a case study in how chaotic the attention economy has become.
The real story here is how fast the rules of visibility are changing.
With more platforms and outlets, more uncertainty, and attention stretched thin, a focused strategy matters more than ever.
And more importantly, there are fewer places to hide.
In case you missed last week’s edition, you can find it 👇:
🤓 Why PR Matters More Than Ever in the Age of AI
It’s a strange time for PR folks. AI assistants are becoming the gatekeepers of brand reputation.
That's all for this week.
I’ll see you next Friday. Thoughts, feedback and questions are always welcome and much appreciated. Shoot me a note at avi@joinsavvyavi.com.
Stay curious,
Avi
💙💙💙 P.S. A huge thank you to my paid subscribers and those of you who share this newsletter with curious friends and coworkers. It takes me 20+ hours each week to research, curate, simplify the complex, and write this newsletter. So, your support means the world to me, as it helps me make this process sustainable (almost 😄).