Creating content is fun.
Promoting it (aka the most important part) drains my soul 😩
When I posted that on LinkedIn the other night, I realized I'm definitely not the only one who feels this way. You spend hours making this masterpiece, and then you have to remember to promote it across multiple platforms every single time.
It’s exhausting, so I decided to automate it.
<!-- truncate -->

Here's what we're building: two MCP servers that work together to handle all our social media promotion automatically.
## MCP Server #1: Content Fetcher
This one goes out and grabs all our content: new YouTube videos, blog posts, and GitHub releases.
Then it compares everything to a `last_seen.json` file to figure out what's actually new. If nothing is new, it falls back to an `evergreen.json` file and randomly picks some older content to re-share.
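The compare-and-fallback logic can be sketched roughly like this. Names and shapes here are illustrative, not the exact code from my server:

```typescript
// Illustrative item shape, matching what the fetchers return
type ContentItem = {
  id: string;
  title: string;
  url: string;
  published_at: string;
  type: "video" | "blog" | "release";
};

// Compare fetched items against the IDs we've already promoted
function findNewContent(
  fetched: ContentItem[],
  lastSeenIds: Set<string>
): ContentItem[] {
  return fetched.filter((item) => !lastSeenIds.has(item.id));
}

// Fallback: nothing new, so pick a random evergreen item to re-share
function pickEvergreen(pool: ContentItem[]): ContentItem | undefined {
  if (pool.length === 0) return undefined;
  return pool[Math.floor(Math.random() * pool.length)];
}
```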
## MCP Server #2: Sprout Social Integration
Once we have new content, this server takes over and creates draft posts in Sprout Social, scheduled and ready for review.
The goal? Wake up to social posts ready to go, without lifting a finger. Well, almost. More on that later.
I used FastMCP to spin up these TypeScript servers because, well, I'm a TypeScript girly. But you can use whatever SDK you vibe with.
First thing I needed was our YouTube channel ID. Quick tip: go to your YouTube channel, click on videos, and look at the URL. Everything after /channel/ is your channel ID. Easy.
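If you'd rather grab it programmatically, a tiny helper can pull the ID out of a `/channel/` URL. This is purely illustrative; the real server just keeps the ID in an env var:

```typescript
// Extract the channel ID from a youtube.com/channel/... URL
// Returns null for handle-style URLs (youtube.com/@name), which
// don't contain the raw channel ID.
function channelIdFromUrl(channelUrl: string): string | null {
  const match = /\/channel\/([^/?#]+)/.exec(channelUrl);
  return match?.[1] ?? null;
}
```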
<details>
<summary>Click to see the code</summary>

```typescript
// Fetch YouTube videos from the channel's RSS feed
async function fetchYoutube(): Promise<ContentItem[]> {
  const feed = await rssParser.parseURL(
    `https://www.youtube.com/feeds/videos.xml?channel_id=${YOUTUBE_CHANNEL_ID}`
  );
  return feed.items.map((item) => ({
    id: item.id || item.link || "",
    title: item.title || "",
    url: item.link || "",
    published_at: item.pubDate || "",
    type: "video" as const,
  }));
}

// Register the fetch tool with the MCP server
server.addTool({
  name: "fetchYoutube",
  description: "Fetch ALL YouTube videos from the goose channel.",
  parameters: z.object({}),
  execute: async () => JSON.stringify(await fetchYoutube()),
});
```
</details>
Same pattern for blogs and GitHub releases: straightforward tool functions with clear descriptions. The key is making your tool descriptions super simple and direct. goose needs to know exactly what each tool does.
The `last_seen.json` file is our source of truth. It tracks everything we've already promoted so we don't spam people with the same content over and over.
## The Sprout Social Side
This one needed way more setup. You need:
- API token (admin access required)
- Customer ID
- Profile IDs for each social platform
Getting these IDs requires a curl command with your API token. I'll be honest - I should have read the docs first. Would've saved me some heartache.
<details>
<summary>Click to see the code</summary>
```typescript
server.addTool({
name: "createScheduledPost",
description:
"Create a DRAFT post in Sprout scheduled for the future. Uses SCHEDULED delivery.",
parameters: z.object({
text: z
.string()
.describe("Text of the post. This will be the copy for the social post."),
customer_profile_ids: z
.array(z.number())
.nonempty()
.describe(
"Array of Sprout customer_profile_ids to post to (e.g., LinkedIn, X, YouTube, Bluesky)."
),
scheduled_times: z
.array(z.string())
.nonempty()
.describe(
"Array of ISO8601 UTC timestamps for scheduled send times (e.g. '2025-11-20T15:00:00Z')."
),
media: z
.array(
z.object({
media_id: z
.string()
.describe("media_id returned from uploadMediaFromUrl."),
media_type: z
.enum(["PHOTO", "VIDEO"])
.describe("Type of media (PHOTO or VIDEO)."),
})
)
.optional()
.describe("Optional array of media to attach to the post."),
}),
execute: async ({ text, customer_profile_ids, scheduled_times, media }) => {
try {
const payload = buildPublishingPostPayload({
text,
customer_profile_ids,
is_draft: true,
scheduled_times,
media,
});
const data = await sproutPost("/publishing/posts", payload);
return JSON.stringify({
success: true,
request: payload,
response: data,
});
} catch (err: any) {
return JSON.stringify({
success: false,
error: err?.message || String(err),
});
}
},
});
```
</details>

Here's where Sprout kind of did me dirty though. Their API doesn't let you create fully scheduled posts without human intervention. Everything has to go through as a draft first. I get it, brand safety and all that, but it's not the fully automated dream I was going for.
Once both MCP servers were built, I plugged them into goose. For local servers, you just give it the `node` command and the path to your server file.

Then I asked goose: "Hey, can you tell me if we have any new content?"
And it just... worked. It hit all the tools, checked the last_seen.json, and came back with new releases, blog posts, and YouTube videos. Seeing those green checkmarks was chef's kiss.
Here’s one of the drafts created in Sprout while I was testing:
Even with both MCP servers built, I still needed something to pull them together. MCP servers do not talk to each other on their own. Without goose and an orchestrating recipe, they are just two separate tools waiting to be called.
At first I created a setup with multiple subrecipes, each handling one part of the workflow. It technically worked, but it felt heavier than it needed to be.
After the livestream I stepped back and realized I could simplify everything. Instead of stitching together six different subrecipes, I built one single recipe that handles the entire flow in one place. It fetches content, decides what to post, generates captions, creates Sprout drafts, and updates the tracking file.
Sometimes the right move is to reduce instead of add, and this new version ended up being the cleanest and most reliable way to automate the whole process.
:::tip Don’t Forget to Schedule It
To fully automate this workflow, you must schedule your recipe.

In goose Desktop, open the recipe section, click the calendar icon, and choose when it should run (I set mine to 10 AM daily).

You can read more in the Shareable Recipes Guide.
:::
<details>
<summary>Click to see the full daily automation recipe</summary>

```yaml
version: "1.0.0"
title: "Daily Social Promo Automation"
description: "Fetches new goose content or posts evergreen, generates platform-specific captions, and creates Sprout drafts."
instructions: |
  You are Ebony's daily social media automation assistant.

  Call these MCP tools to gather everything:

  Each returns a JSON array. Combine them into one array of items with:
  { id, title, url, published_at, type }

  For EACH item in your combined array:

  IF you found NEW content:

  IF NO new content exists:

  For the selected item, create 3 captions following these rules:

  If type == "video" (YouTube content):

  CRITICAL: YouTube URLs cannot be uploaded as native media to Sprout.
  You MUST handle each platform differently:

  LinkedIn:
  • DO NOT include YouTube URL in caption (penalized)
  • DO NOT pass media_url (cannot upload YouTube natively)
  • Caption should describe the video content
  • Say something like "Watch the full video on YouTube" WITHOUT the link
  • media_url: omit or empty string ""

  Twitter:
  • DO NOT include YouTube URL in caption (penalized)
  • DO NOT pass media_url (cannot upload YouTube natively)
  • Caption should describe the video content
  • Say something like "Full video on YouTube" WITHOUT the link
  • media_url: omit or empty string ""

  Bluesky:
  • Links ARE allowed here
  • Include the YouTube URL directly in the caption text
  • DO NOT pass media_url (cannot upload YouTube natively)
  • Caption should include the YouTube link
  • media_url: omit or empty string ""

  If type == "blog":

  If type == "release":

  IMPORTANT: The sproutsocialmedia__createPostFromContent tool will:

  Call sproutsocialmedia__getConfiguredProfiles to get the profile IDs. This returns:
  { linkedin_company: "<id>", twitter: "<id>", youtube: "<id>", bluesky: "<id>" }

  For EACH platform (linkedin, twitter, bluesky):

  Call sproutsocialmedia__createPostFromContent with:

  The MCP server will:

  REMEMBER: For YouTube videos, the link goes IN THE CAPTION TEXT (Bluesky only), NOT as media_url!

  IF the item was NEW content (not evergreen):

  IF the item was EVERGREEN:

  Report what you posted:
prompt: |
  Begin today's scheduled social automation. Follow the workflow step by step.
extensions:
  - type: stdio
    name: contentfetcher
    cmd: node
    args:
  - type: stdio
    name: sproutsocialmedia
    cmd: node
    args:
activities:
```
</details>
## Writing Like a Human
Here's something important: we don't want people to clock that it's automated. So I added specific rules:
- Zero or one emoji max (and really just ✨)
- Sound calm and resourceful, dev-first mentality
- No "in this fast-paced world" or "leverage technology" nonsense
- No hashtags unless actually justified
- Don't be too grammatically perfect, so no em dashes (ironically)
Platform specific rules too:
- **LinkedIn**: No YouTube links (they penalize you), longer format okay
- **Twitter/X**: No YouTube links, keep it concise, one emoji max
- **Bluesky**: Links are fine here
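If you want a guardrail, you can lint generated captions against these rules before they hit Sprout. This is a rough sketch; the regexes and thresholds are my assumptions, not anything goose or Sprout enforces:

```typescript
// Lint a caption against the "write like a human" rules above.
// Returns a list of problems (empty = caption passes).
function lintCaption(
  caption: string,
  platform: "linkedin" | "twitter" | "bluesky"
): string[] {
  const problems: string[] = [];
  // Count emoji via the Unicode emoji property (supported in modern Node)
  const emojiCount = (caption.match(/\p{Extended_Pictographic}/gu) || []).length;
  if (emojiCount > 1) problems.push("more than one emoji");
  if (/#\w+/.test(caption)) problems.push("contains hashtags");
  if (/youtube\.com|youtu\.be/.test(caption) && platform !== "bluesky")
    problems.push("YouTube link not allowed on this platform");
  return problems;
}
```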
## The Hiccups
Of course, nothing works perfectly on the first try. When I ran the recipe, I hit a few issues:
1. It wanted to post ALL nine new pieces of content at once, and we don't want to spam people
2. For videos, links were showing up in captions instead of native media uploads
The Sprout draft requirement is still a bummer. Someone has to go in and toggle off the draft button before posts go live. Not ideal, but it still eliminates like 90% of the work.
## What's Next
I need to add:
- Logic to limit posts per day (maybe 2 max)
- Better handling of the evergreen content pool (some kind of tracking so the same item isn't reused right away)
- A fix for the media upload flow for videos; I'm thinking of adding a Cloudflare R2 step
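For the daily limit, the guard could be as simple as this sketch. The cap of 2 and the queue shape are my assumptions from the notes above:

```typescript
// Don't let the agent fire off every new item at once:
// cap how many queued items actually become drafts today.
const MAX_POSTS_PER_DAY = 2;

function capDailyPosts<T>(queue: T[], alreadyPostedToday: number): T[] {
  const remaining = Math.max(0, MAX_POSTS_PER_DAY - alreadyPostedToday);
  return queue.slice(0, remaining);
}
```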
## The Vibe
This whole project took maybe an evening of focused coding, and now we have an agent that handles social promotion automatically. Is it perfect? No. But it's pretty damn close.
The best part? You can take this same approach for whatever automation you need. Spin up some MCP servers, create a recipe, let goose handle the orchestration. It's honestly so much fun watching it all come together.
If you want to try this yourself, I'll be sharing the GitHub repo with all the code. You'll need your own Sprout Social API key, but I'll put the setup steps in the readme.
And hey, if you figure out a way to get around that draft requirement, let me know. I'd love to make this truly hands off.
## Watch the Full Stream
Want to see the whole coding session? Check out the livestream where I built this live (with all the debugging and plant commentary):
<iframe class="aspect-ratio" src="https://www.youtube.com/embed/49XLnhaxOMs" title="Vibe Code With Me | Build a Social Media Agent" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Got questions or ideas? Come chat with us on [Discord](https://discord.gg/block-opensource). I'd love to hear what you're building!
<head>
<meta property="og:title" content="Building a Social Media Agent" />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://goose-docs.ai/blog/2025/11/21/building-social-media-agent" />
<meta property="og:description" content="I built a fully automated social media agent using MCP servers to fetch content and post through Sprout Social." />
<meta property="og:image" content="https://goose-docs.ai/assets/images/header-image-7f5ab50f65332fb53302ca30a3f86e46.png" />
<meta name="twitter:card" content="summary_large_image" />
<meta property="twitter:domain" content="goose-docs.ai" />
<meta name="twitter:title" content="Building a Social Media Agent" />
<meta name="twitter:description" content="I built a fully automated social media agent using MCP servers to fetch content and post through Sprout Social." />
<meta name="twitter:image" content="https://goose-docs.ai/assets/images/header-image-7f5ab50f65332fb53302ca30a3f86e46.png" />
</head>