RSS Digest Skill — Feed Automation

Turn RSS feeds into daily or weekly digests delivered to your inbox or Slack.

Tags: rss · digest · news · email


TL;DR

The RSS Digest skill aggregates multiple RSS and Atom feeds, filters items by keyword or category, and compiles them into a formatted digest — delivered on a schedule you control. Instead of checking a dozen blogs and news sites manually, you get one curated summary in your inbox or Slack channel each morning.

It’s one of the lowest-friction ways to stay informed without doomscrolling. The main gotchas are dead feeds, Atom vs. RSS 2.0 format inconsistencies, and digest emails that balloon in length when feeds are too noisy.


What it does

  • Fetches multiple feeds simultaneously — RSS 2.0, Atom 1.0, and JSON Feed formats are all supported by most implementations.
  • Deduplicates items across feeds so the same story from two sources appears only once in the digest.
  • Filters by keyword, tag, or category so you can subscribe to a broad feed (e.g., TechCrunch) but only surface items matching “AI” or “regulation.”
  • Ranks items by engagement signals (share count, comment count) when the feed includes that metadata, putting the most-discussed stories first.
  • Formats the digest as plain-text email, HTML email, Slack blocks, or Markdown — depending on your delivery channel.
  • Handles feed errors gracefully — if a feed returns a 404 or times out, the digest still sends with a note that the feed was unavailable rather than failing silently.
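The fetch-and-degrade-gracefully behavior above can be sketched with the Python standard library alone. The function names and the `(url, body, error)` shape are illustrative, not the skill's actual API; a real implementation would hand the bodies to a feed parser next:

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen
from urllib.error import URLError


def fetch_feed(url, timeout=10):
    """Fetch one feed; return (url, body, error) so a failure never aborts the digest."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return url, resp.read(), None
    except (URLError, TimeoutError) as exc:
        return url, None, str(exc)


def fetch_all(urls):
    """Fetch feeds in parallel; unavailable feeds become notes in the digest, not crashes."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(fetch_feed, urls))
    bodies = {url: body for url, body, err in results if err is None}
    notes = [f"Feed unavailable: {url} ({err})" for url, _, err in results if err]
    return bodies, notes
```

The notes list is what becomes the "feed was unavailable" line in the delivered digest instead of a silent failure.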

Best for

Engineering team briefings: Subscribe to GitHub release feeds, Hacker News “Ask HN” threads, and your stack’s official blogs. Get a Monday morning digest of what changed in your dependencies last week.

Industry news monitoring: Combine feeds from trade publications, analyst blogs, and competitor press release pages. Filter by your product category to cut through the noise.

Personal reading queues: Replace a sprawling RSS reader with a single daily email. Read it, archive it, move on — no unread count anxiety.

Content research: Use the digest as a source-discovery tool for content briefs or SEO keyword clustering. Fresh industry coverage surfaces the topics your audience is actually discussing.

This skill is less useful when you need real-time alerts (use website monitor instead) or when the sources you care about don’t publish RSS feeds.


How to use (example)

Scenario: Weekly AI industry briefing for a product team

You want a Friday afternoon digest of the week’s most important AI news, delivered to a Slack channel.

Feed list:

https://feeds.feedburner.com/oreilly/radar
https://www.technologyreview.com/feed/
https://openai.com/blog/rss.xml
https://huggingface.co/blog/feed.xml
https://news.ycombinator.com/rss (filtered: "AI" OR "LLM" OR "machine learning")

Configuration:

schedule: "Friday 16:00"
timezone: "America/New_York"
max_items_per_feed: 5
dedup: true
filter_keywords: ["AI", "LLM", "machine learning", "foundation model"]
output_format: "slack_blocks"
channel: "#product-intel"
include_summary: true   # Generate a 1-sentence summary per item
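The `filter_keywords` option can be approximated with a word-boundary match; plain substring matching would make "AI" fire on words like "air". This is a sketch under an assumed item shape (`{"title": ..., "summary": ...}`), with an optional plural so "LLM" still catches "LLMs":

```python
import re


def matches_keywords(item, keywords):
    """Case-insensitive word-boundary match over an item's title and summary.
    The dict shape of `item` is an assumption, not the skill's real schema."""
    haystack = f"{item.get('title', '')} {item.get('summary', '')}"
    return any(
        re.search(rf"\b{re.escape(kw)}s?\b", haystack, re.IGNORECASE)
        for kw in keywords
    )
```

Note the tradeoff: word boundaries also mean "AI" will not match inside "OpenAI", so broad product names may need their own keyword entries.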

Example digest output (Slack):

📰 AI Weekly Digest — Friday, Mar 28

• OpenAI releases GPT-5 with extended context window
  "OpenAI announced..." → openai.com/blog/...

• MIT study: LLMs struggle with multi-step reasoning
  "Researchers found..." → technologyreview.com/...

• HuggingFace launches new fine-tuning toolkit
  "The open-source..." → huggingface.co/blog/...

[+12 more items] → View full digest
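Rendering that output as `slack_blocks` maps each item onto Slack's Block Kit JSON. A minimal sketch, again assuming a `title`/`summary`/`link` item shape:

```python
def to_slack_blocks(items, title, max_items=15):
    """Render digest items as Slack Block Kit blocks: a header, one section
    per item, and a context block for the overflow count."""
    blocks = [{"type": "header", "text": {"type": "plain_text", "text": title}}]
    for it in items[:max_items]:
        blocks.append({
            "type": "section",
            "text": {
                "type": "mrkdwn",
                "text": f"• <{it['link']}|{it['title']}>\n_{it['summary']}_",
            },
        })
    overflow = len(items) - max_items
    if overflow > 0:
        blocks.append({
            "type": "context",
            "elements": [{"type": "mrkdwn", "text": f"+{overflow} more items"}],
        })
    return blocks
```

The resulting list goes into the `blocks` field of a `chat.postMessage` payload.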

Common variations:

  • Use max_items_per_feed: 3 and max_total_items: 10 to keep digests scannable.
  • Add a summary_length: 2_sentences option to get more context per item without opening links.
  • Pipe the digest into web search to automatically pull full articles for the top 3 stories.

Permissions & Risks

Required permissions: Network, Email
Risk level: Low

The skill fetches public RSS feeds (read-only) and sends an email or posts to a channel. No data is written back to any source. Key operational considerations:

Feed format inconsistencies: RSS 2.0 and Atom 1.0 use different field names for the same concepts (<pubDate> vs <updated>, <description> vs <summary>). A robust implementation normalizes these automatically, but some edge cases — especially custom namespaces used by podcast feeds — may cause items to appear without titles or dates.
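A minimal normalization pass over the two formats might look like the following; `normalize_items` is a hypothetical helper, and Dublin Core and podcast namespaces are deliberately out of scope here:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"


def normalize_items(xml_text):
    """Map RSS 2.0 <item> and Atom <entry> elements onto one dict shape.
    RSS: <description>/<pubDate>/<link>; Atom: <summary>/<updated>/<link href>."""
    root = ET.fromstring(xml_text)
    items = []
    for el in root.iter("item"):  # RSS 2.0
        items.append({
            "title": el.findtext("title", ""),
            "summary": el.findtext("description", ""),
            "date": el.findtext("pubDate", ""),
            "link": el.findtext("link", ""),
        })
    for el in root.iter(f"{ATOM}entry"):  # Atom 1.0
        link = el.find(f"{ATOM}link")
        items.append({
            "title": el.findtext(f"{ATOM}title", ""),
            "summary": el.findtext(f"{ATOM}summary", ""),
            "date": el.findtext(f"{ATOM}updated", ""),
            "link": link.get("href", "") if link is not None else "",
        })
    return items
```

The empty-string defaults are what produce the "item with no title" symptom described under Troubleshooting when a feed uses a namespace this pass doesn't know about.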

Dead and redirected feeds: Feeds go stale. A feed URL that worked six months ago may now 301-redirect to a new path, return a 410 Gone, or serve HTML instead of XML. Build in a weekly check that flags feeds returning non-200 status codes.
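That weekly health check could look like this sketch; the status labels are made up for illustration, and note that `urlopen` follows redirects, so a redirect is detected by comparing the final URL against the one requested:

```python
from urllib.request import urlopen, Request
from urllib.error import HTTPError, URLError


def check_feed(url, timeout=10):
    """Return a (status, note) health tuple for one feed URL. Flags redirects,
    non-200 codes, and responses that are HTML rather than a feed."""
    try:
        with urlopen(Request(url), timeout=timeout) as resp:
            ctype = resp.headers.get("Content-Type", "")
            if resp.url.rstrip("/") != url.rstrip("/"):
                return "redirected", f"now at {resp.url}"
            if "html" in ctype and "xml" not in ctype:
                return "html", f"serving {ctype!r}, not a feed"
            return "ok", ""
    except HTTPError as exc:  # 404, 410 Gone, etc.
        return "error", f"HTTP {exc.code}"
    except (URLError, TimeoutError) as exc:
        return "unreachable", str(exc)
```

Anything other than `"ok"` is worth surfacing in the error log the guardrails below recommend.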

Digest length creep: High-volume feeds (e.g., Reddit, Hacker News) can produce 50+ items per day. Without a max_items cap, your digest becomes unreadable. Start conservative (5 items per feed) and adjust based on actual reading behavior.
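One way to enforce both caps at once, sketched here with a hypothetical helper, is a round-robin across feeds so a single noisy source can't crowd the others out of the total budget:

```python
def cap_items(items_by_feed, per_feed=5, total=15):
    """Apply a per-feed cap, then interleave feeds round-robin up to a total cap,
    so every feed gets its top item in before any feed gets its second."""
    capped = {url: items[:per_feed] for url, items in items_by_feed.items()}
    out = []
    for rank in range(per_feed):
        for items in capped.values():
            if rank < len(items) and len(out) < total:
                out.append(items[rank])
    return out
```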

Recommended guardrails:

  • Preview the first digest manually before enabling automated delivery.
  • Set a hard cap on total digest length (e.g., 15 items maximum).
  • Log feed fetch errors separately from digest delivery so you can spot dead feeds quickly.

Troubleshooting

Feed items appear with no title or date
The feed uses a non-standard namespace or Atom format. Check the raw XML at the feed URL. If <title> is missing, the feed may use <dc:title> — your parser needs to handle Dublin Core extensions.

Duplicate items appearing in the digest
Deduplication is matching on item GUID, but some feeds don’t include GUIDs — they use the item URL as the unique key. Configure your dedup strategy to fall back to URL matching when GUID is absent.
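The GUID-with-URL-fallback strategy is a few lines; the item dict shape here is assumed, and a production version would also strip tracking parameters (`utm_*`) from URLs before comparing:

```python
def dedup(items):
    """Drop repeats keyed on GUID when present, the item link otherwise;
    first-seen order wins. Items with neither key are kept as-is."""
    seen, out = set(), []
    for it in items:
        key = it.get("guid") or it.get("link")
        if key is None or key not in seen:
            if key is not None:
                seen.add(key)
            out.append(it)
    return out
```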

Digest not sending on schedule
Check your scheduler timezone configuration. A cron job set to 16:00 without an explicit timezone will use the server’s local time, which may differ from your intended timezone by hours.
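Computing the fire time in the digest's own timezone sidesteps the server-local-time trap entirely. A sketch with the standard `zoneinfo` module (the `next_run` helper is illustrative):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo


def next_run(now, tz="America/New_York", weekday=4, hour=16):
    """Next occurrence of Friday 16:00 in the digest's timezone, regardless
    of where the server thinks it lives. weekday: Monday=0 ... Friday=4."""
    local = now.astimezone(ZoneInfo(tz))
    target = local.replace(hour=hour, minute=0, second=0, microsecond=0)
    days_ahead = (weekday - local.weekday()) % 7
    if days_ahead == 0 and target <= local:
        days_ahead = 7  # it's already past Friday 16:00; schedule next week
    return target + timedelta(days=days_ahead)
```

Compare this value against the current time in a loop or one-shot timer instead of trusting a bare `16:00` cron entry.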

Feed returns HTML instead of XML
The site may have moved its feed behind a login, changed the feed URL, or discontinued it. Visit the site directly and look for a new feed link in the <head> element (<link rel="alternate" type="application/rss+xml">).
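That feed-discovery step can be automated with the stdlib HTML parser; this sketch only looks at `<link rel="alternate">` elements of RSS or Atom MIME types and returns the `href` values as found (possibly relative):

```python
from html.parser import HTMLParser


class FeedLinkFinder(HTMLParser):
    """Collect hrefs from <link rel="alternate" type="application/rss+xml|atom+xml">."""

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type", "").endswith(("rss+xml", "atom+xml"))):
            self.feeds.append(a.get("href"))


def discover_feeds(html_text):
    parser = FeedLinkFinder()
    parser.feed(html_text)
    return parser.feeds
```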

Items from weeks ago appearing in today’s digest
The feed’s <pubDate> or <updated> field is unreliable. Some feeds republish old items when they’re edited. Add a max_age_days: 7 filter to exclude items older than your digest window regardless of what the feed reports.
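A `max_age_days` filter is a date comparison once the feed's date is parsed. This sketch handles the RFC 822 dates RSS uses in `<pubDate>`; Atom's `<updated>` is ISO 8601 and would need `datetime.fromisoformat` instead. Items with unparseable dates are kept rather than guessed at:

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime


def within_window(pub_date, max_age_days=7, now=None):
    """True if the item's RFC 822 pubDate falls inside the digest window,
    whatever the feed claims. Unparseable or missing dates are kept."""
    now = now or datetime.now(timezone.utc)
    try:
        when = parsedate_to_datetime(pub_date)
    except (TypeError, ValueError):
        return True  # no reliable date: keep the item, don't guess
    if when.tzinfo is None:
        when = when.replace(tzinfo=timezone.utc)
    return now - when <= timedelta(days=max_age_days)
```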


Alternatives

Feedly — A full-featured RSS reader with team sharing, keyword alerts, and AI-powered prioritization. Better for interactive reading; less flexible for custom digest formatting or programmatic output. Free tier limits the number of sources.

Inoreader — Similar to Feedly but with stronger filtering rules and automation features. Supports RSS-to-email natively. Good for power users who want a managed service rather than a self-hosted skill.

Mailbrew — A digest-focused service that combines RSS, Twitter, Reddit, and other sources into a single daily email. Easiest setup for non-technical users; limited customization compared to a skill-based approach.

If you need to monitor a specific page that doesn’t publish an RSS feed, website monitor is the right tool.


Further reading

  • RSS 2.0 specification: cyber.harvard.edu/rss/rss.html
  • Atom 1.0 specification: tools.ietf.org/html/rfc4287
  • Related guide: Weekly Research Digest