
Mastodon timelines for teams | InfoWorld

Welcome to Episode 17 of the Steampipe+Mastodon series, where we introduce a new subplot: the timeline story. So far, the examples I’ve shown and discussed work with current timelines. We’ve seen SQL queries fetch results from real-time Mastodon API calls, and Steampipe dashboards display those results. But Steampipe isn’t just an API siphon, it’s also a Postgres database. As such, it supports the transient tables created by the foreign data wrapper and Steampipe plugins, but it also allows you to create your own native tables. And you can use those native tables to accumulate data from the transient foreign tables.

Because saving and searching Mastodon data is a sensitive topic in the fediverse (none of us wants to recapitulate Big Social), I’ve so far focused on queries that explore the recent Mastodon flow, about which there’s plenty more to write. But nobody should mind that I remember my own home timeline, so a few weeks ago I created a tool that reads it hourly and adds new toots to a Postgres table.

Of course, before you can add any toots to a table, you have to create that table. Here’s how I made this one.

create table mastodon_home_timeline as 
  select * from mastodon_toot_home limit 200

Once created, the table can be updated with new toots like this.

with data as (
  select
    account,
    -- more
    -- columns
    username
  from
    mastodon_toot_home
  limit 200
)
insert into mastodon_home_timeline (
  account,
  -- more
  -- columns
  username
)
select
  *
from 
  data
where 
  id not in ( select t.id from mastodon_home_timeline t )

To run that query from a crontab, on a machine where Steampipe is installed, save it as mastodon_home_timeline.sql, then schedule it.

15 * * * * cd /home/jon/mastodon; steampipe query mastodon_home_timeline.sql

That’s all! Now the number reported by select count(*) from mastodon_home_timeline grows every hour.
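As a quick sanity check on that hourly growth, you can bucket the saved toots by hour. This sketch assumes the table inherits the created_at column from mastodon_toot_home (which reflects when each toot was posted, not when it was captured):

```sql
select
  date_trunc('hour', created_at) as hour,
  count(*) as toots
from
  mastodon_home_timeline
group by
  hour
order by
  hour desc
limit 24
```

A steadily filling set of hourly buckets is a good sign the cron job is running and the dedup logic is admitting only new toots.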

I’ve only been collecting toots for a couple of weeks, and haven’t started exploring that data yet; we’ll see what happens when we get there. In the meantime, though, I want to show how such exploration can become a team exercise.

A friend of mine, whom I’ll call Elvis, shares my interest in discovering connections among people, servers, and hashtags. He could capture his own timeline using the method shown here. But since we’ll be analyzing this data together, we agreed that I’ll collect both of our timelines. To enable that, he shared a (revocable) Mastodon API token that I’ve used to configure Steampipe with credentials for both of our accounts.

connection "mastodon_social_jon" {
    plugin = "mastodon"
    server = "https://mastodon.social"
    access_token = "..."
}

connection "mastodon_social_elvis" {
    plugin = "mastodon"
    server = "https://mastodon.social"
    access_token = "..."
}

Steampipe’s foreign data wrapper turns each of these named connections into its own Postgres schema. We happen to share the same home server, by the way, but that isn’t necessary. A team collaborating this way could pool timelines from mastodon.social, hachyderm.io, fosstodon.org, and any other Mastodon-API-compatible server.


(You can do the same with AWS, Slack, GitHub, or another type of account by defining multiple connections. Steampipe makes API calls simultaneously over parallel connections.)

With this setup I can read my timeline like this.

select * from mastodon_social_jon.mastodon_toot_home limit 200

And Elvis’s like this.

select * from mastodon_social_elvis.mastodon_toot_home limit 200

If I want to query both in real time, for example to count the combined total, I can use a SQL UNION. Or I can define an aggregator connection that combines the two.
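The UNION approach might look like this sketch, which parenthesizes each arm so the limit applies per account, and relies on UNION (as opposed to UNION ALL) to deduplicate toots that appear in both timelines:

```sql
select count(*)
from (
  ( select id from mastodon_social_jon.mastodon_toot_home limit 200 )
  union
  ( select id from mastodon_social_elvis.mastodon_toot_home limit 200 )
) combined
```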

connection "all_mastodon" {
    plugin = "mastodon"
    type = "aggregator"
    connections = [ "mastodon_social_jon", "mastodon_social_elvis" ]
}

connection "mastodon_social_jon" {
    plugin = "mastodon"
    server = "https://mastodon.social"
    access_token = "..."
}

connection "mastodon_social_elvis" {
    plugin = "mastodon"
    server = "https://mastodon.social"
    access_token = "..."
}

Now the query select * from all_mastodon.mastodon_toot_home limit 200 makes API calls on behalf of both accounts, in parallel, and combines the results. When we follow the resulting URLs in order to reply or boost, we’ll do so as our individual identities. And we’ll still be able to use Steampipe queries and dashboards in that same single-user mode. But we’ll also be able to pool our timelines and point our queries and dashboards at the combined history.
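Pooling the history can reuse the same create-then-insert pattern shown earlier, just pointed at the aggregator schema. A sketch, with mastodon_team_timeline as a hypothetical name for the shared table:

```sql
-- seed the shared table from the aggregator connection
create table mastodon_team_timeline as
  select * from all_mastodon.mastodon_toot_home limit 200
```

The hourly insert query would then filter against mastodon_team_timeline the same way the single-user version filters against mastodon_home_timeline.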

Will that be interesting? Useful? That remains to be seen. I think it’s one of many experiments worth trying as we all find our way around the fediverse. And I see Steampipe as a laboratory in which to run such experiments. With SQL as an abstraction over APIs, with connection aggregation, and with dashboards as code, you have all the ingredients needed to iterate quickly, at low cost, toward shared Mastodon spaces designed for teams or groups.

This series:

  1. Autonomy, pack size, friction, fanout and speed
  2. Mastodon, Steampipe and RSS
  3. Navigating the fediverse
  4. A Bloomberg terminal for Mastodon
  5. Create your own Mastodon UX
  6. Lists and people on Mastodon
  7. How many people on my Mastodon feed also tweeted today?
  8. Qualified Mastodon URLs per instance
  9. Mastodon Ratio Charts
  10. Working with Mastodon lists
  11. Images considered harmful (sometimes)
  12. Mapping the broader fediverse
  13. Protocols, APIs and conventions
  14. News in the fediverse
  15. Mapping people and tags in Mastodon
  16. Mastodon server moderation display
  17. Mastodon timelines for teams

Copyright © 2023 IDG Communications, Inc.
