How To

How Steampipe translates SQL queries to API calls

A Steampipe plugin can map a Postgres WHERE clause to an API parameter. We took advantage of that to enable the Vercel plugin to query deployments by date.

Jon Udell
6 min. read - Jan 26, 2023

The blog post you are reading is served by Vercel. When we began using the Vercel plugin to keep track of our Vercel deployments, we realized that the plugin was missing a table. It provided vercel_project, which includes a handful of recent deployments per project, but we wanted a more complete view. For that we needed a new table: vercel_deployment.

Why was it missing? The Go SDK for the Vercel API hadn't provided a wrapper for Vercel's deployments API.

As a plugin author you may find yourself in one of three situations with respect to the Go SDK for the API your plugin works with.

  1. There is no Go SDK.

In that case you have two choices. You can build the API wrapper into your plugin, but ideally you'll create and publish a standalone SDK that powers your plugin and is also available for other uses.

  2. There is a Go SDK that does everything you need.

That's the happy path. The Go ecosystem is robust, and it's not unusual to find yourself walking that happy path!

  3. There is a Go SDK but it lacks something you need.

That was our situation. Because the Go SDK for Vercel is archived, we forked it to add support for Vercel's deployments API.

Mapping an API response to a Go struct

Every Go SDK maps API responses (typically in JSON format) to Go structs. Here's the one we need.

type Deployment struct {
	UID     string `json:"uid"`
	Name    string `json:"name"`
	URL     string `json:"url"`
	Created int64  `json:"created"`
	Source  string `json:"source"`
	State   string `json:"state"`
	Type    string `json:"type"`
	Creator struct {
		UID         string `json:"uid"`
		Email       string `json:"email"`
		Username    string `json:"username"`
		GithubLogin string `json:"githubLogin"`
	} `json:"creator"`
	InspectorURL string `json:"inspectorUrl"`
	Meta         struct {
		GithubCommitAuthorName  string `json:"githubCommitAuthorName"`
		GithubCommitMessage     string `json:"githubCommitMessage"`
		GithubCommitOrg         string `json:"githubCommitOrg"`
		GithubCommitRef         string `json:"githubCommitRef"`
		GithubCommitRepo        string `json:"githubCommitRepo"`
		GithubCommitSha         string `json:"githubCommitSha"`
		GithubDeployment        string `json:"githubDeployment"`
		GithubOrg               string `json:"githubOrg"`
		GithubRepo              string `json:"githubRepo"`
		GithubCommitRepoID      string `json:"githubCommitRepoId"`
		GithubRepoID            string `json:"githubRepoId"`
		GithubCommitAuthorLogin string `json:"githubCommitAuthorLogin"`
		GithubPrID              string `json:"githubPrId"`
	} `json:"meta,omitempty"`
	Target              interface{} `json:"target"`
	AliasError          interface{} `json:"aliasError"`
	AliasAssigned       int64       `json:"aliasAssigned"`
	IsRollbackCandidate bool        `json:"isRollbackCandidate"`
	CreatedAt           int64       `json:"createdAt"`
	BuildingAt          int64       `json:"buildingAt"`
	Ready               int64       `json:"ready"`
}

It would be a daunting task to create such a mapping by hand, but fortunately you don't have to. With a tool like JSON-to-Go you can generate the struct from a sample API response. Just make sure to choose a sample that includes any optional parts of the response! I made that mistake with the SDK I created for the Hypothesis plugin and later had to update the SDK to add a missing ingredient.

The rest of the exercise was straightforward. I followed the pattern used by the existing SDK for wrapping and testing Vercel endpoints, and published the modified package that the Steampipe plugin would use.

Adding the vercel_deployment table

Thanks to the powerful abstractions provided by the plugin SDK, the code for the new deployments table is brief -- less than 100 lines. Let's unpack some of those abstractions, starting with the table definition.

func tableVercelDeployment(ctx context.Context) *plugin.Table {
	return &plugin.Table{
		Name:        "vercel_deployment",
		Description: "Deployments in the Vercel account.",
		List: &plugin.ListConfig{
			KeyColumns: plugin.KeyColumnSlice{
				{Name: "created_at", Require: plugin.Optional, Operators: []string{">", ">=", "="}},
			},
			Hydrate: listDeployment,
		},
		Columns: []*plugin.Column{
			{Name: "name", Type: proto.ColumnType_STRING, Description: "Name of the deployment."},
			{Name: "url", Type: proto.ColumnType_STRING, Description: "URL of the deployment."},
			{Name: "state", Type: proto.ColumnType_STRING, Description: "One of: BUILDING, ERROR, INITIALIZING, QUEUED, READY, CANCELED."},
			{Name: "created_at", Type: proto.ColumnType_TIMESTAMP, Transform: transform.FromField("Created").Transform(transform.UnixMsToTimestamp), Description: "Time when the deployment was created."},
			{Name: "creator", Type: proto.ColumnType_JSON, Description: "Creator of the deployment."},
			{Name: "building_at", Type: proto.ColumnType_TIMESTAMP, Transform: transform.FromField("BuildingAt").Transform(transform.UnixMsToTimestamp), Description: "Time when deployment started to build."},
			{Name: "ready", Type: proto.ColumnType_TIMESTAMP, Transform: transform.FromField("Ready").Transform(transform.UnixMsToTimestamp), Description: "Time when deployment is ready to view."},
			{Name: "meta", Type: proto.ColumnType_JSON, Description: "GitHub metadata associated with the deployment."},
		},
	}
}
The columns are a typical mix of plugin data types: STRING, JSON, and TIMESTAMP. We can send STRING and JSON values directly to Postgres, where they become the types text and JSONB. But the Vercel fields Created, BuildingAt, and Ready don't map directly. Vercel uses JavaScript timestamps, which count milliseconds since January 1, 1970 (the Unix epoch), while Postgres uses conventional Unix timestamps, which count seconds since the epoch.

The plugin SDK provides a set of transform functions for situations like this, among them the one needed here: UnixMsToTimestamp. When the transform you need isn't available it's an easy thing to add in your plugin, but if you do -- and if you think it's generally useful -- please consider contributing it to the plugin SDK.

Mapping Postgres constraints to API constraints

The UnixMsToTimestamp transform takes care of converting Vercel's JavaScript timestamps to the Unix timestamps that Postgres expects. But there's another piece of the puzzle to consider. We've done a lot of Vercel deployments, so a query like select * from vercel_deployment will try to fetch all of them and run for a very long time. We're typically interested in recent deployments, so we want to write queries like select * from vercel_deployment where created_at > now() - interval '2 weeks'. How can we translate the Postgres WHERE clause into a constraint that the Go SDK can pass to the Vercel API?

Here's the relevant part of the plugin's listDeployment function.

if d.Quals["created_at"] != nil {
	for _, q := range d.Quals["created_at"].Quals {
		// Postgres timestamps count seconds; Vercel wants milliseconds.
		postgresTimestamp := q.Value.GetTimestampValue().Seconds * 1000
		switch q.Operator {
		case ">", ">=", "=":
			req.Since = postgresTimestamp
		}
	}
}

Here d.Quals refers to the KeyColumns property shown in the table definition. The plugin SDK implements the concept of "quals" (or qualifiers) -- constraints that appear in the WHERE (or JOIN .. ON) clauses of SQL statements. As a plugin author you can capture these and use them to influence how the plugin calls its Go SDK (and, in turn, the underlying API).

Requests to the Vercel API, including /v6/deployments which is the one we're wrapping here, can include the parameter since which means "Get deployments created after this JavaScript timestamp." The KeyColumns property enumerates the operators that govern that API parameter. So when the SQL query says where created_at > date('2022-12-01'), the API sees a since parameter with that Postgres timestamp converted to milliseconds.

So that's how the magic works! Now we can query for recent Vercel deployments, and track them on an internal dashboard.

If you're building a plugin and have questions to ask -- or observations to share -- please join us in our Slack community.