DoltHub and DoltLab provide multiple APIs for executing Dolt SQL, fetching database data as CSV files, or automating your database workflows.
API tokens can be used to authenticate calls to the SQL API over Basic Authentication. This is useful for executing SQL queries against private databases or executing write queries.
First, create an API token in your settings on DoltHub. Copy the token right away, as you won't be able to see it again.
Now you can use this token in the header when executing a query against a private database.
import requests

owner, repo, branch = "dolthub", "private-db", "main"
query = """SELECT * FROM testtable"""
res = requests.get(
    "https://www.dolthub.com/api/v1alpha1/{}/{}/{}".format(owner, repo, branch),
    params={"q": query},
    headers={"authorization": "token [TOKEN YOU COPIED]"},
)
res.json()

Please note: You must include a ref name (branch, tag, commit hash, etc.) when making authenticated calls to the SQL API using a token. Unauthenticated API requests do not require this; they use the default branch (main or master).
DoltHub is a place to share Dolt databases. We host public data for free! DoltHub adds a modern, secure, always-on database management web GUI to the Dolt ecosystem. Edit your database on the web, have another person review it via a pull request, and have the production database pull it to deploy.
DoltHub is GitHub for Dolt. DoltHub acts as a Dolt remote you can clone, push, pull and fetch from. DoltHub adds permissions, pull requests, issues, and forks to the Dolt ecosystem. Additionally, DoltHub has a modern SQL workbench built in so you can explore and change databases on the web.
DoltHub has many uses. We recommend getting started by following the walkthrough below.
This documentation will walk you through discovering data on DoltHub, cloning a copy locally, making a change on a fork, and submitting a pull request to the original database.
DoltHub has APIs you can script against. The documentation covers:

- SQL API: Used to make read or write SQL queries to a DoltHub database
- CSV API: Used to download CSV format files of DoltHub tables
- Webhooks: Used to receive change events for your DoltHub databases

We added a feature to transform uploaded files. This allows you to set up an API to receive and transform files that are uploaded to your databases on DoltHub. The transformed files are sent back to DoltHub for import into a DoltHub database.

We added the workspaces concept to DoltHub as a staging area for changes made from the web. Learn what workspaces are and how to use them most effectively.

DoltHub and DoltLab support continuous integration (CI) testing, which allows you to validate changes before you commit them on your primary branch. Learn how to configure and run CI on your DoltHub and DoltLab databases.

Workspaces are a concept we invented for editing data on DoltHub; there is no GitHub equivalent. Essentially, workspaces are a special type of ref, similar to branches or tags. They are meant to simulate a "staging area" on DoltHub, similar to the working set in Dolt.
Workspaces are shareable and temporary. Make any changes you like without affecting the state of your database. When you're satisfied with your changes, you can commit them to a branch or create a pull request. If you're not satisfied, you can delete the workspace without any repercussions.
We will use an example DoltHub database, dolthub/us-jails, and the Python requests library to explore it in the Python console.

One Table

Download the incidents table from the main branch:

import requests

local_file = 'incidents_main.csv'
res = requests.get('https://www.dolthub.com/csv/dolthub/us-jails/main/incidents')
with open(local_file, 'wb') as file:
    file.write(res.content)

Download the incidents table at a commit hash:

import requests

local_file = 'incidents_commit.csv'
res = requests.get('https://www.dolthub.com/csv/dolthub/us-jails/u8s83gapv7ghnbmrtpm8q5es0dbl7lpd/incidents')
with open(local_file, 'wb') as file:
    file.write(res.content)

All Tables

Download a ZIP file of all database tables from the main branch:

import requests

local_file = 'us-jails_main.zip'
res = requests.get('https://www.dolthub.com/csv/dolthub/us-jails/main')
with open(local_file, 'wb') as file:
    file.write(res.content)

Download a ZIP file of all database tables at a commit hash:

import requests

local_file = 'us-jails_commit.zip'
res = requests.get('https://www.dolthub.com/csv/dolthub/us-jails/u8s83gapv7ghnbmrtpm8q5es0dbl7lpd')
with open(local_file, 'wb') as file:
    file.write(res.content)

API tokens can be used to authenticate calls to the CSV API over Basic Authentication. This is useful for downloading data from private databases.

You can use the token in the header when downloading CSVs from a private database.

import requests

local_file = 'private_db_main.zip'
res = requests.get(
    'https://www.dolthub.com/csv/owner/private-db/main',
    headers={"authorization": "token [TOKEN YOU COPIED]"},
)
with open(local_file, 'wb') as file:
    file.write(res.content)

Workspaces are automatically created when you run a write query from the SQL Console on a DoltHub database page.
You immediately see the history of queries that have been run within the workspace. From here, there are a few actions you can take:
View the cumulative and current diffs
The Cumulative Diff tab shows all changes made to the workspace since it was created from the main branch (also known as a three dot diff between main and the tip of the workspace).
The Current Diff tab shows the change from the most recent commit on the workspace (or the two dot diff between HEAD and its parent).
Diffs are useful for seeing exactly how your queries affected your data.
Run more queries
Any query run from the SQL Console within the workspace will be added to the workspace. You can also view the data table from within the workspace (by running a SELECT query or clicking the "View data table" button) to use cell buttons to make changes.
Delete the workspace
Clicking on the trash icon on the top right in your workspace will delete it and your changes. This will have no consequences to the rest of your database.
Create a pull request or commit
Either create a pull request from the workspace by clicking on the "Create pull request" button, or click on the arrow to change the button to "Commit directly" to create a commit on the main branch (or whatever branch you created the workspace from).
Come back to the workspace later
If you don't want to decide right away what you want to do with your workspace, you can come back to it later by clicking on "1 open workspace" at the top of the SQL Console.
You can also share the workspace link with teammates and have them contribute to your workspace.

on: Identifies the events that should trigger the workflow to run. Required.

push: Runs the workflow whenever a push event occurs. Optional.

A push event refers to a branch head update on the remote database, usually following the dolt push command.

pull_request: Runs the workflow whenever a pull_request event occurs. Optional.

A pull_request event refers to any "activity" or action involving a pull request on the remote database. Activities on a pull request might include, but are not limited to, opening a pull request, closing a pull request, or synchronizing a pull request.

branches: List of Strings. The branches filter indicates which branch(es) should cause the workflow to run. Required.

For example, if the main branch is listed under on.push.branches, then only a push to main will trigger the workflow to run.

In the case of on.pull_request.branches, the branches listed refer to the base branch of the pull request. If main is specified as a branch in this case, a pull request opened with main as its base branch will trigger the workflow.

activities: List of Strings. The activities filter indicates which pull request activity types should trigger a workflow. Optional.

Supported types as of Dolt v1.45.3 are:

- opened
- closed
- reopened
jobs specifies one or more Jobs a workflow should run when it is triggered. Required.
name: String. The case-insensitive name of a job; must be unique. Required.
steps are a sequence of checks or tests to execute against the database during a workflow run. Required.
Steps run in the order they are defined.
name: String. The case-insensitive name of a step; must be unique. Required.

saved_query_name: String. The name of the saved query that should be executed during the workflow run. Required.

expected_rows: String. The expected number of rows resulting from the execution of the named saved query. Optional.

This should be in the format <comparator> <number>, for example != 15. Valid comparators are:

- == for equals
- != for not equals
- > for greater than
- >= for greater than or equal to
- < for less than
- <= for less than or equal to

expected_columns: String. The expected number of columns resulting from the execution of the named saved query. Optional.

This should be in the format <comparator> <number>, for example != 15. Valid comparators are:

- == for equals
- != for not equals
- > for greater than
- >= for greater than or equal to
- < for less than
- <= for less than or equal to
name: "workflow name"
on:
push:
branches:
- main
pull_request:
branches:
- main
jobs:
- name: "job name"
steps:
- name: "step name"
saved_query_name: "saved query name"
expected_rows: "== 2"
expected_columns: "== 1"See an interesting database you'd like to make changes to? Get the database locally with just one command (assuming you have Dolt installed), which is found on every database page:
We are going to use the dolthub/corona-virus database as an example.
Once you've cloned a database from DoltHub, you can use the SQL shell to make a change to the data. We're going to add a row to the cases table on a new branch:
You can see how the insert query changed the cases table by viewing the diff:
If the change looks good, we add the table and commit:
Now that we've made a change locally using Dolt, we want to push the change back to DoltHub so that it can be incorporated into the database.
Since we don't have write permissions to dolthub/corona-virus, we need to fork it to our own namespace. This is easily done using the fork button:
Now we have a database in our namespace that we can write to (taylor/corona-virus).
Now that we have a fork of the original database, we need to add a remote to our locally cloned database.
We now have two remotes: origin which refers to the dolthub/corona-virus database on DoltHub, and taylor which refers to the fork taylor/corona-virus on DoltHub.
Before we can push changes to our remote, we need to log in so that DoltHub can authenticate our Dolt client. First, run the dolt login command:
This will open your browser to the credentials settings page with the hash from above populated in the "Public key" field. Add a name for the token and click Create:
You should see this message if it was successful:
You are now logged in and can push data to DoltHub.
We want to push the new branch we made changes to locally to the remote fork.
When you navigate to the fork on DoltHub, you'll see the new branch in the branch selector dropdown on the left:
In order to incorporate this change into the parent database (dolthub/corona-virus), we need to create a pull request. A pull request is a way to propose changes to a database and discuss these changes with the database owners.
To create a pull request, we first navigate to the parent database. In the "Pull Requests" tab click on the "Create Pull Request" button. Choose your fork as the "From database" and add a title and description.
The database owner can now view your changes and request updates if needed. You can make changes to your pull request by creating a new commit with a change on the same branch and pushing again. Once the owner is satisfied with the change they can decide to merge the pull request, which would update the parent database's main branch.

DoltHub and DoltLab support continuous integration (CI) testing, which allows you to validate changes before you commit them on your primary branch.
Continuous integration (CI) testing originated as a software development best practice where automated tests run against incoming code changes pushed by software developers who are collaborating on a code repository.
If a developer pushes changes that fail to pass the automated tests, the proposed changes are rejected. This practice ensures that only valid, high quality changes get committed on the primary build branch, resulting in fewer bugs, and higher developer productivity.
Dolt's revolutionary technology that marries Git and MySQL now allows for CI testing on data and is supported on both DoltHub and DoltLab. In the same way that proposed code changes undergo automated tests to ensure they're valid, proposed data changes on a DoltHub or DoltLab database can also undergo automated tests to assert their validity.
The following sections will introduce you to how CI works with Dolt, DoltHub, and DoltLab and help you set up CI testing for your own databases.
CI configuration for a DoltHub or DoltLab database is stored in the database itself. At the time of this writing, in order to add CI configuration to a DoltHub or DoltLab database, you will need a local Dolt client version >= v1.45.3 and will have to clone a copy of the database. In order to configure CI on the database, you will use Dolt's CI CLI commands.
The primary interface for creating and editing CI configuration in a Dolt database is via the dolt ci CLI command. These commands aim to simplify CI configuration in Dolt, so that users do not need to manually interact with the underlying CI tables directly.
The dolt ci commands as of Dolt v1.45.3 are:
- dolt ci init. This command creates internal database tables used to store continuous integration configuration.
- dolt ci destroy. This command drops all database tables used to store continuous integration configuration.
- dolt ci import. This command will import a Dolt continuous integration workflow file into the database.
- dolt ci export. This command will export a Dolt continuous integration workflow by name.
- dolt ci ls. This command lists existing Dolt continuous integration workflows by name.
- dolt ci remove. This command removes a Dolt continuous integration workflow by name.
The dolt ci init command is the starting point for adding CI to a Dolt database, since it creates the underlying tables Dolt needs to begin storing configuration. To get started adding CI to a Dolt database, follow our getting started guide below.
Borrowing from GitHub Actions terminology, Dolt CI configuration is also defined as workflows and follows the same general syntax and definitions. Workflows are yaml files that define when CI on a database should run, and what should happen during the course of that run. Though workflows in Dolt are defined and edited as files, Dolt does not store these files directly. Instead, it parses these files and only stores the relevant content of each file in its internal CI tables. These tables are then read by DoltHub and DoltLab to enact the defined CI runs.
More specifically, a workflow file specifies when it should run, by the Events defined in it, and what should happen during the run by the Jobs defined in it.
Events are specific activities that occur in a DoltHub or DoltLab database that trigger a workflow to run. One such event might be the pushing of a branch to the database, known as a push event, or the opening of a pull request on a database, a pull_request event. When these events occur on a database that contains a workflow that specifies it should run on these events, DoltHub and DoltLab run them.
For a complete list of events that trigger workflows, please see the workflow file reference.
In addition to Events, A workflow file also contains Jobs. In the context of a workflow, Jobs define what should happen when a workflow runs. This definition consists of a series of programmatic Steps that DoltHub or DoltLab will execute during the course of a workflow run. These workflow Jobs are somewhat related to DoltHub and DoltLab Jobs, although at this time their relationship has no direct impact on the end-user.
These DoltHub/DoltLab Jobs are the automated asynchronous machinery that allow DoltHub and DoltLab to run long-running processes to do programmatic work outside of the main API. These come in different types and, depending on their type, do things like merge pull requests or import a file uploaded by a user into a database. With the addition of CI on DoltHub and DoltLab, a new type of Job was added, a Dolt CI Job. This is the type of DoltHub/DoltLab Job that executes a workflow Job as it is defined in the workflow file.
A workflow Job is made up of a series of Steps. A step, in its current form, is a single Saved Query that will run against the database as the "check", or test, that asserts the database branch's validity. Steps run in the order they're defined and will "pass", or succeed, if the Saved Query they execute completes without error and if the defined expected SQL results match the actual SQL results returned from the Saved Query.
For more information on Steps, please see the workflow file reference.
A saved query is a SQL query that is stored and versioned in a Dolt database. For the purpose of DoltHub and DoltLab CI, this allows users to write a SQL query that will be executed on command at a later time, during a CI run.
For example, take the simple SQL query "show tables;". This can be added to a Dolt database as a saved query using the dolt sql command with the --save flag, as shown below.
The query is saved under the name "Show tables query", and can be executed at any time with the -x flag.
In a Dolt CI workflow file, this saved query can be referenced by name as a Step during the execution of a workflow Job.
During the execution of the example workflow defined below, the "Show tables query" will be executed whenever the "example job" runs. This will result in the SQL query "show tables;" running against the database.
Saved queries are currently the primary method for defining tests or checks on a Dolt database.
For more information on Saved Query Steps, please see the workflow file reference.
% dolt clone dolthub/corona-virus
cloning https://doltremoteapi.dolthub.com/dolthub/corona-virus
16,690 of 16,690 chunks complete. 0 chunks being downloaded currently.
% cd corona-virus

% dolt checkout -b add-new-case
Switched to branch 'add-new-case'
% dolt sql
# Welcome to the DoltSQL shell.
# Statements must be terminated with ';'.
# "exit" or "quit" (or Ctrl-D) to exit.
corona_virus> insert into cases (observation_time, place_id, confirmed_count, recovered_count, death_count) values("2021-12-07", 39, 400, 100, 3);
Query OK, 1 row affected
corona_virus> exit;
Bye

% dolt diff
diff --dolt a/cases b/cases
--- a/cases @ ullgt5s1i14g1vgndmuhek3269fjsvbc
+++ b/cases @ 3tfb3onk0r7i13hq1aohqj55lnsft912
+-----+---------------------+----------+-----------------+-----------------+-------------+
| | observation_time | place_id | confirmed_count | recovered_count | death_count |
+-----+---------------------+----------+-----------------+-----------------+-------------+
| + | 2021-12-07 00:00:00 | 39 | 400 | 100 | 3 |
+-----+---------------------+----------+-----------------+-----------------+-------------+

% dolt add cases && dolt commit -m "Add new case for today"
commit pk9fei8l35php871j2gptaaa92v0e324
Author: Taylor Bantle <[email protected]>
Date: Tue Dec 07 11:43:56 -0800 2021
Add new case for today
% dolt remote add taylor taylor/corona-virus
% dolt remote
origin
taylor

% dolt login
Credentials created successfully.
pub key: <some hash>
/Users/taylor/.dolt/creds/<another hash>.jwk
Opening a browser to:
https://dolthub.com/settings/credentials#<some hash>
Please associate your key with your account.
Checking remote server looking for key association.
Key successfully associated with user: taylor email [email protected]

% dolt push taylor add-new-case
| Tree Level: 1, Percent Buffered: 0.00%, Files Written: 0, Files Uploaded: 1, Current Upload Speed: 22 kB/s

$ dolt sql --save "Show tables query" -q "show tables;"

$ dolt sql -x "Show tables query"

name: example workflow
on:
  push:
    branches: [main]
jobs:
  - name: example job
    steps:
      - name: step 1
        saved_query_name: "Show tables query"

Some common workflows involving hooks include:
- Triggering continuous integration builds
- Updating an external issue tracker
- Creating automatic releases
DoltHub hooks currently only support push, pull request and branch events on a database, while GitHub supports a much longer list of events on both organizations and repositories.
If you have an event you'd like us to support, file an issue or reach out to us on Discord.
This blog post covers an in-depth example of how to set up a webhook for a push event on DoltHub. You can add and manage webhooks in the settings tab of any of your DoltHub databases.
This event occurs when a commit is pushed.
The payload delivered for a push event webhook contains information about the push that was made to the database. The payload is a JSON object that includes the following fields:

- event_name (string): The name of the event triggered by the webhook. In this case, it will always be "push".
- ref (string): The name of the branch that was pushed.
- head (string): The SHA of the most recent commit on ref after the push.
- prev (string): The SHA of the most recent commit on ref before the push.
- repository.name (string): The name of the database where the push occurred.
- repository.owner (string): The owner (username or org name) of the database where the push occurred.

An example of the payload:

{
  "event_name": "push",
  "ref": "refs/heads/main",
  "head": "ns725d8noah3m0mjjvrilet1rsmcgna2",
  "prev": "6higvr7ic9ndahfruh3kufu409im44jd",
  "repository": {
    "name": "test",
    "owner": "liuliu"
  }
}

The pull request event webhook is triggered whenever a pull request is created, merged, or closed. The payload includes these fields:

- event_name (string): The name of the event triggered by the webhook. In this case, it will always be "pull_request".
- repository.name (string): The name of the database for the pull request base.
- repository.owner (string): The owner (username or org name) of the pull request base.
- pullID (string): The ID of the pull request that triggered the webhook.
- action (string): The action that triggered the webhook (Opened, Closed, Merged).
- state (string): The current state of the pull request after the event.
- fromRepository.name (string): The name of the database the pull request is created from.
- fromRepository.owner (string): The owner of the database the pull request is created from.
- fromBranch (string): The branch that the pull request is created from.
- toBranch (string): The branch that the pull request wants to merge into.
- author (string): The author of the pull request.
- sender (string): The user who triggered the event.

An example of the payload object:

{
  "event_name": "pull_request",
  "repository": {
    "name": "test",
    "owner": "dolthub"
  },
  "pullID": "15",
  "action": "Opened",
  "state": "Open",
  "fromBranch": "liuliu/feature-branch",
  "toBranch": "main",
  "author": "liuliu",
  "fromRepository": {
    "name": "test",
    "owner": "liuliu"
  },
  "sender": "liuliu"
}

A branch event webhook is triggered when a branch is created or deleted. The payload includes these fields:

- event_name (string): The name of the event. In this case, it will always be "branch".
- repository.name (string): The name of the database where the branch is located.
- repository.owner (string): The owner (username or org name) of the database where the branch change occurred.
- branch (string): The full name of the branch (e.g., refs/heads/main).
- action (string): The action that triggered the webhook (deleted or created the branch).

An example of the payload:

{
  "event_name": "branch",
  "repository": {
    "name": "test",
    "owner": "liuliu"
  },
  "branch": "liuliu/workspace-fashionable-sponge",
  "action": "Deleted branch"
}
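To make these payloads concrete, here is a minimal sketch of a webhook receiver written with Python's standard library. The port and the print-only handling are illustrative assumptions; the payload fields are the ones documented above.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Parse the JSON payload DoltHub delivers for each event.
        length = int(self.headers.get("content-length", 0))
        payload = json.loads(self.rfile.read(length))

        event = payload.get("event_name")
        if event == "push":
            print(f"push to {payload['ref']}: {payload['prev']} -> {payload['head']}")
        elif event == "pull_request":
            repo = payload["repository"]
            print(f"PR #{payload['pullID']} {payload['action']} on {repo['owner']}/{repo['name']}")
        elif event == "branch":
            print(f"{payload['action']}: {payload['branch']}")

        # Acknowledge receipt so the delivery is considered successful.
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8080), WebhookHandler).serve_forever()
```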
DoltHub and DoltLab >= v0.8.1 allow users to optionally transform files before they're imported into a database. To do this, users can provide a Transform Url at the time they're uploading a file.
If a Transform Url is provided, DoltHub/DoltLab will send an http POST request containing the uploaded file contents. In the http response to this request, the service receiving the request can return new file contents that will be imported into the database in place of the original file contents.
Additionally, on the Database Settings page of DoltHub/DoltLab, users can provide a Transform Url along with request headers, which will automatically be used to transform all file uploads across a database.
Pull requests are automatically created against a database after successful file uploads and imports.

We will use an example DoltHub database, dolthub/ip-to-country, and the Python requests library to explore it in the Python console:
This shows our database metadata as a dictionary:
We can now execute a query:
This yields the results as JSON, with both schema and data:
As a reminder, this API is in alpha, and we will continue to flesh it out, as well as improve query performance, over time.
You can also explore our SQL API on DoltHub whenever you execute a query:
Since adding, updating, and deleting data can take a bit of time to execute against larger databases, we made the writable API asynchronous. This required adding two HTTP endpoints for executing these kinds of queries using the SQL API.
We can use our SHAQ database as an example. These are the steps we'd take to update a player's id in the player stats table.
First, we want to hit the write endpoint with our UPDATE query. This will start an asynchronous operation.
The main branch already exists in the database, and providing a <to_branch> that doesn't exist will create this new branch from the <from_branch>.
The yielded JSON results include an operation_name.
operation_name can be used to poll the second endpoint to check if the operation is done.
A done operation will yield a response that includes some query metadata, including from and to commit ids.
Since a lot of Dolt's functionality is exposed via SQL, we can use the commit ids to query the dolt_commit_diff_$tablename table to view the resulting diff from our UPDATE query.
We can repeat this process with as many queries as we want. Every query will create a commit on the <from_branch>.
Once we're satisfied with our changes, we can merge our branches by hitting the first endpoint with an empty query.
You can learn more about using the writable SQL API here.
The transform file server can expect an http POST request containing the contents of an uploaded file, along with the following minimum http headers:

- content-type: The type of the file contents.
- content-length: The size of the file.
- x-dolthub-owner: The database owner name.
- x-dolthub-database: The database name.
- x-dolthub-from-branch: The branch created to import the data.
- x-dolthub-to-branch: The target branch of the pull request.
- x-import-filename: The name of the uploaded file.
- x-import-md5: The md5 of the uploaded file.
If the uploaded file has a content-type of text/csv, text/psv, application/json, or application/vnd.openxmlformats-officedocument.spreadsheetml.sheet, the following headers may also be included on the request:

- x-import-table: The name of the table the import affects.
- x-import-primary-keys: The primary keys of the table.
- x-import-operation: The type of import operation to perform. Possible values are:
  - overwrite: Used to create or overwrite a table with the contents of the file.
  - update: Used to update a table with the contents of the file using the table's existing schema.
  - replace: Used to replace a table with the contents of the file using the table's existing schema.
- x-import-force: If x-import-operation=overwrite and data already exists in the destination table, a value of true here allows the target table to be overwritten.
In the http response to this POST, transform file servers should provide the transformed file contents in the response body. Additionally, depending on the type of import desired, the response should include some combination of the following http headers:

- content-type: Required. The type of the file contents to be imported into the database. Supported values are:
  - text/csv
  - application/sql
  - application/json
  - application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
- content-length: Required. The size of the file.
- x-import-filename: Required. The name of the file.
- x-import-md5: Required. The md5 of the file.
- x-import-table: Required if content-type != application/sql. The name of the table the import affects.
- x-import-operation: Required if content-type != application/sql. The type of import operation to perform. Possible values are:
  - overwrite: Used to create or overwrite a table with the contents of the file.
  - update: Used to update a table with the contents of the file using the table's existing schema.
  - replace: Used to replace a table with the contents of the file using the table's existing schema.
- x-import-primary-keys: Optional. The primary keys of the table.
- x-import-force: Optional. If x-import-operation=overwrite and data already exists in the destination table, a value of true here allows the target table to be overwritten.
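To make this request/response contract concrete, here is a minimal sketch of a transform file server using only Python's standard library. The port, the pass-through transformation, and the base64-encoded md5 (matching the X-Import-Md5 format shown in the example output below) are illustrative assumptions, not a definitive implementation.

```python
import base64
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

class TransformHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the uploaded file contents sent by DoltHub/DoltLab.
        length = int(self.headers.get("content-length", 0))
        original = self.rfile.read(length)

        # A stand-in transformation: a real server would reshape the data here.
        transformed = original.upper()

        # Respond with the new file contents plus the required import headers.
        md5 = base64.b64encode(hashlib.md5(transformed).digest()).decode()
        self.send_response(200)
        self.send_header("content-type", "text/csv")
        self.send_header("content-length", str(len(transformed)))
        self.send_header("x-import-filename", "transformed.csv")
        self.send_header("x-import-md5", md5)
        self.send_header("x-import-table", self.headers.get("x-import-table", "csv_table"))
        self.send_header("x-import-operation", "overwrite")
        self.end_headers()
        self.wfile.write(transformed)

HTTPServer(("", 8080), TransformHandler).serve_forever()
```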
We've created a simple example golang transform server to demonstrate how you can run one yourself.
This example server receives the POST request and prints the request headers like so:
Running the example server in its default mode will always respond to a transform request with static CSV content and attach the following headers to the http response:

- content-type=text/csv
- x-import-md5=<static csv md5>
- x-import-filename=transformed.csv
- x-import-table=csv_table
- x-import-operation=overwrite
- x-import-primary-keys=pk,col1
The pull request opened on DoltHub/DoltLab for this import will reflect this static CSV content, regardless of the contents of the file uploaded originally.
DoltHub and DoltLab's automated file import process will create pull requests between new branches created during the database import process and the target branch chosen during file upload.
As a result, you can create N imports, and corresponding pull requests, from a single transform request by responding with SQL that creates a new branch for each import and pull request you want.
Let's look at how this works in the example transform server. When this server is run with the --sql flag, it will always respond to transform requests with the following static sql content:
This content will run against a checkout of the DoltHub/DoltLab database's x-dolthub-to-branch. For this example the x-dolthub-to-branch=main.
When this SQL file runs in the database, first, it will create and check out a new branch import-branch-1, create table t1, and insert rows into that table. Then, it will add and commit those changes.
Next, it checks out the x-dolthub-to-branch branch again, before creating another new branch and adding a table and some rows to that branch. It also adds and commits these changes.
DoltHub and DoltLab's import process will detect these new branches and open pull requests with these changes against the x-dolthub-to-branch.
Finally, the above content shows that the x-dolthub-to-branch is checked out once again and table t3 is created directly on this branch, but these changes are not added or committed in this SQL content.
Instead, the automated import process will detect any outstanding changes made to the x-dolthub-to-branch during import and commit them to the x-dolthub-from-branch. It will also create a pull request from the x-dolthub-from-branch to the x-dolthub-to-branch.
import requests
owner, database = 'dolthub', 'ip-to-country'
res = requests.get('https://dolthub.com/api/v1alpha1/{}/{}'.format(owner, database))
res.json()

{'query_execution_status': 'Success',
'query_execution_message': '',
'repository_owner': 'dolthub',
'repository_name': 'ip-to-country',
'commit_ref': 'main',
'sql_query': 'SHOW TABLES;',
'schema': [{'columnName': 'Table',
'columnType': 'longtext'}],
'rows': [{'Table': 'IPv4ToCountry'}, {'Table': 'IPv6ToCountry'}]}

branch = 'main'
query = '''SELECT * FROM IPv4ToCountry WHERE CountryCode2Letter = "AU"'''
res = requests.get(
'https://www.dolthub.com/api/v1alpha1/{}/{}/{}'.format(owner, database, branch),
params={'q': query},
)
res.json()

{'query_execution_status': 'RowLimit',
'query_execution_message': '',
'repository_owner': 'dolthub',
'repository_name': 'ip-to-country',
'commit_ref': 'main',
'sql_query': 'SELECT * FROM IPv4ToCountry WHERE CountryCode2Letter = "AU"',
'schema': [{'columnName': 'IPFrom',
'columnType': 'Int'},
{'columnName': 'IpTo', 'columnType': 'Int'},
{'columnName': 'Registry', 'columnType': 'String'},
{'columnName': 'AssignedDate', 'columnType': 'Int'},
{'columnName': 'CountryCode2Letter',
'columnType': 'String'},
{'columnName': 'CountryCode3Letter',
'columnType': 'String'},
{'columnName': 'Country', 'columnType': 'String'}],
'rows': [{'IPFrom': '16777216',
'IpTo': '16777471',
'Registry': 'apnic',
'AssignedDate': '1313020800',
'CountryCode2Letter': 'AU',
'CountryCode3Letter': 'AUS',
'Country': 'Australia'},
.
.
.
]}

owner, database, from_branch, to_branch = 'dolthub', 'SHAQ', 'main', 'update-player'
query = '''UPDATE player_season_stat_totals SET player_id=714287 WHERE player_id=15404617'''
headers = {
'authorization': 'token [see Authentication section for token]'
}
write_res = requests.post(
f'https://www.dolthub.com/api/v1alpha1/{owner}/{database}/write/{from_branch}/{to_branch}',
params={'q': query},
headers=headers,
)
write_json = write_res.json()

{
"query_execution_status": "Success",
"query_execution_message": "Operation created. Poll the operation endpoint to see the execution status of this query.",
"repository_owner": "dolthub",
"repository_name": "SHAQ",
"to_branch_name": "update-player",
"from_branch_name": "main",
"query": "UPDATE player_season_stat_totals SET player_id=714287 WHERE player_id=15404617",
"operation_name": "operations/72abb56b-d478-43ae-9a2d-c9602184c7ab"
}

import time

def get_operation(op_name):
op_res = requests.get(
f'https://www.dolthub.com/api/v1alpha1/{owner}/{database}/write',
params={'operationName': op_name},
headers=headers,
)
return op_res.json()
def poll_operation(op_name):
done = False
while done == False:
poll_res = get_operation(op_name)
done = poll_res['done']
if done:
return poll_res
else:
time.sleep(3)
res = poll_operation(write_json['operation_name'])

{
"_id": "operations/72abb56b-d478-43ae-9a2d-c9602184c7ab",
"done": true,
"res_details": {
"query_execution_status": "Success",
"query_execution_message": "Query OK, 2 rows affected. Rows matched: 2 Changed: 2 Warnings: 0.",
"owner_name": "tbantle",
"repository_name": "SHAQ",
"from_commit_id": "518ue176ec13qf563e87uoerkqqepood",
"to_commit_id": "5sp7i0bov6itma2u04dpk15ui24lrigp"
}
}

from_commit, to_commit = res['res_details']['from_commit_id'], res['res_details']['to_commit_id']
query = f'''SELECT from_player_id, to_player_id, diff_type FROM `dolt_commit_diff_player_season_stat_totals` WHERE from_commit="{from_commit}" AND to_commit="{to_commit}"'''
diff_res = requests.get(
f'https://www.dolthub.com/api/v1alpha1/{owner}/{database}/{to_branch}',
params={'q': query},
headers=headers,
)
diff_res.json()

merge_res = requests.post(
f'https://www.dolthub.com/api/v1alpha1/{owner}/{database}/write/{to_branch}/{from_branch}',
params=None,
headers=headers,
)
merge_json = merge_res.json()
poll_operation(merge_json['operation_name'])

received request
request headers: X-Import-Filename: [states.csv]
request headers: User-Agent: [Go-http-client/1.1]
request headers: X-Dolthub-Database: [example-database]
request headers: X-Dolthub-From-Branch: [example-user/import-fashionable-wolf]
request headers: X-Dolthub-Owner: [example-user]
request headers: X-Dolthub-To-Branch: [main]
request headers: Content-Type: [text/csv]
request headers: X-Import-Md5: [UjtXMOXuBEBXKXh4tkmvhQ==]
request headers: Accept-Encoding: [gzip]
content-length: 42
status-code: 200

CALL DOLT_CHECKOUT('-b', 'import-branch-1');
CREATE TABLE t1 (
pk int primary key,
col1 varchar(55),
col2 varchar(55),
col3 varchar(55)
);
INSERT INTO t1 (pk, col1, col2, col3) VALUES (1, 'a', 'b', 'c');
INSERT INTO t1 (pk, col1, col2, col3) VALUES (2, 'd', 'e', 'f');
INSERT INTO t1 (pk, col1, col2, col3) VALUES (3, 'g', 'h', 'i');
CALL DOLT_COMMIT('-A', '-m', 'Create table t1');
CALL DOLT_CHECKOUT('main');
CALL DOLT_CHECKOUT('-b', 'import-branch-2');
CREATE TABLE t2 (
pk int primary key,
col1 varchar(55),
col2 varchar(55),
col3 varchar(55)
);
INSERT INTO t2 (pk, col1, col2, col3) VALUES (1, 'j', 'k', 'l');
INSERT INTO t2 (pk, col1, col2, col3) VALUES (2, 'm', 'n', 'o');
INSERT INTO t2 (pk, col1, col2, col3) VALUES (3, 'p', 'q', 'r');
CALL DOLT_COMMIT('-A', '-m', 'Create table t2');
CALL DOLT_CHECKOUT('main');
CREATE TABLE t3 (
pk int primary key,
col1 varchar(55),
col2 varchar(55),
col3 varchar(55)
);
INSERT INTO t3 (pk, col1, col2, col3) VALUES (1, 's', 't', 'u');
INSERT INTO t3 (pk, col1, col2, col3) VALUES (2, 'v', 'w', 'x');
INSERT INTO t3 (pk, col1, col2, col3) VALUES (3, 'y', 'z', 'aa');

DoltHub provides a database API for fetching and creating data on your database. You can create a database, create a pull request, create a pull request comment, and merge a pull request through these APIs.
Here's an example of how to create a new database called museum-collections under the organization dolthub using an API token.
Creating a database requires authentication, so you must include this authorization header in your request. See the authentication section for more details.
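As a sketch of what this might look like with Python's requests library (the /api/v1alpha1/database path and the body field names here are illustrative assumptions, not confirmed reference):

```python
import requests

headers = {'authorization': '[api token you created]'}

# Hypothetical endpoint path and body fields, shown for illustration only.
res = requests.post(
    'https://www.dolthub.com/api/v1alpha1/database',
    json={'ownerName': 'dolthub', 'repoName': 'museum-collections', 'visibility': 'public'},
    headers=headers,
)
print(res.json())
```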
Here is an example of opening a pull request on the museum-collections database with data from the Los Angeles County Museum of Art. This data was added to the lacma branch on a fork owned by liuliu, and we would like to eventually merge the lacma branch into the main branch using an API token.
Include this header in your request.
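A sketch of the pull request creation call, assuming a /pulls endpoint and illustrative field names:

```python
import requests

headers = {'authorization': '[api token you created]'}

# Hypothetical endpoint path and body fields, for illustration only.
res = requests.post(
    'https://www.dolthub.com/api/v1alpha1/dolthub/museum-collections/pulls',
    json={
        'title': 'Add LACMA data',
        'description': 'Adds collection data from the Los Angeles County Museum of Art.',
        'fromBranchOwnerName': 'liuliu',  # fork owner
        'fromBranchName': 'lacma',        # branch with the new data
        'toBranchName': 'main',           # base branch
    },
    headers=headers,
)
print(res.json())
```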
This API allows you to retrieve the details of a specific pull request in the museum-collections database. In this example, we will retrieve the details of pull request #1.
Include this header in your request.
This API allows you to update a pull request by providing the fields you want to update in the request body. You can update the title, description, and state (only closing a pull request is supported).
Here's an example of how to update pull request #1 on the museum-collections database. In this example, we will set a new title, description, and close the pull request.
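A sketch of the update call, assuming a PATCH to a /pulls/{id} path with illustrative field names; only the title, description, and state (closing) are documented as updatable:

```python
import requests

headers = {'authorization': '[api token you created]'}

# Hypothetical endpoint path, method, and body fields, for illustration only.
res = requests.patch(
    'https://www.dolthub.com/api/v1alpha1/dolthub/museum-collections/pulls/1',
    json={'title': 'New title', 'description': 'New description', 'state': 'closed'},
    headers=headers,
)
print(res.json())
```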
Here is an example of listing pull requests for the museum-collections database using an API token. The response of the pull request list is paginated, so you need to use the next page token included in the response to retrieve the following pages of pull requests.
Include this header in your request.
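A sketch of paging through the list, assuming illustrative names for the endpoint path and the page token fields:

```python
import requests

headers = {'authorization': '[api token you created]'}
url = 'https://www.dolthub.com/api/v1alpha1/dolthub/museum-collections/pulls'  # hypothetical path

# Follow the next page token until all pull requests have been listed.
pulls, token = [], None
while True:
    params = {'pageToken': token} if token else {}
    data = requests.get(url, params=params, headers=headers).json()
    pulls.extend(data.get('pulls', []))   # assumed response field names
    token = data.get('nextPageToken')
    if not token:
        break
print(len(pulls))
```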
Here is an example of adding a pull request comment using an API token.
Include this header in your request.
Here is an example of merging pull request #66 on the museum-collections database using an API token. Note that the merge operation is asynchronous and creates an operation that can be polled to get the result.
To poll the operation and check its status, you can use the operationName in the returned response of the merge request to query the API. Once the operation is complete, the response will contain a job_id field indicating the job that's running the merge, as well as other information such as the database_owner, database_name, and pull_id.
Keep in mind that the time it takes for the merge operation to complete can vary depending on the size of the pull request and the complexity of the changes being merged.
Include this header in your request with the API token you created.
Then use GET to poll the operation to check if the merge operation is done.
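A sketch of the merge-and-poll flow, assuming a /pulls/{id}/merge path; the operationName parameter follows the polling pattern described above:

```python
import time
import requests

headers = {'authorization': '[api token you created]'}
base = 'https://www.dolthub.com/api/v1alpha1/dolthub/museum-collections'

# Kick off the merge (the /pulls/{id}/merge path is an assumption).
merge = requests.post(f'{base}/pulls/66/merge', headers=headers).json()

# Poll the returned operation until it reports done.
while True:
    op = requests.get(f'{base}/pulls/66/merge',
                      params={'operationName': merge['operationName']},
                      headers=headers).json()
    if op.get('done'):
        break
    time.sleep(3)
print(op)
```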
Here is an example of uploading a file lacma.csv to create a table lacma on the museum-collections database using an API token. Note that the file import operation is asynchronous and creates an operation that can be polled to get the result.
To poll the operation and check its status, you can use the operationName in the returned response of the file upload post to query the API. Once the operation is complete, the response will contain a job_id field indicating the job that's running the file import as well as the id of the pull request that's created when the import job is completed.
Keep in mind that the time it takes for the import operation to complete can vary depending on the size of the file and the complexity of the changes being applied to the database. The file size limit is 100 MB.
Include this header in your request with the API token you created.
To upload the file, include two fields in the request body: file and params. The file should be a Blob, and params should be a JSON object.
Then use GET to poll the operation to check if the import operation is done.
Here is an example of uploading a CSV file to create a table through this API endpoint in JavaScript; you can reference the documentation for additional information:
And an example of polling the job status in Javascript:
Here's an example of how to create a new branch in the museum-collections database under the organization dolthub using an API token.
Creating a branch requires authentication, so you must include this authorization header in your request. See the authentication section for more details.
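A sketch of the branch creation call, assuming a /branches endpoint and illustrative field names:

```python
import requests

headers = {'authorization': '[api token you created]'}

# Hypothetical endpoint path and body fields, for illustration only.
res = requests.post(
    'https://www.dolthub.com/api/v1alpha1/dolthub/museum-collections/branches',
    json={'newBranchName': 'lacma', 'fromRefName': 'main'},
    headers=headers,
)
print(res.json())
```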
Here's an example of how to list branches in the museum-collections database under the organization dolthub using an API token.
Listing branches requires authentication, so you must include this authorization header in your request. See the authentication section for more details.
Here's an example of how to create a new tag in the museum-collections database under the organization dolthub using an API token.
Creating a tag requires authentication, so you must include this authorization header in your request. See the authentication section for more details.
Here's an example of how to list tags in the museum-collections database under the organization dolthub using an API token.
Listing tags requires authentication, so you must include this authorization header in your request. See the authentication section for more details.
Here's an example of how to create a new release in the museum-collections database under the organization dolthub using an API token.
Creating a release requires authentication, so you must include this authorization header in your request. See the authentication section for more details.
{% swagger src="../../../.gitbook/assets/dolthub-api/createRelease.json" path="/{owner}/{database}/releases" method="post" %}
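A sketch of the release creation call; the path and method come from the API reference above, while the body field names are illustrative assumptions:

```python
import requests

headers = {'authorization': '[api token you created]'}

# Path and method from the reference above; body fields assumed for illustration.
res = requests.post(
    'https://www.dolthub.com/api/v1alpha1/dolthub/museum-collections/releases',
    json={'releaseName': 'v1.0.0', 'fromRefName': 'main', 'description': 'First release'},
    headers=headers,
)
print(res.json())
```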
Here's an example of how to list releases in the museum-collections database under the organization dolthub using an API token.
Listing releases requires authentication, so you must include this authorization header in your request. See the authentication section for more details.
DoltHub provides support for asynchronous operations, including merging, SQL writes, and file importing. When you execute one of these operations from the API, you will get an operation name that you can poll using another endpoint to check the operation status and other information.
This API endpoint lets you monitor the status of all the operations you started in one place without needing to poll the endpoints for singular operations. These operations have error and metadata fields which contain useful information for troubleshooting and debugging.
For example, if you have executed a few SQL write queries using the SQL write API, you can list those operations using the operationType query parameter to filter for SqlWrite operations. The metadata will show the query executed, the database and branch that the query ran on, as well as any syntax or other errors you may have encountered.
Here's an example of how to list SqlWrite operations initiated by user liuliu using an API token.
Listing operations requires authentication, so you must include this authorization header in your request. See the authentication section for more details.
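A sketch of the listing call; the operationType filter is documented above, while the endpoint path is an illustrative assumption:

```python
import requests

headers = {'authorization': '[api token you created]'}

# Hypothetical endpoint path; operationType=SqlWrite filter as documented above.
res = requests.get(
    'https://www.dolthub.com/api/v1alpha1/users/liuliu/operations',
    params={'operationType': 'SqlWrite'},
    headers=headers,
)
print(res.json())
```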
DoltHub performs certain asynchronous operations through job execution, including merging, importing, SQL reading, and migrating. When these operations are initiated via the API, you receive an operation name that includes the job ID.
This API endpoint lets you monitor the status of jobs started in a specific database.
Here is an example of how to list all the jobs on the museum-collections database using an API token.
Listing jobs requires authentication, so you must include this authorization header in your request. See the authentication section for more details.
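A sketch of the jobs listing call, assuming an illustrative /jobs endpoint path:

```python
import requests

headers = {'authorization': '[api token you created]'}

# Hypothetical endpoint path, for illustration only.
res = requests.get(
    'https://www.dolthub.com/api/v1alpha1/dolthub/museum-collections/jobs',
    headers=headers,
)
print(res.json())
```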
CI for DoltHub and DoltLab requires Dolt v1.45.3. This release contains the dolt ci command you will use to configure CI to run.
To start, let's ensure we have the correct Dolt version installed locally.
Now, let's clone a database that's hosted on DoltHub that we want to run CI tests on.
I've created the fork, dolthub/options, of the popular DoltHub database post-no-preference/options, and cloned a copy of my fork locally.
The first step required to enable a Dolt database to run CI is to run the dolt ci init command. This command will create Dolt's internal CI tables and also write a new commit to the branch.
After CI initialization, we need to create the workflow file that will define our CI tests.
Modeled after GitHub Actions workflow files, Dolt CI is configured with a yaml file that gets imported into the database.
Let's create an example workflow.yaml file now.
The above workflow.yaml definition should look pretty familiar to GitHub Actions users.
It defines a new Dolt workflow named "my first DoltHub workflow" in the top-level name field, specifies when this workflow should run in the on field, and what should happen when the workflow runs, defined in the jobs field. Each of these top-level fields is required.
A Workflow's name must be unique and is case-insensitive.
The on field defines when the workflow should run, or rather, what should trigger the workflow to run. The above workflow is configured to run whenever a push to this database's master branch occurs.
jobs defines the work to be performed when the workflow is run. Each workflow Job must have a unique name and at least one step defined in the steps field. Currently, Jobs and their Steps are where Dolt CI differs the most from GitHub Actions.
In GitHub Actions, a workflow step, or action step, can be the running of an arbitrary binary or snippet of code that is executed as part of a Job. For Dolt CI though, at least in its current form, a job step can only execute a Saved Query, which must be identified by name in the saved_query_name field.
A saved query in Dolt is an arbitrary SQL query stored in the database for execution at a later time. By specifying the name of the saved query in the workflow.yaml file, we are configuring CI to execute the "show tables" saved query against the master branch, whenever a push to master occurs.
Additionally, each "step" optionally allows an expected_rows or expected_columns field to be defined, which can be used to assert the number of rows or columns in the resulting output of the saved query.
For this simple example, we will assert that the number of rows returned from our "show tables" saved query will be equal to (==) 2.
Let's save this file, and store our new workflow in the database.
To do this, we simply use the dolt ci import <file> command.
Like the dolt ci init command earlier, this command also automatically creates a new commit. At this point, the workflow.yaml file we created is no longer needed, as the configuration has been persisted in the Dolt database.
If we ever need this file again, we can simply list the defined workflows in the database with the dolt ci ls command, then export the workflow back to yaml with the dolt ci export <workflow name> command.
The final step we need to perform on our local database is to define the saved query named "show tables".
A saved query can be added by using the dolt sql command with the --save option. And, as the name of our saved query suggests, we'll save the query "SHOW TABLES;".
Above we can see the expected output of the "show tables" saved query. The results have two rows, one row for each table in the database.
After creating the saved query, we can see the creation of the internal dolt_query_catalog table which stores saved queries. We now need to add and commit this new table to our database.
With the workflow and saved query defined on the database, we can push our updates to DoltHub.
Once we push the master branch, DoltHub will run our workflow for the first time. It does this by creating a new Dolt CI Job, in the same way it creates Jobs for pull request merges.
Dolt CI Jobs are visible on the database page from the "Jobs" tab.
Here you can see the workflow job we defined in our yaml file "validate tables" is now running. Notice that for the time being, all Dolt CI Jobs will say they were created by the database owner, regardless of the person responsible for the push event. This will be fixed at a later date.
If the Dolt CI Job is successful, meaning the saved query succeeded and the expected number of rows were found, the Job will have status "Succeeded". If the Job failed, the status would be "Failed" and we would see a relevant failure message on the Job's detail page.
To demonstrate what this looks like, let's update our workflow.yaml file to include another step. This time, let's add a step for a saved query that does not exist in the database.
You can see that we've now added a step called "assert table option_chain exists" to our workflow, which uses a saved query "option_chain exists". Let's import these new changes into our local database, but skip saving the saved query "option_chain exists".
Now with our workflow configuration updated and pushed to DoltHub, we can go check the status of our new Workflow Job.
As expected, the Workflow Job that ran on push now failed. And if we click "View details" we can see that it's because we did not add the saved query to the database.
Ok, so let's fix the step we just added by adding the saved query "option_chain exists" to the database and pushing the changes.
And after that latest Workflow Job completes, we can see that our "validate tables" Job is passing again!
Let's update our workflow.yaml once more, but get a little fancier with it.
Above, we've done quite a bit more than before. With this updated workflow we have now added additional job definitions that will check each table's schema and ensure each table has data.
After importing this new workflow.yaml into our local database, we create all of the saved queries we've referenced above in the file. The queries for each are shown in the dolt_query_catalog table below.
After we push these new changes to DoltHub, they'll be live.
Now, let's say that hypothetically, a new committer, unaware of the updated workflow we've defined, comes along and decides to delete all data from the option_chain table and pushes this change to master.
Our updated workflow defined for this database will kick-off each of the Workflow Jobs, as seen in the image below.
But, we'd expect the "check data" Workflow Job to fail. And, it does.
If we click "View details", we can see the failure resulted from expecting > 0 rows, but got 0.
headers = {
  'authorization': '[api token you created]'
}

const fs = require("fs");

const url =
  "https://www.dolthub.com/api/v1alpha1/dolthub/museum-collections/upload";

// Let fetch set the multipart Content-Type boundary automatically.
const headers = {
  authorization: "[api token you created]",
};

const filePath = "lacma.csv";

fetchFileAndSend(filePath);

async function fetchFileAndSend(filePath) {
  const params = {
    tableName: "lacma",
    fileName: "lacma.csv",
    branchName: "main",
    fileType: "Csv",
    importOp: "Create",
    primaryKeys: ["id"],
  };
  const formData = new FormData();
  const fileData = fs.readFileSync(filePath);
  const blob = new Blob([fileData], { type: "application/octet-stream" });
  formData.append("file", blob, "lacma.csv");
  formData.append("params", JSON.stringify(params));
  fetch(url, {
    method: "POST",
    headers,
    body: formData,
  })
    .then((response) => {
      // process response
    })
    .catch((error) => {
      // process error
    });
}

async function pollOperation(op_name, branch_name) {
  const url = `https://www.dolthub.com/api/v1alpha1/dolthub/museum-collections/upload?branchName=${branch_name}&operationName=${op_name}`;
  const headers = {
    "Content-Type": "application/json",
    authorization: "[api token you created]",
  };
  while (true) {
    const res = await fetch(url, {
      method: "GET",
      headers,
    });
    const data = await res.json();
    if (data.job_created) {
      return data;
    } else {
      await new Promise((r) => setTimeout(r, 1000));
    }
  }
}
% dolt version
dolt version 1.45.3

% dolt clone dolthub/options
cloning https://doltremoteapi.dolthub.com/dolthub/options
% cd options
% dolt status
On branch master
Your branch is up to date with 'origin/master'.
nothing to commit, working tree clean

% dolt ci init
% dolt log -n 1
commit mptat9ijrblrb2q4j2digq9miav3vfv0 (HEAD -> master)
Author: 😺😺😺😺 <[email protected]>
Date: Wed Nov 13 15:36:27 -0800 2024

	Successfully initialized Dolt CI
%

# workflow.yaml
name: my first DoltHub workflow
on:
  push:
    branches:
      - master
jobs:
  - name: validate tables
    steps:
      - name: assert expected tables exist
        saved_query_name: show tables
        expected_rows: "== 2"

% dolt ci import workflow.yaml
% dolt log -n 1
commit 0r25jqhrh26p0s9g7s8pes6qdrmq7bf0 (HEAD -> master)
Author: 😺😺😺😺 <[email protected]>
Date: Wed Nov 13 15:37:01 -0800 2024
Successfully stored workflow: my first DoltHub workflow
%

% dolt ci ls
my first DoltHub workflow
% dolt ci export "my first dolthub workflow"
Dolt CI Workflow 'my first DoltHub workflow' exported to /Users/dustin/doltdbs/options/my_first_DoltHub_workflow.yaml.
% ls
my_first_DoltHub_workflow.yaml
%% dolt sql --save "show tables" -q "SHOW TABLES;"
+--------------------+
| Tables_in_options |
+--------------------+
| option_chain |
| volatility_history |
+--------------------+
% dolt status
On branch master
Your branch is ahead of 'origin/master' by 1 commit.
(use "dolt push" to publish your local commits)
Untracked tables:
(use "dolt add <table>" to include in what will be committed)
	new table: dolt_query_catalog

% dolt add .
% dolt commit -m 'add show tables saved query'
commit jn5n8pqc43hqs59hgolinf2mrhhkncm7 (HEAD -> master)
Author: 😺😺😺😺 <[email protected]>
Date: Wed Nov 13 15:38:35 -0800 2024
add show tables saved query
%

% dolt push origin master
/ Uploading...
To https://doltremoteapi.dolthub.com/dolthub/options
* [new branch] master -> master
%

# workflow.yaml
name: my first DoltHub workflow
on:
  push:
    branches:
      - master
jobs:
  - name: validate tables
    steps:
      - name: assert expected tables exist
        saved_query_name: show tables
        expected_rows: "== 2"
      - name: assert table option_chain exists
        saved_query_name: option_chain exists

% dolt ci import workflow.yaml
% dolt log -n 1
commit gngghlouuq6g4k8fs8qpldvr0scfcjvr (HEAD -> master)
Author: 😺😺😺😺 <[email protected]>
Date: Wed Nov 13 15:56:00 -0800 2024
Successfully stored workflow: my first DoltHub workflow
% dolt push origin master
- Uploading...
To https://doltremoteapi.dolthub.com/dolthub/options
* [new branch] master -> master
%% dolt sql --save "option_chain exists" -q "select * from option_chain limit 1;"
+------------+------------+------------+--------+----------+-------+-------+--------+--------+--------+---------+--------+--------+
| date | act_symbol | expiration | strike | call_put | bid | ask | vol | delta | gamma | theta | vega | rho |
+------------+------------+------------+--------+----------+-------+-------+--------+--------+--------+---------+--------+--------+
| 2019-02-09 | A | 2019-02-15 | 65.00 | Call | 10.50 | 11.25 | 0.2705 | 1.0000 | 0.0000 | -0.0046 | 0.0000 | 0.0124 |
+------------+------------+------------+--------+----------+-------+-------+--------+--------+--------+---------+--------+--------+
% dolt status
On branch master
Your branch is up to date with 'origin/master'.
Changes not staged for commit:
(use "dolt add <table>" to update what will be committed)
(use "dolt checkout <table>" to discard changes in working directory)
modified: dolt_query_catalog
% dolt add .
% dolt commit -m 'add option_chain exists saved query'
commit g8hm71ar28djfv1q08uvphn44njuecb7 (HEAD -> master)
Author: 😺😺😺😺 <[email protected]>
Date: Wed Nov 13 16:04:15 -0800 2024
add option_chain exists saved query
% dolt push origin master
\ Uploading...
To https://doltremoteapi.dolthub.com/dolthub/options
* [new branch] master -> master

# workflow.yaml
name: my first DoltHub workflow
on:
  push:
    branches:
      - master
jobs:
  - name: validate tables
    steps:
      - name: assert expected tables exist
        saved_query_name: show tables
        expected_rows: "== 2"
      - name: assert table option_chain exists
        saved_query_name: option_chain exists
      - name: assert table volatility_history
        saved_query_name: volatility_history exists
  - name: validate schema
    steps:
      - name: assert 13 option_chain columns exist
        saved_query_name: check option_chain column length
        expected_columns: "<= 13"
      - name: assert call_put column exist
        saved_query_name: check option_chain.call_put exists
        expected_columns: "== 1"
      - name: assert 16 volatility_history columns exist
        saved_query_name: check volatility_history column length
        expected_columns: ">= 16"
      - name: assert act_symbol column exist
        saved_query_name: check volatility_history.act_symbol exists
        expected_columns: "== 1"
  - name: check data
    steps:
      - name: assert option_chain table has data
        saved_query_name: check option_chain data
        expected_rows: "> 0"
      - name: assert volatility_history table has data
        saved_query_name: check volatility_history data
        expected_rows: "> 0"

% dolt sql -r vertical -q "select * from dolt_query_catalog;"
*************************** 1. row ***************************
id: check option_chain column length
display_order: 4
name: check option_chain column length
query: select * from option_chain limit 1;
description:
*************************** 2. row ***************************
id: check option_chain data
display_order: 8
name: check option_chain data
query: select * from option_chain limit 1;
description:
*************************** 3. row ***************************
id: check option_chain.call_put exists
display_order: 5
name: check option_chain.call_put exists
query: select call_put from option_chain limit 1;
description:
*************************** 4. row ***************************
id: check volatility_history column length
display_order: 6
name: check volatility_history column length
query: select * from volatility_history limit 1;
description:
*************************** 5. row ***************************
id: check volatility_history data
display_order: 9
name: check volatility_history data
query: select * from volatility_history limit 1;
description:
*************************** 6. row ***************************
id: check volatility_history.act_symbol exists
display_order: 7
name: check volatility_history.act_symbol exists
query: select act_symbol from volatility_history limit 1;
description:
*************************** 7. row ***************************
id: option_chain exists
display_order: 2
name: option_chain exists
query: select * from option_chain limit 1;
description:
*************************** 8. row ***************************
id: show tables
display_order: 1
name: show tables
query: SHOW TABLES;
description:
*************************** 9. row ***************************
id: volatility_history exists
display_order: 3
name: volatility_history exists
query: select * from volatility_history limit 1;
description:
% dolt sql -q "delete from option_chain;"
Query OK, 71203454 rows affected (0.00 sec)
% dolt status
On branch master
Your branch is up to date with 'origin/master'.
Changes not staged for commit:
(use "dolt add <table>" to update what will be committed)
(use "dolt checkout <table>" to discard changes in working directory)
modified: option_chain
% dolt add .
% dolt commit -m 'remove option_chain data'
commit gjo41are9nbo4ocq9faapickmqtum26q (HEAD -> master)
Author: 😺😺😺😺 <[email protected]>
Date: Wed Nov 13 19:00:31 -0800 2024
remove option_chain data
% dolt push origin master
- Uploading...
To https://doltremoteapi.dolthub.com/dolthub/options
* [new branch] master -> master

GET /api/v1alpha1/{owner}/{database}

This endpoint executes a SQL read query against the database's default branch.

Parameters:
- owner (path): The name of the database owner. Example: dolthub
- database (path): The name of the database. Example: ip-to-country
- q (query string): The SQL query to execute. Example: SHOW TABLES

Responses:
- Success
- Bad request. The request was invalid or could not be processed.
GET /api/v1alpha1/{owner}/{database}/{ref}

This endpoint executes a SQL read query against a specific ref (branch, tag, or commit hash).

Parameters:
- owner (path): The name of the database owner. Example: dolthub
- database (path): The name of the database. Example: ip-to-country
- ref (path): The database ref to execute the query against. Example: newbranch
- q (query string): The SQL query to execute. Example: SELECT * FROM IPv4ToCountry WHERE CountryCode2Letter = 'AU'

Responses:
- Success
- Bad request. The request was invalid or could not be processed.
POST /api/v1alpha1/{owner}/{database}/write/{from_branch}/{to_branch}

This endpoint executes a SQL write query asynchronously and returns an operation you can poll.

Parameters:
- owner (path): The name of the database owner. Example: dolthub
- database (path): The name of the database. Example: SHAQ
- from_branch (path): The base branch. Example: main
- to_branch (path): The branch to write to. Will be created from the from_branch if it doesn't exist. Example: feature
- q (query string): The SQL query to execute. Use the request body instead for larger queries. Example: UPDATE player_season_stat_totals SET player_id=714287 WHERE player_id=15404617
- query (request body): SQL write query to execute. Can be used in place of the query parameter for larger queries. Example: UPDATE player_season_stat_totals SET player_id=714287 WHERE player_id=15404617

Responses:
- Success
- Bad request. The request was invalid or could not be processed.
GET /api/v1alpha1/{owner}/{database}/write?operationName={operation_name}

This endpoint polls a write operation to check the execution status of its query.

Parameters:
- owner (path): The name of the database owner. Example: dolthub
- database (path): The name of the database. Example: SHAQ
- operationName (query string): The name of the operation. Example: operations/72abb56b-d478-43ae-9a2d-c9602184c7ab

Responses:
- Success
- Bad request. The request was invalid or could not be processed.
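Because writes are asynchronous, a client submits the query and then polls the returned operation until it reports done. Here is a minimal sketch with Python's requests library, assuming the request and response shapes shown in the samples below (the poll interval and token placeholder are arbitrary):

import time
import requests

owner, database = "dolthub", "SHAQ"
from_branch, to_branch = "main", "feature"
headers = {"authorization": "token [TOKEN YOU COPIED]"}

# Submit the write; larger queries go in the JSON body rather than the URL.
res = requests.post(
    "https://www.dolthub.com/api/v1alpha1/{}/{}/write/{}/{}".format(
        owner, database, from_branch, to_branch
    ),
    json={"query": "UPDATE player_season_stat_totals SET player_id=714287 WHERE player_id=15404617"},
    headers=headers,
)
op_name = res.json()["operation_name"]

# Poll the operation endpoint until the write reports done.
while True:
    poll = requests.get(
        "https://www.dolthub.com/api/v1alpha1/{}/{}/write".format(owner, database),
        params={"operationName": op_name},
        headers=headers,
    ).json()
    if poll.get("done"):
        print(poll["res_details"]["query_execution_message"])
        break
    time.sleep(3)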
GET /api/v1alpha1/{owner}/{database} HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Accept: */*
{
"query_execution_status": "Success",
"query_execution_message": "",
"repository_owner": "dolthub",
"repository_name": "ip-to-country",
"commit_ref": "main",
"sql_query:": "SHOW TABLES",
"schema": [
{
"columnName": "Tables_in_dolt",
"columnType": "longtext"
}
],
"rows": [
{
"[columnName]": "IPv4ToCountry"
}
]
}

GET /api/v1alpha1/{owner}/{database}/{ref} HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Accept: */*
POST /api/v1alpha1/{owner}/{database}/write/{from_branch}/{to_branch} HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 90
{
"query": "UPDATE player_season_stat_totals SET player_id=714287 WHERE player_id=15404617"
}

GET /api/v1alpha1/{owner}/{database}/write?operationName=operations%2F72abb56b-d478-43ae-9a2d-c9602184c7ab HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Accept: */*
{
"query_execution_status": "Success",
"query_execution_message": "",
"repository_owner": "dolthub",
"repository_name": "ip-to-country",
"commit_ref": "newbranch",
"sql_query:": "SELECT * FROM IPv4ToCountry WHERE CountryCode2Letter = 'AU'",
"schema": [
{
"columnName": "IPFrom",
"columnType": "Int"
}
],
"rows": [
{
"[columnName]": "16777216"
}
]
}

{
"query_execution_status": "Success",
"query_execution_message": "Operation created. Poll the operation endpoint to see the execution status of this query.",
"repository_owner": "dolthub",
"repository_name": "SHAQ",
"from_branch_name": "main",
"to_branch_name": "main",
"query:": "UPDATE player_season_stat_totals SET player_id=714287 WHERE player_id=15404617",
"operation_name:": "operations/72abb56b-d478-43ae-9a2d-c9602184c7ab"
}{
"_id": "operations/72abb56b-d478-43ae-9a2d-c9602184c7ab",
"done": true,
"res_details": {
"query_execution_status": "Success",
"query_execution_message": "Query OK, 2 rows affected. Rows matched: 2 Changed: 2 Warnings: 0.",
"owner_name": "dolthub",
"repository_name": "SHAQ",
"from_commit_id": "518ue176ec13qf563e87uoerkqqepood",
"to_commit_id": "5sp7i0bov6itma2u04dpk15ui24lrigp"
}
}

POST /api/v1alpha1/database

This API allows you to create a new Dolt database.

Request body:
- ownerName: The name of the owner of the database. Example: dolthub
- repoName: The name of the repository for the database. Example: museum-collections
- description: A description of the database. Example: Records from museums around the world.
- visibility: The visibility of the database (public or private). Example: public

Responses:
- Database created successfully.
- Bad request. The request was invalid or could not be processed.
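For example, a minimal sketch of the same call with Python's requests library (field names follow the sample request later in this section; substitute your own token):

import requests

# Create a new public database owned by "dolthub".
res = requests.post(
    "https://www.dolthub.com/api/v1alpha1/database",
    json={
        "ownerName": "dolthub",
        "repoName": "museum-collections",
        "description": "Records from museums around the world.",
        "visibility": "public",
    },
    headers={"authorization": "token [TOKEN YOU COPIED]"},
)
print(res.json())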
This API allows you to create a new pull request.
POST /api/v1alpha1/{owner}/{database}/pulls

Parameters:
- owner (path): The name of the owner of the database. Example: dolthub
- database (path): The name of the database. Example: museum-collections

Request body:
- title: The title of the pull request. Example: LACMA data
- description: The description of the pull request. Example: Records from the Los Angeles County Museum of Art.
- fromBranchOwnerName: The name of the owner of the source branch. Example: liuliu
- fromBranchRepoName: The name of the database containing the source branch. Example: museum-collections
- fromBranchName: The name of the source branch. Example: lacma
- toBranchOwnerName: The name of the owner of the destination branch. Example: dolthub
- toBranchRepoName: The name of the database containing the destination branch. Example: museum-collections
- toBranchName: The name of the destination branch. Example: main

Responses:
- Pull request created successfully.
- Bad request. The request was invalid or could not be processed.
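A minimal sketch with Python's requests library, using the body fields from the sample request later in this section:

import requests

owner, database = "dolthub", "museum-collections"

# Open a pull request from liuliu's lacma branch into dolthub's main branch.
res = requests.post(
    "https://www.dolthub.com/api/v1alpha1/{}/{}/pulls".format(owner, database),
    json={
        "title": "LACMA data",
        "description": "Records from the Los Angeles County Museum of Art.",
        "fromBranchOwnerName": "liuliu",
        "fromBranchRepoName": "museum-collections",
        "fromBranchName": "lacma",
        "toBranchOwnerName": "dolthub",
        "toBranchRepoName": "museum-collections",
        "toBranchName": "main",
    },
    headers={"authorization": "token [TOKEN YOU COPIED]"},
)
print(res.json()["pull_id"])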
Get information about a specific pull request.
GET /api/v1alpha1/{owner}/{database}/pulls/{pull_id}

Parameters:
- owner (path): The name of the database owner. Example: dolthub
- database (path): The name of the database. Example: museum-collections
- pull_id (path): ID of the pull request. Example: 1

Responses:
- Success
- Bad request. The request was invalid or could not be processed.
List pull requests
GET /api/v1alpha1/{owner}/{database}/pulls

Parameters:
- owner (path): The name of the database owner. Example: dolthub
- database (path): The name of the database. Example: museum-collections
- pageToken (query string): The pageToken to get the next page of results. Example: AWE2Nm9uMWQ23FSQ7oRTbCXYTLLvNDhNs5hIFebQFI66FW-SYXGSlh3XcUQ8zmtLQ00QgD0X5FZr5ZTAhvT2FfRrGog7OuUno9wdTIXFQpkkX0opYoJL6Vrn2emlXkMBTiZYMqChyhR92_Yxd58B0w5nMrfXFf8v7xfAkN46hw
- Filter pulls by state, can be Open, Closed, or Merged (query string). Example: Open
- Filter pulls by review status, can be Approved, AssignedReviewer, Rejected, or Reviewed (query string). Example: Approved
- Search by pull request title or author name (query string). Example: test

Responses:
- Success
- Bad request. The request was invalid or could not be processed.
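Results are paginated: each page includes a next_page_token that you pass back via the pageToken parameter. A minimal pagination sketch with Python's requests library, assuming the response shape shown in the samples below:

import requests

owner, database = "dolthub", "museum-collections"
headers = {"authorization": "token [TOKEN YOU COPIED]"}

# Collect every pull request, following next_page_token until it runs out.
pulls, token = [], None
while True:
    params = {"pageToken": token} if token else {}
    page = requests.get(
        "https://www.dolthub.com/api/v1alpha1/{}/{}/pulls".format(owner, database),
        params=params,
        headers=headers,
    ).json()
    pulls.extend(page.get("pulls", []))
    token = page.get("next_page_token")
    if not token:
        break
print(len(pulls), "pull requests")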
POST /api/v1alpha1/{owner}/{database}/pulls/{pull_id}/comments

This endpoint adds a comment to a pull request.

Parameters:
- owner (path): Owner of the database. Example: dolthub
- database (path): Database name. Example: museum-collections
- pull_id (path): Pull request ID. Example: 66

Request body:
- comment: Comment to be added to the pull request. Example: The pull request looks good!

Responses:
- Success
- Bad request. The request was invalid or could not be processed.
Poll the operation to check if the merge operation is done
GET /api/v1alpha1/{owner}/{database}/pulls/{pull_id}/merge?operationName={operation_name}

Parameters:
- owner (path): The owner of the database. Example: dolthub
- database (path): The database name. Example: museum-collections
- pull_id (path): The ID of the pull request. Example: 66
- operationName (query string): The operation name to check. Example: repositoryOwners/dolthub/repositories/museum-collections/jobs/b09a9221-9dcb-4a15-9ca8-a64656946f12

Responses:
- The status of the merge operation
- Bad request. The request was invalid or could not be processed.
This endpoint merges a pull request into the destination branch.
POST /api/v1alpha1/{owner}/{database}/pulls/{pull_id}/merge

Parameters:
- owner (path): The name of the database owner. Example: dolthub
- database (path): The name of the database. Example: museum-collections
- pull_id (path): The ID of the pull request to merge. Example: 66

Responses:
- The pull request was merged successfully.
- Bad request. The request was invalid or could not be processed.
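Merging is asynchronous as well: the POST returns an operation_name that you poll on the merge endpoint until the job finishes. A minimal sketch, assuming the response fields shown in the samples below:

import time
import requests

owner, database, pull_id = "dolthub", "museum-collections", "66"
headers = {"authorization": "token [TOKEN YOU COPIED]"}
merge_url = "https://www.dolthub.com/api/v1alpha1/{}/{}/pulls/{}/merge".format(
    owner, database, pull_id
)

# Kick off the merge job; the response names the operation to poll.
op_name = requests.post(merge_url, headers=headers).json()["operation_name"]

# Poll the same endpoint until the job leaves "In Progress".
while True:
    status = requests.get(
        merge_url, params={"operationName": op_name}, headers=headers
    ).json()
    if status.get("job_status") != "In Progress":
        print(status.get("job_status"))
        break
    time.sleep(3)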
This endpoint allows you to upload a file to DoltHub to create, update, overwrite, or replace a table.
POST /api/v1alpha1/{owner}/{database}/upload

Parameters:
- owner (path): The name of the database owner. Example: dolthub
- database (path): The name of the database. Example: museum-collections

Request body (multipart/form-data):
- file: The file to be uploaded.

Responses:
- Pull request created successfully.
- Bad request. The request was invalid or could not be processed.
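A minimal sketch of the multipart upload with Python's requests library; the file part and JSON params part mirror the sample request later in this section, and the exact part layout here is an assumption based on that sample:

import json
import requests

owner, database = "dolthub", "museum-collections"

# Import options for the uploaded file, as in the sample request body.
import_params = {
    "tableName": "lacma",
    "fileName": "lacma.csv",
    "branchName": "main",
    "fileType": "Csv",
    "importOp": "Create",
    "primaryKeys": ["id"],
}

with open("lacma.csv", "rb") as f:
    res = requests.post(
        "https://www.dolthub.com/api/v1alpha1/{}/{}/upload".format(owner, database),
        files={
            "file": ("lacma.csv", f, "text/csv"),
            "params": (None, json.dumps(import_params), "application/json"),
        },
        headers={"authorization": "token [TOKEN YOU COPIED]"},
    )
print(res.json())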
Poll the operation to check if the file import operation is done
GET /api/v1alpha1/{owner}/{database}/upload?branch={branch}&operationName={operation_name}

Parameters:
- owner (path): The owner of the database. Example: dolthub
- database (path): The database name. Example: museum-collections
- branch (query string): The name of the branch to upload the file to. Example: main
- operationName (query string): The operation name to check. Example: repositoryOwners/dolthub/repositories/museum-collections/jobs/b09a9221-9dcb-4a15-9ca8-a64656946f12

Responses:
- The status of the file import operation
- Bad request. The request was invalid or could not be processed.
This API endpoint allows you to create a new branch in your database.
POST /api/v1alpha1/{owner}/{database}/branches

Parameters:
- owner (path): The name of the owner of the database. Example: dolthub
- database (path): The name of the database. Example: museum-collections

Request body:
- revisionType: The type of revision, can be either 'branch', 'ref' or 'commit'. Example: branch
- revisionName: The name of revision. If revisionType is 'branch', this is the name of the base branch. If revisionType is 'commit', this is the commit hash. Example: main
- newBranchName: The name of the new branch. Example: feature-branch

Responses:
- Success
- Bad request. The request was invalid or could not be processed.
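For example, a minimal sketch with Python's requests library, using the body fields from the sample request later in this section:

import requests

owner, database = "dolthub", "museum-collections"

# Create feature-branch from the tip of main.
res = requests.post(
    "https://www.dolthub.com/api/v1alpha1/{}/{}/branches".format(owner, database),
    json={
        "revisionType": "branch",
        "revisionName": "main",
        "newBranchName": "feature-branch",
    },
    headers={"authorization": "token [TOKEN YOU COPIED]"},
)
print(res.json())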
This API endpoint allows you to list all branches in your database.
GET /api/v1alpha1/{owner}/{database}/branches

Parameters:
- owner (path): The name of the owner of the database. Example: dolthub
- database (path): The name of the database. Example: museum-collections

Responses:
- Success
- Bad request. The request was invalid or could not be processed.
This API endpoint allows you to create a new tag in your database.
POST /api/v1alpha1/{owner}/{database}/tags

Parameters:
- owner (path): The name of the owner of the database. Example: dolthub
- database (path): The name of the database. Example: museum-collections

Request body:
- tagName: The name of the tag. Example: v1
- tagMessage: The description of the tag. Example: First version of the database
- revisionType: The type of revision, can be either 'branch', 'ref' or 'commit'. Example: branch
- revisionName: The name of revision. If revisionType is 'branch', this is the name of the base branch. If revisionType is 'commit', this is the commit hash. Example: main

Responses:
- Success
- Bad request. The request was invalid or could not be processed.
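As a sketch in Python's requests library, with the body fields taken from the sample request later in this section:

import requests

owner, database = "dolthub", "museum-collections"

# Tag the tip of main as v1.
res = requests.post(
    "https://www.dolthub.com/api/v1alpha1/{}/{}/tags".format(owner, database),
    json={
        "tagName": "v1",
        "tagMessage": "First version of the database",
        "revisionType": "branch",
        "revisionName": "main",
    },
    headers={"authorization": "token [TOKEN YOU COPIED]"},
)
print(res.json())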
This API endpoint allows you to list all releases in your database.
GET /api/v1alpha1/{owner}/{database}/releases

Parameters:
- owner (path): The name of the owner of the database. Example: dolthub
- database (path): The name of the database. Example: museum-collections
- pageToken (query string): The next page token. Example: 1234567890

Responses:
- Success
- Bad request. The request was invalid or could not be processed.
This API endpoint allows you to list all jobs in your database.
GET /api/v1alpha1/{owner}/{database}/jobs

Parameters:
- owner (path): The name of the owner of the database. Example: dolthub
- database (path): The name of the database. Example: museum-collections

Responses:
- Success
- Bad request. The request was invalid or could not be processed.
GET /api/v1alpha1/users/{username}/operations

This API endpoint allows you to list a user's operations.

Parameters:
- username (path): The name of the user who initiated the operations. This user's name must match the user associated with the API token. Example: liuliu
- Specific type of operation for this query (query string). Supported operation types are SqlWrite, SqlRead, Import, Merge, Migrate. Example: SqlWrite
- Token for the next page of results (query string). Example: AWE2Nm9uMWQ26pQQpqLNLXu7a60647lpiZoDFrf5WDGHo68XNC-rfr068rymbEdUHCXidRxx7_fwGBMSzQi6C_D50NcJFXm0BwRnGmmHEL4T4xxkWoX3sL5mKD-PuMRuxeHPsR0NB5Rzi70jGzblVlfBTIHPJ20c630pNLrI_spxH0tYTzMnQ4uPpr3ub9P50FEH9i4Au0gUkmvj8NUibbGWi-R1AJYplEPr=

Responses:
- Success
- Bad request. The request was invalid or could not be processed.
PATCH /api/v1alpha1/{owner}/{database}/pulls/{pull_id}

This API endpoint allows you to update a pull request's title, description, or state.

Parameters:
- owner (path): The name of the database owner. Example: dolthub
- database (path): The name of the database. Example: museum-collections
- pull_id (path): ID of the pull request to update. Example: 1

Request body:
- title: The updated title of the pull request. Example: Added new data
- description: The updated description of the pull request. Example: Added new data from LACMA museum.
- state: The updated state of the pull request (can only update to 'closed'). Example: closed

Responses:
- Success
- Bad request. The request was invalid or could not be processed.
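A minimal sketch with Python's requests library, mirroring the sample PATCH request later in this section:

import requests

owner, database, pull_id = "dolthub", "museum-collections", "1"

# Retitle pull request 1 and close it.
res = requests.patch(
    "https://www.dolthub.com/api/v1alpha1/{}/{}/pulls/{}".format(owner, database, pull_id),
    json={
        "title": "Added new data",
        "description": "Added new data from LACMA museum.",
        "state": "closed",
    },
    headers={"authorization": "token [TOKEN YOU COPIED]"},
)
print(res.json())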
GET /api/v1alpha1/{owner}/{database}/tags

This API endpoint allows you to list all tags in your database.

Parameters:
- owner (path): The name of the owner of the database. Example: dolthub
- database (path): The name of the database. Example: museum-collections

Responses:
- Success
- Bad request. The request was invalid or could not be processed.
{
"status": "Success",
"description": "Records from museums around the world.",
"repository_owner": "dolthub",
"repository_name": "museum-collections",
"visibility": "public"
}

{
"status": "Success",
"title": "LACMA data",
"description": "Records from the Los Angeles County of Museum.",
"from_owner_name": "liuliu",
"from_repository_name": "museum-collections",
"from_branch_name": "lacma",
"to_owner_name": "dolthub",
"to_repository_name": "museum-collections",
"to_branch_name": "main",
"pull_id": "66"
}

{
"status": "Success",
"database_owner": "dolthub",
"database_name": "museum-collections",
"pull_id": "1",
"title:": "Added new data",
"description:": "Added missing museums, sourced from museums.com",
"state": "merged",
"from_branch_owner": "liuliu",
"from_branch_database": "museum-collections",
"from_branch_name": "feature",
"to_branch_owner": "dolthub",
"to_branch_database": "museum-collections",
"to_branch_name": "main",
"created_at": "2023-07-01T18:00:00Z",
"author": "liuliu"
}

{
"status": "Success",
"database_owner": "dolthub",
"database_name": "museum-collections",
"pulls": [
{
"pull_id": "1",
"title:": "Added new data",
"description:": "Added missing museums, sourced from museums.com",
"state": "merged",
"created_at": "2023-07-01T18:00:00Z",
"creator": "liuliu"
}
],
"next_page_token": "AWE2Nm9uMWQ23FSQ7oRTbCXYTLLvNDhNs5hIFebQFI66FW-SYXGSlh3XcUQ8zmtLQ00QgD0X5FZr5ZTAhvT2FfRrGog7OuUno9wdTIXFQpkkX0opYoJL6Vrn2emlXkMBTiZYMqChyhR92_Yxd58B0w5nMrfXFf8v7xfAkN46hw"
}

{
"status": "Success",
"repository_owner": "dolthub",
"repository_name": "museum-collections",
"pull_id": "66",
"comment": "The pull request looks good!"
}

{
"status": "Success",
"operation_name": "repositoryOwners/dolthub/repositories/museum-collections/jobs/b09a9221-9dcb-4a15-9ca8-a64656946f12",
"job_created": true,
"database_owner": "dolthub",
"database_name": "museum-collections",
"pull_id": "66",
"job_status": "In Progress"
}

{
"status": "Success",
"database_owner": "dolthub",
"database_name": "museum-collections",
"pull_id": "66",
"operation_name": "repositoryOwners/dolthub/repositories/museum-collections/jobs/b09a9221-9dcb-4a15-9ca8-a64656946f12",
"user_operation_name": "users/liuliu/userOperations/5e4834c9-375d-4bbd-bdaf-09eb0734127c"
}

{
"status": "Success",
"database_owner": "dolthub",
"database_name": "museum-collections",
"branch_name": "main",
"table_name": "lacma",
"operation_name": "repositoryOwners/dolthub/repositories/museum-collections/jobs/b09a9221-9dcb-4a15-9ca8-a64656946f12",
"user_operation_name": "users/liuliu/userOperations/5e4834c9-375d-4bbd-bdaf-09eb0734127c"
}

{
"status": "Success",
"operation_name": "repositoryOwners/dolthub/repositories/museum-collections/jobs/b09a9221-9dcb-4a15-9ca8-a64656946f12",
"job_created": true,
"database_owner": "dolthub",
"database_name": "museum-collections",
"pull_id": "66",
"job_status": "Completed"
}

{
"status": "Success",
"database_owner": "dolthub",
"database_name": "museum-collections",
"new_branch_name": "feature-branch",
"revision_type": "branch",
"revision_name": "main"
}

{
"status": "Success",
"database_owner": "dolthub",
"database_name": "museum-collections",
"tag_name": "v1",
"tag_description": "First version of the database",
"revision_type": "branch",
"revision_name": "main"
}

{
"status": "Success",
"database_owner": "dolthub",
"database_name": "museum-collections",
"next_page_token": "1234567890",
"releases": [
{
"release_title": "my-release-v1",
"release_tag": "v1",
"release_commit_sha": "1234567890",
"release_description": "First version of the database",
"release_created_at": "2023-03-31T18:00:00Z",
"release_updated_at": "2023-03-31T18:00:00Z"
}
]
}

GET /api/v1alpha1/users/{username}/operations HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Accept: */*
PATCH /api/v1alpha1/{owner}/{database}/pulls/{pull_id} HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 93
{
"title": "Added new data",
"description": "Added new data from LACMA museum.",
"state": "closed"
}

GET /api/v1alpha1/{owner}/{database}/tags HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Accept: */*
POST /api/v1alpha1/database HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 132
{
"description": "Records from museums around the world.",
"ownerName": "dolthub",
"repoName": "museum-collections",
"visibility": "public"
}

POST /api/v1alpha1/{owner}/{database}/pulls HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 275
{
"title": "LACMA data",
"description": "Records from the Los Angeles County of Museum.",
"fromBranchOwnerName": "liuliu",
"fromBranchRepoName": "museum-collections",
"fromBranchName": "lacma",
"toBranchOwnerName": "dolthub",
"toBranchRepoName": "museum-collections",
"toBranchName": "main"
}

GET /api/v1alpha1/{owner}/{database}/pulls/{pull_id} HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Accept: */*
GET /api/v1alpha1/{owner}/{database}/pulls HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Accept: */*
POST /api/v1alpha1/{owner}/{database}/pulls/{pull_id}/comments HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 42
{
"comment": "The pull request looks good!"
}

GET /api/v1alpha1/{owner}/{database}/pulls/{pull_id}/merge?operationName=repositoryOwners%2Fdolthub%2Frepositories%2Fmuseum-collections%2Fjobs%2Fb09a9221-9dcb-4a15-9ca8-a64656946f12 HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Accept: */*
POST /api/v1alpha1/{owner}/{database}/pulls/{pull_id}/merge HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Accept: */*
POST /api/v1alpha1/{owner}/{database}/upload HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Content-Type: multipart/form-data
Accept: */*
Content-Length: 149
{
"file": "binary",
"params": {
"tableName": "lacma",
"fileName": "lacma.csv",
"branchName": "main",
"fileType": "Csv",
"importOp": "Create",
"primaryKeys": [
"id"
]
}
}

GET /api/v1alpha1/{owner}/{database}/upload?branch=main&operationName=repositoryOwners%2Fdolthub%2Frepositories%2Fmuseum-collections%2Fjobs%2Fb09a9221-9dcb-4a15-9ca8-a64656946f12 HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Accept: */*
POST /api/v1alpha1/{owner}/{database}/branches HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 80
{
"revisionType": "branch",
"revisionName": "main",
"newBranchName": "feature-branch"
}

GET /api/v1alpha1/{owner}/{database}/branches HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Accept: */*
POST /api/v1alpha1/{owner}/{database}/tags HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 107
{
"tagName": "v1",
"tagMessage": "First version of the database",
"revisionType": "branch",
"revisionName": "main"
}

GET /api/v1alpha1/{owner}/{database}/releases HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Accept: */*
GET /api/v1alpha1/{owner}/{database}/jobs HTTP/1.1
Host: www.dolthub.com
authorization: YOUR_API_KEY
Accept: */*
{
"status": "Success",
"database_owner": "dolthub",
"database_name": "museum-collections",
"branches": [
{
"branch_name": "main"
}
]
}

{
"status": "Success",
"database_owner": "dolthub",
"database_name": "museum-collections",
"jobs": [
{
"job_id": "repositoryOwners/dolthub/repositories/museum-collections/jobs/aa37149c-61c3-4ce2-b3d8-694d2a152257",
"creator": "liuliu",
"description": "Merge pull request #9",
"job_type": "Merge",
"status": "Failed",
"created_at": "2024-01-09T22:19:49.000Z",
"error": "failed to push to branch"
}
],
"next_page_token": "AWE2Nm9uMWQ2M84_Q_ajmYOdCDIg_Ac8OuedPyAGoTT3TsBNnTSE29QPb6oJmZdbjYwdFjTwu6_ioVx4nsp3eCPoO5zyATKGsauocvy4onXjoWGfqmatl2dcm-2Ks45NPT0qRPu37HjVcaC0Qj2X5_KHcYI70fzOLn1RogexmtBlf_AtI3os4DntzhZtfp9GFtHiVekppo_26viXiKcjy0DpKay5"
}

{
"status": "Success",
"operations": [
{
"operation_name": "users/liuliu/userOperations/5e4834c9-375d-4bbd-bdaf-09eb0734127c",
"creator": "liuliu",
"description": "Run query CREATE TABLE tablename (\\n pk INT,\\n col1 VARCHAR(255),\\n PRIMARY KEY (pk)\\n);",
"operation_type": "SqlWrite",
"operation_status": "liuliu",
"error": "table with name tablename already exists",
"metadata": "{'from_branch_name':'main','from_repo_name':'api-db','from_repo_owner':'liuliu','sql_query':'CREATE TABLE tablename (\\n pk INT,\\n col1 VARCHAR(255),\\n PRIMARY KEY (pk)\\n);','to_branch_name':'main','to_repo_name':'api-db','to_repo_owner':'liuliu'}",
"created_at": "2024-01-09T22:19:49.000Z",
"updated_at": "2024-01-09T22:19:50.000Z"
}
],
"next_page_token": "AWE2Nm9uMWQ26pQQpqLNLXu7a60647lpiZoDFrf5WDGHo68XNC-rfr068rymbEdUHCXidRxx7_fwGBMSzQi6C_D50NcJFXm0BwRnGmmHEL4T4xxkWoX3sL5mKD-PuMRuxeHPsR0NB5Rzi70jGzblVlfBTIHPJ20c630pNLrI_spxH0tYTzMnQ4uPpr3ub9P50FEH9i4Au0gUkmvj8NUibbGWi-R1AJYplEPr="
}

{
"status": "Success",
"database_owner": "dolthub",
"database_name": "museum-collections",
"pull_id": "1",
"title": "Added new data",
"description": "Added new data from LACMA museum.",
"state": "closed"
}

{
"status": "Success",
"database_owner": "dolthub",
"database_name": "museum-collections",
"tags": [
{
"tag_name": "v1",
"tag_description": "First version of the database",
"tagged_at": "2023-03-31T18:00:00Z"
}
]
}