dbNet
dbNet is a web-based SQL IDE that uses Go as the backend and your browser (or Electron) as the front-end. I built it because I was dissatisfied with the database clients out there: a lot of them are top-heavy, unintuitive, slow or expensive. dbNet aims to be smart and useful, especially for analysis and for simply querying any SQL database.
See the GitHub Repo here: https://github.com/dbnet-io/dbnet
Getting Started
There are a few things to set up before running dbNet:
- Install dbNet (see the Installation page for more details)
- Set up your database connection(s) (see the Connections section for more details)
- Run the application with dbnet serve (see the Server page for more details); a quick end-to-end sketch follows below
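Putting these steps together, a minimal end-to-end session looks like this (the connection URL is a placeholder; substitute your own credentials):
# install (macOS example; see the other installation options below)
brew install dbnet-io/dbnet/dbnet
# set a connection as an environment variable
export POSTGRES="postgresql://myuser:mypass@host.ip:5432/mydatabase?sslmode=require"
# confirm the connection is detected, then start the server
dbnet conns list
dbnet serve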
Installation
Brew on Mac
brew install dbnet-io/dbnet/dbnet
# You're good to go!
dbnet -h
Scoop on Windows
scoop bucket add dbnet https://github.com/dbnet-io/scoop-dbnet.git
scoop install dbnet
# You're good to go!
dbnet -h
Docker
docker pull dbnetio/dbnet
# You're good to go!
docker run --rm -it dbnetio/dbnet -h
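To run the server itself from the Docker image, publish the port and pass your connection variables through to the container. A minimal sketch (the port and connection URL are placeholders; adjust to your setup):
docker run --rm -it \
  -p 5987:5987 \
  -e POSTGRES="postgresql://myuser:mypass@host.ip:5432/mydatabase" \
  dbnetio/dbnet serve --host 0.0.0.0 --port 5987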
Build from source
git clone https://github.com/dbnet-io/dbnet.git
cd dbnet
go mod tidy
# Ensure you have npm & yarn installed with NodeJS 16+
bash build.sh
./dbnet -h
Telemetry
To disable telemetry, set the DBNET_TELEMETRY=FALSE environment variable. When enabled, the following properties are collected each time the dbnet serve command is run:
- version
- os
- action
- machine_id
See here for source code.
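For example, to opt out for the current shell session before starting the server:
export DBNET_TELEMETRY=FALSE
dbnet serve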
CLI
dbnet -h
$ dbnet -h
dbnet - A database client that lets your SQL superpowers shine | https://github.com/dbnet-io/dbnet
Version 0.0.40
Usage:
dbnet [conns|serve|exec]
Subcommands:
conns list & test local connections
serve launch the dbnet server
exec execute a SQL query
Flags:
--version Displays the program version string.
-h --help Displays help with available flag, subcommand, and positional value parameters.
dbnet conns
conns - list & test local connections
Usage:
conns [list|test]
Subcommands:
list list local connections detected
test test a local connection
Flags:
--version Displays the program version string.
-h --help Displays help with available flag, subcommand, and positional value parameters.
dbnet exec
exec - execute a SQL query
Usage:
exec [conn]
Positional Variables:
conn The connection name (Required)
Flags:
--version Displays the program version string.
-h --help Displays help with available flag, subcommand, and positional value parameters.
--query The SQL query to execute
--file The SQL file to execute
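For example, with a connection named MY_PG (the connection name, query and file path below are placeholders):
# run an inline query
dbnet exec MY_PG --query "select count(*) from my_table"
# or run a SQL file
dbnet exec MY_PG --file ./my_query.sql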
dbnet serve
serve - launch the dbnet server
Flags:
--version Displays the program version string.
-h --help Displays help with available flag, subcommand, and positional value parameters.
--host The host to use. (default: 0.0.0.0)
--port The port to use (default: 5987)
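To bind to a specific interface or port instead of the defaults, pass the flags explicitly (the values below are illustrative):
dbnet serve --host 127.0.0.1 --port 8080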
Running dbnet serve:
2023-06-19 09:35:40 INF Serving @ http://0.0.0.0:5987
{"time":"2023-06-19T09:35:40.107434-03:00","level":"INFO","prefix":"echo","message":"Echo (v5.0.0-alpha). High performance, minimalist Go web framework https://echo.labstack.com"}
{"time":"2023-06-19T09:35:40.107443-03:00","level":"INFO","prefix":"echo","message":"http(s) server started on [::]:5987"}
{"time":"2023-06-19T09:35:40.412349-03:00","level":"INFO","id":"1687178140410519000","remote_ip":"::1","host":"localhost:5987","method":"GET","uri":"/static/css/2.3bc1b4fc.chunk.css","user_agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36","status":200,"error":"","latency":1798708,"latency_human":"1.798708ms","bytes_in":0,"bytes_out":457971}
Connections
- Clickhouse
- Google BigQuery
- MySQL
- Oracle
- Redshift
- PostgreSQL
- SQLite
- DuckDB
- MotherDuck
- SQL Server
- Snowflake
Environment
dbNet looks for connection credentials in several places:
- Environment Variables
- DBT Profiles Files (located at ~/.dbt/profiles.yml); an example entry is shown below
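For reference, here is a minimal sketch of a ~/.dbt/profiles.yml Postgres entry that dbNet can pick up (the profile name and credentials are placeholders, and the connection name dbNet displays depends on your profile setup):
# append an example profile entry (review before running; this modifies your dbt profiles file)
cat >> ~/.dbt/profiles.yml <<'EOF'
localhost_dev:
  target: dev
  outputs:
    dev:
      type: postgres
      host: localhost
      user: myuser
      password: mypass
      port: 5432
      dbname: mydatabase
      schema: public
EOF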
The easiest way to see your detected connections is to use the dbnet conns sub-command, covered in the next section.
Managing Connections
dbNet makes it easy to list and test connections.
$ dbnet conns
conns - list & test local connections
Usage:
conns [list|test]
Subcommands:
list list local connections detected
test test a local connection
Flags:
--version Displays the program version string.
-h --help Displays help with available flag, subcommand, and positional value parameters.
Set Connections
Here we can easily set a connection as an environment variable. For example, for a Postgres database, we can do:
export POSTGRES="postgresql://myuser:mypass@host.ip:5432/mydatabase?sslmode=require"
See the respective documentation for your database of interest (links above).
List Connections
Once connections are set, we can run the dbnet conns list command to list our detected connections:
$ dbnet conns list
+--------------------------+-----------------+-------------------+
| CONN NAME | CONN TYPE | SOURCE |
+--------------------------+-----------------+-------------------+
| FINANCE_BQ | DB - BigQuery | env variable |
| LOCALHOST_DEV | DB - PostgreSQL | dbt profiles yaml |
| MSSQL | DB - SQLServer | env variable |
| MYSQL | DB - MySQL | env variable |
| ORACLE_DB | DB - Oracle | env variable |
| MY_PG | DB - PostgreSQL | env variable |
+--------------------------+-----------------+-------------------+
Test Connections
We can also test a connection by running the dbnet conns test command:
$ dbnet conns test LOCALHOST_DEV
9:04AM INF success!
BigQuery
Setup
The following credentials keys are accepted:
- project (required) -> The GCP project ID for the project
- key_file (required) -> The Service Account JSON
- dataset (required) -> The default dataset (like a schema)
Here is an example of setting a connection named BIGQUERY:
# YAML format
export BIGQUERY="{type: bigquery, project: <project>, dataset: <dataset>, key_file: /path/to/service.account.json}"
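Once set, you can verify that dbNet detects and can reach the connection:
dbnet conns test BIGQUERY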
Clickhouse
Setup
The following credentials keys are accepted:
- host (required) -> The hostname / IP of the instance
- user (required) -> The username to access the instance
- database (required) -> The database name of the instance
- schema (optional) -> The default schema to use
- password (optional) -> The password to access the instance
- port (optional) -> The port of the instance. Default is 9000.
Here is an example of setting a connection named CLICKHOUSE:
# URL format
export CLICKHOUSE="clickhouse://myuser:mypass@host.ip:9000/mydatabase"
# YAML format
export CLICKHOUSE="{type: clickhouse, host: <host>, user:<user>, database:<database>, password:<password>, port:<port>}"
DuckDB
Setup
The following credentials keys are accepted:
- instance (required) -> The local file path of the database file
- schema (optional) -> The default schema to use.
- duckdb_version (optional) -> The CLI version of DuckDB to use. You can also specify the env. variable DUCKDB_VERSION.
- read_only (optional) -> Whether to open the connection in readonly mode. Accepts true or false. Default is false.
- interactive (optional) -> Whether to communicate with the DuckDB CLI via interactive mode instead of reopening the connection each time. Accepts true or false. Default is false.
Here is an example of setting a connection named DUCKDB:
export DUCKDB='duckdb:///path/to/file.db?schema=public'
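If you prefer the YAML format used for the other connectors, an equivalent sketch (assuming the same key names listed above apply) would be:
# YAML format (assumed key names; see the list above)
export DUCKDB="{type: duckdb, instance: /path/to/file.db, read_only: true}"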
MotherDuck
Setup
The following credentials keys are accepted:
- database (required) -> The MotherDuck database name
- motherduck_token (required) -> The service token. See here for instructions on getting it.
- schema (optional) -> The default schema to use.
- duckdb_version (optional) -> The CLI version of DuckDB to use. You can also specify the env. variable DUCKDB_VERSION.
- read_only (optional) -> Whether to open the connection in readonly mode. Accepts true or false. Default is false.
- interactive (optional) -> Whether to communicate with the DuckDB CLI via interactive mode instead of reopening the connection each time. Accepts true or false. Default is true.
Here is an example of setting a connection named MOTHERDUCK:
export MOTHERDUCK='motherduck://my_db?motherduck_token=xxxxxxxxxxxx'
Potential Issue
If you have a .duckdbrc file, which runs commands whenever the DuckDB CLI is invoked, this may interfere with normal dbNet querying. If you are facing weird issues and have this file, try again after deleting it.
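A quick way to check for the file and move it aside (rather than deleting it outright) while you test:
# check whether a .duckdbrc exists and temporarily move it out of the way
ls -l ~/.duckdbrc && mv ~/.duckdbrc ~/.duckdbrc.bak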
MySQL
Setup
The following credentials keys are accepted:
- host (required) -> The hostname / IP of the instance
- user (required) -> The username to access the instance
- database (required) -> The database name of the instance
- schema (optional) -> The default schema to use
- password (optional) -> The password to access the instance
- port (optional) -> The port of the instance. Default is 3306.
Here is an example of setting a connection named MYSQL:
# URL format
export MYSQL="mysql://myuser:mypass@host.ip:3306/mydatabase?tls=skip-verify"
# YAML format
export MYSQL="{type: mysql, host: <host>, user: <user>, database: <database>, password: <password>, port: <port>}"
Oracle
Setup
The following credentials keys are accepted:
- host (required) -> The hostname / IP of the instance
- user (required) -> The username to access the instance
- password (required) -> The password to access the instance
- schema (optional) -> The default schema
- sid (optional) -> The Oracle System ID / Service Name of the instance
- tns (optional) -> The Oracle TNS string of the instance
- port (optional) -> The port of the instance. Default is 1521.
Here is an example of setting a connection named ORACLE:
# URL format
export ORACLE="oracle://myuser:mypass@host.ip:1521/<sid>"
# YAML format
export ORACLE="{type: oracle, host: <host>, user: <user>, sid: <sid>, password: <password>, port:<port>, schema: <schema>}"
Postgres
Setup
The following credentials keys are accepted:
- host (required) -> The hostname / IP of the instance
- user (required) -> The username to access the instance
- database (required) -> The database name of the instance
- schema (optional) -> The default schema to use
- password (optional) -> The password to access the instance
- port (optional) -> The port of the instance. Default is 5432.
- sslmode (optional) -> The sslmode of the instance. Default is disable.
Here is an example of setting a connection named POSTGRES:
# YAML format
export POSTGRES="{type: postgres, host: <host>, user: <user>, database: <database>, password: <password>, port: <port>, sslmode: require}"
# URL
export POSTGRES="postgresql://myuser:mypass@host.ip:5432/mydatabase?sslmode=require"
Redshift
Setup
The following credentials keys are accepted:
- host (required) -> The hostname / IP of the instance
- user (required) -> The username to access the instance
- database (required) -> The database name of the instance
- aws_bucket (required) -> The name of the S3 Bucket for Bulk Loading / Unloading
- aws_access_key_id (required) -> The AWS Access Key ID to access the bucket for Bulk Loading / Unloading
- aws_secret_access_key (required) -> The AWS Secret Key to access the bucket for Bulk Loading / Unloading
- schema (optional) -> The default schema to use when loading
- password (optional) -> The password to access the instance
- port (optional) -> The port of the instance. Default is 5439.
Here is an example of setting a connection named REDSHIFT:
# YAML format
export REDSHIFT="{type: redshift, host: <host>, user: <user>, database: <database>, password: <password>, port: <port>, sslmode: require, aws_bucket: <aws_bucket>, aws_access_key_id: <aws_access_key_id>, aws_secret_access_key: <aws_secret_access_key>}"
export REDSHIFT="{type: redshift, url: 'redshift://myuser:mypass@host.ip:5439/mydatabase', aws_bucket: <aws_bucket>, aws_access_key_id: <aws_access_key_id>, aws_secret_access_key: <aws_secret_access_key>}"
SQLite
Setup
The following credentials keys are accepted:
- instance (required) -> The local file path, HTTP or S3 URL of the database file
- access_key_id (optional) -> The Access Key ID to access the bucket (if providing an S3 URL)
- secret_access_key (optional) -> The Secret Key to access the bucket (if providing an S3 URL)
- endpoint (optional) -> The S3 endpoint hostname for non-AWS providers (if providing an S3 URL). Examples: nyc3.digitaloceanspaces.com, a345678c73f3e8bddd084cb125876543.r2.cloudflarestorage.com, etc.
Here is an example of setting a connection named SQLITE:
# URL format
export SQLITE='sqlite:///path/to/file.db'
# YAML format
export SQLITE="{type: sqlite, instance: /path/to/file.db}"
export SQLITE="{type: sqlite, url: 's3://<bucket>/<key>', access_key_id: '<access_key_id>', secret_access_key: '<secret_access_key>}'"
SQL Server
Setup
The following credentials keys are accepted:
- host (required) -> The hostname / IP of the instance
- user (required) -> The username to access the instance
- database (required) -> The database name of the instance
- schema (optional) -> The default schema to use
- password (optional) -> The password to access the instance
- port (optional) -> The port of the instance. Default is 1433.
Here is an example of setting a connection named MSSQL:
# URL format
export MSSQL="sqlserver://myuser:mypass@host.ip:1433/mydatabase"
# YAML format
export MSSQL="{type: sqlserver, host: <host>, user: <user>, database: <database>, password: <password>, port: <port>}"
Snowflake
Setup
The following credentials keys are accepted:
- host (required) -> The hostname or account of the instance (e.g. pua90768.us-east-11)
- user (required) -> The username to access the instance
- database (required) -> The database name of the instance
- password (required) -> The password to access the instance
- schema (optional) -> The default schema to use
- role (optional) -> The role to access the instance
- warehouse (optional) -> The warehouse to use
- passcode (optional) -> Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login.
- authenticator (optional) -> Specifies the authenticator to use to login.
Here is an example of setting a connection named SNOWFLAKE:
# YAML format
export SNOWFLAKE="{type: snowflake, host: <host>, user: <user>, database: <database>, password: <password>, port: <port>, schema: <schema>, role: <role>}"
# URL format
export SNOWFLAKE="snowflake://myuser:mypass@host.account/mydatabase?schema=<schema>&role=<role>"