SQLSpec is an experimental Python library designed to streamline and modernize your SQL interactions across a variety of database systems. While still in its early stages, SQLSpec aims to provide a flexible, typed, and extensible interface for working with SQL in Python.
Note: SQLSpec is currently under active development and the API is subject to change. It is not yet ready for production use. Contributions are welcome!
Under the hood, SQLSpec uses sqlglot to validate your SQL against specific dialects and seamlessly convert between them. SQLSpec is a work in progress: while it offers a solid foundation for modern SQL interactions, it does not yet include every feature you might find in a mature ORM or database toolkit. The focus is on building a robust, flexible core that can be extended over time.
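For a concrete sense of that dialect handling, here is the underlying sqlglot library used directly (illustration only; this is sqlglot's own API, not SQLSpec's):
import sqlglot

# Parse/validate a statement against a specific dialect (raises ParseError on invalid SQL).
sqlglot.parse_one("SELECT id FROM users WHERE active = 1", read="postgres")

# Transpile between dialects, e.g. DuckDB to Hive.
print(sqlglot.transpile("SELECT EPOCH_MS(1618088028295)", read="duckdb", write="hive")[0])
# SELECT FROM_UNIXTIME(1618088028295 / 1000)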
We've talked about what SQLSpec is not, so let's look at what it can do.
These are just a few examples that demonstrate SQLSpec's flexibility. Each of the bundled adapters offers the same config and driver interfaces.
from sqlspec import SQLSpec
from sqlspec.adapters.sqlite import SqliteConfig
# Create SQLSpec instance and configure database
db_manager = SQLSpec()
config = SqliteConfig(pool_config={"database": ":memory:"})  # Thread-local pooling
db_manager.add_config(config)
# Execute queries with automatic result mapping
with db_manager.provide_session(config) as session:
    # Simple query
    result = session.execute("SELECT 'Hello, SQLSpec!' as message")
    print(result.get_first())  # {'message': 'Hello, SQLSpec!'}

    # Type-safe single row query
    row = session.select_one("SELECT 'Hello, SQLSpec!' as message")
    print(row)  # {'message': 'Hello, SQLSpec!'}
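The async adapters are intended to mirror this pattern. Here is a minimal sketch, assuming provide_session also acts as an async context manager for async configs such as aiosqlite (the Litestar example further below shows the async driver in real use):
import asyncio

from sqlspec import SQLSpec
from sqlspec.adapters.aiosqlite import AiosqliteConfig

async def main() -> None:
    db_manager = SQLSpec()
    config = AiosqliteConfig(pool_config={"database": ":memory:"})
    db_manager.add_config(config)

    # Assumption: provide_session yields an async session for async configs.
    async with db_manager.provide_session(config) as session:
        row = await session.select_one("SELECT 'Hello, async!' AS message")
        print(row)  # {'message': 'Hello, async!'}

asyncio.run(main())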
Warning: The SQL Builder API is highly experimental and will change significantly.
from sqlspec import sql
# Build a simple query
query = sql.select("id", "name", "email").from_("users").where("active = ?")
statement = query.to_statement()
print(statement.sql) # SELECT id, name, email FROM users WHERE active = ?
# More complex example with joins
query = (
sql.select("u.name", "COUNT(o.id) as order_count")
.from_("users u")
.left_join("orders o", "u.id = o.user_id")
.where("u.created_at > ?")
.group_by("u.name")
.having("COUNT(o.id) > ?")
.order_by("order_count", desc=True)
)
# Execute the built query with parameters
with db_manager.provide_session(config) as session:
    results = session.execute(query, "2024-01-01", 5)
SQLSpec supports automatic mapping to typed models using popular libraries:
from sqlspec import SQLSpec
from sqlspec.adapters.sqlite import SqliteConfig
from pydantic import BaseModel
class User(BaseModel):
    id: int
    name: str
    email: str

db_manager = SQLSpec()
config = SqliteConfig(pool_config={"database": ":memory:"})
db_manager.add_config(config)

with db_manager.provide_session(config) as session:
    # Create and populate test data
    session.execute_script("""
        CREATE TABLE users (id INTEGER, name TEXT, email TEXT);
        INSERT INTO users VALUES (1, 'Alice', 'alice@example.com');
    """)

    # Map single result to typed model
    user = session.select_one("SELECT * FROM users WHERE id = ?", 1, schema_type=User)
    print(f"User: {user.name} ({user.email})")

    # Map multiple results
    users = session.select("SELECT * FROM users", schema_type=User)
    for user in users:
        print(f"User: {user.name}")
SQLSpec provides several convenient methods for executing queries:
with db_manager.provide_session(config) as session:
    # Execute any SQL and get full result set
    result = session.execute("SELECT * FROM users")

    # Get single row (raises error if not found)
    user = session.select_one("SELECT * FROM users WHERE id = ?", 1)

    # Get single row or None (no error if not found)
    maybe_user = session.select_one_or_none("SELECT * FROM users WHERE id = ?", 999)

    # Execute with many parameter sets (bulk operations)
    session.execute_many(
        "INSERT INTO users (name, email) VALUES (?, ?)",
        [("Bob", "bob@example.com"), ("Carol", "carol@example.com")],
    )

    # Execute multiple statements as a script
    session.execute_script("""
        CREATE TABLE IF NOT EXISTS logs (id INTEGER, message TEXT);
        INSERT INTO logs (message) VALUES ('System started');
    """)
This is a quick example that uses the built-in secret and extension management features of SQLSpec's DuckDB integration.
It allows you to communicate with any OpenAI-compatible chat completions endpoint (such as Ollama). This example:
- Loads the open_prompt DuckDB extension
- Creates the open_prompt-compatible secret required to use the extension
# /// script
# dependencies = [
# "sqlspec[duckdb,performance]",
# ]
# ///
import os
from sqlspec import SQLSpec
from sqlspec.adapters.duckdb import DuckDBConfig
from pydantic import BaseModel
class ChatMessage(BaseModel):
    message: str

db_manager = SQLSpec()
config = DuckDBConfig(
    pool_config={"database": ":memory:"},
    driver_features={
        "extensions": [{"name": "open_prompt"}],
        "secrets": [
            {
                "secret_type": "open_prompt",
                "name": "open_prompt",
                "value": {
                    "api_url": "http://127.0.0.1:11434/v1/chat/completions",
                    "model_name": "gemma3:1b",
                    "api_timeout": "120",
                },
            }
        ],
    },
)
db_manager.add_config(config)

with db_manager.provide_session(config) as session:
    result = session.select_one(
        "SELECT open_prompt(?)",
        "Can you write a haiku about DuckDB?",
        schema_type=ChatMessage,
    )
    print(result)  # result is a ChatMessage pydantic model
In this example, we are again using DuckDB. However, this time we use the built-in http_client extension to call the Google Gemini embeddings service directly from the database.
This example will:
- Load the http_client and vss (vector similarity search) DuckDB extensions
- Ensure the generate_embedding macro exists in the DuckDB database
# /// script
# dependencies = [
# "sqlspec[duckdb,performance]",
# ]
# ///
import os
from sqlspec import SQLSpec
from sqlspec.adapters.duckdb import DuckDBConfig
EMBEDDING_MODEL = "gemini-embedding-exp-03-07"
GOOGLE_API_KEY = os.environ.get("GOOGLE_API_KEY")
API_URL = (
    f"https://generativelanguage.googleapis.com/v1beta/models/{EMBEDDING_MODEL}:embedContent?key={GOOGLE_API_KEY}"
)
db_manager = SQLSpec()
config = DuckDBConfig(
    pool_config={"database": ":memory:"},
    driver_features={
        "extensions": [{"name": "vss"}, {"name": "http_client"}],
        "on_connection_create": lambda connection: connection.execute(f"""
            CREATE MACRO IF NOT EXISTS generate_embedding(q) AS (
                WITH __request AS (
                    SELECT http_post(
                        '{API_URL}',
                        headers => MAP {{
                            'accept': 'application/json'
                        }},
                        params => MAP {{
                            'model': 'models/{EMBEDDING_MODEL}',
                            'parts': [{{ 'text': q }}],
                            'taskType': 'SEMANTIC_SIMILARITY'
                        }}
                    ) AS response
                )
                SELECT *
                FROM __request
            );
        """),
    },
)
db_manager.add_config(config)
with db_manager.provide_session(config) as session:
result = session.execute("SELECT generate_embedding('example text')")
print(result.get_first()) # result is a dictionary when `schema_type` is omitted.
SQLSpec can load and manage SQL queries from files using aiosql-style named queries:
from sqlspec import SQLSpec
from sqlspec.loader import SQLFileLoader
from sqlspec.adapters.sqlite import SqliteConfig
# Initialize with SQL file loader
db_manager = SQLSpec(loader=SQLFileLoader())
config = SqliteConfig(pool_config={"database": ":memory:"})
db_manager.add_config(config)
# Load SQL files from directory
db_manager.load_sql_files("./sql")
# SQL file: ./sql/users.sql
# -- name: get_user
# SELECT * FROM users WHERE id = ?
#
# -- name: create_user
# INSERT INTO users (name, email) VALUES (?, ?)
with db_manager.provide_session(config) as session:
    # Use named queries from files
    user = session.execute(db_manager.get_sql("get_user"), 1)
    session.execute(db_manager.get_sql("create_user"), "Alice", "alice@example.com")
SQLSpec includes a built-in migration system for managing schema changes. After configuring your database with migration settings, use the CLI commands:
# Initialize migration directory
sqlspec --config myapp.config init
# Generate new migration file
sqlspec --config myapp.config create-migration -m "Add user table"
# Apply all pending migrations
sqlspec --config myapp.config upgrade
# Show current migration status
sqlspec --config myapp.config show-current-revision
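The CLI above expects a config module exposing your SQLSpec setup. The exact migration options are covered in the SQLSpec documentation; a hypothetical myapp/config.py might look roughly like this (the migration_config key and its fields are illustrative assumptions, not confirmed API):
# myapp/config.py -- hypothetical module behind `sqlspec --config myapp.config`.
# The migration options below are assumptions for illustration; check the
# SQLSpec documentation for the real option names.
from sqlspec import SQLSpec
from sqlspec.adapters.sqlite import SqliteConfig

db_manager = SQLSpec()
db_manager.add_config(
    SqliteConfig(
        pool_config={"database": "app.db"},
        migration_config={"script_location": "migrations"},  # assumed option
    )
)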
For Litestar applications, replace sqlspec with your application command:
# Using Litestar CLI integration
litestar database create-migration -m "Add user table"
litestar database upgrade
litestar database show-current-revision
SQLSpec CLI supports tab completion for bash, zsh, and fish shells. Enable it with:
# Bash - add to ~/.bashrc
eval "$(_SQLSPEC_COMPLETE=bash_source sqlspec)"
# Zsh - add to ~/.zshrc
eval "$(_SQLSPEC_COMPLETE=zsh_source sqlspec)"
# Fish - add to ~/.config/fish/completions/sqlspec.fish
eval (env _SQLSPEC_COMPLETE=fish_source sqlspec)
After setup, you can tab-complete commands and options:
sqlspec <TAB> # Shows: create-migration, downgrade, init, ...
sqlspec upgrade --<TAB> # Shows: --bind-key, --help, --no-prompt, ...
See the CLI documentation for complete setup instructions.
In this example, we demonstrate how to create a basic configuration that integrates with Litestar:
# /// script
# dependencies = [
# "sqlspec[aiosqlite]",
# "litestar[standard]",
# ]
# ///
from litestar import Litestar, get
from sqlspec import SQLSpec
from sqlspec.adapters.aiosqlite import AiosqliteConfig, AiosqliteDriver
from sqlspec.extensions.litestar import SQLSpecPlugin
@get("/")
async def simple_sqlite(db_session: AiosqliteDriver) -> dict[str, str]:
    return await db_session.select_one("SELECT 'Hello, world!' AS greeting")
sqlspec = SQLSpec()
sqlspec.add_config(AiosqliteConfig(pool_config={"database": ":memory:"}))
app = Litestar(route_handlers=[simple_sqlite], plugins=[SQLSpecPlugin(sqlspec)])
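To smoke-test the route without starting a server, Litestar's bundled test client can drive the app object directly:
from litestar.testing import TestClient

# Exercise the handler in-process; no running server required.
with TestClient(app=app) as client:
    print(client.get("/").json())  # {'greeting': 'Hello, world!'}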
SQLSpec originally drew inspiration from features found in the aiosql library. This is a great library for working with and executing SQL stored in files. It's unclear how much of an overlap there will be between the two libraries, but it's possible that some features will be contributed back to aiosql where appropriate.
The primary goal at this stage is to establish a native connectivity interface that works seamlessly across all supported database environments. This means you can connect to any of the supported databases using a consistent API, regardless of the underlying driver or dialect.
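To make that concrete, here is the same session code running unchanged against two different engines, using only the APIs shown above:
from sqlspec import SQLSpec
from sqlspec.adapters.duckdb import DuckDBConfig
from sqlspec.adapters.sqlite import SqliteConfig

db_manager = SQLSpec()
sqlite_config = SqliteConfig(pool_config={"database": ":memory:"})
duckdb_config = DuckDBConfig(pool_config={"database": ":memory:"})
db_manager.add_config(sqlite_config)
db_manager.add_config(duckdb_config)

# Identical session API, different engines.
for config in (sqlite_config, duckdb_config):
    with db_manager.provide_session(config) as session:
        print(session.select_one("SELECT 1 AS answer"))  # {'answer': 1}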
This list is not final. If you have a driver you'd like to see added, please open an issue or submit a PR!
Each adapter uses a consistent configuration pattern with pool_config for connection parameters:
# SQLite
SqliteConfig(pool_config={"database": "/path/to/database.db"})
AiosqliteConfig(pool_config={"database": "/path/to/database.db"}) # Async
AdbcConfig(connection_config={"uri": "sqlite:///path/to/database.db"}) # ADBC
# PostgreSQL (multiple drivers available)
PsycopgSyncConfig(pool_config={"host": "localhost", "database": "mydb", "user": "user", "password": "pass"})
PsycopgAsyncConfig(pool_config={"host": "localhost", "database": "mydb", "user": "user", "password": "pass"}) # Async
AsyncpgConfig(pool_config={"host": "localhost", "database": "mydb", "user": "user", "password": "pass"})
PsqlpyConfig(pool_config={"dsn": "postgresql://user:pass@localhost/mydb"})
AdbcConfig(connection_config={"uri": "postgresql://user:pass@localhost/mydb"}) # ADBC
# DuckDB
DuckDBConfig(pool_config={"database": ":memory:"}) # or file path
AdbcConfig(connection_config={"uri": "duckdb:///path/to/database.duckdb"}) # ADBC
# MySQL
AsyncmyConfig(pool_config={"host": "localhost", "database": "mydb", "user": "user", "password": "pass"}) # Async
# Oracle
OracleSyncConfig(pool_config={"host": "localhost", "service_name": "XEPDB1", "user": "user", "password": "pass"})
OracleAsyncConfig(pool_config={"host": "localhost", "service_name": "XEPDB1", "user": "user", "password": "pass"}) # Async
# BigQuery
BigQueryConfig(pool_config={"project": "my-project", "dataset": "my_dataset"})
AdbcConfig(connection_config={"driver_name": "adbc_driver_bigquery", "project_id": "my-project", "dataset_id": "my_dataset"}) # ADBC
| Driver | Database | Mode | Status |
|---|---|---|---|
| adbc | PostgreSQL | Sync | ✅ |
| adbc | SQLite | Sync | ✅ |
| adbc | Snowflake | Sync | ✅ |
| adbc | DuckDB | Sync | ✅ |
| asyncpg | PostgreSQL | Async | ✅ |
| psycopg | PostgreSQL | Sync | ✅ |
| psycopg | PostgreSQL | Async | ✅ |
| psqlpy | PostgreSQL | Async | ✅ |
| aiosqlite | SQLite | Async | ✅ |
| sqlite3 | SQLite | Sync | ✅ |
| oracledb | Oracle | Async | ✅ |
| oracledb | Oracle | Sync | ✅ |
| duckdb | DuckDB | Sync | ✅ |
| bigquery | BigQuery | Sync | ✅ |
| spanner | Spanner | Sync | 🗓️ |
| sqlserver | SQL Server | Sync | 🗓️ |
| mysql | MySQL | Sync | 🗓️ |
| asyncmy | MySQL | Async | ✅ |
| snowflake | Snowflake | Sync | 🗓️ |
sqlspec/:
- adapters/: Database-specific drivers and configuration classes for all supported databases
- extensions/: Framework integrations and external library adapters
  - litestar/: Litestar web framework integration with dependency injection ✅
  - aiosql/: Integration with aiosql for SQL file loading ✅
  - fastapi/, flask/, etc.
- builder/: Fluent SQL query builder with method chaining and type safety
  - mixins/: Composable query building operations (WHERE, JOIN, ORDER BY, etc.)
- core/: Core query processing infrastructure
  - statement.py: SQL statement wrapper with metadata and type information
  - parameters.py: Parameter style conversion and validation
  - result.py: Result set handling and type mapping
  - compiler.py: SQL compilation and validation using SQLGlot
  - cache.py: Statement caching for performance optimization
- driver/: Base driver system with sync/async support and transaction management
  - mixins/: Shared driver capabilities (result processing, SQL translation)
- migrations/: Database migration system with CLI commands
- storage/: Unified data import/export operations with multiple backends
  - backends/: Storage backend implementations (fsspec, obstore)
- utils/: Utility functions, type guards, and helper tools
- base.py: Main SQLSpec registry and configuration manager
- loader.py: SQL file loading system for .sql files
- cli.py: Command-line interface for migrations and database operations
- config.py: Base configuration classes and protocols
- protocols.py: Type protocols for runtime type checking
- exceptions.py: Custom exception hierarchy for SQLSpec
- typing.py: Type definitions, guards, and optional dependency facades

SQLSpec is an open-source project, and contributions are welcome! Whether you're interested in adding support for new databases, improving the query interface, or simply providing feedback, your input is valuable.
Disclaimer: SQLSpec is under active development. Expect changes and improvements as the project evolves.