So I have found a workaround: use pymssql instead of pyodbc (both in the import statement and in the engine). It lets you qualify tables with database names in your joins without specifying a database in the engine URL, and there is no need to specify a driver in this case.

There might be a problem if you are using Python 3.6, which is not officially supported by pymssql yet, but you can find unofficial wheels for Python 3.6 here. It works as it is supposed to with my queries.

Here is the original code with joins, rebuilt to work with pymssql:

import pandas as pd
import sqlalchemy as sql
import pymssql  # not used directly, but makes sure the DBAPI driver is installed

server = '100.10.10.10'
# Tables are fully qualified as database.schema.table, so no database
# needs to be set on the engine itself
myQuery = '''SELECT first.Field1, second.Field2
           FROM db1.schema.Table1 AS first
           JOIN db2.schema.Table2 AS second
           ON first.Id = second.FirstId'''
engine = sql.create_engine('mssql+pymssql://{}'.format(server))
df = pd.read_sql_query(myQuery, engine)
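If your server requires SQL authentication, the credentials go into the same URL. A minimal sketch with hypothetical credentials (no connection is made here; the string is only assembled):

```python
# Hypothetical credentials, for illustration only
user = 'sa'
password = 'secret'
server = '100.10.10.10'

# General form of a pymssql engine URL with SQL authentication
url = 'mssql+pymssql://{}:{}@{}'.format(user, password, server)
print(url)  # mssql+pymssql://sa:secret@100.10.10.10
```

Without credentials, as in the snippet above, pymssql falls back to whatever authentication the server accepts for your environment.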

As for the unofficial wheels, download the file for Python 3.6 from the link above, cd to the download folder, and run pip install <filename>.whl, where <filename>.whl is the name of the downloaded wheel file.

UPDATE:

Actually, it is possible to use pyodbc too. I am not sure whether this works for every SQL Server setup, but everything worked for me once I set 'master' as the database in the engine. The resulting code looks like this:

import pandas as pd
import sqlalchemy as sql
import pyodbc  # not used directly, but makes sure the DBAPI driver is installed

server = '100.10.10.10'
driver = 'SQL+Server'  # '+' is a URL-encoded space: the ODBC driver name is 'SQL Server'
db = 'master'          # connecting through 'master' still allows fully qualified joins
myQuery = '''SELECT first.Field1, second.Field2
           FROM db1.schema.Table1 AS first
           JOIN db2.schema.Table2 AS second
           ON first.Id = second.FirstId'''
engine = sql.create_engine('mssql+pyodbc://{}/{}?driver={}'.format(server, db, driver))
df = pd.read_sql_query(myQuery, engine)
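The '+' in 'SQL+Server' is just the URL encoding of the space in the ODBC driver name. Rather than encoding by hand, you can let urllib.parse.quote_plus do it, which helps for longer driver names (the driver name below is a common one, but check which drivers your machine actually has installed):

```python
from urllib.parse import quote_plus

# The ODBC driver name contains spaces, which must appear URL-encoded
# ('+') inside the engine URL; quote_plus does the encoding for you
driver = quote_plus('ODBC Driver 17 for SQL Server')
url = 'mssql+pyodbc://{}/{}?driver={}'.format('100.10.10.10', 'master', driver)
print(driver)  # ODBC+Driver+17+for+SQL+Server
```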
Answer from Sergey Zakharov on Stack Overflow
A second answer from the same Stack Overflow thread:
The following code works for me. I am using SQL Server with a plain pyodbc connection (no SQLAlchemy engine):

import pyodbc
import pandas as pd

# Braces around the driver name are the conventional pyodbc syntax;
# replace the placeholder values with your own server details
cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
                      'SERVER=your_db_server_id,your_db_server_port;'
                      'DATABASE=pangard;'
                      'UID=your_db_username;'
                      'PWD=your_db_password')
query = "SELECT * FROM database.tablename;"
df = pd.read_sql(query, cnxn)
print(df)
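Note that pd.read_sql accepts any DB-API connection object, not only SQLAlchemy engines. A self-contained sketch with an in-memory SQLite database standing in for the real SQL Server (table and column names are made up for the demo):

```python
import sqlite3

import pandas as pd

# In-memory SQLite stands in for a real database server here
cnxn = sqlite3.connect(':memory:')
cnxn.execute('CREATE TABLE tablename (Field1 TEXT, Field2 INTEGER)')
cnxn.execute("INSERT INTO tablename VALUES ('a', 1), ('b', 2)")

# Same read_sql call as above, just with a different DB-API connection
df = pd.read_sql('SELECT * FROM tablename;', cnxn)
print(len(df))  # 2
```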