It seems that your data are measured with resolution 0.1 and that the range is at least 18.7. My guess given the mention of "weather" is that they are Celsius temperatures.

Let's guess that the variable has a range of 50 in those units: the tails beyond the quartiles are often longer than the distance between the quartiles. That would mean of the order of 500 distinct values.

It seems that your sample size is of the order of 500000, so on average each distinct value occurs about 1000 times, and ties are everywhere.

It's also entirely possible that your data are quirkier than that if human readings are involved. Many observers use some final digits rather than others, although the quirks can vary, including preferences for 0 and 5 as final digits or for even digits.

Ties are likely to be the issue, together with a rule that the same values must be assigned to the same bin.
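
As an illustration of how such ties interact with quantile binning, here is a minimal sketch with made-up readings (not the poster's data): pd.qcut fails outright when heavy ties make quantile edges coincide, unless duplicate edges are dropped.

```python
import pandas as pd

# Made-up readings at 0.1 resolution with heavy ties: half the sample
# sits on a single value, so two quartile edges coincide.
temps = pd.Series([20.0] * 500 + [20.5] * 300 + [21.0] * 200)

try:
    pd.qcut(temps, q=4)  # identical values must share a bin
except ValueError as err:
    print("qcut failed:", err)  # "Bin edges must be unique"

# duplicates="drop" merges the coinciding edges instead of failing,
# at the cost of getting fewer bins than requested.
binned = pd.qcut(temps, q=4, duplicates="drop")
print(binned.value_counts())  # 3 bins instead of 4
```

The same effect is why equal-frequency binning of such data can never produce equal counts: every copy of a tied value lands in the same bin.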

Answer from Nick Cox on Stack Exchange
Joseantunes (joseantunes.tech)
Use psycopg2 to query into a DataFrame · Jose Antunes
September 7, 2018

import pandas.io.sql as sqlio

SQL_QUERY = "SELECT * FROM test_table WHERE id = ANY(%s)"
test_ids = [1, 2, 3]
result_df = sqlio.read_sql_query(SQL_QUERY, conn, params=(test_ids,))
Stack Overflow (stackoverflow.com)
python - How to execute SQL query with parameters in Pandas - Stack Overflow
import pandas.io.sql as sqlio

def getAnalysisMetaStatsDF(self):
    session = self.connection()
    ids = self.getAnalysisIds()  # this is a list of integers
    data = sqlio.read_sql_query("Select * from analysis_stats where analysis_id in %s",
                                [tuple(ids)], session)
    print(data)
Top answer
1 of 3
27

Break this up into three parts to help isolate the problem and improve readability:

  1. Build the SQL string
  2. Set parameter values
  3. Execute pandas.read_sql_query

Build SQL

First ensure the ? placeholders are being generated correctly. Use str.format with str.join and len to fill in one ? per element of member_list. The examples below assume member_list has 3 elements.

Example

member_list = (1,2,3)
sql = """select member_id, yearmonth
         from queried_table
         where yearmonth between {0} and {0}
         and member_id in ({1})"""
sql = sql.format('?', ','.join('?' * len(member_list)))
print(sql)

Returns

select member_id, yearmonth
from queried_table
where yearmonth between ? and ?
and member_id in (?,?,?)

Set Parameter Values

Now ensure parameter values are organized into a flat tuple

Example

# generator to flatten values of irregular nested sequences,
# modified from answers at http://stackoverflow.com/questions/952914/making-a-flat-list-out-of-list-of-lists-in-python
# (note: works for numbers; a string element would recurse forever)
def flatten(l):
    for el in l:
        try:
            yield from flatten(el)
        except TypeError:
            yield el

params = tuple(flatten((201601, 201603, member_list)))
print(params)

Returns

(201601, 201603, 1, 2, 3)

Execute

Finally, bring the sql and params values together in the read_sql_query call. Note that params must be passed by keyword, because the third positional parameter is index_col:

query = pd.read_sql_query(sql, db2conn, params=params)
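
The three steps can be exercised end-to-end with an in-memory SQLite database standing in for db2conn (the table name and columns are taken from the example above; the data are made up):

```python
import sqlite3
import pandas as pd

member_list = (1, 2, 3)

# 1. Build the SQL string: one ? per member_list element
sql = """select member_id, yearmonth
         from queried_table
         where yearmonth between ? and ?
         and member_id in ({})""".format(",".join("?" * len(member_list)))

# 2. Flatten the parameter values into one tuple, in placeholder order
params = (201601, 201603) + member_list

# 3. Execute against a throwaway in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("create table queried_table (member_id int, yearmonth int)")
conn.executemany("insert into queried_table values (?, ?)",
                 [(1, 201601), (2, 201602), (3, 201604), (4, 201602)])

df = pd.read_sql_query(sql, conn, params=params)
print(df)  # members 1 and 2 match; 3 is out of range, 4 is not in the list
```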
2 of 3
15

WARNING! Although my proposed solution here works, it is prone to SQL injection attacks. Therefore, it should never be used directly in backend code! It is only safe for offline analysis.

If you're using Python 3.6+ you could also use a formatted string literal for your query (see https://docs.python.org/3/whatsnew/3.6.html#whatsnew36-pep498):

start, end = 201601, 201603
selected_members = (111, 222, 333, 444, 555)  # requires to be a tuple

query = f"""
    SELECT member_id, yearmonth FROM queried_table
    WHERE yearmonth BETWEEN {start} AND {end}
      AND member_id IN {selected_members}
"""

df = pd.read_sql_query(query, db2conn)
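
A safer variant keeps the f-string for the placeholders only and sends the values through params, so nothing user-controlled is interpolated into the SQL text. This is a sketch against SQLite (which uses ? placeholders; the style varies by driver), not the answerer's actual database:

```python
import sqlite3
import pandas as pd

start, end = 201601, 201603
selected_members = (111, 222, 333)

conn = sqlite3.connect(":memory:")
conn.execute("create table queried_table (member_id int, yearmonth int)")
conn.executemany("insert into queried_table values (?, ?)",
                 [(111, 201602), (222, 201605), (999, 201602)])

# Only the placeholder list is formatted in; the values travel separately.
placeholders = ",".join("?" * len(selected_members))
query = f"""
    SELECT member_id, yearmonth FROM queried_table
    WHERE yearmonth BETWEEN ? AND ?
      AND member_id IN ({placeholders})
"""
df = pd.read_sql_query(query, conn, params=(start, end) + selected_members)
print(df)  # only member 111 matches both conditions
```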
Medium (yiruchen1993.medium.com)
Pandas to PostgreSQL. Write a pandas DataFrame to a SQL… | by imflorachen | Medium
November 24, 2020

# DB table to df
query_sql = "SELECT * FROM %s;" % 'mytable'
table_data = sqlio.read_sql_query(query_sql, postgreSQLConnection)
Pandas (pandas.pydata.org)
pandas.read_sql_query — pandas 3.0.1 documentation
>>> from sqlalchemy import create_engine
>>> engine = create_engine("sqlite:///database.db")
>>> sql_query = "SELECT int_column FROM test_data"
>>> with engine.connect() as conn, conn.begin():
...     data = pd.read_sql_query(sql_query, conn)
MSSQLTips (mssqltips.com)
Benchmarking SQL Server IO with SQLIO
October 4, 2010 - SQL Server does random reads when doing bookmark lookups or when reading from fragmented tables. Here are the tests that I ran: sqlio -dL -BH -kR -frandom -t1 -o1 -s90 -b64 testfile.dat sqlio -dL -BH -kR -frandom -t2 -o1 -s90 -b64 testfile.dat sqlio -dL -BH -kR -frandom -t4 -o1 -s90 -b64 ...
GitHub (gist.github.com)
Read SQL query from psycopg2 into pandas dataframe · GitHub
import pandas as pd
import psycopg2

with psycopg2.connect("host='{}' port={} dbname='{}' user={} password={}".format(host, port, dbname, username, pwd)) as conn:
    sql = "select count(*) from table;"
    dat = pd.read_sql_query(sql, conn)
Brent Ozar Unlimited® (brentozar.com)
Finding Your SAN Bottlenecks with SQLIO
February 13, 2017 - SQLIO isn’t CPU-bound at all, and you can use more threads than you have processors. The more load we throw at storage, the faster it goes – to a point. ... -b8 and -b64: the size of our IO requests in kilobytes. SQL Server does a lot of random stuff in 8KB chunks, and we’re also testing sequential stuff in 64KB chunks. -frandom and -fsequential: random versus sequential access. Many queries ...
Spark By {Examples} (sparkbyexamples.com)
Pandas Read SQL Query or Table with Examples - Spark By {Examples}
December 2, 2024 - Pandas read_sql() function is used to read data from SQL queries or database tables into DataFrame. This function allows you to execute SQL queries and
Red Gate Software (red-gate.com)
The SQL Server Sqlio Utility | Simple Talk
August 24, 2021 - The next step is to define a set of sqlio commands that use a variety of I/O sizes and types to test each I/O path. Note, however, that you’re not trying to simulate SQL Server I/O patterns. Instead, you’re trying to determine your I/O subsystem’s capacity. That means running tests for both read ...
Top answer
1 of 3
10

You need to use the params keyword argument:

f = pd.read_sql_query('SELECT open FROM NYSEMSFT WHERE date = (?)', conn, params=(date,))
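
A runnable sketch of that call, with the NYSEMSFT table mocked up in an in-memory SQLite database (the table and column names come from the answer; the data are invented):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("create table NYSEMSFT (date text, open real)")
conn.executemany("insert into NYSEMSFT values (?, ?)",
                 [("2016-12-25", 62.3), ("2016-12-26", 63.1)])

date = "2016-12-25"
# params must be passed by keyword; a bare third positional argument
# would be interpreted as index_col
f = pd.read_sql_query("SELECT open FROM NYSEMSFT WHERE date = (?)",
                      conn, params=(date,))
print(f)  # a single row with open = 62.3
```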
2 of 3
5

As @alecxe and @Ted Petrou have already said, use explicit parameter names, especially for params: it is the fifth parameter of the pd.read_sql_query() function, and you passed it as the third one (which is index_col).

But besides that, you can improve your code by getting rid of the for date in dates: loop using the following trick:

import sqlite3

dates=['2001-01-01','2002-02-02']
qry = 'select * from aaa where open in ({})'

conn = sqlite3.connect(r'D:\temp\.data\a.sqlite')

df = pd.read_sql(qry.format(','.join(list('?' * len(dates)))), conn, params=dates)

Demo:

Source SQLite table:

sqlite> .mode column
sqlite> .header on
sqlite> select * from aaa;
open
----------
2016-12-25
2001-01-01
2002-02-02

Test run:

In [40]: %paste
dates=['2001-01-01','2002-02-02']
qry = 'select * from aaa where open in ({})'
conn = sqlite3.connect(r'D:\temp\.data\a.sqlite')

df = pd.read_sql(qry.format(','.join(list('?' * len(dates)))), conn, params=dates)
## -- End pasted text --

In [41]: df
Out[41]:
         open
0  2001-01-01
1  2002-02-02

Explanation:

In [35]: qry = 'select * from aaa where open in ({})'

In [36]: ','.join(list('?' * len(dates)))
Out[36]: '?,?'

In [37]: qry.format(','.join(list('?' * len(dates))))
Out[37]: 'select * from aaa where open in (?,?)'

In [38]: dates.append('2003-03-03')   # <-- let's add a third parameter

In [39]: qry.format(','.join(list('?' * len(dates))))
Out[39]: 'select * from aaa where open in (?,?,?)'
ProgramCreek (programcreek.com)
Python Examples of pandas.read_sql_query
def test_nan_fullcolumn(self):
    # full NaN column (numeric float column)
    df = DataFrame({'A': [0, 1, 2], 'B': [np.nan, np.nan, np.nan]})
    df.to_sql('test_nan', self.conn, index=False)

    # with read_table
    result = sql.read_sql_table('test_nan', self.conn)
    tm.assert_frame_equal(result, df)

    # with read_sql -> no type info from table -> stays None
    df['B'] = df['B'].astype('object')
    df['B'] = None
    result = sql.read_sql_query('SELECT * FROM test_nan', self.conn)
    tm.assert_frame_equal(result, df)
Top answer
1 of 2
47

Backgrounds:

When using sqlalchemy with pandas' read_sql_query(query, con) method, pandas creates a SQLDatabase object and runs the query via self.connectable.execute(query). SQLDatabase.connectable is initialized from con as long as con is an instance of sqlalchemy.engine.Connectable (i.e. an Engine or a Connection).

Case I: when passing Engine object as con

Just as example code in your question:

from sqlalchemy import create_engine
import pandas as pd
engine = create_engine('...')
df = pd.read_sql_query(query, con=engine)

Internally, pandas just use result = engine.execute(query), which means:

Where above, the execute() method acquires a new Connection on its own, executes the statement with that object, and returns the ResultProxy. In this case, the ResultProxy contains a special flag known as close_with_result, which indicates that when its underlying DBAPI cursor is closed, the Connection object itself is also closed, which again returns the DBAPI connection to the connection pool, releasing transactional resources.

In this case, you don't have to worry about the Connection itself, which is closed automatically, but the engine's connection pool is kept alive.

So you can either disable pooling by using:

engine = create_engine('...', poolclass=NullPool)

or dispose the engine entirely with engine.dispose() at the end.

But following the Engine Disposal doc (the last paragraph), these two are alternatives; you don't have to use them at the same time. So in this case, for a simple one-time use of read_sql_query plus clean-up, I think this should be enough:

# Clean up entirely after every query.
engine = create_engine('...')
df = pd.read_sql_query(query, con=engine)
engine.dispose()

Case II: when passing Connection object as con:

connection = engine.connect()
print(connection.closed) # False
df = pd.read_sql_query(query, con=connection)
print(connection.closed) # False again
# do_something_else(connection)
connection.close()
print(connection.closed) # True
engine.dispose()

You should do this whenever you want greater control over attributes of the connection, when it gets closed, etc. A very important example of this is a Transaction, which lets you decide when to commit your changes to the database. (from this answer)

But with pandas we have no control inside read_sql_query, so the only benefit of passing a Connection is that it lets you do more useful things with it before we explicitly close it.


So generally speaking:

I think I would use the following pattern, which gives me more control over the connections and leaves room for future extension:

engine = create_engine('...')
# Context manager makes sure the `Connection` is closed safely and implicitly
with engine.connect() as conn:
    df = pd.read_sql_query(query, conn)
    print(conn.in_transaction()) # False
    # do_something_with(conn)
    trans = conn.begin()
    print(conn.in_transaction()) # True
    # do_whatever_with(trans)
    print(conn.closed) # False
print('Is Connection with-OUT closed?', conn.closed) # True
engine.dispose()

But for simple usage such as your example code, I think both ways are equally clean and simple for cleaning up DB IO resources.

2 of 2
2

I have tested this, and even after the connection is closed (connection.close()), it is still present in the database's sys.sysprocesses table for the rest of the script's execution. So if the script runs for another 10 minutes after closing the connection, the connection remains visible in sys.sysprocesses for those 10 minutes.

I think it is worth drawing attention to this fact: the connection is closed, yes, but the process in the database is not.

Here are some scripts I used for testing:

import pandas as pd
from time import sleep
from sqlalchemy import create_engine
from sqlalchemy.engine import URL

sql = "select * from tbltest"
s_con = '...' #connection information

con_url = URL.create("mssql+pyodbc", query={"odbc_connect": s_con})
engine = create_engine(con_url)

with engine.connect() as con:
    frame = pd.read_sql(sql=sql, con=con)
    print(con.closed) # False

print(con.closed) # True
engine.dispose()

sleep(20) # Pause for 20 seconds to launch the query with SSMS

Use of SSMS

Query to check the connection:
SELECT * FROM sys.sysprocesses