Yes, at the end of the day it will be committed automatically.

Pandas calls the SQLAlchemy executemany method (for SQLAlchemy connections):

conn.executemany(self.insert_statement(), data_list)

And according to the SQLAlchemy docs, executemany issues a commit at the end.

For SQLite connections, commit is called explicitly:

def run_transaction(self):
    cur = self.con.cursor()
    try:
        yield cur              # hand the cursor to the caller's block
        self.con.commit()      # commit only if that block succeeded
    except:
        self.con.rollback()    # undo partial work, then re-raise
        raise
    finally:
        cur.close()
Answer from MaxU - stand with Ukraine on Stack Overflow
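That commit-on-success / rollback-on-error logic can be reproduced as a standalone context manager. A minimal sketch using only the stdlib; run_transaction here is a free function taking the connection, not the pandas-internal method, and the table t is made up:

```python
import sqlite3
from contextlib import contextmanager

# Standalone sketch of the pattern above (not the actual pandas code).
@contextmanager
def run_transaction(con):
    cur = con.cursor()
    try:
        yield cur
        con.commit()      # commit only if the with-block succeeded
    except Exception:
        con.rollback()    # undo partial work on any error
        raise
    finally:
        cur.close()

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.commit()

with run_transaction(con) as cur:
    cur.execute("INSERT INTO t VALUES (1)")

print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # prints 1
```

If the body of the with-block raises, the insert is rolled back and the exception propagates, which is exactly the behaviour pandas relies on for SQLite writes.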
Wordpress
capelastegui.wordpress.com › 2018 › 05 › 21 › commit-and-rollback-with-pandas-dataframe-to_sql
Commit and rollback with pandas.DataFrame.to_sql() – Software development notes
May 21, 2018 - In our team, we were previously working under the assumption that this is not supported by pandas.DataFrame.to_sql(). Because of this, we implemented functions to write to the database with rollback support, like the following:

def insert_df_to_table(engine, table, df, schema, session=None, commit=False):
    # Inserts dataframe to database table.
    # If no session has been created, set up a new one and commit the transaction.
    if not session:
        sm = sessionmaker(bind=engine)
        session = sm()
        commit = True
    metadata = MetaData(bind=engine)
    datatable = Table(table, metadata, schema=schema, autoload=True)
    list_of_dicts = df.to_dict(orient='records')
    try:
        session.execute(datatable.insert(), list_of_dicts)
        if commit:
            session.commit()
        # else: leave it to the parent procedure to commit
    except Exception as e:
        if commit:
            session.rollback()
        # else: leave it to the parent procedure to rollback
        raise
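The commit-deferred-to-the-caller idea in that helper can be sketched without SQLAlchemy, using stdlib sqlite3 (insert_rows and the table t are hypothetical names, not from the post):

```python
import sqlite3

def insert_rows(con, rows, commit=False):
    # If commit is False, leave commit/rollback to the calling code,
    # so several inserts can share one transaction.
    try:
        con.executemany("INSERT INTO t VALUES (?)", rows)
        if commit:
            con.commit()
    except Exception:
        if commit:
            con.rollback()
        raise

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.commit()

insert_rows(con, [(1,), (2,)], commit=True)
print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # prints 2
```

A parent procedure that wants several such calls to be atomic would pass commit=False to each and call con.commit() (or con.rollback()) itself once at the end.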
Discussions

sql server - pandas.DataFrame.to_sql inserts data, but doesn't commit the transaction (Stack Overflow)
"I'm able to commit changes using a pyodbc connection and a full insert statement; however, pandas.DataFrame.to_sql() with a SQLAlchemy engine doesn't work."

Pandas to_sql gives no errors but isn't inserting into a table (r/learnpython, October 6, 2021)
"Did you do a commit?"

Rollback for pandas.DataFrame.to_sql? (r/Python, April 6, 2016)
"Not sure that you can. Maybe do a read_sql to store the original into a DataFrame, put it in a SQLite db, so if something goes wrong you can read it back into a DataFrame and do another to_sql to rewrite the data you originally modified. That's the way I do it - there is probably a better way, though."

sql - Airflow + pandas read_sql_query() with commit (Stack Overflow)
"Can I commit a SQL transaction to a DB using read_sql()? I have a use case where I want to allow users to execute some predefined SQL and have a pandas DataFrame returned."
GitHub
github.com › pandas-dev › pandas › issues › 32933
pandas to_sql(if_exists='replace') should not commit after the drop query · Issue #32933 · pandas-dev/pandas
March 23, 2020 - It is hard to generate reproducible code for this problem, as it would need parallel processes reading from a table while another process updates that table frequently. The issue is that to_sql with if_exists='replace' carries out the two instructions separately and commits them serially into the database.
Author   ma7555
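The behaviour the issue describes can be mimicked with stdlib sqlite3: if the DROP is committed separately from the re-CREATE, a concurrent reader can observe a window in which the table does not exist. A sketch under that assumption (file path and table name are made up):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")
w = sqlite3.connect(path)
w.execute("CREATE TABLE t (x INTEGER)")
w.commit()

# to_sql(if_exists='replace') effectively does this as two serial commits:
w.execute("DROP TABLE t")
w.commit()                      # commit 1: the table is now gone...

r = sqlite3.connect(path)       # ...and a concurrent reader sees the gap
missing = r.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE name='t'").fetchone()[0]

w.execute("CREATE TABLE t (x INTEGER)")
w.commit()                      # commit 2: the table is back

print(missing)  # prints 0 - the reader found no table between the commits
```

Wrapping both steps in a single transaction, as the issue requests, would close that window.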
Top answer (1 of 2):

I had a similar problem when trying to write with df.to_sql (from pandas) using a SQLAlchemy engine created with mssql+pymssql:

sqlalchemy.exc.OperationalError: (pymssql._pymssql.OperationalError) Cannot commit transaction: (3902, b'The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.DB-Lib error message 20018, severity 16:\nGeneral SQL Server error: Check messages from the SQL Server\n')

Turns out that the issue had to do with properly managing query commits and connection closing. The easiest way to manage this was by using SQLAlchemy's built-in compatibility with Python's with statement (context managers):

# TODO: make username dynamic
SQL_CONNECTION = sqlalchemy.create_engine(
    'mssql+pymssql://' + SQL_USERNAME + ':' + qp(SQL_PASSWORD)
    + '@' + SQL_SERVER + '/' + SQL_DB
)
with SQL_CONNECTION.connect() as connection:
    with connection.begin():
        df.to_sql(SQL_TABLE, connection, schema='dbo', if_exists='replace')
Second answer (2 of 2):

I had the same issue; I realised you need to tell pyodbc which database you want to use. For me the default was master, so my data ended up there.

There are two ways you can do this, either:

connection.execute("USE <dbname>")

Or define the schema in the df.to_sql():

df.to_sql(name='<TABLENAME>', con=connection, schema='<dbname>.dbo')

In my case the schema was <dbname>.dbo. I think .dbo is the default, so it could be something else if you define an alternative schema.

This was referenced in this answer; it took me a bit longer to realise what the schema name should be.

Pandas
pandas.pydata.org › docs › reference › api › pandas.DataFrame.to_sql.html
pandas.DataFrame.to_sql — pandas 3.0.2 documentation
Using SQLAlchemy makes it possible to use any DB supported by that library. Legacy support is provided for sqlite3.Connection objects. The user is responsible for engine disposal and connection closure for the SQLAlchemy connectable. See here. If passing a sqlalchemy.engine.Connection which is already in a transaction, the transaction will not be committed...
Data to Fish
datatofish.com › pandas-dataframe-to-sql
Pandas DataFrame to SQL
# Connect to db
conn = sqlite3.connect(desktop_path + '/fish_db')

# Create a cursor object which allows you to read from and write to the db
c = conn.cursor()

# Stage the table creation. Need to commit to "save" the table in the db.
c.execute('''
CREATE TABLE IF NOT EXISTS fishes ([fish_name] TEXT, [egg_count] INTEGER)
''')

# Commit staged actions
conn.commit()

Suppose you want to import the following DataFrame: import pandas ...
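A quick way to confirm that the staged table was actually persisted is to query sqlite_master after the commit (a sketch using an in-memory database in place of the fish_db file):

```python
import sqlite3

# In-memory database stands in for the fish_db file in the snippet above
con = sqlite3.connect(":memory:")
c = con.cursor()
c.execute('''
CREATE TABLE IF NOT EXISTS fishes ([fish_name] TEXT, [egg_count] INTEGER)
''')
con.commit()  # commit the staged table creation

# sqlite_master lists every table the database actually contains
c.execute("SELECT name FROM sqlite_master WHERE type='table'")
print(c.fetchall())  # prints [('fishes',)]
```

The same check works against a file-backed database from a second connection, which is the more convincing test that the commit really happened.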
Reddit
reddit.com › r/learnpython › pandas to_sql gives no errors but isn’t inserting in to a table.
r/learnpython on Reddit: Pandas to_sql gives no errors but isn’t inserting in to a table.
October 6, 2021 -

It connects (I can read back data) but it isn’t inserting. It also gives no errors. Using sqlalchemy and pymssql as a driver. My first thought was to set autocommit to true but that hasn’t fixed it.

Any ideas?
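For comparison, with the stdlib sqlite3 driver autocommit corresponds to isolation_level=None. Whether the SQLAlchemy/pymssql stack honours its autocommit flag is a separate question, but the concept the poster is reaching for looks like this (sketch; the file path is made up):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "auto.db")

# isolation_level=None puts sqlite3 in autocommit mode:
# every statement is persisted immediately, no commit() needed.
con = sqlite3.connect(path, isolation_level=None)
con.execute("CREATE TABLE t (x INTEGER)")
con.execute("INSERT INTO t VALUES (1)")

# A separate connection sees the row without any explicit commit
other = sqlite3.connect(path)
print(other.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # prints 1
```

If a driver is not actually in autocommit mode, the second connection would see zero rows until commit() is called, which matches the "no errors but nothing inserted" symptom.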

Reddit
reddit.com › r/python › rollback for pandas.dataframe.to_sql?
r/Python on Reddit: Rollback for pandas.DataFrame.to_sql?
April 6, 2016 -

I'm trying to execute a set of SQLAlchemy commands (a delete and an insert) and then finally write a pandas.DataFrame to the DB. I call to_sql for that.

I need a way to roll it all back if anything goes wrong. The SQLAlchemy commands are easy since they can be contained within a transaction. Not so with to_sql.

pandas.DataFrame.to_sql takes a named argument con of type SQLAlchemy engine. Presumably, it creates its own connection.

Here's essentially what I'm doing...

connection = engine.connect()
transaction = connection.begin()
try:
    delete(my_table, clause).execute()
    my_dataframe.to_sql('another_table', connection)
    transaction.commit()
except:
    transaction.rollback()

How do I rollback to_sql?

Top answer (1 of 2):

I had a similar use case -- load data into SQL Server with Pandas, call a stored procedure that does heavy lifting and writes to tables, then capture the result set into a new DataFrame.

I solved it by using a context manager and explicitly committing the transaction:

# Connect to SQL Server
engine = sqlalchemy.create_engine('db_string')
with engine.connect() as connection:
    # Write dataframe to table with replace
    df.to_sql(name='myTable', con=connection, if_exists='replace')

    with connection.begin() as transaction:
        # Execute verification routine and capture results
        df_processed = pandas.read_sql(sql='exec sproc', con=connection)
        transaction.commit()
Second answer (2 of 2):

read_sql won't commit because, as the method name implies, its goal is to read data, not write it. That's a good design choice by pandas: it prevents accidental writes and allows interesting scenarios, such as running a procedure and reading its effects without anything being persisted. read_sql's intent is to read, not to write, and expressing intent directly is a gold-standard principle.

A more explicit way to express your intent would be to execute (with commit) explicitly before fetchall. But because pandas offers no simple way to read from a cursor object, you would lose the peace of mind provided by read_sql and have to create the DataFrame yourself.

So, all in all, your solution is fine: by setting autocommit=True you're indicating that your database interactions will persist whatever they do, so there should be no accidents. It's a bit weird to read, but if you named your sql_template variable something like write_then_read_sql, or explained the behaviour in a docstring, the intent would be clearer.
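"Create the DataFrame yourself" from a cursor, as mentioned above, looks roughly like this (stdlib sqlite3 sketch; with pandas available, the last step would be pd.DataFrame(rows, columns=cols)):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fishes (fish_name TEXT, egg_count INTEGER)")
con.execute("INSERT INTO fishes VALUES ('carp', 12)")
con.commit()  # commit explicitly before reading back

cur = con.cursor()
cur.execute("SELECT fish_name, egg_count FROM fishes")
cols = [d[0] for d in cur.description]  # column names come from the cursor
rows = cur.fetchall()                   # list of row tuples
# With pandas this would be: df = pd.DataFrame(rows, columns=cols)
print(cols, rows)  # prints ['fish_name', 'egg_count'] [('carp', 12)]
```

This is the explicit execute-commit-fetch sequence the answer describes: you control exactly when the write is persisted, at the cost of assembling the frame by hand.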

Stack Overflow
stackoverflow.com › questions › 64375454 › pandas-to-sql-with-transactions
python - Pandas to_sql with transactions - Stack Overflow
October 15, 2020 -

import pandas as pd
import sqlalchemy

data = {'col1': ['val1', 'val2'], 'col2': ['val3', 'val4']}
df = pd.DataFrame(data, columns=['col1', 'col2'])

engine_pg = sqlalchemy.create_engine('postgresql://user:pass@localhost:5432/postgres')
engine_ora = sqlalchemy.create_engine('oracle://user:pass@localhost:1521/orcl', max_identifier_length=128)

conn_pg = engine_pg.connect()
conn_ora = engine_ora.connect()

trans_pg = conn_pg.begin()
trans_ora = conn_ora.begin()

try:
    df.to_sql('table', conn_pg, schema='public', if_exists='replace', index=False)
    df.to_sql('table', conn_ora, schema='user', if_exists='replace', index=False)
    trans_pg.commit()   # 1
    trans_ora.commit()  # 2
except Exception:
    trans_pg.rollback()
    trans_ora.rollback()
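The two-transaction shape of that snippet, commit both databases only after both writes succeed, can be sketched with two stdlib sqlite3 connections standing in for the Postgres and Oracle engines:

```python
import sqlite3

con_a = sqlite3.connect(":memory:")  # stands in for the Postgres engine
con_b = sqlite3.connect(":memory:")  # stands in for the Oracle engine
for con in (con_a, con_b):
    con.execute("CREATE TABLE t (x INTEGER)")
    con.commit()

try:
    con_a.execute("INSERT INTO t VALUES (1)")
    con_b.execute("INSERT INTO t VALUES (1)")
    con_a.commit()  # 1: commit the first database...
    con_b.commit()  # 2: ...then the second
except Exception:
    # roll back whatever has not been committed yet
    con_a.rollback()
    con_b.rollback()

print(con_a.execute("SELECT COUNT(*) FROM t").fetchone()[0],
      con_b.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # prints 1 1
```

Note the ordering caveat: if commit 1 succeeds and commit 2 fails, the first database keeps its row; true all-or-nothing across two databases needs two-phase commit, which neither the snippet nor this sketch provides.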
GitHub
github.com › pandas-dev › pandas › issues › 18981
pandas dataframe to_sql insert to postgresql not work · Issue #18981 · pandas-dev/pandas
December 29, 2017 -

def testSQL():
    import psycopg2
    import pandas as pd
    from sqlalchemy import create_engine

    try:
        engine = create_engine('postgresql+psycopg2://pub:pub@localhost/Wind_Quote')
        # conn = engine.raw_connection()
        # cur = conn.cursor()
        df = pd.DataFrame()
        df['stock_code'] = '0000'
        df['ipo_date'] = '2015-01-07'
        df['sec_name'] = '11111111'
        df['trade_code'] = '22222222222'
        try:
            df.to_sql(name='Test_stock_basic_Info_2', con=engine, if_exists='replace', index=False)
            # conn.commit()
            # cur.execute('COMMIT')
        except Exception as e:
            print(time.strftime('At [%Y-%m-%d %H:%M:%S]', time.localtime(time.time())), ": index_data:
Author   teeger
PyPI
pypi.org › project › fast-to-sql
fast-to-sql · PyPI
from datetime import datetime
import pandas as pd
import pyodbc
from fast_to_sql import fast_to_sql

# Test DataFrame for insertion
df = pd.DataFrame({
    "Col1": [1, 2, 3],
    "Col2": ["A", "B", "C"],
    "Col3": [True, False, True],
    "Col4": [datetime(2020,1,1), datetime(2020,1,2), datetime(2020,1,3)]
})

# Create a pyodbc connection
conn = pyodbc.connect(
    """
    Driver={ODBC Driver 17 for SQL Server};
    Server=localhost;
    Database=my_database;
    UID=my_user;
    PWD=my_pass;
    """
)

# If a table is created, the generated SQL is returned
create_statement = fast_to_sql(
    df, "my_great_table", conn, if_exists="replace",
    custom={"Col1": "INT PRIMARY KEY"}
)

# Commit upload actions and close connection
conn.commit()
conn.close()

Signature: fast_to_sql(df, name, conn, if_exists="append", custom=None, temp=False, copy=False, clean_cols=True)

df: pandas DataFrame to upload ·
      » pip install fast-to-sql
    
Published   Dec 30, 2023
Version   2.3.0