Consider a temp table that is an exact replica of your final table, cleaned out with each run:

from sqlalchemy import create_engine, text

engine = create_engine('postgresql+psycopg2://user:pswd@mydb')
df.to_sql('temp_table', engine, if_exists='replace')

sql = """
    UPDATE final_table AS f
    SET col1 = t.col1
    FROM temp_table AS t
    WHERE f.id = t.id
"""

with engine.begin() as conn:     # TRANSACTION
    conn.execute(text(sql))      # text() is required on SQLAlchemy 1.4+/2.0
Answer from Parfait on Stack Overflow

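The same staging-table pattern can also cover rows that do not yet exist in final_table, by following the UPDATE with an INSERT of the missing keys. A minimal runnable sketch, using an in-memory SQLite database as a stand-in for the PostgreSQL engine (table and column names follow the answer above; the sample data is invented):

```python
import pandas as pd
from sqlalchemy import create_engine, text

# In-memory SQLite stands in for PostgreSQL here; both support UPDATE ... FROM.
engine = create_engine("sqlite://")

# seed a final table and a dataframe of incoming changes
pd.DataFrame({"id": [1, 2], "col1": ["old", "old"]}).to_sql(
    "final_table", engine, index=False)
df = pd.DataFrame({"id": [2, 3], "col1": ["changed", "brand new"]})
df.to_sql("temp_table", engine, index=False, if_exists="replace")

with engine.begin() as conn:
    # update rows that already exist in final_table
    conn.execute(text("""
        UPDATE final_table AS f
        SET col1 = t.col1
        FROM temp_table AS t
        WHERE f.id = t.id
    """))
    # insert rows that are only in the staging table
    conn.execute(text("""
        INSERT INTO final_table (id, col1)
        SELECT t.id, t.col1
        FROM temp_table AS t
        WHERE NOT EXISTS (SELECT 1 FROM final_table f WHERE f.id = t.id)
    """))
```

Running both statements in one `engine.begin()` block keeps the upsert atomic: either both the update and the insert commit, or neither does.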

It looks like you are using some external data stored in df for the conditions when updating your database table. If possible, why not just do a one-line SQL update?

If you are working with a smallish database (where loading the whole table into a Python dataframe isn't going to kill you), then you can conditionally update the dataframe after loading it with read_sql. Then use the keyword arg if_exists="replace" to replace the DB table with the new, updated table.

df = pandas.read_sql("select * from your_table;", engine)

# update information (update your_table set column = 'new value' where column = 'old value')
# still may need to iterate for many old value/new value pairs
df.loc[df['column'] == "old value", "column"] = "new value"

# send data back to sql
df.to_sql("your_table", engine, if_exists="replace")

Pandas is a powerful tool in which limited SQL support was originally just a small feature. As time goes by, people are trying to use pandas as their only database interface software. I don't think pandas was ever meant to be an end-all for database interaction, but there are a lot of people working on new features all the time. See: https://github.com/pandas-dev/pandas/issues

r/learnpython on Reddit: Pandas to SQL, if row exists then replace, otherwise append. (April 11, 2022)

Good morning all, hoping you can help.

I'm a bit of programming noob, but I've written a python script that does the below:

  • first queries my MariaDB SQL database and retrieves the maximum datetime from a table column (dateLastAction).

  • This datetime is then used as a filter in an API request to retrieve any items updated after the max datetime from my SQL table.

  • I then transform the response and normalize it to a pandas dataframe which matches the structure of the SQL table exactly.

The dataframe now contains some rows which do exist in the database, and some which aren't present at all.

So my question is: is it possible to check my MariaDB table for each 'ticketId' value in my pandas dataframe (this is the primary key for the table), and if the 'ticketId' is present, replace the row, and if it's not present, append it to the table?

If none of the rows were present in MariaDB then I would append rows of my dataframe to the SQL table using:

df.to_sql('tickets', engine, index=False, if_exists='append')

The 'if_exists' argument relates to the table itself though, not the individual rows.

Can anyone share some insight on how I can achieve this? Is it easier to split my dataframe into two, one for new rows, and then rows to be replaced?

Code outline for what I'm trying to achieve:

from sqlalchemy import create_engine
import datetime
import requests
import pandas as pd


## STEP 1: Retrieve the max 'dateLastAction' value from MariaDB 'tickets' table
hostname = 'hostname'
dbname = 'dbname'
uname = 'uname'
pwd = 'pwd'

engine = create_engine('mysql+pymysql://{user}:{pw}@{host}/{db}'.format(host=hostname, db=dbname, user=uname, pw=pwd))

query = 'SELECT dateLastAction FROM tickets WHERE dateLastAction IN (SELECT max(dateLastAction) FROM tickets)'
result = engine.execute(query).fetchone()
lastAction = result[0]
lastAction = lastAction.strftime('%Y-%m-%dT%H:%M:%S')


## STEP 2: Request tickets from PSA with dateLastAction larger than that stored in MariaDB
filter = "?$filter=LastActivityUpdate+gt+DateTime'" + str(lastAction) + "'"
request = requests.get('url' + filter, headers=headers)
response = request.json()
df = pd.json_normalize(response)


## STEP 3: Add results to MariaDB
???

Thank you in advance!
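The split-into-two approach the question suggests can be sketched like this: read the existing primary keys, partition the dataframe into rows to replace and rows to append, delete the stale copies, then append both slices. A hedged sketch using an in-memory SQLite database as a stand-in for MariaDB (the `tickets` table and `ticketId` column follow the question; the `status` column and sample data are invented):

```python
import pandas as pd
from sqlalchemy import create_engine, text

# SQLite in-memory stand-in for the MariaDB instance in the question
engine = create_engine("sqlite://")
pd.DataFrame({"ticketId": [1, 2], "status": ["open", "open"]}).to_sql(
    "tickets", engine, index=False)

# incoming rows: ticket 2 already exists, ticket 3 is new
df = pd.DataFrame({"ticketId": [2, 3], "status": ["closed", "open"]})

# which primary keys already exist in the table?
existing = pd.read_sql("SELECT ticketId FROM tickets", engine)["ticketId"]
to_update = df[df["ticketId"].isin(existing)]
to_insert = df[~df["ticketId"].isin(existing)]

# delete the rows being replaced, then append both slices
with engine.begin() as conn:
    ids = ", ".join(str(i) for i in to_update["ticketId"])
    if ids:
        conn.execute(text(f"DELETE FROM tickets WHERE ticketId IN ({ids})"))
to_update.to_sql("tickets", engine, index=False, if_exists="append")
to_insert.to_sql("tickets", engine, index=False, if_exists="append")
```

The interpolated ID list is safe here only because the IDs are integers from the dataframe; with string keys you would want bound parameters instead. On MariaDB itself, a single INSERT ... ON DUPLICATE KEY UPDATE (as shown in the answers below) avoids the delete step entirely.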


I can think of two options, but number 1 might be cleaner/faster:

1) Make SQL decide on the update/insert. Check this other question. You can iterate over the rows of your df, from i=0 to n-1, and run an insert like the following inside the loop:

query = """INSERT INTO table (id, name, age) VALUES(%s, %s, %s)
ON DUPLICATE KEY UPDATE name=%s, age=%s"""
for i in range(len(df)):
    engine.execute(query, (df.id[i], df.name[i], df.age[i], df.name[i], df.age[i]))

2) Define a python function that returns True or False when the record exists and then use it in your loop:

def check_existence(user_id):
    query = "SELECT EXISTS (SELECT 1 FROM your_table where user_id_str = %s);"
    return list(engine.execute(query,  (user_id, ) ) )[0][0] == 1

You could iterate over the rows and run this check before each insert.
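That per-row loop might look like the following. This is a sketch against an in-memory SQLite table (`your_table` and `user_id_str` follow the answer; the `name` column and data are invented), with the existence check taking the open connection so everything runs in one transaction:

```python
import pandas as pd
from sqlalchemy import create_engine, text

# In-memory SQLite stand-in; seed one existing row
engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE your_table (user_id_str TEXT, name TEXT)"))
    conn.execute(text("INSERT INTO your_table VALUES ('u1', 'Ada')"))

def check_existence(conn, user_id):
    # EXISTS returns 1 when at least one matching row is present
    query = text("SELECT EXISTS (SELECT 1 FROM your_table WHERE user_id_str = :uid)")
    return conn.execute(query, {"uid": user_id}).scalar() == 1

df = pd.DataFrame({"user_id_str": ["u1", "u2"], "name": ["Ada L.", "Grace"]})

with engine.begin() as conn:
    for row in df.itertuples(index=False):
        if check_existence(conn, row.user_id_str):
            conn.execute(text("UPDATE your_table SET name = :n WHERE user_id_str = :u"),
                         {"n": row.name, "u": row.user_id_str})
        else:
            conn.execute(text("INSERT INTO your_table VALUES (:u, :n)"),
                         {"u": row.user_id_str, "n": row.name})
```

Note the check-then-write pattern is racy under concurrent writers; a database-side upsert (ON DUPLICATE KEY UPDATE / ON CONFLICT) is the safer option when the engine supports it.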

Please also check the solution in this question and this one too which might work in your case.


Pangres is the tool for this job.

Overview here: https://pypi.org/project/pangres/

Use the function pangres.fix_psycopg2_bad_cols to "clean" the columns in the DataFrame.

Code/usage here: https://github.com/ThibTrip/pangres/wiki and https://github.com/ThibTrip/pangres/wiki/Fix-bad-column-names-postgres. Example code:

# From: <https://github.com/ThibTrip/pangres/wiki/Fix-bad-column-names-postgres>
import pandas as pd
from pangres import fix_psycopg2_bad_cols

# fix bad col/index names with default replacements
# (empty string for '(', ')' and '%'):
df = pd.DataFrame({'test()': [0],
                   'foo()%': [0]}).set_index('test()')
print(df)
# test()  foo()%
#      0       0

# clean cols/index with the default replacements
df_fixed = fix_psycopg2_bad_cols(df)
print(df_fixed)
# test    foo
#    0      0

# fix bad col/index names with custom replacements - you MUST provide
# replacements for '(', ')' and '%':

# reset df
df = pd.DataFrame({'test()': [0],
                   'foo()%': [0]}).set_index('test()')

# clean cols/index with user-specified replacements
df_fixed = fix_psycopg2_bad_cols(df, replacements={'%': 'percent', '(': '', ')': ''})
print(df_fixed)
# test    foopercent
#    0             0

Note that it will only fix/correct some of the bad characters: it replaces '%', '(' and ')' (characters that won't play nicely, or even at all). But it is useful in that it handles both the cleanup and the upsert.

(P.S. I know this post is over 4 years old, but it still shows up as the top SO result in Google when searching for "pangres upsert determine number inserts and updates", dated May 13, 2020.)


I think the easiest way would be to:

First delete the rows that are going to be "upserted". This can be done in a loop, but it's not very efficient for bigger data sets (5K+ rows), so I'd save this slice of the DF into a temporary MySQL table:

# assuming we have already changed values in the rows and saved those changed rows in a separate DF: `x`
x = df[mask]  # `mask` should help us to find changed rows...

# make sure `x` DF has a Primary Key column as index
x = x.set_index('a')

# dump a slice with changed rows to temporary MySQL table
x.to_sql('my_tmp', engine, if_exists='replace', index=True)

conn = engine.connect()
trans = conn.begin()

try:
    # delete those rows that we are going to "upsert"
    conn.execute('delete from test_upsert where a in (select a from my_tmp)')
    trans.commit()

    # insert changed rows
    x.to_sql('test_upsert', engine, if_exists='append', index=True)
except:
    trans.rollback()
    raise

PS: I didn't test this code, so it might have some small bugs, but it should give you an idea...


A MySQL-specific solution using pandas' to_sql "method" arg and SQLAlchemy's MySQL insert on_duplicate_key_update feature:

import sqlalchemy as db
from sqlalchemy.dialects.mysql import insert as mysql_insert

def create_method(meta):
    def method(table, conn, keys, data_iter):
        # autoload_with replaces the deprecated autoload=True (SQLAlchemy 1.4+)
        sql_table = db.Table(table.name, meta, autoload_with=conn)
        insert_stmt = mysql_insert(sql_table).values(
            [dict(zip(keys, data)) for data in data_iter])
        upsert_stmt = insert_stmt.on_duplicate_key_update(
            {x.name: x for x in insert_stmt.inserted})
        conn.execute(upsert_stmt)

    return method

engine = db.create_engine(...)
conn = engine.connect()
with conn.begin():
    meta = db.MetaData()
    method = create_method(meta)
    df.to_sql(table_name, conn, if_exists='append', method=method)
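Other dialects expose the same idea under on_conflict_do_update. A runnable sketch of an equivalent `method=` callable using SQLAlchemy's SQLite dialect (the `scores` table, its columns, and the sample data are invented for the example):

```python
import pandas as pd
import sqlalchemy as db
from sqlalchemy.dialects.sqlite import insert as sqlite_insert

# SQLite's on_conflict_do_update plays the role of MySQL's on_duplicate_key_update
engine = db.create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(db.text("CREATE TABLE scores (id INTEGER PRIMARY KEY, score INTEGER)"))
    conn.execute(db.text("INSERT INTO scores VALUES (1, 10)"))

def upsert_method(table, conn, keys, data_iter):
    # reflect the target table so the statement knows its primary key
    sql_table = db.Table(table.name, db.MetaData(), autoload_with=conn)
    pk = {c.name for c in sql_table.primary_key}
    stmt = sqlite_insert(sql_table).values([dict(zip(keys, row)) for row in data_iter])
    stmt = stmt.on_conflict_do_update(
        index_elements=sorted(pk),
        # on conflict, overwrite every non-key column with the incoming value
        set_={k: stmt.excluded[k] for k in keys if k not in pk},
    )
    conn.execute(stmt)

# id=1 collides and gets updated; id=2 is new and gets inserted
df = pd.DataFrame({"id": [1, 2], "score": [99, 20]})
df.to_sql("scores", engine, index=False, if_exists="append", method=upsert_method)
```

The callable receives `(table, conn, keys, data_iter)` from pandas, so the same shape works for any dialect-specific insert; only the upsert clause changes between MySQL, PostgreSQL, and SQLite.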