Good question. Note that read_sql is a wrapper around read_sql_table and read_sql_query. Reading through the source, a ValueError is consistently raised inside both the parent and the helper functions, so you can safely catch a ValueError and handle it appropriately. (Do have a look at the source.)
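A minimal sketch of that pattern, using an in-memory SQLite connection as a stand-in (the helper name safe_read_sql and the table t are illustrative; note that newer pandas also raises its own DatabaseError for driver-level failures, so it is worth catching alongside ValueError):

```python
import sqlite3

import pandas as pd
from pandas.errors import DatabaseError  # available in pandas >= 1.5

# Toy in-memory database to query against
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,)])

def safe_read_sql(query, con):
    """Return a DataFrame, or None if the query is rejected."""
    try:
        return pd.read_sql(query, con)
    except (ValueError, DatabaseError) as exc:
        print("query failed:", exc)
        return None

good = safe_read_sql("SELECT x FROM t", con)       # a DataFrame
bad = safe_read_sql("SELECT x FROM missing", con)  # None, error is printed
```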
I just stumbled on this in a similar problem and found the answer: catch the exception from SQLAlchemy.
try:
    df = pd.read_sql_query(QUERY, engine)
except sqlalchemy.exc.OperationalError as e:
    logger.info('Error occurred while executing a query: {}'.format(e.args))
More information can be found in the SQLAlchemy docs.
to_sql pandas data frame into SQL server error: DatabaseError
According to the to_sql docs, the con parameter is either an SQLAlchemy engine or a legacy DBAPI2 connection (sqlite3). Because you are passing a raw connection object rather than an SQLAlchemy engine, pandas infers that you're passing a DBAPI2 connection and treats it as SQLite, since that's the only DBAPI2 driver supported. To remedy this, just do:
import sqlalchemy

# params is your URL-quoted ODBC connection string
myeng = sqlalchemy.create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)

# Code to create your df
...

# Now write to DB
df.to_sql('table', myeng, index=False)
Try this. It works well for connecting to MS SQL Server (SQL Authentication) and updating data:
import urllib.parse
from sqlalchemy import create_engine

params = urllib.parse.quote_plus(
    'DRIVER={ODBC Driver 13 for SQL Server};' +
    'SERVER=' + server + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)
engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)

# df: pandas DataFrame; mTableName: table name in MS SQL
# Warning: this discards the old table if it exists
df.to_sql(mTableName, con=engine, if_exists='replace', index=False)
Hi guys.
I am having a weird error that I have no idea how to solve.
I have a huge data set and I am trying to upload it to SQL using pandas. I can't read the entire file, so I am reading it in chunks. My idea is to use a for loop to go through all the chunks and append them to the SQL table.
Here is my code:
import numpy as np
import pandas as pd
import sqlalchemy as sql

# CREATE SQL ALCHEMY OBJECT
connect_string = 'mysql://....'
sql_engine = sql.create_engine(connect_string)

# Load training set
train_chunks = pd.read_csv("train.csv",
                           chunksize=10000,
                           low_memory=False)

for train in train_chunks:
    # ADD TO SQL
    train.to_sql(train, con=sql_engine, if_exists='append', index=False)
And I get the error:
TypeError: 'DataFrame' objects are mutable, thus they cannot be hashed
I have done this before (without the chunks) and never got this error. I have been trying to figure it out for an entire afternoon now with no luck.
Can anybody save me?
Thanks!
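For what it's worth, the error points at the to_sql call itself: the first argument to to_sql is the table *name* (a string), but the loop passes the DataFrame train, which pandas then tries to hash. A minimal sketch of the fix, using an in-memory SQLite connection and a generator of small frames as stand-ins for the MySQL engine and the CSV chunks:

```python
import sqlite3

import pandas as pd

# Stand-ins: an in-memory SQLite connection instead of the MySQL engine,
# and a generator of small frames instead of pd.read_csv(..., chunksize=...)
sql_engine = sqlite3.connect(":memory:")
train_chunks = (pd.DataFrame({"a": [i]}) for i in range(3))

for train in train_chunks:
    # Pass the table name as a string -- writing `train.to_sql(train, ...)`
    # passes the DataFrame itself, which triggers the unhashable TypeError
    train.to_sql("train", con=sql_engine, if_exists="append", index=False)

result = pd.read_sql("SELECT a FROM train", sql_engine)
```

With if_exists="append", the first chunk creates the table and every later chunk adds its rows, so the loop accumulates the whole file.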