# Battery Parameter (Internal Resistance R0, R1, C1) Estimation in Python

I want to fit non-linear experimental data with a model function by estimating some of its parameters. The model function I have is `V(t) = Vocv + a*It + b*It*(1 - exp(-t/(b*c)))`, where a, b and c are the parameters that need to be estimated/adjusted so that the model fits the experimental data.
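Written out in plain NumPy the model looks like this (the numbers below are only illustrative, not my real data):

``````python
import numpy as np

def model_voltage(t, Vocv, It, a, b, c):
    """V(t) = Vocv + a*It + b*It*(1 - exp(-t/(b*c))).
    a plays the role of R0, b of R1, and b*c is the RC time constant."""
    return Vocv + a * It + b * It * (1.0 - np.exp(-t / (b * c)))

# Illustrative values only: at t = 0 the exponential term vanishes,
# so V(0) = Vocv + a*It; for t >> b*c the curve saturates at Vocv + (a+b)*It.
t = np.linspace(0.0, 50.0, 6)
V = model_voltage(t, Vocv=3.7, It=-2.0, a=0.05, b=0.03, c=100.0)
``````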

P.S.: I am working on parameter optimization of a battery in Python.

Before applying the optimization I tested the function by manually adjusting the values of a, b and c to fit the experimental data, with the following result:

However, after applying the optimization, in which I set the initial values to those found manually by trial and error, it still cannot optimize further.

Code:

``````python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.optimize import minimize
from scipy.optimize import differential_evolution
import warnings

# Data is a DataFrame loaded earlier (e.g. with pd.read_csv)
Vocv = Data.loc[:, 'Vocv']
It = Data.loc[:, 'It']
t = Data.loc[:, 'Tt']
Ex_voltage = Data.loc[:, 'Experimental_Voltage(v)']
Ex_time = Data.loc[:, 'Experimental_Time(s)']
X = np.array((Vocv, It, t))

'''Before optimization-------------------------------------------------------------------------------------'''
def voltage_1(X, a0, a1, a2):
    # V = Vocv + a0*It + a1*It*(1 - exp(-t/(a1*a2)))
    return X[0] + a0*X[1] + a1*X[1]*(1 - np.exp(-X[2]/(a1*a2)))

a0 = 4.500e-07  # initial value guessed by trial and error
a1 = 3e-07      # initial value guessed by trial and error
a2 = 0.02       # initial value guessed by trial and error

errors = (Ex_voltage - voltage_1(X, a0, a1, a2))
square = np.sum(errors**2)

plt.plot(t, voltage_1(X, a0, a1, a2), label='Model (manual fit)')
plt.plot(Ex_time, Ex_voltage, label='Experimental Data')
plt.legend(loc='best')

'''---------After applying optimization--------------------------------------------------------------------
Minimize method to estimate the battery parameters: find values of a0, a1, a2
such that the sum of squared errors is as small as possible.'''
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore")  # do not print warnings raised by the genetic algorithm
    val = voltage_1(X, *parameterTuple)
    return np.sum((Ex_voltage - val) ** 2.0)

def generate_Initial_Parameters():
    # min and max used for bounds
    maxX = 100

    parameterBounds = []
    parameterBounds.append((4.500e-07, maxX))  # search bounds for R0 (a0)
    parameterBounds.append((3e-07, maxX))      # search bounds for R1 (a1)
    parameterBounds.append((0.02, 0.06))       # search bounds for C1 (a2)

    # fixed seed so the genetic algorithm gives repeatable results
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x

# generate initial parameter values
geneticParameters = generate_Initial_Parameters()

def SumsqErrors(A):
    y = voltage_1(X, A[0], A[1], A[2])
    errors = Ex_voltage - y
    return np.sum(errors**2)

bestparameters = minimize(SumsqErrors, geneticParameters, method="trust-constr")
print(bestparameters)

bestA = bestparameters.x
y2 = voltage_1(X, bestA[0], bestA[1], bestA[2])  # voltage_1 is vectorized; no per-sample loop needed

import matplotlib
matplotlib.rc('font', size=15)
t2 = Ex_time
fig = plt.figure()
plt.plot(t, y2, c='r', marker='o', label='After_Optimization')
plt.plot(t2, Ex_voltage, c='k', marker='>', label='Experimental_Data')
plt.plot(t, voltage_1(X, a0, a1, a2), label='Before_Optimization')
plt.legend(loc = 'best')
plt.grid()
plt.xlabel(r'Time $(s)$')
plt.ylabel(r'Voltage $(V)$')
``````
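As an aside, the comment about seeding the random number generator in `generate_Initial_Parameters` never actually seeds anything; `differential_evolution` accepts a `seed` argument for that. A toy sketch (the objective and bounds here are made up, just to show reproducibility):

``````python
import numpy as np
from scipy.optimize import differential_evolution

# Made-up quadratic objective with a known minimum at (0.3, -1.2).
def sphere(p):
    return np.sum((np.asarray(p) - np.array([0.3, -1.2])) ** 2)

bounds = [(-5.0, 5.0), (-5.0, 5.0)]
r1 = differential_evolution(sphere, bounds, seed=42)
r2 = differential_evolution(sphere, bounds, seed=42)
# Same seed -> identical runs; without seed=..., r1.x and r2.x can differ.
``````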

The output is following:

``````
a0 = 4.50022988e-07
a1 = 3.00017973e-07
a2 = 0.02
``````

Basically these are the same values as the initial guesses; the optimizer is not improving on them. I am confused about what is wrong in my code, and with my limited programming experience I cannot figure it out.

The expected result: the error between the experimental/measured data and the model/simulated data should be as small as possible, giving an optimal solution.
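One thing I suspect is parameter scaling: with a0 and a1 around 1e-7, a gradient-based `minimize` with default tolerances may consider the objective already converged at the starting point. `scipy.optimize.least_squares` takes the residual vector directly and lets you declare each parameter's characteristic magnitude via `x_scale`. A sketch on synthetic data (all values below are hypothetical, chosen so the fit is well-conditioned, not my real measurements):

``````python
import numpy as np
from scipy.optimize import least_squares

# Synthetic stand-in data (hypothetical values).
t = np.linspace(0.0, 50.0, 101)
Vocv, It = 3.7, -2.0
a_true, b_true, c_true = 0.05, 0.03, 300.0  # time constant b*c = 9 s

def model(t, a, b, c):
    return Vocv + a * It + b * It * (1.0 - np.exp(-t / (b * c)))

V_meas = model(t, a_true, b_true, c_true)

def residuals(p):
    # least_squares wants the residual vector, not the sum of squares
    return V_meas - model(t, *p)

fit = least_squares(residuals, x0=[0.1, 0.1, 100.0],
                    x_scale=[0.1, 0.1, 100.0])  # rough magnitude of each parameter
``````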