The main aim of this Python Jupyter project is to build a geo-demographic segmentation model that tells the bank which of its customers are at the highest risk of leaving.
Dataset Link: https://www.kaggle.com/aakash50897/churn-modellingcsv
The dataset contains 10,000 rows and 14 columns. Of the 14 features, 13 are independent features and 1 is the dependent feature. The main task is to find, from the customer data the bank already holds, which customers are at the highest risk of leaving and which are likely to stay; this in turn could govern the bank's decision whether or not to give loans.
1)RowNumber: Index of the row in the dataset.
2)CustomerId: Unique ID of the customer in the bank.
3)Surname: Surname of the customer.
4)CreditScore: The credit score of the customer.
5)Geography: The country the customer lives in.
6)Gender: Whether the customer is male or female.
7)Age: The age of the customer.
8)Tenure: The number of years the customer has been with the bank.
9)Balance: The bank balance of the customer at that time.
10)NumOfProducts: The number of bank products the customer uses.
11)HasCrCard: Whether the customer has a credit card or not.
12)IsActiveMember: Whether the customer is an active member of the bank or not.
13)EstimatedSalary: The estimated salary of the customer, based on prior data.
14)Exited: Whether the customer has exited (left) the bank or not.
1 - exited the bank.
0 - did not exit the bank.
Step 1:
Importing the required Python Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import tensorflow as tf
from sklearn.metrics import classification_report
Step 2:
Importing the dataset
data = pd.read_csv(r'P:\Churn_Modelling.csv')   # adjust the path to wherever the CSV is saved
data.head()
data.head() prints the first five rows of the dataset.
Step 3:
Getting a concise summary of the dataset (column datatypes and non-null counts)
data.info()
Output :
 #   Column           Non-Null Count  Dtype
---  ------           --------------  -----
 0   RowNumber        10000 non-null  int64
 1   CustomerId       10000 non-null  int64
 2   Surname          10000 non-null  object
 3   CreditScore      10000 non-null  int64
 4   Geography        10000 non-null  object
 5   Gender           10000 non-null  object
 6   Age              10000 non-null  int64
 7   Tenure           10000 non-null  int64
 8   Balance          10000 non-null  float64
 9   NumOfProducts    10000 non-null  int64
 10  HasCrCard        10000 non-null  int64
 11  IsActiveMember   10000 non-null  int64
 12  EstimatedSalary  10000 non-null  float64
 13  Exited           10000 non-null  int64
Step 4:
Checking the Null values in the dataset
data.isnull().sum()
Output :
RowNumber          0
CustomerId         0
Surname            0
CreditScore        0
Geography          0
Gender             0
Age                0
Tenure             0
Balance            0
NumOfProducts      0
HasCrCard          0
IsActiveMember     0
EstimatedSalary    0
Exited             0
dtype: int64
Step 5:
Statistical analysis of the features in the dataset.
data.describe()
The above chunk of code prints the count, mean, standard deviation, minimum, quartiles, and maximum of each numeric feature present in the dataset.
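Before moving on, it is also worth checking how balanced the target classes are; a quick check (not part of the original write-up) is shown below. As the classification report at the end confirms (405 exited vs. 1,595 retained customers in the test split), exits are the clear minority class.
# Count how many customers stayed (0) versus exited (1);
# expect the exited class (1) to be considerably smaller
data['Exited'].value_counts()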
Step 6:
Analyzing the Gender variable (getting the count of each class in the Gender variable)
data['Gender'].value_counts()
Output:
Male      5457
Female    4543
Name: Gender, dtype: int64
Step 7:
Comparison of male and female customers by frequency, using a bar plot.
classes = data['Gender'].value_counts(sort=True)   # pd.value_counts() is deprecated; call it on the Series
classes.plot(kind='bar', rot=0)
plt.title('comparison of male and female')
plt.xlabel('Gender')
plt.ylabel('population')
plt.show()
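For reference, seaborn (already imported above) can draw the same chart in a single call; this is just an alternative, not part of the original code.
# Equivalent male/female frequency bars with seaborn
sns.countplot(x='Gender', data=data)
plt.show()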
Step 8:
Analyzing the Age variable.
data['Age'].value_counts()
Output:
37    478
38    477
35    474
36    456
34    447
     ...
92      2
88      1
82      1
85      1
83      1
Name: Age, Length: 70, dtype: int64
Step 9:
Comparison of the Age distribution, using a histogram.
plt.hist(x=data.Age, bins=10, color='orange')
plt.title('comparison of Age')
plt.xlabel('Age')
plt.ylabel('population')
plt.show()
Step 10 :
Finding the correlation between the numeric variables using a heatmap.
# Restrict the correlation to numeric columns (Surname, Geography and Gender are strings)
sns.heatmap(data.corr(numeric_only=True), annot=True, vmin=-1, vmax=1, center=0)
Step 11:
Splitting the dataset column-wise into two parts.
X = data.iloc[:, 3:-1].values   # drop RowNumber, CustomerId and Surname, which carry no predictive signal
y = data.iloc[:, -1].values     # the Exited column
X - the independent features.
y - the dependent feature (Exited).
Step 12:
Encoding the Categorical data
Label Encoding the "Gender" column
le = LabelEncoder()
X[:, 2] = le.fit_transform(X[:, 2])   # Gender is the third column of X
print(X)
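As a quick sanity check (not in the original notebook), you can print the mapping the encoder learned; LabelEncoder assigns integers to the classes in alphabetical order.
# Expected mapping: {'Female': 0, 'Male': 1}
print(dict(zip(le.classes_, le.transform(le.classes_))))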
Step 13:
One-hot encoding the "Geography" column using the ColumnTransformer.
ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [1])],
                       remainder='passthrough')
X = np.array(ct.fit_transform(X))
print(X)
Output:
[[1.0 0.0 0.0 ... 1 1 101348.88]
 [0.0 0.0 1.0 ... 0 1 112542.58]
 [1.0 0.0 0.0 ... 1 0 113931.57]
 ...
 [1.0 0.0 0.0 ... 0 1 42085.58]
 [0.0 1.0 0.0 ... 1 0 92888.52]
 [1.0 0.0 0.0 ... 1 0 38190.78]]
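To confirm which country each of the three new leading columns stands for, you can inspect the fitted encoder (an optional check, not part of the original code); OneHotEncoder also orders its categories alphabetically, so the columns correspond to France, Germany, Spain.
# Expected: [array(['France', 'Germany', 'Spain'], dtype=object)]
print(ct.named_transformers_['encoder'].categories_)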
Step 14:
Feature Scaling
sc = StandardScaler()
X = sc.fit_transform(X)
print(X)
Output :
[[ 0.99720391 -0.57873591 -0.57380915 ...  0.64609167  0.97024255  0.02188649]
 [-1.00280393 -0.57873591  1.74273971 ... -1.54776799  0.97024255  0.21653375]
 [ 0.99720391 -0.57873591 -0.57380915 ...  0.64609167 -1.03067011  0.2406869 ]
 ...
 [ 0.99720391 -0.57873591 -0.57380915 ... -1.54776799  0.97024255 -1.00864308]
 [-1.00280393  1.72790383 -0.57380915 ...  0.64609167 -1.03067011 -0.12523071]
 [ 0.99720391 -0.57873591 -0.57380915 ...  0.64609167 -1.03067011 -1.07636976]]
Step 15:
Splitting the dataset into a training set and a testing set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
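Note that the scaler above was fitted on the full dataset before the split, which lets statistics from the test rows leak into the scaling. A leakage-free variant (a sketch that simply swaps the order of steps 14 and 15) fits the scaler on the training set only:
# Split first, then fit the scaler on the training rows only
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
sc = StandardScaler()
X_train = sc.fit_transform(X_train)   # learn mean/std from the training rows
X_test = sc.transform(X_test)         # reuse the training statistics on the test rows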
Building the Artificial Neural Network (ANN):
Steps for Building the ANN:
1)Initializing the ANN.
2)Adding the Input layer and the First Hidden layer.
3)Adding the Second Hidden layer.
4)Adding the Output layer.
ann = tf.keras.models.Sequential()                             # 1) initialize the ANN
ann.add(tf.keras.layers.Dense(units=6, activation='relu'))     # 2) input layer and first hidden layer
ann.add(tf.keras.layers.Dense(units=6, activation='relu'))     # 3) second hidden layer
ann.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))  # 4) output layer (churn probability)
Compiling the ANN model
ann.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
Training the ANN model on the training set
ann.fit(X_train, y_train, batch_size = 32, epochs = 100)
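To guard against overfitting during these 100 epochs, one option (a sketch with hypothetical settings, not part of the original run) is to hold out part of the training data and stop early once validation loss stops improving:
# Hold out 10% of the training rows for validation and stop training
# after validation loss has not improved for 10 consecutive epochs
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10,
                                              restore_best_weights=True)
ann.fit(X_train, y_train, batch_size=32, epochs=100,
        validation_split=0.1, callbacks=[early_stop])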
y_pred = ann.predict(X_test)
y_pred = (y_pred > 0.5)   # convert predicted probabilities to class labels at a 0.5 threshold
True Positive: the case was positive and was predicted positive.
True Negative: the case was negative and was predicted negative.
False Positive: the case was negative but was predicted positive.
False Negative: the case was positive but was predicted negative.
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)   # store under a new name so the function is not shadowed
print(cm)
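To read the matrix in terms of the four quantities defined above, you can unpack its cells (a small addition, not in the original code):
# sklearn orders the 2x2 matrix as [[TN, FP], [FN, TP]]
tn, fp, fn, tp = cm.ravel()
print(f'TN={tn}, FP={fp}, FN={fn}, TP={tp}')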
accuracy_score(y_test, y_pred)
Output : 0.8605
print(classification_report(y_test,y_pred))
Output :
              precision    recall  f1-score   support

           0       0.88      0.95      0.92      1595
           1       0.73      0.50      0.59       405

    accuracy                           0.86      2000
   macro avg       0.80      0.73      0.75      2000
weighted avg       0.85      0.86      0.85      2000
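Finally, as a usage example (hypothetical customer values, not from the original write-up), the trained model can score a single new customer; the inputs must follow the same column order the ColumnTransformer produced and pass through the same scaler.
# Hypothetical customer: France (one-hot 1,0,0), credit score 600, male (1), age 40,
# tenure 3, balance 60000, 2 products, has a credit card, active member, salary 50000
sample = sc.transform([[1.0, 0.0, 0.0, 600, 1, 40, 3, 60000, 2, 1, 1, 50000]])
print(ann.predict(sample) > 0.5)   # True means the model predicts this customer will leave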
Submitted by Kotha Sai Narasimha Rao (kothasainarasimharao)